915 results for importance analysis


Relevance:

60.00%

Publisher:

Abstract:

This article highlights the potential benefits of the Kohonen method for classifying rivers with similar characteristics when determining regional ecological flows under the ELOHA (Ecological Limits of Hydrologic Alteration) methodology. Many methodologies currently exist for classifying rivers, but none combines the characteristics of the Kohonen method: (i) it provides the number of groups that actually underlie the data, (ii) it supports variable importance analysis, (iii) it displays the classification process in two dimensions, and (iv) the clustering structure remains stable regardless of the model parameters. To evaluate these potential benefits, 174 flow stations distributed along the Magdalena-Cauca river basin (Colombia) were analysed, with 73 variables obtained for the classification process in each case. Six trials were run using different combinations of variables, and the results were validated against a reference classification produced by Ingfocol in 2010, which was also framed using ELOHA guidelines. In the validation it was found that two of the tested models reproduced more than 80% of the reference classification in the first trial, meaning that more than 80% of the flow stations analysed formed invariant groups of streams in both models.
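The abstract does not include the authors' implementation; the sketch below, assuming a hypothetical `stations` matrix of 174 stations × 73 standardized hydrologic variables and an arbitrary 6 × 6 map size, shows in outline how a Kohonen self-organizing map groups stations and exposes the two-dimensional classification the authors refer to.

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen self-organizing map: returns the trained weight grid."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    w = rng.random((grid[0], grid[1], d))          # one weight vector per map node
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]        # node coordinates on the 2-D map
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)             # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)       # shrinking neighbourhood radius
        for x in data[rng.permutation(n)]:
            dist = np.linalg.norm(w - x, axis=2)   # distance of x to every node
            by, bx = np.unravel_index(dist.argmin(), dist.shape)  # best-matching unit
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)       # pull the neighbourhood towards x
    return w

# Hypothetical input: 174 stations x 73 standardized hydrologic variables.
stations = np.random.default_rng(1).random((174, 73))
weights = train_som(stations)
# Assign each station to its best-matching node, i.e. its cluster on the 2-D map.
labels = [np.unravel_index(np.linalg.norm(weights - s, axis=2).argmin(), weights.shape[:2])
          for s in stations]
```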

Relevance:

60.00%

Publisher:

Abstract:

This study presents new information on the feeding habits of Guiana dolphins, Sotalia guianensis, in south-eastern Brazil, together with new regression equations to estimate fish weight and length from otoliths, and gives an overview of current knowledge of this species' diet in the area. Eighteen stomach contents were analysed and compared with 180 samples collected in eight other feeding studies. The analysed specimens were either incidentally caught in gillnets used in coastal waters by the fleet based in the Cananéia main harbour (25°00′S 47°55′W), south of São Paulo State, or found dead in the inner waters of the Cananéia estuary between 2003 and 2009. Based on the index of relative importance analysis, the most important fish species was the banded croaker, Paralonchurus brasiliensis, and Doryteuthis plei was the most representative cephalopod species. Stellifer rastrifer was the most important fish species observed in dolphins from inner estuarine waters and P. brasiliensis in dolphins recovered from coastal waters. Loliguncula brevis is the only cephalopod species reported to date from dolphins found in inner estuarine waters, while Doryteuthis plei was the most important cephalopod observed in coastal dolphins. Considering the other feeding studies, the most representative fish family in the diet of S. guianensis was Sciaenidae, mainly represented by demersal fishes. The main prey of S. guianensis are abundant in the studied areas, which may indicate an opportunistic feeding habit, and most of them are not the main target species of the commercial fishery in south-eastern Brazil. © 2012 Marine Biological Association of the United Kingdom.
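The index of relative importance is not written out in the abstract; a commonly used formulation is IRI = (%N + %W) × %FO, combining percentage by number, percentage by (here otolith-reconstructed) weight and percentage frequency of occurrence. A minimal sketch with illustrative values, not the study's data:

```python
def iri(count_pct, weight_pct, occurrence_pct):
    """Index of relative importance: IRI = (%N + %W) x %FO (one common formulation)."""
    return (count_pct + weight_pct) * occurrence_pct

# Hypothetical prey table: % by number, % by reconstructed weight, % frequency of occurrence.
prey = {
    "Paralonchurus brasiliensis": (32.0, 41.0, 70.0),
    "Stellifer rastrifer":        (28.0, 18.0, 55.0),
    "Doryteuthis plei":           (10.0, 12.0, 30.0),
}
iri_values = {sp: iri(*v) for sp, v in prey.items()}
total = sum(iri_values.values())
percent_iri = {sp: 100 * v / total for sp, v in iri_values.items()}  # %IRI used to rank prey
```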

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND Recurrent urticaria (RU) is a common skin disease of horses, but little is known about its pathogenesis. HYPOTHESIS/OBJECTIVE The aim of this study was to characterize the inflammatory cell infiltrate and cytokine expression pattern in the skin of horses with RU. ANIMALS Biopsies of lesional and nonlesional skin of horses with RU (n = 8) and of skin from healthy control horses (n = 8) were evaluated. METHODS The inflammatory cell infiltrate was analysed by routine histology. Immunohistochemistry was used to identify T cells (CD3), B cells (CD79), macrophages (MAC387) and mast cells (tryptase). Expression of T-helper 2 cytokines (interleukins IL-4, IL-5 and IL-13), a T-helper 1 cytokine (interferon-γ), IL-4 receptor α and thymic stromal lymphopoietin was assessed by quantitative RT-PCR. RESULTS In subepidermal lesional skin of RU-affected horses, increased numbers of eosinophils (P ≤ 0.01), CD79-positive (P ≤ 0.01), MAC387-positive (P ≤ 0.01) and tryptase-positive cells (P ≤ 0.05) were found compared with healthy horses. Subepidermal lesional skin of RU-affected horses contained more eosinophils (P ≤ 0.05) and tryptase-positive cells (P ≤ 0.05) than nonlesional skin. There was no significant difference in infiltrating cells between nonlesional skin and skin of healthy horses. Expression of IL-4 (P ≤ 0.01), IL-13 (P ≤ 0.05), thymic stromal lymphopoietin (P ≤ 0.05) and IL-4 receptor α (P ≤ 0.05) was increased in lesional skin of RU-affected horses compared with control horses. Expression of IL-4 was higher (P ≤ 0.05) in lesional than in nonlesional RU skin. CONCLUSIONS AND CLINICAL IMPORTANCE Analysis of cytokine expression and the inflammatory infiltrate suggests that T-helper 2 cytokines, eosinophils, mast cells and presumptive macrophages play a role in the pathogenesis of equine RU.
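The abstract does not state how the quantitative RT-PCR data were quantified; one widely used option is the 2^(−ΔΔCt) relative-expression calculation, sketched below under that assumption, with hypothetical Ct values and a hypothetical single reference gene.

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change by the 2^(-ddCt) method (assumes ~100% PCR efficiency)."""
    d_ct_sample = ct_target - ct_reference              # normalize to the reference gene
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical mean Ct values: IL-4 in lesional RU skin vs. healthy control skin.
fold_il4 = relative_expression(ct_target=26.1, ct_reference=18.3,
                               ct_target_ctrl=29.0, ct_reference_ctrl=18.5)
print(f"IL-4 fold change vs. controls: {fold_il4:.1f}")
```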

Relevance:

60.00%

Publisher:

Abstract:

Deterministic safety analysis (DSA) is the procedure used to design the safety-related systems, structures and components of nuclear power plants. DSA is based on computational simulations of a set of hypothetical accidents representative of the plant, called design basis scenarios (DBS). Regulatory bodies specify a set of safety magnitudes that must be calculated in the simulations and establish regulatory acceptance criteria (RAC), which are restrictions that the values of those magnitudes must satisfy. Methodologies for performing DSA can be of two types: conservative or realistic. Conservative methodologies use markedly pessimistic predictive models and assumptions and are therefore relatively simple; they do not need to include an uncertainty analysis of their results. Realistic methodologies are based on realistic, generally mechanistic, assumptions and predictive models, and are supplemented with an uncertainty analysis of their main results. They are also called BEPU ("Best Estimate Plus Uncertainty") methodologies, and in them uncertainty is represented, basically, in a probabilistic manner. For conservative methodologies, the RAC are simply restrictions on the calculated values of the safety magnitudes, which must remain confined within an "acceptance region" of their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain variables. This thesis develops the way in which uncertainty is introduced into the RAC. Basically, confinement to the same acceptance region established by the regulator is maintained, but strict fulfillment is not demanded; instead, a high level of certainty is required. In the formalism adopted, this is understood as a "high level of probability", and that probability corresponds to the calculation uncertainty of the safety magnitudes. Such uncertainty can be regarded as originating in the inputs to the calculation model and propagated through that model. The uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which are used to incorporate the uncertainty due to model imperfection. Fulfillment of the RAC is therefore required with a probability not lower than a value P0 close to 1 and defined by the regulator (probability or coverage level). However, the calculation uncertainty of the magnitude is not the only uncertainty involved. Even if a model (its basic equations) is known perfectly, the input-output mapping it produces is known only imperfectly (unless the model is very simple). The uncertainty due to ignorance about the action of the model is called epistemic; it can also be described as uncertainty about the propagation. The consequence is that the probability of fulfilling the RAC cannot be known perfectly; it is an uncertain magnitude. This justifies another term used here for this epistemic uncertainty: metauncertainty. The RAC must incorporate both types of uncertainty: that of the calculation of the safety magnitude (here called aleatory) and that of the calculation of the probability (called epistemic or metauncertainty). Both uncertainties can be introduced in two ways: separated or combined. In both cases, the RAC becomes a probabilistic criterion.
If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. If a second-order probability is employed, the regulator must impose a second fulfillment level, referring to the epistemic uncertainty. It is called the regulatory confidence level and must be a number close to 1. The pair formed by the two regulatory levels (probability and confidence) is called the regulatory tolerance level. The thesis argues that the best way to construct the BEPU RAC is by separating the uncertainties, for two reasons. First, experts advocate the separate treatment of aleatory and epistemic uncertainty. Second, the separated RAC is (except in exceptional cases) more conservative than the combined RAC. The BEPU RAC is nothing more than a hypothesis about a probability distribution, and its verification is carried out statistically. The thesis classifies the statistical methods for checking the BEPU RAC into three categories, depending on whether they are based on the construction of tolerance regions, on quantile estimation or on probability estimation (either of fulfillment or of exceedance of regulatory limits). Following a recently proposed nomenclature, the first two categories correspond to Q-methods and the third to P-methods. The purpose of the classification is not to make an inventory of the many and varied methods in each category, but to relate the categories to one another and to cite the most widely used methods and those best regarded from the regulatory point of view. Special mention is made of the method most used to date: the nonparametric method of Wilks, together with its extension, due to Wald, to the multidimensional case. Its homologous P-method, the Clopper-Pearson interval, typically ignored in the BEPU field, is described. In this context, the problem of the computational cost of uncertainty analysis is discussed. The Wilks, Wald and Clopper-Pearson methods require the random sample used to have a minimum size, which grows with the required tolerance level. The sample size is an indicator of computational cost, because each sample element is a value of the safety magnitude that requires a calculation with predictive models. Special emphasis is placed on the computational cost when the safety magnitude is multidimensional, that is, when the RAC is a multiple criterion. It is shown that, when the different components of the magnitude are obtained from the same calculation, the multidimensional character introduces no additional computational cost. This disproves a common belief in the BEPU field: that the multidimensional problem can only be attacked through the Wald extension, whose computational cost grows with the dimension of the problem. In the case (which sometimes occurs) in which each component of the magnitude is calculated independently of the others, the influence of the dimension on the cost cannot be avoided. The first BEPU methodologies performed the uncertainty propagation through a surrogate model (metamodel or emulator) of the predictive model or code. The purpose of the metamodel is not its predictive capability, far inferior to that of the original model, but to replace it exclusively in the propagation of uncertainties.
To that end, the metamodel must be built with the input parameters that contribute most to the uncertainty of the result, and that requires a prior importance or sensitivity analysis. Because of its simplicity, the surrogate model involves hardly any computational cost and can be studied exhaustively, for example by means of random samples. Consequently, the epistemic uncertainty or metauncertainty disappears, and the BEPU criterion for metamodels becomes a simple probability. In brief, the regulator will more readily accept the statistical methods that require the fewest assumptions: exact methods rather than approximate ones, nonparametric rather than parametric, and frequentist rather than Bayesian. The BEPU criterion is based on a second-order probability. The probability that the safety magnitudes lie in the acceptance region can not only be regarded as a probability of success or a degree of fulfillment of the RAC; it also has a metric interpretation: it represents a distance (within the range of the magnitudes) from the calculated magnitude to the regulatory acceptance limits. This interpretation gives rise to a definition proposed in this thesis: the probabilistic safety margin. Given a scalar safety magnitude with an upper acceptance limit, the safety margin (SM) between two values A and B of that magnitude is defined as the probability that A is less than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is dimensionless, it can be combined according to the laws of probability and it is easily generalized to several dimensions. In addition, it does not satisfy the symmetry property. The term safety margin can be applied to different situations: the distance from a calculated magnitude to a regulatory limit (licensing margin); the distance from the real value of the magnitude to its calculated value (analytical margin); the distance from a regulatory limit to the damage threshold of a barrier (barrier margin). This idea of representing distances (in the range of safety magnitudes) by probabilities can be applied to the study of conservatism. The analytical margin can be interpreted as the degree of conservatism of the calculation methodology. Using probability, the conservatism of tolerance limits of a magnitude can be quantified, and conservatism indicators can be established to compare different methods of constructing tolerance limits and regions. A topic that has never been tackled rigorously is the validation of BEPU methodologies. Like any other calculation tool, a methodology, before it can be applied to licensing analyses, has to be validated by comparing its predictions with real values of the safety magnitudes. Such a comparison can only be made for accident scenarios for which measured values of the safety magnitudes exist, and that happens, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that they are fulfilled for the real values of the safety magnitudes, and not only for their calculated values. The thesis proves that a sufficient condition for this ultimate goal is the joint fulfillment of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation.
And the validation criterion must be demonstrated in experimental scenarios and extrapolated to nuclear power plants. The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e. of the degree of conservatism). These minimum levels are essentially complementary: the higher one is, the lower the other. Current regulatory practice imposes a high value on the licensing margin, which means that the required degree of conservatism is small. Adopting lower values of P0 implies a weaker requirement on RAC fulfillment and, in exchange, a stronger requirement on the degree of conservatism of the methodology. It is important to stress that the higher the minimum value of a margin (licensing or analytical), the higher the computational cost of demonstrating it. The computational efforts are therefore also complementary: if one of the levels is high (which increases the demand on fulfillment of the corresponding criterion), the computational cost grows. If an intermediate value of P0 is adopted, the required degree of conservatism is also intermediate, so the methodology does not need to be very conservative and the total computational cost (licensing plus validation) can be optimized. ABSTRACT Deterministic Safety Analysis (DSA) is the procedure used in the design of safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes, and define the regulatory acceptance criteria (RAC) that must be fulfilled by them. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies make use of pessimistic models and assumptions, and are relatively simple. They do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions, and need to be supplemented with uncertainty analyses of their results. They are also termed BEPU ("Best Estimate Plus Uncertainty") methodologies, and are typically based on a probabilistic representation of the uncertainty. For conservative methodologies, the RAC are simply the restriction of calculated values of safety magnitudes to "acceptance regions" defined on their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain. In the present Thesis, the inclusion of uncertainty in the RAC is studied. Basically, the restriction to the acceptance region must be fulfilled "with a high certainty level"; specifically, a high probability of fulfillment is required. The calculation uncertainty of the magnitudes is considered as propagated from inputs through the predictive model. Uncertain inputs include model empirical parameters, which store the uncertainty due to model imperfection. Fulfillment of the RAC is required with a probability not less than a value P0 close to 1 and defined by the regulator (probability or coverage level). Calculation uncertainty is not the only uncertainty involved. Even if a model (i.e. its basic equations) is perfectly known, the input-output mapping produced by the model is imperfectly known (unless the model is very simple). This ignorance is called epistemic uncertainty, and it is associated with the propagation process; in fact, it is propagated to the probability of fulfilling the RAC. Another term used in the Thesis for this epistemic uncertainty is metauncertainty.
The RAC must include the two types of uncertainty: one for the calculation of the magnitude (aleatory uncertainty) and the other for the calculation of the probability (epistemic uncertainty). The two uncertainties can be taken into account separately or combined; in either case the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if both are combined, a single probability is used. In the first case, the regulator must define a level of fulfillment for the epistemic uncertainty, termed the regulatory confidence level, as a value close to 1. The pair of regulatory levels (probability and confidence) is termed the regulatory tolerance level. The Thesis concludes that the adequate way of setting the BEPU RAC is by separating the uncertainties. There are two reasons to do so: experts recommend the separation of aleatory and epistemic uncertainty, and the separated RAC is in general more conservative than the combined RAC. The BEPU RAC is a hypothesis on a probability distribution, and must be statistically tested. The Thesis classifies the statistical methods to verify RAC fulfillment into three categories: methods based on tolerance regions, on quantile estimators and on probability (of success or failure) estimators. The former two have been termed Q-methods, whereas those in the third category are termed P-methods. The purpose of this categorization is not to make an exhaustive survey of the very numerous existing methods, but rather to relate the three categories and examine the most used methods from a regulatory standpoint. The most used method, due to Wilks, and its extension to multidimensional variables (due to Wald) deserve special mention. The P-method counterpart of Wilks' method is the Clopper-Pearson interval, typically ignored in the BEPU realm. The problem of the computational cost of an uncertainty analysis is tackled. Wilks', Wald's and Clopper-Pearson's methods require a minimum sample size, which is a growing function of the tolerance level. The sample size is an indicator of the computational cost, because each element of the sample must be calculated with the predictive models (codes). When the RAC is a multiple criterion, the safety magnitude becomes multidimensional. When all its components are outputs of the same calculation, the multidimensional character does not introduce additional computational cost. In this way, an extended belief in the BEPU realm, stating that the multi-D problem can only be tackled with the Wald extension, is proven to be false. When the components of the magnitude are calculated independently, the influence of the problem dimension on the cost cannot be avoided. The first BEPU methodologies performed the uncertainty propagation through a surrogate model of the code, also termed emulator or metamodel. The goal of a metamodel is not predictive capability, which is clearly inferior to that of the original code, but the capacity to propagate uncertainties with a lower computational cost. The emulator must contain the input parameters contributing the most to the output uncertainty, and this requires a previous importance analysis. The surrogate model is practically inexpensive to run, so that it can be exhaustively analyzed through Monte Carlo sampling. Therefore, the epistemic uncertainty due to sampling is reduced to almost zero, and the BEPU RAC for metamodels reduces to a simple probability.
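The minimum sample size of Wilks' method mentioned above follows, for a one-sided first-order nonparametric tolerance limit, from requiring 1 − p^n ≥ β, where p is the coverage and β the confidence. A short sketch with the classic 95/95 tolerance level:

```python
import math

def wilks_min_sample(coverage, confidence):
    """Smallest n with 1 - coverage**n >= confidence (one-sided, first-order Wilks limit)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

# Classic BEPU example: 95% coverage with 95% confidence needs 59 code runs.
print(wilks_min_sample(0.95, 0.95))   # -> 59
```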
The regulatory authority will tend to accept statistical methods which need a minimum of assumptions: exact, nonparametric and frequentist methods rather than approximate, parametric and Bayesian methods, respectively. The BEPU RAC is based on a second-order probability. The probability of the safety magnitudes being inside the acceptance region is a success probability and can be interpreted as a degree of fulfillment of the RAC. Furthermore, it has a metric interpretation, as a distance (in the range of magnitudes) from the calculated values of the magnitudes to the regulatory acceptance limits. A probabilistic definition of safety margin (SM) is proposed in the Thesis: the SM from a value A to another value B of a safety magnitude is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is nondimensional, ranges in the interval (0,1) and can be easily generalized to multiple dimensions. Furthermore, probabilistic SMs are combined according to the laws of probability, and, as a basic property, they are not symmetric. There are several types of SM: the distance from a calculated value to a regulatory limit (licensing margin); from the real value to the calculated value of a magnitude (analytical margin); or from the regulatory limit to the damage threshold (barrier margin). These representations of distances (in the magnitudes' range) as probabilities can be applied to the quantification of conservativeness. Analytical margins can be interpreted as the degree of conservativeness (DG) of the computational methodology. Conservativeness indicators are established in the Thesis, useful in the comparison of different methods of constructing tolerance limits and regions. One topic that has not been rigorously tackled to date is the validation of BEPU methodologies. Before being applied in licensing, methodologies must be validated on the basis of comparisons of their predictions with real values of the safety magnitudes. Real data are obtained, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that real values (not only calculated values) fulfill them. In the Thesis it is proved that a sufficient condition for this goal is the conjunction of two criteria: the licensing BEPU RAC and an analogous criterion for validation. This last criterion must be proved in experimental scenarios and extrapolated to NPPs. The licensing RAC requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DG). These minimum values are basically complementary: the higher one of them, the lower the other one. Current regulatory practice sets a high value on the licensing margin, so that the required DG is low. The possible adoption of lower values for P0 would imply a weaker requirement on RAC fulfillment and, on the other hand, a higher requirement on the conservativeness of the methodology. It is important to highlight that a higher minimum value of the licensing or analytical margin requires a higher computational cost, so the computational efforts are also complementary. If medium levels are adopted, the required DG is also medium, the methodology does not need to be very conservative, and the total computational effort (licensing plus validation) can be optimized.
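A worked sketch of the probabilistic safety margin SM(A, B) = P(A < B) defined above, estimated by Monte Carlo sampling; the distributions and the 1477 K limit below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def probabilistic_safety_margin(a_samples, b_samples):
    """SM(A, B) = P(A < B), estimated from samples of the two uncertain values."""
    return np.mean(a_samples[:, None] < b_samples[None, :])

rng = np.random.default_rng(1)
# Illustrative distributions only: calculated peak clad temperature vs. acceptance limit (K).
a = rng.normal(1350.0, 40.0, size=2000)    # calculated magnitude with its uncertainty
b = np.full(2000, 1477.0)                  # here the regulatory limit is treated as exact
print(f"licensing margin SM = {probabilistic_safety_margin(a, b):.4f}")
# Note the asymmetry: SM(B, A) = 1 - SM(A, B) when ties have zero probability.
```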

Relevance:

60.00%

Publisher:

Abstract:

This research examined to what extent and how leadership is related to organisational outcomes in healthcare. Based on the Job Demands-Resources model, a set of hypotheses was developed predicting that the effect of leadership on healthcare outcomes would be mediated by job design, employee engagement, work pressure, opportunity for involvement, and work-life balance. The research focused on the National Health Service (NHS) in England and examined the relationships between senior leadership, first-line supervisory leadership and outcomes. Three years of data (2008-2010) were gathered from four sources: the NHS National Staff Survey, the NHS Inpatient Survey, the NHS Electronic Record, and the NHS Information Centre. The data were drawn from 390 healthcare organisations and over 285,000 staff in each of the three years. Parallel mediation regressions were used to model both cross-sectional and longitudinal designs. The findings revealed strong relationships between senior leadership and supervisor support, respectively, and job design, engagement, opportunity for involvement, and work-life balance, while senior leadership was also associated with work pressure. Except for job design, there were significant relationships between the mediating variables and the outcomes of patient satisfaction, employee job satisfaction, absenteeism, and turnover. Relative importance analysis showed that senior leadership accounted for significantly more variance in relationships with outcomes than supervisor support in the majority of models tested. Results are discussed in relation to theoretical and practical contributions. They suggest that leadership plays a significant role in organisational outcomes in healthcare and that previous research may have underestimated how influential senior leaders may be in relation to these outcomes. Moreover, the research suggests that leaders in healthcare may influence outcomes through the way they manage the work pressure, engagement, opportunity for involvement and work-life balance of those they lead.
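The abstract does not name the specific relative importance technique; one common choice is the LMG/Shapley decomposition, which averages each predictor's increment to R² over all entry orders. A minimal sketch on simulated data with hypothetical predictor names:

```python
import itertools
import numpy as np

def r2(X, y):
    """R-squared of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def lmg_importance(X, y, names):
    """Average increase in R^2 from adding each predictor, over all orderings (LMG/Shapley)."""
    k = X.shape[1]
    scores = dict.fromkeys(names, 0.0)
    perms = list(itertools.permutations(range(k)))
    for order in perms:
        prev = 0.0
        for i in range(k):
            current = r2(X[:, list(order[:i + 1])], y)
            scores[names[order[i]]] += (current - prev) / len(perms)
            prev = current
    return scores

# Hypothetical aggregated data: two leadership predictors and one outcome.
rng = np.random.default_rng(0)
senior = rng.normal(size=390)
supervisor = 0.5 * senior + rng.normal(size=390)
patient_satisfaction = 0.6 * senior + 0.2 * supervisor + rng.normal(size=390)
X = np.column_stack([senior, supervisor])
print(lmg_importance(X, patient_satisfaction, ["senior leadership", "supervisor support"]))
```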

Relevance:

40.00%

Publisher:

Abstract:

A survey of the carrion fauna was made at two sites in Curitiba, State of Paraná, with the objective of describing the insects associated with carrion and setting up a preliminary database for medico-legal purposes in southern Brazil. Vertebrate exclusion experiments were carried out in each season between 1994 and 1995 with a 250 g laboratory-bred rat (Rattus norvegicus). Five stages of decomposition were identified: fresh, bloated, decaying, dry and adipocere-like. Some species showed seasonal and site preferences and so could be used to identify the probable place and season of death. Sarconesia chlorogaster (Diptera, Calliphoridae) was restricted to an open-field site and to the cooler months. Hemilucilia semidiaphana (Diptera, Calliphoridae) and Pattonella resona (Diptera, Sarcophagidae) were restricted to the forest site and the warmer months. Phaenicia eximia (Diptera, Calliphoridae) and Oxyletrum discicolle (Coleoptera, Silphidae) were present at both sites throughout the year and could be useful for population-level analysis. Dissochaetus murray (Coleoptera, Cholevidae) was present throughout the year at the forest site and was associated with the adipocere-like stage. Ants played an important role, producing post-mortem injuries to the carcasses. Insects of 32 species are reported as being useful in community-level approaches.

Relevance:

40.00%

Publisher:

Abstract:

The incidence of over-education is assessed here by applying standard subjective and objective indicators and a new skill-based indicator of over-education to the national samples of eight European countries in the REFLEX survey. With the exception of Spain, the results reveal that over-education is a minor risk amongst European tertiary graduates. Yet the contrast between the standard indicators and the skill-based indicator reveals a moderate form of over-education in countries with high tertiary attainment rates (Norway, Finland and the Netherlands), which does not come to the surface when the standard indicators are applied. Our results also reveal the importance of higher-education differentiation (i.e. field of study and branch of higher education) for understanding the risk of over-education. Graduates from humanistic fields, bachelor courses and vocational colleges are more exposed to over-education, though their disadvantage varies cross-nationally to a significant extent.

Relevance:

40.00%

Publisher:

Abstract:

BACKGROUND: Reconstruction of the articular surfaces is essential in total shoulder arthroplasty. Because of the limited glenoid bone support, a thin glenoid component could improve the anatomical reconstruction, but adverse mechanical effects might appear. METHODS: Using a numerical musculoskeletal shoulder model, we analysed and compared three values of thickness for a typical all-polyethylene glenoid component: 2, 4 (reference) and 6 mm. A loaded movement of abduction in the scapular plane was simulated. We evaluated the humeral head translation, the muscle moment arms, the joint force, the articular contact pattern, and the stress in the polyethylene and the cement. FINDINGS: Decreasing the polyethylene thickness from 6 to 2 mm slightly increased the humeral head translation and the muscle moment arms. This induced a small decrease in the joint reaction force but an important increase in stress within the polyethylene and the cement mantle. INTERPRETATION: The reference thickness of 4 mm seems a good compromise to avoid stress concentration and joint stuffing.

Relevance:

40.00%

Publisher:

Abstract:

There is a wide range of values reported in volumetric studies of the amygdala. The use of thick single-plane magnetic resonance imaging (MRI) slices may prevent the correct visualization of anatomical landmarks and yield imprecise results. To assess whether there is a difference between volumetric analysis of the amygdala performed on single-plane 3-mm MRI slices and multiplanar analysis of 1-mm MRI slices, we studied healthy subjects and patients with temporal lobe epilepsy. We performed manual delineation of the amygdala on T1-weighted inversion recovery, 3-mm coronal slices, and on three-dimensional volumetric T1-weighted images with 1-mm slice thickness. The data were compared using a dependent t-test. There was a significant difference between the volumes obtained by the coronal plane-based measurements and those obtained by three-dimensional analysis (P < 0.001). An incorrect estimate of the amygdala volume may preclude a correct analysis of the biological effects of alterations in amygdala volume. Three-dimensional analysis is preferred because it is based on a more extensive anatomical assessment and its results are similar to those obtained in post-mortem studies.
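A minimal sketch of the dependent (paired) t-test comparison described above, using hypothetical paired volumes in mm³ for the two protocols:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical paired measurements of the same amygdalae (mm^3) under the two protocols.
vol_coronal_3mm = rng.normal(1800.0, 200.0, size=30)
vol_3d_1mm = vol_coronal_3mm + rng.normal(150.0, 80.0, size=30)   # assumed systematic offset

t_stat, p_value = stats.ttest_rel(vol_3d_1mm, vol_coronal_3mm)    # dependent (paired) t-test
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```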

Relevance:

40.00%

Publisher:

Abstract:

Terahertz (THz) radiation is being developed as a tool for the analysis of cultural heritage and, thanks to recent advances in technology, is now available commercially in systems that can be deployed for field analysis. The radiation is capable of penetrating up to one centimetre of wall plaster and is delivered in ultrafast pulses which are reflected from layers within this region. The technique is non-contact, non-invasive and non-destructive. While sub-surface radar is able to penetrate over a metre of wall plaster, producing details of internal structures, infrared and ultraviolet techniques produce information about the surface layers of wall plaster. THz radiation provides information about the intermediate region, up to approximately one centimetre into the wall surface. Data from Chartres Cathedral, France, Riga Dome Cathedral, Latvia, and the Chartreuse du Val de Bénédiction, France are presented, each with different research questions. At Chartres Cathedral, the presence of sub-surface paint layers dating to the 13th century was expected from documentary evidence. In contrast, at the Riga Dome Cathedral, surface painting had been obscured as recently as 1941, during the Russian occupation of Latvia, using white lead-based paint. In the 13th century, the wall paintings at the Chapel of the Frescos, Chartreuse du Val de Bénédiction in Villeneuve-lès-Avignon, were constructed using sinopia under-painting on plaster covering uneven stonework. This paper compares and contrasts the ability of THz radiation to provide information about sub-surface features in churches and cathedrals across Europe by analysing depth-based profiles obtained from the reflected signal. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
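The depth profiles rest on pulse time-of-flight: an interface at depth d returns an echo delayed by Δt = 2nd/c relative to the surface reflection, where n is the refractive index of the plaster. A sketch with an assumed, illustrative refractive index (not a value from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def layer_depth(delay_ps, refractive_index=2.4):
    """Depth (mm) of a reflecting layer from its echo delay, d = c * dt / (2 * n)."""
    delay_s = delay_ps * 1e-12
    return C * delay_s / (2.0 * refractive_index) * 1e3

# Hypothetical echo delays (ps) between the surface reflection and two sub-surface reflections.
for dt in (8.0, 40.0):
    print(f"delay {dt:5.1f} ps -> ~{layer_depth(dt):.2f} mm below the surface")
```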

Relevance:

40.00%

Publisher:

Abstract:

Bone tumor incidence in women peaks at age 50-60, coinciding with the menopause. That estrogen (E2) and triiodothyronine (T3) interact in bone metabolism is well established; however, few data on the action of these hormones are available. Our purpose was to determine the role of E2 and T3 in the expression of bone activity markers, namely alkaline phosphatase (AP) and receptor activator of nuclear factor kappa B ligand (RANKL). Two osteosarcoma cell lines, MG-63 (which has both estrogen (ER) and thyroid hormone (TR) receptors) and SaOs-2 (ER only), were treated with infraphysiological E2 associated with T3 at infraphysiological, physiological, and supraphysiological concentrations. Real-time RT-PCR was used for expression analysis. Our results show that, in MG-63 cells, infraphysiological E2 associated with supraphysiological T3 increases AP expression and decreases RANKL expression, while infraphysiological E2 associated with either physiological or supraphysiological T3 decreases both AP and RANKL expression. In SaOs-2 cells, on the other hand, the same hormone combinations had no significant effect on the markers' expression. Thus, the analysis of hormone receptors is crucial for assessing tumor growth potential in the face of hormonal changes. Special care should be provided to patients with T3 and E2 hormone receptors, which may increase tumor growth. Copyright (C) 2007 John Wiley & Sons, Ltd.

Relevance:

40.00%

Publisher:

Abstract:

Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte-Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the "best" empirical model developed without common cycle restrictions need not nest the "best" model developed with those restrictions. This is due to possible differences in the lag-lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan-Quinn criterion performs best among model selection criteria in simultaneously selecting the lag-length and rank of vector autoregressions.
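As a hedged illustration of the lag-length selection being compared, the sketch below simulates a bivariate VAR(1) with strongly comoving innovations and lets statsmodels pick the lag order under each information criterion; it does not reproduce the paper's Monte-Carlo design.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
# Simulate a bivariate VAR(1) with strongly correlated (comoving) innovations.
T, A = 300, np.array([[0.5, 0.2], [0.1, 0.4]])
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.multivariate_normal([0.0, 0.0], cov)

# Let each information criterion pick the lag length; Hannan-Quinn is the paper's recommendation.
for ic in ("aic", "hqic", "bic"):
    res = VAR(y).fit(maxlags=8, ic=ic)
    print(f"{ic}: selected lag order = {res.k_ar}")
```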

Relevance:

40.00%

Publisher:

Abstract:

Despite the belief, supported by recent applied research, that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of these data "features." We use exhaustive Monte-Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive and error correction models. First, we show that the "best" empirical model developed without common cycle restrictions need not nest the "best" model developed with those restrictions, due to the use of information criteria for choosing the lag order of the two alternative models. Second, we show that the costs of ignoring common-cyclical features in VAR analysis may be high in terms of forecasting accuracy and efficiency of estimates of variance decomposition coefficients. Although these costs are more pronounced when the lag order of VAR models is known, they are also non-trivial when it is selected using the conventional tools available to applied researchers. Third, we find that if the data have common-cyclical features and the researcher wants to use an information criterion to select the lag length, the Hannan-Quinn criterion is the most appropriate, since the Akaike and the Schwarz criteria have a tendency to over- and under-predict the lag length, respectively, in our simulations.