104 results for Radon measures.


Relevance: 20.00%

Abstract:

BACKGROUND Assessment of the proportion of patients with well controlled cardiovascular risk factors underestimates the proportion of patients receiving high quality of care. Evaluating whether physicians respond appropriately to poor risk factor control gives a different picture of quality of care. We assessed physician response to poor control of cardiovascular risk factors, as well as markers of potential overtreatment, in Switzerland, a country with universal healthcare coverage but without systematic quality monitoring, annual report cards on quality of care or financial incentives to improve quality. METHODS We performed a retrospective cohort study of 1002 randomly selected patients aged 50-80 years from four university primary care settings in Switzerland. For hypertension, dyslipidemia and diabetes mellitus, we first measured the proportions in control and then assessed therapy modifications among those in poor control. "Appropriate clinical action" was defined as a therapy modification, or a return to control without therapy modification, within 12 months among patients with baseline poor control. Potential overtreatment of these conditions was defined as intensive treatment among low-risk patients with optimal target values. RESULTS 20% of patients with hypertension, 41% with dyslipidemia and 36% with diabetes mellitus were in control at baseline. When appropriate clinical action in response to poor control was integrated into the measurement of quality of care, 52 to 55% of patients had appropriate quality of care. Over 12 months, therapy was modified for 61% of patients with baseline poor control of hypertension, 33% for dyslipidemia, and 85% for diabetes mellitus. Increases in the number of drug classes (28-51%) and in drug doses (10-61%) were the most common therapy modifications. Patients with target organ damage and higher baseline values were more likely to receive appropriate clinical action. We found low rates of potential overtreatment: 2% for hypertension, 3% for diabetes mellitus and 3-6% for dyslipidemia. CONCLUSIONS In primary care, evaluating whether physicians respond appropriately to poor risk factor control, in addition to assessing the proportions in control, provides a broader view of the quality of care than relying solely on proportions in control. Such measures could be more clinically relevant and acceptable to physicians than simply reporting levels of control.
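
The composite indicator described above lends itself to a simple computation. The following Python sketch uses hypothetical field names rather than the study's actual variables: a patient counts as receiving appropriate quality of care if the risk factor was controlled at baseline or, if not, if therapy was modified or control was regained within 12 months.

```python
# Minimal sketch of the composite quality indicator; field names are hypothetical
# and the study's exact operationalisation may differ.
def appropriate_quality_of_care(in_control_at_baseline,
                                therapy_modified_within_12m,
                                returned_to_control_within_12m):
    if in_control_at_baseline:
        return True  # already controlled at baseline
    # poor baseline control: appropriate clinical action required
    return therapy_modified_within_12m or returned_to_control_within_12m

patients = [
    dict(in_control_at_baseline=True,  therapy_modified_within_12m=False, returned_to_control_within_12m=False),
    dict(in_control_at_baseline=False, therapy_modified_within_12m=True,  returned_to_control_within_12m=False),
    dict(in_control_at_baseline=False, therapy_modified_within_12m=False, returned_to_control_within_12m=False),
]
share = sum(appropriate_quality_of_care(**p) for p in patients) / len(patients)
print(round(share, 2))  # 0.67 with these made-up records
```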

Relevance: 20.00%

Abstract:

OBJECTIVES Sensorineural hearing loss from sound overexposure has a considerable prevalence. Identification of sound hazards is crucial because, in the absence of definitive therapies, prevention is the sole alternative to hearing aids. One subjectively loud, yet little studied, potential sound hazard is movie theaters. This study evaluates the applicability of smartphones as a widely available, validated sound pressure level (SPL) meter and uses them to measure sound levels in movie theaters, to determine whether those levels exceed safe occupational noise exposure limits and whether they differ as a function of movie, movie theater, presentation time, and seat location within the theater. DESIGN Six smartphones with an SPL meter software application were calibrated against a precision SPL meter and validated as SPL meters. Additionally, three different smartphone generations were compared with an integrating SPL meter. Two different movies, an action movie and a children's movie, were measured six times each in 10 different venues (n = 117). To maximize representativeness, movies were selected from large release productions with probable high attendance. Movie theaters were selected in the San Francisco, CA, area based on whether they screened both chosen movies and to represent the largest variety of theater proprietors. Measurements were analyzed with regard to differences between theaters, location within the theater, and movie, as well as presentation time and day as indirect indicators of film attendance. RESULTS The smartphone measurements demonstrated high accuracy and reliability. Overall, sound levels in movie theaters did not exceed safe exposure limits by occupational standards. Sound levels varied significantly across theaters, and the action movie showed significantly higher sound levels and exposures than the children's movie. Sound levels decreased with distance from the screen. However, no influence of time of day or day of the week, as indirect indicators of film attendance, could be found. CONCLUSIONS Calibrated smartphones with an appropriate software application, as used in this study, can be utilized as a validated SPL meter. Because of their wide availability, smartphones combined with such an application can provide large numbers of recreational sound exposure measurements, which can facilitate the identification of potential noise hazards. Sound levels in movie theaters decrease with distance from the screen but do not exceed safe occupational noise exposure limits. Additionally, there are significant differences in sound levels across movie theaters and movies, but not across presentation times.
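
As a rough illustration of how SPL samples translate into exposure figures (this is not the study's analysis, and the numbers below are invented), the sketch computes an equivalent continuous level and a NIOSH-style noise dose using the 85 dBA criterion and 3-dB exchange rate.

```python
import math

def leq(spl_samples_db):
    """Equivalent continuous sound level (dB) from equally spaced SPL samples."""
    mean_intensity = sum(10 ** (l / 10) for l in spl_samples_db) / len(spl_samples_db)
    return 10 * math.log10(mean_intensity)

def niosh_dose(spl_samples_db, sample_minutes):
    """NIOSH-style daily noise dose (%): 85 dBA criterion, 3-dB exchange rate."""
    dose = 0.0
    for l in spl_samples_db:
        allowed_minutes = 480 / 2 ** ((l - 85) / 3)  # permissible duration at level l
        dose += 100 * sample_minutes / allowed_minutes
    return dose

# e.g. a two-hour movie sampled once per minute at a constant 80 dBA
samples = [80.0] * 120
print(round(leq(samples), 1), round(niosh_dose(samples, 1.0), 1))  # 80.0 dB, ~7.9% dose
```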

Relevance: 20.00%

Abstract:

The unprecedented success of social networking sites (SNSs) has recently been overshadowed by concerns about privacy risks. As SNS users grow weary of privacy breaches and thus develop distrust, they may restrict or even terminate their platform activities. In the long run, these developments endanger SNS platforms' financial viability and undermine their ability to create individual and social value. By applying a justice perspective, this study aims to understand the means at the disposal of SNS providers to address the privacy concerns and trusting beliefs of their users, two important determinants of user participation on SNSs. Considering that SNSs have a global appeal, empirical tests assess the effectiveness of justice measures in three culturally distinct countries: Germany, Russia and Morocco. The results indicate that these measures are particularly suited to addressing the trusting beliefs of the SNS audience. Specifically, in all examined countries, procedural justice and the awareness dimension of informational justice improve perceptions of trust in the SNS provider. Privacy concerns, however, are not as easy to manage, because the impact of justice-based measures on privacy concerns is not universal. Beyond its theoretical value, this research offers valuable practical insights into the use of justice-based measures to promote trust and mitigate privacy concerns in a cross-cultural setting.

Relevance: 20.00%

Abstract:

Assessing and managing risks relating to the consumption of foodstuffs by humans and to the environment has been one of the most complex legal issues in WTO law ever since the Agreement on Sanitary and Phytosanitary Measures was adopted at the end of the Uruguay Round and entered into force in 1995. The problem has been explored in a number of cases, in which panels and the Appellate Body adopted different philosophies in interpreting the Agreement and the basic concept of risk assessment as defined in Annex A para. 4 of the Agreement. Risk assessment entails fundamental questions of law and science, and different interpretations reflect different underlying perceptions of science and its relationship to the law. The present thesis, supported by the Swiss National Research Foundation, undertakes an in-depth analysis of these underlying perceptions. The author expounds the essence of, and differences between, positivism and relativism in philosophy and the natural sciences, and clarifies the relationship between fundamental concepts such as risk, hazard and probability. This investigation is a remarkable effort on the part of a lawyer keen to learn more about the fundamentals upon which the law is operated, often unconsciously, by the legal profession and the trade community. Based upon these insights, he turns to a critical assessment of the jurisprudence of both panels and the Appellate Body. Extensively referring to and discussing the literature, he deconstructs findings and decisions in light of the implied and assumed underlying philosophies and perceptions of the relationship between law and science, in particular in the field of food standards. Finding that neither positivism nor relativism provides adequate answers, the author turns to critical rationalism and applies the methodology of falsification developed by Karl R. Popper. Critical rationalism allows discourse in science and law to be combined and helps prepare the ground for a new approach to risk assessment and risk management. Linking the problem to the doctrine of multilevel governance, the author develops a theory allocating risk assessment to international fora while leaving risk management to national and democratically accountable governments. While the author throughout the thesis questions the possibility of separating risk assessment and risk management, the thesis offers new avenues which may assist in structuring a complex and difficult problem.

Relevance: 20.00%

Abstract:

Numerous studies have reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ with respect to how closely these two constructs are related. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) models and are of particular interest for experimental, repeated-measures designs. With this technique, processes that vary systematically across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that the experimental manipulation in a WMC task leads to increasing demands on WMC, the processes varying systematically across task conditions can be assumed to be WMC-specific, whereas processes that do not vary across task conditions are probably independent of WMC. Fixed-links models allow these two kinds of processes to be represented by two independent latent variables. In contrast to traditional CFA, where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise, purified representation of the WMC-related processes of interest. Using fixed-links modeling to analyze data from 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable reflecting processes that varied as a function of the experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) and the constant processes involved in the task (β = .45) were related to Gf. Taken together, these two latent variables explained the same portion of variance in Gf as a single latent variable obtained by traditional CFA (β = .65), indicating that traditional CFA overestimates the effective relationship between WMC and Gf. Thus, fixed-links modeling provides a feasible method for a more valid investigation of the functional relationship between specific constructs.
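
The reported path coefficients can be checked with a line of arithmetic. Because the two fixed-links latent variables are specified as independent, their contributions to the variance in Gf simply add; the sketch below, using only the β values quoted above, shows that the combined R² roughly matches that of the single traditional-CFA factor, while only about half of it is attributable to the purified WMC variable.

```python
# Variance in Gf explained, assuming the two fixed-links latent variables are
# orthogonal (independent), so their contributions add: R^2 = beta1^2 + beta2^2.
beta_wmc, beta_constant = 0.48, 0.45  # purified WMC and constant-process paths
beta_single = 0.65                    # single latent variable from traditional CFA

r2_fixed_links = beta_wmc ** 2 + beta_constant ** 2  # ≈ 0.433
r2_traditional = beta_single ** 2                    # ≈ 0.423

print(round(r2_fixed_links, 3), round(r2_traditional, 3))
```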

Relevance: 20.00%

Abstract:

Neuropsychologists often face interpretational difficulties when assessing cognitive deficits, particularly in cases of unclear cerebral etiology. How can we be sure whether a single test score below the population average indicates a pathological brain condition or falls within normal variability? In the past few years, the topic of intra-individual performance variability has gained considerable interest. On the basis of a large normative sample, two measures of performance variability and their importance for neuropsychological interpretation are presented in this paper: the number of low scores and the level of dispersion. We conclude that low scores are common in healthy individuals, whereas the level of dispersion is relatively small. Base rate information about abnormally low scores and abnormally high dispersion across cognitive abilities is provided to improve awareness of normal variability and to serve clinicians as an additional interpretive measure in the diagnostic process.
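
A minimal sketch of the two variability indices discussed above, assuming standardised index scores (mean 100, SD 15) and a hypothetical cutoff of one SD below the mean for a "low" score; the paper's exact cutoffs and score metric may differ.

```python
def variability_indices(index_scores, low_cutoff=85):
    """Number of low scores and dispersion (range) across a test battery."""
    n_low = sum(score < low_cutoff for score in index_scores)  # number of low scores
    dispersion = max(index_scores) - min(index_scores)         # spread across the battery
    return n_low, dispersion

battery = [92, 104, 88, 110, 83]    # one healthy examinee, five cognitive indices
print(variability_indices(battery))  # (1, 27): one low score, 27-point spread
```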

Relevance: 20.00%

Abstract:

We investigated whether amygdala activation, autonomic responses, respiratory responses, and facial muscle activity (measured over the brow and cheek [fear grin] regions) are all sensitive to phobic versus nonphobic fear and, more importantly, whether effects in these variables vary as a function of both phobic and nonphobic fear intensity. Spider-phobic participants and control participants with comparably low spider fear imagined encountering different animals and rated their subjective fear while their central and peripheral nervous system activity was measured. All measures included in our study were sensitive to variations in subjective fear, but they were related to different ranges and positions on the subjective fear continuum. Left amygdala activation, heart rate, and facial muscle activity over the cheek region captured fear intensity variations even within narrowly defined regions of the fear continuum (here, within extremely low levels of fear and within considerable phobic fear). Skin conductance and facial muscle activity over the brow region did not capture fear intensity variations within low levels of fear: skin conductance mirrored only extreme levels of fear, and activity over the brow region distinguished phobic from nonphobic fear as well as low-to-moderate from high phobic fear. Finally, respiratory measures distinguished phobic from nonphobic fear with no further differentiation within phobic and nonphobic fear. We conclude that careful consideration of the measures to be used in an investigation and the population to be examined can be critical in order to obtain significant results.

Relevance: 20.00%

Abstract:

Soil carbon (C) storage is a key ecosystem service. Soil C stocks play a vital role in soil fertility and climate regulation, but the factors that control these stocks at regional and national scales are unknown, particularly when their composition and stability are considered. As a result, their mapping relies on either unreliable proxy measures or laborious direct measurements. Using data from an extensive national survey of English grasslands, we show that surface soil (0–7 cm) C stocks in size fractions of varying stability can be predicted at both regional and national scales from plant traits and simple measures of soil and climatic conditions. Soil C stocks in the largest pool, of intermediate particle size (50–250 μm), were best explained by mean annual temperature (MAT), soil pH and soil moisture content. The second largest C pool, of highly stable, physically and biochemically protected particles (0.45–50 μm), was explained by soil pH and the community abundance-weighted mean (CWM) leaf nitrogen (N) content, with the highest soil C stocks under N-rich vegetation. The C stock in the small, active fraction (250–4000 μm) was explained by a wide range of variables: MAT, mean annual precipitation, mean growing season length, soil pH and CWM specific leaf area; stocks were higher under vegetation with thick and/or dense leaves. Testing the models describing these fractions against data from an independent English region indicated a moderately strong correlation between predicted and actual values and no systematic bias, with the exception of the active fraction, for which predictions were inaccurate. Synthesis and applications. Validation indicates that readily available climate, soil and plant survey data can be effective in making local- to landscape-scale (1–100 000 km²) soil C stock predictions. Such predictions are a crucial component of effective management strategies to protect C stocks and enhance soil C sequestration.
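
The prediction step described above can be pictured as an ordinary least-squares fit of a fraction's C stock on a few environmental covariates. The sketch below is generic and uses invented numbers; it is not the survey data, the authors' model form, or their fitted coefficients.

```python
import numpy as np

# Generic least-squares sketch: predict the intermediate-fraction soil C stock
# from mean annual temperature (MAT), soil pH and soil moisture. All values are
# made up for illustration only.
mat      = np.array([8.5, 9.2, 10.1, 9.8, 8.9])      # °C
ph       = np.array([5.6, 6.2, 7.0, 6.5, 5.9])
moisture = np.array([0.42, 0.35, 0.28, 0.31, 0.40])  # gravimetric fraction
c_stock  = np.array([2.1, 1.8, 1.4, 1.6, 2.0])       # kg C m^-2, 0-7 cm

X = np.column_stack([np.ones_like(mat), mat, ph, moisture])
coef, *_ = np.linalg.lstsq(X, c_stock, rcond=None)   # intercept + three slopes
predicted = X @ coef
print(np.round(coef, 3), np.round(predicted, 2))
```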

Relevance: 20.00%

Abstract:

Elicitability has recently been discussed as a desirable property for risk measures. Kou and Peng (2014) showed that an elicitable distortion risk measure is either a Value-at-Risk or the mean. We give a concise alternative proof of this result, and discuss the conflict between comonotonic additivity and elicitability.
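
For readers unfamiliar with the notion, a risk measure is elicitable if it minimises the expected value of some scoring function of forecasts and realisations; Value-at-Risk at level α is elicited by the pinball (quantile) loss, and the mean by the squared error. The following sketch is a standard numerical illustration, not material from the note itself.

```python
import numpy as np

def pinball_score(x, losses, alpha):
    """Average pinball score S(x, y) = (1{y <= x} - alpha) * (x - y)."""
    losses = np.asarray(losses)
    return np.mean(((losses <= x).astype(float) - alpha) * (x - losses))

rng = np.random.default_rng(0)
losses = rng.normal(size=100_000)   # simulated loss distribution
alpha = 0.95

# The grid point minimising the average score coincides with the empirical
# alpha-quantile, i.e. Value-at-Risk at level alpha is elicitable.
candidates = np.linspace(0.0, 3.0, 301)
scores = [pinball_score(x, losses, alpha) for x in candidates]
best = candidates[int(np.argmin(scores))]
print(round(best, 2), round(np.quantile(losses, alpha), 2))  # both ≈ 1.64 for N(0,1)
```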

Relevance: 20.00%

Abstract:

In the present contribution, we characterise law-determined convex risk measures that have convex level sets at the level of distributions. By relaxing the assumptions in Weber (Math. Finance 16:419–441, 2006), we show that these risk measures can be identified with a class of generalised shortfall risk measures. As a direct consequence, we are able to extend the results of Ziegel (Math. Finance, 2014, http://onlinelibrary.wiley.com/doi/10.1111/mafi.12080/abstract) and Bellini and Bignozzi (Quant. Finance 15:725–733, 2014) on convex elicitable risk measures and confirm that expectiles are the only elicitable coherent risk measures. Further, we provide a simple characterisation of robustness for convex risk measures in terms of a weak notion of mixture continuity.
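
Expectiles, mentioned above as the only elicitable coherent risk measures, are the statistics elicited by an asymmetric squared loss; equivalently, the τ-expectile is the unique x with τ·E[(Y−x)_+] = (1−τ)·E[(x−Y)_+]. The bisection sketch below is a generic numerical illustration of that first-order condition, not code from the paper.

```python
import numpy as np

def expectile(y, tau, tol=1e-10):
    """tau-expectile: the x solving tau*E[(Y-x)_+] = (1-tau)*E[(x-Y)_+]."""
    y = np.asarray(y, dtype=float)
    lo, hi = y.min(), y.max()        # the expectile lies within the sample range
    while hi - lo > tol:
        x = 0.5 * (lo + hi)
        # imbalance is decreasing in x and zero at the expectile
        imbalance = (tau * np.mean(np.maximum(y - x, 0))
                     - (1 - tau) * np.mean(np.maximum(x - y, 0)))
        if imbalance > 0:
            lo = x
        else:
            hi = x
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
y = rng.normal(size=100_000)
print(round(expectile(y, 0.5), 3))   # ≈ 0: the 0.5-expectile is the mean
print(round(expectile(y, 0.95), 3))  # ≈ 1.14 for a standard normal sample
```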