946 results for logarithmic mean
Abstract:
BACKGROUND: Cost-effective means of assessing the levels of risk factors in the population have to be defined in order to monitor these factors over time and across populations. This study analyzes the difference in population estimates of mean body mass index (BMI) and the prevalence of overweight between a health examination survey and a telephone survey. METHODS: The study compares the results of two health surveys, one by telephone (N=820) and the other by physical examination (N=1318). The two surveys, based on independent random samples of the population, were carried out over the same period (1992-1993) in the same population (canton of Vaud, Switzerland). RESULTS: Overall participation rates were 67% and 53% for the health interview survey (HIS) and the health examination survey (HES), respectively. In the HIS, the reporting rate was over 98% for weight and height values. Self-reported weight was on average lower than measured weight, by 2.2 kg in men and 3.5 kg in women, while self-reported height was on average greater than measured height, by 1.2 cm in men and 1.9 cm in women. As a result, in comparison to the HES, the HIS led to substantially lower mean levels of BMI and to a reduction of the prevalence of obesity (BMI > 30 kg/m²) by more than half. These differences were larger for women than for men. CONCLUSION: The two surveys were based on different sampling procedures. However, this difference in design is unlikely to explain the systematic bias observed between self-reported and measured values for height and weight. This bias compromises the overall validity of BMI assessment from telephone surveys.
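The reporting bias compounds through the BMI formula (weight divided by height squared): underreported weight and overreported height both push BMI down. A minimal sketch using the mean male biases reported above (2.2 kg and 1.2 cm); the baseline measured values are illustrative assumptions, not study data:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Illustrative measured values (assumed), shifted by the mean male
# self-report biases from the abstract: -2.2 kg weight, +1.2 cm height.
measured_w, measured_h = 80.0, 1.75
reported_w, reported_h = measured_w - 2.2, measured_h + 0.012

print(f"measured BMI: {bmi(measured_w, measured_h):.2f}")  # ~26.1
print(f"reported BMI: {bmi(reported_w, reported_h):.2f}")  # ~25.1
```

Even these modest per-person biases shift BMI by about one unit, enough to move borderline individuals below the obesity cut-off and halve the estimated prevalence, as the abstract reports.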
Abstract:
The workshop was attended by 13 people excluding facilitators. Most were from outside QUB (including Belfast City Council, NHSSB, BHSCT, Centre for Public Health, NICR, Institute of Agri-food and Land Use (QUB), etc.).

The programme was:

Introductions

Part 1: What’s “knowledge brokerage” all about?
- Presentation and Q&A (Kevin Balanda)
- Small group discussions

Part 2: What the Centre of Excellence is doing
- Presentation and Q&A (Kevin Balanda)
- Small group discussions
Abstract:
Moderate alcohol consumption has been associated with lower coronary artery disease (CAD) risk. However, data on the CAD risk associated with high alcohol consumption are conflicting. The aim of this study was to examine the impact of heavier drinking on 10-year CAD risk in a population with high mean alcohol consumption. In a population-based study of 5,769 adults (aged 35 to 75 years) without cardiovascular disease in Switzerland, 1-week alcohol consumption was categorized as 0, 1 to 6, 7 to 13, 14 to 20, 21 to 27, 28 to 34, and ≥35 drinks/week, or as nondrinkers (0 drinks/week), moderate (1 to 13 drinks/week), high (14 to 34 drinks/week), and very high (≥35 drinks/week) drinkers. Blood pressure and lipids were measured, and 10-year CAD risk was calculated according to the Framingham risk score. Seventy-three percent (n = 4,214) of the participants consumed alcohol; 16% (n = 909) were high drinkers and 2% (n = 119) very high drinkers. In multivariate analysis, increasing alcohol consumption was associated with higher high-density lipoprotein cholesterol (from a mean ± SE of 1.57 ± 0.01 mmol/L in nondrinkers to 1.88 ± 0.03 mmol/L in very high drinkers), higher triglycerides (1.17 ± 1.01 to 1.32 ± 1.05 mmol/L), and higher systolic and diastolic blood pressure (127.4 ± 0.4 to 132.2 ± 1.4 mm Hg and 78.7 ± 0.3 to 81.7 ± 0.9 mm Hg, respectively) (all p values for trend <0.001). Ten-year CAD risk increased from 4.31 ± 0.10% to 4.90 ± 0.37% (p = 0.03) with alcohol use, with a J-shaped relation. Increasing wine consumption was more related to high-density lipoprotein cholesterol levels, whereas beer and spirits were related to increased triglyceride levels. In conclusion, as measured by 10-year CAD risk, the protective effect of alcohol consumption disappears in very high drinkers, because the beneficial increase in high-density lipoprotein cholesterol is offset by the increases in blood pressure levels.
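A minimal sketch of the two-tier categorization described above; the threshold bands are taken from the abstract, while the function name and the test values are mine:

```python
def drink_category(drinks_per_week: int) -> str:
    """Map weekly alcohol consumption onto the four study categories."""
    if drinks_per_week == 0:
        return "nondrinker"
    if drinks_per_week <= 13:
        return "moderate"
    if drinks_per_week <= 34:
        return "high"
    return "very high"

assert drink_category(0) == "nondrinker"
assert drink_category(10) == "moderate"
assert drink_category(20) == "high"
assert drink_category(40) == "very high"
```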
Abstract:
A Guide for Staff
Abstract:
This paper conducts an empirical analysis of the relationship between wage inequality, employment structure, and returns to education in urban areas of Mexico during the past two decades (1987-2008). Applying Melly’s (2005) quantile regression based decomposition, we find that changes in wage inequality have been driven mainly by variations in educational wage premia. Additionally, we find that changes in employment structure, including occupation and firm size, have played a vital role. This evidence seems to suggest that the changes in wage inequality in urban Mexico cannot be interpreted in terms of a skill-biased change, but rather they are the result of an increasing demand for skills during that period.
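The building block of such a decomposition is a set of quantile regressions of log wages on worker characteristics, estimated at several points of the wage distribution. A minimal sketch with statsmodels, on synthetic data (the variable names and the data-generating process are illustrative assumptions, not the authors' specification):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for a wage survey: log wages rise with schooling,
# with more dispersion at higher education levels.
rng = np.random.default_rng(0)
educ = rng.integers(6, 18, size=2000)
log_wage = 1.0 + 0.08 * educ + rng.normal(0, 0.2 + 0.01 * educ)
df = pd.DataFrame({"log_wage": log_wage, "educ": educ})

# Returns to education at different points of the wage distribution.
for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("log_wage ~ educ", df).fit(q=q)
    print(f"q={q}: return per year of schooling = {fit.params['educ']:.3f}")
```

Comparing how these quantile-specific coefficients change over time is what drives the decomposition of inequality into wage-premium and composition effects.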
Abstract:
This paper is concerned with the modeling and analysis of quantum dissipation phenomena in the Schrödinger picture. More precisely, we investigate in detail a dissipative, nonlinear Schrödinger equation that accounts for quantum Fokker–Planck effects, and show how it is drastically reduced to a simpler logarithmic equation via a nonlinear gauge transformation, in such a way that the physics underlying both problems remains unaltered. From a mathematical viewpoint, this makes the analysis of the local well-posedness of the initial–boundary value problem more tractable. The simplification requires performing the polar (modulus–argument) decomposition of the wavefunction, which is rigorously justified (for the first time, to the best of our knowledge) under quite reasonable assumptions.
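For orientation, the objects involved can be written schematically as follows (the precise coefficients are illustrative assumptions, not taken from the paper): the wavefunction is decomposed in polar form and, after the nonlinear gauge transformation, solves a logarithmic Schrödinger equation,

$$
\psi(x,t) = \sqrt{n(x,t)}\, e^{i S(x,t)/\hbar}, \qquad
i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\,\Delta\psi + \lambda \ln\!\big(|\psi|^2\big)\,\psi ,
$$

where $n$ is the position density, $S$ the (real) phase, and $\lambda$ a coupling constant inherited from the dissipative terms. The polar decomposition is what makes the gauge transformation, and hence the reduction, well defined.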
Abstract:
Background. DNA-damage assays, quantifying the initial number of DNA double-strand breaks induced by radiation, have been proposed as a predictive test for radiation-induced toxicity. Determination of radiation-induced apoptosis in peripheral blood lymphocytes by flow cytometry analysis has also been proposed as an approach for predicting normal tissue responses following radiotherapy. The aim of the present study was to explore the association between initial DNA damage, estimated by the number of double-strand breaks induced by a given radiation dose, and the observed radiation-induced apoptosis rates. Methods. Peripheral blood lymphocytes were taken from 26 consecutive patients with locally advanced breast carcinoma. Radiosensitivity of lymphocytes was quantified as the initial number of DNA double-strand breaks induced per Gy and per DNA unit (200 Mbp). Radiation-induced apoptosis at 1, 2 and 8 Gy was measured by flow cytometry using annexin V/propidium iodide. Results. Radiation-induced apoptosis increased with radiation dose, and the data fitted a semilogarithmic mathematical model. A positive correlation was found among radiation-induced apoptosis values at the different radiation doses: 1, 2 and 8 Gy (p < 0.0001 in all cases). The mean DSB/Gy/DNA unit obtained was 1.70 ± 0.83 (range 0.63-4.08; median 1.46). A statistically significant inverse correlation was found between initial DNA damage and radiation-induced apoptosis at 1 Gy (p = 0.034), with a trend toward significance at 2 Gy (p = 0.057) and 8 Gy (p = 0.067), observed after 24 hours of incubation. Conclusions. An inverse association was observed for the first time between these variables, both considered predictive factors for radiation toxicity.
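A semilogarithmic model of this kind can be fitted by regressing the apoptosis rate on the logarithm of dose. A minimal sketch; the apoptosis values below are invented for illustration, not the study's data:

```python
import numpy as np

# Invented apoptosis rates (%) at the three doses used in the study.
dose_gy = np.array([1.0, 2.0, 8.0])
apoptosis = np.array([12.0, 18.0, 30.0])

# Semilog model: apoptosis = a + b * ln(dose); polyfit returns [slope, intercept].
b, a = np.polyfit(np.log(dose_gy), apoptosis, 1)
print(f"apoptosis ≈ {a:.1f} + {b:.1f} * ln(dose)")
```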
Abstract:
Kriging is an interpolation technique whose optimality criteria are based on normality assumptions either for observed or for transformed data. This is the case of normal, lognormal and multigaussian kriging. When kriging is applied to transformed scores, the optimality of the obtained estimators becomes a cumbersome concept: back-transformed optimal interpolations in transformed scores are not optimal in the original sample space, and vice versa. This lack of compatible optimality criteria induces a variety of problems in both point and block estimates. For instance, lognormal kriging, widely used to interpolate positive variables, has no straightforward way to build consistent and optimal confidence intervals for estimates. These problems are ultimately linked to the assumed space structure of the data support: for instance, positive values, when modelled with lognormal distributions, are assumed to be embedded in the whole real space, with the usual real space structure and Lebesgue measure.
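To make the back-transformation issue concrete, consider the standard lognormal (simple) kriging estimator, a textbook expression added here for illustration rather than quoted from this text: kriging is performed on $Y = \ln Z$, and the back-transformed estimate

$$
Z^{*}(x_0) = \exp\!\Big( Y^{*}(x_0) + \tfrac{1}{2}\,\sigma_{SK}^{2}(x_0) \Big)
$$

mixes the optimal log-scale estimate $Y^{*}$ with its kriging variance $\sigma_{SK}^{2}$. Optimality and unbiasedness in the log scale therefore do not carry over to $Z$, and confidence intervals built in one scale are inconsistent in the other.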
Abstract:
The present study examines the Five-Factor Model (FFM) of personality and locus of control in French-speaking samples in Burkina Faso (N = 470) and Switzerland (Ns = 1,090, 361), using the Revised NEO Personality Inventory (NEO-PI-R) and Levenson's Internality, Powerful Others, and Chance (IPC) scales. Alpha reliabilities were consistently lower in Burkina Faso, but the factor structure of the NEO-PI-R was replicated in both cultures. The intended three-factor structure of the IPC could not be replicated, although a two-factor solution was replicable across the two samples. Although scalar equivalence has not been demonstrated, mean-level comparisons showed the hypothesized effects for most of the five factors and locus of control; Burkinabè scored higher in Neuroticism than anticipated. Findings from this African sample generally replicate earlier results from Asian and Western cultures, and are consistent with a biologically based theory of personality.
Abstract:
Most people hold beliefs about the personality characteristics of typical members of their own and other cultures. These perceptions of national character may be generalizations from personal experience, stereotypes with a "kernel of truth", or inaccurate stereotypes. We obtained national character ratings from 3,989 people in 49 cultures and compared them with the average personality scores of culture members assessed by observer ratings and self-reports. National character ratings were reliable but did not converge with assessed traits. Perceptions of national character thus appear to be unfounded stereotypes that may serve the function of maintaining a national identity.
Abstract:
In this paper we present the first results of the study of planktonic Foraminifera, large benthic Foraminifera and carbonate facies of La Désirade, aiming at defining the age and depositional environments of the Neogene carbonates of this island. The study of planktonic Foraminifera from the Detrital Offshore Limestones (DOL) of the Ancienne Carrière allows us to constrain the biochronology of this formation to the lower Zone N19 and indicates a latest Miocene to early Pliocene (5.48-4.52 Ma) age. Large benthic Foraminifera were studied both as isolated, and often naturally split, specimens from the DOL, and in thin sections of limestones from the DOL and the Limestone Table (LT). The assemblages of Foraminifera include Nummulitidae, Amphisteginidae, Asterigerinidae, Peneroplidae, Soritidae, Rotalidae (Globigerinidae: Globigerinoides, Sphaeroidenellopsis, Orbulina) and encrusting Foraminifera (Homotrema and Sporadotrema). The genera Amphistegina, Archaias and Operculina are discussed. Concerning the Nummulitidae, we include both "Paraspiroclypeus" chawneri and "Nummulites" cojimarensis, as well as a newly described species, Operculina desiradensis, new species, in the genus Operculina, because the differences between these three species lie at the specific rather than the generic level, while their morphology, studied by SEM, is compatible with the definition of the genus Operculina (d'Orbigny 1826, emend. Hottinger 1977). The three species can easily be distinguished on the basis of their differences in spiral growth: while O. desiradensis shows an overall logarithmic spiral growth, O. cojimarensis and especially O. chawneri show a tighter and more geometric spiral growth. O. cojimarensis and O. chawneri were originally described from Cuba, in outcrops originally dated as Oligocene and later redated as early Pliocene. Therefore, O. chawneri was until now considered restricted to the early Pliocene. However, in the absence of a detailed morphometric and biostratigraphic study of the Caribbean Neogene nummulitids, it is difficult to evaluate the biochronologic range of these species.

The history of the carbonates begins with the initial tectonic uplift and erosion of the Jurassic igneous basement of La Désirade, which must have occurred at the latest in late Miocene times, when sea level oscillated around a stable long-term mean. The rhythmic deposition of the Désirade Limestone Table (LT) can be explained by synsedimentary subsidence in a context of rapidly oscillating sea level due to precession-driven (19-21 kyr) glacio-eustatic sea-level changes during the latest Miocene–Pliocene. Except for a thin reef cap present at the eastern edge of the LT, no other in-place reefal constructions have been observed in the LT. The DOL of western Désirade are interpreted as below-wave-base gravity deposits that accumulated beneath a steep fore-reef slope. They document the mobilisation of carbonate material (including Larger Foraminifera) from an adjacent carbonate platform by storms and its gravitational emplacement as debris and grain flows. The provenance of both the reefal carbonate debris and the tuffaceous components redeposited in the carbonates of La Désirade must be to the west, i.e. the carbonate platforms of Marie Galante and Grande Terre.
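The spiral-growth criterion can be made explicit with the standard definition of a logarithmic spiral (added here for illustration; the parameters are not from this study):

$$
r(\theta) = r_0\, e^{k\theta},
$$

where the whorl radius $r$ expands at a constant relative rate $k$ per unit of winding angle $\theta$. A form such as O. desiradensis with overall logarithmic growth keeps $k$ approximately constant, whereas the tighter, more geometric growth of O. cojimarensis and O. chawneri corresponds to a smaller, or decreasing, expansion rate.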
Abstract:
There is almost no case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation.

We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth; however, we can always change that zero for the value of 360°. These are known as "essential zeros", but what can we do with "rounded zeros" that result from values below the detection limit of the equipment?

Amalgamation, e.g. adding Na2O and K2O as total alkalis, is a solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the limit of detection of the equipment used will generate spurious distributions, especially in ternary diagrams. The same situation will occur if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation).

The method that we are proposing takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper values and molybdenum values, but while copper will always be above the limit of detection, many of the molybdenum values will be "rounded zeros". So, we take the lower quartile of the real molybdenum values and establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values.

The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable.

Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
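A minimal sketch of the proposed replacement step, assuming paired arrays of Cu (always detected) and Mo (with NaN marking below-detection values); the lower-quartile training set follows the abstract, while the choice of a log-log regression is my assumption:

```python
import numpy as np

def impute_rounded_zeros(cu: np.ndarray, mo: np.ndarray) -> np.ndarray:
    """Estimate below-detection Mo values (NaN) from correlated Cu values.

    Fits a regression of log(Mo) on log(Cu) using the lower quartile of
    the *detected* Mo values (the ones closest in magnitude to the
    censored data), then predicts each missing Mo from its paired Cu.
    """
    detected = ~np.isnan(mo)
    q1 = np.quantile(mo[detected], 0.25)
    train = detected & (mo <= q1)            # lower quartile of real values
    slope, intercept = np.polyfit(np.log(cu[train]), np.log(mo[train]), 1)
    out = mo.copy()
    out[~detected] = np.exp(intercept + slope * np.log(cu[~detected]))
    return out
```

Because each imputed value is driven by its own Cu value, the replacements vary from sample to sample instead of collapsing onto a single constant, which is the advantage the abstract emphasizes.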
Abstract:
Stone groundwood (SGW) is a fibrous material commonly prepared in a high-yield process and mainly used for papermaking applications. In this work, the use of SGW fibers is explored as a reinforcing element of polypropylene (PP) composites. Due to their chemical and surface features, the use of coupling agents is needed for good adhesion and stress transfer across the fiber-matrix interface. The intrinsic strength of the reinforcement is a key parameter to predict the mechanical properties of the composite and to perform an interface analysis. The main objective of the present work was the determination of the intrinsic tensile strength of stone groundwood fibers. Coupled and non-coupled PP composites from stone groundwood fibers were prepared. The influence of the surface morphology and the quality of the interface on the final properties of the composite was analyzed and compared to that of fiberglass PP composites. The intrinsic tensile properties of stone groundwood fibers, as well as the fiber orientation factor and the interfacial shear strength of the current composites, were determined.
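Analyses of this kind commonly rest on a modified rule of mixtures (a standard model in the field, stated here for illustration rather than quoted from this work), in which the composite tensile strength combines the intrinsic fiber strength with a coupling-and-orientation factor:

$$
\sigma_t^{C} = f_c\,\sigma_t^{F} V^{F} + \big(1 - V^{F}\big)\,\sigma_{t,m}^{*},
$$

where $\sigma_t^{C}$ and $\sigma_t^{F}$ are the composite and intrinsic fiber tensile strengths, $V^{F}$ is the fiber volume fraction, $\sigma_{t,m}^{*}$ is the matrix stress at the failure strain of the composite, and $f_c$ is a coupling factor that lumps together the fiber orientation factor and the interfacial (shear strength) efficiency mentioned above. Knowing $\sigma_t^{F}$ is what allows the interface quality to be isolated from measured composite strengths.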
Abstract:
This paper investigates a simple procedure for robustly estimating the mean of an asymmetric distribution. The procedure removes the observations that are larger or smaller than certain limits and takes the arithmetic mean of the remaining observations, the limits being determined with the help of a parametric model, e.g., the Gamma, the Weibull or the Lognormal distribution. The breakdown point, the influence function, the (asymptotic) variance, and the contamination bias of this estimator are explored and compared numerically with those of competing estimators.
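A minimal sketch of such a procedure, assuming the parametric model is a Gamma distribution fitted by the method of moments and that the cut-off limits are its lower and upper α-quantiles; the α value and the fitting method are my choices, not the paper's:

```python
import numpy as np
from scipy import stats

def parametric_trimmed_mean(x: np.ndarray, alpha: float = 0.05) -> float:
    """Mean of the observations lying inside Gamma-model quantile limits."""
    # Method-of-moments fit of a Gamma model to the (positive) data.
    m, v = x.mean(), x.var()
    shape, scale = m**2 / v, v / m
    # Trimming limits from the fitted model, not from the sample itself.
    lo, hi = stats.gamma.ppf([alpha, 1 - alpha], a=shape, scale=scale)
    kept = x[(x >= lo) & (x <= hi)]
    return kept.mean()

rng = np.random.default_rng(1)
sample = rng.gamma(shape=2.0, scale=3.0, size=500)
print(parametric_trimmed_mean(sample))  # close to the true mean, 6.0
```

Deriving the limits from a fitted asymmetric model, rather than trimming symmetrically, is what lets the estimator stay nearly unbiased for skewed distributions while still discarding outlying contamination.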