915 results for MEAN-CURVATURE
Abstract:
"See the abstract at the beginning of the document in the attached file."
Abstract:
BACKGROUND: Cost-effective means of assessing the levels of risk factors in the population have to be defined in order to monitor these factors over time and across populations. This study is aimed at analyzing the difference in population estimates of the mean levels of body mass index (BMI) and the prevalence of overweight between a health examination survey and a telephone survey. METHODS: The study compares the results of two health surveys, one by telephone (N=820) and the other by physical examination (N=1318). The two surveys, based on independent random samples of the population, were carried out over the same period (1992-1993) in the same population (canton of Vaud, Switzerland). RESULTS: Overall participation rates were 67% and 53% for the health interview survey (HIS) and the health examination survey (HES), respectively. In the HIS, the reporting rate was over 98% for weight and height values. Self-reported weight was on average lower than measured weight, by 2.2 kg in men and 3.5 kg in women, while self-reported height was on average greater than measured height, by 1.2 cm in men and 1.9 cm in women. As a result, in comparison to the HES, the HIS led to substantially lower mean levels of BMI, and to a reduction of the prevalence of obesity (BMI>30 kg/m(2)) by more than half. These differences were larger for women than for men. CONCLUSION: The two surveys were based on different sampling procedures. However, this difference in design is unlikely to explain the systematic bias observed between self-reported and measured values for height and weight. This bias compromises the overall validity of BMI assessment from telephone surveys.
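The size of the reporting bias can be checked numerically. The sketch below applies the abstract's mean biases for men (weight under-reported by 2.2 kg, height over-reported by 1.2 cm) to a hypothetical subject; the 80 kg / 1.75 m figures are illustrative, not from the study:

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

# Hypothetical man: measured weight 80 kg, measured height 1.75 m.
measured = bmi(80.0, 1.75)

# Apply the abstract's mean biases for men: weight under-reported
# by 2.2 kg, height over-reported by 1.2 cm.
self_reported = bmi(80.0 - 2.2, 1.75 + 0.012)
```

For this subject the self-reported BMI comes out roughly one unit lower than the measured BMI, which is the direction and magnitude of shift that halves the estimated obesity prevalence in the abstract.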
Abstract:
The workshop was attended by 13 people excluding facilitators. Most were from outside QUB (including Belfast City Council, NHSSB, BHSCT, Centre for Public Health, NICR, Institute of Agri-food and Land Use (QUB), etc.). The programme was:
Introductions
Part 1: What's "knowledge brokerage" all about? Presentation and Q&A (Kevin Balanda), followed by small group discussions
Part 2: What the Centre of Excellence is doing. Presentation and Q&A (Kevin Balanda), followed by small group discussions
Abstract:
Moderate alcohol consumption has been associated with lower coronary artery disease (CAD) risk. However, data on the CAD risk associated with high alcohol consumption are conflicting. The aim of this study was to examine the impact of heavier drinking on 10-year CAD risk in a population with high mean alcohol consumption. In a population-based study of 5,769 adults (aged 35 to 75 years) without cardiovascular disease in Switzerland, 1-week alcohol consumption was categorized as 0, 1 to 6, 7 to 13, 14 to 20, 21 to 27, 28 to 34, and ≥35 drinks/week, or as nondrinkers (0 drinks/week), moderate (1 to 13 drinks/week), high (14 to 34 drinks/week), and very high (≥35 drinks/week) drinkers. Blood pressure and lipids were measured, and 10-year CAD risk was calculated according to the Framingham risk score. Seventy-three percent (n = 4,214) of the participants consumed alcohol; 16% (n = 909) were high drinkers and 2% (n = 119) very high drinkers. In multivariate analysis, increasing alcohol consumption was associated with higher high-density lipoprotein cholesterol (from a mean ± SE of 1.57 ± 0.01 mmol/L in nondrinkers to 1.88 ± 0.03 mmol/L in very high drinkers), triglycerides (1.17 ± 1.01 to 1.32 ± 1.05 mmol/L), and systolic and diastolic blood pressure (127.4 ± 0.4 to 132.2 ± 1.4 mm Hg and 78.7 ± 0.3 to 81.7 ± 0.9 mm Hg, respectively) (all p values for trend <0.001). Ten-year CAD risk increased from 4.31 ± 0.10% to 4.90 ± 0.37% (p = 0.03) with alcohol use, with a J-shaped relation. Increasing wine consumption was more related to high-density lipoprotein cholesterol levels, whereas beer and spirits were related to increased triglyceride levels. In conclusion, as measured by 10-year CAD risk, the protective effect of alcohol consumption disappears in very high drinkers, because the beneficial increase in high-density lipoprotein cholesterol is offset by the increase in blood pressure levels.
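The consumption categories used in the study can be expressed as a simple classifier; this is a sketch of the cut-offs quoted in the abstract, not code from the study:

```python
def drinking_category(drinks_per_week):
    """Categories from the abstract: nondrinker (0), moderate (1-13),
    high (14-34), and very high (>=35 drinks/week)."""
    if drinks_per_week == 0:
        return "nondrinker"
    if drinks_per_week <= 13:
        return "moderate"
    if drinks_per_week <= 34:
        return "high"
    return "very high"
```

For example, `drinking_category(20)` falls in the "high" band, the group that made up 16% of the cohort.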
Abstract:
PURPOSE: To document the neurological outcome, spinal alignment and segmental range of movement after oblique cervical corpectomy (OCC) for cervical compressive myelopathy. METHODS: This retrospective study included 109 patients (93 with cervical spondylotic myelopathy and 16 with ossified posterior longitudinal ligament) in whom spinal curvature and range of segmental movements were assessed on neutral and dynamic cervical radiographs. Neurological function was measured by Nurick's grade and modified Japanese Orthopedic Association (JOA) scores. Eighty-eight patients (81%) underwent either a single- or two-level corpectomy; the remaining 21 (19%) underwent three- or four-level corpectomies. The average duration of follow-up was 30.52 months. RESULTS: The Nurick's grade and the JOA scores showed statistically significant improvements after surgery (p < 0.001). The mean postoperative segmental angle in the neutral position straightened by 4.7 ± 6.5°. The residual segmental range of movement was 16.7° for a single-level corpectomy (59.7% of the preoperative value), 20.0° for a two-level corpectomy (67.2%) and 22.9° for three-level corpectomies (74.3%). Sixty-three percent of patients with lordotic spines continued to have lordosis postoperatively, while only one became kyphotic, without clinical worsening. Four patients with preoperative kyphotic spines showed no change in spine curvature. None developed spinal instability. CONCLUSIONS: The OCC preserves segmental motion in the short term; however, the tendency towards straightening of the spine, albeit without clinical worsening, warrants serial follow-up imaging to determine whether this motion preservation is long lasting.
Abstract:
A Guide for Staff
Abstract:
This paper conducts an empirical analysis of the relationship between wage inequality, employment structure, and returns to education in urban areas of Mexico during the past two decades (1987-2008). Applying Melly’s (2005) quantile regression based decomposition, we find that changes in wage inequality have been driven mainly by variations in educational wage premia. Additionally, we find that changes in employment structure, including occupation and firm size, have played a vital role. This evidence seems to suggest that the changes in wage inequality in urban Mexico cannot be interpreted in terms of a skill-biased change, but rather they are the result of an increasing demand for skills during that period.
Abstract:
Objective: To compare pressure–volume (P–V) curves obtained with the Galileo ventilator with those obtained with the CPAP method in patients with ALI or ARDS receiving mechanical ventilation. P–V curves were fitted to a sigmoidal equation with a mean R² of 0.994 ± 0.003. The lower inflection point (LIP), upper inflection point (UIP), and point of maximum curvature on deflation (PMC) calculated from the fitted variables showed good correlation between methods, with high intraclass correlation coefficients. Bias and limits of agreement for LIP, UIP and PMC obtained with the two methods in the same patient were clinically acceptable.
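The abstract does not name the sigmoidal equation; a common choice for P–V curves is the four-parameter Venegas model, sketched below under that assumption. The parameter values are illustrative, and the 1.317·d offset used for the maximum-curvature points is a property of this particular sigmoid:

```python
import math

def venegas_volume(p, a, b, c, d):
    """Four-parameter sigmoid V(P) = a + b / (1 + exp(-(P - c) / d)).
    a: lower volume asymptote, b: volume span, c: pressure at the true
    inflection, d: width of the transition (all hypothetical units)."""
    return a + b / (1.0 + math.exp(-(p - c) / d))

def characteristic_pressures(c, d):
    """Points of maximal curvature of the sigmoid, at c -/+ 1.317*d,
    often used as estimates of the lower and upper inflection points."""
    return c - 1.317 * d, c + 1.317 * d

# Illustrative fitted parameters (not from the study):
lip, uip = characteristic_pressures(c=15.0, d=4.0)
```

Reading LIP and UIP off the fitted parameters, rather than off the raw curve, is what makes the comparison between the two measurement methods robust to noise in individual data points.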
Abstract:
Kriging is an interpolation technique whose optimality criteria are based on normality assumptions either for observed or for transformed data. This is the case of normal, lognormal and multigaussian kriging. When kriging is applied to transformed scores, optimality of the obtained estimators becomes a cumbersome concept: back-transformed optimal interpolations in transformed scores are not optimal in the original sample space, and vice versa. This lack of compatible criteria of optimality induces a variety of problems in both point and block estimates. For instance, lognormal kriging, widely used to interpolate positive variables, has no straightforward way to build consistent and optimal confidence intervals for estimates. These problems are ultimately linked to the assumed space structure of the data support: for instance, positive values, when modelled with lognormal distributions, are assumed to be embedded in the whole real space, with the usual real space structure and Lebesgue measure.
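The incompatibility described above can be illustrated numerically: the back-transform of an unbiased estimate of the log-mean underestimates the mean in the original space, which is why lognormal estimation needs a bias correction such as the exp(σ²/2) factor. A minimal sketch with simulated data (not kriging itself, just the back-transformation step):

```python
import math
import random

random.seed(0)
mu, sigma = 0.0, 1.0

# Simulated lognormal observations.
samples = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

# Unbiased estimate of the mean in log space, naively back-transformed:
log_mean = sum(math.log(x) for x in samples) / len(samples)
naive = math.exp(log_mean)                    # estimates the median, not the mean

# Classical lognormal bias correction:
corrected = naive * math.exp(sigma ** 2 / 2)

# True mean of a lognormal(mu, sigma) distribution:
true_mean = math.exp(mu + sigma ** 2 / 2)
```

With σ = 1 the naive back-transform sits near 1.0 while the true mean is about 1.65, so the "optimal" estimator in log space is biased low by roughly 40% in the original space.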
Abstract:
The present study examines the Five-Factor Model (FFM) of personality and locus of control in French-speaking samples in Burkina Faso (N = 470) and Switzerland (Ns = 1,090, 361), using the Revised NEO Personality Inventory (NEO-PI-R) and Levenson's Internality, Powerful others, and Chance (IPC) scales. Alpha reliabilities were consistently lower in Burkina Faso, but the factor structure of the NEO-PI-R was replicated in both cultures. The intended three-factor structure of the IPC could not be replicated, although a two-factor solution was replicable across the two samples. Although scalar equivalence has not been demonstrated, mean level comparisons showed the hypothesized effects for most of the five factors and locus of control; Burkinabè scored higher in Neuroticism than anticipated. Findings from this African sample generally replicate earlier results from Asian and Western cultures, and are consistent with a biologically-based theory of personality.
Abstract:
Most people hold beliefs about the personality characteristics of typical members of their own and others' cultures. These perceptions of national character may be generalizations from personal experience, stereotypes with a "kernel of truth", or inaccurate stereotypes. We obtained national character ratings of 3,989 people from 49 cultures and compared them with the average personality scores of culture members assessed by observer ratings and self-reports. National character ratings were reliable but did not converge with assessed traits. Perceptions of national character thus appear to be unfounded stereotypes that may serve the function of maintaining a national identity.
Abstract:
There is hardly a case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth; however, we can always replace that zero with the value 360°. These are known as "essential zeros", but what can we do with "rounded zeros" that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment will generate spurious distributions, especially in ternary diagrams. The same will occur if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method that we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper values and molybdenum values, but while copper will always be above the limit of detection, many of the molybdenum values will be "rounded zeros".
So, we take the lower quartile of the real molybdenum values and establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable.
Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
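A minimal sketch of the proposed regression imputation, with hypothetical Cu/Mo assay values; for brevity it regresses on all detected molybdenum values rather than only their lower quartile, as the abstract specifies:

```python
# Hypothetical Cu and Mo assays (ppm); None marks Mo below detection.
cu = [1200.0, 950.0, 1800.0, 700.0, 1500.0, 400.0, 2100.0, 300.0]
mo = [30.0, 24.0, 45.0, None, 38.0, None, 52.0, None]

# Ordinary least-squares fit Mo = intercept + slope * Cu on detected pairs.
pairs = [(c, m) for c, m in zip(cu, mo) if m is not None]
n = len(pairs)
mean_c = sum(c for c, _ in pairs) / n
mean_m = sum(m for _, m in pairs) / n
slope = (sum((c - mean_c) * (m - mean_m) for c, m in pairs)
         / sum((c - mean_c) ** 2 for c, _ in pairs))
intercept = mean_m - slope * mean_c

# Replace each rounded zero by its regression estimate from copper.
imputed = [m if m is not None else intercept + slope * c
           for c, m in zip(cu, mo)]
```

As the abstract emphasizes, each rounded zero receives its own value driven by the paired copper assay, rather than one fixed replacement constant.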
Abstract:
Stone groundwood (SGW) is a fibrous matter commonly prepared in a high-yield process, and mainly used for papermaking applications. In this work, the use of SGW fibers is explored as a reinforcing element of polypropylene (PP) composites. Due to their chemical and surface features, the use of coupling agents is needed for good adhesion and stress transfer across the fiber-matrix interface. The intrinsic strength of the reinforcement is a key parameter to predict the mechanical properties of the composite and to perform an interface analysis. The main objective of the present work was the determination of the intrinsic tensile strength of stone groundwood fibers. Coupled and non-coupled PP composites from stone groundwood fibers were prepared. The influence of the surface morphology and of the quality of the interface on the final properties of the composite was analyzed and compared to that of fiberglass PP composites. The intrinsic tensile properties of stone groundwood fibers, as well as the fiber orientation factor and the interfacial shear strength of the current composites, were determined.
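The abstract does not state the micromechanical model used; one common framework that connects intrinsic fiber strength, fiber orientation and interface quality is a modified rule of mixtures, sketched here with purely illustrative numbers:

```python
def composite_strength(sigma_fiber, sigma_matrix_at_break, v_fiber, coupling_factor):
    """Modified rule of mixtures for a short-fiber composite:
        sigma_c = fc * sigma_f * Vf + sigma_m* * (1 - Vf)
    where fc lumps together fiber orientation and length efficiency,
    sigma_f is the intrinsic fiber strength, and sigma_m* is the matrix
    stress at composite failure. All inputs here are hypothetical."""
    return coupling_factor * sigma_fiber * v_fiber \
        + sigma_matrix_at_break * (1.0 - v_fiber)

# Illustrative values (MPa and volume fraction), not results from the paper:
sigma_c = composite_strength(sigma_fiber=550.0, sigma_matrix_at_break=25.0,
                             v_fiber=0.2, coupling_factor=0.2)
```

Inverting this relation, measured composite strength plus an estimate of the orientation factor lets one back out the intrinsic fiber strength, which is the kind of interface analysis the abstract describes.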
Abstract:
In the static field limit, the vibrational hyperpolarizability consists of two contributions due to: (1) the shift in the equilibrium geometry (known as nuclear relaxation), and (2) the change in the shape of the potential energy surface (known as curvature). Simple finite field methods have previously been developed for evaluating these static field contributions and also for determining the effect of nuclear relaxation on dynamic vibrational hyperpolarizabilities in the infinite frequency approximation. In this paper the finite field approach is extended to include, within the infinite frequency approximation, the effect of curvature on the major dynamic nonlinear optical processes.
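A finite field calculation of this kind reduces to numerical differentiation of the energy with respect to the applied field. The sketch below shows the idea for the simplest case, a central-difference estimate of the static polarizability on a toy quadratic energy function (all values hypothetical):

```python
def polarizability_finite_field(energy, field_step):
    """Central-difference estimate of the static polarizability
    alpha = -d2E/dF2 at F = 0, from energies computed at -F, 0, +F."""
    e_minus = energy(-field_step)
    e_zero = energy(0.0)
    e_plus = energy(field_step)
    return -(e_plus - 2.0 * e_zero + e_minus) / field_step ** 2

# Toy model energy E(F) = E0 - 0.5 * alpha * F^2 with alpha = 10
# (hypothetical units); a real application would call an electronic
# structure code at each field strength.
alpha = polarizability_finite_field(lambda f: 1.0 - 0.5 * 10.0 * f * f, 1e-3)
```

Hyperpolarizabilities follow the same pattern with higher-order difference formulas, and the field step must balance truncation error against numerical noise in the computed energies.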