Abstract:
Objective: to assess the agreement between different anthropometric markers in defining obesity and its effect on the prevalence of obesity. Methods: population-based cross-sectional study including 3,213 women and 2,912 men aged 35-75 years. Body fat percentage (%BF) was assessed using electric bioimpedance. Obesity was defined using established cut-points for body mass index (BMI) and waist circumference, and three population-defined cut-points for %BF. Between-criteria agreement was assessed by the kappa statistic. Results: in men, agreement between the %BF cut-points was significantly higher (kappa values in the range 0.78 - 0.86) than their agreement with BMI or waist (0.47 - 0.62), whereas no such differences were found in women (0.41 - 0.69). In both genders, the prevalence of obesity varied considerably according to the criterion used: 17% and 24% according to BMI and waist, respectively, in men, and 14% and 31% in women. For %BF, the prevalence varied between 14% and 17% in men and between 19% and 36% in women according to the cut-point used. In the older age groups, a fourfold difference in the prevalence of obesity was found between criteria. Among subjects fulfilling at least one criterion for obesity (increased BMI, waist or %BF), only one third fulfilled all three criteria and one quarter fulfilled two. Fewer than half of the women and 64% of the men were jointly classified as obese by the three population-defined cut-points for %BF. Conclusions: the different anthropometric criteria used to define obesity agree relatively poorly with each other, leading to considerable differences in the prevalence of obesity in the general population.
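The between-criteria agreement reported above is Cohen's kappa, which compares observed agreement with the agreement expected by chance. A minimal sketch of the computation for two binary obesity classifications; the subject data here are hypothetical, not the study's:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary classifications given as parallel 0/1 lists."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pa1 = sum(a) / n                                        # rate of 1s in rater a
    pb1 = sum(b) / n                                        # rate of 1s in rater b
    p_exp = pa1 * pb1 + (1 - pa1) * (1 - pb1)               # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical example: obesity flags by BMI vs. %BF for ten subjects
bmi = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
bf  = [1, 0, 0, 1, 0, 0, 0, 0, 1, 1]
print(round(cohens_kappa(bmi, bf), 3))
```

A kappa near 0.5, as here, corresponds to the "moderate" agreement range the abstract reports between %BF and BMI/waist cut-points in men.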
Abstract:
Recent genome-wide association (GWA) studies described 95 loci controlling serum lipid levels. These common variants explain ∼25% of the heritability of the phenotypes. To date, no unbiased screen for gene-environment interactions for circulating lipids has been reported. We screened for variants that modify the relationship between known epidemiological risk factors and circulating lipid levels in a meta-analysis of GWA data from 18 population-based cohorts of European ancestry (maximum N = 32,225). We collected 8 further cohorts (N = 17,102) for replication, and rs6448771 on 4p15 demonstrated a genome-wide significant interaction with waist-to-hip ratio (WHR) on total cholesterol (TC), with a combined P-value of 4.79×10⁻⁹. There are two potential candidate genes in the region, PCDH7 and CCKAR, with differential expression levels across rs6448771 genotypes in adipose tissue. The effect of WHR on TC was strongest for individuals carrying two copies of the G allele, for whom a one standard deviation (sd) difference in WHR corresponds to a 0.19 sd difference in TC concentration, while for A-allele homozygotes the difference was 0.12 sd. Our findings may open up possibilities for targeted intervention strategies for people characterized by specific genomic profiles. However, more refined measures of both body-fat distribution and metabolic traits are needed to understand how their joint dynamics are modified by the newly found locus.
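The genotype-dependent slopes quoted above (0.19 sd for GG, 0.12 sd for AA homozygotes) are what a linear model with a WHR × allele-dosage interaction term produces. A sketch of that arithmetic, with hypothetical coefficients chosen only to reproduce the two reported slopes:

```python
def genotype_slope(beta_whr, beta_int, g_copies):
    """Per-sd effect of WHR on TC for a genotype carrying g_copies of the G
    allele, under a linear model with an allele-dosage interaction term:
    TC = b0 + beta_whr*WHR + beta_g*g + beta_int*(WHR*g)."""
    return beta_whr + beta_int * g_copies

# Hypothetical coefficients: slope 0.12 for AA (0 copies), +0.035 per G allele
beta_whr = 0.12
beta_int = 0.035
for g in (0, 1, 2):
    print(g, round(genotype_slope(beta_whr, beta_int, g), 3))
```

The heterozygote slope implied by this additive-dosage assumption (0.155) is not reported in the abstract; it is shown only to illustrate the model form.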
Abstract:
Stop-loss reinsurance is one of the most important reinsurance contracts in the insurance market. From the insurer's point of view, it has an attractive property: it is optimal under the criterion of minimizing the variance of the insurer's cost. The aim of this paper is to contribute to the analysis of the one-period stop-loss contract from the points of view of both the insurer and the reinsurer. First, the influence of the parameters of the reinsurance contract on the correlation coefficient between the cost of the insurer and the cost of the reinsurer is studied. Second, the optimal stop-loss contract is obtained when the criterion is the maximization of the joint survival probability of the insurer and the reinsurer over one period.
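The variance-minimizing property mentioned above can be illustrated numerically: under a stop-loss contract with retention M, the insurer's cost is min(X, M), and truncating the loss at a lower retention shrinks its variance. A simulation sketch with hypothetical exponential aggregate losses, not the paper's model:

```python
import random

def split_loss(x, retention):
    """Stop-loss contract with retention M: the insurer pays min(x, M),
    the reinsurer pays the excess above M."""
    insurer = min(x, retention)
    return insurer, x - insurer

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

# Hypothetical aggregate losses: exponential with mean 100
random.seed(1)
losses = [random.expovariate(1 / 100.0) for _ in range(10_000)]

# Variance of the insurer's retained cost falls as the retention decreases
vars_by_M = {M: variance([split_loss(x, M)[0] for x in losses])
             for M in (300, 150, 50)}
for M, v in vars_by_M.items():
    print(M, round(v, 1))
```

The reinsurer's cost, by contrast, becomes more variable as M falls, which is why the paper studies the correlation between the two costs and their joint survival.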
Abstract:
Synthesis report. This thesis consists of three essays on optimal dividend strategies; each essay corresponds to one chapter. The first two essays were written in collaboration with Professors Hans Ulrich Gerber and Elias S. W. Shiu and have been published; see Gerber et al. (2006b) and Gerber et al. (2008). The third essay was written in collaboration with Professor Hans Ulrich Gerber. The problem of optimal dividend strategies goes back to de Finetti (1957). It is stated as follows: given the surplus of a company, determine the optimal strategy for distributing dividends. The criterion used is to maximize the sum of the discounted dividends paid to shareholders until the ruin of the company. Since de Finetti (1957), the problem has taken several forms and has been solved for different models. In the classical model of ruin theory, the problem was solved by Gerber (1969) and, more recently, using a different approach, by Azcue and Muler (2005) and Schmidli (2008). In the classical model, there is a continuous and constant inflow of money, while the outflows are random: they follow a jump process, namely a compound Poisson process. An example that fits such a model well is the surplus of an insurance company, whose inflows and outflows are the premiums and the claims, respectively. The first graph of Figure 1 illustrates an example. In this thesis, only barrier strategies are considered: when the surplus exceeds the barrier level b, the excess is distributed to shareholders as dividends. The second graph of Figure 1 shows the same surplus example when a barrier at level b is introduced, and the third graph of that figure shows the cumulative dividends.
Chapter 1: "Maximizing dividends without bankruptcy". In this first essay, optimal barriers are computed for different claim-amount distributions according to two criteria: I) the optimal barrier is computed using the usual criterion, which is to maximize the expected discounted dividends until ruin; II) the optimal barrier is computed using a second criterion, which is to maximize the expectation of the difference between the discounted dividends until ruin and the deficit at ruin. This essay is inspired by Dickson and Waters (2004), whose idea is to make the shareholders bear the deficit at ruin. This is all the more relevant for an insurance company, whose ruin must be avoided. In the example of Figure 1, the deficit at ruin is denoted R. Numerical examples allow us to compare the optimal barrier levels in situations I and II. This idea of adding a penalty at ruin was generalized in Gerber et al. (2006a). Chapter 2: "Methods for estimating the optimal dividend barrier and the probability of ruin". In this second essay, since in practice one never has complete information about the claim-amount distribution, it is assumed that only its first moments are known. The essay develops and examines methods for approximating, in this situation, the optimal barrier level under the usual criterion (case I above). The "de Vylder" and "diffusion" approximations are explained and examined; some of these approximations use the first two, three or four moments. Numerical examples allow us to compare the approximations of the optimal barrier level, not only with the exact values but also with each other.
Chapter 3: "Optimal dividends with incomplete information". This third and final essay again considers approximation methods for the optimal barrier level when only the first moments of the jump-amount distribution are known. This time, the dual model is considered. As in the classical model, there is a continuous flow in one direction and a jump process in the other. In contrast to the classical model, the gains follow a compound Poisson process while the losses are constant and continuous; see Figure 2. Such a model would suit a pension fund or a company specializing in discoveries or inventions. The "de Vylder" and "diffusion" approximations, as well as the new "gamma" and "gamma process" approximations, are explained and analyzed. The new approximations appear to give better results in some cases.
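The barrier strategy in the classical model can be sketched by Monte Carlo simulation: surplus grows at the premium rate, jumps down at compound-Poisson claims, and everything above the barrier b is paid out as discounted dividends until ruin. All parameter values below are hypothetical illustrations, not results from the thesis:

```python
import math
import random

def discounted_dividends(u=10.0, b=20.0, c=2.0, lam=1.0, mean_claim=1.5,
                         delta=0.03, horizon=200.0, seed=0):
    """One simulated path of the classical risk model with a dividend barrier
    at b: initial surplus u, premium rate c, claim frequency lam, exponential
    claim sizes with the given mean, force of interest delta.
    Returns the total discounted dividends paid before ruin (or the horizon)."""
    rng = random.Random(seed)
    t, surplus, divs = 0.0, u, 0.0
    while t < horizon:
        wait = rng.expovariate(lam)            # time until the next claim
        reach = max(0.0, (b - surplus) / c)    # time needed to reach the barrier
        if wait > reach:
            # at the barrier, dividends flow continuously at rate c;
            # discounted value of that stream between t+reach and t+wait
            divs += (c / delta) * (math.exp(-delta * (t + reach))
                                   - math.exp(-delta * (t + wait)))
            surplus = b
        else:
            surplus += c * wait
        t += wait
        surplus -= rng.expovariate(1.0 / mean_claim)   # claim amount
        if surplus < 0:
            break                              # ruin: dividends stop
    return divs

# Monte Carlo estimate of the expected discounted dividends for barrier b = 20
estimate = sum(discounted_dividends(seed=s) for s in range(200)) / 200
print(round(estimate, 2))
```

Repeating the estimate over a grid of barrier levels b and picking the maximizer is the brute-force counterpart of the exact and approximate optimal barriers studied in the three chapters.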
Abstract:
The objective of this work was to identify polymorphic simple sequence repeat (SSR) markers for varietal identification of cotton and to evaluate the genetic distance among varieties. Initially, 92 SSR markers were genotyped in 20 Brazilian cotton cultivars. Of this total, 38 loci were polymorphic, two of which were amplified by a single primer pair; the mean number of alleles per locus was 2.2. The values of polymorphic information content (PIC) and discrimination power (DP) were, on average, 0.374 and 0.433, respectively. The mean genetic distance was 0.397 (minimum of 0.092 and maximum of 0.641). A panel of 96 varieties originating from different regions of the world was assessed with 21 polymorphic loci derived from 17 selected primer pairs. Among these varieties, the mean genetic distance was 0.387 (minimum of 0 and maximum of 0.786). The dendrograms generated by the unweighted pair group method with arithmetic mean (UPGMA) did not reflect the regions of Brazil (20 genotypes) or of the world (96 genotypes) where the varieties or lines were selected. Bootstrap resampling showed that genotype identification is viable with 19 loci. The polymorphic markers evaluated are useful for varietal identification in a large panel of cotton varieties and may be applied in studies of the species' diversity.
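The PIC values reported above follow, in the usual formulation (Botstein et al.), from the allele frequencies observed at each locus. A minimal sketch for a hypothetical biallelic SSR locus:

```python
def pic(allele_freqs):
    """Polymorphic information content, Botstein et al. formulation:
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    s1 = sum(p * p for p in allele_freqs)
    s2 = sum(2 * (allele_freqs[i] ** 2) * (allele_freqs[j] ** 2)
             for i in range(len(allele_freqs))
             for j in range(i + 1, len(allele_freqs)))
    return 1 - s1 - s2

# Hypothetical biallelic locus with allele frequencies 0.6 / 0.4
print(round(pic([0.6, 0.4]), 3))
```

A biallelic locus with frequencies near 0.6/0.4 yields a PIC close to the study's reported average of 0.374, consistent with the mean of 2.2 alleles per locus.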
Abstract:
This Master's thesis studied and compared the chemimechanical pulping and bleaching of eucalyptus, acacia and birch. These wood species are normally used for chemical pulping. They differ in growth region and fibre structure: eucalyptus and acacia are so-called tropical hardwoods, whereas birch grows in northern zones. Birch has the largest fibres and acacia the smallest. Their vessels also differ: birch vessels are long and narrow, while eucalyptus and acacia vessels are short and wide. A two-stage APMP process was chosen, and the trial runs were carried out at Keskuslaboratorio Oy. The targets set for the pulps were a freeness of 150-200 ml and a brightness of 80% ISO. Two different impregnation series were made for eucalyptus and birch, but only one for acacia. In the last refining stage, refiner bleaching was also tested. The energy consumption of refining was high, especially for eucalyptus and acacia. Reducing the energy consumption would require more alkali, which, however, leads to alkali darkening. Finally, the pulps were bleached in the laboratory. Eucalyptus and birch could be bleached to a brightness of 80% ISO, but bleaching eucalyptus required more peroxide than bleaching birch. The initial brightness of acacia was so low that the target brightness could not be reached. Eucalyptus has better light scattering and better strength properties than birch. Chemimechanical pulp can be used in fine paper to improve stiffness, bulk and light scattering, but its low brightness and poor brightness stability are often a problem. Chemimechanical pulp can also be used in any mechanical printing paper, where chemimechanical hardwood pulp can replace mechanical softwood pulp. Acacia is so dark that it cannot be used in high-brightness papers.
Eucalyptus and birch are brighter and easier to bleach than acacia, but their brightness stability is also so poor that their use in fine papers is limited. For mechanical eucalyptus and birch pulps, a better end use than fine paper is mechanical printing papers such as MWC paper.
Abstract:
Even though patients who develop ischemic stroke despite taking antiplatelet drugs represent a considerable proportion of stroke hospital admissions, there is a paucity of data from investigational studies regarding the most suitable therapeutic intervention. No clinical trials have tested whether increasing the dose or switching antiplatelet agents reduces the risk of subsequent events. Certain issues have to be considered in patients managed for a first or recurrent stroke while receiving antiplatelet agents. Therapeutic failure may be due to poor adherence to treatment, associated co-morbid conditions, or diminished antiplatelet effects (resistance to treatment). A diagnostic work-up is warranted to identify the etiology and underlying mechanism of the stroke, thereby guiding further management. Risk factors (including hypertension, dyslipidemia and diabetes) should be treated according to current guidelines. Aspirin or aspirin plus clopidogrel may be used in the acute and early phase of ischemic stroke, whereas in the long term antiplatelet treatment should be continued with aspirin, aspirin/extended-release dipyridamole or clopidogrel monotherapy, taking into account tolerance, safety, adherence and cost. Secondary measures should also be implemented: educating patients about stroke and the importance of adherence to medication, and behavioral modification relating to tobacco use, physical activity, alcohol consumption and diet to control excess weight.
Abstract:
Objective: to assess the diagnostic accuracy of different anthropometric markers in defining low aerobic fitness among adolescents. Methods: cross-sectional study of 2,331 boys and 2,366 girls aged 10-18 years. Body mass index (BMI) was measured using standardized methods; body fat (BF) was assessed by bioelectrical impedance. Low aerobic fitness was assessed by the 20-meter shuttle run using the FITNESSGRAM® criteria. Waist circumference was measured in a subsample of 1,933 boys and 1,897 girls. Overweight, obesity and excess fat were defined according to the International Obesity Task Force (IOTF) or FITNESSGRAM® criteria. Results: 38.5% of boys and 46.5% of girls were considered unfit according to the FITNESSGRAM® criteria. In boys, the area under the ROC curve (AUC) and 95% confidence interval were 66.7 (64.1 - 69.3), 67.1 (64.5 - 69.6) and 64.6 (61.9 - 67.2) for BMI, BF and waist, respectively (P<0.02). In girls, the values were 68.3 (65.9 - 70.8), 63.8 (61.3 - 66.3) and 65.9 (63.4 - 68.4), respectively (P<0.001). In boys, the sensitivity and specificity for diagnosing low fitness were 13% and 99% for obesity (IOTF); 38% and 86% for overweight + obesity (IOTF); 28% and 94% for obesity (FITNESSGRAM®); and 42% and 81% for excess fat (FITNESSGRAM®). For girls, the values were 9% and 99% for obesity (IOTF); 33% and 82% for overweight + obesity (IOTF); 22% and 94% for obesity (FITNESSGRAM®); and 26% and 90% for excess fat (FITNESSGRAM®). Conclusions: BMI, rather than body fat or waist, should be used to screen for low aerobic fitness. The IOTF BMI cut-points for obesity have a very low screening capacity and should not be used.
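The sensitivity and specificity figures above count, respectively, true positives among the unfit and true negatives among the fit. A minimal sketch of the two rates over hypothetical subject flags:

```python
def sens_spec(test_positive, condition_positive):
    """Sensitivity and specificity of a binary screening marker against a
    reference condition (here: low aerobic fitness), given parallel 0/1 lists."""
    pairs = list(zip(test_positive, condition_positive))
    tp = sum(1 for t, c in pairs if t and c)            # flagged and unfit
    fn = sum(1 for t, c in pairs if not t and c)        # missed unfit
    tn = sum(1 for t, c in pairs if not t and not c)    # correctly passed
    fp = sum(1 for t, c in pairs if t and not c)        # flagged but fit
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: obesity flag vs. observed low fitness in eight subjects
obese = [1, 0, 0, 1, 0, 0, 1, 0]
unfit = [1, 1, 0, 1, 0, 0, 0, 1]
print(sens_spec(obese, unfit))
```

The study's pattern (e.g. 13% sensitivity at 99% specificity for IOTF obesity in boys) reflects the usual trade-off: a stricter cut-point flags few subjects, so it rarely raises false alarms but misses most of the unfit.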
Abstract:
An attractive treatment of cancer consists in inducing tumor-eradicating CD8(+) CTL specific for tumor-associated Ags, such as NY-ESO-1 (ESO), a strongly immunogenic cancer germ line gene-encoded tumor-associated Ag widely expressed on diverse tumors. To establish optimal priming of ESO-specific CTL and to define critical vaccine variables and mechanisms, we used HLA-A2/DR1 H-2(-/-) transgenic mice and sequential immunization with immunodominant DR1- and A2-restricted ESO peptides. Immunization of mice first with the DR1-restricted ESO(123-137) peptide and subsequently with mature dendritic cells (DCs) presenting this and the A2-restricted ESO(157-165) epitope generated abundant, circulating, high-avidity primary and memory CD8(+) T cells that efficiently killed A2/ESO(157-165)(+) tumor cells. This prime-boost regimen was superior to other vaccine regimens, required strong Th1 cell responses and copresentation of MHC class I and MHC class II peptides by the same DC, and resulted in upregulation of sphingosine 1-phosphate receptor 1 and thus egress of freshly primed CD8(+) T cells from the draining lymph nodes into circulation. This well-defined system allowed detailed mechanistic analysis, which revealed that 1) the Th1 cytokines IFN-gamma and IL-2 played key roles in CTL priming, namely by upregulating the chemokine receptor CCR5 on naive CD8(+) T cells; 2) the inflammatory chemokines CCL4 (MIP-1beta) and CCL3 (MIP-1alpha) chemoattracted primed CD4(+) T cells to mature DCs and activated naive CD8(+) T cells to DC-CD4 conjugates, respectively; and 3) blockade of these chemokines or their common receptor CCR5 ablated priming of CD8(+) T cells and upregulation of sphingosine 1-phosphate receptor 1. These findings provide new opportunities for improving T cell cancer vaccines.
Abstract:
In this thesis, membrane filtration of paper machine clear filtrate was studied. The aim of the study was to find membrane processes able to produce, economically, water of sufficient purity from paper machine white water or its save-all clarified fractions for reuse in the paper machine short circulation. Factors affecting membrane fouling in this application were also studied. The thesis gives an overview of experiments done on laboratory and pilot scale with several different membranes and membrane modules. The results were judged by the obtained flux, the fouling tendency and the permeate quality assessed with various chemical analyses. It was shown that membrane modules which used a turbulence promoter of some kind gave the highest fluxes. However, the results showed that the greater the reduction in the concentration polarisation layer caused by increased turbulence in the module, the smaller the reductions in the measured substances. Of the micro-, ultra- and nanofiltration membranes tested, only nanofiltration membranes produced permeate whose quality was very close to that of the chemically treated raw water used as fresh water in most paper mills today, and which should thus be well suited for reuse as shower water in both the wire and press sections. It was also shown that a one-stage nanofiltration process was more effective than processes in which micro- or ultrafiltration was used as pretreatment for nanofiltration. It was generally observed that acidic pH, high organic matter content, the presence of multivalent ions, hydrophobic membrane material and a high membrane cut-off increased the fouling tendency of the membranes.
Abstract:
Enhanced Recovery After Surgery (ERAS) is a multimodal, standardized, evidence-based perioperative care pathway. With ERAS, postoperative complications are significantly reduced and, as a secondary effect, length of hospital stay and healthcare costs are lowered. The patient recovers better and faster, which in addition reduces the workload of healthcare providers. Although hospital discharge occurs sooner, there is no increased burden on outpatient care. ERAS can be safely applied to any patient through a tailored approach. The general practitioner plays an essential role in ERAS by ensuring the continuity of information and the follow-up of the patient.
Abstract:
In the present research we set forth a new, simple trade-off model that allows us to calculate how much debt and, by extension, how much equity a company should have, using easily available information and calculating the cost of debt dynamically on the basis of the effect that the company's capital structure has on its risk of bankruptcy. The proposed model was applied to the companies that made up the Dow Jones Industrial Average (DJIA) in 2007, using consolidated financial data from 1996 to 2006 published by Bloomberg. We used the simplex optimization method to find the debt level that maximizes firm value, and then compared the estimated debt with the companies' real debt using the nonparametric Mann-Whitney test. The results indicate that 63% of the companies show no statistically significant difference between real and estimated debt.
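The search for a value-maximizing debt level can be sketched with a toy trade-off valuation: levered value = unlevered value + tax shield on debt - expected cost of financial distress, maximized over a grid of debt levels as a simple stand-in for the simplex search in the text. Every number below is a hypothetical illustration, not the paper's calibration:

```python
def firm_value(debt, v_unlevered=1000.0, tax_rate=0.35, distress_cost=400.0):
    """Toy trade-off valuation: levered value = unlevered value
    + tax shield on debt - P(bankruptcy) * cost of financial distress.
    The bankruptcy probability here rises quadratically with leverage."""
    p_bankruptcy = (debt / 2000.0) ** 2
    return v_unlevered + tax_rate * debt - p_bankruptcy * distress_cost

# Grid search for the debt level that maximizes firm value
best = max(range(0, 2001, 50), key=firm_value)
print(best, round(firm_value(best), 2))
```

The interior optimum arises because the tax shield grows linearly in debt while the expected distress cost grows faster, which is the core mechanism of any trade-off model of capital structure.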
Abstract:
Psychophysical studies suggest that humans preferentially use a narrow band of low spatial frequencies for face recognition. Here we asked whether artificial face recognition systems show their best recognition performance at the same spatial frequencies as humans. To this end, we estimated recognition performance over a large database of face images by computing three discriminability measures: Fisher Linear Discriminant Analysis, Non-Parametric Discriminant Analysis, and Mutual Information. To address frequency dependence, discriminabilities were measured as a function of (filtered) image size. All three measures revealed a maximum at the same image sizes, where the spatial frequency content corresponds to the psychophysically determined frequencies. Our results therefore support the notion that the critical band of spatial frequencies for face recognition in humans and machines follows from inherent properties of face images, and that the use of these frequencies is associated with optimal face recognition performance.
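In one dimension, the Fisher discriminability used above reduces to the squared distance between class means over the pooled within-class variance: a feature (here, a filtered-image response) discriminates well when the two identities are far apart relative to their internal spread. A minimal sketch with hypothetical feature values:

```python
def fisher_ratio(class_a, class_b):
    """One-dimensional Fisher discriminability: squared between-class mean
    distance over the pooled within-class variance, for two value lists."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    ma, mb = mean(class_a), mean(class_b)
    within = (var(class_a) + var(class_b)) / 2
    return (ma - mb) ** 2 / within

# Hypothetical responses of one spatial-frequency channel for two face identities
print(round(fisher_ratio([2.0, 2.2, 1.9, 2.1], [3.0, 3.1, 2.9, 3.2]), 2))
```

Evaluating such a ratio per frequency band (or per filtered image size, as in the study) and locating its maximum is the essence of the frequency-dependence analysis described above.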