907 results for Models and Methods


Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE Munc18c is associated with glucose metabolism and could play a relevant role in obesity. However, little is known about the regulation of Munc18c expression. We analyzed Munc18c gene expression in human visceral (VAT) and subcutaneous (SAT) adipose tissue and its relationship with obesity and insulin. MATERIALS AND METHODS We evaluated 70 subjects distributed into 12 non-obese lean subjects, 23 overweight subjects, 12 obese subjects and 23 nondiabetic morbidly obese patients (11 with low insulin resistance and 12 with high insulin resistance). RESULTS The lean, overweight and obese persons had greater Munc18c gene expression in adipose tissue than the morbidly obese patients (p<0.001). VAT Munc18c gene expression was predicted by the body mass index (B = -0.001, p = 0.009). In SAT, no associations were found by different multiple regression analysis models. SAT Munc18c gene expression was the main determinant of the improvement in the HOMA-IR index 15 days after bariatric surgery (B = -2148.4, p = 0.038). SAT explant cultures showed that insulin produced a significant down-regulation of Munc18c gene expression (p = 0.048). This decrease was also obtained when explants were incubated with a liver X receptor alpha (LXRα) agonist, either without (p = 0.038) or with insulin (p = 0.050). However, Munc18c gene expression was not affected when explants were incubated with insulin plus a sterol regulatory element-binding protein-1c (SREBP-1c) inhibitor (p = 0.504). CONCLUSIONS Munc18c gene expression in human adipose tissue is down-regulated in morbid obesity. Insulin may have an effect on Munc18c expression, probably through LXRα and SREBP-1c.
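A minimal sketch of the kind of multiple regression described in this abstract, relating adipose-tissue gene expression to BMI; the data, coefficients and variable names are invented for illustration and are not the study's.

```python
# Hypothetical multiple linear regression of VAT Munc18c expression on BMI
# (plus one toy covariate); synthetic data, illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 70
df = pd.DataFrame({
    "bmi": rng.normal(35, 10, n),            # kg/m^2
    "homa_ir": rng.lognormal(1.0, 0.5, n),   # insulin-resistance index
})
# expression declines with BMI in this toy example, mirroring the reported B < 0
df["vat_munc18c"] = 0.08 - 0.001 * df["bmi"] + rng.normal(0, 0.01, n)

X = sm.add_constant(df[["bmi", "homa_ir"]])
model = sm.OLS(df["vat_munc18c"], X).fit()
print(model.summary())  # the BMI coefficient plays the role of B in the abstract
```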

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Alcohol consumption contributes to morbidity and mortality in HIV-infected individuals. Here, we aimed to study self-reported alcohol consumption and to determine its association with adherence to antiretroviral therapy (ART) and HIV surrogate markers. METHODS: Cross-sectional data on daily alcohol consumption from August 2005 to August 2007 were analysed and categorized according to the World Health Organization definition (light, moderate or severe health risk). Multivariate logistic regression models and Pearson's chi-square statistics were used to test the influence of alcohol use on endpoints. RESULTS: Of 6,323 individuals, 52.3% consumed alcohol less than once a week in the past 6 months. Alcohol intake was deemed light in 39.9%, moderate in 5.0% and severe in 2.8%. Higher alcohol consumption was significantly associated with older age, less education, injection drug use, being in a drug maintenance programme, psychiatric treatment, hepatitis C virus coinfection and with a longer time since diagnosis of HIV. Lower alcohol consumption was found in males, non-Caucasians, individuals currently on ART and those with more ART experience. In patients on ART (n=4,519), missed doses and alcohol consumption were positively correlated (P<0.001). Severe alcohol consumers who were pretreated with ART were more often off treatment despite having CD4+ T-cell counts <200 cells/μl; however, severe alcohol consumption per se did not delay starting ART. In treated individuals, alcohol consumption was not associated with worse HIV surrogate markers. CONCLUSIONS: Higher alcohol consumption in HIV-infected individuals was associated with several psychosocial and demographic factors, non-adherence to ART and, in pretreated individuals, being off treatment despite low CD4+ T-cell counts.
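For illustration only, a sketch of the type of Pearson chi-square test mentioned in the methods, applied to an invented contingency table of alcohol-consumption category versus missed ART doses; the counts are made up and this is not the study's analysis.

```python
# Pearson chi-square test on a toy contingency table; numbers are invented.
import numpy as np
from scipy.stats import chi2_contingency

# rows: alcohol category (<1/week, light, moderate, severe)
# cols: missed >= 1 ART dose recently (no, yes)
table = np.array([
    [2800, 500],
    [2000, 520],
    [ 240,  80],
    [ 120,  60],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```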

Relevance:

100.00%

Publisher:

Abstract:

Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly grown to account for up to 70% of the trading volume in some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject in which publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics, which are necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in the so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal generated by them passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, through a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck equation and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor but could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtest of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is the reason why we emphasize the calibration of the strategies' parameters to adapt them to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at a high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numeric computations and graphics used and shown in this project were implemented in MATLAB from scratch as a part of this thesis; no other mathematical or statistical software was used.
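The mean-reverting spread at the heart of pairs trading can be made concrete with a short simulation. The sketch below (in Python rather than the MATLAB used in the thesis) simulates an Ornstein-Uhlenbeck spread with an Euler-Maruyama scheme and applies a naive z-score entry rule; all parameters and thresholds are illustrative assumptions.

```python
# Toy Ornstein-Uhlenbeck spread and a naive market-neutral z-score rule;
# parameters, thresholds and P&L are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
theta, mu, sigma = 5.0, 0.0, 0.3          # mean-reversion speed, level, volatility
n = 252 * 390                             # one year of 1-minute steps
dt = 1.0 / n

# Euler-Maruyama simulation of dX_t = theta*(mu - X_t) dt + sigma dW_t
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = x[t - 1] + theta * (mu - x[t - 1]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# trade when the spread's z-score is stretched: short rich, long cheap
z = (x - x.mean()) / x.std()              # naive in-sample standardization
position = np.where(z > 2, -1, np.where(z < -2, 1, 0))
pnl = np.sum(position[:-1] * np.diff(x))
print(f"naive in-sample P&L on the simulated spread: {pnl:.4f}")
```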

Relevance:

100.00%

Publisher:

Abstract:

INTRODUCTION AND AIMS: This study investigated the associations of alcohol outlet density with specific alcohol outcomes (consumption and consequences) among young men in Switzerland and assessed possible geographically related variations. DESIGN AND METHODS: Alcohol consumption and drinking consequences were measured in a 2010-2011 study assessing substance use risk factors (Cohort Study on Substance Use Risk Factors) among 5519 young Swiss men. Outlet density was based on the number of on- and off-premise outlets in the district of residence. Linear regression models were run separately for drinking level, heavy episodic drinking (HED) and drinking consequences. Geographically weighted regression models were estimated when variations were recorded at the district level. RESULTS: No consistent association was found between outlet density and drinking consequences. Positive associations of both drinking level and HED with on-premise outlet density were found. Geographically weighted regressions were run for drinking level and HED. The predicted values for HED were higher in the southwest part of Switzerland (French-speaking part). DISCUSSION AND CONCLUSIONS: Among Swiss young men, the density of outlets and, in particular, the abundance of bars, clubs and other on-premise outlets was associated with drinking level and HED, even when drinking consequences were not significantly affected. These findings support the idea that outlet density needs to be considered when developing and implementing regional-based prevention initiatives. [Astudillo M, Kuendig H, Centeno-Gil A, Wicki M, Gmel G. Regional abundance of on-premise outlets and drinking patterns among Swiss young men: District level analyses and geographic adjustments. Drug Alcohol Rev 2014;33:526-33].
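Geographically weighted regression amounts to refitting a weighted least-squares model at every location, with weights that decay with distance. The sketch below implements that idea from scratch on synthetic district data; the Gaussian kernel, bandwidth and variables are assumptions, not the study's specification.

```python
# Minimal geographically weighted regression (GWR): a separate weighted
# least-squares fit at each location with Gaussian distance weights.
# Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 300
coords = rng.uniform(0, 100, size=(n, 2))      # district centroids (km)
outlet_density = rng.gamma(2.0, 2.0, n)        # on-premise outlets per 1000 inhabitants
beta_true = 0.8 - 0.005 * coords[:, 0]         # spatially varying effect (stronger in the "west")
drinks = 5 + beta_true * outlet_density + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), outlet_density])
bandwidth = 25.0                               # kernel bandwidth in km

def gwr_coefs(i):
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)    # Gaussian kernel weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ drinks)

local_beta = np.array([gwr_coefs(i)[1] for i in range(n)])
print("local slope range:", local_beta.min().round(2), "to", local_beta.max().round(2))
```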

Relevance:

100.00%

Publisher:

Abstract:

AIM: Atomic force microscopy nanoindentation of myofibers was used to assess and quantitatively diagnose muscular dystrophies from human patients. MATERIALS & METHODS: Myofibers were probed from fresh or frozen muscle biopsies from human dystrophic patients and healthy volunteers, as well as mouse models, and Young's modulus stiffness values were determined. RESULTS: Fibers displaying abnormally low mechanical stability were detected in biopsies from patients affected by 11 distinct muscle diseases, and Young's modulus values were commensurate with the severity of the disease. Abnormal myofiber resistance was also observed in consulting patients whose muscle condition could not be detected or unambiguously diagnosed otherwise. DISCUSSION & CONCLUSION: This study provides a proof-of-concept that atomic force microscopy yields a quantitative read-out of human muscle function from clinical biopsies, and that it may thereby complement current muscular dystrophy diagnosis.
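As a rough illustration of how a Young's modulus is extracted from an AFM force-indentation curve, the sketch below fits the standard Hertz contact model for a spherical tip to synthetic data; the tip radius, Poisson ratio and stiffness value are assumed, not taken from the clinical measurements.

```python
# Fit the Hertz model F = (4/3) * E/(1 - nu^2) * sqrt(R) * delta^(3/2)
# to a synthetic force-indentation curve to recover an apparent modulus.
import numpy as np
from scipy.optimize import curve_fit

R = 5e-6       # tip radius (m), assumed
nu = 0.5       # Poisson ratio, incompressible-tissue assumption

def hertz(delta, E):
    return (4.0 / 3.0) * (E / (1.0 - nu**2)) * np.sqrt(R) * delta**1.5

rng = np.random.default_rng(3)
delta = np.linspace(0, 2e-6, 200)                               # indentation depth (m)
force = hertz(delta, 12e3) + rng.normal(0, 2e-10, delta.size)   # ~12 kPa fiber + noise

E_fit, _ = curve_fit(hertz, delta, force, p0=[5e3])
print(f"apparent Young's modulus: {E_fit[0] / 1e3:.1f} kPa")
```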

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE Streptozotocin (STZ) is the most widely used diabetogenic agent in animal models of islet transplantation. However, the immunomodifying effects of STZ and the ensuing hyperglycemia on lymphocyte subsets, particularly on T regulatory cells (Tregs), remain poorly understood. RESEARCH DESIGN AND METHODS This study evaluated how STZ-induced diabetes affects adaptive immunity and the consequences thereof on allograft rejection in murine models of islet and skin transplantation. The respective toxicity of STZ and hyperglycemia on lymphocyte subsets was tested in vitro. The effect of hyperglycemia was assessed independently of STZ in vivo by the removal of transplanted syngeneic islets, using an insulin pump, and with rat insulin promoter diphtheria toxin receptor transgenic mice. RESULTS Early lymphopenia in both blood and spleen was demonstrated after STZ administration. Direct toxicity of STZ on lymphocytes, particularly on CD8(+) cells and B cells, was shown in vitro. Hyperglycemia also correlated with blood and spleen lymphopenia in vivo but was not lymphotoxic in vitro. Independently of hyperglycemia, STZ led to a relative increase of Tregs in vivo, with the latter retaining their suppressive capacity in vitro. The higher frequency of Tregs was associated with Treg proliferation in the blood, but not in the spleen, and higher blood levels of transforming growth factor-β. Finally, STZ administration delayed islet and skin allograft rejection compared with naive mice. CONCLUSIONS These data highlight the direct and indirect immunosuppressive effects of STZ and acute hyperglycemia, respectively. Thus, these results have important implications for the future development of tolerance-based protocols and their translation from the laboratory to the clinic.

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVES: Hypoglycaemia (glucose <2.2 mmol/l) is a defining feature of severe malaria, but the significance of other levels of blood glucose has not previously been studied in children with severe malaria. METHODS: A prospective study of 437 consecutive children with presumed severe malaria was conducted in Mali. We defined hypoglycaemia as <2.2 mmol/l, low glycaemia as 2.2-4.4 mmol/l and hyperglycaemia as >8.3 mmol/l. Associations between glycaemia and case fatality were analysed for 418 children using logistic regression models and a receiver operating characteristic (ROC) curve. RESULTS: There was a significant difference between blood glucose levels in children who died (median 4.6 mmol/l) and survivors (median 7.6 mmol/l, P < 0.001). Case fatality declined from 61.5% of the hypoglycaemic children to 46.2% of those with low glycaemia, 13.4% of those with normal glycaemia and 7.6% of those with hyperglycaemia (P < 0.001). Logistic regression showed an adjusted odds ratio (AOR) of 0.75 (0.64-0.88) for case fatality per 1 mmol/l increase in baseline blood glucose. Compared to a normal blood glucose, hypoglycaemia and low glycaemia both significantly increased the odds of death (AOR 11.87, 2.10-67.00; and 5.21, 1.86-14.63, respectively), whereas hyperglycaemia reduced the odds of death (AOR 0.34, 0.13-0.91). The ROC curve [area under the curve 0.753 (95% CI 0.684-0.820)] indicated that glycaemia had a moderate predictive value for death and identified an optimal threshold at glycaemia <6.1 mmol/l (sensitivity 64.5% and specificity 75.1%). CONCLUSIONS: If there is a threshold of blood glucose which defines a worse prognosis, it is at a higher level than the current definition of 2.2 mmol/l.
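The threshold search described in the results can be illustrated with a ROC curve and the Youden index (J = sensitivity + specificity - 1). The sketch below uses simulated glycaemia values loosely matching the reported medians, not the Mali cohort data.

```python
# ROC-based cut-off search on simulated glycaemia values; illustrative only.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)
glucose_survivors = rng.normal(7.6, 2.5, 360)   # mmol/l, around the reported medians
glucose_deaths = rng.normal(4.6, 2.0, 60)

glucose = np.concatenate([glucose_survivors, glucose_deaths])
died = np.concatenate([np.zeros(360), np.ones(60)])

# lower glycaemia predicts death, so the score is the negated glucose value
fpr, tpr, thresholds = roc_curve(died, -glucose)
auc = roc_auc_score(died, -glucose)
best = np.argmax(tpr - fpr)                     # Youden's J
print(f"AUC = {auc:.3f}, optimal cut-off: glycaemia < {-thresholds[best]:.1f} mmol/l")
```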

Relevance:

100.00%

Publisher:

Abstract:

Predicting which species will occur together in the future, and where, remains one of the greatest challenges in ecology, and requires a sound understanding of how the abiotic and biotic environments interact with dispersal processes and history across scales. Biotic interactions and their dynamics influence species' relationships to climate, and this also has important implications for predicting future distributions of species. It is already well accepted that biotic interactions shape species' spatial distributions at local spatial extents, but the role of these interactions beyond local extents (e.g. 10 km² to global extents) is usually dismissed as unimportant. In this review we consolidate evidence for how biotic interactions shape species distributions beyond local extents and review methods for integrating biotic interactions into species distribution modelling tools. Drawing upon evidence from contemporary and palaeoecological studies of individual species ranges, functional groups, and species richness patterns, we show that biotic interactions have clearly left their mark on species distributions and realised assemblages of species across all spatial extents. We demonstrate this with examples from within and across trophic groups. A range of species distribution modelling tools is available to quantify species environmental relationships and predict species occurrence, such as: (i) integrating pairwise dependencies, (ii) using integrative predictors, and (iii) hybridising species distribution models (SDMs) with dynamic models. These methods have typically only been applied to interacting pairs of species at a single time, require a priori ecological knowledge about which species interact, and due to data paucity must assume that biotic interactions are constant in space and time. To better inform the future development of these models across spatial scales, we call for accelerated collection of spatially and temporally explicit species data. Ideally, these data should be sampled to reflect variation in the underlying environment across large spatial extents, and at fine spatial resolution. Simplified ecosystems where there are relatively few interacting species and sometimes a wealth of existing ecosystem monitoring data (e.g. arctic, alpine or island habitats) offer settings where the development of modelling tools that account for biotic interactions may be less difficult than elsewhere.
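One of the simplest ways to fold a biotic interaction into an SDM, in the spirit of the "integrative predictor" option above, is to add the occurrence of an interacting species as a covariate alongside climate. The toy sketch below does this with synthetic presence-absence data; species, effect sizes and sample sizes are invented.

```python
# Toy SDM with and without a biotic (competitor-presence) predictor;
# synthetic presence-absence data, illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 2000
temperature = rng.normal(8, 4, n)                                   # site mean temperature
competitor = rng.binomial(1, 1 / (1 + np.exp(-(temperature - 10))), n)

# focal species favours warmth but is suppressed where the competitor occurs
logit = -3 + 0.5 * temperature - 2.0 * competitor
present = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_abiotic = temperature[:, None]
X_full = np.column_stack([temperature, competitor])
abiotic_only = LogisticRegression().fit(X_abiotic, present)
with_biotic = LogisticRegression().fit(X_full, present)
print("abiotic-only accuracy :", round(abiotic_only.score(X_abiotic, present), 3))
print("with biotic predictor :", round(with_biotic.score(X_full, present), 3))
```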

Relevance:

100.00%

Publisher:

Abstract:

Various test methods exist for measuring heat of cement hydration; however, most current methods require expensive equipment, complex testing procedures, and/or extensive time, making them unsuitable for field application. The objectives of this research are to identify, develop, and evaluate a standard test procedure for characterization and quality control of pavement concrete mixtures using a calorimetry technique. This research project has three phases. Phase I was designed to identify the user needs, including performance requirements and precision and bias limits, and to synthesize existing test methods for monitoring the heat of hydration, including device types, configurations, test procedures, measurements, advantages, disadvantages, applications, and accuracy. Phase II was designed to conduct experimental work to evaluate the calorimetry equipment recommended from the Phase I study and to develop a standard test procedure for using the equipment and interpreting the test results. Phase II also includes the development of models and computer programs for prediction of concrete pavement performance based on the characteristics of heat evolution curves. Phase III was designed to pursue the development of a much simpler, inexpensive calorimeter for field concrete. In this report, the results from the Phase I study are presented, the plan for the Phase II study is described, and the recommendations for the Phase III study are outlined. Phase I has been completed through three major activities: (1) collecting input and advice from the members of the project Technical Working Group (TWG), (2) conducting a literature survey, and (3) performing trials at the CP Tech Center’s research lab. The research results indicate that in addition to predicting maturity/strength, concrete heat evolution test results can also be used for (1) forecasting concrete setting time, (2) specifying curing period, (3) estimating risk of thermal cracking, (4) assessing pavement sawing/finishing time, (5) characterizing cement features, (6) identifying incompatibility of cementitious materials, (7) verifying concrete mix proportions, and (8) selecting materials and/or mix designs for given environmental conditions. Besides concrete materials and mix proportions, the configuration of the calorimeter device, sample size, mixing procedure, and testing environment (temperature) also have significant influences on features of the concrete heat evolution process. The research team has found that although various calorimeter tests have been conducted for assorted purposes and the potential uses of calorimeter tests are clear, there is no consensus on how to utilize the heat evolution curves to characterize concrete materials and how to effectively relate the characteristics of heat evolution curves to concrete pavement performance. The goal of the Phase II study is to close these gaps.
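As a hedged illustration of how a heat-evolution curve might be summarized, the sketch below integrates a synthetic heat-rate curve to obtain cumulative heat and locates the main hydration peak as a rough setting-time indicator; the curve shape, magnitudes and units are assumptions, not project data.

```python
# Summarize a toy calorimetry heat-rate curve: cumulative heat released
# and the time of the main hydration peak. Synthetic curve, illustrative only.
import numpy as np

t = np.linspace(0, 48, 48 * 60 + 1)          # hours, 1-minute resolution
dt = t[1] - t[0]
# early dissolution peak decaying into a main hydration peak near 10 h (mW/g)
heat_rate = 0.2 * np.exp(-t / 2) + 3.5 * np.exp(-0.5 * ((t - 10) / 3.5) ** 2)

cumulative_heat = heat_rate.sum() * dt * 3.6  # mW*h/g -> J/g
peak_time = t[np.argmax(heat_rate)]
print(f"heat released over 48 h: {cumulative_heat:.0f} J/g")
print(f"main hydration peak (rough set indicator): {peak_time:.1f} h")
```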

Relevance:

100.00%

Publisher:

Abstract:

Background The prognostic potential of individual clinical and molecular parameters in stage II/III colon cancer has been investigated, but a thorough multivariable assessment of their relative impact is missing. Methods Tumors from patients (N = 1404) in the PETACC3 adjuvant chemotherapy trial were examined for BRAF and KRAS mutations, microsatellite instability (MSI), chromosome 18q loss of heterozygosity (18qLOH), and SMAD4 expression. Their importance in predicting relapse-free survival (RFS) and overall survival (OS) was assessed by Kaplan-Meier analyses, Cox regression models, and recursive partitioning trees. All statistical tests were two-sided. Results MSI-high status and SMAD4 focal loss of expression were identified as independent prognostic factors with better RFS (hazard ratio [HR] of recurrence = 0.54, 95% CI = 0.37 to 0.81, P = .003) and OS (HR of death = 0.43, 95% CI = 0.27 to 0.70, P = .001) for MSI-high status and worse RFS (HR = 1.47, 95% CI = 1.19 to 1.81, P < .001) and OS (HR = 1.58, 95% CI = 1.23 to 2.01, P < .001) for SMAD4 loss. 18qLOH did not have any prognostic value in RFS or OS. Recursive partitioning identified refinements of TNM into new clinically interesting prognostic subgroups. Notably, T3N1 tumors with MSI-high status and retained SMAD4 expression had outcomes similar to stage II disease. Conclusions Concomitant assessment of molecular and clinical markers in multivariable analysis is essential to confirm or refute their independent prognostic value. Including molecular markers with independent prognostic value might allow more accurate prediction of prognosis than TNM staging alone.
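A minimal sketch of the kind of multivariable Cox model used for relapse-free survival, with MSI-high and SMAD4-loss indicators as covariates; the data are simulated and the `lifelines` package is assumed to be available, so this is not the PETACC3 analysis.

```python
# Multivariable Cox proportional-hazards model on simulated data with
# MSI-high and SMAD4-loss indicators; requires the `lifelines` package.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 1400
msi_high = rng.binomial(1, 0.12, n)
smad4_loss = rng.binomial(1, 0.25, n)
# toy hazards oriented like the reported hazard ratios (0.54 and 1.47)
rate = 0.05 * np.exp(np.log(0.54) * msi_high + np.log(1.47) * smad4_loss)
time = rng.exponential(1 / rate)
event = (time < 5).astype(int)            # administrative censoring at 5 years
time = np.minimum(time, 5)

df = pd.DataFrame({"time": time, "relapse": event,
                   "msi_high": msi_high, "smad4_loss": smad4_loss})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="relapse")
cph.print_summary(decimals=2)             # hazard ratios = exp(coef)
```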

Relevance:

100.00%

Publisher:

Abstract:

Background Folate deficiency leads to DNA damage and inadequate repair, caused by a decreased synthesis of thymidylate and purines. We analyzed the relationship between dietary folate intake and the risk of several cancers. Patients and methods The study is based on a network of case-control studies conducted in Italy and Switzerland in 1991-2009. The odds ratios (ORs) for dietary folate intake were estimated by multiple logistic regression models, adjusted for major identified confounding factors. Results For a few cancer sites, we found a significant inverse relation, with ORs for an increment of 100 μg/day of dietary folate of 0.65 for oropharyngeal (1467 cases), 0.58 for esophageal (505 cases), 0.83 for colorectal (2390 cases), 0.72 for pancreatic (326 cases), 0.67 for laryngeal (851 cases) and 0.87 for breast (3034 cases) cancers. The risk estimates were below unity, although not significantly, for cancers of the endometrium (OR = 0.87, 454 cases), ovary (OR = 0.86, 1031 cases), prostate (OR = 0.91, 1468 cases) and kidney (OR = 0.88, 767 cases); the estimate was 1.00 for stomach cancer (230 cases). No material heterogeneity was found in strata of sex, age, smoking and alcohol drinking. Conclusions Our data support a real inverse association of dietary folate intake with the risk of several common cancers.
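The odds ratio for a 100 μg/day increment follows directly from a logistic regression coefficient expressed per μg/day, OR = exp(100·β). The sketch below demonstrates this on invented case-control data with one toy confounder; it is not the pooled-network analysis.

```python
# OR per 100 ug/day increment from a logistic regression coefficient;
# synthetic case-control data, illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 4000
folate = rng.normal(300, 80, n)                 # ug/day
smoker = rng.binomial(1, 0.3, n)
# protective folate effect built in: true OR per 100 ug/day = 0.8
logit = -1.0 + np.log(0.8) / 100 * folate + 0.5 * smoker
case = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([folate, smoker]))
fit = sm.Logit(case, X).fit(disp=0)
beta_folate = fit.params[1]
print(f"OR per 100 ug/day: {np.exp(100 * beta_folate):.2f}")
```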

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE To develop a score predicting the risk of adverse events (AEs) in pediatric patients with cancer who experience fever and neutropenia (FN) and to evaluate its performance. PATIENTS AND METHODS Pediatric patients with cancer presenting with FN induced by nonmyeloablative chemotherapy were observed in a prospective multicenter study. A score predicting the risk of future AEs (ie, serious medical complication, microbiologically defined infection, radiologically confirmed pneumonia) was developed from a multivariate mixed logistic regression model. Its cross-validated predictive performance was compared with that of published risk prediction rules. RESULTS An AE was reported in 122 (29%) of 423 FN episodes. In 57 episodes (13%), the first AE was known only after reassessment after 8 to 24 hours of inpatient management. Predicting AE at reassessment was better than prediction at presentation with FN. A differential leukocyte count did not increase the predictive performance. The score predicting future AE in 358 episodes without known AE at reassessment used the following four variables: preceding chemotherapy more intensive than acute lymphoblastic leukemia maintenance (weight = 4), hemoglobin ≥ 90 g/L (weight = 5), leukocyte count less than 0.3 G/L (weight = 3), and platelet count less than 50 G/L (weight = 3). A score (sum of weights) ≥ 9 predicted future AEs. The cross-validated performance of this score exceeded the performance of published risk prediction rules. At an overall sensitivity of 92%, 35% of the episodes were classified as low risk, with a specificity of 45% and a negative predictive value of 93%. CONCLUSION This score, based on four routinely accessible characteristics, accurately identifies pediatric patients with cancer with FN at risk for AEs after reassessment.
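The reassessment score itself can be written down directly from the abstract: the four weights and the ≥ 9 threshold transcribe into a small function. The sketch below does exactly that; the function and argument names are ours, not from the publication.

```python
# Four-variable FN reassessment score as described in the abstract
# (weights 4, 5, 3, 3; total >= 9 flags risk of a future adverse event).
def fn_risk_score(chemo_more_intensive_than_all_maintenance: bool,
                  hemoglobin_g_per_l: float,
                  leukocytes_g_per_l: float,
                  platelets_g_per_l: float) -> int:
    score = 0
    if chemo_more_intensive_than_all_maintenance:
        score += 4
    if hemoglobin_g_per_l >= 90:
        score += 5
    if leukocytes_g_per_l < 0.3:
        score += 3
    if platelets_g_per_l < 50:
        score += 3
    return score

score = fn_risk_score(True, 95, 0.2, 40)
print(score, "-> high risk of future AE" if score >= 9 else "-> low risk")
```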

Relevance:

100.00%

Publisher:

Abstract:

This chapter highlights the problems that structural methods and SVAR approaches have when estimating DSGE models and examining their ability to capture important features of the data. We show that structural methods are subject to severe identification problems due, in large part, to the nature of DSGE models. The problems can be patched up in a number of ways but solved only if DSGEs are completely reparametrized or respecified. The potential misspecification of the structural relationships gives Bayesian methods an edge over classical ones in structural estimation. SVAR approaches may face invertibility problems, but simple diagnostics can help to detect and remedy these problems. A pragmatic empirical approach ought to use the flexibility of SVARs against potential misspecification of the structural relationships but must firmly tie SVARs to the class of DSGE models which could have generated the data.

Relevance:

100.00%

Publisher:

Abstract:

When dealing with the design of service networks, such as health and EMS services, banking or distributed ticket-selling services, the location of service centers has a strong influence on the congestion at each of them, and consequently, on the quality of service. In this paper, several models are presented to consider service congestion. The first model addresses the issue of locating the least number of single-server centers such that all the population is served within a standard distance, and nobody stands in line for a time longer than a given time limit, or with more than a predetermined number of other clients. We then formulate several maximal coverage models, with one or more servers per service center. A new heuristic is developed to solve the models and tested on a 30-node network.
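The core of the first model, stripped of its congestion constraints, is a set-covering location problem: open the fewest single-server centers so that every population node lies within the standard distance of an open center. The sketch below states that reduced version with the `pulp` package on random coordinates; the waiting-time and queue-length constraints of the paper are deliberately omitted, so this is only a starting point, not the paper's formulation.

```python
# Set-covering location core: minimize the number of open centers subject
# to every node being within the standard distance S of an open center.
# Toy data; requires the `pulp` package (CBC solver bundled with it).
import numpy as np
import pulp

rng = np.random.default_rng(2)
n = 30                                           # candidate sites = demand nodes
xy = rng.uniform(0, 100, size=(n, 2))
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
S = 30.0                                         # standard distance

y = [pulp.LpVariable(f"open_{j}", cat="Binary") for j in range(n)]
prob = pulp.LpProblem("set_covering_location", pulp.LpMinimize)
prob += pulp.lpSum(y)                            # objective: number of open centers
for i in range(n):                               # coverage constraint for each node
    prob += pulp.lpSum(y[j] for j in range(n) if dist[i, j] <= S) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("centers opened:", int(pulp.value(prob.objective)))
```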

Relevance:

100.00%

Publisher:

Abstract:

This paper illustrates the philosophy which forms the basis of calibration exercises in general equilibrium macroeconomic models and the details of the procedure, the advantages and the disadvantages of the approach, with particular reference to the issue of testing "false" economic models. We provide an overview of the most recent simulation-based approaches to the testing problem and compare them to standard econometric methods used to test the fit of non-linear dynamic general equilibrium models. We illustrate how simulation-based techniques can be used to formally evaluate the fit of a calibrated model to the data and obtain ideas on how to improve the model design, using a standard problem in the international real business cycle literature, i.e. whether a model with complete financial markets and no restrictions on capital mobility is able to reproduce the second-order properties of aggregate saving and aggregate investment in an open economy.
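A hedged sketch of the simulation-based logic described here: compute second-order properties (standard deviations and the saving-investment correlation) from many model simulations and compare them with the same moments in the data. Both the "data" and the "model" series below are synthetic placeholders rather than output of an actual calibrated RBC model.

```python
# Compare second-order moments of simulated model output with data moments;
# all series are synthetic placeholders, illustrative only.
import numpy as np

rng = np.random.default_rng(4)
T, n_sims = 160, 500                     # quarters, Monte Carlo replications

def moments(saving, investment):
    return saving.std(), investment.std(), np.corrcoef(saving, investment)[0, 1]

# "data" moments (placeholder series standing in for detrended observations)
data_s, data_i = rng.normal(0, 1.8, T), rng.normal(0, 2.4, T)
data_m = moments(data_s, data_i)

# "model" moments across simulated replications
sim_m = np.array([moments(rng.normal(0, 1.5, T), rng.normal(0, 2.0, T))
                  for _ in range(n_sims)])
print("data   (sd_s, sd_i, corr):", np.round(data_m, 2))
print("model mean across sims   :", np.round(sim_m.mean(axis=0), 2))
# simple simulation-based check: how often the model's correlation exceeds the data's
print("P(sim corr >= data corr) :", round(float((sim_m[:, 2] >= data_m[2]).mean()), 2))
```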