29 results for cross likelihood ratio


Relevance: 80.00%

Abstract:

BACKGROUND AND OBJECTIVE: Given the role of uncoupling protein 2 (UCP2) in the accumulation of fat in the hepatocytes and in the enhancement of protective mechanisms in acute ethanol intake, we hypothesised that UCP2 polymorphisms are likely to cause liver disease through their interactions with obesity and alcohol intake. To test this hypothesis, we investigated the interaction between tagging polymorphisms in the UCP2 gene (rs2306819, rs599277 and rs659366), alcohol intake and obesity traits such as BMI and waist circumference (WC) on alanine aminotransferase (ALT) and gamma-glutamyl transferase (GGT) in a large meta-analysis of data sets from three populations (n=20 242). DESIGN AND METHODS: The study populations included the Northern Finland Birth Cohort 1966 (n=4996), Netherlands Study of Depression and Anxiety (n=1883) and LifeLines Cohort Study (n=13 363). Interactions between the polymorphisms and obesity and alcohol intake on dichotomised ALT and GGT levels were assessed using logistic regression and the likelihood ratio test. RESULTS: In the meta-analysis of the three cohorts, none of the three UCP2 polymorphisms was associated with GGT or ALT levels. There was no evidence for interaction between the polymorphisms and alcohol intake on GGT and ALT levels. In contrast, the association of WC and BMI with GGT levels varied by rs659366 genotype (P for interaction = 0.03 and 0.007, respectively; adjusted for age, gender, high alcohol intake, diabetes, hypertension and serum lipid concentrations). CONCLUSION: Our findings in 20 242 individuals suggest that UCP2 gene polymorphisms may cause liver dysfunction through interaction with body fat rather than alcohol intake.
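A gene-by-adiposity interaction of this kind is typically tested by comparing nested logistic models with a likelihood ratio test. The sketch below does this on simulated data with a plain-numpy IRLS fitter; the genotype coding, effect sizes and sample size are illustrative assumptions, not the study's data or pipeline:

```python
import numpy as np

def fit_logistic(X, y, n_iter=50):
    """Fit a logistic regression by iteratively reweighted least squares.
    Returns (coefficients, maximised log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p) + 1e-9              # working weights
        z = X @ beta + (y - p) / W          # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return beta, ll

rng = np.random.default_rng(0)
n = 2000
snp = rng.integers(0, 3, n).astype(float)   # genotype coded 0/1/2 (additive; assumed)
wc = rng.normal(90, 12, n)                  # waist circumference in cm (assumed)
logit = -1.0 + 0.02 * (wc - 90) + 0.015 * snp * (wc - 90)   # built-in interaction
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

ones = np.ones(n)
X0 = np.column_stack([ones, snp, wc])               # main effects only
X1 = np.column_stack([ones, snp, wc, snp * wc])     # plus interaction term
_, ll0 = fit_logistic(X0, y)
_, ll1 = fit_logistic(X1, y)
lrt = 2 * (ll1 - ll0)                       # likelihood ratio statistic
print(f"LRT statistic = {lrt:.2f} (compare to chi-square, 1 df)")
```

The statistic is referred to a chi-square distribution with one degree of freedom, matching the single interaction parameter being tested.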

Relevance: 80.00%

Abstract:

Phylogenetic comparative methods are increasingly used to give new insights into the dynamics of trait evolution in deep time. For continuous traits the core of these methods is a suite of models that attempt to capture evolutionary patterns by extending the Brownian constant-variance model. However, the properties of these models are often poorly understood, which can lead to the misinterpretation of results. Here we focus on one of these models: the Ornstein–Uhlenbeck (OU) model. We show that the OU model is frequently incorrectly favoured over simpler models when using likelihood ratio tests, and that many studies fitting this model use datasets that are small and prone to this problem. We also show that very small amounts of error in datasets can have profound effects on the inferences derived from OU models. Our results suggest that simulating fitted models and comparing them with empirical results is critical when fitting OU and other extensions of the Brownian model. We conclude by making recommendations for best practice in fitting OU models in phylogenetic comparative analyses, and for interpreting the parameters of the OU model.
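The central claim, that likelihood ratio tests often favour OU over Brownian motion on small datasets, can be illustrated outside a phylogenetic setting with a one-dimensional time series, where a discretised OU process is an AR(1) model that nests the Brownian walk. Everything below (series length, parameter grid, the conditional likelihood) is an illustrative assumption, not the authors' phylogenetic machinery:

```python
import numpy as np

def bm_loglik(x):
    """Log-likelihood of a driftless Brownian walk: i.i.d. N(0, s2) increments,
    with s2 profiled out by maximum likelihood."""
    d = np.diff(x)
    s2 = np.mean(d ** 2)
    return -0.5 * len(d) * (np.log(2 * np.pi * s2) + 1.0)

def ou_loglik(x, theta):
    """Conditional log-likelihood of a discretised OU process at rate theta;
    the optimum mu and step variance are profiled out by maximum likelihood."""
    a = np.exp(-theta)
    mu = np.mean(x[1:] - a * x[:-1]) / (1.0 - a)
    r = x[1:] - mu - a * (x[:-1] - mu)      # one-step residuals
    s2 = np.mean(r ** 2)
    return -0.5 * len(r) * (np.log(2 * np.pi * s2) + 1.0)

rng = np.random.default_rng(1)
n_sim, n_steps = 200, 50
thetas = np.concatenate([[1e-6], np.logspace(-3, 1, 40)])  # grid incl. near-BM limit
lrt = np.empty(n_sim)
for i in range(n_sim):
    x = np.cumsum(rng.normal(size=n_steps))   # true model: pure Brownian motion
    lrt[i] = 2 * (max(ou_loglik(x, t) for t in thetas) - bm_loglik(x))
# Under the null the LRT is at best ~chi-square(2); the fraction exceeding the
# 5% critical value 5.99 shows how often OU would be favoured by chance.
print(f"mean LRT = {lrt.mean():.2f}, fraction > 5.99 = {(lrt > 5.99).mean():.2f}")
```

Because the OU fit profiles two extra parameters, its likelihood never falls below the Brownian one; the question is how often the improvement crosses the nominal chi-square threshold by chance.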

Relevance: 30.00%

Abstract:

A collection of 24 seawaters from various worldwide locations and differing depths was assembled to measure their chlorine isotopic composition (delta(37)Cl). These samples cover all the oceans and large seas: Atlantic, Pacific, Indian and Antarctic oceans, Mediterranean and Red seas. This collection includes nine seawaters from three depth profiles down to 4560 mbsl. The standard deviation (2sigma) of the delta(37)Cl of this collection is +/-0.08 parts per thousand, which is in fact as large as our precision of measurement (+/-0.10 parts per thousand). Thus, within error, oceanic waters seem to be a homogeneous reservoir. According to our results, any seawater could be representative of Standard Mean Ocean Chloride (SMOC) and could be used as a reference standard. An extended international cross-calibration over a large range of delta(37)Cl has been completed. For this purpose, geological fluid samples of various chemical compositions and a manufactured CH3Cl gas sample, with delta(37)Cl from about -6 parts per thousand to +6 parts per thousand, have been compared. Data were collected by gas-source isotope ratio mass spectrometry (IRMS) at the Paris, Reading and Utrecht laboratories and by thermal ionization mass spectrometry (TIMS) at the Leeds laboratory. Comparison of IRMS values over the range -5.3 parts per thousand to +1.4 parts per thousand plots on the Y=X line, showing very good agreement between the three laboratories. For 11 samples, the trend line between the Paris and Reading laboratories is: delta(37)Cl(Reading) = (1.007 +/- 0.009)delta(37)Cl(Paris) - (0.040 +/- 0.025), with a correlation coefficient R-2 = 0.999. TIMS values from Leeds University have been compared to IRMS values from Paris University over the range -3.0 parts per thousand to +6.0 parts per thousand.
For six samples, the agreement between these two laboratories, using different techniques, is good: delta(37)Cl(Leeds) = (1.052 +/- 0.038)delta(37)Cl(Paris) + (0.058 +/- 0.099), with a correlation coefficient R-2 = 0.995. The present study completes a previous cross-calibration between the Leeds and Reading laboratories comparing TIMS and IRMS results (Anal. Chem. 72 (2000) 2261). Both studies allow a comparison of the IRMS and TIMS techniques for delta(37)Cl values from -4.4 parts per thousand to +6.0 parts per thousand and show good agreement: delta(37)Cl(TIMS) = (1.039 +/- 0.023)delta(37)Cl(IRMS) + (0.059 +/- 0.056), with a correlation coefficient R-2 = 0.996. Our study shows that, for fluid samples, if chlorine isotopic compositions are near 0 parts per thousand, their measurement either by IRMS or TIMS will give comparable results within less than +/-0.10 parts per thousand, while for delta(37)Cl values as far as 10 parts per thousand (either positive or negative) from SMOC, both techniques will agree within less than +/-0.30 parts per thousand. (C) 2004 Elsevier B.V. All rights reserved.
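Inter-laboratory trend lines of the form delta(37)Cl(lab A) = b * delta(37)Cl(lab B) + a, with uncertainties on slope and intercept, come from ordinary least squares. A minimal sketch, using made-up delta(37)Cl pairs rather than the published measurements:

```python
import numpy as np

def calib_line(x, y):
    """Ordinary least-squares calibration line y = b*x + a, with standard
    errors on slope and intercept, as used for inter-laboratory comparison."""
    n = len(x)
    X = np.column_stack([x, np.ones(n)])
    (b, a), *_ = np.linalg.lstsq(X, y, rcond=None)
    yhat = b * x + a
    s2 = np.sum((y - yhat) ** 2) / (n - 2)            # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)                 # parameter covariance
    r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
    return b, a, np.sqrt(cov[0, 0]), np.sqrt(cov[1, 1]), r2

# Illustrative delta(37)Cl pairs in permil (not the published measurements):
x = np.array([-5.3, -3.0, -1.2, -0.4, 0.0, 0.6, 1.4, 3.1, 6.0])
y = 1.01 * x - 0.04 + np.random.default_rng(2).normal(0, 0.05, x.size)
b, a, sb, sa, r2 = calib_line(x, y)
print(f"slope = {b:.3f} +/- {sb:.3f}, intercept = {a:.3f} +/- {sa:.3f}, R2 = {r2:.3f}")
```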

Relevance: 30.00%

Abstract:

It has been generally accepted that the method of moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram, because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data because it is based on generalized increments that filter trend out and only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. There remain issues to examine: how many fewer data can be used, how should the sampling sites be distributed over the site of interest, and how do different degrees of spatial variation affect the data requirements? The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites.
A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites. (C) 2007 Elsevier B.V. All rights reserved.
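The method-of-moments (Matheron) estimator underlying the MoM variograms is straightforward to sketch. The site layout and the clay-like field below are simulated stand-ins, not the study's data:

```python
import numpy as np

def mom_variogram(coords, z, bin_edges):
    """Method-of-moments (Matheron) variogram estimator:
    gamma(h) = 1 / (2 N(h)) * sum over pairs at lag h of (z_i - z_j)^2."""
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)          # count each pair once
    d, sq = d[iu], sq[iu]
    gamma, counts = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        m = (d >= lo) & (d < hi)
        counts.append(m.sum())
        gamma.append(sq[m].mean() if m.any() else np.nan)
    return np.array(gamma), np.array(counts)

rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, (60, 2))          # ~60 sites, as in a small survey
z = np.sin(coords[:, 0] / 15) + rng.normal(0, 0.2, 60)   # clay-like spatial field
gamma, counts = mom_variogram(coords, z, np.arange(0, 60, 10))
print(gamma.round(3), counts)
```

With around 60 sites the pair counts per lag bin are already modest, which is why the MoM estimator becomes unreliable for small surveys.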

Relevance: 30.00%

Abstract:

We present an analysis of trace gas correlations in the lowermost stratosphere. In-situ aircraft measurements of CO, N2O, NOy and O3, obtained during the STREAM 1997 winter campaign, have been used to investigate the role of cross-tropopause mass exchange on tracer-tracer relations. At altitudes several kilometers above the local tropopause, undisturbed stratospheric air was found with NOy/NOy* ratios close to unity, NOy/O3 about 0.003-0.006 and CO mixing ratios as low as 20 ppbv (NOy* is a proxy for total reactive nitrogen derived from NOy-N2O relations measured in the stratosphere). Mixing of tropospheric air into the lowermost stratosphere has been identified by enhanced ratios of NOy/NOy* and NOy/O3, and from scatter plots of CO versus O3. The enhanced NOy/O3 ratio in the lowermost stratospheric mixing zone points to a reduced efficiency of O3 formation from aircraft NOx emissions.
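A common way to quantify the tropospheric admixture such tracer ratios reveal is a two-endmember mixing calculation on a quasi-conserved tracer such as CO. The endmember mixing ratios below are illustrative assumptions, not campaign values:

```python
def tropospheric_fraction(co_obs, co_strat=20.0, co_trop=100.0):
    """Two-endmember mixing: fraction of tropospheric air inferred from an
    observed CO mixing ratio (ppbv), treating CO as approximately conserved
    on mixing timescales. Endmember values are illustrative assumptions."""
    return (co_obs - co_strat) / (co_trop - co_strat)

# An observed 40 ppbv between the 20 ppbv stratospheric and 100 ppbv
# tropospheric endmembers implies a 25% tropospheric contribution.
print(f"{tropospheric_fraction(40.0):.2f}")
```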

Relevance: 30.00%

Abstract:

Objectives We examined the characteristics and CHD risks of people who accessed the free Healthy Heart Assessment (HHA) service operated by a large UK pharmacy chain from August 2004 to April 2006. Methods Associations between participants’ gender, age, and socioeconomic category were explored in relation to calculated 10-year CHD risks by cross-tabulation of the data. Specific associations were tested by forming contingency tables and using the Pearson chi-square (χ2) test. Results Data from 8,287 records were analysable; 5,377 participants were at low and 2,910 at moderate-to-high CHD risk. The likelihood of moderate-to-high risk was significantly higher for male than for female participants, with a relative risk ratio (RRR) of 1.72 (P < 0.001). A higher percentage of those in the socioeconomic categories ‘constrained by circumstances’ (RRR 1.15; P < 0.05) and ‘blue collar communities’ (RRR 1.13; P < 0.05) were assessed with moderate-to-high risk compared to those in ‘prospering suburbs’. Conclusions People from ‘hard-to-reach’ sectors of the population, men and people from less advantaged communities, accessed the HHA service and were more likely to return moderate-to-high CHD risk. Pharmacists prioritised provision of lifestyle information above the sale of a product. Our study supports the notion that pharmacies can serve as suitable environments for the delivery of similar screening services.
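The Pearson chi-square test on a 2x2 table, together with the risk ratio it accompanies, can be written out directly. The counts below are hypothetical, chosen only to mimic the shape of the HHA data, not the published table:

```python
import numpy as np
from scipy.stats import chi2

def chi_square_2x2(table):
    """Pearson chi-square test of association for a 2x2 contingency table."""
    table = np.asarray(table, float)
    row, col, n = table.sum(1), table.sum(0), table.sum()
    expected = np.outer(row, col) / n
    stat = ((table - expected) ** 2 / expected).sum()
    return stat, chi2.sf(stat, df=1)

def relative_risk(table):
    """Risk ratio: row-1 risk over row-2 risk for the outcome in column 1."""
    (a, b), (c, d) = np.asarray(table, float)
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts (rows = male/female, cols = moderate-to-high/low risk):
table = [[1600, 2000], [1310, 3377]]
stat, p = chi_square_2x2(table)
print(f"chi2 = {stat:.1f}, p = {p:.2g}, RR = {relative_risk(table):.2f}")
```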

Relevance: 30.00%

Abstract:

Pollutant plumes with enhanced concentrations of trace gases and aerosols were observed over the southern coast of West Africa during August 2006 as part of the AMMA wet season field campaign. Plumes were observed in both the mid and upper troposphere. In this study we examined the origin of these pollutant plumes, and their potential to photochemically produce ozone (O3) downwind over the Atlantic Ocean. Their possible contribution to the Atlantic O3 maximum is also discussed. Runs of the BOLAM mesoscale model including biomass burning carbon monoxide (CO) tracers were used to confirm an origin from central African biomass burning fires. The plumes measured in the mid troposphere (MT) had significantly higher pollutant concentrations over West Africa than the upper tropospheric (UT) plume. The mesoscale model reproduces these differences and the two different pathways for the plumes at different altitudes: transport to the north-east of the fire region, moist convective uplift and transport to West Africa for the upper tropospheric plume, versus north-west transport over the Gulf of Guinea for the mid-tropospheric plume. Lower concentrations in the upper troposphere are mainly due to enhanced mixing during upward transport. Model simulations suggest that the MT and UT plumes are 16 and 14 days old, respectively, when measured over West Africa. The ratio of tracer concentrations at 600 hPa and 250 hPa was estimated for 14–15 August in the region of the observed plumes and compares well with the same ratio derived from observed carbon dioxide (CO2) enhancements in both plumes. It is estimated that, for the period 1–15 August, the ratio of Biomass Burning (BB) tracer concentrations transported in the UT to those transported in the MT is 0.6 over West Africa and the equatorial South Atlantic.
Runs of a photochemical trajectory model, CiTTyCAT, initialized with the observations, were used to estimate in-situ net photochemical O3 production rates in these plumes during transport downwind of West Africa. The mid-troposphere plume spreads over altitudes between 1.5 and 6 km over the Atlantic Ocean. Even though the plume was old, it was still very photochemically active above 3 km (mean net O3 production rates over 10 days of 2.6 ppbv/day and up to 7 ppbv/day during the first days), especially during the first few days of transport westward. It is also shown that the impact of high aerosol loads in the MT plume on photolysis rates serves to delay the peak in modelled O3 concentrations. These results suggest that a significant fraction of the enhanced O3 in the mid-troposphere over the Atlantic comes from BB sources during the summer monsoon period. According to the simulated occurrence of such transport, BB may be the main source of O3 enhancement in the equatorial South Atlantic MT, at least in August 2006. The upper tropospheric plume was also still photochemically active, although mean net O3 production rates were slower (1.3 ppbv/day). The results suggest that, whilst the transport of BB pollutants to the UT is variable (as shown by the mesoscale model simulations), pollution from biomass burning can make an important contribution to photochemical production of O3, alongside other important sources such as nitrogen oxides (NOx) from lightning.

Relevance: 30.00%

Abstract:

A retrospective cross-sectional study was conducted on 200 randomly selected smallholder farms from a mixed dairy farming system in Tanga, Tanzania, between January and April 1999. We estimated the frequency and determinants of long calving interval (LCI), retention of fetal membranes (RFM), dystocia, and abortion in smallholder crossbred cattle and explored birth trends. The mean calving interval was 500 days and the birth rate was 65 per 100 cow-years. Dystocia was reported to affect 58% of calvings, and 17.2% of animals suffered RFM. Using mixed-effects models, the variables associated with LCI, RFM and dystocia were breed, level of exotic blood and condition score. Zebu breeding was associated with LCI (odds ratio (OR) = 2.3, p = 0.041) and Friesian breeding with lower odds for RFM (OR = 0.26, p = 0.020). Animals with higher levels of exotic blood had lower odds for evidence of dystocia (OR = 0.45, p = 0.021). Evidence of dystocia was significantly associated with poor condition score (beta = -1.10, p = 0.001). Our observations suggest that LCIs are common in smallholder dairy farms in this region and a likely source of economic loss. Dystocia, RFM, poor condition score and mineral deficiency were common problems and were possibly linked to LCI.

Relevance: 30.00%

Abstract:

We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently the 90% confidence interval for the differences in the means on the natural log scale should be within the interval (-0.2231, 0.2231). We compare the gold standard method for calculation of the sample size based on the non-central t distribution with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared to the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd.
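The comparison described above, power calculation via the noncentral t versus the normal approximation, can be sketched as follows. This uses the common Bonferroni-style approximation to the joint rejection probability of the two one-sided tests (the exact gold-standard calculation uses Owen's Q); the CV and sample size are illustrative assumptions:

```python
import numpy as np
from scipy.stats import nct, norm, t as t_dist

def tost_power_nct(n, cv, theta=0.2231, alpha=0.05):
    """Power of the two one-sided tests for a 2x2 cross-over with true ratio 1,
    via the noncentral t (Bonferroni-style approximation to joint rejection)."""
    sigma_w = np.sqrt(np.log(1.0 + cv ** 2))   # within-subject SD on log scale
    se = sigma_w * np.sqrt(2.0 / n)            # SE of the treatment difference
    df = n - 2
    tcrit = t_dist.ppf(1 - alpha, df)
    power = 1.0 - 2.0 * nct.cdf(tcrit, df, theta / se)
    return max(0.0, power)

def tost_power_normal(n, cv, theta=0.2231, alpha=0.05):
    """The same calculation with the normal approximation to the t."""
    sigma_w = np.sqrt(np.log(1.0 + cv ** 2))
    se = sigma_w * np.sqrt(2.0 / n)
    return max(0.0, 2.0 * norm.cdf(theta / se - norm.ppf(1 - alpha)) - 1.0)

n, cv = 24, 0.25   # 24 subjects, 25% within-subject CV (illustrative values)
print(f"noncentral t: {tost_power_nct(n, cv):.3f}, "
      f"normal approx: {tost_power_normal(n, cv):.3f}")
```

For moderate sample sizes the two routes agree closely, which matches the observation that the approximations can be adequate but should be checked against the noncentral-t result.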

Relevance: 30.00%

Abstract:

OBJECTIVES: This contribution provides a unifying concept for meta-analysis integrating the handling of unobserved heterogeneity, study covariates, publication bias and study quality. It is important to consider these issues simultaneously to avoid the occurrence of artifacts, and a method for doing so is suggested here. METHODS: The approach is based upon the meta-likelihood in combination with a general linear nonparametric mixed model, which lays the ground for all inferential conclusions suggested here. RESULTS: The concept is illustrated with a meta-analysis investigating the relationship between hormone replacement therapy and breast cancer. The phenomenon of interest has been investigated in many studies over a considerable time and different results were reported. In 1992, a meta-analysis by Sillero-Arenas et al. concluded a small but significant overall effect of 1.06 on the relative risk scale. Using the meta-likelihood approach, it is demonstrated here that this meta-analysis is affected by considerable unobserved heterogeneity. Furthermore, it is shown that new methods are available to model this heterogeneity successfully. It is further argued that available study covariates should be included to explain this heterogeneity in the meta-analysis at hand. CONCLUSIONS: The topic of HRT and breast cancer has again very recently become an issue of public debate, when results of a large trial investigating the health effects of hormone replacement therapy were published, indicating an increased risk for breast cancer (risk ratio of 1.26). Using an adequate regression model in the previously published meta-analysis, an adjusted estimate of effect of 1.14 can be given, which is considerably higher than the one published in the meta-analysis of Sillero-Arenas et al. In summary, it is hoped that the method suggested here contributes further to good meta-analytic practice in public health and clinical disciplines.
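A standard way to let the pooled effect reflect unobserved heterogeneity is a random-effects model; the DerSimonian-Laird moment estimator below is a simpler stand-in for the paper's nonparametric mixed-model meta-likelihood, shown here with made-up study data:

```python
import numpy as np

def dersimonian_laird(est, var):
    """Random-effects pooling of study estimates (e.g. log relative risks)
    with the DerSimonian-Laird moment estimator of between-study variance."""
    w = 1.0 / var                                  # fixed-effect weights
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)             # Cochran's Q (heterogeneity)
    k = len(est)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)             # between-study variance
    w_re = 1.0 / (var + tau2)
    pooled = np.sum(w_re * est) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2, q

# Hypothetical log relative risks and variances for a handful of HRT studies
# (illustrative only, not the Sillero-Arenas et al. data):
log_rr = np.array([0.10, -0.05, 0.25, 0.02, 0.30, -0.10])
var = np.array([0.010, 0.020, 0.015, 0.008, 0.030, 0.025])
pooled, se, tau2, q = dersimonian_laird(log_rr, var)
print(f"pooled RR = {np.exp(pooled):.3f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.3f}-{np.exp(pooled + 1.96 * se):.3f}), "
      f"tau^2 = {tau2:.4f}")
```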

Relevance: 30.00%

Abstract:

In this paper, numerical analyses of the thermal performance of an indirect evaporative air cooler incorporating an M-cycle cross-flow heat exchanger have been carried out. The numerical model was established by solving the coupled governing equations for heat and mass transfer between the product and working air, using the finite-element method. The model was developed in the EES (Engineering Equation Solver) environment and validated against published experimental data. Correlations between the cooling (wet-bulb) effectiveness, system COP and a number of air flow/exchanger parameters were developed. It is found that lower channel air velocity, lower inlet air relative humidity, and a higher working-to-product air ratio yielded higher cooling effectiveness. The recommended average air velocities in the dry and wet channels should not be greater than 1.77 m/s and 0.7 m/s, respectively. The optimum flow ratio of working-to-product air for this cooler is 50%. The channel geometric sizes, i.e. channel length and height, also have a significant impact on system performance. A longer channel and a smaller channel height increase the system cooling effectiveness but lead to reduced system COP. The recommended channel height is 4 mm, and the dimensionless channel length, i.e., the ratio of channel length to height, should be in the range 100 to 300. Numerical results indicated that this new type of M-cycle heat and mass exchanger can achieve 16.7% higher cooling effectiveness than the conventional cross-flow heat and mass exchanger for indirect evaporative coolers. A model of this kind is new and not yet reported in the literature. The results of the study help with the design and performance analysis of this new type of indirect evaporative air cooler and, further, help increase the market uptake of the technology within the building air-conditioning sector, which is currently dominated by conventional compression refrigeration technology.
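The two headline performance measures, wet-bulb effectiveness and COP, follow directly from their definitions. A minimal sketch with assumed operating conditions, not values from the paper:

```python
def wet_bulb_effectiveness(t_in, t_out, t_wb):
    """Cooling (wet-bulb) effectiveness of an indirect evaporative cooler:
    product-air temperature drop relative to the inlet wet-bulb depression.
    An M-cycle exchanger can exceed 1 (sub-wet-bulb cooling)."""
    return (t_in - t_out) / (t_in - t_wb)

def cooling_cop(m_dot, cp, t_in, t_out, fan_power):
    """System COP: sensible cooling delivered per unit of fan/pump power.
    m_dot in kg/s, cp in J/(kg K), temperatures in C, fan_power in W."""
    return m_dot * cp * (t_in - t_out) / fan_power

# Illustrative operating point (assumed numbers):
eps = wet_bulb_effectiveness(t_in=35.0, t_out=19.0, t_wb=20.0)
cop = cooling_cop(m_dot=0.30, cp=1005.0, t_in=35.0, t_out=19.0, fan_power=300.0)
print(f"effectiveness = {eps:.2f}, COP = {cop:.1f}")
```

Here the outlet is below the inlet wet-bulb temperature, so the effectiveness exceeds 1, the signature of M-cycle operation.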

Relevance: 30.00%

Abstract:

Cross-layer techniques represent efficient means to enhance throughput and increase the transmission reliability of wireless communication systems. In this paper, a cross-layer design of aggressive adaptive modulation and coding (A-AMC), truncated automatic repeat request (T-ARQ), and user scheduling is proposed for multiuser multiple-input-multiple-output (MIMO) maximal ratio combining (MRC) systems, where the impacts of feedback delay (FD) and limited feedback (LF) on channel state information (CSI) are also considered. The A-AMC and T-ARQ mechanism selects the appropriate modulation and coding schemes (MCSs) to achieve higher spectral efficiency while satisfying the service requirement on the packet loss rate (PLR), profiting from the possibility of using different MCSs to retransmit a packet. Each packet is destined to a scheduled user selected to exploit multiuser diversity and enhance the system's performance in terms of both transmission efficiency and fairness. The system's performance is evaluated in terms of the average PLR, average spectral efficiency (ASE), outage probability, and average packet delay, which are derived in closed form, considering transmissions over Rayleigh-fading channels. Numerical results and comparisons are provided and show that A-AMC combined with T-ARQ yields higher spectral efficiency than the conventional scheme based on adaptive modulation and coding (AMC), while keeping the achieved PLR closer to the system's requirement and reducing delay. Furthermore, the effects of the number of ARQ retransmissions, the numbers of transmit and receive antennas, the normalized FD, and the cardinality of the beamforming weight vector codebook are studied and discussed.
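The contrast between conventional AMC and the aggressive variant can be sketched as an SNR-threshold lookup: A-AMC shifts the thresholds so a faster MCS is tried first, relying on T-ARQ to retransmit at a safer MCS if the packet is lost. The MCS table and thresholds below are assumed for illustration, not taken from the paper or any standard:

```python
from dataclasses import dataclass

@dataclass
class MCS:
    name: str
    snr_threshold_db: float    # minimum post-MRC SNR meeting the target PLR
    rate_bps_hz: float         # spectral efficiency

# Illustrative MCS table (assumed thresholds):
TABLE = [
    MCS("BPSK 1/2", 2.0, 0.5),
    MCS("QPSK 1/2", 5.0, 1.0),
    MCS("QPSK 3/4", 8.0, 1.5),
    MCS("16QAM 1/2", 11.0, 2.0),
    MCS("16QAM 3/4", 14.0, 3.0),
    MCS("64QAM 3/4", 19.0, 4.5),
]

def select_mcs(snr_db, margin_db=0.0):
    """Conventional AMC picks the fastest MCS whose threshold is met; an
    'aggressive' variant applies a negative margin, accepting a higher
    first-transmission PLR because T-ARQ can retransmit with a safer MCS."""
    feasible = [m for m in TABLE if snr_db >= m.snr_threshold_db + margin_db]
    return max(feasible, key=lambda m: m.rate_bps_hz) if feasible else TABLE[0]

print(select_mcs(12.0).name)                    # conventional choice
print(select_mcs(12.0, margin_db=-3.0).name)    # aggressive choice, one step up
```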

Relevance: 30.00%

Abstract:

This paper examines the determinants of cross-platform arbitrage profits. We develop a structural model that enables us to decompose the likelihood of an arbitrage opportunity into three distinct factors: the fixed cost to trade the opportunity, the extent to which one of the platforms delays a price update, and the impact of the order flow on the quoted prices (inventory and asymmetric-information effects). We then test the predictions of the theoretical model for the European bond market by estimating a probit model. Our main finding is that the empirical results strongly corroborate the predictions of the structural model. Cross-market arbitrage opportunities are predictable to some degree: the optimal ex ante scenario combines a low level of spreads on both platforms, a time of day close to the end of trading hours and a high volume of trade.
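The empirical step, a probit model for the occurrence of an arbitrage opportunity, can be sketched with simulated covariates standing in for spreads, time of day and volume (the coefficients and proxies below are assumptions, not the paper's estimates):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_nll(beta, X, y):
    """Negative log-likelihood of a probit model P(y = 1) = Phi(X beta)."""
    p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(4)
n = 3000
spread = rng.exponential(1.0, n)          # bid-ask spread level (assumed proxy)
eod = rng.random(n)                       # proximity to end of trading, 0..1
volume = rng.exponential(1.0, n)          # trade volume (assumed proxy)
X = np.column_stack([np.ones(n), spread, eod, volume])
true_beta = np.array([-1.5, -0.5, 0.8, 0.4])  # low spreads, late hours, high volume
y = (rng.random(n) < norm.cdf(X @ true_beta)).astype(float)

# Maximum likelihood by direct minimisation of the negative log-likelihood:
res = minimize(probit_nll, np.zeros(4), args=(X, y), method="BFGS")
print("estimated coefficients:", res.x.round(2))
```

The estimated signs reproduce the stylised scenario: arbitrage events are more likely with low spreads, late in the trading day and with high volume.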

Relevance: 30.00%

Abstract:

In this paper an equation is derived for the mean backscatter cross section of an ensemble of snowflakes at centimeter and millimeter wavelengths. It uses the Rayleigh–Gans approximation, which has previously been found to be applicable at these wavelengths due to the low density of snow aggregates. Although the internal structure of an individual snowflake is random and unpredictable, the authors find from simulations of the aggregation process that their structure is “self-similar” and can be described by a power law. This enables an analytic expression to be derived for the backscatter cross section of an ensemble of particles as a function of their maximum dimension in the direction of propagation of the radiation, the volume of ice they contain, a variable describing their mean shape, and two variables describing the shape of the power spectrum. The exponent of the power law is found to be −. In the case of 1-cm snowflakes observed by a 3.2-mm-wavelength radar, the backscatter is 40–100 times larger than that of a homogeneous ice–air spheroid with the same mass, size, and aspect ratio.
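For contrast with the Rayleigh-Gans result, the classical Rayleigh backscatter cross section of a small homogeneous sphere is a one-liner; it is the d^6 baseline that large, low-density aggregates depart from at millimeter wavelengths:

```python
import numpy as np

def rayleigh_backscatter(d, wavelength, k2=0.176):
    """Rayleigh backscatter cross section (m^2) of a small ice sphere of
    diameter d (m): sigma = pi^5 |K|^2 d^6 / lambda^4, with |K|^2 = 0.176
    the usual dielectric factor for solid ice. Valid only for d << lambda;
    large snowflakes at millimeter wavelengths fall outside this regime,
    which is why the Rayleigh-Gans treatment is needed."""
    return np.pi ** 5 * k2 * d ** 6 / wavelength ** 4

# d^6 scaling: doubling the diameter raises the cross section 64-fold.
sigma_1mm = rayleigh_backscatter(1e-3, 3.2e-3)   # 1 mm sphere, 3.2 mm radar
sigma_2mm = rayleigh_backscatter(2e-3, 3.2e-3)
print(f"{sigma_2mm / sigma_1mm:.0f}")   # 64
```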