132 results for linear model
Abstract:
Atmospheric CO2 concentration is expected to continue rising in the coming decades, but natural or artificial processes may eventually reduce it. We show that, in the FAMOUS atmosphere-ocean general circulation model, the reduction of ocean heat content as radiative forcing decreases is greater than would be expected from a linear model simulation of the response to the applied forcings. We relate this effect to the behavior of the Atlantic meridional overturning circulation (AMOC): the ocean cools more efficiently with a strong AMOC. The AMOC weakens as CO2 rises, then strengthens as CO2 declines, but temporarily overshoots its original strength. This nonlinearity comes mainly from the accumulated advection of salt into the North Atlantic, which gives the system a longer memory. This implies that changes observed in response to different CO2 scenarios or from different initial states, such as from past changes, may not be a reliable basis for making projections.
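As a rough illustration of the linear reference used here, the following is a minimal sketch of an impulse-response (linear) model in which ocean heat uptake is reconstructed by convolving the forcing history with a fixed response function; the forcing ramp, response time scale and amplitudes are illustrative assumptions, not the FAMOUS experiment.

```python
# Minimal sketch (not the FAMOUS setup): a linear response model in which the
# ocean heat content change follows from convolving the forcing history with a
# fixed impulse-response function. Deviations of a full GCM from such a
# reconstruction are the kind of nonlinearity the abstract describes.
import numpy as np

years = np.arange(240)
# Idealised ramp-up then ramp-down of radiative forcing (W m^-2), illustrative only.
forcing = np.where(years < 120, 0.05 * years, 0.05 * (240 - years))

tau = 50.0                              # assumed response time scale (years)
impulse = np.exp(-years / tau) / tau    # normalised impulse-response function

# Linear-model prediction: convolution of forcing with the impulse response.
linear_ohc = np.convolve(forcing, impulse)[:len(years)]
print("linear-model heat uptake at end of ramp-down:", round(linear_ohc[-1], 2))
```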
Abstract:
Relations between the apparent electrical conductivity of the soil (ECa) and top- and sub-soil physical properties were examined for two arable fields in southern England (Crowmarsh Battle Farms and the Yattendon Estate). The spatial variation of ECa and the soil properties was explored geostatistically. The variogram ranges showed that ECa varied on a similar spatial scale to many of the soil physical properties in both fields. Several features in the map of kriged predictions of ECa were also evident in maps of the soil properties. In addition, the correlation coefficients showed a strong relation between ECa and several soil properties. A moving correlation analysis enabled differences in the relations between ECa and the soil properties to be examined within the fields. The results indicated that relations were inconsistent; they were stronger in some areas than others. A regression of ECa on the principal component scores of the leading components for both fields showed that the first two components accounted for a large proportion of the variance in ECa, whereas the others accounted for little or none. For Crowmarsh, topsoil sand and clay, loss on ignition and volumetric water measured in the autumn had large correlations on the first component, and for Yattendon they were large for topsoil sand and clay, and autumn and spring volumetric water. The cross-variograms suggested strong coregionalization between ECa and several soil physical properties; in particular subsoil sand and silt at Crowmarsh, and subsoil sand and clay at Yattendon. The structural correlations from the linear model of coregionalization confirmed the strength of the relations between ECa and the subsoil properties. Nevertheless, no one property was consistently important for both fields. Although a map of ECa can indicate the general patterns of spatial variation in the soil, it is not a substitute for information on soil properties obtained by sampling and analysing the soil. Nevertheless, it could be used to guide further sampling.
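For illustration, a minimal sketch of the empirical semivariogram underlying the variogram-range comparison described above; the coordinates and property values are synthetic stand-ins, not the Crowmarsh or Yattendon survey data.

```python
# Minimal sketch of an empirical (isotropic) semivariogram, the quantity whose
# range is compared between ECa and the soil properties in the abstract.
# Synthetic sample locations and values, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 500, size=(200, 2))   # sample locations (m)
values = rng.normal(size=200)                 # measured property, e.g. ECa

# Pairwise separation distances and squared differences between samples.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
sqdiff = (values[:, None] - values[None, :]) ** 2

bins = np.arange(0, 300, 25)                  # lag classes (m)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (d > lo) & (d <= hi)
    gamma = 0.5 * sqdiff[mask].mean()         # semivariance for this lag class
    print(f"lag {lo:3.0f}-{hi:3.0f} m: gamma = {gamma:.3f}")
```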
Abstract:
Recent radar and rain-gauge observations from the island of Dominica, which lies in the eastern Caribbean Sea at 15°N, show a strong orographic enhancement of trade-wind precipitation. The mechanisms behind this enhancement are investigated using idealized large-eddy simulations with a realistic representation of the shallow trade-wind cumuli over the open ocean upstream of the island. The dominant mechanism is found to be the rapid growth of convection by the bulk lifting of the inhomogeneous impinging flow. When rapidly lifted by the terrain, existing clouds and other moist parcels gain buoyancy relative to rising dry air because of their different adiabatic lapse rates. The resulting energetic, closely packed convection forms precipitation readily and brings frequent heavy showers to the high terrain. Despite this strong precipitation enhancement, only a small fraction (1%) of the impinging moisture flux is lost over the island. However, an extensive rain shadow forms to the lee of Dominica due to the convective stabilization, forced descent, and wave breaking. A linear model is developed to explain the convective enhancement over the steep terrain.
Abstract:
A combination of idealized numerical simulations and analytical theory is used to investigate the spacing between convective orographic rainbands over the Coastal Range of western Oregon. The simulations, which are idealized from an observed banded precipitation event over the Coastal Range, indicate that the atmospheric response to conditionally unstable flow over the mountain ridge depends strongly on the subridge-scale topographic forcing on the windward side of the ridge. When this small-scale terrain contains only a single scale (l) of terrain variability, the band spacing is identical to l, but when a spectrum of terrain scales is simultaneously present, the band spacing ranges between 5 and 10 km, a value that is consistent with observations. Based on the simulations, an inviscid linear model is developed to provide a physical basis for understanding the scale selection of the rainbands. This analytical model, which captures the transition from lee waves upstream of the orographic cloud to moist convection within it, reveals that the spacing of orographic rainbands depends on both the projection of lee-wave energy onto the unstable cap cloud and the growth rate of unstable perturbations within the cloud. The linear model is used in tandem with numerical simulations to determine the sensitivity of the band spacing to a number of environmental and terrain-related parameters.
Abstract:
The ECMWF ensemble weather forecasts are generated by perturbing the initial conditions of the forecast using a subset of the singular vectors of the linearised propagator. Previous results show that, when creating probabilistic forecasts from this ensemble, better forecasts are obtained if the mean of the spread and the variability of the spread are calibrated separately. We show results from a simple linear model suggesting that this may be a generic property of all singular-vector-based ensemble forecasting systems that use only a subset of the full set of singular vectors.
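A minimal sketch of the singular-vector idea the abstract refers to: the leading singular vectors of a linearised propagator are the initial perturbations that grow fastest over the forecast interval. The matrix below is a random stand-in, not the ECMWF tangent-linear model.

```python
# Minimal sketch: the leading singular vectors of a linearised propagator M
# give the fastest-growing initial perturbations; only a subset is retained,
# as in the ensemble system described above. Toy matrix, illustrative only.
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(40, 40))       # stand-in for the linearised propagator

U, s, Vt = np.linalg.svd(M)
k = 5                               # retain only a subset of singular vectors
leading_init = Vt[:k]               # initial-time singular vectors (rows)
growth = s[:k]                      # their amplification factors

print("amplification of leading singular vectors:", np.round(growth, 2))
```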
Abstract:
A 2-year longitudinal survey was carried out to investigate factors affecting milk yield in crossbred cows on smallholder farms in and around an urban centre. Sixty farms were visited at approximately 2-week intervals and details of milk yield, body condition score (BCS) and heart girth measurements were collected. Fifteen farms were within the town (U), 23 farms were approximately 5 km from town (SU), and 22 farms approximately 10 km from town (PU). Sources of variation in milk yield were investigated using a general linear model by a stepwise forward selection and backward elimination approach to judge important independent variables. Factors considered for the first step of formulation of the model included location (PU, SU and U), calving season, BCS at calving, at 3 months postpartum and at 6 months postpartum, calving year, herd size category, source of labour (hired and family labour), calf rearing method (bucket and partial suckling) and parity number of the cow. Daily milk yield (including milk sucked by calves) was determined by calving year (p < 0.0001), calf rearing method (p = 0.044) and BCS at calving (p < 0.0001). Only BCS at calving contributed to variation in volume of milk sucked by the calf, lactation length and lactation milk yield. BCS at 3 months after calving was improved on farms where labour was hired (p = 0.041) and BCS change from calving to 6 months was more than twice as likely to be negative on U than SU and PU farms. It was concluded that milk production was predominantly associated with BCS at calving, lactation milk yield increasing quadratically from score 1 to 3. BCS at calving may provide a simple, single indicator of the nutritional status of a cow population.
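A minimal sketch of stepwise forward selection for a general linear model of this kind; the variable names, entry threshold and synthetic data are illustrative assumptions, not the survey data or the exact procedure used.

```python
# Minimal sketch of stepwise forward selection for a general linear model,
# in the spirit of the approach described above. Synthetic data; variable
# names are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "bcs_calving": rng.normal(2.5, 0.5, n),
    "herd_size": rng.integers(1, 10, n),
    "parity": rng.integers(1, 8, n),
})
df["milk_yield"] = 4 + 2.0 * df["bcs_calving"] + rng.normal(0, 1, n)

selected, remaining = [], ["bcs_calving", "herd_size", "parity"]
while remaining:
    # At each step, add the candidate with the smallest p-value (if < 0.05).
    pvals = {}
    for var in remaining:
        formula = "milk_yield ~ " + " + ".join(selected + [var])
        pvals[var] = smf.ols(formula, data=df).fit().pvalues[var]
    best = min(pvals, key=pvals.get)
    if pvals[best] >= 0.05:
        break
    selected.append(best)
    remaining.remove(best)

print("selected terms:", selected)
```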
Abstract:
A 2-year longitudinal survey was carried out to investigate factors affecting reproduction in crossbred cows on smallholder farms in and around an urban centre. Sixty farms were visited at approximately 2-week intervals and details of reproductive traits and body condition score (BCS) were collected. Fifteen farms were within the town (U), 23 farms were approximately 5 km from town (SU), and 22 farms approximately 10 km from town (PU). Sources of variation in reproductive traits were investigated using a general linear model (GLM) by a stepwise forward selection and backward elimination approach to judge important independent variables. Factors considered for the first step of formulation of the model included location (PU, SU and U), type of insemination, calving season, BCS at calving, at 3 months postpartum and at 6 months postpartum, calving year, herd size category, source of labour (hired and family labour), calf rearing method (bucket and partial suckling) and parity number of the cow. The effects of the independent variables identified were then investigated using a non-parametric survival technique. The number of days to first oestrus was increased on the U site (p = 0.045) and when family labour was used (p = 0.02). The non-parametric test confirmed the effect of site (p = 0.059), but the effect of labour was not significant. The number of days from calving to conception was reduced by hiring labour (p = 0.003) and using natural service (p = 0.028). The non-parametric test confirmed the effect of type of insemination (p = 0.0001) while also identifying extended calving intervals on U and SU sites (p = 0.014). Labour source was again non-significant. Calving interval was prolonged on U and SU sites (p = 0.021), by the use of AI (p = 0.031) and by the use of family labour (p = 0.001). The non-parametric test confirmed the effect of site (p = 0.008) and insemination type (p < 0.0001) but not of labour source. It was concluded that under favourable conditions (PU site, hired labour and natural service) calving intervals of around 440 days could be achieved.
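A minimal sketch of a non-parametric survival estimate of the sort referred to above (here a Kaplan-Meier curve for days from calving to conception), using synthetic data; this is illustrative only and not the authors' analysis.

```python
# Minimal sketch of a non-parametric survival estimate (Kaplan-Meier) of the
# kind used above, written out directly. Synthetic days-to-conception data.
import numpy as np

def kaplan_meier(times, events):
    """Return event times and survival probabilities (events: 1 = observed)."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_at_risk = len(times)
    surv, out_t, out_s = 1.0, [], []
    for t, e in zip(times, events):
        if e:                                   # conception observed
            surv *= (n_at_risk - 1) / n_at_risk
            out_t.append(t)
            out_s.append(surv)
        n_at_risk -= 1                          # censored cows leave the risk set
    return np.array(out_t), np.array(out_s)

rng = np.random.default_rng(3)
days = rng.exponential(120, size=50) + 60       # days from calving to conception
observed = rng.random(50) < 0.9                 # some cows censored
t, s = kaplan_meier(days, observed)

below = np.nonzero(s <= 0.5)[0]
print("median days to conception:", round(t[below[0]]) if below.size else "not reached")
```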
Abstract:
Few studies have linked density dependence of parasitism and the tritrophic environment within which a parasitoid forages. In the non-crop plant-aphid system Centaurea nigra-Uroleucon jaceae, mixed patterns of density-dependent parasitism by the parasitoids Aphidius funebris and Trioxys centaureae were observed in a survey of a natural population. Breakdown of density-dependent parasitism revealed that density dependence was inverse in smaller colonies but direct in large colonies (>20 aphids), suggesting there is a threshold effect in parasitoid response to aphid density. The CV² of searching parasitoids was estimated from parasitism data using a hierarchical generalized linear model, and CV² > 1 for A. funebris between plant patches, while for T. centaureae CV² > 1 within plant patches. In both cases, density-independent heterogeneity was more important than density-dependent heterogeneity in parasitism. Parasitism by T. centaureae increased with increasing plant patch size. Manipulation of aphid colony size and plant patch size revealed that parasitism by A. funebris was directly density dependent at the range of colony sizes tested (50-200 initial aphids), and had a strong positive relationship with plant patch size. The effects of plant patch size detected for both species indicate that the tritrophic environment provides a source of host-density-independent heterogeneity in parasitism, and can modify density-dependent responses.
Abstract:
Pharmacogenetic trials investigate the effect of genotype on treatment response. When there are two or more treatment groups and two or more genetic groups, investigation of gene-treatment interactions is of key interest. However, calculation of the power to detect such interactions is complicated because this depends not only on the treatment effect size within each genetic group, but also on the number of genetic groups, the size of each genetic group, and the type of genetic effect that is both present and tested for. The scale chosen to measure the magnitude of an interaction can also be problematic, especially for the binary case. Elston et al. proposed a test for detecting the presence of gene-treatment interactions for binary responses, and gave appropriate power calculations. This paper shows how the same approach can also be used for normally distributed responses. We also propose a method for analysing and performing sample size calculations based on a generalized linear model (GLM) approach. The power of the Elston et al. and GLM approaches is compared for the binary and normal cases using several illustrative examples. While more sensitive to errors in model specification than the Elston et al. approach, the GLM approach is much more flexible and in many cases more powerful.
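A minimal sketch of a simulation-based power calculation for a gene-treatment interaction fitted with a linear model; the 2x2 design, effect sizes and group sizes are illustrative assumptions rather than those of Elston et al. or the GLM method proposed in the paper.

```python
# Minimal sketch of a simulation-based power calculation for a gene-treatment
# interaction in a linear model, for a normally distributed response.
# All effect sizes and sample sizes are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

def one_trial(n_per_cell=50, interaction=0.5):
    treat = np.repeat([0, 1], 2 * n_per_cell)
    gene = np.tile(np.repeat([0, 1], n_per_cell), 2)
    y = 0.3 * treat + 0.2 * gene + interaction * treat * gene + rng.normal(size=4 * n_per_cell)
    df = pd.DataFrame({"y": y, "treat": treat, "gene": gene})
    fit = smf.ols("y ~ treat * gene", data=df).fit()
    return fit.pvalues["treat:gene"] < 0.05      # interaction detected at 5% level?

power = np.mean([one_trial() for _ in range(500)])
print("estimated power to detect the interaction:", power)
```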
Abstract:
Bayesian decision procedures have already been proposed for and implemented in Phase I dose-escalation studies in healthy volunteers. The procedures have been based on pharmacokinetic responses reflecting the concentration of the drug in blood plasma and are conducted to learn about the dose-response relationship while avoiding excessive concentrations. However, in many dose-escalation studies, pharmacodynamic endpoints such as heart rate or blood pressure are observed, and it is these that should be used to control dose-escalation. These endpoints introduce additional complexity into the modeling of the problem relative to pharmacokinetic responses. Firstly, there are responses available following placebo administrations. Secondly, the pharmacodynamic responses are related directly to measurable plasma concentrations, which in turn are related to dose. Motivated by experience of data from a real study conducted in a conventional manner, this paper presents and evaluates a Bayesian procedure devised for the simultaneous monitoring of pharmacodynamic and pharmacokinetic responses. Account is also taken of the incidence of adverse events. Following logarithmic transformations, a linear model is used to relate dose to the pharmacokinetic endpoint and a quadratic model to relate the latter to the pharmacodynamic endpoint. A logistic model is used to relate the pharmacokinetic endpoint to the risk of an adverse event.
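A minimal sketch of the model structure described above: on the log scale, a linear dose-concentration (pharmacokinetic) model, a quadratic concentration-response (pharmacodynamic) model, and a logistic model for adverse-event risk. All coefficients are illustrative assumptions, not estimates from the study, and the Bayesian dose-escalation machinery itself is not reproduced here.

```python
# Minimal sketch of the dose -> PK -> PD / adverse-event structure described
# above, after logarithmic transformation. Illustrative coefficients only.
import numpy as np

def simulate_subject(dose, rng, sigma_pk=0.2, sigma_pd=0.1):
    log_pk = 0.5 + 1.0 * np.log(dose) + rng.normal(0, sigma_pk)                   # linear PK model
    log_pd = 4.0 + 0.3 * log_pk - 0.05 * log_pk**2 + rng.normal(0, sigma_pd)      # quadratic PD model
    p_ae = 1.0 / (1.0 + np.exp(-(-4.0 + 0.8 * log_pk)))                           # logistic adverse-event model
    return np.exp(log_pk), np.exp(log_pd), rng.random() < p_ae

rng = np.random.default_rng(5)
for dose in [1, 2, 5, 10]:
    pk, pd_resp, ae = simulate_subject(dose, rng)
    print(f"dose {dose:>2}: concentration {pk:6.2f}, PD response {pd_resp:6.1f}, adverse event: {ae}")
```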
Abstract:
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found such that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
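A minimal sketch of the capture-recapture application mentioned above, using a single zero-truncated Poisson (no mixture, an illustrative simplification) and the Horvitz-Thompson estimator of population size; the observed counts are made up for illustration.

```python
# Minimal sketch: fit a zero-truncated Poisson to the observed (non-zero)
# capture counts and estimate population size with the Horvitz-Thompson
# estimator N_hat = n / (1 - p0). Single Poisson, no mixture; illustrative data.
import numpy as np
from scipy.optimize import brentq

counts = np.array([1] * 60 + [2] * 25 + [3] * 10 + [4] * 5)   # observed captures (zeros unobserved)
n, mean_obs = len(counts), counts.mean()

# MLE of lambda for the zero-truncated Poisson solves lambda / (1 - exp(-lambda)) = observed mean.
lam = brentq(lambda l: l / (1 - np.exp(-l)) - mean_obs, 1e-6, 50)

p0 = np.exp(-lam)            # estimated probability of never being captured
n_hat = n / (1 - p0)         # Horvitz-Thompson estimate of population size
print(f"lambda = {lam:.3f}, estimated population size = {n_hat:.1f}")
```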
Abstract:
This paper addresses the need for accurate predictions of fault inflow, i.e. the number of faults found in consecutive project weeks, in highly iterative processes. In such processes, in contrast to waterfall-like processes, fault repair and development of new features run almost in parallel. Given accurate predictions of fault inflow, managers could dynamically re-allocate resources between these different tasks in a more adequate way. Furthermore, managers could react with process improvements when the expected fault inflow is higher than desired. This study suggests software reliability growth models (SRGMs) for predicting fault inflow. These models were originally developed for traditional processes; their performance in highly iterative processes is investigated here. Additionally, a simple linear model is developed and compared to the SRGMs. The paper provides results from applying these models to fault data from three different industrial projects. One of the key findings of this study is that some SRGMs are applicable for predicting fault inflow in highly iterative processes. Moreover, the results show that the simple linear model represents a valid alternative to the SRGMs, as it provides reasonably accurate predictions and performs better in many cases.
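A minimal sketch of the comparison described above: a simple linear model of weekly fault inflow set against an exponential-type SRGM intensity. The weekly fault counts and model choices are illustrative, not the industrial project data or the exact models evaluated.

```python
# Minimal sketch: predict next week's fault inflow with (a) a simple linear
# model fitted to recent weeks and (b) a Goel-Okumoto-style SRGM intensity
# lambda(t) = a*b*exp(-b*t) fitted to all weeks. Synthetic fault counts.
import numpy as np
from scipy.optimize import curve_fit

weeks = np.arange(1, 21)
faults = np.array([18, 16, 17, 14, 15, 12, 13, 11, 10, 11,
                   9, 8, 9, 7, 6, 7, 5, 5, 4, 4])

# Simple linear model fitted to the most recent weeks only.
coef = np.polyfit(weeks[-6:], faults[-6:], 1)
linear_next = np.polyval(coef, weeks[-1] + 1)

# Exponential-type SRGM intensity fitted to the whole history.
go_intensity = lambda t, a, b: a * b * np.exp(-b * t)
(a, b), _ = curve_fit(go_intensity, weeks, faults, p0=(300, 0.1), maxfev=10000)
srgm_next = go_intensity(weeks[-1] + 1, a, b)

print(f"linear model prediction for week 21: {linear_next:.1f}")
print(f"SRGM prediction for week 21:        {srgm_next:.1f}")
```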
Abstract:
An input variable selection procedure is introduced for the identification and construction of multi-input multi-output (MIMO) neurofuzzy operating point dependent models. The algorithm is an extension of a forward modified Gram-Schmidt orthogonal least squares procedure for a linear model structure which is modified to accommodate nonlinear system modeling by incorporating piecewise locally linear model fitting. The proposed input nodes selection procedure effectively tackles the problem of the curse of dimensionality associated with lattice-based modeling algorithms such as radial basis function neurofuzzy networks, enabling the resulting neurofuzzy operating point dependent model to be widely applied in control and estimation. Some numerical examples are given to demonstrate the effectiveness of the proposed construction algorithm.
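A minimal sketch of forward orthogonal least squares selection with the error reduction ratio, the kind of Gram-Schmidt procedure the abstract builds on; the candidate regressors and data are illustrative, and the locally linear neurofuzzy structure is omitted.

```python
# Minimal sketch of forward orthogonal least squares term selection using the
# error reduction ratio (ERR), with Gram-Schmidt orthogonalisation against the
# terms already chosen. Synthetic linear-in-the-parameters regressors.
import numpy as np

rng = np.random.default_rng(6)
N, M = 300, 6
P = rng.normal(size=(N, M))                       # candidate regressor matrix
y = 2.0 * P[:, 1] - 1.5 * P[:, 4] + 0.1 * rng.normal(size=N)

selected, basis = [], []                          # chosen indices and orthogonalised regressors
for _ in range(3):                                # select up to three terms
    best_err, best_j, best_w = -1.0, None, None
    for j in range(M):
        if j in selected:
            continue
        w = P[:, j].copy()
        for q in basis:                           # Gram-Schmidt against chosen basis
            w -= (w @ q) / (q @ q) * q
        err = (w @ y) ** 2 / ((w @ w) * (y @ y))  # error reduction ratio of candidate j
        if err > best_err:
            best_err, best_j, best_w = err, j, w
    selected.append(best_j)
    basis.append(best_w)
    print(f"selected regressor {best_j} with ERR = {best_err:.3f}")
```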
Abstract:
Three emissions inventories have been used with a fully Lagrangian trajectory model to calculate the stratospheric accumulation of water vapour emissions from aircraft, and the resulting radiative forcing. The annual and global mean radiative forcing due to present-day aviation water vapour emissions has been found to be 0.9 [0.3 to 1.4] mW m^-2. This is around a factor of three smaller than the value given in recent assessments, and the upper bound is much lower than a recently suggested 20 mW m^-2 upper bound. This forcing is sensitive to the vertical distribution of emissions and, to a lesser extent, to interannual variability in meteorology. Large differences in the vertical distribution of emissions within the inventories have been identified, which result in the choice of inventory being the largest source of differences in the calculation of the radiative forcing due to the emissions. Analysis of Northern Hemisphere trajectories demonstrates that the assumption of an e-folding time is not always appropriate for stratospheric emissions. A linear model is more representative for emissions that enter the stratosphere far above the tropopause.
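A minimal sketch of the contrast drawn in the final sentences: fitting an e-folding (exponential) decay and a linear decay to the fraction of an emitted tracer remaining in the stratosphere. The decay curve is synthetic and only illustrates how the two assumptions differ.

```python
# Minimal sketch: compare an e-folding (exponential) decay and a linear decay
# as descriptions of the mass of an emitted tracer remaining over time.
# Synthetic near-linear decay curve, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

months = np.arange(0, 36)
mass = np.clip(1.0 - months / 30.0, 0, None) + 0.02 * np.random.default_rng(7).normal(size=36)

exp_model = lambda t, tau: np.exp(-t / tau)       # e-folding assumption
(tau,), _ = curve_fit(exp_model, months, mass, p0=(12.0,))

lin_coef = np.polyfit(months, mass, 1)            # linear-decay assumption

exp_rss = np.sum((mass - exp_model(months, tau)) ** 2)
lin_rss = np.sum((mass - np.polyval(lin_coef, months)) ** 2)
print(f"e-folding fit RSS: {exp_rss:.3f}   linear fit RSS: {lin_rss:.3f}")
```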
Abstract:
A method is suggested for the calculation of the friction velocity for stable turbulent boundary-layer flow over hills. The method is tested using a continuous upstream mean velocity profile compatible with the propagation of gravity waves, and is incorporated into the linear model of Hunt, Leibovich and Richards with the modification proposed by Hunt, Richards and Brighton to include the effects of stability, and the reformulated solution of Weng for the near-surface region. Those theoretical results are compared with results from simulations using a non-hydrostatic microscale-mesoscale two-dimensional numerical model, and with field observations for different values of stability. These comparisons show a considerable improvement in the behaviour of the theoretical model when the friction velocity is calculated using the method proposed here, leading to a consistent variation of the boundary-layer structure with stability, and better agreement with observational and numerical data.