939 results for General linear models
Abstract:
Seasonal climate prediction offers the potential to anticipate variations in crop production early enough to adjust critical decisions. Until recently, interest in exploiting seasonal forecasts from dynamic climate models (e.g. general circulation models, GCMs) for applications that involve crop simulation models has been hampered by the difference in spatial and temporal scale of GCMs and crop models, and by the dynamic, nonlinear relationship between meteorological variables and crop response. Although GCMs simulate the atmosphere on a sub-daily time step, their coarse spatial resolution and resulting distortion of day-to-day variability limit the use of their daily output. Crop models have used daily GCM output with some success by either calibrating simulated yields or correcting the daily rainfall output of the GCM to approximate the statistical properties of historic observations. Stochastic weather generators are used to disaggregate seasonal forecasts either by adjusting input parameters in a manner that captures the predictable components of climate, or by constraining synthetic weather sequences to match predicted values. Predicting crop yields, simulated with historic weather data, as a statistical function of seasonal climatic predictors, eliminates the need for daily weather data conditioned on the forecast, but must often address poor statistical properties of the crop-climate relationship. Most of the work on using crop simulation with seasonal climate forecasts has employed historic analogs based on categorical ENSO indices. Other methods based on classification of predictors or weather types can provide daily weather inputs to crop models conditioned on forecasts. Advances in climate-based crop forecasting in the coming decade are likely to include more robust evaluation of the methods reviewed here, dynamically embedding crop models within climate models to account for crop influence on regional climate, enhanced use of remote sensing, and research in the emerging area of 'weather within climate'.
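One of the approaches surveyed above, correcting the daily rainfall output of a GCM so that its statistical properties approximate historic observations, can be illustrated with a minimal empirical quantile-mapping sketch. The function name, the use of NumPy, and the synthetic gamma-distributed rainfall below are illustrative assumptions, not part of the reviewed studies.

```python
import numpy as np

def quantile_map(gcm_daily, obs_daily, n_quantiles=100):
    """Replace each GCM daily rainfall value with the observed value that occupies
    the same quantile of the historic record (empirical quantile mapping)."""
    probs = np.linspace(0.0, 1.0, n_quantiles)
    gcm_q = np.quantile(gcm_daily, probs)  # quantiles of the (biased) GCM rainfall
    obs_q = np.quantile(obs_daily, probs)  # quantiles of the observed rainfall
    return np.interp(gcm_daily, gcm_q, obs_q)

# Synthetic example: 30 wet seasons of daily rainfall (mm/day); the GCM drizzles too often.
rng = np.random.default_rng(0)
obs = rng.gamma(shape=0.8, scale=6.0, size=30 * 120)
gcm = rng.gamma(shape=1.6, scale=2.0, size=30 * 120)
corrected = quantile_map(gcm, obs)
print(f"mean rainfall  obs={obs.mean():.2f}  gcm={gcm.mean():.2f}  corrected={corrected.mean():.2f}")
```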
Abstract:
Previous attempts to apply statistical models, which correlate nutrient intake with methane production, have been of limited value where predictions are obtained for nutrient intakes and diet types outside those used in model construction. Dynamic mechanistic models have proved more suitable for extrapolation, but they remain computationally expensive and are not applied easily in practical situations. The first objective of this research focused on employing conventional techniques to generate statistical models of methane production appropriate to United Kingdom dairy systems. The second objective was to evaluate these models and a model published previously using both United Kingdom and North American data sets. Thirdly, nonlinear models were considered as alternatives to the conventional linear regressions. The United Kingdom calorimetry data used to construct the linear models also were used to develop the three nonlinear alternatives, all of which were of modified Mitscherlich (monomolecular) form. Of the linear models tested, an equation from the literature proved most reliable across the full range of evaluation data (root mean square prediction error = 21.3%). However, the Mitscherlich models demonstrated the greatest degree of adaptability across diet types and intake levels. The most successful model for simulating the independent data was a modified Mitscherlich equation with the steepness parameter set to represent dietary starch-to-ADF ratio (root mean square prediction error = 20.6%). However, when such data were unavailable, simpler Mitscherlich forms relating dry matter or metabolizable energy intake to methane production remained better alternatives relative to their linear counterparts.
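The modified Mitscherlich (monomolecular) form referred to above can be sketched as a simple asymptotic curve fit of methane output against intake. The parameterization, the data values, and the use of SciPy below are illustrative assumptions; they are not the equations or calorimetry data of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def mitscherlich(intake, asymptote, steepness):
    """Monomolecular (Mitscherlich) form: methane output rises towards an
    asymptote as intake increases, at a rate set by the steepness parameter."""
    return asymptote * (1.0 - np.exp(-steepness * intake))

# Invented calorimetry-style data: dry matter intake (kg/d) vs methane output (MJ/d).
dmi = np.array([8.0, 10.0, 12.0, 14.0, 16.0, 18.0, 20.0])
ch4 = np.array([14.5, 17.0, 19.2, 20.8, 22.0, 23.0, 23.6])

(a_hat, c_hat), _ = curve_fit(mitscherlich, dmi, ch4, p0=(30.0, 0.1))
pred = mitscherlich(dmi, a_hat, c_hat)
rmspe = 100.0 * np.sqrt(np.mean((ch4 - pred) ** 2)) / ch4.mean()  # prediction error as % of mean
print(f"asymptote={a_hat:.1f} MJ/d  steepness={c_hat:.3f}  RMSPE={rmspe:.1f}%")
```

Setting the steepness parameter as a function of the dietary starch-to-ADF ratio, as in the study's most successful model, would replace the fitted constant with a diet-dependent expression.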
Abstract:
A 2-year longitudinal survey was carried out to investigate factors affecting milk yield in crossbred cows on smallholder farms in and around an urban centre. Sixty farms were visited at approximately 2-week intervals and details of milk yield, body condition score (BCS) and heart girth measurements were collected. Fifteen farms were within the town (U), 23 farms were approximately 5 km from town (SU), and 22 farms approximately 10 km from town (PU). Sources of variation in milk yield were investigated using a general linear model by a stepwise forward selection and backward elimination approach to judge important independent variables. Factors considered for the first step of formulation of the model included location (PU, SU and U), calving season, BCS at calving, at 3 months postpartum and at 6 months postpartum, calving year, herd size category, source of labour (hired and family labour), calf rearing method (bucket and partial suckling) and parity number of the cow. Daily milk yield (including milk sucked by calves) was determined by calving year (p < 0.0001), calf rearing method (p = 0.044) and BCS at calving (p < 0.0001). Only BCS at calving contributed to variation in volume of milk sucked by the calf, lactation length and lactation milk yield. BCS at 3 months after calving was improved on farms where labour was hired (p = 0.041) and BCS change from calving to 6 months was more than twice as likely to be negative on U than SU and PU farms. It was concluded that milk production was predominantly associated with BCS at calving, lactation milk yield increasing quadratically from score 1 to 3. BCS at calving may provide a simple, single indicator of the nutritional status of a cow population.
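A general linear model of the kind described above can be sketched with statsmodels; the variable names, levels, and synthetic records below are invented for illustration and do not reproduce the survey data. A stepwise procedure would repeat such fits, adding or dropping one candidate term at a time.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented farm-visit records; names and effect sizes are illustrative only.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "calving_year": rng.choice([1997, 1998], n),
    "calf_rearing": rng.choice(["bucket", "partial_suckling"], n),
    "bcs_calving": np.round(rng.uniform(1.0, 3.5, n), 1),
})
df["daily_yield"] = (
    4.0
    + 0.8 * (df["calving_year"] == 1998)
    + 0.5 * (df["calf_rearing"] == "bucket")
    + 2.0 * df["bcs_calving"]
    + rng.normal(0.0, 1.5, n)
)

# General linear model with the factors the abstract reports as significant.
model = smf.ols("daily_yield ~ C(calving_year) + C(calf_rearing) + bcs_calving", data=df).fit()
print(model.summary())
```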
Abstract:
A 2-year longitudinal survey was carried out to investigate factors affecting reproduction in crossbred cows on smallholder farms in and around an urban centre. Sixty farms were visited at approximately 2-week intervals and details of reproductive traits and body condition score (BCS) were collected. Fifteen farms were within the town (U), 23 farms were approximately 5 km from town (SU), and 22 farms approximately 10 km from town (PU). Sources of variation in reproductive traits were investigated using a general linear model (GLM) by a stepwise forward selection and backward elimination approach to judge important independent variables. Factors considered for the first step of formulation of the model included location (PU, SU and U), type of insemination, calving season, BCS at calving, at 3 months postpartum and at 6 months postpartum, calving year, herd size category, source of labour (hired and family labour), calf rearing method (bucket and partial suckling) and parity number of the cow. The effects of the independent variables identified were then investigated using a non-parametric survival technique. The number of days to first oestrus was increased on the U site (p = 0.045) and when family labour was used (p = 0.02). The non-parametric test confirmed the effect of site (p = 0.059), but effect of labour was not significant. The number of days from calving to conception was reduced by hiring labour (p = 0.003) and using natural service (p = 0.028). The non-parametric test confirmed the effects of type of insemination (p = 0.0001) while also identifying extended calving intervals on U and SU sites (p = 0.014). Labour source was again non-significant. Calving interval was prolonged on U and SU sites (p = 0.021), by the use of AI (p = 0.031) and by the use of family labour (p = 0.001). The non-parametric test confirmed the effect of site (p = 0.008) and insemination type (p < 0.0001) but not of labour source. It was concluded that under favourable conditions (PU site, hired labour and natural service) calving intervals of around 440 days could be achieved.
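The non-parametric survival analysis mentioned above can be sketched with the lifelines package, here comparing days from calving to conception between insemination types; the durations are simulated and censoring is ignored purely for brevity, so this is an illustrative assumption rather than the study's analysis.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Simulated days from calving to conception for two insemination types (illustrative only).
rng = np.random.default_rng(2)
natural = rng.gamma(shape=6.0, scale=20.0, size=60)  # natural service
ai = rng.gamma(shape=6.0, scale=30.0, size=60)       # artificial insemination
obs_nat = np.ones_like(natural)                      # 1 = conception observed (no censoring here)
obs_ai = np.ones_like(ai)

kmf = KaplanMeierFitter()
kmf.fit(natural, event_observed=obs_nat, label="natural service")
print("median days to conception (natural service):", kmf.median_survival_time_)

# Non-parametric comparison of the two groups (log-rank test).
result = logrank_test(natural, ai, event_observed_A=obs_nat, event_observed_B=obs_ai)
print("log-rank p-value:", result.p_value)
```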
Abstract:
OBJECTIVES: This contribution provides a unifying concept for meta-analysis integrating the handling of unobserved heterogeneity, study covariates, publication bias and study quality. It is important to consider these issues simultaneously to avoid the occurrence of artifacts, and a method for doing so is suggested here. METHODS: The approach is based upon the meta-likelihood in combination with a general linear nonparametric mixed model, which lays the ground for all inferential conclusions suggested here. RESULTS: The concept is illustrated with a meta-analysis investigating the relationship of hormone replacement therapy and breast cancer. The phenomenon of interest has been investigated in many studies for a considerable time and different results were reported. In 1992 a meta-analysis by Sillero-Arenas et al. found a small but significant overall effect of 1.06 on the relative risk scale. Using the meta-likelihood approach, it is demonstrated here that this meta-analysis is subject to considerable unobserved heterogeneity. Furthermore, it is shown that new methods are available to model this heterogeneity successfully. It is further argued that available study covariates should be included to explain this heterogeneity in the meta-analysis at hand. CONCLUSIONS: The topic of HRT and breast cancer has again very recently become an issue of public debate, when results of a large trial investigating the health effects of hormone replacement therapy were published indicating an increased risk for breast cancer (risk ratio of 1.26). Using an adequate regression model in the previously published meta-analysis, an adjusted estimate of effect of 1.14 can be given, which is considerably higher than the one published in the meta-analysis of Sillero-Arenas et al. In summary, it is hoped that the method suggested here contributes further to good meta-analytic practice in public health and clinical disciplines.
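For contrast with the meta-likelihood and nonparametric mixed-model approach proposed above, a conventional DerSimonian-Laird random-effects pooling on the log relative-risk scale can be sketched as follows; the study-level estimates below are invented and are not the Sillero-Arenas et al. data.

```python
import numpy as np

def dersimonian_laird(log_rr, se):
    """Conventional random-effects pooling of log relative risks: estimate the
    between-study variance tau^2 from Cochran's Q, then re-weight the studies."""
    w = 1.0 / se**2                               # fixed-effect weights
    mu_fe = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - mu_fe) ** 2)         # Cochran's Q (observed heterogeneity)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)  # between-study variance estimate
    w_re = 1.0 / (se**2 + tau2)                   # random-effects weights
    return np.exp(np.sum(w_re * log_rr) / np.sum(w_re)), tau2, q

# Invented study-level relative risks and standard errors (illustrative only).
log_rr = np.log(np.array([0.90, 1.10, 1.30, 1.00, 1.25, 1.40]))
se = np.array([0.15, 0.10, 0.20, 0.12, 0.18, 0.25])
pooled_rr, tau2, q = dersimonian_laird(log_rr, se)
print(f"pooled RR={pooled_rr:.2f}  tau^2={tau2:.3f}  Q={q:.1f}")
```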
Abstract:
We argue that population modeling can add value to ecological risk assessment by reducing uncertainty when extrapolating from ecotoxicological observations to relevant ecological effects. We review other methods of extrapolation, ranging from application factors to species sensitivity distributions to suborganismal (biomarker and "-omics") responses to quantitative structure activity relationships and model ecosystems, drawing attention to the limitations of each. We suggest a simple classification of population models and critically examine each model in an extrapolation context. We conclude that population models have the potential for adding value to ecological risk assessment by incorporating better understanding of the links between individual responses and population size and structure and by incorporating greater levels of ecological complexity. A number of issues, however, need to be addressed before such models are likely to become more widely used. In a science context, these involve challenges in parameterization, questions about appropriate levels of complexity, issues concerning how specific or general the models need to be, and the extent to which interactions through competition and trophic relationships can be easily incorporated.
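As a minimal example of the kind of population model discussed above, the sketch below projects an individual-level ecotoxicological observation (a 30% reduction in fecundity) onto the asymptotic population growth rate using an age-structured Leslie matrix; all vital rates are invented for illustration.

```python
import numpy as np

def leslie_growth_rate(fecundity, survival):
    """Asymptotic population growth rate (dominant eigenvalue) of a Leslie matrix
    built from age-specific fecundities and survival probabilities."""
    n = len(fecundity)
    L = np.zeros((n, n))
    L[0, :] = fecundity                              # top row: reproduction
    L[np.arange(1, n), np.arange(n - 1)] = survival  # sub-diagonal: survival to next age class
    return max(abs(np.linalg.eigvals(L)))

# Invented vital rates for a three-age-class population.
fecundity = np.array([0.0, 4.0, 8.0])
survival = np.array([0.5, 0.3])

lam_control = leslie_growth_rate(fecundity, survival)
lam_exposed = leslie_growth_rate(0.7 * fecundity, survival)  # individual-level effect of exposure
print(f"growth rate: control={lam_control:.2f}  exposed={lam_exposed:.2f}")
```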
Abstract:
A physically motivated statistical model is used to diagnose variability and trends in wintertime (October–March) Global Precipitation Climatology Project (GPCP) pentad (5-day mean) precipitation. Quasi-geostrophic theory suggests that extratropical precipitation amounts should depend multiplicatively on the pressure gradient, saturation specific humidity, and the meridional temperature gradient. This physical insight has been used to guide the development of a suitable statistical model for precipitation using a mixture of generalized linear models: a logistic model for the binary occurrence of precipitation and a Gamma distribution model for the wet day precipitation amount. The statistical model allows for the investigation of the role of each factor in determining variations and long-term trends. Saturation specific humidity q_s has a generally negative effect on global precipitation occurrence and on the tropical wet pentad precipitation amount, but has a positive relationship with the pentad precipitation amount at mid- and high latitudes. The North Atlantic Oscillation, a proxy for the meridional temperature gradient, is also found to have a statistically significant positive effect on precipitation over much of the Atlantic region. Residual time trends in wet pentad precipitation are extremely sensitive to the choice of the wet pentad threshold because of increasing trends in low-amplitude precipitation pentads; too low a choice of threshold can lead to a spurious decreasing trend in wet pentad precipitation amounts. However, for not too small thresholds, it is found that the meridional temperature gradient is an important factor for explaining part of the long-term trend in Atlantic precipitation.
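The mixture of generalized linear models described above (a logistic model for occurrence and a Gamma model for wet-pentad amounts) can be sketched with statsmodels; the synthetic predictors below merely stand in for saturation specific humidity (q_s) and the NAO index and are not GPCP data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic pentad data: qs and nao stand in for saturation specific humidity and
# the North Atlantic Oscillation index (illustrative only).
rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({"qs": rng.normal(0, 1, n), "nao": rng.normal(0, 1, n)})
p_wet = 1.0 / (1.0 + np.exp(-(0.2 - 0.4 * df["qs"] + 0.3 * df["nao"])))
df["wet"] = (rng.uniform(size=n) < p_wet).astype(int)
df["amount"] = np.where(df["wet"] == 1,
                        rng.gamma(shape=2.0, scale=np.exp(0.5 + 0.2 * df["nao"])), 0.0)

# Part 1: logistic GLM for the binary occurrence of precipitation.
occurrence = smf.glm("wet ~ qs + nao", data=df, family=sm.families.Binomial()).fit()

# Part 2: Gamma GLM (log link) for the precipitation amount on wet pentads only.
amount = smf.glm("amount ~ qs + nao", data=df[df["wet"] == 1],
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(occurrence.params, amount.params, sep="\n")
```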
Abstract:
Mathematical models have been vitally important in the development of technologies in building engineering. A literature review identifies that linear models are the most widely used building simulation models. The advent of intelligent buildings has added new challenges to the application of the existing models, as an intelligent building requires learning and self-adjusting capabilities based on environmental and occupants' factors. It is therefore argued that linearity is an inappropriate basis for any model of complex building systems or occupant behaviours, whether for control or any other purpose. Chaos and complexity theory reflects the nonlinear dynamic properties of intelligent systems exercised by occupants and the environment, and has been used widely in modelling various engineering, natural and social systems. It is proposed that chaos and complexity theory be applied to the study of intelligent buildings. This paper gives a brief description of chaos and complexity theory, presents its current positioning and recent developments in building engineering research, and outlines future potential applications to intelligent building studies, providing a bridge between chaos and complexity theory and intelligent building research.
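As a minimal, generic illustration (not a building model) of the nonlinear dynamic behaviour that chaos and complexity theory addresses and that no linear model can capture, the logistic map below shows sensitive dependence on initial conditions.

```python
import numpy as np

def logistic_map(x0, r=3.9, n=50):
    """Iterate the logistic map x -> r*x*(1-x), a textbook example of deterministic chaos."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

# Two trajectories whose initial conditions differ by one part in a million
# diverge completely within a few dozen iterations.
a = logistic_map(0.200000)
b = logistic_map(0.200001)
print(np.abs(a - b)[[0, 10, 20, 30, 40]])
```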
Abstract:
The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity which lie at the extremes of the number line, and a number nullity that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero. Zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind and, perhaps, the whole of mind relate to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties. In particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general-linear perspex-machine which is very much easier to program than the original perspex-machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than the real-number line.
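A minimal sketch of a total division operation following only the rules stated in the abstract (two signed infinities at the extremes of the number line and a nullity, 0/0, that lies off it) is given below; the representation, names, and reuse of Python floats are illustrative choices, not the perspex machine's.

```python
from math import inf, isnan, nan

# NULLITY stands in for the transreal number nullity ("no information"). Reusing
# IEEE NaN is only a convenience here; a faithful implementation would use a
# distinct value, since the abstract argues nullity has different semantics.
NULLITY = nan

def trans_div(a, b):
    """Total division: defined for every pair of inputs, never raises."""
    if isnan(a) or isnan(b):
        return NULLITY          # nullity propagates through every operation
    if b == 0:
        if a > 0:
            return inf          # positive / 0 -> positive infinity
        if a < 0:
            return -inf         # negative / 0 -> negative infinity
        return NULLITY          # 0 / 0 -> nullity
    return a / b

for x, y in [(1, 0), (-1, 0), (0, 0), (6, 3)]:
    print(f"{x}/{y} = {trans_div(x, y)}")
```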
Abstract:
The combination of model predictive control based on linear models (MPC) with feedback linearization (FL) has attracted interest for a number of years, giving rise to MPC+FL control schemes. An important advantage of such schemes is that feedback linearizable plants can be controlled with a linear predictive controller with a fixed model. Handling input constraints within such schemes is difficult since simple bound constraints on the input become state dependent because of the nonlinear transformation introduced by feedback linearization. This paper introduces a technique for handling input constraints within a real-time MPC+FL scheme, where the plant model employed is a class of dynamic neural networks. The technique is based on a simple affine transformation of the feasible area. A simulated case study is presented to illustrate the use and benefits of the technique.
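The paper's affine-transformation technique itself is not reproduced here, but the sketch below illustrates why the problem arises: fixed bounds on the physical input u become state-dependent bounds on the linearizing input v = alpha(x) + beta(x)*u. The scalar plant used is invented.

```python
import numpy as np

def transformed_input_bounds(alpha_x, beta_x, u_min, u_max):
    """Map fixed bounds on the physical input u onto bounds on the linearizing
    input v = alpha(x) + beta(x) * u, which therefore depend on the state x."""
    lo, hi = alpha_x + beta_x * u_min, alpha_x + beta_x * u_max
    return min(lo, hi), max(lo, hi)  # order can flip if beta(x) < 0

# Invented scalar example: alpha(x) = -x**3, beta(x) = 1 + x**2, with |u| <= 1.
for x in np.linspace(-2.0, 2.0, 5):
    v_lo, v_hi = transformed_input_bounds(-x**3, 1 + x**2, -1.0, 1.0)
    print(f"x={x:+.1f}: {v_lo:+.2f} <= v <= {v_hi:+.2f}")
```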
Abstract:
The Stokes drift induced by surface waves distorts turbulence in the wind-driven mixed layer of the ocean, leading to the development of streamwise vortices, or Langmuir circulations, on a wide range of scales. We investigate the structure of the resulting Langmuir turbulence, and contrast it with the structure of shear turbulence, using rapid distortion theory (RDT) and kinematic simulation of turbulence. Firstly, these linear models show clearly why elongated streamwise vortices are produced in Langmuir turbulence, when Stokes drift tilts and stretches vertical vorticity into horizontal vorticity, whereas elongated streaky structures in streamwise velocity fluctuations (u) are produced in shear turbulence, because there is a cancellation in the streamwise vorticity equation and instead it is vertical vorticity that is amplified. Secondly, we develop scaling arguments, illustrated by analysing data from LES, that indicate that Langmuir turbulence is generated when the deformation of the turbulence by mean shear is much weaker than the deformation by the Stokes drift. These scalings motivate a quantitative RDT model of Langmuir turbulence that accounts for deformation of turbulence by Stokes drift and blocking by the air–sea interface that is shown to yield profiles of the velocity variances in good agreement with LES. The physical picture that emerges, at least in the LES, is as follows. Early in the life cycle of a Langmuir eddy, initial turbulent disturbances of vertical vorticity are amplified algebraically by the Stokes drift into elongated streamwise vortices, the Langmuir eddies. The turbulence is thus in a near two-component state, with the streamwise fluctuations (u) suppressed and the energy residing in the crosswind fluctuations (v, w). Near the surface, over a depth of order the integral length scale of the turbulence, the vertical velocity (w) is brought to zero by blocking of the air–sea interface. Since the turbulence is nearly two-component, this vertical energy is transferred into the spanwise fluctuations (v), considerably enhancing them at the interface. After a time of order half the eddy decorrelation time, the nonlinear processes, such as distortion by the strain field of the surrounding eddies, arrest the deformation and the Langmuir eddy decays. Presumably, Langmuir turbulence then consists of a statistically steady state of such Langmuir eddies. The analysis then provides a dynamical connection between the flow structures in LES of Langmuir turbulence and the dominant balance between Stokes production and dissipation in the turbulent kinetic energy budget, found by previous authors.
Abstract:
Previous studies have made use of simplified general circulation models (sGCMs) to investigate the atmospheric response to various forcings. In particular, several studies have investigated the tropospheric response to changes in stratospheric temperature. This is potentially relevant for many climate forcings. Here the impact of changing the tropospheric climatology on the modeled response to perturbations in stratospheric temperature is investigated by the introduction of topography into the model and altering the tropospheric jet structure. The results highlight the need for very long integrations so as to determine accurately the magnitude of response. It is found that introducing topography into the model and thus removing the zonally symmetric nature of the model’s boundary conditions reduces the magnitude of response to stratospheric heating. However, this reduction is of comparable size to the variability in the magnitude of response between different ensemble members of the same 5000-day experiment. Investigations into the impact of varying tropospheric jet structure reveal a trend with lower-latitude/narrower jets having a much larger magnitude response to stratospheric heating than higher-latitude/wider jets. The jet structures that respond more strongly to stratospheric heating also exhibit longer time scale variability in their control run simulations, consistent with the idea that a feedback between the eddies and the mean flow is both responsible for the persistence of the control run variability and important in producing the tropospheric response to stratospheric temperature perturbations.
Abstract:
Atmosphere–ocean general circulation models (AOGCMs) predict a weakening of the Atlantic meridional overturning circulation (AMOC) in response to anthropogenic forcing of climate, but there is a large model uncertainty in the magnitude of the predicted change. The weakening of the AMOC is generally understood to be the result of increased buoyancy input to the north Atlantic in a warmer climate, leading to reduced convection and deep water formation. Consistent with this idea, model analyses have shown empirical relationships between the AMOC and the meridional density gradient, but this link is not direct because the large-scale ocean circulation is essentially geostrophic, making currents and pressure gradients orthogonal. Analysis of the budget of kinetic energy (KE) instead of momentum has the advantage of excluding the dominant geostrophic balance. Diagnosis of the KE balance of the HadCM3 AOGCM and its low-resolution version FAMOUS shows that KE is supplied to the ocean by the wind and dissipated by viscous forces in the global mean of the steady-state control climate, and the circulation does work against the pressure-gradient force, mainly in the Southern Ocean. In the Atlantic Ocean, however, the pressure-gradient force does work on the circulation, especially in the high-latitude regions of deep water formation. During CO2-forced climate change, we demonstrate a very good temporal correlation between the AMOC strength and the rate of KE generation by the pressure-gradient force in 50–70°N of the Atlantic Ocean in each of nine contemporary AOGCMs, supporting a buoyancy-driven interpretation of AMOC changes. To account for this, we describe a conceptual model, which offers an explanation of why AOGCMs with stronger overturning in the control climate tend to have a larger weakening under CO2 increase.
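The diagnostic referred to above, the rate of kinetic-energy generation by the pressure-gradient force, can be written explicitly; the notation below is a standard form and the sign convention is my own statement of it, not a formula quoted from the paper.

```latex
% Rate of kinetic-energy generation by the pressure-gradient force, integrated over
% a region V (in the paper, the Atlantic between 50 and 70 degrees North):
G_p \;=\; -\int_V \mathbf{u}\cdot\nabla p \,\mathrm{d}V ,
% G_p > 0 where the pressure-gradient force does work on the circulation,
% G_p < 0 where the circulation does work against the pressure-gradient force.
```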
Abstract:
Explosive volcanic eruptions cause episodic negative radiative forcing of the climate system. Using coupled atmosphere-ocean general circulation models (AOGCMs) subjected to historical forcing since the late nineteenth century, previous authors have shown that each large volcanic eruption is associated with a sudden drop in ocean heat content and sea level from which the subsequent recovery is slow. Here we show that this effect may be an artefact of experimental design, caused by the AOGCMs not having been spun up to a steady state with volcanic forcing before the historical integrations begin. Because volcanic forcing has a long-term negative average, a cooling tendency is thus imposed on the ocean in the historical simulation. We recommend that an extra experiment be carried out in parallel to the historical simulation, with constant time-mean historical volcanic forcing, in order to correct for this effect and avoid misinterpretation of ocean heat content changes.
Abstract:
Coupled atmosphere‐ocean general circulation models have a tendency to drift away from a realistic climatology. The modelled climate response to an increase of CO2 concentration may be incorrect if the simulation of the current climate has significant errors, so in many models, including ours, the drift is counteracted by applying prescribed fluxes of heat and fresh water at the ocean‐atmosphere interface in addition to the calculated surface exchanges. Since the additional fluxes do not have a physical basis, the use of this technique of “flux adjustment” itself introduces some uncertainty in the simulated response to increased CO2. We find that the global‐average temperature response of our model to CO2 increasing at 1% per year is about 30% less without flux adjustment than with flux adjustment. The geographical patterns of the response are similar, indicating that flux adjustment is not causing any gross distortion. The reduced size of the response is due to more effective vertical transport of heat into the ocean, and a somewhat smaller climate sensitivity. Although the response in both cases lies within the generally accepted range for the climate sensitivity, systematic uncertainties of this size are clearly undesirable, and the best strategy for future development is to improve the climate model in order to reduce the need for flux adjustment.