57 results for Linear analysis

in CentAUR: Central Archive, University of Reading - UK


Relevance:

100.00%

Publisher:

Abstract:

The current energy requirements system used in the United Kingdom for lactating dairy cows utilizes key parameters such as metabolizable energy intake (MEI) at maintenance (MEm), the efficiencies of utilization of MEI for maintenance, milk production (k_l), and growth (k_g), and the efficiency of utilization of body stores for milk production (k_t). Traditionally, these have been determined using linear regression methods to analyze energy balance data from calorimetry experiments. Many studies have raised concerns over current energy feeding systems, particularly in relation to these key parameters and the linear models used in their analysis. Therefore, a database containing 652 dairy cow observations was assembled from calorimetry studies in the United Kingdom. Five functions for analyzing energy balance data were considered: a straight line, two diminishing-returns functions (the Mitscherlich and the rectangular hyperbola), and two sigmoidal functions (the logistic and the Gompertz). Meta-analysis of the data was conducted to estimate k_g and k_t. Values of 0.83 to 0.86 and 0.66 to 0.69 were obtained for k_g and k_t, respectively, using all the functions (with standard errors of 0.028 and 0.027), which differ considerably from previous reports of 0.60 to 0.75 for k_g and 0.82 to 0.84 for k_t. Using the estimated values of k_g and k_t, the data were corrected to allow for body tissue changes. Based on the definition of k_l as the derivative of the ratio of milk energy derived from MEI to MEI directed towards milk production, MEm and k_l were determined. Meta-analysis of the pooled data showed that the average k_l ranged from 0.50 to 0.58 and MEm ranged between 0.34 and 0.64 MJ/kg of BW^0.75 per day.
Although the constrained Mitscherlich fitted the data as well as the straight line, more observations at high energy intakes (above 2.4 MJ/kg of BW^0.75 per day) are required to determine conclusively whether milk energy is related to MEI linearly or not.
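
The competing response functions can be sketched and fitted by ordinary least squares; the sketch below uses synthetic data and invented parameter values (not the paper's database) to illustrate the straight-line fit alongside two of the alternative functional forms:

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate functions relating milk energy output to ME intake (MEI).
# Parameter names are illustrative, not those of the original analysis.
def straight_line(mei, a, b):
    return a + b * mei

def mitscherlich(mei, a, b, c):
    # Diminishing returns: approaches the asymptote a as MEI grows.
    return a * (1.0 - np.exp(-c * (mei - b)))

def gompertz(mei, a, b, c):
    # Sigmoidal response.
    return a * np.exp(-b * np.exp(-c * mei))

# Synthetic energy-balance data (MJ/kg BW^0.75 per day), purely illustrative.
rng = np.random.default_rng(0)
mei = np.linspace(0.5, 2.4, 40)
e_l = 0.6 * mei - 0.3 + rng.normal(0.0, 0.02, mei.size)

popt, _ = curve_fit(straight_line, mei, e_l)
# popt[1] plays the role of the marginal efficiency (k_l) in the linear model;
# the non-linear forms are fitted the same way, given sensible starting values.
```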

Relevance:

60.00%

Publisher:

Abstract:

We investigate baroclinic instability in flow conditions relevant to hot extrasolar planets. The instability is important for transporting and mixing heat, as well as for influencing large-scale variability on the planets. Both linear normal mode analysis and non-linear initial value calculations are carried out, focusing on the freely-evolving, adiabatic situation. Using a high-resolution general circulation model (GCM) which solves the traditional primitive equations, we show that large-scale jets similar to those observed in current GCM simulations of hot extrasolar giant planets are likely to be baroclinically unstable on a timescale of a few to a few tens of planetary rotations, generating cyclones and anticyclones that drive weather systems. The growth rate and scale of the most unstable mode obtained in the linear analysis are in good qualitative agreement with the full non-linear calculations. In general, unstable jets evolve differently depending on their signs (eastward or westward), due to the change in sign of the jet curvature. For jets located at or near the equator, instability is strong at the flanks, but not at the core. Crucially, the instability is either poorly captured or not captured at all in simulations with low resolution and/or high artificial viscosity. Hence, the instability has not been observed or emphasized in past circulation studies of hot extrasolar planets.
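
A growth timescale of this order can be sanity-checked with the classic Eady growth-rate estimate, sigma_E = 0.31 f |dU/dz| / N. The sketch below uses illustrative stand-in numbers for a hot-planet jet, not values from the paper:

```python
import numpy as np

# Back-of-envelope Eady growth-rate estimate for a baroclinic jet.
# All planetary numbers below are assumptions for illustration.
rotation_period = 3.0 * 86400.0                # seconds (assumed 3-day rotator)
omega = 2.0 * np.pi / rotation_period
f = 2.0 * omega * np.sin(np.radians(45.0))     # Coriolis parameter at 45 degrees
shear = 2.0e-3                                 # vertical shear dU/dz, 1/s (assumed)
N = 2.0e-2                                     # buoyancy frequency, 1/s (assumed)

sigma = 0.31 * f * abs(shear) / N              # Eady maximum growth rate, 1/s
e_folding_rotations = 1.0 / (sigma * rotation_period)
```

For these numbers the e-folding time comes out at a few rotations, consistent with the "few to a few tens of planetary rotations" range quoted above.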

Relevance:

40.00%

Publisher:

Abstract:

A limitation of small-scale dairy systems in central Mexico is that traditional feeding strategies are less effective when nutrient availability varies through the year. In the present work, a linear programming (LP) model that maximizes income over feed cost was developed and used to evaluate two strategies: the traditional one used by the small-scale dairy producers in Michoacan State, based on fresh lucerne, maize grain and maize straw; and an alternative strategy proposed by the LP model, based on ryegrass hay, maize silage and maize grain. Biological and economic efficiency were evaluated for both strategies. Results obtained with the traditional strategy agree with previously published work. The alternative strategy did not improve upon the performance of the traditional strategy because of the low metabolizable protein content of the maize silage considered by the model. However, the study recommends improving forage quality to increase the efficiency of small-scale dairy systems, rather than relying on concentrate supplementation.
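
An income-over-feed-cost LP of this kind can be sketched as follows; every number (costs, energy and protein contents, requirements) is invented for illustration and does not come from the study:

```python
from scipy.optimize import linprog

# Hypothetical income-over-feed-cost LP. Decision variables: kg DM/day of feed.
feeds = ["fresh lucerne", "maize grain", "maize straw"]
cost = [0.10, 0.18, 0.05]            # $/kg DM (assumed)
me = [10.0, 13.5, 6.5]               # MJ ME/kg DM (assumed)
mp = [140.0, 90.0, 40.0]             # g MP/kg DM (assumed)

milk_price_per_mj = 0.03             # income attributed per MJ ME fed (assumed)

# Minimising (cost - income) is the same as maximising income over feed cost.
c = [cost[i] - milk_price_per_mj * me[i] for i in range(3)]

# Constraints: meet minimum ME and MP requirements, cap total DM intake.
A_ub = [[-m for m in me],            # ME supplied >= 160 MJ/day
        [-p for p in mp],            # MP supplied >= 1500 g/day
        [1.0, 1.0, 1.0]]             # total DM intake <= 18 kg/day
b_ub = [-160.0, -1500.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
ration = dict(zip(feeds, res.x))     # kg DM/day of each feed
```

Swapping the protein content of a feed (as with the low-MP maize silage) changes the binding constraint and hence the optimal ration, which is the mechanism the abstract describes.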

Relevance:

40.00%

Publisher:

Abstract:

This paper considers the use of radial basis function and multi-layer perceptron networks for linear or linearizable adaptive feedback control schemes in a discrete-time environment. A close look is taken at the model structure selected and the extent of the resulting parameterization. A comparison is made with standard, non-neural-network algorithms, e.g. self-tuning control.
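
The key structural point is that an RBF network with fixed centres is linear in its parameters, so the same least-squares machinery used in self-tuning control applies. A minimal sketch, with an invented plant response and illustrative variable names:

```python
import numpy as np

# Minimal radial basis function (RBF) network used as a one-step-ahead model
# y(k) = f(y(k-1), u(k-1)). With fixed centres the model is linear in its
# weights, so ordinary least squares estimates them directly.
def rbf_design(x, centres, width):
    # Gaussian basis activations, one column per centre.
    d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=(200, 2))       # rows: [y(k-1), u(k-1)]
y = 0.5 * x[:, 0] + 0.3 * x[:, 1] ** 2          # "unknown" plant (assumed)

centres = rng.uniform(-1.0, 1.0, size=(25, 2))  # fixed, randomly placed centres
phi = rbf_design(x, centres, width=0.5)
w, *_ = np.linalg.lstsq(phi, y, rcond=None)     # linear-in-parameters estimate

mse = float(np.mean((phi @ w - y) ** 2))
```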

Relevance:

40.00%

Publisher:

Abstract:

We provide a unified framework for a range of linear transforms that can be used for the analysis of terahertz spectroscopic data, with particular emphasis on their application to the measurement of leaf water content. The use of linear transforms for filtering, regression, and classification is discussed. For illustration, a classification problem involving leaves at three stages of drought and a prediction problem involving simulated spectra are presented. Issues resulting from scaling the data set are discussed. Using Lagrange multipliers, we arrive at the transform that yields the maximum separation between the spectra and show that this optimal transform is equivalent to computing the Euclidean distance between the samples. The optimal linear transform is compared with the average for all the spectra as well as with the Karhunen–Loève transform to discriminate a wet leaf from a dry leaf. We show that taking several principal components into account is equivalent to defining new axes in which data are to be analyzed. The procedure shows that the coefficients of the Karhunen–Loève transform are well suited to the process of classification of spectra. This is in line with expectations, as these coefficients are built from the statistical properties of the data set analyzed.
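
The classification idea (project onto leading Karhunen–Loève components, then compare Euclidean distances to the class means) can be sketched as below; the "wet" and "dry" spectra are simulated toy curves, not terahertz measurements:

```python
import numpy as np

# Sketch: Karhunen-Loeve (principal component) projection followed by
# Euclidean-distance classification against class-mean spectra.
rng = np.random.default_rng(2)
freq = np.linspace(0.1, 3.0, 100)                        # arbitrary axis
wet = np.exp(-freq) + rng.normal(0.0, 0.01, (20, 100))   # high absorption
dry = 0.5 * np.exp(-freq) + rng.normal(0.0, 0.01, (20, 100))

data = np.vstack([wet, dry])
mean = data.mean(axis=0)
# Karhunen-Loeve transform: eigenvectors of the data covariance, via SVD.
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
proj = (data - mean) @ vt[:3].T                          # first 3 components

wet_mu, dry_mu = proj[:20].mean(axis=0), proj[20:].mean(axis=0)

def classify(p):
    # Euclidean distance in the reduced space.
    return "wet" if np.linalg.norm(p - wet_mu) < np.linalg.norm(p - dry_mu) else "dry"

labels = [classify(p) for p in proj]
```

Keeping several components amounts to defining the new axes in which the distance is computed, which is the point made in the abstract.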

Relevance:

40.00%

Publisher:

Abstract:

The auditory brainstem response (ABR) is of fundamental importance to the investigation of the behavior of the auditory system, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analyzing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time when these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. In this context, the aim of this research is to compare ABR manual/visual analyses provided by different examiners. Methods: The ABR data were collected from 10 normal-hearing subjects (5 men and 5 women, aged from 20 to 52 years). A total of 160 data samples were analyzed and a pairwise comparison between four distinct examiners was executed. We carried out a statistical study aiming to identify significant differences between the assessments provided by the examiners. For this, we used linear regression in conjunction with the bootstrap as a method for evaluating the relation between the responses given by the examiners. Results: The analysis suggests agreement among the examiners, but reveals differences between assessments of the variability of the waves. We quantified the magnitude of the obtained wave latency differences: 18% of the investigated waves presented substantial (large or moderate) differences, and of these 3.79% were considered not acceptable for clinical practice. Conclusions: Our results characterize the variability of the manual analysis of ABR data and the necessity of establishing unified standards and protocols for the analysis of these data. These results may also contribute to the validation and development of automatic systems that are employed in the early diagnosis of hearing loss.
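
Linear regression with a bootstrap confidence interval for the slope relating two examiners' readings can be sketched as follows; the latency data are simulated, not the study's recordings:

```python
import numpy as np

# Bootstrap confidence interval for the regression slope between two
# examiners' wave-latency readings (simulated values in ms).
rng = np.random.default_rng(3)
examiner_a = rng.uniform(5.0, 7.0, 160)              # 160 samples, as in the study
examiner_b = examiner_a + rng.normal(0.0, 0.05, 160) # near-agreement plus noise

def slope(x, y):
    # Ordinary least-squares slope.
    return np.polyfit(x, y, 1)[0]

boot = np.array([
    slope(examiner_a[idx], examiner_b[idx])
    for idx in (rng.integers(0, 160, 160) for _ in range(1000))
])
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])

# Perfect agreement corresponds to a slope of 1; test whether 1 lies in the CI.
agree = bool(lo_ci <= 1.0 <= hi_ci)
```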

Relevance:

30.00%

Publisher:

Abstract:

We present the extension of a methodology to solve moving boundary value problems from the second-order case to the case of the third-order linear evolution PDE q_t + q_xxx = 0. This extension is the crucial step needed to generalize this methodology to PDEs of arbitrary order. The methodology is based on the derivation of inversion formulae for a class of integral transforms that generalize the Fourier transform, and on the analysis of the global relation associated with the PDE. The study of this relation and its inversion using the appropriate generalized transform are the main elements of the proof of our results.
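
For orientation: on the whole line, where no moving boundary is present and the classical Fourier transform suffices, substituting a plane wave q = e^{ikx - iωt} into q_t + q_xxx = 0 gives -iω + (ik)^3 = 0, i.e. ω(k) = -k^3, so the initial value problem is solved by

```latex
q(x,t) = \frac{1}{2\pi} \int_{-\infty}^{\infty}
         e^{\,ikx + ik^{3}t}\, \hat{q}_{0}(k)\, \mathrm{d}k,
\qquad
\hat{q}_{0}(k) = \int_{-\infty}^{\infty} e^{-ikx}\, q(x,0)\, \mathrm{d}x.
```

The methodology described above replaces this standard transform pair with generalized transforms adapted to the moving boundary, where the ordinary Fourier representation no longer applies.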

Relevance:

30.00%

Publisher:

Abstract:

Water table response to rainfall was investigated at six sites in the Upper, Middle and Lower Chalk of southern England. Daily time series of rainfall and borehole water level were cross-correlated to investigate seasonal variations in groundwater-level response times, based on periods of 3-month duration. The time lags (in days) yielding significant correlations were compared with the average unsaturated zone thickness during each 3-month period. In general, when the unsaturated zone was greater than 18 m thick, the time lag for a significant water-level response increased rapidly once the depth to the water table exceeded a critical value, which varied from site to site. For shallower water tables, a linear relationship between the depth to the water table and the water-level response time was evident. The observed variations in response time can only be partially accounted for using a diffusive model of propagation through the unsaturated matrix, suggesting that some fissure flow was occurring. The majority of rapid responses were observed during the winter/spring recharge period, when the unsaturated zone is thinnest and its moisture content is highest, and were more likely to occur when the rainfall intensity exceeded 5 mm/day. At some sites, a very rapid response within 24 h of rainfall was observed in addition to the longer-term responses, even when the unsaturated zone was up to 64 m thick. This response was generally associated with the autumn period. The results of the cross-correlation analysis provide statistical support for the presence of fissure flow and for the contribution of multiple pathways through the unsaturated zone to groundwater recharge.
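
A lagged cross-correlation of the kind described can be sketched for one 3-month window as follows; both daily series and the 7-day response lag are synthetic:

```python
import numpy as np

# Lagged cross-correlation between daily rainfall and borehole water level
# over one ~3-month window (synthetic data; the true lag here is 7 days).
rng = np.random.default_rng(4)
n, true_lag = 90, 7
rain = rng.gamma(0.5, 4.0, n)          # intermittent daily rainfall (mm, toy)
level = np.zeros(n)
level[true_lag:] = rain[:-true_lag]    # water level responds 7 days later
level += rng.normal(0.0, 0.3, n)       # measurement noise

def lagged_corr(x, y, lag):
    # Correlation of x(t - lag) with y(t).
    if lag == 0:
        return float(np.corrcoef(x, y)[0, 1])
    return float(np.corrcoef(x[:-lag], y[lag:])[0, 1])

corrs = [lagged_corr(rain, level, k) for k in range(30)]
best_lag = int(np.argmax(corrs))       # lag of the strongest response
```

Repeating this for each 3-month period and plotting `best_lag` against unsaturated zone thickness reproduces the style of analysis described above.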

Relevance:

30.00%

Publisher:

Abstract:

Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface.
The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent from a non-spatial validation.
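
A toy illustration of why non-spatial statistics can hide scale-dependent performance; the 1-D "transect" is synthetic, not the soil-N data, and block averaging stands in for the REML mixed-model decomposition:

```python
import numpy as np

# Synthetic field: predictions and observations share a coarse trend but
# differ in fine-scale variance, mimicking the behaviour described above.
rng = np.random.default_rng(5)
n = 256
trend = np.linspace(50.0, 80.0, n)           # spatial trend shared by both
obs = trend + rng.normal(0.0, 8.0, n)        # observations: large fine-scale variance
pred = trend + rng.normal(0.0, 2.0, n)       # model: underestimates that variance

# Non-spatial validation statistics.
mean_error = float(np.mean(pred - obs))
msd = float(np.mean((pred - obs) ** 2))      # mean squared difference
r_overall = float(np.corrcoef(obs, pred)[0, 1])

# Fine scale: remove the trend; the residuals are uncorrelated.
r_fine = float(np.corrcoef(obs - trend, pred - trend)[0, 1])

# Coarse scale: block-average; the shared trend now dominates.
coarse = lambda v: v.reshape(32, 8).mean(axis=1)
r_coarse = float(np.corrcoef(coarse(obs), coarse(pred))[0, 1])
```

Here `r_overall` alone looks respectable, while the fine/coarse split shows the model agrees only at the trend scale, which is the distinction the spatial validation makes.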

Relevance:

30.00%

Publisher:

Abstract:

The management of a public sector project is analysed using a model developed from systems theory. Linear responsibility analysis is used to identify the primary and key decision structure of the project and to generate quantitative data regarding differentiation and integration of the operating system, the managing system and the client/project team. The environmental context of the project is identified. Conclusions are drawn regarding the project organization structure's ability to cope with the prevailing environmental conditions. It is found that the managing system imposed on the project was too complex to cope with these conditions, which created serious deficiencies in the outcome of the project.

Relevance:

30.00%

Publisher:

Abstract:

The principles of organization theory are applied to the organization of construction projects. This is done by proposing a framework for modelling the whole process of building procurement, which includes a framework for describing the environments within which construction projects take place. This is followed by the development of a series of hypotheses about the organizational structure of construction projects. Four case studies are undertaken, and the extent to which their organizational structure matches the model is compared with the level of success achieved by each project. To this end, a systematic method for evaluating the success of building project organizations is developed, because any conclusions about the adequacy of a particular organization must be related to the degree of success achieved by that organization. In order to test these hypotheses, a mapping technique is developed. The technique offered is a development of a technique known as Linear Responsibility Analysis, and is called "3R analysis" as it deals with roles, responsibilities and relationships. The analysis of the case studies shows that they tended to suffer from inappropriate organizational structures. One of the prevailing problems of public sector organization is that organizational structures are inadequately defined and too cumbersome to respond to environmental demands on the project. The projects tended to be organized as rigid hierarchies, particularly at decision points, when what was required was a more flexible, dynamic and responsive organization. The study concludes with a series of recommendations, including suggestions for increasing the responsiveness of construction project organizations and reducing the lead-in times for the inception periods.

Relevance:

30.00%

Publisher:

Abstract:

Nitrogen oxide biogenic emissions from soils are driven by soil and environmental parameters. The relationship between these parameters and NO fluxes is highly non-linear. A new algorithm, based on a neural network calculation, is used to reproduce the NO biogenic emissions linked to precipitations in the Sahel on 6 August 2006 during the AMMA campaign. This algorithm has been coupled into the surface scheme of a coupled chemistry-dynamics model (MesoNH Chemistry) to estimate the impact of the NO emissions on NOx and O3 formation in the lower troposphere for this particular episode. Four different simulations on the same domain and at the same period are compared: one with anthropogenic emissions only, one with soil NO emissions from a static inventory at low time and space resolution, one with NO emissions from the neural network, and one with NO from the neural network plus lightning NOx. The influence of NOx from lightning is limited to the upper troposphere. The NO emission from soils calculated with the neural network responds to changes in soil moisture, giving enhanced emissions over the wetted soil, as observed by aircraft measurements after the passage of a convective system. The subsequent enhancement of NOx and ozone is limited to the lowest layers of the atmosphere in the model, whereas measurements show higher concentrations above 1000 m. The neural network algorithm, applied in the Sahel region for one particular day of the wet season, allows an immediate response of fluxes to environmental parameters, unlike static emission inventories. Stewart et al. (2008) is a companion paper to this one: it looks at NOx and ozone concentrations in the boundary layer as measured on a research aircraft, examines how they vary with respect to the soil moisture, as indicated by surface temperature anomalies, and deduces NOx fluxes. In the current paper the model-derived results are compared to the observations and calculated fluxes presented by Stewart et al. (2008).
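
A neural-network emulation of a non-linear flux response can be sketched as below. The "true" pulse-shaped flux function, the network size and the training settings are all invented for illustration; this is not the emission algorithm used in the study:

```python
import numpy as np

# Tiny feed-forward network (one tanh hidden layer, full-batch gradient
# descent) learning an invented pulse-like NO-flux response to soil
# moisture and temperature.
rng = np.random.default_rng(6)

x = rng.uniform(0.0, 1.0, size=(500, 2))   # columns: scaled [moisture, temperature]
flux = np.exp(-(x[:, 0] - 0.3) ** 2 / 0.05) * (0.5 + x[:, 1])
y = flux[:, None]

w1 = rng.normal(0.0, 1.0, (2, 20)); b1 = np.zeros(20)
w2 = rng.normal(0.0, 1.0, (20, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(10000):
    h = np.tanh(x @ w1 + b1)
    out = h @ w2 + b2
    err = out - y
    # Backpropagation (mean-squared-error gradients).
    g_w2 = h.T @ err / len(x)
    g_b2 = err.mean(axis=0)
    g_h = (err @ w2.T) * (1.0 - h ** 2)
    g_w1 = x.T @ g_h / len(x)
    g_b1 = g_h.mean(axis=0)
    w1 -= lr * g_w1; b1 -= lr * g_b1
    w2 -= lr * g_w2; b2 -= lr * g_b2

mse = float(np.mean((np.tanh(x @ w1 + b1) @ w2 + b2 - y) ** 2))
```

The appeal over a static inventory, as the abstract notes, is that the fitted network responds immediately to a change in its inputs (here, soil moisture).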

Relevance:

30.00%

Publisher:

Abstract:

Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index which is correlated to its topological locality. A popular dimensionality reduction technique is the space filling Hilbert’s curve, as it possesses good locality preserving properties. However, there exists little comparison between Hilbert’s curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. Hilbert’s curve, Sammon’s mapping and Principal Component Analysis have been used to generate a 1d space with locality preserving properties. This work provides empirical evidence to support the use of Hilbert’s curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2d network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest neighbour analysis confirms Hilbert’s curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvements and better techniques to preserve locality information are required.
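
Deriving a 1-D peer index from a latency vector via the Hilbert curve can be sketched with the classic `xy2d` mapping; the latency scaling constants and function names are illustrative, not from the paper:

```python
# Classic Hilbert-curve index (xy2d) plus a toy peer-identifier derivation
# from a 2-D landmark latency vector.
def xy2d(n, x, y):
    # Map (x, y) on an n x n grid (n a power of two) to its Hilbert index.
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant into canonical orientation.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

def peer_id(lat_a_ms, lat_b_ms, max_latency_ms=200.0, n=256):
    # Quantise the 2-D landmark vector onto the grid, then index it.
    gx = min(int(lat_a_ms / max_latency_ms * n), n - 1)
    gy = min(int(lat_b_ms / max_latency_ms * n), n - 1)
    return xy2d(n, gx, gy)
```

The locality-preserving property being exploited is that consecutive Hilbert indices always correspond to adjacent grid cells, so peers with similar landmark latencies tend to receive nearby identifiers.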

Relevance:

30.00%

Publisher:

Abstract:

In the past decade, the amount of data in the biological field has become larger and larger, bio-techniques for the analysis of biological data have been developed, and new tools have been introduced. Several computational methods are based on unsupervised neural network algorithms that are widely used for multiple purposes, including clustering and visualization, e.g. the Self-Organizing Map (SOM). Unfortunately, even though this method is unsupervised, its performance in terms of quality of result and learning speed depends strongly on the initialization of the neuron weights. In this paper we present a new initialization technique based on a totally connected undirected graph that encodes relations among interesting features of the input data. Results of experimental tests, in which the proposed algorithm is compared to the original initialization techniques, show that our technique assures faster learning and better performance in terms of quantization error.
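
The effect of initialization on the quantization-error metric can be sketched with a minimal SOM. The graph-based scheme of the paper is more elaborate; below, a simple data-driven initialization (sampling training points) stands in for it, against a deliberately poor random one:

```python
import numpy as np

# Minimal SOM sketch: quantization error for a poor random initialisation
# versus a data-driven one (sampling training points as initial weights).
rng = np.random.default_rng(7)
data = rng.normal(0.0, 1.0, size=(400, 3))
grid = 5                                              # 5 x 5 map, 25 units

def quantization_error(weights, data):
    # Mean distance from each sample to its best-matching unit (BMU).
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def train_som(weights, data, epochs=5, lr0=0.5, sigma0=2.0):
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)
    w = weights.copy()
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)                 # decaying learning rate
        sigma = sigma0 * (1.0 - e / epochs) + 0.5     # shrinking neighbourhood
        for x in data:
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))
            h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1)
                       / (2.0 * sigma ** 2))
            w += lr * h[:, None] * (x - w)
    return w

w_rand = rng.normal(0.0, 5.0, size=(grid * grid, 3))  # far from the data cloud
w_data = data[rng.choice(len(data), grid * grid, replace=False)]

qe0_rand = quantization_error(w_rand, data)
qe0_data = quantization_error(w_data, data)
qe_rand = quantization_error(train_som(w_rand, data), data)
```

A good initialization starts the map with a much lower quantization error, which is why it translates into faster learning.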

Relevance:

30.00%

Publisher:

Abstract:

A multivariate fit to the variation in global mean surface air temperature anomaly over the past half century is presented. The fit procedure allows for the effect of response time on the waveform, amplitude and lag of each radiative forcing input, and each is allowed to have its own time constant. It is shown that the contribution of solar variability to the temperature trend since 1987 is small and downward; the best estimate is -1.3% and the 2σ confidence level sets the uncertainty range of -0.7 to -1.9%. The result is the same if one quantifies the solar variation using galactic cosmic ray fluxes (for which the analysis can be extended back to 1953) or the most accurate total solar irradiance data composite. The rise in the global mean air surface temperatures is predominantly associated with a linear increase that represents the combined effects of changes in anthropogenic well-mixed greenhouse gases and aerosols, although, in recent decades, there is also a considerable contribution by a relative lack of major volcanic eruptions. The best estimate is that the anthropogenic factors contribute 75% of the rise since 1987, with an uncertainty range (set by the 2σ confidence level using an AR(1) noise model) of 49–160%; thus, the uncertainty is large, but we can state that at least half of the temperature trend comes from the linear term and that this term could explain the entire rise. The results are consistent with the Intergovernmental Panel on Climate Change (IPCC) estimates of the changes in radiative forcing (given for 1961–1995) and are here combined with those estimates to find the response times, equilibrium climate sensitivities and pertinent heat capacities (i.e. the depth into the oceans to which a given radiative forcing variation penetrates) of the quasi-periodic (decadal-scale) input forcing variations.
As shown by previous studies, the decadal-scale variations do not penetrate as deeply into the oceans as the longer term drifts and have shorter response times. Hence, conclusions about the response to century-scale forcing changes (and hence the associated equilibrium climate sensitivity and the temperature rise commitment) cannot be made from studies of the response to shorter period forcing changes.
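
The core of such a fit, convolving each forcing with an exponential response of its own time constant and then estimating amplitudes by multivariate linear regression, can be sketched as below. All forcing series, time constants and amplitudes are synthetic stand-ins, not the study's data:

```python
import numpy as np

# Each forcing is passed through an exponential response (its own time
# constant), then amplitudes come from a linear least-squares fit.
rng = np.random.default_rng(8)
t = np.arange(1953, 2008)

def respond(forcing, tau):
    # Causal convolution with a normalised exponential response (lag + smoothing).
    kernel = np.exp(-np.arange(30) / tau)
    kernel /= kernel.sum()
    return np.convolve(forcing, kernel)[: len(forcing)]

solar = np.sin(2.0 * np.pi * (t - 1953) / 11.0)   # 11-year cycle proxy
anthro = 0.01 * (t - 1953)                        # slow linear ramp
volcanic = np.zeros(len(t))
volcanic[[10, 30]] = -1.0                         # two eruption spikes

temp = (0.8 * respond(anthro, 5.0) + 0.05 * respond(solar, 2.0)
        + 0.2 * respond(volcanic, 3.0) + rng.normal(0.0, 0.01, len(t)))

X = np.column_stack([respond(anthro, 5.0), respond(solar, 2.0),
                     respond(volcanic, 3.0)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)   # recovered amplitudes
```

In this toy setting the regression recovers the planted amplitudes; in the real analysis the time constants themselves are also fitted, which is what yields the response times and heat capacities discussed above.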