41 results for Linear Models
Abstract:
How animals manage time and expend energy has implications for survivorship. Being able to measure key metabolic costs of animals under natural conditions is therefore an important tool in behavioral ecology. One method for estimating activity-specific metabolic rate is via derived measures of acceleration, often 'overall dynamic body acceleration' (ODBA), recorded by an instrumented acceleration logger. ODBA has been shown to correlate well with the rate of oxygen consumption (V̇O₂) in a range of species during activity in the laboratory. This study devised a method for attaching acceleration loggers to decapod crustaceans and then correlated ODBA against concurrent respirometry readings to assess accelerometry as a proxy for activity-specific energy expenditure in a model species, the American lobster Homarus americanus. Where the instrumented animals exhibited a sufficient range of activity levels, positive linear relationships were found between V̇O₂ and ODBA over 20 min periods at a range of ambient temperatures (6, 13 and 20°C). Mixed-effect linear models based on these data and morphometrics provided reasonably strong predictive power for estimating activity-specific V̇O₂ from ODBA. These V̇O₂–ODBA calibrations demonstrate the potential of accelerometry as an effective predictor of behavior-specific metabolic rate of crustaceans in the wild during periods of activity.
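As a hedged illustration of the calibration idea (not the study's actual code or data), the sketch below derives ODBA from tri-axial accelerometer samples by subtracting a per-axis running mean (the static, gravitational component) and summing the absolute dynamic components, then fits a linear calibration of V̇O₂ on ODBA; the sampling rate, smoothing window and all numbers are assumptions.

```python
import numpy as np

def odba(ax, ay, az, fs=25, window_s=2.0):
    """Overall dynamic body acceleration: remove a per-axis running mean
    (the static, gravitational component) and sum the absolute dynamic parts."""
    n = max(1, int(fs * window_s))
    kernel = np.ones(n) / n
    dyn = []
    for a in (ax, ay, az):
        static = np.convolve(a, kernel, mode="same")  # running-mean smoothing
        dyn.append(np.abs(a - static))
    return dyn[0] + dyn[1] + dyn[2]

# Hypothetical calibration: mean ODBA vs. measured VO2 per 20-min trial.
mean_odba = np.array([0.02, 0.05, 0.08, 0.12, 0.15])  # g, illustrative values
vo2 = np.array([18.0, 26.0, 35.0, 48.0, 55.0])        # mg O2/kg/h, illustrative
slope, intercept = np.polyfit(mean_odba, vo2, 1)      # VO2 ~ slope*ODBA + intercept
print(f"VO2 ~ {slope:.1f} * ODBA + {intercept:.1f}")
```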
Abstract:
Objective
To examine age- and gender-specific trends in coronary heart disease (CHD) and stroke mortality in two neighbouring countries, the Republic of Ireland (ROI) and Northern Ireland (NI).
Design
Epidemiological study of time trends in CHD and stroke mortality.
Setting/patients
The populations of the ROI and NI, 1985–2010.
Interventions
None.
Main outcome measures
Directly age standardised CHD and stroke mortality rates were calculated and analysed using joinpoint regression to identify years where the slope of the linear trend changed significantly. This was performed separately for specific age groups (25–54, 55–64, 65–74 and 75–84 years) and by gender. Annual percentage change (APC) and 95% CIs are presented.
Results
There was a striking similarity between the two countries, with declines between 1985–1989 and 2006–2010 of 67% and 69% in CHD mortality, and of 64% and 62% in stroke mortality, for the ROI and NI respectively. However, joinpoint analysis identified differences in the pace of change between the two countries. There was an accelerated pace of decline (negative APC) in mortality from both CHD and stroke in both countries from the mid-1990s (APC: ROI −8% (95% CI −9.5 to −6.5) and NI −6.6% (−6.9 to −6.3)), although the accelerated decrease started later for CHD mortality in the ROI. In recent years, a levelling off was observed in CHD mortality in the 25–54 year age group in NI and in stroke mortality for men and women in the ROI.
Conclusions
While differences in the pace of change in mortality were observed at different time points, similar, substantial decreases in CHD and stroke mortality were achieved between 1985–1989 and 2006–2010 in the ROI and NI despite important differences in health service structures. There is evidence of a levelling off in mortality rates in some groups in recent years.
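For readers unfamiliar with the annual percentage change (APC) statistic used above, a minimal sketch of how it falls out of joinpoint-style analysis: within each segment the log of the rate is regressed linearly on calendar year, and APC = 100·(e^β − 1). The mortality rates below are synthetic, not the study's data.

```python
import numpy as np

years = np.arange(1995, 2005)
rates = 220 * 0.92 ** (years - years[0])       # synthetic rates falling ~8%/year
beta = np.polyfit(years, np.log(rates), 1)[0]  # slope of the log-linear trend
apc = 100 * (np.exp(beta) - 1)
print(f"APC = {apc:.1f}% per year")            # prints approximately -8.0
```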
Abstract:
In this paper the evolution of a time-domain dynamic identification technique based on a statistical moment approach is presented. The technique can be applied to structures under base random excitation, in both the linear and the nonlinear regime. By applying Itô stochastic calculus, special algebraic equations can be obtained that depend on the statistical moments of the response of the system to be identified. Such equations can be used for the dynamic identification of the mechanical parameters and of the input. Unlike many techniques in the literature, these equations make it possible to identify the dissipation characteristics independently of the input. The paper first presents the original formulation of the technique, applicable to nonlinear systems and based on a restricted class of potential models. A second formulation, applicable to any kind of linear system and based on a class of linear models characterized by a mass-proportional damping matrix, is then described.
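As a worked illustration of the moment-equation idea (a standard consequence of Itô calculus, not the paper's specific derivation): for a linear system dX = AX dt + B dW driven by white noise, Itô's lemma yields closed equations for the first two response moments,

```latex
dX = A X\,dt + B\,dW \quad\Longrightarrow\quad
\frac{d}{dt}\,\mathbb{E}[X] = A\,\mathbb{E}[X], \qquad
\frac{d}{dt}\,P = A P + P A^{\mathsf{T}} + B B^{\mathsf{T}}, \quad P = \operatorname{Cov}(X).
```

At stationarity the left-hand sides vanish, leaving purely algebraic (Lyapunov-type) relations between measurable response moments and the unknown parameters in A and B; this is the kind of structure that a moment-based identification technique exploits.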
Abstract:
Extrusion is one of the major methods for processing polymeric materials, and the thermal homogeneity of the process output is a major concern for the manufacture of high-quality extruded products. Accurate thermal monitoring and control of the process are therefore important for product quality control. However, most industrial extruders use single-point thermocouples for temperature monitoring/control, although these measurements are strongly affected by the temperature of the barrel metal wall. Currently, no industrially established thermal profile measurement technique is available. Furthermore, it has been shown that the melt temperature changes considerably with radial position in the die, and hence point/bulk measurements are not sufficient for monitoring and controlling the temperature across the melt flow. Moreover, the majority of process thermal control methods are based on linear models, which are not capable of dealing with process nonlinearities. In this work, the die melt temperature profile of a single-screw extruder was monitored by a thermocouple mesh technique. The data obtained were used to develop a novel approach to modelling the extruder die melt temperature profile under dynamic conditions (i.e. predicting the die melt temperature profile in real time). The newly proposed models were in good agreement with unseen measured data. They were then used to explore the effects of process settings, material and screw geometry on the die melt temperature profile. The results showed that the thermal homogeneity of the process was affected in a complex manner by changes in process settings, screw geometry and material.
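The abstract does not specify the form of the proposed models, but as a hedged sketch of the kind of cross-die profile a thermocouple mesh yields, one can fit a simple quadratic radial temperature profile to the junction readings; the positions and temperatures below are invented.

```python
import numpy as np

r = np.array([-15, -10, -5, 0, 5, 10, 15])         # mm, mesh junction positions
T = np.array([228, 234, 238, 240, 238, 233, 229])  # deg C, illustrative readings
c2, c1, c0 = np.polyfit(r, T, 2)                   # T(r) ~ c2*r^2 + c1*r + c0
print(f"centre T ~ {c0:.1f} C, "
      f"wall-to-centre difference ~ {abs(c2) * 15**2:.1f} C")
```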
Abstract:
Over 1 million km2 of seafloor experience permanently low-oxygen conditions within oxygen minimum zones (OMZs). OMZs are predicted to grow as a consequence of climate change, potentially affecting oceanic biogeochemical cycles. The Arabian Sea OMZ impinges upon the western Indian continental margin at bathyal depths (150–1500 m), producing a strong depth-dependent oxygen gradient at the sea floor. The influence of the OMZ upon the short-term processing of organic matter by sediment ecosystems was investigated using in situ stable-isotope pulse-chase experiments. These deployed doses of 13C- and 15N-labelled organic matter onto the sediment surface at four stations across the OMZ (water depth 540–1100 m; [O2] = 0.35–15 μM). In order to prevent experimentally induced anoxia, the mesocosms were not sealed. The 13C and 15N labels were traced into sediment, bacteria and fauna, and 13C into sediment porewater DIC and DOC. However, the DIC and DOC flux to the water column could not be measured, limiting our capacity to obtain a mass balance for carbon in each experimental mesocosm. Linear inverse modelling (LIM) provides a method to obtain a mass-balanced model of carbon flow that integrates stable-isotope tracer data with community biomass and biogeochemical flux data from a range of sources. Here we present an adaptation of the LIM methodology used to investigate how ecosystem structure influenced carbon flow across the Indian margin OMZ. We demonstrate how oxygen conditions affect food-web complexity, altering the linkages between the bacteria, foraminifera and metazoan fauna, and their contributions to benthic respiration. The food-web models demonstrate how changes in ecosystem complexity are associated with oxygen availability across the OMZ and allow us to obtain a complete carbon budget for the stations where the stable-isotope labelling experiments were conducted.
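A minimal sketch of the LIM idea (a simplified stand-in for the authors' adapted methodology): unknown food-web flows x are constrained by linear mass-balance equations Ex = f and solved in a least-squares sense under non-negativity; the full method adds further equality and inequality constraints from tracer, biomass and flux data. The three-compartment toy web below is invented.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Unknown flows x = [OM->bacteria, OM->fauna, bacteria->respiration, fauna->respiration]
# Mass-balance rows: total OM uptake; bacterial balance; faunal balance (toy values).
E = np.array([[1, 1, 0, 0],
              [1, 0, -1, 0],
              [0, 1, 0, -1]], dtype=float)
f = np.array([10.0, 2.0, 1.0])  # mg C m^-2 d^-1, illustrative
sol = lsq_linear(E, f, bounds=(0, np.inf))  # non-negative flow solution
print(np.round(sol.x, 2))
```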
Abstract:
One of the first attempts to develop a formal model of depth cue integration is found in Maloney and Landy's (1989) "human depth combination rule". They advocate that the combination of depth cues by the visual system is best described by a weighted linear model. The present experiments tested whether the linear combination rule applies to the integration of texture and shading. As would be predicted by a linear combination rule, the weight assigned to the shading cue did not vary as a function of its curvature value. However, the weight assigned to the texture cue varied systematically as a function of the curvature value of both cues. Here we describe a non-linear model which provides a better fit to the data. Redescribing the stimuli in terms of depth rather than curvature reduced the goodness of fit for all models tested. These results support the hypothesis that the locus of cue integration is a curvature map, rather than a depth map. We conclude that the linear combination rule does not generalize to the integration of shading and texture, and that for these cues integration is likely to occur after the recovery of surface curvature.
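For concreteness, a sketch of the weighted linear combination rule under test, with invented numbers: the combined percept is modelled as a fixed-weight sum of single-cue curvature estimates, and the weights can be recovered by least squares. The experiments above reject precisely this fixed-weight structure for the texture cue.

```python
import numpy as np

# Single-cue curvature estimates (texture, shading) and combined-cue judgements.
texture = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
shading = np.array([0.3, 0.3, 0.5, 0.7, 0.9])
combined = 0.6 * texture + 0.4 * shading          # linear rule, fixed weights
X = np.column_stack([texture, shading])
w, *_ = np.linalg.lstsq(X, combined, rcond=None)  # recover the cue weights
print(np.round(w, 2))                             # [0.6, 0.4]
```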
Abstract:
Coloured effluents from textile industries are a problem in many rivers and waterways. Prediction of the adsorption capacities of adsorbents for dyes is important in design considerations. The sorption of three basic dyes, namely Basic Blue 3, Basic Yellow 21 and Basic Red 22, onto peat is reported. Equilibrium sorption isotherms were measured for the three single-component systems; equilibrium was achieved after twenty-one days. The experimental isotherm data were analysed using the Langmuir, Freundlich, Redlich-Peterson, Temkin and Toth isotherm equations. A detailed error analysis was undertaken to investigate the effect of using different error criteria for the determination of the single-component isotherm parameters, and hence to obtain the best isotherm and isotherm parameters to describe the adsorption process. The linear transform method gave the highest R2 regression coefficient with the Redlich-Peterson model, and the Redlich-Peterson model also yielded the best fit to the experimental data for all three dyes under the non-linear error functions. An extended Langmuir model was used to predict the isotherm data for the binary systems from the single-component data. The correlation between theoretical and experimental data had only limited success, owing to competitive and interactive effects between the dyes and the dye-surface interactions.
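A minimal sketch of the non-linear fitting step for one of the isotherms named above, the Redlich-Peterson equation q_e = K·C_e / (1 + a·C_e^g), using least squares with R² as one possible error criterion; the equilibrium data below are invented, not the peat measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def redlich_peterson(Ce, K, a, g):
    """Redlich-Peterson isotherm: qe = K*Ce / (1 + a*Ce**g)."""
    return K * Ce / (1.0 + a * Ce**g)

Ce = np.array([5, 10, 25, 50, 100, 200], dtype=float)   # mg/L, illustrative
qe = np.array([22, 38, 65, 90, 112, 128], dtype=float)  # mg/g, illustrative
popt, _ = curve_fit(redlich_peterson, Ce, qe, p0=[5.0, 0.1, 0.9], maxfev=10000)
resid = qe - redlich_peterson(Ce, *popt)
r2 = 1 - np.sum(resid**2) / np.sum((qe - qe.mean())**2)
print(np.round(popt, 3), f"R2 = {r2:.4f}")
```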
Abstract:
High-resolution spectra of 24 SMC and Galactic B-type supergiants have been analysed to estimate the contributions of both macroturbulence and rotation to the broadening of their metal lines. Two different methodologies are considered, viz. goodness-of-fit comparisons between observed and theoretical line profiles, and identification of zeros in the Fourier transforms of the observed profiles. The advantages and limitations of the two methods are briefly discussed, with the latter technique being adopted for estimating projected rotational velocities (v sin i) and the former used to estimate macroturbulent velocities. The projected rotational velocity estimates range from approximately 20 to 60 km s⁻¹, apart from one SMC supergiant, Sk 191, with v sin i ≃ 90 km s⁻¹. Apart from Sk 191, the distributions of projected rotational velocities as a function of spectral type are similar in our Galactic and SMC samples, with larger values found at earlier spectral types. There is marginal evidence for the projected rotational velocities in the SMC being higher than those of the Galactic targets, but any differences are only of the order of 5–10 km s⁻¹, whilst evolutionary models predict differences in this effective temperature range of typically 20 to 70 km s⁻¹. The combined sample is consistent with a linear variation of projected rotational velocity with effective temperature, which would imply rotational velocities for supergiants of 70 km s⁻¹ at an effective temperature of 28 000 K (approximately B0 spectral type), decreasing to 32 km s⁻¹ at 12 000 K (B8 spectral type). For all targets, the macroturbulent broadening appears consistent with a Gaussian distribution (although other distributions cannot be discounted), with a 1/e half-width varying from approximately 20 km s⁻¹ at B8 to 60 km s⁻¹ at B0 spectral types.
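A hedged sketch of the Fourier-transform method mentioned above: the first zero of the transform of a purely rotationally broadened profile occurs near σ₁ ≈ 0.660/Δλ_L for a linear limb-darkening coefficient of 0.6 (a commonly quoted constant, assumed here), where Δλ_L = λ·v sin i / c, so locating that zero yields v sin i. The profile below is synthetic and all numbers are illustrative.

```python
import numpy as np

c_kms, lam0, eps = 2.998e5, 4500.0, 0.6   # km/s, Angstrom, limb darkening
vsini_true = 45.0                          # km/s, synthetic input
dlam_L = lam0 * vsini_true / c_kms         # rotational half-width, Angstrom

dlam = 0.01                                # wavelength sampling, Angstrom
x = np.arange(-dlam_L, dlam_L + dlam, dlam) / dlam_L
prof = 2*(1-eps)*np.sqrt(np.clip(1 - x**2, 0, None)) + 0.5*np.pi*eps*(1 - x**2)
prof /= prof.sum()                         # standard rotational broadening kernel

n = 1 << 16                                # zero-pad for frequency resolution
ft = np.abs(np.fft.rfft(prof, n))
freq = np.fft.rfftfreq(n, d=dlam)          # cycles per Angstrom
i = np.argmax((ft[1:-1] < ft[:-2]) & (ft[1:-1] < ft[2:])) + 1  # first minimum
vsini_est = 0.660 * c_kms / (lam0 * freq[i])
print(f"recovered v sin i ~ {vsini_est:.1f} km/s")   # ~ 45
```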
Abstract:
This paper presents two new approaches for use in complete process monitoring. The first concerns the identification of nonlinear principal component models. This involves the application of linear principal component analysis (PCA), prior to the identification of a modified autoassociative neural network (AAN) as the required nonlinear PCA (NLPCA) model. The benefits are that (i) the number of the reduced set of linear principal components (PCs) is smaller than the number of recorded process variables, and (ii) the set of PCs is better conditioned, as redundant information is removed. The result is a new set of input data for a modified neural representation, referred to as a T2T network. The T2T NLPCA model is then used for complete process monitoring, involving fault detection, identification and isolation. The second approach introduces a new variable reconstruction algorithm, developed from the T2T NLPCA model. Variable reconstruction can enhance the findings of the contribution charts still widely used in industry by reconstructing the outputs from faulty sensors to produce more accurate fault isolation. These ideas are illustrated using recorded industrial data relating to developing cracks in an industrial glass melter process. A comparison of linear and nonlinear models, together with the combined use of contribution charts and variable reconstruction, is presented.
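A hedged sketch of the two-step structure described above (linear PCA followed by an autoassociative bottleneck network trained on the reduced scores), using generic scikit-learn components rather than the authors' T2T formulation; the data are synthetic, and the squared prediction error (SPE) stands in for a full monitoring statistic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, (500, 1))           # hidden nonlinear process variable
X = np.hstack([t, t**2, -t, 0.5 * t**2]) + 0.02 * rng.standard_normal((500, 4))

pcs = PCA(n_components=3).fit_transform(X)  # step 1: linear compression
aan = MLPRegressor(hidden_layer_sizes=(8, 1, 8), activation="tanh",
                   max_iter=5000, random_state=0).fit(pcs, pcs)  # bottleneck net
spe = np.sum((pcs - aan.predict(pcs))**2, axis=1)  # squared prediction error
print(f"mean SPE = {spe.mean():.4f}")
```

In monitoring, samples whose SPE exceeds a control limit estimated from normal-operation data would be flagged for fault identification and isolation.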
Abstract:
A conventional local model (LM) network consists of a set of affine local models blended together using appropriate weighting functions. Such networks have poor interpretability since the dynamics of the blended network are only weakly related to the underlying local models. In contrast, velocity-based LM networks employ strictly linear local models to provide a transparent framework for nonlinear modelling in which the global dynamics are a simple linear combination of the local model dynamics. A novel approach for constructing continuous-time velocity-based networks from plant data is presented. Key issues including continuous-time parameter estimation, correct realisation of the velocity-based local models and avoidance of the input derivative are all addressed. Application results are reported for the highly nonlinear simulated continuous stirred tank reactor process.
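A minimal numeric sketch of the blending idea (not the paper's continuous-time parameter estimation): two strictly linear local models are combined through normalized validity functions, so the global dynamics are a weighted sum of the local dynamics; all parameter values are invented.

```python
import numpy as np

# Two strictly linear local models dx/dt = a_i*x + b_i*u, blended by
# normalized Gaussian validity functions of the scheduling variable x.
a, b, centres, width = [-1.0, -3.0], [1.0, 2.0], [0.0, 2.0], 1.0

def f(x, u):
    w = np.exp(-0.5 * ((x - np.array(centres)) / width)**2)
    rho = w / w.sum()                       # normalized validity weights
    return rho @ (np.array(a) * x + np.array(b) * u)

x, dt = 0.0, 0.01
for _ in range(500):                        # Euler simulation of a step response
    x += dt * f(x, u=1.0)
print(f"x(5s) ~ {x:.3f}")
```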
Abstract:
The standard linear-quadratic (LQ) survival model for external beam radiotherapy is reviewed with particular emphasis on studying how different schedules of radiation treatment planning may be affected by different tumour repopulation kinetics. The LQ model is further examined in the context of tumour control probability (TCP) models. The application of the Zaider and Minerbo non-Poissonian TCP model incorporating the effect of cellular repopulation is reviewed. In particular the recent development of a cell cycle model within the original Zaider and Minerbo TCP formalism is highlighted. Application of this TCP cell-cycle model in clinical treatment plans is explored and analysed.
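For reference, a minimal numeric sketch of the LQ survival model with exponential tumour repopulation in its standard textbook form (not the Zaider-Minerbo cell-cycle extension): S = exp(−n(αd + βd²) + ln 2 · max(0, T − T_k)/T_p); all parameter values are illustrative.

```python
import numpy as np

def lq_survival(n, d, alpha, beta, T, Tk, Tp):
    """LQ cell kill for n fractions of dose d (Gy), with exponential
    repopulation after kick-off time Tk at doubling time Tp (days)."""
    kill = n * (alpha * d + beta * d**2)
    repop = np.log(2) * max(0.0, T - Tk) / Tp
    return np.exp(-kill + repop)

# 30 x 2 Gy over 39 days vs. a hypofractionated 20 x 2.75 Gy over 25 days
for n, d, T in [(30, 2.0, 39), (20, 2.75, 25)]:
    S = lq_survival(n, d, alpha=0.3, beta=0.03, T=T, Tk=21, Tp=3)
    print(f"{n} x {d} Gy: surviving fraction = {S:.2e}")
```

Under the simplest Poisson assumption, TCP = exp(−N·S) for N initial clonogens; the Zaider and Minerbo formalism reviewed above generalizes this to the non-Poissonian case.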
Abstract:
1. We collated information from the literature on life history traits of the roach (a generalist freshwater fish), and analysed variation in absolute fecundity, von Bertalanffy parameters, and reproductive lifespan in relation to latitude, using both linear and non-linear regression models. We hypothesized that because most life history traits are dependent on growth rate, and growth rate is non-linearly related with temperature, it was likely that when analysed over the whole distribution range of roach, variation in key life history traits would show non-linear patterns with latitude.
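As a sketch of the non-linear growth component mentioned above, the von Bertalanffy model L(t) = L∞(1 − e^(−K(t − t₀))) can be fitted by non-linear least squares; the age-length data below are invented, not the collated roach data.

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, Linf, K, t0):
    """Length at age t: asymptotic length Linf, growth coefficient K."""
    return Linf * (1 - np.exp(-K * (t - t0)))

age = np.array([1, 2, 3, 4, 6, 8, 10], dtype=float)      # years
length = np.array([60, 105, 140, 168, 205, 228, 242.0])  # mm, illustrative
popt, _ = curve_fit(von_bertalanffy, age, length, p0=[300, 0.2, 0])
Linf, K, t0 = popt
print(f"L_inf = {Linf:.0f} mm, K = {K:.2f}/yr, t0 = {t0:.2f} yr")
```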
Abstract:
We propose simple models to predict the performance degradation of disk requests due to storage device contention in consolidated virtualized environments. Model parameters can be deduced from measurements obtained inside Virtual Machines (VMs) from a system where a single VM accesses a remote storage server. The parameterized model can then be used to predict the effect of storage contention when multiple VMs are consolidated on the same server. We first propose a trace-driven approach that evaluates a queueing network with fair share scheduling using simulation. The model parameters consider Virtual Machine Monitor level disk access optimizations and rely on a calibration technique. We further present a measurement-based approach that allows a distinct characterization of read/write performance attributes. In particular, we define simple linear prediction models for I/O request mean response times, throughputs and read/write mixes, as well as a simulation model for predicting response time distributions. We found our models to be effective in predicting such quantities across a range of synthetic and emulated application workloads.
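A hedged sketch of the simple linear prediction idea (the paper's calibrated queueing and simulation models are more elaborate): regress the mean I/O response time measured in calibration runs on the number of consolidated VMs, then extrapolate to higher consolidation levels; the measurements below are invented.

```python
import numpy as np

vms = np.array([1, 2, 3, 4])              # consolidated VMs in calibration runs
resp_ms = np.array([2.1, 3.9, 6.2, 8.0])  # measured mean response times, invented
slope, intercept = np.polyfit(vms, resp_ms, 1)
for n in (5, 6):
    print(f"{n} VMs: predicted mean response time ~ {slope*n + intercept:.1f} ms")
```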
Abstract:
The majority of learning methods reported to date for Takagi-Sugeno-Kang (TSK) fuzzy neural models focus mainly on improving accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each obtained rule consequent must match well with the local behaviour of the system when all the rules are aggregated to produce the overall system output; this is one of the characteristics that distinguishes such models from black-box models such as neural networks. Therefore, how to find a desirable set of fuzzy partitions and, hence, to identify the corresponding consequent models which can be directly explained in terms of system behaviour presents a critical step in fuzzy neural modelling. In this paper, a new learning approach considering both the nonlinear parameters in the rule premises and the linear parameters in the rule consequents is proposed. Unlike the conventional two-stage optimization procedure widely practised in the field, where the two sets of parameters are optimized separately, the consequent parameters are transformed into a set dependent on the premise parameters, thereby enabling the introduction of a new integrated gradient-descent learning approach. A new Jacobian matrix is thus proposed and efficiently computed to achieve a more accurate approximation of the cost function when using the second-order Levenberg-Marquardt optimization method. Several other interpretability issues concerning the fuzzy neural model are also discussed and integrated into this new learning approach. Numerical examples are presented to illustrate the resultant structure of the fuzzy neural models and the effectiveness of the proposed new algorithm, with comparisons against results from some well-known methods.
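To make the two parameter sets concrete, a minimal first-order TSK sketch in the conventional two-stage style (the paper's contribution is precisely to replace this with an integrated update): for fixed Gaussian premise parameters, the rule consequents enter the output linearly through the normalized firing strengths, so they can be solved by least squares. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)   # target function

centres, sigma = np.array([-1.5, 0.0, 1.5]), 0.8     # premise (nonlinear) params
mu = np.exp(-0.5 * ((x[:, None] - centres) / sigma)**2)
w = mu / mu.sum(axis=1, keepdims=True)               # normalized firing strengths

# Consequents y_i = a_i*x + b_i enter linearly: solve [w*x, w] @ theta = y.
Phi = np.hstack([w * x[:, None], w])
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(f"RMSE = {np.sqrt(np.mean((Phi @ theta - y)**2)):.4f}")
```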
Abstract:
In this paper we propose a statistical model for the detection and tracking of the human silhouette and the corresponding 3D skeletal structure in gait sequences. We follow a point distribution model (PDM) approach using principal component analysis (PCA). The problem of non-linear PCA is partially resolved by applying a different PDM depending on the pose estimate (frontal, lateral or diagonal) obtained with Fisher's linear discriminant. Additionally, the fitting is carried out by selecting the closest allowable shape from the training set by means of a nearest-neighbour classifier. To improve the performance of the model, we develop a human gait analysis that takes temporal dynamics into account when tracking the human body. The incorporation of temporal constraints into the model increases reliability and robustness.
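A hedged sketch of the generic PDM machinery referred to above (with mode clamping standing in for the authors' nearest-neighbour selection of allowable shapes): training shapes are decomposed into a mean plus principal modes, and a candidate shape is made "allowable" by limiting each mode coefficient to ±3 standard deviations; the landmark data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
# 50 synthetic training shapes, 20 landmarks (x, y) flattened to 40 values each
shapes = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
mean = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
P = Vt[:2].T                                  # first two modes of variation
lam = (s[:2]**2) / (len(shapes) - 1)          # variance explained by each mode

def fit_allowable(shape):
    b = P.T @ (shape - mean)                  # project onto the modes
    b = np.clip(b, -3*np.sqrt(lam), 3*np.sqrt(lam))  # clamp to +/-3 sd
    return mean + P @ b

candidate = mean + 10 * np.sqrt(lam[0]) * P[:, 0]    # exaggerated first mode
fitted = fit_allowable(candidate)
print(np.allclose(fitted, mean + 3*np.sqrt(lam[0])*P[:, 0]))  # True
```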