947 results for dynamic parameters identification
Abstract:
The systems used for the procurement of buildings are organizational systems. They involve people in a series of strategic decisions, and a pattern of roles, responsibilities and relationships that combine to form the organizational structure of the project. To ensure the effectiveness of the building team, this organizational structure needs to be contingent upon the environment within which the construction project takes place. In addition, a changing environment means that the organizational structure within a project needs to be responsive and dynamic. These needs are often not satisfied in the construction industry, owing to the lack of analytical tools with which to analyse the environment and to design appropriate temporary organizations. This paper presents two techniques. The first, "Environmental Complexity Analysis", identifies the key variables in the environment of the construction project, classified as Financial, Legal, Technological, Aesthetic and Policy. It is proposed that their identification will set the parameters within which the project has to be managed, providing a basis for the project managers to define the relevant set of decision points that will be required for the project. The Environmental Complexity Analysis also identifies the project's requirements for control systems concerning Budget, Contractual, Functional, Quality and Time control. Environmental scanning needs to be repeated at regular points during the procurement process to ensure that the organizational structure remains adaptive to the changing environment. The second, "3R Analysis", is a graphical technique for describing and modelling Roles, Responsibilities and Relationships. A list of steps is introduced that explains the recommended procedure for setting up a flexible organizational structure that is responsive to the environment of the project.
This contrasts with the current trend towards predetermined procurement paths, which may not always be in the best interests of the client.
Abstract:
Climate change science is increasingly concerned with methods for managing and integrating sources of uncertainty from emission storylines, climate model projections, and ecosystem model parameterizations. In tropical ecosystems, regional climate projections and modeled ecosystem responses vary greatly, leading to a significant source of uncertainty in global biogeochemical accounting and possible future climate feedbacks. Here, we combine an ensemble of IPCC-AR4 climate change projections for the Amazon Basin (eight general circulation models) with alternative ecosystem parameter sets for the dynamic global vegetation model, LPJmL. We evaluate LPJmL simulations of carbon stocks and fluxes against flux tower and aboveground biomass datasets for individual sites and the entire basin. Variability in LPJmL model sensitivity to future climate change is primarily related to light and water limitations through biochemical and water-balance-related parameters. Temperature-dependent parameters related to plant respiration and photosynthesis appear to be less important than vegetation dynamics (and their parameters) for determining the magnitude of ecosystem response to climate change. Variance partitioning approaches reveal that relationships between uncertainty from ecosystem dynamics and climate projections are dependent on geographic location and the targeted ecosystem process. Parameter uncertainty from the LPJmL model does not affect the trajectory of ecosystem response for a given climate change scenario, and the primary source of uncertainty for Amazon 'dieback' results from the uncertainty among climate projections. Our approach for describing uncertainty is applicable to informing and prioritizing policy options related to mitigation and adaptation where long-term investments are required.
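A minimal sketch of the variance-partitioning idea, assuming a purely additive toy ensemble: the eight-by-three layout mirrors the eight GCMs and a hypothetical set of three LPJmL parameterizations, and all numbers are invented, not taken from the study.

```python
import numpy as np

# Toy ensemble: 8 "climate projections" x 3 "parameter sets" (values invented).
# Each entry stands for a simulated change in basin carbon stock.
rng = np.random.default_rng(0)
climate_effect = rng.normal(0.0, 2.0, size=(8, 1))  # wide spread across GCMs
param_effect = rng.normal(0.0, 0.3, size=(1, 3))    # narrower spread across params
outcomes = climate_effect + param_effect            # additive model, no interaction

# Partition the total variance into the two main effects
# (the decomposition is exact for additive data with no interaction term).
var_climate = outcomes.mean(axis=1).var()
var_param = outcomes.mean(axis=0).var()
var_total = outcomes.var()
frac_climate = var_climate / var_total
frac_param = var_param / var_total
```

With these invented scales the climate-model axis dominates the variance, echoing the paper's finding that climate projections, not ecosystem parameters, drive the 'dieback' uncertainty.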
Abstract:
Sensitivity, specificity, and reproducibility are vital to interpret neuroscientific results from functional magnetic resonance imaging (fMRI) experiments. Here we examine the scan–rescan reliability of the percent signal change (PSC) and parameters estimated using Dynamic Causal Modeling (DCM) in scans taken in the same scan session, less than 5 min apart. We find fair to good reliability of PSC in regions that are involved with the task, and fair to excellent reliability with DCM. Also, the DCM analysis uncovers group differences that were not present in the analysis of PSC, which implies that DCM may be more sensitive to the nuances of signal changes in fMRI data.
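Scan-rescan reliability of this kind is commonly summarised with an intraclass correlation coefficient (ICC). The table of PSC values below is hypothetical, and the same computation would apply to DCM parameter estimates; the one-way random-effects ICC formula is standard.

```python
import numpy as np

# Hypothetical scan-rescan PSC values for 6 subjects (columns: scan 1, scan 2).
psc = np.array([
    [0.80, 0.78],
    [1.10, 1.05],
    [0.55, 0.60],
    [0.95, 0.99],
    [0.70, 0.66],
    [1.20, 1.18],
])

n, k = psc.shape
subj_means = psc.mean(axis=1)
grand = psc.mean()

# One-way random-effects ICC from between- and within-subject mean squares.
ms_between = k * ((subj_means - grand) ** 2).sum() / (n - 1)
ms_within = ((psc - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

For this invented table the within-subject scatter is small relative to the between-subject spread, so the ICC lands in the "excellent" band.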
Abstract:
In a knowledge-based economy and dynamic work environment, retaining competitiveness is increasingly dependent on creativity, skills, individual abilities and appropriate motivation. For instance, the UK government explicitly stated in the recent "Review of Employee Engagement and Investment" report that new ways are required through which British companies could boost employee engagement at work, improving staff commitment and thereby increasing workplace productivity. Although creativity and innovation have been studied extensively, little is known about employees' intrinsic willingness to contribute novel ideas and solutions (defined here as creative participation). For instance, the same individual can thrive in one organisation but be completely isolated in another, and the question is to what extent this depends on individual characteristics and organisational settings. The main aim of this research is, therefore, to provide a conceptual framework for identification of the individual characteristics that influence employees' willingness to contribute new ideas. In order to achieve this aim, the investigation will be based on a developed psychological experiment, and will include a personal-profiling inventory and a questionnaire. Understanding how these parameters influence the willingness of an individual to put forward created ideas would offer an opportunity for companies to improve motivation practices and team efficiency, and can consequently lead to better overall performance.
Abstract:
The adrenal cortex is a dynamic organ in which the cells of the outer cortex continually divide. It is well known that this cellular proliferation is dependent on constant stimulation from peptides derived from the ACTH precursor pro-opiomelanocortin (POMC), because disruption of pituitary corticotroph function results in rapid atrophy of the gland. Previous results from our laboratory have suggested that the adrenal mitogen is a fragment derived from the N-terminal of POMC not containing the gamma-MSH sequence. Because such a peptide is not generated during processing of POMC in the pituitary, we proposed that the mitogen is generated from circulating pro-gamma-MSH by an adrenal protease. Using degenerate oligonucleotides, we identified a secreted serine protease expressed by the adrenal gland that we named adrenal secretory protease (ASP). In the adrenal cortex, expression of ASP is limited to the outer zona glomerulosa/fasciculata, the region from which cortical cells are believed to derive, and is significantly up-regulated during compensatory growth. Y1 adrenocortical cells transfected with a vector expressing an antisense RNA (and thus having reduced levels of endogenous ASP) were found to grow more slowly than sense controls, while also losing their ability to utilize exogenous pro-gamma-MSH in the media, supporting a role for ASP in adrenal growth. Digestion of an N-POMC peptide substrate encompassing the residues around the dibasic cleavage site at positions 49/50 with affinity-purified ASP showed that cleavage occurs not at the dibasic site but two residues downstream, leading us to propose the identity of the adrenal mitogen to be N-POMC (1-52).
Abstract:
We model the large scale fading of wireless THz communications links deployed in a metropolitan area, taking into account reception through direct line of sight, ground or wall reflection, and diffraction. The movement of the receiver in three dimensions is modelled by an autonomous dynamic linear system in state-space, whereas the geometric relations involved in the attenuation and multi-path propagation of the electric field are described by a static non-linear mapping. A subspace algorithm in conjunction with polynomial regression is used to identify a Wiener model from time-domain measurements of the field intensity.
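A Wiener model (linear dynamics followed by a static nonlinearity) of the kind identified here can be sketched in two stages. The FIR filter, cubic map, and Gaussian excitation below are invented stand-ins, and the gain ambiguity between the two blocks is fixed using the known filter gain purely for the sketch; the paper instead uses a subspace algorithm for the linear part.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000
u = rng.normal(size=N)                      # Gaussian excitation (invented)

# "True" Wiener system for the sketch: FIR filter, then a cubic static map.
h = np.array([0.5, 0.3, 0.2])
x = np.convolve(u, h)[:N]                   # hidden linear-block output
y = x + 0.1 * x ** 3                        # static polynomial nonlinearity

# Stage 1: Bussgang-style estimate. For Gaussian input, the input/output
# cross-correlation is proportional to the linear impulse response.
h_est = np.array([np.mean(y[k:] * u[: N - k]) for k in range(3)])
h_est = h_est / h_est[0] * h[0]             # fix the scale ambiguity (sketch only)

# Stage 2: polynomial regression of y on the reconstructed intermediate signal.
x_est = np.convolve(u, h_est)[:N]
coeffs = np.polyfit(x_est, y, 3)            # highest degree first
```

The two-stage structure mirrors the abstract's "subspace algorithm in conjunction with polynomial regression": first the linear dynamics, then the static map.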
Apodisation, denoising and system identification techniques for THz transients in the wavelet domain
Abstract:
This work describes the use of a quadratic programming optimization procedure for designing asymmetric apodization windows to de-noise THz transient interferograms, and compares these results to those obtained when wavelet signal processing algorithms are adopted. A system identification technique in the wavelet domain is also proposed for the estimation of the complex insertion loss function. The proposed techniques can enhance the frequency-dependent dynamic range of an experiment and should be of particular interest to the THz imaging and tomography community. Future advances in THz sources and detectors are likely to increase the signal-to-noise ratio of the recorded THz transients; high quality apodization techniques will then become more important, and may set the limit on the achievable accuracy of the deduced spectrum.
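An asymmetric apodization window of the general flavour discussed here can be sketched as a short raised-cosine rise followed by a longer raised-cosine fall. The shape, fractions, and toy transient are invented for illustration; the paper designs its windows via quadratic programming.

```python
import numpy as np

def asymmetric_window(n, rise_frac=0.1, fall_frac=0.4):
    """Flat-top window with a short cosine rise and a longer cosine fall."""
    n_rise = int(n * rise_frac)
    n_fall = int(n * fall_frac)
    w = np.ones(n)
    w[:n_rise] = 0.5 * (1 - np.cos(np.pi * np.arange(n_rise) / n_rise))
    w[n - n_fall:] = 0.5 * (1 + np.cos(np.pi * np.arange(n_fall) / n_fall))
    return w

n = 1024
t = np.arange(n)
transient = np.exp(-t / 100.0) * np.sin(0.2 * t)   # toy THz transient (invented)
w = asymmetric_window(n)
spectrum_plain = np.abs(np.fft.rfft(transient))
spectrum_apod = np.abs(np.fft.rfft(transient * w))
```

Tapering the truncated end of the interferogram suppresses spectral leakage from the abrupt cut-off, which is what lifts the usable frequency-dependent dynamic range.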
Abstract:
The large scale fading of wireless mobile communications links is modelled assuming the mobile receiver motion is described by a dynamic linear system in state-space. The geometric relations involved in the attenuation and multi-path propagation of the electric field are described by a static non-linear mapping. A Wiener system subspace identification algorithm in conjunction with polynomial regression is used to identify a model from time-domain estimates of the field intensity assuming a multitude of emitters and an antenna array at the receiver end.
Abstract:
An efficient model identification algorithm for a large class of linear-in-the-parameters models is introduced that simultaneously optimises the model's approximation ability, sparsity and robustness. In each forward-regression step the model parameters are initially estimated via orthogonal least squares (OLS), and then tuned with a new gradient-descent learning algorithm, based on basis pursuit, that minimises the l1 norm of the parameter estimate vector. The model subset selection cost function includes a D-optimality design criterion that maximises the determinant of the design matrix of the subset, to ensure model robustness and to enable the model selection procedure to terminate automatically at a sparse model. The proposed approach is based on the forward OLS algorithm using the modified Gram-Schmidt procedure. Both the parameter tuning procedure, based on basis pursuit, and the model selection criterion, based on D-optimality, which is effective in ensuring model robustness, are integrated with the forward regression. As a consequence, the inherent computational efficiency associated with the conventional forward OLS approach is maintained in the proposed algorithm. Examples demonstrate the effectiveness of the new approach.
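The forward-OLS/ERR core of such algorithms can be sketched as follows, on synthetic data. The D-optimality term and the basis-pursuit l1 tuning described in the abstract are omitted for brevity; only the greedy orthogonalised selection is shown.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 200, 8
X = rng.normal(size=(N, M))                  # candidate regressors (invented data)
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.8 * X[:, 6] + 0.05 * rng.normal(size=N)

# Forward OLS via modified Gram-Schmidt: at each step pick the candidate whose
# orthogonalised direction has the largest error-reduction ratio (ERR).
selected = []
R = X.copy()                                 # candidates, deflated as we go
for _ in range(3):
    err = (R.T @ y) ** 2 / ((R ** 2).sum(axis=0) * (y @ y) + 1e-12)
    err[selected] = -np.inf                  # never reselect a chosen column
    best = int(np.argmax(err))
    selected.append(best)
    q = R[:, best] / np.linalg.norm(R[:, best])
    R = R - np.outer(q, q @ R)               # orthogonalise the rest against q

# Least-squares parameters for the selected sparse subset.
theta = np.linalg.lstsq(X[:, sorted(selected)], y, rcond=None)[0]
```

Because each chosen direction is removed from the remaining candidates, the ERR scores at every step are computed against already-orthogonal directions, which is what keeps the forward pass cheap.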
Abstract:
In this correspondence, new robust nonlinear model construction algorithms for a large class of linear-in-the-parameters models are introduced to enhance model robustness via combined parameter regularization and new robust structural selection criteria. In parallel with parameter regularization, we use two classes of robust model selection criteria: experimental design criteria that optimize model adequacy, and the predicted residual sums of squares (PRESS) statistic, which optimizes model generalization capability. Three robust identification algorithms are introduced: the combined A-optimality and regularized orthogonal least squares algorithm, the combined D-optimality and regularized orthogonal least squares algorithm, and the combined PRESS statistic and regularized orthogonal least squares algorithm. A common characteristic of these algorithms is that the inherent computational efficiency associated with the orthogonalization scheme in orthogonal least squares or regularized orthogonal least squares has been retained, so the new algorithms are computationally efficient. Numerical examples are included to demonstrate the effectiveness of the algorithms.
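The PRESS statistic used here for model selection has a closed form via the hat matrix, so no explicit leave-one-out refitting loop is needed. The data and the two candidate model structures below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50
x = rng.uniform(-1, 1, size=N)
y = 1.0 + 2.0 * x + 0.1 * rng.normal(size=N)   # true structure is affine

def press(design):
    """PRESS via the hat matrix H = X (X'X)^-1 X': e_i / (1 - h_ii), squared and summed."""
    H = design @ np.linalg.solve(design.T @ design, design.T)
    e = y - H @ y
    return np.sum((e / (1 - np.diag(H))) ** 2)

# Candidate structures: the true affine model vs. an over-parameterized polynomial.
# The structure with the smaller PRESS is preferred for generalization.
X1 = np.column_stack([np.ones(N), x])
X2 = np.column_stack([np.ones(N), x] + [x ** k for k in range(2, 8)])
press_affine, press_poly = press(X1), press(X2)
```

The hat-matrix identity makes PRESS as cheap as an ordinary residual computation, which is why it can be folded into a forward selection loop without destroying its efficiency.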
Abstract:
The recursive least-squares algorithm with a forgetting factor has been extensively applied and studied for the on-line parameter estimation of linear dynamic systems. This paper explores the use of genetic algorithms to improve the performance of the recursive least-squares algorithm in the parameter estimation of time-varying systems. Simulation results show that the hybrid recursive algorithm (GARLS), combining recursive least-squares with genetic algorithms, can achieve better results than the standard recursive least-squares algorithm using only a forgetting factor.
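The baseline the paper improves upon, recursive least squares with a forgetting factor, can be sketched for a toy time-varying system. All values are invented, and the genetic-algorithm layer that retunes the estimator is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 400
lam = 0.97                                    # forgetting factor
theta = np.zeros(2)                           # parameter estimate [a, b]
P = np.eye(2) * 1000.0                        # large initial covariance

# Time-varying "true" system y_t = a_t * u_t + b, with a step change in a_t.
u = rng.normal(size=N)
a_true = np.where(np.arange(N) < 200, 1.0, 2.0)
b_true = 0.5
estimates = []
for t in range(N):
    phi = np.array([u[t], 1.0])               # regressor vector
    y_t = a_true[t] * u[t] + b_true + 0.01 * rng.normal()
    k = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + k * (y_t - phi @ theta)   # correct estimate by prediction error
    P = (P - np.outer(k, phi @ P)) / lam      # discount old data via lam
    estimates.append(theta.copy())
estimates = np.array(estimates)
```

The forgetting factor lets the estimator track the step change at t = 200; a GA-hybrid scheme like the paper's GARLS aims to adapt this tracking behaviour further.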
Abstract:
A neural network enhanced proportional, integral and derivative (PID) controller is presented that combines the attributes of neural network learning with a generalized minimum-variance self-tuning control (STC) strategy. The neuro-PID controller combines plant model identification with PID parameter tuning. The plants to be controlled are approximated by an equivalent model composed of a simple linear submodel, which approximates the plant dynamics around operating points, plus an error agent that accommodates the errors induced by linear-submodel inaccuracy due to non-linearities and other complexities. A generalized recursive least-squares algorithm is used to identify the linear submodel, and a layered neural network is used to detect the error agent, in which the weights are updated on the basis of the error between the plant output and the output of the linear submodel. The controller design procedure is based on the equivalent model, so the error agent functions naturally within the control law. In this way the controller can deal not only with a wide range of linear dynamic plants but also with complex plants characterized by severe non-linearity, uncertainties and non-minimum-phase behaviour. Two simulation studies demonstrate the effectiveness of the controller design procedure.
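The PID half of the scheme can be sketched as a plain discrete PID loop on a toy first-order plant. The gains and plant are invented for the sketch; in the paper they come from the identified linear submodel and the neural error agent.

```python
def simulate(kp, ki, kd, setpoint=1.0, steps=400, dt=0.05, tau=0.5):
    """Discrete PID loop driving a first-order plant dy/dt = (-y + u) / tau."""
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        prev_err = err
        y += dt * (-y + u) / tau                    # forward-Euler plant step
    return y

# Invented gains; the paper would derive them from the identified model instead.
final = simulate(kp=2.0, ki=1.0, kd=0.05)
```

The integral term removes the steady-state offset, so for stable gains the output settles on the setpoint; the paper's contribution is making that tuning automatic and robust to model error.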
Abstract:
The problem of identification of a nonlinear dynamic system is considered. A two-layer neural network is used for the solution of the problem. Systems disturbed by unmeasurable noise are considered, where the disturbance is known only to be a random piecewise polynomial process. Absorption polynomials and nonquadratic loss functions are used to reduce the effect of this disturbance on the estimates of the optimal memory of the neural-network model.
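The effect of a nonquadratic loss can be illustrated with a Huber-type M-estimate of a constant level under burst disturbances. The threshold and data are invented, and the paper's absorption-polynomial machinery is not shown; this only illustrates why a nonquadratic loss tames large disturbances.

```python
import numpy as np

rng = np.random.default_rng(5)
samples = rng.normal(2.0, 0.1, size=100)      # nominal level 2.0 (invented)
samples[:5] = 50.0                            # a burst of large disturbances

def huber_estimate(x, delta=1.0, iters=50):
    """Huber M-estimate of location via iteratively reweighted least squares."""
    mu = np.median(x)                         # robust starting point
    for _ in range(iters):
        r = x - mu
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        mu = np.sum(w * x) / np.sum(w)        # weighted mean with down-weighted outliers
    return mu

mean_est = samples.mean()                     # quadratic loss: dragged by the burst
robust_est = huber_estimate(samples)          # nonquadratic loss: stays near 2.0
```

The quadratic loss weights every residual by its magnitude, so the burst dominates; the Huber loss caps that influence, which is the same motivation as in the abstract.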
Abstract:
The use of data reconciliation techniques can considerably reduce the inaccuracy of process data due to measurement errors. This in turn results in improved control system performance and process knowledge. Dynamic data reconciliation techniques are applied to a model-based predictive control scheme. It is shown through simulations on a chemical reactor system that the overall performance of the model-based predictive controller is enhanced considerably when data reconciliation is applied. The dynamic data reconciliation techniques used include a combined strategy for the simultaneous identification of outliers and systematic bias.
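Steady-state data reconciliation with a linear balance constraint has a closed-form weighted least-squares solution; the splitter flows and standard deviations below are invented to show the mechanics.

```python
import numpy as np

# Three measured flows around a splitter must satisfy F1 - F2 - F3 = 0.
m = np.array([101.2, 60.8, 41.5])            # raw measurements (invented)
sigma = np.array([1.0, 0.8, 0.6])            # measurement standard deviations
A = np.array([[1.0, -1.0, -1.0]])            # linear balance constraint A x = 0

# Minimize (x - m)' V^-1 (x - m) subject to A x = 0, with V = diag(sigma^2):
# the correction projects the measurements onto the constraint surface,
# adjusting the least-trusted (highest-variance) measurements the most.
V = np.diag(sigma ** 2)
correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ m)
x_rec = m - correction.ravel()
```

Dynamic reconciliation, as used in the paper, extends this idea over a moving time window, and gross errors (outliers or bias) are flagged from the size of the corrections.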