850 results for Model Identification
Abstract:
This thesis is concerned with the study of a non-sequential identification technique, so that it may be applied to the identification of process plant mathematical models from process measurements with the greatest degree of accuracy and reliability. In order to study the accuracy of the technique under differing conditions, simple mathematical models were set up on a parallel hybrid computer and these models were identified from input/output measurements by a small on-line digital computer. Initially, the simulated models were identified on-line. However, this method of operation was found to be unsuitable for a thorough study of the technique due to equipment limitations. Further analysis was carried out on a large off-line computer using data generated by the small on-line computer; hence the identification was not strictly on-line. Results of the work have shown that the identification technique may be successfully applied in practice. An optimum sampling period is suggested, together with noise level limitations for maximum accuracy. A description of a double-effect evaporator is included in this thesis. It is proposed that the next stage in the work will be the identification of a mathematical model of this evaporator using the technique described.
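As a rough illustration of what identification from input/output records involves (the abstract does not specify the non-sequential technique itself), the sketch below fits a first-order discrete-time model to sampled data by batch least squares; the model structure, noise level and parameter values are assumptions made purely for the example.

```python
# Minimal sketch: identifying a first-order discrete-time process model from
# sampled input/output data by batch (non-sequential) least squares. The ARX
# structure and all values here are illustrative assumptions, not the
# thesis's actual technique.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a simple plant y[k] = a*y[k-1] + b*u[k-1] + noise, standing in
# for the hybrid-computer model of the original study.
a_true, b_true = 0.8, 0.5
N = 200
u = rng.uniform(-1.0, 1.0, N)          # measured input sequence
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.02 * rng.standard_normal()

# Batch least-squares estimate over the whole record: stack the regressors
# [y[k-1], u[k-1]] and solve the resulting overdetermined system.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated a, b:", theta)        # should be close to (0.8, 0.5)
```

In such a setup the choice of sampling period and the measurement noise level directly govern how close the estimates come to the true parameters, which is the trade-off the abstract refers to.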
Abstract:
We examined the relationship between group boundary spanners’ work group identification and effective (i.e., harmonious and productive) intergroup relations in 53 work groups in five health care organizations. The data suggest this relationship was moderated by boundary spanners’ levels of organizational identification, thus supporting a dual identity model. Limited support was found for the moderating effect of intergroup contact. Finally, if boundary spanners displayed frequent intergroup contact and identified highly with their organization, group identification was most strongly related to effective intergroup relations.
Abstract:
Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. Techniques for identifying the parameters of linear dynamic models from vibration test data have attracted considerable interest recently. However, no method has gained general acceptance, owing to a number of difficulties. These difficulties are mainly due to (i) the incomplete number of vibration modes that can be excited and measured, (ii) the incomplete number of coordinates that can be measured, (iii) inaccuracy in the experimental data, and (iv) inaccuracy in the model structure. This thesis reports on a new approach to updating the parameters of a finite element model as well as of a lumped parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness, and an incomplete set of eigen-data is measured. The parameters are then identified by iterative updating of the initial estimates, by sensitivity analysis, using eigenvalues or both eigenvalues and eigenvectors of the structure before and after perturbation. It is shown that, with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well-known Bayesian approach is used to minimize the least-squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness are also determined from the frequency response data of the unmodified structure by a structural modification technique; thus, mass or stiffness does not have to be added physically. The mass-stiffness addition technique is demonstrated by simulation examples and laboratory experiments on beams and an H-frame.
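The sensitivity-based iterative updating step can be pictured on a toy problem. The sketch below is a simplified stand-in rather than the thesis's mass/stiffness-addition formulation: it updates the two stiffness parameters of a 2-DOF lumped model (diagonal mass matrix) so that its eigenvalues match "measured" ones, using finite-difference sensitivities; all numerical values are hypothetical.

```python
# Minimal sketch: iterative, sensitivity-based updating of the stiffness
# parameters of a 2-DOF lumped model with a diagonal mass matrix, driving its
# eigenvalues towards "measured" eigen-data. Finite-difference sensitivities
# stand in for analytical ones; the mass/stiffness-addition step of the
# thesis is not reproduced here.
import numpy as np
from scipy.linalg import eigh

M = np.diag([1.0, 1.5])                       # known diagonal mass matrix

def K(p):                                     # stiffness matrix from parameters
    k1, k2 = p
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

def eigvals(p):                               # generalized eigenvalues of (K, M)
    return eigh(K(p), M, eigvals_only=True)

p_true = np.array([400.0, 250.0])             # the "exact" structure
lam_meas = eigvals(p_true)                    # measured eigen-data

p = np.array([300.0, 200.0])                  # initial theoretical estimates
for _ in range(10):
    r = lam_meas - eigvals(p)                 # eigenvalue residual
    J = np.zeros((2, 2))                      # finite-difference sensitivity matrix
    for j in range(2):
        dp = np.zeros(2)
        dp[j] = 1e-3 * p[j]
        J[:, j] = (eigvals(p + dp) - eigvals(p)) / dp[j]
    p = p + np.linalg.lstsq(J, r, rcond=None)[0]

print("updated parameters:", p)               # converges to (400, 250)
```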
Abstract:
This thesis deals with the problem of Information Systems design for Corporate Management. It shows that the results of applying current approaches to Management Information Systems and Corporate Modelling fully justify a fresh look at the problem. The thesis develops an approach to design based on Cybernetic principles and theories. It looks at Management as an informational process and discusses the relevance of regulation theory to its practice. The work proceeds around the concept of change and its effects on the organization's stability and survival. The idea of looking at organizations as viable systems is discussed and a design to enhance survival capacity is developed. Taking Ashby's theory of adaptation and developments on ultra-stability as a theoretical framework, and considering the conditions for learning and foresight, it deduces that a design should include three basic components: a dynamic model of the organization-environment relationships; a method to spot significant changes in the value of the essential variables and in a certain set of parameters; and a Controller able to conceive and change the other two elements and to make choices among alternative policies. Further consideration of the conditions for rapid adaptation in organisms composed of many parts, and of the law of Requisite Variety, determines that successful adaptive behaviour requires a certain functional organization. Beer's model of viable organizations is put in relation to Ashby's theory of adaptation and regulation. The use of the ultra-stable system as an abstract unit of analysis permits the development of a rigorous taxonomy of change: it starts by distinguishing between change within behaviour and change of behaviour, and completes the classification with organizational change. It relates these changes to the logical categories of learning, connecting the topic of Information System design with that of organizational learning.
Abstract:
Safety enforcement practitioners within Europe and marketers, designers or manufacturers of consumer products need to determine compliance with the legal test of "reasonable safety" for consumer goods, to reduce the "risks" of injury to the minimum. To enable freedom of movement of products, a method for safety appraisal is required for use as an "expert" system of hazard analysis by non-experts in the safety testing of consumer goods, for consistent implementation throughout Europe. Safety testing approaches and the concepts of risk assessment and hazard analysis are reviewed in developing a model for appraising consumer product safety which seeks to integrate the human factors contributions of risk assessment, hazard perception, and information processing. The model develops a system of hazard identification, hazard analysis and risk assessment which can be applied to a wide range of consumer products through a series of systematic checklists and matrices, and applies alternative numerical and graphical methods for calculating a final product safety risk assessment score. It is then applied in its pilot form by selected "volunteer" Trading Standards Departments to a sample of consumer products. A series of questionnaires is used to select participating Trading Standards Departments, to explore the contribution of potential subjective influences, and to establish views regarding the usability and reliability of the model and any preferences for the risk assessment scoring system used. The outcome of the two-stage hazard analysis and risk assessment process is examined to determine the consistency of the hazard analysis results and of the final decisions regarding the safety of the sample products, and to determine any correlation between the decisions made using the model and those made using the alternative risk assessment scoring methods. The research also identifies a number of opportunities for future work, and indicates a number of areas where further work has already begun.
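The abstract does not reproduce the checklists, matrices or scoring rules, so the following is only a generic sketch of a numerical risk assessment score of the severity-times-likelihood kind; the scales, cut-offs and action bands are hypothetical placeholders, not the model's actual scheme.

```python
# Generic sketch of a severity x likelihood risk score. The rating scales
# and the band thresholds below are illustrative assumptions only.
SEVERITY = {"minor": 1, "moderate": 2, "serious": 3, "critical": 4}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}

def risk_score(severity: str, likelihood: str) -> int:
    """Combine a severity rating and a likelihood rating into one score."""
    return SEVERITY[severity] * LIKELIHOOD[likelihood]

def risk_band(score: int) -> str:
    """Map a numeric score onto an action band (illustrative cut-offs)."""
    if score >= 12:
        return "unacceptable - product not reasonably safe"
    if score >= 6:
        return "tolerable only with risk reduction"
    return "broadly acceptable"

score = risk_score("serious", "likely")
print(score, "->", risk_band(score))
```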
Abstract:
The production and uses of coal tar are reviewed, as are the uses of steroids and cytotoxic agents in the treatment of psoriasis, together with a review of the condition itself. An attempt was made to improve the efficacy and cosmetic acceptability of a low-temperature tar by screening fractions of this tar derived from a variety of separation procedures. The most efficacious fraction was the highest-boiling acid fraction, which is believed to consist mainly of mono- and di-hydric phenols. A time and concentration study showed that the optimum regime was the application of a 10% concentration in 5% wool fat in soft, yellow paraffin daily for 21 days. The mouse tail skin was selected as an experimental model to ascertain the efficacy of fractions, because of the similarities between this skin and the psoriatic lesion. The activity of a fraction was monitored by the induction of a granular layer in the mouse tail epidermis. Because coal tar is not an easy medium to work with, and the active fractions showed no increase in cosmetic acceptability over the parent coal tar, likely coal tar constituents were selected for screening on the basis of phenolic character and the molecular weight range elucidated by mass spectroscopy. Thirty-two potential anti-psoriatic agents were screened on mouse tail. Two catechols, 3,5-di-t-butyl and 4-t-butyl catechols, were active. Other structures showed little or no activity. Twenty-four catechols were screened and two extremely active catechols were discovered, 3-methyl-5-t-octyl and 5-methyl-3-t-octyl catechols. The screening of catechol-rich coal tar fractions and of a coal tar fraction from which the catechols had been removed by oxidation showed that some anti-psoriatic activity was contained in the catechol fraction of coal tar. Attempts to elucidate the mode of action of these two compounds met with little success, but two modes of action are suggested.
Abstract:
The research examines the deposition of airborne particles which contain heavy metals and investigates the methods that can be used to identify their sources. The research focuses on lead and cadmium because these two metals are of growing public and scientific concern on environmental health grounds. The research consists of three distinct parts. The first is the development and evaluation of a new deposition measurement instrument - the deposit cannister - designed specifically for large-scale surveys in urban areas. The deposit cannister is specifically designed to be cheap, robust and versatile, and therefore to permit comprehensive high-density urban surveys. The siting policy reduces contamination from locally resuspended surface dust. The second part of the research has involved detailed surveys of heavy metal deposition in Walsall, West Midlands, using the new high-density measurement method. The main survey, conducted over a six-week period in November-December 1982, provided 30-day samples of deposition at 250 different sites. The results have been used to examine the magnitude and spatial variability of deposition rates in the case-study area, and to evaluate the performance of the measurement method. The third part of the research has been to conduct a 'source-identification' exercise. The methods used have been receptor models - factor analysis and cluster analysis - and a predictive source-based deposition model. The results indicate that there are six main source processes contributing to deposition of metals in the Walsall area: coal combustion, vehicle emissions, ironfounding, copper refining and two general industrial/urban processes. A source-based deposition model has been calibrated using factor scores for one source factor as the dependent variable, rather than metal deposition rates, thus avoiding problems traditionally encountered in calibrating models in complex multi-source areas. Empirical evidence supports the hypothesised association of this factor with emissions of metals from the ironfoundry industry.
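A minimal sketch of the receptor-model step is given below: factor analysis of a sites-by-metals deposition matrix extracts candidate source factors, whose per-site scores could then serve as the dependent variable of a source-based model. The synthetic data, the number of factors and the library used (scikit-learn) are illustrative assumptions, not the original analysis.

```python
# Minimal sketch of a receptor model: factor analysis of a sites x metals
# deposition matrix to extract candidate source factors. All data below are
# synthetic placeholders for the 250-site survey described above.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic survey: 250 sites x 6 metals, generated from two hidden
# "sources" plus noise.
sources = rng.lognormal(size=(250, 2))
loadings = rng.uniform(0.0, 1.0, size=(2, 6))
deposition = sources @ loadings + 0.1 * rng.standard_normal((250, 6))

X = StandardScaler().fit_transform(deposition)
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
scores = fa.fit_transform(X)                 # per-site factor scores

print("loadings (factors x metals):")
print(np.round(fa.components_, 2))
# scores[:, 0] could now be regressed on emission/dispersion predictors
# instead of the raw metal deposition rates, as in the calibration step above.
```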
Abstract:
The study developed statistical techniques to evaluate visual field progression for use with the Humphrey Field Analyzer (HFA). The long-term fluctuation (LF) was evaluated in stable glaucoma. The magnitude of both LF components showed little relationship with MD, CPSD and SF. An algorithm was proposed for determining the clinical necessity for a confirmatory follow-up examination. The between-examination variability was determined for the HFA Standard and FASTPAC algorithms in glaucoma. FASTPAC exhibited greater between-examination variability than the Standard algorithm across the range of sensitivities and with increasing eccentricity. The difference in variability between the algorithms had minimal clinical significance. The effect of repositioning the baseline in the Glaucoma Change Probability Analysis (GCPA) was evaluated. The global baseline of the GCPA limited the detection of progressive change at a single stimulus location. A new technique was developed: pointwise univariate linear regression (ULR) of absolute sensitivity, and of pattern deviation, against time of follow-up. In each case, pointwise ULR was more sensitive to localised progressive changes in sensitivity than ULR of MD alone. Small changes in sensitivity were more readily detected by the pointwise ULR than by the GCPA. A comparison between the outcomes of pointwise ULR for all fields and for the last six fields revealed both linear and curvilinear declines in absolute sensitivity and pattern deviation. A method for delineating progressive loss in glaucoma, based upon the error in the forecasted sensitivity of a multivariate model, was developed. Multivariate forecasting exhibited little agreement with GCPA in glaucoma but showed promise for monitoring visual field progression in OHT patients. The recovery of sensitivity in optic neuritis over time was modelled with a cumulative Gaussian function. The rate and level of recovery were greater in the peripheral than in the central field. Probability models to forecast the field of recovery were proposed.
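The pointwise ULR idea can be sketched briefly: sensitivity at each stimulus location is regressed on time of follow-up, and locations with a significantly negative slope are flagged as progressing. The synthetic data, slope criterion and p-value cut-off below are illustrative only, not the study's actual decision rules.

```python
# Minimal sketch of pointwise univariate linear regression (ULR) for visual
# field progression. The field layout, decline rate and flagging criterion
# are illustrative placeholders.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)

n_fields, n_locations = 8, 76                 # e.g. 8 visits, one 30-2 field per visit
years = np.linspace(0.0, 3.5, n_fields)       # time of follow-up in years

# Synthetic series: mostly stable locations plus a handful of progressing ones.
sensitivity = 30.0 + rng.normal(0.0, 1.5, size=(n_fields, n_locations))
progressing = rng.choice(n_locations, size=5, replace=False)
sensitivity[:, progressing] -= 2.0 * years[:, None]   # -2 dB/year decline

flagged = []
for loc in range(n_locations):
    fit = linregress(years, sensitivity[:, loc])      # one regression per location
    if fit.slope < -1.0 and fit.pvalue < 0.05:        # illustrative criterion
        flagged.append(loc)

print("locations flagged as progressing:", sorted(flagged))
```

Regressing every location separately is what makes the technique sensitive to localised loss that a single regression of MD against time would average away.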
Abstract:
Objective: Biomedical event extraction is concerned with events describing changes in the state of bio-molecules, as reported in the literature. Compared to the protein-protein interaction (PPI) extraction task, which often only involves the extraction of binary relations between two proteins, biomedical event extraction is much harder, since it needs to deal with complex events consisting of embedded or hierarchical relations among proteins, events, and their textual triggers. In this paper, we propose an information extraction system based on the hidden vector state (HVS) model, called HVS-BioEvent, for biomedical event extraction, and investigate its capability in extracting complex events. Methods and material: HVS has previously been employed for extracting PPIs. In HVS-BioEvent, we propose an automated way to generate abstract annotations for HVS training and further propose novel machine learning approaches for event trigger word identification and for biomedical event extraction from the HVS parse results. Results: Our proposed system achieves an F-score of 49.57% on the corpus used in the BioNLP'09 shared task, which is only 2.38% lower than the best-performing system, by UTurku, in the BioNLP'09 shared task. Nevertheless, HVS-BioEvent outperforms UTurku's system on complex event extraction, with 36.57% vs. 30.52% achieved for extracting regulation events and 40.61% vs. 38.99% for negative regulation events. Conclusions: The results suggest that the HVS model, with its hierarchical hidden state structure, is indeed more suitable for complex event extraction since it can naturally model embedded structural context in sentences.
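The trigger-identification sub-task can be pictured, in much simplified form, as supervised classification of candidate tokens from their surrounding context. The sketch below is not the HVS model or the paper's method; it is a toy stand-in using bag-of-words features and logistic regression, with invented example sentences and labels.

```python
# Toy stand-in for event trigger identification: classify whether a sentence
# context contains a biomedical event trigger. This is NOT the HVS model;
# examples, features and labels are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

contexts = [
    "p53 expression is induced by stress",
    "the drug inhibits NF-kB activity",
    "samples were stored at -80 degrees",
    "binding of TRAF2 to the receptor",
    "cells were washed twice with PBS",
    "STAT3 phosphorylation increased twofold",
]
labels = [1, 1, 0, 1, 0, 1]   # 1 = contains an event trigger word

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(contexts, labels)
print(clf.predict(["IL-2 transcription is upregulated"]))
```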
Abstract:
From a Social Identity Theory perspective, organisational identification arises through a cognitive process of self-categorisation. As a consequence a person need not have a formal relationship with an organisation in order to identify with it. In this conceptual paper, the authors draw on this proposal to argue that future members are capable of identifying with an organisation prior to entry, and that this initial pre-entry identification could contribute to a person’s subsequent post-entry organisational identification. The paper further suggests that because no distinction need be drawn between organisational identification in current and future members, we might expect to find the same antecedents of identification in both instances. The group engagement model (Tyler and Blader 2003) is called on to propose that when a future member experiences pride in, and respect from, an organisation before they join, this should positively influence their pre-entry organisational identification. The authors explore the managerial implications of these propositions, and argue that an organisation’s actions and practices that have been shown to influence a post-entry organisational identification should have an equivalent impact on future members’ organisational identification when observed during the pre-entry period. Two examples of such practices, organisational support and organisational communication, are used to illustrate this suggestion and a number of ways are discussed through which these practices may be experienced by a person before they join an organisation.
Abstract:
Over the past forty years the corporate identity literature has developed to a point of maturity where it currently contains many definitions and models of the corporate identity construct at the organisational level. The literature has evolved by developing models of corporate identity or by considering corporate identity in relation to new and developing themes, e.g. corporate social responsibility. It has evolved into a multidisciplinary domain, recently incorporating constructs from other literatures to further its development. However, the literature has a number of limitations. An overarching and universally accepted definition of corporate identity remains elusive, potentially leaving the construct without a clear definition. Only a few corporate identity definitions and models, at the corporate level, have been empirically tested. The corporate identity construct is overwhelmingly defined and theoretically constructed at the corporate level, leaving the literature without a detailed understanding of its influence at the individual stakeholder level. Front-line service employees (FLEs) form a component in a number of corporate identity models developed at the organisational level. FLEs deliver the services of an organisation to its customers, and represent the organisation by communicating and transporting its core defining characteristics to customers through continual customer contact and interaction. This person-to-person contact between an FLE and the customer is termed a service encounter, and service encounters influence a customer’s perception of both the service delivered and the associated level of service quality. Therefore this study, for the first time, defines, theoretically models and empirically tests corporate identity at the individual FLE level, termed FLE corporate identity. The study uses the services marketing literature to characterise an FLE’s operating environment, arriving at five potential dimensions of the FLE corporate identity construct. These are scrutinised against existing corporate identity definitions and models to arrive at a definition for the construct. In reviewing the corporate identity, services marketing, branding and organisational psychology literature, a theoretical model is developed for FLE corporate identity, which is empirically and quantitatively tested with FLEs in seven stores of a major national retailer. Following rigorous construct reliability and validity testing, the 601 usable responses are used to estimate a confirmatory factor analysis and structural equation model for the study. The results for the individual hypotheses and the structural model are very encouraging, as they fit the data well and support a definition of FLE corporate identity. This study makes contributions to the branding, services marketing and organisational psychology literature, but its principal contribution is to extend the corporate identity literature into a new area of discourse and research, that of FLE corporate identity.
Abstract:
The appraisal and relative performance evaluation of nurses are very important and beneficial for both nurses and employers in an era of clinical governance, increased accountability and high standards of health care services. They enhance and consolidate the knowledge and practical skills of nurses through the identification of training and career development plans, as well as improving health care quality, increasing job satisfaction and promoting the cost-effective use of resources. In this paper, a data envelopment analysis (DEA) model is proposed for the appraisal and relative performance evaluation of nurses. The model is validated on thirty-two nurses working at an Intensive Care Unit (ICU) at one of the most recognized hospitals in Lebanon. The DEA was able to classify nurses into efficient and inefficient ones. The set of efficient nurses was used to establish an internal best-practice benchmark and to project career development plans for improving the performance of the inefficient nurses. The DEA result confirmed the ranking of some nurses and highlighted injustice in other cases produced by the currently practiced appraisal system. Further, the DEA model is shown to be an effective talent management and motivational tool, as it can provide clear managerial plans related to promotion, training and development activities from the perspective of nurses, hence increasing their satisfaction, motivation and acceptance of appraisal results. Due to such features, the model is currently being considered for implementation at the ICU. Finally, the ratio of the number of DEA units to the number of input/output measures is revisited, with new suggested values for its upper and lower limits depending on the type of DEA model and the desired number of efficient units from a managerial perspective.
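To make the DEA idea concrete, the sketch below solves an input-oriented CCR model in multiplier form, one linear programme per nurse, using scipy; the two inputs and two outputs (and their values) are hypothetical placeholders rather than the measures used in the study.

```python
# Minimal sketch of an input-oriented CCR DEA model in multiplier form,
# solved once per nurse (DMU) with scipy's linprog. The inputs/outputs below
# (hours worked, training cost; patients cared for, tasks completed) are
# hypothetical placeholders, not the study's measures.
import numpy as np
from scipy.optimize import linprog

# rows = nurses (DMUs), columns = measures
inputs  = np.array([[40.0, 2.0], [38.0, 3.5], [42.0, 1.5], [36.0, 2.5]])
outputs = np.array([[30.0, 55.0], [28.0, 60.0], [35.0, 50.0], [20.0, 40.0]])
n, m, s = inputs.shape[0], inputs.shape[1], outputs.shape[1]

def ccr_efficiency(o):
    """Efficiency of DMU o: max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0."""
    c = np.concatenate([-outputs[o], np.zeros(m)])          # maximise u.y_o
    A_ub = np.hstack([outputs, -inputs])                    # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), inputs[o]])[None]   # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

for o in range(n):
    print(f"nurse {o}: efficiency = {ccr_efficiency(o):.3f}")
```

Units with an efficiency of 1.0 form the best-practice frontier; the weights chosen for each inefficient unit indicate which input/output mix the corresponding development plan should target.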
Abstract:
This study examined an integrated model of the antecedents and outcomes of organisational and overall justice using a sample of Indian Call Centre employees (n = 458). Results of structural equation modelling (SEM) revealed that the four organisational justice dimensions relate to overall justice. Further, work group identification mediated the influence of overall justice on counterproductive work behaviors, such as presenteeism and social loafing, while conscientiousness was a significant moderator between work group identification and presenteeism and social loafing. Theoretical and practical implications are discussed.
Abstract:
We propose a model documenting the relationship between interpersonal attachment style and identification with groups. We hypothesized that following threat to a romantic interpersonal relationship higher attachment anxiety would be associated with lowered tendencies to identify with groups. In two studies using varied social groups we observed support for this hypothesis. In Experiment 1 we found that participants higher in attachment anxiety identified less with a salient ingroup after imagining a distressing argument with their romantic partner. In Experiment 2 we replicated these findings using an implicit measure of social identification and additionally observed a moderating role for attachment avoidance. We discuss the implications of these findings for theoretical models of interpersonal attachment and social identification.
Abstract:
The twin arginine translocation (TAT) system ferries folded proteins across the bacterial membrane. Proteins are directed into this system by the TAT signal peptide present at the amino terminus of the precursor protein, which contains the twin arginine residues that give the system its name. There are currently only two computational methods for the prediction of TAT translocated proteins from sequence. Both methods have limitations that make the creation of a new algorithm for TAT-translocated protein prediction desirable. We have developed TATPred, a new sequence-model method, based on a Naïve-Bayesian network, for the prediction of TAT signal peptides. In this approach, a comprehensive range of models was tested to identify the most reliable and robust predictor. The best model comprised 12 residues: three residues prior to the twin arginines and the seven residues that follow them. We found a prediction sensitivity of 0.979 and a specificity of 0.942.
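A minimal sketch of a per-position (categorical) Naive Bayes scorer over that 12-residue window (three residues before the twin arginines, the RR pair itself, and the seven residues that follow) is shown below; the toy training windows, the add-one smoothing and the log-odds scoring are illustrative assumptions, not TATPred's actual data or network.

```python
# Minimal sketch: per-position Naive Bayes over the 12-residue window
# described above (3 residues, RR, then 7 residues). The training windows
# and smoothing are toy placeholders, not TATPred's data or model.
import math
from collections import Counter

AA = "ACDEFGHIKLMNPQRSTVWY"
WIN = 12

def train(windows):
    """Per-position residue probabilities with add-one smoothing."""
    counts = [Counter(w[i] for w in windows) for i in range(WIN)]
    total = len(windows)
    return [{a: (c[a] + 1) / (total + len(AA)) for a in AA} for c in counts]

def log_likelihood(model, window):
    return sum(math.log(model[i][window[i]]) for i in range(WIN))

# Toy positive (TAT-like, RR at positions 4-5) and negative 12-residue windows.
pos = ["AISRRDFLKAAG", "TLSRRQFLKGAA", "AMTRRSFLKTAG", "GASRRTFLKGLG"]
neg = ["MKKLLALAVALA", "MKRVLSLVVAAL", "MKATKLVLGAVI", "MNKKVLTLSAVA"]

pos_model, neg_model = train(pos), train(neg)

query = "AVSRRNFLKGAG"
score = log_likelihood(pos_model, query) - log_likelihood(neg_model, query)
print("log-odds of being a TAT signal window:", round(score, 2))
```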