910 results for Andersen and Newman model
Abstract:
Cardboard and balsa model as seen from above.
Abstract:
Brand extensions are increasingly used by multinational corporations in emerging markets such as China. However, understanding of how consumers in emerging markets evaluate brand extensions is hampered by a lack of research in emerging-market contexts. To address this knowledge void, we built on an established Western brand extension evaluation framework, namely that of Aaker and Keller (1990, 'Consumer evaluations of brand extensions', Journal of Marketing, 54(1), 27–41), and extended the model by incorporating two new factors: perceived fit based on brand image consistency, and competition intensity in the brand extension category. These two factors were added in recognition of the distinctive considerations that consumers in emerging markets bring to their brand extension evaluations. The extended model was tested in an empirical experiment with consumers in China. The results partly validated the Aaker and Keller model, and both newly added factors were found to significantly influence consumers' evaluation of brand extensions. More importantly, one of the proposed factors, consumer-perceived fit based on brand image consistency, was found to be more significant than all the factors in Aaker and Keller's original model, suggesting that their model may be limited in explaining how consumers in emerging markets evaluate brand extensions. Further research implications and limitations are discussed in the paper.
Abstract:
In this paper we propose a data envelopment analysis (DEA) based method for assessing the comparative efficiencies of units operating production processes where input-output levels are inter-temporally dependent. One cause of inter-temporal dependence between input and output levels is capital stock, which influences output levels over many production periods. Such units cannot be assessed by traditional or 'static' DEA, which assumes input-output correspondences are contemporaneous in the sense that the output levels observed in a time period are the product solely of the input levels observed during that same period. The method developed in the paper overcomes the problem of inter-temporal input-output dependence by using input-output 'paths' mapped out by operating units over time as the basis for assessing them. As an application we compare the results of the dynamic and static models for a set of UK universities. The paper suggests that the dynamic model captures efficiency better than the static model. © 2003 Elsevier Inc. All rights reserved.
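For readers unfamiliar with the static baseline the paper extends, the following is a minimal sketch of input-oriented CCR DEA solved as a linear programme with SciPy; the function name and the toy data are illustrative assumptions, and the paper's dynamic, path-based formulation is not reproduced here.

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o (illustrative helper).
    X: (m, n) inputs, Y: (s, n) outputs; columns are units."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    # inputs:  sum_j lam_j x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # outputs: -sum_j lam_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# three hypothetical units, one input and one output each
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[1.0, 2.0, 3.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])

Each unit's score is the smallest uniform input contraction theta for which a non-negative combination of the observed units still matches its outputs; the dynamic method replaces these single-period observations with whole input-output paths.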
Abstract:
The purpose of this thesis is to conduct empirical research in corporate Thailand in order to (1) validate the Spirit at Work Scale; (2) investigate the relationships between individual spirit at work and three employee work attitudinal variables (job satisfaction, organisational identification and psychological well-being) and three organisational outcomes (in-role performance, organisational citizenship behaviours (OCB), and turnover intentions); (3) further examine causal relations among these organisational behaviour variables with a longitudinal design; (4) examine the three employee work attitudes as mediator variables between individual spirit at work and the three organisational outcomes; and (5) explore the potential antecedents of organisational conditions that foster employees' experience of individual spirit at work. Two pilot studies, with 155 UK and 175 Thai participants, were conducted for validation testing of the main measure used in this study: the Spirit at Work Scale (Kinjerski & Skrypnek, 2006a). The results of the two studies, including discriminant validity analyses, provided strong supporting evidence that the Spirit at Work Scale (SAWS) is a psychometrically sound measure and a construct distinct from the three work attitude constructs. The final SAWS model contains twelve items in a three-factor structure (meaning in work, sense of community, and spiritual connection) in which the sub-factors loaded on a higher-order factor, and it had very acceptable reliability. In line with these results it was decided to use the second-order SAWS model for the Thai samples in the main study and subsequent analysis. A total of 715 completed questionnaires were received from the first wave of data collection during July–August 2008; the second wave was conducted within the same organisations and 501 completed questionnaires were received during March–April 2009. Data were obtained through 49 organisations of three types within Thailand: public organisations, for-profit organisations, and not-for-profit organisations. Confirmatory factor analyses of all measures used in the study and the hypothesised model were tested with structural equation modelling techniques. The results strongly supported the direct structural model and partially supported the fully mediated model. Moreover, findings differed across self-report and supervisor ratings in the performance and OCB models. Additionally, the antecedent conditions that foster employees' experience of individual spirit at work and the implications of these findings for research and practice are discussed.
Abstract:
Adapting to blurred images makes in-focus images look too sharp, and vice versa (Webster et al, 2002 Nature Neuroscience 5 839-840). We asked how such blur adaptation is related to contrast adaptation. Georgeson (1985 Spatial Vision 1 103-112) found that grating contrast adaptation followed a subtractive rule: the perceived (matched) contrast of a grating was fairly well predicted by subtracting some fraction k (~0.3) of the adapting contrast from the test contrast. Here we apply that rule to the responses of a set of spatial filters at different scales and orientations. Blur is encoded by the pattern of filter response magnitudes over scale. We tested two versions - the 'norm model' and the 'fatigue model' - against blur-matching data obtained after adaptation to sharpened, in-focus or blurred images. In the fatigue model, filter responses are simply reduced by exposure to the adapter. In the norm model, (a) the visual system is pre-adapted to a focused world and (b) the discrepancy between observed and expected responses to the experimental adapter leads to additional reduction (or enhancement) of filter responses during experimental adaptation. The two models are closely related, but only the norm model gave a satisfactory account of results across the four experiments analysed, with one free parameter k. This model implies that the visual system is pre-adapted to focused images, that adapting to in-focus or blank images produces no change in adaptation, and that adapting to sharpened or blurred images changes the state of adaptation, leading to changes in perceived blur or sharpness.
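A minimal numerical sketch may make the two models concrete. Filter responses over scale are represented here by an assumed power-law amplitude spectrum (the channel frequencies, the f^-alpha spectra, and the parameter values are illustrative assumptions, not the study's stimuli), and the subtractive rule with k = 0.3 is applied in both forms.

import numpy as np

# channel responses modelled as amplitude ~ f^(-alpha); alpha > 1 mimics a
# blurred image, alpha < 1 a sharpened one (an illustrative assumption)
freqs = 2.0 ** np.arange(6)           # channel centre frequencies

def responses(alpha):
    return freqs ** (-alpha)

k = 0.3                                # subtractive fraction, Georgeson (1985)
r_test  = responses(1.0)               # in-focus test image
r_norm  = responses(1.0)               # expected responses: a focused world
r_adapt = responses(1.5)               # blurred experimental adapter

fatigue = np.maximum(r_test - k * r_adapt, 0)             # fatigue model
norm    = np.maximum(r_test - k * (r_adapt - r_norm), 0)  # norm model
print(fatigue, norm, sep="\n")

Note that when the adapter is in focus (r_adapt equal to r_norm) the norm model leaves responses unchanged, matching the abstract's conclusion, whereas the fatigue model always reduces them.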
Abstract:
Information systems have developed to the stage that there is plenty of data available in most organisations but there are still major problems in turning that data into information for management decision making. This thesis argues that the link between decision support information and transaction processing data should be through a common object model which reflects the real world of the organisation and encompasses the artefacts of the information system. The CORD (Collections, Objects, Roles and Domains) model is developed which is richer in appropriate modelling abstractions than current Object Models. A flexible Object Prototyping tool based on a Semantic Data Storage Manager has been developed which enables a variety of models to be stored and experimented with. A statistical summary table model COST (Collections of Objects Statistical Table) has been developed within CORD and is shown to be adequate to meet the modelling needs of Decision Support and Executive Information Systems. The COST model is supported by a statistical table creator and editor COSTed which is also built on top of the Object Prototyper and uses the CORD model to manage its metadata.
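As an illustration of the four abstractions the CORD acronym names, here is a hypothetical Python rendering; the class names, fields, and the hospital-flavoured example are invented for this sketch, since the abstract does not give the model's formal definitions.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Domain:                 # the set of legal values an attribute may take
    name: str
    values: List[str]

@dataclass
class Role:                   # a part an object plays in some context
    name: str

@dataclass
class CORDObject:             # a real-world artefact of the information system
    name: str
    attributes: Dict[str, str] = field(default_factory=dict)
    roles: List[Role] = field(default_factory=list)

@dataclass
class Collection:             # a named grouping of objects; the unit a COST
    name: str                 # summary table would aggregate over
    members: List[CORDObject] = field(default_factory=list)

wards = Collection("Wards", [CORDObject("Ward-1", {"beds": "24"},
                                        [Role("AdmitsPatients")])])
print(len(wards.members), wards.members[0].roles[0].name)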
Abstract:
Pin-on-disc wear machines were used to study the boundary lubricated friction and wear of AISI 52100 steel sliding partners. Boundary conditions were obtained by using speed and load combinations which resulted in friction coefficients in excess of 0.1. Lubrication was achieved using zero, 15 and 1000 ppm concentrations of an organic dimeric acid additive in a hydrocarbon base stock. Experiments were performed for sliding speeds of 0.2, 0.35 and 0.5 m/s for a range of loads up to 220 N. Wear rate, frictional force and pin temperature were continually monitored throughout the tests and, where possible, complementary methods of measurement were used to improve accuracy. A number of analytical techniques were used to examine wear surfaces, debris and lubricants, namely: Scanning Electron Microscopy (SEM), Auger Electron Spectroscopy (AES), Powder X-ray Diffraction (XRD), X-ray Photoelectron Spectroscopy (XPS), optical microscopy, Backscattered Electron Detection (BSED) and several metallographic techniques. Friction forces and wear rates were found to vary linearly with load for any given combination of speed and additive concentration. The additive itself was found to act as a surface oxidation inhibitor and as a lubricity enhancer, particularly in the case of the higher (1000 ppm) concentration. Wear was found to be due to a mild oxidational mechanism at low additive concentrations and a more severe metallic mechanism at higher concentrations, with evidence of metallic delamination in the latter case. Scuffing loads were found to increase with increasing additive concentration and decrease with increasing speed, as would be predicted by classical models of additive behaviour as an organo-metallic soap film. Heat flow considerations tended to suggest that surface temperature was not the overriding controlling factor in oxidational wear, and a model is proposed which suggests that oxygen concentration in the lubricant is the controlling factor in oxide growth and wear.
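The reported linear variation of friction force and wear rate with load is the behaviour described by the classical Archard relation, quoted here as textbook background rather than as the thesis's own model:

\[
\frac{V}{s} = K\,\frac{W}{H}
\]

where V is the worn volume, s the sliding distance, W the normal load, H the hardness of the softer partner, and K a dimensionless wear coefficient; at fixed speed and additive concentration, doubling W doubles the volumetric wear rate.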
Abstract:
Recent debates about national identity, belonging and community cohesion can appear to suggest that ethnicity is a static entity and that ethnic difference is a source of conflict in itself. "Ethnicities and Values in a Changing World" presents an alternative account of ethnicity and calls into question models of community cohesion that present ethnicity as the source of antagonisms and differences that must be overcome. It suggests instead that ethnicity is itself multiple and changing and is unlikely to be a basis for articulating shared values. This volume brings together an international team of leading scholars in the field of ethnic studies in order to examine innovative articulations of ethnicity and challenge the contention that ethnicity is static or that it necessarily represents traditional values and cultures. Asserting that ethnicity is deployed in part as an expression of values and a model of ethical practice, this book examines displays of ethnicity as assertions of identity and statements about way of life, sense of entitlement and manner of connection to others. "Ethnicities and Values in a Changing World" draws together debates about the articulation of ethnic identity, the nature of our relation to each other and discussions of everyday ethics, thus engaging with discussions of racism, multiculturalism and community cohesion. As such, it will appeal not only to sociologists, but to anyone working in the fields of cultural studies, race and ethnicity, globalization, migration and anthropology.
Table of Contents:
- Introduction: ethnicities, values and old-fashioned racism, Gargi Bhattacharyya
- Teaching race and racism in the 21st century: thematic considerations, Howard Winant
- Diaspora conversations: ethics, ethicality, work and life
- Migrant women's networking: new articulations of transnational ethnicity, Ronit Lentin
- 'The people do what the political class isn't able to do': antigypsyism, ethnicity denial and the politics of racism without racism, Robbie McVeigh
- Violent urban protest - identities, ethics and Islamism, Max Farrar
- Beliefs, boundaries and belonging: African Pentecostals in Ireland, Abel Ugba
- On being a 'good' refugee, John Gabriel and Jenny Harding
- Narrating lived experience in a binational community in Costa Rica, Carlos Sandoval Garcia
- Conclusion: ethnicity and ethicality in an unequal world, Gargi Bhattacharyya
- Index
Abstract:
Open-loop operation of the stepping motor exploits the inherent advantages of the machine. For near-optimum operation in this mode, however, an accurate system model is required to facilitate controller design. Such a model must be comprehensive and take account of the non-linearities inherent in the system. The result is a complex formulation which can be made manageable with a computational aid. A digital simulation of a hybrid-type stepping motor and its associated drive circuit is proposed. The simulation is based upon a block diagram model which includes reasonable approximations to the major non-linearities, and it is shown to yield accurate performance predictions. The determination of the transfer functions is based upon consideration of the physical processes involved rather than upon direct input-output measurements. The effects of eddy currents, saturation, hysteresis, drive circuit characteristics and non-linear torque-displacement characteristics are considered, and methods of determining transfer functions which take account of these effects are offered. The static torque-displacement characteristic is considered in detail and a model is proposed which predicts static torque for any combination of phase currents and shaft position. Methods of predicting the characteristic directly from machine geometry are investigated. Drive circuit design for high-efficiency operation is considered and a model of a bipolar, bilevel circuit is proposed. The transfers between stator voltage and stator current and between stator current and air-gap flux are complicated by the effects of eddy currents, saturation and hysteresis. Frequency response methods, combined with average inductance measurements, are shown to yield reasonable transfer functions. The modelling procedure and subsequent digital simulation are concluded to be a powerful method of non-linear analysis.
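The following is a minimal sketch of the kind of lumped simulation such a model supports: a single full step of a two-phase hybrid motor with an assumed sinusoidal static torque-displacement characteristic and second-order mechanical dynamics. All parameter values are illustrative, and the thesis's treatment of eddy currents, saturation and hysteresis is deliberately omitted.

import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters, not taken from the thesis
J, B = 1e-5, 1e-3             # rotor inertia (kg m^2), viscous friction (N m s/rad)
Tmax, Nr = 0.5, 50            # peak static torque (N m), rotor tooth count
step = 2 * np.pi / (4 * Nr)   # full-step angle for a 2-phase hybrid motor

def rhs(t, y, target):
    theta, omega = y
    # sinusoidal static torque-displacement characteristic about the new
    # equilibrium; the thesis models further non-linearities omitted here
    torque = -Tmax * np.sin(Nr * (theta - target))
    return [omega, (torque - B * omega) / J]

sol = solve_ivp(rhs, (0, 0.05), [0.0, 0.0], args=(step,), max_step=1e-4)
print(f"final position: {sol.y[0, -1]:.5f} rad (target {step:.5f} rad)")

The rotor oscillates about the commanded detent position and settles as the viscous term dissipates energy, the characteristic single-step response of an open-loop stepper.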
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. The process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed in practice under stable and efficient operation. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during column transients, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely: a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimising the differences between the experimental and the model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and were incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured, and very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables; raffinate concentration and extract concentration as controlled variables; and feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as a multi-loop decentralised SISO (Single Input Single Output) system as well as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities and load rejection.
For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The selected pairings, namely rotor speed-raffinate concentration and solvent flowrate-extract concentration, showed weak interaction. Multivariable MPC showed more effective performance than the other conventional techniques, since it accounts for loop interaction, time delays, and constraints on the input and output variables.
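To make the receding-horizon idea concrete, here is a minimal unconstrained MPC sketch for a 2-input, 2-output discrete-time model in NumPy. The state-space matrices, horizon and setpoints are illustrative stand-ins, not the identified reduced-order column models, and a practical MPC would add the input-output constraints mentioned above (typically via a QP solver).

import numpy as np

# illustrative 2x2 discrete-time state-space model
A = np.array([[0.9, 0.05], [0.02, 0.85]])
B = np.array([[0.1, 0.02], [0.01, 0.12]])
C = np.eye(2)
H = 10                                   # prediction horizon

def mpc_step(x, r):
    """Minimise the sum of squared output errors over the horizon by
    stacking the prediction equations Y = F x + G U into least squares."""
    m = B.shape[1]
    F = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(H)])
    G = np.zeros((2 * H, m * H))
    for k in range(H):
        for j in range(k + 1):
            G[2*k:2*k+2, m*j:m*j+m] = C @ np.linalg.matrix_power(A, k - j) @ B
    U = np.linalg.lstsq(G, np.tile(r, H) - F @ x, rcond=None)[0]
    return U[:m]                         # apply only the first move

x = np.zeros(2)
setpoint = np.array([0.5, 0.3])          # raffinate / extract targets
for _ in range(50):
    u = mpc_step(x, setpoint)            # agitator speed, solvent flowrate
    x = A @ x + B @ u
print(np.round(C @ x, 3))                # outputs settle at the setpoints

Because the controller re-optimises over the whole horizon at every sample, interaction between the two loops and any modelled delays are handled automatically, which is the advantage over the decentralised SISO pairings.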
Abstract:
Visualising data for exploratory analysis is a major challenge in many applications. Visualisation allows scientists to gain insight into the structure and distribution of the data, for example by finding common patterns and relationships between samples as well as variables. Typically, visualisation methods like principal component analysis and multi-dimensional scaling are employed. These methods are favoured because of their simplicity, but they cannot cope with missing data, and it is difficult to incorporate prior knowledge about properties of the variable space into the analysis; this is particularly important in the high-dimensional, sparse datasets typical in geochemistry. In this paper we show how to utilise a block-structured correlation matrix using a modification of a well-known non-linear probabilistic visualisation model, the Generative Topographic Mapping (GTM), which can cope with missing data. The block structure supports direct modelling of strongly correlated variables. We show that, by including prior structural information, it is possible to improve both the data visualisation and the model fit. These benefits are demonstrated on artificial data as well as a real geochemical dataset used for oil exploration, where the proposed modifications improved the missing data imputation results by 3 to 13%.
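The block structure itself is straightforward to construct. The sketch below builds a block-diagonal correlation matrix in which variables within a group share a common correlation rho; the group sizes and rho are chosen purely for illustration. In the modified GTM such a matrix would supply the prior structure in place of the usual isotropic assumption.

import numpy as np
from scipy.linalg import block_diag

def block_corr(group_sizes, rho=0.8):
    """Correlation matrix with within-group correlation rho and
    independence across groups (illustrative construction)."""
    blocks = [np.full((g, g), rho) + (1 - rho) * np.eye(g)
              for g in group_sizes]
    return block_diag(*blocks)

# hypothetical grouping of nine geochemical variables into three blocks
Cmat = block_corr([3, 2, 4])
print(Cmat.shape, np.allclose(Cmat, Cmat.T))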
Abstract:
A literature review of work carried out on batch and continuous chromatographic biochemical reactor-separators has been made. The major part of this work has involved the development of a batch chromatographic reactor-separator for the production of dextran and fructose by the action of the enzyme dextransucrase on sucrose. In this reactor, reaction and separation occur simultaneously, thus reducing downstream processing and isolation of products compared with the existing industrial process. The chromatographic reactor consisted of a glass column packed with a stationary phase of cross-linked polystyrene resin in the calcium form. The mobile phase consisted of dextransucrase diluted in deionised water. Initial experiments were carried out on a reactor-separator with an internal diameter of 0.97 cm and a length of 1.5 m. To study the effect of scale-up, the reactor diameter was doubled to 1.94 cm and the length increased to 1.75 m. The results have shown that the chromatographic reactor uses more enzyme than a conventional batch reactor for a given conversion of sucrose, and that an increase in void volume results in higher conversions of sucrose. A comparison of the molecular weight distribution of dextran produced by the chromatographic reactor was made with that from a conventional batch reactor. The results have shown that the chromatographic reactor produces 30% more dextran of molecular weight greater than 150,000 daltons at 20% w/v sucrose concentration than conventional reactors. This is because some of the fructose molecules are prevented from acting as acceptors in the chromatographic reactor by their removal from the reaction zone. In the conventional reactor this is not possible, and therefore a greater proportion of low molecular weight dextran, which has little clinical use, is produced. A theoretical model was developed to describe the behaviour of the reactor-separator, and this model was simulated on a computer. The simulation predictions showed good agreement with experimental results at high eluent flowrates and low conversions.
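As a point of reference for the reaction half of such a model, the sketch below integrates simple Michaelis-Menten consumption of sucrose by dextransucrase in a well-mixed batch reactor. The kinetic constants are assumed values, and the thesis's model additionally couples this reaction with on-column chromatographic separation of the products.

import numpy as np
from scipy.integrate import solve_ivp

Vmax, Km = 5.0, 20.0               # g/L/h and g/L, assumed kinetic constants

def rate(t, s):
    # Michaelis-Menten consumption of sucrose, ds/dt = -Vmax*s/(Km + s)
    return [-Vmax * s[0] / (Km + s[0])]

sol = solve_ivp(rate, (0, 10), [200.0], dense_output=True)  # 20% w/v sucrose
t = np.linspace(0, 10, 6)
print(np.round(sol.sol(t)[0], 1))  # sucrose remaining (g/L) over time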
Abstract:
This work is concerned with the nature of liquid flow across industrial sieve trays operating in the spray, mixed, and emulsified flow regimes. In order to overcome the practical difficulties of removing many samples from a commercial tray, the mass transfer process was investigated in an air-water simulator column by heat transfer analogy. The temperature of the warm water was measured by many thermocouples as the water flowed across the single-pass, 1.2 m diameter sieve tray. The thermocouples were linked to a minicomputer for data storage, and the temperature data were then transferred to a mainframe computer to generate temperature profiles, analogous to concentration profiles. A comprehensive study of the existing tray efficiency models was carried out using computerised numerical solutions, and the calculated results were compared with experimental results published by Fractionation Research Inc. (FRI). The existing models did not show any agreement with the experimental results; only the Porter and Lockett model showed reasonable agreement for certain tray efficiency values. A rectangular active-section tray was constructed and tested to establish the channelling effect and its consequences for circular tray designs. The developed flow patterns showed predominantly flat profiles and some indication of significant liquid flow through the central region of the tray. This confirms that the rectangular tray configuration might not be a satisfactory solution for liquid maldistribution on sieve trays. For a typical industrial tray, the flow of liquid as it crosses the tray from the inlet to the outlet weir could be affected by eddy and momentum mixing of the liquid and by the weir shape, in the axial or the transverse direction or both. Conventional U-shaped profiles developed when the operating conditions were such that the froth dispersion was in the mixed regime, with good liquid temperature distribution in the spray regime. For the 12.5 mm hole diameter tray the constant-temperature profiles were found to lie in the axial direction in the spray regime, and in the transverse direction for the 4.5 mm hole tray. It was observed that the extent of the liquid stagnant zones at the sides of the tray depended on the tray hole diameter and was larger for the 4.5 mm hole tray. The liquid hold-up results show a high liquid hold-up in the areas of the tray with low liquid temperatures; this supports the doubts about the assumption of constant point efficiency across an operating tray. Liquid flow over the outlet weir showed more flow at the centre of the tray at high liquid loading, with low flow at both ends of the weir. The calculated results of the point and tray efficiency model showed a general increase in the calculated point and tray efficiencies with increasing weir loading: as the flow regime changed from spray to mixed, the point and tray efficiencies increased from approximately 30 to 80%. Through the mixed flow regime the efficiencies remained fairly constant, and as the operating conditions were changed to maintain an emulsified flow regime there was a decrease in the resulting efficiencies. The results of the estimated coefficient of mixing for the small and large hole diameter trays show that the extent of liquid mixing on an operating tray generally increased with increasing capacity factor, but decreased with increasing weir loads.
This demonstrates that, above certain weir loads, the effect of the eddy diffusion mechanism on liquid mixing on an operating tray becomes negligible.
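For context on how point efficiency relates to tray efficiency under different degrees of liquid mixing, the two standard limiting cases are, stated here as textbook background rather than as the thesis's own efficiency model: for completely mixed liquid the Murphree tray efficiency equals the point efficiency, E_MV = E_OG, while for plug flow of liquid with no mixing (Lewis's first case)

\[
E_{MV} = \frac{e^{\lambda E_{OG}} - 1}{\lambda}, \qquad \lambda = \frac{m\,G}{L},
\]

where m is the slope of the equilibrium line and G and L are the molar vapour and liquid flowrates. Eddy-diffusion models of the kind studied here interpolate between these limits through a Peclet number that quantifies the degree of mixing.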
Abstract:
The design and implementation of databases involve, firstly, the formulation of a conceptual data model by systematic analysis of the structure and information requirements of the organisation for which the system is being designed; secondly, the logical mapping of this conceptual model onto the data structure of the target database management system (DBMS); and thirdly, the physical mapping of this structured model into the storage structures of the target DBMS. The accuracy of both the logical and the physical mapping determines the performance of the resulting system. This thesis describes research which develops software tools to facilitate the implementation of databases. A conceptual model describing the information structure of a hospital is derived using the Entity-Relationship (E-R) approach, and this model forms the basis for mapping onto the logical model. Rules are derived for automatically mapping the conceptual model onto relational and CODASYL types of data structures. Further algorithms are developed for partly automating the implementation of these models on the INGRES, MIMER and VAX-11 DBMSs.
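The standard rules for the relational side of such a mapping are easy to illustrate: each entity becomes a table, a 1:N relationship becomes a foreign key, and an M:N relationship becomes a junction table. The sketch below applies them to a toy fragment of a hospital model; the table and column names are hypothetical and are not taken from the thesis.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ward (                  -- entity -> table
    ward_id   INTEGER PRIMARY KEY,
    name      TEXT NOT NULL
);
CREATE TABLE patient (               -- entity; 1:N 'occupies' -> foreign key
    patient_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    ward_id    INTEGER REFERENCES ward(ward_id)
);
CREATE TABLE doctor (
    doctor_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL
);
CREATE TABLE treats (                -- M:N 'treats' -> junction table
    doctor_id  INTEGER REFERENCES doctor(doctor_id),
    patient_id INTEGER REFERENCES patient(patient_id),
    PRIMARY KEY (doctor_id, patient_id)
);
""")
print([r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")])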