967 results for Models, Theoretical
Abstract:
The modelling of mechanical structures using finite element analysis has become an indispensable stage in the design of new components and products. Once the theoretical design has been optimised, a prototype may be constructed and tested. What can the engineer do if the measured and theoretically predicted vibration characteristics of the structure are significantly different? This thesis considers the problem of changing the parameters of the finite element model to improve the correlation between a physical structure and its mathematical model. Two new methods are introduced to perform the systematic parameter updating. The first uses the measured modal model to derive the parameter values with the minimum variance. The user must provide estimates for the variance of the theoretical parameter values and the measured data. Previous authors using similar methods have assumed that the estimated parameters and measured modal properties are statistically independent. This will generally be the case during the first iteration but will not be the case subsequently. The second method updates the parameters directly from the frequency response functions. The order of the finite element model of the structure is reduced as a function of the unknown parameters. A method related to a weighted equation error algorithm is used to update the parameters. After each iteration the weighting changes so that on convergence the output error is minimised. The suggested methods are extensively tested using simulated data. An H-frame is then used to demonstrate the algorithms on a physical structure.
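The minimum-variance updating idea described above can be sketched as a single weighted correction step. This is an illustrative sketch only, not the thesis's method: it assumes a linearised sensitivity matrix G relating parameter changes to predicted modal data, and all names and numerical values below are invented for demonstration.

```python
import numpy as np

def minimum_variance_update(theta, V_theta, z_meas, z_pred, G, V_z):
    """One updating iteration: weight the analytic parameters and the
    measured modal data by their user-supplied variances."""
    # Covariance of the predicted modal data (model + measurement uncertainty)
    S = G @ V_theta @ G.T + V_z
    # Gain: how strongly the measurement residual corrects each parameter
    K = V_theta @ G.T @ np.linalg.inv(S)
    theta_new = theta + K @ (z_meas - z_pred)
    V_new = V_theta - K @ G @ V_theta  # updated (reduced) parameter covariance
    return theta_new, V_new

# Toy example: two stiffness-like parameters, one measured natural frequency.
theta = np.array([1.0, 2.0])            # theoretical parameter values
V_theta = np.diag([0.1, 0.1])           # assumed parameter variances
G = np.array([[1.0, 0.5]])              # assumed sensitivity matrix
z_pred = G @ theta                      # model-predicted modal quantity
z_meas = np.array([2.2])                # measured modal quantity
V_z = np.array([[0.05]])                # assumed measurement variance
theta_new, V_new = minimum_variance_update(theta, V_theta, z_meas, z_pred, G, V_z)
```

After the step, the model prediction moves toward the measurement and the parameter variance shrinks, which is also why, as the abstract notes, the updated parameters are no longer statistically independent of the measured data on subsequent iterations.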
Abstract:
We introduce models of heterogeneous systems with finite connectivity defined on random graphs to capture finite-coordination effects on the low-temperature behaviour of finite-dimensional systems. Our models use a description in terms of small deviations of particle coordinates from a set of reference positions, particularly appropriate for the description of low-temperature phenomena. A Born-von Karman-type expansion with random coefficients is used to model effects of frozen heterogeneities. The key quantity appearing in the theoretical description is a full distribution of effective single-site potentials which needs to be determined self-consistently. If microscopic interactions are harmonic, the effective single-site potentials turn out to be harmonic as well, and the distribution of these single-site potentials is equivalent to a distribution of localization lengths used earlier in the description of chemical gels. For structural glasses characterized by frustration and anharmonicities in the microscopic interactions, the distribution of single-site potentials involves anharmonicities of all orders, and both single-well and double-well potentials are observed, the latter with a broad spectrum of barrier heights. The appearance of glassy phases at low temperatures is marked by the appearance of asymmetries in the distribution of single-site potentials, as previously observed for fully connected systems. Double-well potentials with a broad spectrum of barrier heights and asymmetries would give rise to the well-known universal glassy low-temperature anomalies when quantum effects are taken into account. © 2007 IOP Publishing Ltd.
Abstract:
Following a brief description of the atmosphere and ionosphere in Chapter I, we describe in Chapter II how the equations of continuity and momentum for O+, H+, He+ and O++ are derived from the formulations of St-Maurice and Schunk (1977) and Quegan et al. (1981). In Chapter III we investigate the nature of the downward flow of protons in a collapsing post-sunset ionosphere. We derive an analytical form for the limiting temperature; we also note the importance of the polarization field term and conclude that the flow will remain subsonic for realistic conditions. The time-dependent behaviour of He+ under sunspot minimum conditions is investigated in Chapter IV. This is achieved by numerical solution of the O+, H+ and He+ continuity and momentum equations, treating He+ as a minor ion with O+ and H+ as major ions. We found that He+ flows upwards during the daytime and downwards during the night-time. The He+ flux tube content reached a maximum on the 8th day of the integration period and then started to decrease. This is due to the large amount of H+ present at the late stages of the integration period, which makes He+ unable to diffuse through the H+ layer away from the loss region. In Chapter V we investigate the behaviour of O++ using sunspot maximum parameters. Although our results support the findings of Geis and Young (1981) that the large amounts of O++ at the equator are caused mainly by thermal diffusion, the model used by Geis and Young overemphasizes the effect of thermal diffusion. The importance of the O++-O+ collision frequency is also noted. In Chapter VI we extend the work of Chapter IV, presenting a comparative study of H+ and He+ at sunspot minimum and sunspot maximum. In this last chapter all three ions, O+, H+ and He+, are treated theoretically as major ions and we concentrate mainly on light ion contents and fluxes. The results of this chapter indicate that by treating He+ as a minor ion we underestimate He+ and overestimate H+.
Some interesting features concerning the day-to-day behaviour of the light ion fluxes arise. In particular, the daytime H+ fluxes decrease from day to day, in contrast to the work of Murphy et al. (1976). In Appendix A we derive some analytical forms for the optical depth so that the models can include a realistic description of photoionization.
Abstract:
As systems for computer-aided design and production of mechanical parts have developed, there has arisen a need for techniques for the comprehensive description of the desired part, including its 3-D shape. The creation and manipulation of shapes is generally known as geometric modelling. It is desirable that links be established between geometric modellers and machining programs. Currently, unbounded APT and some bounded geometry systems are widely used in manufacturing industry for machining operations such as milling, drilling, boring and turning, applied mainly to engineering parts. APT systems, however, are presently only linked to wire-frame drafting systems. The combination of a geometric modeller and APT will provide a powerful manufacturing system for industry, from the initial design right through to part manufacture using NC machines. This thesis describes a recently developed interface (ROMAPT) between a bounded geometry modeller (ROMULUS) and an unbounded NC processor (APT). A new set of theoretical functions and practical algorithms for the computer-aided manufacturing of 3D solid geometric models has been investigated. This work has led to the development of a sophisticated computer program, ROMAPT, which provides a new link between CAD (in the form of the geometric modeller ROMULUS) and CAM (in the form of the APT NC system). ROMAPT has been used to machine some engineering prototypes successfully, both in soft foam material and in aluminium. It has been demonstrated that the theory and algorithms developed by the author for the computer-aided manufacturing of 3D solid models are both valid and applicable. ROMAPT allows the full potential of a solid geometric modeller (ROMULUS) to be further exploited for NC applications without requiring major investment in a new NC processor. ROMAPT supports output in APT-AC, APT4 and the CAM-I SSRI NC languages.
Abstract:
Much of the geometrical data relating to engineering components and assemblies is stored in the form of orthographic views, either on paper or in computer files. For various engineering applications, however, it is necessary to describe objects in formal geometric modelling terms. The work reported in this thesis is concerned with the development and implementation of concepts and algorithms for the automatic interpretation of orthographic views as solid models. The various rules and conventions associated with engineering drawings are reviewed and several geometric modelling representations are briefly examined. A review of existing techniques for the automatic, and semi-automatic, interpretation of engineering drawings as solid models is given. A new theoretical approach is then presented and discussed. The author shows how the implementation of such an approach for uniform-thickness objects may be extended to more general objects by introducing the concept of 'approximation models'. Means by which the quality of the transformations is monitored are also described. Detailed descriptions of the interpretation algorithms and the software package that were developed for this project are given. The process is then illustrated by a number of practical examples. Finally, the thesis concludes that, using the techniques developed, a substantial percentage of drawings of engineering components could be converted into geometric models with a specific degree of accuracy. This degree is indicative of the suitability of the model for a particular application. Further work on important details is required before a commercially acceptable package is produced.
Abstract:
The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge-based) integrated systems. The thesis starts from the observation that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge-based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the 'classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey was carried out of large UK companies which confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not using these tools already. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful, since there is very little other data available from which to produce an estimate.
A second survey, however, indicated that project managers see estimating as being essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which 'an estimate' is intimately tied to an understanding of what tasks are being planned. Current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of 'classic' program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data flow fan-in/out and post-release reported errors were taken for a set of 80 commercially developed LPA Prolog programs. By redefining the metric counts for Prolog it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
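Of the classic metrics named above, McCabe's cyclomatic complexity is the easiest to illustrate: it counts linearly independent paths through a routine as (number of decision points + 1). The thesis redefines such counts for Prolog; as a hedged sketch of the same idea, the counter below works on Python source instead, and its choice of which AST nodes count as decisions is an assumption, not the thesis's definition.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """McCabe's V(G) = decision points + 1, counted over a parsed AST."""
    # Node types treated as decision points in this sketch
    decisions = (ast.If, ast.For, ast.While, ast.IfExp,
                 ast.ExceptHandler, ast.BoolOp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for d in range(2, x):
        if x % d == 0:
            return "composite"
    return "prime-ish"
"""
```

Here `classify` has three decision points (two `if` statements and one `for` loop), so its complexity is 4; a straight-line function scores the minimum of 1.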
Abstract:
Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process the mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results.
Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
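For readers unfamiliar with PSO, the canonical update that the variants above build on can be sketched in a few lines: each particle's velocity is pulled toward its own best-known position and the swarm's global best. This is the standard textbook algorithm, not the dispersive variant proposed in the thesis; the test function, coefficients and bounds below are illustrative choices.

```python
import random

def sphere(x):
    """Simple benchmark objective: minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's personal best
    gbest = min(pbest, key=f)            # swarm's global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive (personal) + social (global) terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest

best = pso(sphere)
```

Premature convergence, the problem the dispersive variant targets, occurs when all particles collapse onto `gbest` before the search space has been adequately explored.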
Abstract:
Damage to insulation materials located near to a primary circuit coolant leak may compromise the operation of the emergency core cooling system (ECCS). Insulation material in the form of mineral wool fiber agglomerates (MWFA) may be transported to the containment sump strainers, where it may block or penetrate the strainers. Though the impact of MWFA on the pressure drop across the strainers is minimal, corrosion products formed over time may also accumulate in the fiber cakes on the strainers, which can lead to a significant increase in the strainer pressure drop and result in cavitation in the ECCS. An experimental and theoretical study performed by the Helmholtz-Zentrum Dresden-Rossendorf and the Hochschule Zittau/Görlitz is investigating the phenomena that may be observed in the containment vessel during a primary circuit coolant leak. The study entails the generation of fiber agglomerates, the determination of their transport properties in single- and multi-effect experiments, and the long-term effect that corrosion and erosion of the containment internals by the coolant has on the strainer pressure drop. The focus of this paper is on the verification and validation of numerical models that can predict the transport of MWFA. A number of pseudo-continuous dispersed phases of spherical wetted agglomerates represent the MWFA. The size, density, relative viscosity of the fluid-fiber agglomerate mixture and the turbulent dispersion all affect how the fiber agglomerates are transported. In the cases described here, the size is kept constant while the density is modified. This definition affects both the terminal velocity and volume fraction of the dispersed phases. Note that the relative viscosity is only significant at high concentrations.
Three single-effect experiments were used to provide validation data on the transport of the fiber agglomerates under conditions of sedimentation in a quiescent fluid, sedimentation in a horizontal flow and suspension in a horizontal flow. The experiments were performed in a rectangular column for the quiescent fluid and in a racetrack-type channel that provided a near-uniform horizontal flow. The numerical models of sedimentation in the column and the racetrack channel found that the sedimentation characteristics are consistent with the experiments. For channel suspension, the heavier fibers tend to accumulate at the channel base even at high velocities, while lighter phases are more likely to be transported around the channel.
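The link between the modified density parameter and settling behaviour can be illustrated with the Stokes terminal velocity of a small sphere, v_t = g d^2 (rho_p - rho_f) / (18 mu). This is a generic textbook relation, not the study's closure model, and the diameters, densities and fluid properties below are assumed values for demonstration only.

```python
def stokes_terminal_velocity(d, rho_p, rho_f=998.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity of a small sphere under Stokes drag.

    d in m, densities in kg/m^3, mu in Pa*s; returns m/s (positive = sinking).
    """
    return g * d ** 2 * (rho_p - rho_f) / (18.0 * mu)

# Same assumed agglomerate size, two assumed densities: the denser phase
# settles an order of magnitude faster, so it accumulates at the channel
# base while the lighter phase stays suspended and is carried around.
v_heavy = stokes_terminal_velocity(d=1.0e-4, rho_p=1100.0)
v_light = stokes_terminal_velocity(d=1.0e-4, rho_p=1010.0)
```

This is why varying the density of the pseudo-continuous dispersed phases, with size held constant, is sufficient to span the observed range of transport behaviours.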
Abstract:
How are innovative new business models established if organizations constantly compare themselves against existing criteria and expectations? The objective is to address this question from the perspective of innovators and their ability to redefine established expectations and evaluation criteria. The research questions ask whether there are discernible patterns of discursive action through which innovators theorize institutional change, and what role such theorizations play in mobilizing support and realizing change projects. These questions are investigated through a case study on a critical area of enterprise computing software, Java application servers. In the present case, business practices and models were already well established among incumbents, with critical market areas allocated to a few dominant firms. Fringe players started experimenting with a new business approach of selling services around freely available open-source application servers. While most new players struggled, one new entrant succeeded in leading incumbents to adopt and compete on the new model. The case demonstrates that innovative and substantially new models and practices are established in organizational fields when innovators are able to redefine expectations and evaluation criteria within an organizational field. The study addresses the theoretical paradox of embedded agency. Actors who are embedded in prevailing institutional logics and structures find it hard to perceive potentially disruptive opportunities that fall outside existing ways of doing things. Changing prevailing institutional logics and structures requires strategic and institutional work aimed at overcoming barriers to innovation. The study addresses this problem through the lens of (new) institutional theory, using a discourse methodology to trace the process through which innovators were able to establish a new social and business model in the field.
Abstract:
Multiple-antenna systems offer significant performance enhancement and will be applied to the next generation of broadband wireless communications. This thesis presents investigations of multiple-antenna systems – multiple-input multiple-output (MIMO) and cooperative communication (CC) – and their performance in more realistic propagation environments than those reported previously. For MIMO systems, the investigations are conducted via theoretical modelling and simulations in a double-scattering environment. The results show that variations in system performance depend on how scatterer density varies in flat fading channels, and that in frequency-selective fading channels system performance is affected by the length of the coding block as well as by scatterer density. In realistic propagation environments, fading correlation also has an impact on CC systems, where the antennas can be further apart than those in MIMO systems. A general stochastic model is applied to studying the effects of fading correlation on the performance of CC systems. This model reflects the asymmetric nature of the wireless channels in a CC system. The results demonstrate the varied effects of fading correlation under different protocols and channel conditions. The performance of CC systems is further studied at the packet level, using both simulations and an experimental testbed. The results obtained have verified various performance trade-offs of the cooperative relaying network (CRN) investigated in different propagation environments. The results suggest that a proper selection of relaying algorithms and other techniques can meet the quality-of-service requirements of different applications.
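The performance gain that motivates MIMO systems can be illustrated with the standard capacity formula C = log2 det(I + (SNR/n_t) H H*). This sketch uses the textbook i.i.d. Rayleigh channel rather than the double-scattering model studied in the thesis, and the SNR, array sizes and trial count are illustrative assumptions.

```python
import numpy as np

def mimo_capacity(H, snr):
    """Capacity (bits/s/Hz) of a MIMO channel H at linear SNR, equal power
    allocation across transmit antennas: log2 det(I + (snr/n_t) H H^H)."""
    n_t = H.shape[1]
    m = np.eye(H.shape[0]) + (snr / n_t) * H @ H.conj().T
    return np.real(np.log2(np.linalg.det(m)))

rng = np.random.default_rng(0)

def rayleigh(n):
    """n x n channel with i.i.d. unit-power complex Gaussian entries."""
    return (rng.standard_normal((n, n))
            + 1j * rng.standard_normal((n, n))) / np.sqrt(2)

snr = 10.0      # linear SNR (i.e. 10 dB)
trials = 2000   # Monte Carlo averaging over channel realisations
c_siso = np.mean([mimo_capacity(rayleigh(1), snr) for _ in range(trials)])
c_mimo = np.mean([mimo_capacity(rayleigh(4), snr) for _ in range(trials)])
```

Under rich i.i.d. scattering the 4x4 ergodic capacity is several times the single-antenna capacity; the double-scattering environments studied in the thesis reduce this gain, which is precisely why the dependence on scatterer density matters.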
Abstract:
Mineral wool insulation material applied to the primary cooling circuit of a nuclear reactor may be damaged in the course of a loss-of-coolant accident (LOCA). The insulation material released by the leak may compromise the operation of the emergency core cooling system (ECCS), as it may be transported together with the coolant, in the form of mineral wool fiber agglomerate (MWFA) suspensions, to the containment sump strainers, which are mounted at the inlet of the ECCS to keep any debris away from the emergency cooling pumps. In the further course of the LOCA, the MWFA may block or penetrate the strainers. In addition to the impact of MWFA on the pressure drop across the strainers, corrosion products formed over time may also accumulate in the fiber cakes on the strainers, which can lead to a significant increase in the strainer pressure drop and result in cavitation in the ECCS. Therefore, it is essential to understand the transport characteristics of the insulation materials in order to determine the long-term operability of nuclear reactors that undergo a LOCA. An experimental and theoretical study performed by the Helmholtz-Zentrum Dresden-Rossendorf and the Hochschule Zittau/Görlitz is investigating the phenomena that may be observed in the containment vessel during a primary circuit coolant leak. The study entails the generation of fiber agglomerates, the determination of their transport properties in single- and multi-effect experiments, and the long-term effects that particles formed due to corrosion of metallic containment internals by the coolant medium have on the strainer pressure drop. The focus of this presentation is on the numerical models that are used to predict the transport of MWFA by CFD simulations. A number of pseudo-continuous dispersed phases of spherical wetted agglomerates can represent the MWFA.
The size, density, relative viscosity of the fluid-fiber agglomerate mixture and the turbulent dispersion all affect how the fiber agglomerates are transported. In the cases described here, the size is kept constant while the density is modified. This definition affects both the terminal velocity and volume fraction of the dispersed phases. Only one of the single-effect experimental scenarios used in the validation of the numerical models is described here. The scenario examines the suspension and horizontal transport of the fiber agglomerates in a racetrack-type channel. The corresponding experiments are described in an accompanying presentation (see abstract of Seeliger et al.).
Abstract:
Over the past forty years the corporate identity literature has developed to a point of maturity where it currently contains many definitions and models of the corporate identity construct at the organisational level. The literature has evolved by developing models of corporate identity or by considering corporate identity in relation to new and developing themes, e.g. corporate social responsibility. It has evolved into a multidisciplinary domain, recently incorporating constructs from other literatures to further its development. However, the literature has a number of limitations. An overarching and universally accepted definition of corporate identity remains elusive, potentially leaving the construct without a clear definition. Only a few corporate identity definitions and models, at the corporate level, have been empirically tested. The corporate identity construct is overwhelmingly defined and theoretically constructed at the corporate level, leaving the literature without a detailed understanding of its influence at the individual stakeholder level. Front-line service employees (FLEs) form a component in a number of corporate identity models developed at the organisational level. FLEs deliver the services of an organisation to its customers, as well as represent the organisation by communicating and transporting its core defining characteristics to customers through continual customer contact and interaction. This person-to-person contact between an FLE and the customer is termed a service encounter, and service encounters influence a customer's perception of both the service delivered and the associated level of service quality. Therefore this study for the first time defines, theoretically models and empirically tests corporate identity at the individual FLE level, termed FLE corporate identity.
The study uses the services marketing literature to characterise an FLE's operating environment, arriving at five potential dimensions of the FLE corporate identity construct. These are scrutinised against existing corporate identity definitions and models to arrive at a definition for the construct. In reviewing the corporate identity, services marketing, branding and organisational psychology literature, a theoretical model is developed for FLE corporate identity, which is empirically and quantitatively tested with FLEs in seven stores of a major national retailer. Following rigorous construct reliability and validity testing, the 601 usable responses are used to estimate a confirmatory factor analysis and structural equation model for the study. The results for the individual hypotheses and the structural model are very encouraging, as they fit the data well and support a definition of FLE corporate identity. This study makes contributions to the branding, services marketing and organisational psychology literature, but its principal contribution is to extend the corporate identity literature into a new area of discourse and research, that of FLE corporate identity.
Abstract:
This book contains 13 papers from the 7th Workshop on Global Sourcing, held in Val d'Isere, France, during March 11-14, 2013, which were carefully reviewed and selected from 40 submissions. They are based on a vast empirical base brought together by leading researchers in information systems, strategic management, and operations. This volume is intended for students, academics, and practitioners interested in research results and experiences on outsourcing and offshoring of information technology and business processes. The topics discussed represent both client and supplier perspectives on sourcing of global services, combine theoretical and practical insights regarding challenges that both clients and vendors face, and include case studies from client and vendor organizations.
Abstract:
* This work was financially supported by the Russian Foundation for Basic Research, project no. 04-01-00858a.
Abstract:
This paper explores the sharing of value in business transactions. Although there is increased usage of the terminology of value in marketing (in such concepts as value-based selling and pricing) as well as in purchasing (value-based purchasing), the definition of the term is still vague. In order to better understand the definition of value, the authors argue that it is important to understand the sharing of value in general, and the role of power in the sharing of value in particular. The aim of this paper is to add to this debate, and this requires us to critique the current models. The key process that the analysis of power will help to explain is the division of the available revenue stream flowing up the chain from the buyer's customers. If the buyer and supplier do not cooperate, then power will be key in the sharing of that money flow. If buyers and suppliers fully cooperate, they may be able to reduce their costs and/or increase the quality of the sales offering the buyer makes to their customer.