981 results for Simulation package
Abstract:
In this work, the Cloud Feedback Model Intercomparison Project (CFMIP) Observation Simulation Package (COSP) is expanded to include scattering and emission effects of clouds and precipitation at passive microwave frequencies. This represents an advancement over the official version of COSP (version 1.4.0), in which only clear-sky brightness temperatures are simulated. To highlight the potential utility of this new microwave simulator, COSP results generated using the version 3 atmosphere of the climate model EC-Earth as input are compared with Microwave Humidity Sounder (MHS) channel (190.311 GHz) observations. Specifically, simulated seasonal brightness temperatures (TB) are contrasted with MHS observations for the period December 2005 to November 2006 to identify possible biases in EC-Earth's cloud and atmosphere fields. EC-Earth's atmosphere closely reproduces the microwave signature of many of the major large-scale and regional-scale features of the atmosphere and surface. Moreover, greater than 60 % of the simulated TB are within 3 K of the NOAA-18 observations. However, COSP is unable to simulate sufficiently low TB in areas of frequent deep convection. Within the Tropics, the model's atmosphere can yield an underestimation of TB by nearly 30 K for cloudy areas in the ITCZ. Possible reasons for this discrepancy include both an incorrect amount of cloud ice water in the model simulations and incorrect ice-particle scattering assumptions used in the COSP microwave forward model. These multiple sources of error highlight the non-unique nature of the simulated satellite measurements, a problem exacerbated by the fact that EC-Earth lacks the detailed microphysical parameters necessary for accurate forward model calculations. Such issues limit the robustness of our evaluation and suggest a general note of caution when making COSP-satellite observation evaluations.
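The headline agreement statistic above (the fraction of simulated TB within 3 K of the observations) is simple to compute once the two fields are co-located; a minimal sketch with hypothetical stand-in arrays (not the actual COSP output or MHS data):

```python
import numpy as np

def fraction_within(sim_tb, obs_tb, tol_k=3.0):
    """Fraction of simulated brightness temperatures within tol_k kelvin
    of the co-located observations."""
    sim_tb = np.asarray(sim_tb, dtype=float)
    obs_tb = np.asarray(obs_tb, dtype=float)
    return float(np.mean(np.abs(sim_tb - obs_tb) <= tol_k))

# Hypothetical gridded values (K); real use would first co-locate COSP
# output with the MHS 190.311 GHz swath.
sim = np.array([270.0, 255.5, 281.2, 240.0])
obs = np.array([271.5, 258.0, 280.0, 269.0])
print(fraction_within(sim, obs))  # 3 of the 4 differences are <= 3 K
```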
Abstract:
Piezoresistive sensors are commonly made of a piezoresistive membrane attached to a flexible substrate, a plate. They have been widely studied and used in several applications. It has been found that the size, position and geometry of the piezoresistive membrane may affect the performance of the sensors. Based on this observation, this work evaluates a topology optimization methodology for the design of piezoresistive plate-based sensors in which both the piezoresistive membrane and the flexible substrate disposition can be optimized. Perfect coupling conditions between the substrate and the membrane, based on the 'layerwise' theory for laminated plates, and a material model for the piezoresistive membrane, based on the solid isotropic material with penalization model, are employed. The design goal is to obtain the configuration of material that maximizes the sensor sensitivity to external loading, as well as the stiffness of the sensor to particular loads, which depend on the case (application) studied. The proposed approach is evaluated through two distinct examples: the optimization of an atomic force microscope probe and of a pressure sensor. The results suggest that the performance of the sensors can be improved by using the proposed approach.
Abstract:
This work presents algorithms for the calculation of the electrostatic interaction in partially periodic systems. The framework for these algorithms is provided by the simulation package ESPResSo, of which the author was one of the main developers. The prominent features of the program are listed and its internal structure is described. Algorithms for the calculation of the Coulomb sum in three-dimensionally periodic systems are then described. These methods are the foundations for the algorithms for partially periodic systems presented in this work. Starting from the MMM2D method for systems with one non-periodic coordinate, the ELC method for these systems is developed. This method consists of a correction term which allows methods for three-dimensional periodicity to be used also in the case of two periodic coordinates. The computation time of this correction term is negligible for large numbers of particles. The performance of MMM2D and ELC is demonstrated by results from the implementations contained in ESPResSo. It is also discussed how different dielectric constants inside and outside of the simulation box can be realized. For systems with one periodic coordinate, the MMM1D method is derived from the MMM2D method. This method is applied to the problem of the attraction of like-charged rods in the presence of counterions, and results of the strong coupling theory for the equilibrium distance of the rods at infinite counterion coupling are checked against results from computer simulations. The degree of agreement between the simulations at finite coupling and the theory can be characterized by a single parameter gamma_RB. In the special case of T=0, one finds under certain circumstances flat configurations, in which all charges are located in the rod-rod plane. The energetically optimal configuration and its stability, which depend on only one parameter gamma_z, similar to gamma_RB, are determined analytically.
These findings are in good agreement with results from computer simulations.
Abstract:
Being basic ingredients of numerous daily-life products of significant industrial importance, as well as basic building blocks for biomaterials, charged hydrogels continue to pose a series of unanswered challenges for scientists, even after decades of practical applications and intensive research efforts. Despite a rather simple internal structure, it is mainly the unique combination of short- and long-range forces which renders scientific investigation of their characteristic properties quite difficult. Hence, early on, computer simulations were used to link analytical theory and empirical experiments, bridging the gap between the simplifying assumptions of the models and the complexity of real-world measurements. Due to the immense numerical effort, even for high-performance supercomputers, system sizes and time scales were rather restricted until recently, and it has only now become possible to simulate an entire network of charged macromolecules. This is the topic of the presented thesis, which investigates one of the fundamental and at the same time highly fascinating phenomena of polymer research: the swelling behaviour of polyelectrolyte networks. For this, an extensible simulation package for research on soft matter systems, ESPResSo for short, was created, which puts particular emphasis on mesoscopic bead-spring models of complex systems. Highly efficient algorithms and a consistent parallelization reduce the computation time needed to solve the equations of motion, even in the case of long-range electrostatics and large numbers of particles, making even expensive calculations and applications tractable. Nevertheless, the program has a modular and simple structure, enabling a continuous process of adding new potentials, interactions, degrees of freedom, ensembles, and integrators, while staying easily accessible for newcomers thanks to a Tcl-script steering level controlling the C-implemented simulation core.
Numerous analysis routines provide means to investigate system properties and observables on the fly. Even though analytical theories have agreed on the modeling of networks in past years, our numerical MD simulations show that, even for simple model systems, fundamental theoretical assumptions no longer apply outside a small parameter regime, prohibiting correct predictions of observables. Applying a "microscopic" analysis of the isolated contributions of individual system components, one of the particular strengths of computer simulations, it was then possible to describe the behaviour of charged polymer networks at swelling equilibrium in good solvent and close to the Theta-point by introducing appropriate model modifications. This became possible by enhancing known simple scaling arguments with components deemed crucial in our detailed study, through which a generalized model could be constructed. With this, agreement between the final system volume of swollen polyelectrolyte gels and the results of computer simulations could be shown over the entire investigated range of parameters, for different network sizes, charge fractions, and interaction strengths. In addition, the "cell under tension" was presented as a self-regulating approach for predicting the amount of swelling based on the chosen system parameters alone. Without the need for measured observables as input, minimizing the free energy already allows the equilibrium behaviour to be determined. In poor solvent, the shape of the network chains changes considerably, as their hydrophobicity now counteracts the repulsion of like-charged monomers and drives the polyelectrolytes to collapse. Depending on the chosen parameters, a fragile balance emerges, giving rise to fascinating geometrical structures such as the so-called pearl-necklaces.
This behaviour, known from single-chain polyelectrolytes under similar environmental conditions and also theoretically predicted, could be detected for the first time for networks as well. An analysis of the total structure factors confirmed the first evidence for the existence of such structures found in experimental results.
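The free-energy minimisation mentioned above can be illustrated with a toy one-parameter gel free energy; the elastic-plus-counterion form below is purely illustrative and not the generalized model of the thesis:

```python
import math

def equilibrium_volume(f, v_min=0.1, v_max=10.0, n=100000):
    """Brute-force grid minimiser for a one-parameter free energy f(V)."""
    best_v, best_f = v_min, f(v_min)
    for i in range(1, n + 1):
        v = v_min + (v_max - v_min) * i / n
        fv = f(v)
        if fv < best_f:
            best_v, best_f = v, fv
    return best_v

# Toy free energy: chain elasticity grows as V^(2/3), the ideal-gas
# entropy of the counterions contributes -log(V); swelling equilibrium
# sits where the two balance (analytically at V = 2^(3/2) ~ 2.83 here).
F = lambda V: 1.5 * V ** (2.0 / 3.0) - 2.0 * math.log(V)
print(round(equilibrium_volume(F), 2))  # ~2.83
```

Setting dF/dV = 0 gives V^(2/3) = 2, which the grid search recovers without any measured observable as input, mirroring the "free energy alone" argument above.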
Abstract:
The electromagnetic form factors of the proton are fundamental quantities sensitive to the distribution of charge and magnetization inside the proton. Precise knowledge of the form factors, in particular of the charge and magnetization radii, provides strong tests for theory in the non-perturbative regime of QCD. However, the existing data at Q^2 below 1 (GeV/c)^2 are not precise enough for a hard test of theoretical predictions.

For a more precise determination of the form factors, within this work more than 1400 cross sections of the reaction H(e,e′)p were measured at the Mainz Microtron MAMI using the 3-spectrometer facility of the A1 collaboration. The data were taken in three periods in the years 2006 and 2007 using beam energies of 180, 315, 450, 585, 720 and 855 MeV. They cover the Q^2 region from 0.004 to 1 (GeV/c)^2 with counting-rate uncertainties below 0.2% for most of the data points. The relative luminosity of the measurements was determined using one of the spectrometers as a luminosity monitor. The overlapping acceptances of the measurements maximize the internal redundancy of the data and allow, together with several additions to the standard experimental setup, for tight control of systematic uncertainties.

To account for the radiative processes, an event generator was developed and implemented in the simulation package of the analysis software; it works without peaking approximation by explicitly calculating the Bethe-Heitler and Born Feynman diagrams for each event.

To separate the form factors and to determine the radii, the data were analyzed by fitting a wide selection of form factor models directly to the measured cross sections. These fits also determined the absolute normalization of the different data subsets. The validity of this method was tested with extensive simulations.
The results were compared to an extraction via the standard Rosenbluth technique.

The dip structure in G_E that was seen in the analysis of the previous world data shows up in a modified form. When compared to the standard-dipole form factor as a smooth curve, the extracted G_E exhibits a strong change of slope around 0.1 (GeV/c)^2, and in the magnetic form factor a dip around 0.2 (GeV/c)^2 is found. This may be taken as an indication of a pion cloud. For higher Q^2, the fits yield larger values for G_M than previous measurements, in agreement with form factor ratios from recent precise polarized measurements in the Q^2 region up to 0.6 (GeV/c)^2.

The charge and magnetic rms radii are determined as
⟨r_e⟩ = 0.879 ± 0.005(stat.) ± 0.004(syst.) ± 0.002(model) ± 0.004(group) fm,
⟨r_m⟩ = 0.777 ± 0.013(stat.) ± 0.009(syst.) ± 0.005(model) ± 0.002(group) fm.
This charge radius is significantly larger than theoretical predictions and than the radius of the standard dipole. However, it is in agreement with earlier results measured at the Mainz linear accelerator and with determinations from hydrogen Lamb shift measurements. The extracted magnetic radius is smaller than previous determinations and than the standard-dipole value.
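The radii quoted above follow from the slope of the form factor at Q^2 = 0 via ⟨r^2⟩ = -6 (dG/dQ^2)/G(0). A minimal numerical sketch using the standard dipole as a stand-in model (not one of the thesis' actual fit functions):

```python
import math

HBARC = 0.19733  # conversion constant, GeV*fm

def dipole(q2, lam2=0.71):
    """Standard dipole form factor; q2 and lam2 in (GeV/c)^2."""
    return (1.0 + q2 / lam2) ** -2

def rms_radius(g, h=1e-6):
    """RMS radius in fm from the numerical slope of g at Q^2 = 0,
    using <r^2> = -6 g'(0) / g(0)."""
    slope = (g(h) - g(0.0)) / h
    return math.sqrt(-6.0 * slope / g(0.0)) * HBARC

print(round(rms_radius(dipole), 3))  # ~0.811 fm for the standard dipole
```

This reproduces the well-known standard-dipole radius of about 0.81 fm, visibly below the 0.879 fm charge radius extracted in the thesis.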
Abstract:
The experiments were planned with the aim of analysing the behaviour of the catalyst in the larger-diameter metal column. The masses used were modified in order to verify the retention efficiency with respect to mass. Adsorption, desorption and readsorption cycles were performed on a single sample to determine variations in the efficiency of the catalyst. In another phase, in collaboration with Dr. V. A. Ranea and Prof. E. E. Mola (INIFTA, UNLP), a theoretical study was carried out of the adsorption of SO2, CH4, CO2, O2 and CO molecules on Cr2O3(0001) using Density Functional Theory (the VASP program, Vienna Ab-initio Simulation Package), together with a study of the kinetics of the reaction between CH4, SO2 and O2 in the presence of sulfite and sulfate species. This study made it possible to identify the preferential adsorption sites of S° and its possible competition with SO2, both experimentally and through theoretical calculations. Within this line of research, Ing. Sabrina Hernández Guiance is continuing experiments within the joint project with INIFTA, which form part of her doctoral thesis. Experimentally, the adsorption efficiency of the catalyst with respect to SO2 is observed to be close to 100%. A thermodesorption peak is observed at 1120 K. The oxidation of CH4 with SO2 was then studied. CO2 production is observed from the initial temperature onwards, followed by a significant increase in CO2 formation up to 330-340 K; thereafter, CO2 production remains approximately constant. Using the Arrhenius equation and the experimental results, the activation energy of the overall reaction was obtained as 7 kcal/mol.
Theoretical studies determined that the chemisorption energy of SO2 on Cr2O3 is -3.09 eV for the most stable configuration, the adsorption energy of O2 in the dissociative state is -1.567 eV, the energy of CH4 on previously adsorbed O2 is -0.335 eV, and that of the most stable configuration of CO2 on the substrate is -0.812 eV.
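The Arrhenius treatment mentioned above reduces, in its simplest two-point form, to solving ln(k2/k1) = -(Ea/R)(1/T2 - 1/T1) for the activation energy; a small sketch with illustrative rate constants (not the measured CH4/SO2 data):

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def activation_energy(k1, t1, k2, t2):
    """Activation energy in kcal/mol from two rate constants k1, k2
    measured at temperatures t1, t2 (K), via the Arrhenius equation."""
    return -R * math.log(k2 / k1) / (1.0 / t2 - 1.0 / t1)

# Illustrative rates at 300 K and 340 K
ea = activation_energy(1.0, 300.0, 2.2, 340.0)
print(round(ea, 1))  # ~4.0 kcal/mol for these made-up rates
```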
Abstract:
The aim of this work was to develop a generic methodology for evaluating and selecting, at the conceptual design phase of a project, the best process technology for natural gas conditioning. A generic approach would be simple, would require less time, and would give a better understanding of why one process is to be preferred over another. Such a methodology would be useful in evaluating existing, novel and hybrid technologies. However, to date no information is available in the published literature on such a generic approach to gas processing. It is believed that the generic methodology presented here is the first available for choosing the best or cheapest method of separation for natural gas dew-point control. Process cost data are derived from evaluations carried out by the vendors. These evaluations are then modelled using a steady-state simulation package. From the results of the modelling, the cost data received are correlated and defined with respect to the design or sizing parameters. This allows comparisons between different process systems to be made in terms of the overall process. The generic methodology is based on the concept of a Comparative Separation Cost, which takes into account the efficiency of each process, the value of its products, and the associated costs. To illustrate the general applicability of the methodology, three different cases suggested by BP Exploration are evaluated. This work has shown that it is possible to identify the most competitive process operations at the conceptual design phase and to illustrate why one process has an advantage over another. Furthermore, the same methodology has been used to identify and evaluate hybrid processes. It has been determined here that in some cases they offer substantial advantages over the separate process techniques.
Abstract:
In series I and II of this study ([Chua et al., 2010a] and [Chua et al., 2010b]), we discussed the time scales of granule–granule collision, droplet–granule collision and droplet spreading in Fluidized Bed Melt Granulation (FBMG). In this third part, we consider the rate at which the binder solidifies. A simple analytical solution, based on the classical formulation for conduction across a semi-infinite slab, was used to obtain a generalized equation for the binder solidification time. A multi-physics simulation package (Comsol) was used to predict the binder solidification time for various operating conditions usually considered in FBMG. The simulation results were validated with experimental temperature data obtained with a high-speed infrared camera during solidification of 'macroscopic' (mm-scale) droplets. For the range of microscopic droplet sizes and operating conditions considered for an FBMG process, the binder solidification time was found to fall approximately between 10^-3 and 10^-1 s. This is the slowest of the four major FBMG microscopic events discussed in this series (granule–granule collision, granule–droplet collision and droplet spreading being the other three).
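The semi-infinite-slab conduction argument above implies a characteristic solidification time of order d^2/alpha, with d the droplet (layer) thickness and alpha the thermal diffusivity of the binder; a minimal sketch with illustrative property values (not the study's actual binder data):

```python
def solidification_time(thickness_m, diffusivity_m2_s):
    """Characteristic conduction time t ~ d^2 / alpha for a layer of
    thickness d, from the semi-infinite-slab scaling."""
    return thickness_m ** 2 / diffusivity_m2_s

# A ~100 micron binder droplet with an assumed alpha ~ 1e-7 m^2/s
t = solidification_time(100e-6, 1e-7)
print(round(t, 3))  # 0.1 s, at the slow end of the quoted window
```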
Abstract:
This thesis describes an investigation by the author into the spares operation of CompAir BroomWade Ltd. Whilst the complete system, including the warehousing and distribution functions, was investigated, the thesis concentrates on the provisioning aspect of the spares supply problem. Analysis of the historical data showed the presence of significant fluctuations in all the measures of system performance. Two Industrial Dynamics simulation models were developed to study this phenomenon. The models showed that any fluctuation in end-customer demand would be amplified as it passed through the distributor and warehouse stock control systems. The evidence from the historical data available supported this view of the system's operation. The models were utilised to determine which parts of the total system could be expected to exert a critical influence on its performance. The lead time parameters of the supply sector were found to be critical, and further study showed that the manner in which the lead time changed with work-in-progress levels was also an important factor. The problem therefore resolved into the design of a spares manufacturing system which exhibited the appropriate dynamic performance characteristics. The gross level of entity representation, inherent in the Industrial Dynamics methodology, was found to limit the value of these models in the development of detailed design proposals. Accordingly, an interacting job shop simulation package was developed to allow detailed evaluation of organisational factors affecting the performance characteristics of a manufacturing system. The package was used to develop a design for a pilot spares production unit. The need for a manufacturing system to perform successfully under conditions of fluctuating demand is not limited to the spares field.
Thus, although the spares exercise provides an example of the approach, the concepts and techniques developed can be considered to have broad application throughout batch manufacturing industry.
Abstract:
This study is concerned with several proposals concerning multiprocessor systems and with the various possible methods of evaluating such proposals. After a discussion of the advantages and disadvantages of several performance evaluation tools, the author concludes that simulation is the only tool powerful enough to develop a model which would be of practical use in the design, comparison and extension of systems. The main aims of the simulation package developed as part of this study are cost effectiveness, ease of use and generality. The methodology on which the simulation package is based is described in detail. The fundamental principles are that model design should reflect actual systems design, that measuring procedures should be carried out alongside design, that models should be well documented and easily adaptable, and that models should be dynamic. The simulation package itself is modular, and in this way reflects current design trends. This approach also aids documentation and ensures that the model is easily adaptable. It contains a skeleton structure and a library of segments which can be added to, or directly swapped with, segments of the skeleton structure to form a model which fits a user's requirements. The study also contains the results of some experimental work carried out using the model. The first part tests the model's capabilities by simulating a large operating system, the ICL George 3 system; the second part deals with general questions and some of the many proposals concerning multiprocessor systems.
Abstract:
Prior to the development of a production-standard control system for ML Aviation's plan-symmetric remotely piloted helicopter system, SPRITE, optimum solutions to technical requirements had yet to be found for some aspects of the work. This thesis describes an industrial project where solutions to real problems have been provided within strict timescale constraints. Use has been made of published material wherever appropriate, and new solutions have been contributed where none existed previously. A lack of clearly defined user requirements from potential Remotely Piloted Air Vehicle (RPAV) system users is identified. A simulation package is defined to enable the RPAV designer to progress with air vehicle and control system design, development and evaluation studies, and to assist the user in investigating his applications. The theoretical basis of this simulation package is developed, including Co-axial Contra-rotating Twin Rotor (CCTR), six-degrees-of-freedom motion, fuselage aerodynamics, and sensor and control system models. A compatible system of equations is derived for modelling a miniature plan-symmetric helicopter. Rigorous searches revealed a lack of CCTR models, based on closed-form expressions to obviate integration along the rotor blade, for stabilisation and navigation studies through simulation. An economic CCTR simulation model is developed and validated by comparison with published work and practical tests. Confusion in published work between attitude and Euler angles is clarified. The implementation of the theory into a high-integrity software package is discussed. Use is made of a novel technique basing the integration time step size on dynamic adjustment of error assessment. Simulation output for studies of control system stability verification, cross-coupling of motion between control channels, and air vehicle response to demands and horizontal wind gusts is presented.
Keywords: Contra-Rotating Twin Rotor; Flight Control System; Remotely Piloted Plan-Symmetric Helicopter; Simulation; Six Degrees of Freedom Motion
Abstract:
Quality, production and technological innovation management rank among the most important matters of concern to modern manufacturing organisations. They can provide companies with the decisive means of gaining a competitive advantage, especially within industries where there is an increasing similarity in product design and manufacturing processes. The papers in this special issue of International Journal of Technology Management have all been selected as examples of how aspects of quality, production and technological innovation can help to improve competitive performance. Most are based on presentations made at the UK Operations Management Association's Sixth International Conference held at Aston University, at which the theme was 'Getting Ahead Through Technology and People'. At the conference itself over 80 papers were presented by authors from 15 countries around the world. Among the many topics addressed within the conference theme, technological innovation, quality and production management emerged as attracting the greatest concern and interest of delegates, particularly those from industry. For any new initiative to be implemented successfully, it should be led from the top of the organization. Achieving the desired level of commitment from top management can, however, be a difficulty. In the first paper of this issue, Mackness investigates this question by explaining how systems thinking can help. In the systems approach, properties such as 'emergence', 'hierarchy', 'communication' and 'control' are used to assist top managers in preparing for change. Mackness's paper is then complemented by Iijima and Hasegawa's contribution in which they investigate the development of Quality Information Management (QIM) in Japan. They present the idea of a Design Review and demonstrate how it can be used to trace and reduce quality-related losses. The next paper on the subject of quality is by Whittle and colleagues.
It relates to total quality and the process of culture change within organisations. Using the findings of investigations carried out in a number of case study companies, they describe four generic models which have been identified as characterising methods of implementing total quality within existing organisation cultures. Boaden and Dale's paper also relates to the management of quality, but looks specifically at the construction industry where it has been found there is still some confusion over the role of Quality Assurance (QA) and Total Quality Management (TQM). They describe the results of a questionnaire survey of forty companies in the industry and compare them to similar work carried out in other industries. Szakonyi's contribution then completes this group of papers which all relate specifically to the question of quality. His concern is with the two ways in which R&D or engineering managers can work on improving quality. The first is by improving it in the laboratory, while the second is by working with other functions to improve quality in the company. The next group of papers in this issue all address aspects of production management. Umeda's paper proposes a new manufacturing-oriented simulation package for production management which provides important information for both design and operation of manufacturing systems. A simulation for production strategy in a Computer Integrated Manufacturing (CIM) environment is also discussed. This paper is then followed by a contribution by Tanaka and colleagues in which they consider loading schedules for manufacturing orders in a Material Requirements Planning (MRP) environment. They compare mathematical programming with a knowledge-based approach, and comment on their relative effectiveness for different practical situations. 
Engstrom and Medbo's paper then looks at a particular aspect of production system design, namely the question of devising group working arrangements for assembly with new product structures. Using the case of a Swedish vehicle assembly plant where long cycle assembly work has been adopted, they advocate the use of a generally applicable product structure which can be adapted to suit individual local conditions. In the last paper of this particular group, Tay considers how automation has affected the production efficiency in Singapore. Using data from ten major industries he identifies several factors which are positively correlated with efficiency, with capital intensity being of greatest interest to policy makers. The two following papers examine the case of electronic data interchange (EDI) as a means of improving the efficiency and quality of trading relationships. Banerjee and Banerjee consider a particular approach to material provisioning for production systems using orderless inventory replenishment. Using the example of a single supplier and multiple buyers they develop an analytical model which is applicable for the exchange of information between trading partners using EDI. They conclude that EDI-based inventory control can be attractive from economic as well as other standpoints and that the approach is consistent with and can be instrumental in moving towards just-in-time (JIT) inventory management. Slacker's complementary viewpoint on EDI is from the perspective of the quality relation-ship between the customer and supplier. Based on the experience of Lucas, a supplier within the automotive industry, he concludes that both banks and trading companies must take responsibility for the development of payment mechanisms which satisfy the requirements of quality trading. The three final papers of this issue relate to technological innovation and are all country based. 
Berman and Khalil report on a survey of US technological effectiveness in the global economy. The importance of education is supported in their conclusions, although it remains unclear to what extent the US government can play a wider role in promoting technological innovation and new industries. The role of technology in national development is taken up by Martinsons and Valdemars, who examine the case of the former Soviet Union. The failure to successfully infuse technology into Soviet enterprises is seen as a factor in that country's demise, and it is anticipated that the newly liberalised economies will be able to encourage greater technological creativity. This point is then taken up in Perminov's concluding paper, which looks in detail at Russia. Here a similar analysis is made of the Soviet Union's technological decline, but a development strategy is also presented within the context of the change from a centralised to a free-market economy. The papers included in this special issue of the International Journal of Technology Management each represent a unique and particular contribution to their own specific area of concern. Together, however, they also argue for or demonstrate the general improvements in competitive performance that can be achieved through the application of modern principles and practice to the management of quality, production and technological innovation.
Abstract:
In this paper we study the self-organising behaviour of smart camera networks which use market-based handover of object tracking responsibilities to achieve an efficient allocation of objects to cameras. Specifically, we compare previously known homogeneous configurations, in which all cameras use the same marketing strategy, with heterogeneous configurations, in which each camera makes use of its own, possibly different, marketing strategy. Our first contribution is to establish that such heterogeneity of marketing strategies can lead to system-wide outcomes which are Pareto superior to those possible in homogeneous configurations. However, since the particular configuration required to achieve Pareto efficiency in a given scenario will not be known in advance, our second contribution is to show how online learning of marketing strategies at the individual camera level can lead to high-performing heterogeneous configurations from the system point of view, extending the Pareto front when compared to the homogeneous case. Our third contribution is to show that in many cases the dynamic behaviour resulting from online learning leads to global outcomes which extend the Pareto front even when compared to static heterogeneous configurations. Our evaluation considers results obtained from an open source simulation package as well as data from a network of real cameras. © 2013 IEEE.
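Pareto superiority, as used above, is just multi-objective dominance over system outcome vectors; a minimal sketch over hypothetical two-objective camera-network outcomes (e.g. tracking utility vs. communication efficiency, both maximised):

```python
def dominates(a, b):
    """True if outcome a is at least as good as b in every objective
    and strictly better in at least one (maximisation convention)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(outcomes):
    """Outcomes not dominated by any other outcome in the set."""
    return [o for o in outcomes if not any(dominates(p, o) for p in outcomes)]

# Hypothetical (utility, efficiency) outcomes of four configurations
configs = [(0.9, 0.2), (0.7, 0.7), (0.4, 0.9), (0.6, 0.6)]
print(pareto_front(configs))  # (0.6, 0.6) is dominated by (0.7, 0.7)
```

"Extending the Pareto front" then means a learned configuration produces an outcome vector that no static configuration dominates.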
Abstract:
An integrated method for the prediction of the spatial pollution distribution within a street canyon directly from a microscopic traffic simulation model is outlined. The traffic simulation package Paramics is used to model the flow of vehicles in realistic traffic conditions on a real road network. This produces details of the amount of pollutant produced by each vehicle at any given time. The authors calculate the dispersion of the pollutant using a particle tracking diffusion method which is superimposed on a known velocity and turbulence field. This paper shows how these individual components may be integrated to provide a practical street canyon pollution model. The resulting street canyon pollution model provides isoconcentrations of pollutant within the road topography.
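The dispersion step described above, a particle tracking (random walk) diffusion method superimposed on a known velocity field, can be sketched minimally as follows; the uniform wind and constant diffusivity are simplifying assumptions, whereas the real model uses the canyon velocity and turbulence field:

```python
import numpy as np

def track(positions, velocity, diffusivity, dt, steps, rng):
    """Advect particles with a (here uniform) velocity and add a Gaussian
    random-walk displacement with variance 2*K*dt per step and axis."""
    positions = np.array(positions, dtype=float)
    sigma = np.sqrt(2.0 * diffusivity * dt)
    for _ in range(steps):
        positions += velocity * dt + rng.normal(0.0, sigma, positions.shape)
    return positions

rng = np.random.default_rng(0)
start = np.zeros((1000, 2))  # particles released at a point source
end = track(start, np.array([2.0, 0.0]), 0.5, 0.1, 100, rng)
print(end.mean(axis=0))  # mean drift ~ (20, 0) after 10 s of 2 m/s wind
```

Binning the final positions onto the road topography then yields the isoconcentration fields mentioned above.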
Abstract:
"C00-2383-0018."