939 results for Simulation experiments


Relevance:

30.00%

Publisher:

Abstract:

The primary hypothesis stated by this paper is that the use of social choice theory in Ambient Intelligence systems can significantly improve users' satisfaction when accessing shared resources. A research methodology based on agent-based social simulations is employed to support this hypothesis and to evaluate these benefits. The result is a six-fold contribution, summarized as follows. Firstly, several considerable differences between this application case and the most prominent social choice application, political elections, have been identified and described. Secondly, given these differences, a number of metrics to evaluate different voting systems in this scope have been proposed and formalized. Thirdly, given the presented application and the proposed metrics, the performance of a number of well-known electoral systems is compared. Fourthly, as a result of the performance study, a novel voting algorithm that achieves the best balance between the reviewed metrics is introduced. Fifthly, to improve social welfare in the experiments, the voting methods are combined with cluster analysis techniques. Finally, the article is complemented by a free and open-source tool, VoteSim, which not only ensures the reproducibility of the experimental results presented, but also allows the interested reader to adapt the case study to different environments.
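As a rough illustration of the kind of agent-based comparison between electoral systems described above, the following sketch pits plurality against Borda count on randomly generated agent preferences and measures mean satisfaction with the winning option. The preference model, the satisfaction metric and all names are illustrative assumptions; they are not taken from VoteSim or the paper.

```python
import random

def mean(values):
    return sum(values) / len(values)

def simulate_round(n_agents=50, n_options=5, seed=0):
    """Toy comparison of two voting rules for access to a shared resource.

    Each agent draws a random utility for every option; its "satisfaction"
    is the utility it assigns to the option that wins the vote.
    """
    rng = random.Random(seed)
    utilities = [[rng.random() for _ in range(n_options)] for _ in range(n_agents)]
    rankings = [sorted(range(n_options), key=lambda o: -u[o]) for u in utilities]

    # Plurality: each agent votes only for its top-ranked option.
    plurality = [0] * n_options
    for r in rankings:
        plurality[r[0]] += 1
    p_winner = max(range(n_options), key=lambda o: plurality[o])

    # Borda: an option ranked k-th (0-based) receives n_options - 1 - k points.
    borda = [0] * n_options
    for r in rankings:
        for k, option in enumerate(r):
            borda[option] += n_options - 1 - k
    b_winner = max(range(n_options), key=lambda o: borda[o])

    return {
        "plurality_satisfaction": mean([u[p_winner] for u in utilities]),
        "borda_satisfaction": mean([u[b_winner] for u in utilities]),
    }

if __name__ == "__main__":
    print(simulate_round())
```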

Relevance:

30.00%

Publisher:

Abstract:

The depletion, the absence or simply the uncertainty about the size of fossil fuel reserves, added to price volatility and growing instability in the supply chain, create strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that also includes strong public concern about pollution and greenhouse gas emissions. Given its excellent environmental impact, public acceptance of the new energy carrier would depend, a priori, on controlling the risks associated with its handling and storage. Among these, an undeniable risk of explosion appears as the main drawback of this alternative fuel. This thesis investigates the numerical modelling of explosions in large volumes, focusing on the simulation of turbulent combustion in large computational domains in which the achievable resolution is strongly limited. The introduction gives a general description of explosion processes and concludes that the restrictions on the resolution of the calculations make it necessary to model the turbulence and combustion processes. A critical review of the available methodologies for both turbulence and combustion is then carried out, pointing out the strengths, deficiencies and suitability of each. The conclusion of this review is that, given the existing limitations, the only viable strategy for combustion modelling is the use of an expression that describes the turbulent burning velocity as a function of different parameters. Models of this kind are known as turbulent flame speed models and close a balance equation for the combustion progress variable. It is also concluded that the most appropriate solution for the simulation of turbulence is to use different methodologies, LES or RANS, depending on the geometry and the resolution restrictions of each particular problem. On the basis of these findings, the candidate develops a combustion model within the framework of turbulent flame speed models. The proposed methodology is able to overcome the deficiencies of the available models for problems that must be computed at moderate or low resolution. In particular, the model uses a heuristic algorithm to prevent the growth of the flame thickness, a deficiency that hampered the well-known Zimont model. Under this approach, the emphasis of the analysis lies on the determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a new formulation able to account for the simultaneous influence of the equivalence ratio, temperature, pressure and steam dilution on the laminar burning velocity. The formulation obtained is valid over a wider domain of temperature, pressure and steam dilution than any previously available formulation.
The turbulent burning velocity, in turn, can be obtained from correlations that express this quantity as a function of different parameters. In order to select the most suitable formulation, the results obtained with several expressions have been compared against experimental results, and the equation due to Schmidt is concluded to be the most adequate for the conditions of the study. The importance of flame instabilities for the propagation of combustion fronts is then analysed. Their relevance is significant for fuel-lean mixtures in which the turbulence intensity remains moderate; these conditions are important because they are typical of accidents in nuclear power plants. A model is therefore developed to estimate the effect of instabilities, and in particular of the acoustic-parametric instability, on the flame propagation velocity. The modelling includes the mathematical derivation of the heuristic formulation of Bauwens et al. for the increase in burning velocity due to flame instabilities, as well as the analysis of flame stability with respect to a cyclic perturbation. Finally, these results are combined to complete the modelling of the acoustic-parametric instability. After this phase, the research focused on applying the developed model to several problems of importance for industrial safety, analysing the results and comparing them with the corresponding experimental data. Specifically, explosions in tunnels and in containers were simulated, with and without concentration gradients and venting. As a general outcome, the model is validated, confirming its suitability for these problems. As a final task, an in-depth analysis of the Fukushima-Daiichi disaster has been carried out. The objective of the analysis is to determine the amount of hydrogen that exploded in reactor number one, in contrast to the other studies on the subject, which have focused on determining the amount of hydrogen generated during the accident. As a result of the investigation, it was determined that the most probable amount of hydrogen consumed during the explosion was 130 kg. It is remarkable that the combustion of such a relatively small amount of hydrogen can cause such significant damage; this illustrates the importance of this type of research. The branches of industry for which the developed model will be of interest span the whole of the future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with a particular impact on the transport and nuclear power sectors, for both fission and fusion technologies. ABSTRACT The exhaustion, absolute absence or simply the uncertainty about the amount of fossil fuel reserves, added to the variability of their prices and the increasing instability and difficulties in the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen is very high in a context that additionally comprehends concerns about pollution and emissions.
Due to its excellent environmental impact, the public acceptance of the new energy carrier will depend on the control of the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large domains where the achievable resolution is forcefully limited. In the introduction, a general description of the explosion process is undertaken. It is concluded that the restrictions on resolution make it necessary to model the turbulence and combustion processes. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out their strengths and deficiencies. As a conclusion of this investigation, it appears clear that the only viable methodology for combustion modeling is the utilization of an expression for the turbulent burning velocity to close a balance equation for the combustion progress variable, a model of the turbulent flame speed kind. It also appears that, depending on the particular resolution restriction of each problem and on its geometry, the utilization of different simulation methodologies, LES or RANS, is the most adequate solution for modeling the turbulence. Based on these findings, the candidate undertakes the creation of a combustion model in the framework of the turbulent flame speed methodology which is able to overcome the deficiencies of the available ones for low-resolution problems. Particularly, the model utilizes a heuristic algorithm to maintain the thickness of the flame brush under control, a serious deficiency of the Zimont model. Under the approach utilized by the candidate, the emphasis of the analysis lies on the accurate determination of the burning velocity, both laminar and turbulent. On one side, the laminar burning velocity is determined through a newly developed correlation which is able to describe the simultaneous influence of the equivalence ratio, temperature, steam dilution and pressure on the laminar burning velocity. The formulation obtained is valid for a larger domain of temperature, steam dilution and pressure than any of the previously available formulations. On the other side, a certain number of turbulent burning velocity correlations are available in the literature. To select the most suitable, they have been compared with experiments and ranked, with the outcome that the formulation due to Schmidt was the most adequate for the conditions studied. Subsequently, the role of flame instabilities in the development of explosions is assessed. Their significance appears to be important for lean mixtures in which the turbulence intensity remains moderate. These conditions are typical of accidents in nuclear power plants. Therefore, the creation of a model to account for the instabilities, and concretely the acoustic-parametric instability, is undertaken. This encloses the mathematical derivation of the heuristic formulation of Bauwens et al. for the calculation of the burning velocity enhancement due to flame instabilities, as well as the analysis of the stability of flames with respect to a cyclic velocity perturbation. The results are combined to build a model of the acoustic-parametric instability.
The following task in this research has been to apply the developed model to several problems significant for industrial safety, and the subsequent analysis of the results and comparison with the corresponding experimental data was performed. As part of this task, simulations of explosions in a tunnel and in large containers, with and without concentration gradients and venting, have been carried out. As a general outcome, the validation of the model is achieved, confirming its suitability for the problems addressed. As a final undertaking, a thorough study of the Fukushima-Daiichi catastrophe has been carried out. The analysis aims at determining the amount of hydrogen participating in the explosion that happened in reactor number one, in contrast with other analyses centered on the amount of hydrogen generated during the accident. As an outcome of the research, it was determined that the most probable amount of hydrogen exploding during the catastrophe was 130 kg. It is remarkable that the combustion of such a small quantity of material can cause tremendous damage; this is an indication of the importance of these types of investigations. The industrial branches that can benefit from the applications of the model developed in this thesis include the whole future hydrogen economy, as well as nuclear safety in both fission and fusion technology.
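For context, turbulent flame speed closures of the kind discussed in this abstract typically solve a transport equation for the Favre-averaged progress variable, with the reaction source expressed through the turbulent burning velocity. A generic Zimont-type form is sketched below; it is a standard statement of the approach, not the exact formulation derived in the thesis:

\[
\frac{\partial (\bar{\rho}\,\tilde{c})}{\partial t} + \nabla \cdot (\bar{\rho}\,\tilde{\mathbf{u}}\,\tilde{c})
= \nabla \cdot \left( \frac{\mu_t}{\mathrm{Sc}_t}\,\nabla \tilde{c} \right) + \rho_u\, S_T\, |\nabla \tilde{c}|,
\]

where \(\tilde{c}\) is the combustion progress variable, \(\rho_u\) the unburnt-gas density and \(S_T\) the turbulent burning velocity, which is in turn correlated with the laminar burning velocity \(S_L\) and turbulence quantities, e.g. \(S_T = S_L\, f(u'/S_L,\ \ell_t/\delta_L,\ \dots)\).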

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this thesis is to study the approach to thermal transport phenomena in glazed buildings through their scale replicas. The central task of this thesis is, therefore, the comparison of the thermal behaviour of scale models with the corresponding thermal behaviour of the full-scale prototype. The main data for comparison between model and prototype are temperatures. The first chapter of the State of the Art of this thesis gives a historical review of the uses of scale models from antiquity to the present day. Within it, the State of the Technique presents the benefits of their use and the difficulties they entail. Next, the State of the Research on scale models reviews scientific articles and theses, focusing specifically on functional scale models. Functional scale models are scale models that, in addition, replicate one or several of the functions of their prototypes. Scale models may or may not be distorted. Distorted scale models are those with intentional changes in dimensions or constructive characteristics in order to obtain a specific response, for example to replicate thermal behaviour. Undistorted scale models are those that maintain, as far as possible, the dimensional proportions and constructive characteristics of their reference prototypes. These functional and undistorted scale models are especially useful for architects, since they can be used simultaneously as functional elements of analysis and as decision-making elements in constructive design. Despite their versatility, it will be seen that, in general, such functional undistorted scale models have been used very little for the study of the thermal behaviour of buildings. The theories for the analysis of the thermal data collected from the scale models, and their applicability to the corresponding full-scale prototypes, are then presented, followed by the experiments carried out both in the laboratory and outdoors. Experiments have been performed with simple cubic models at different scales subjected to the same environmental conditions. From these simple models, the work moves on to a reduced model of a relatively simple glazed building: simultaneous outdoor tests of the full-scale prototype and of its reduced model of the Prototype Workshop of the Escuela Técnica Superior de Arquitectura de Madrid (ETSAM). For the analysis of the experimental data, the known theories have been applied, both direct comparisons and dimensional analysis. Finally, simulations allow flexible comparisons with the experimental data; for this reason, both commercial programs and a simulation algorithm developed ad hoc for this research have been used. The thesis closes with the discussion and conclusions of this research. Abstract The purpose of this thesis is to study the approximation to phenomena of heat transfer in glazed buildings through their scale replicas. The central task of this thesis is, therefore, the comparison of the thermal performance of scale models without distortion with the corresponding thermal performance of their full-scale prototypes.
Indoor air temperatures of the scale model and the corresponding prototype are the data to be compared. In the first chapter on the State of the Art, a broad overview is given, consisting of a historical review of the uses of scale models from antiquity to the present day. In the section State of the Technique, the benefits and difficulties associated with their implementation are presented. Additionally, in the section State of the Research, current scientific papers and theses on scale models are reviewed. Specifically, we focus on functional scale models. Functional scale models are scale models that replicate, additionally, one or some of the functions of their corresponding prototypes. Scale models can be distorted or not. Scale models with distortion are scale models with intentional changes, on one hand, in dimensions scaled unevenly and, on the other hand, in constructive characteristics or materials, in order to obtain a specific performance, for instance a specific thermal performance. Consequently, scale models without distortion, or undistorted scale models scaled evenly, are those replicating, to the extent possible, the dimensional proportions and constructive configurations of their prototypes of reference. These undistorted and functional scale models are especially useful for architects because they can be used, simultaneously, as functional elements of analysis and as decision-making elements during the design process. Although they are versatile, it is remarkable that, in general, these types of models are used very little for the study of the thermal performance of buildings. Subsequently, the theories related to the analysis of the experimental thermal data collected from the scale models and their applicability to the corresponding full-scale prototypes are explained. Thereafter, the experiments in the laboratory and at outdoor conditions are detailed. Firstly, experiments carried out with simple cube models at different scales are explained. The larger prototype and the corresponding undistorted scale model have been subjected to the same environmental conditions in every experimental test. Secondly, a step forward is taken by carrying out simultaneous experimental tests of an undistorted scale model, a replica of a relatively simple lightweight and glazed building. This experiment consists of monitoring the undistorted scale model of the prototype workshop located in the School of Architecture (ETSAM) of the Technical University of Madrid (UPM). For the analysis of the experimental data, known related theories and resources are applied, such as direct comparisons, statistical analyses, Dimensional Analysis and, last but not least, simulations. Simulations allow us, specifically, flexible comparisons with experimental data. Here, apart from the use of the simulation software EnergyPlus, a simulation algorithm is developed ad hoc for this research. Finally, the discussion and conclusions of this research are presented.
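As an illustration of the dimensional-analysis step mentioned above, similarity of transient heat conduction between a scale model and its prototype is commonly expressed through equality of the Fourier number. This is a generic similarity argument, not the specific treatment developed in the thesis:

\[
\mathrm{Fo} = \frac{\alpha\, t}{L^{2}}, \qquad
\mathrm{Fo}_{m} = \mathrm{Fo}_{p} \;\Rightarrow\;
t_{m} = t_{p} \left( \frac{L_{m}}{L_{p}} \right)^{2} \frac{\alpha_{p}}{\alpha_{m}},
\]

so that, for an undistorted model built from the same materials (\(\alpha_m = \alpha_p\)), thermal transients in the model evolve faster by the square of the geometric scale factor.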

Relevance:

30.00%

Publisher:

Abstract:

The vibrational energy relaxation of carbon monoxide in the heme pocket of sperm whale myoglobin was studied by using molecular dynamics simulation and normal mode analysis methods. Molecular dynamics trajectories of solvated myoglobin were run at 300 K for both the δ- and ɛ-tautomers of the distal His-64. Vibrational population relaxation times of 335 ± 115 ps for the δ-tautomer and 640 ± 185 ps for the ɛ-tautomer were estimated by using the Landau–Teller model. Normal mode analysis was used to identify those protein residues that act as the primary “doorway” modes in the vibrational relaxation of the oscillator. Although the CO relaxation rates in both the ɛ- and δ-tautomers are similar in magnitude, the simulations predict that the vibrational relaxation of the CO is faster in the δ-tautomer with the distal His playing an important role in the energy relaxation mechanism. Time-resolved mid-IR absorbance measurements were performed on photolyzed carbonmonoxy hemoglobin (Hb13CO). From these measurements, a T1 time of 600 ± 150 ps was determined. The simulation and experimental estimates are compared and discussed.
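For reference, the Landau–Teller estimate mentioned above is commonly evaluated from the friction that the surroundings exert on the frozen oscillator; a standard textbook form is sketched below, with the symbols defined here rather than taken from the paper:

\[
\frac{1}{T_1} \;=\; \frac{\zeta(\omega_0)}{\mu}, \qquad
\zeta(\omega) \;=\; \frac{1}{k_B T}\int_0^{\infty} dt\, \cos(\omega t)\,\langle \delta F(t)\,\delta F(0)\rangle,
\]

where \(\mu\) and \(\omega_0\) are the reduced mass and frequency of the CO oscillator and \(\delta F\) is the fluctuating force exerted on the fixed oscillator coordinate by the surrounding protein and solvent.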

Relevance:

30.00%

Publisher:

Abstract:

Three sets of laboratory column experimental results concerning the hydrogeochemistry of seawater intrusion have been modelled using two codes: ACUAINTRUSION (Chemical Engineering Department, University of Alicante) and PHREEQC (U.S.G.S.). These reactive models utilise the hydrodynamic parameters determined using the ACUAINTRUSION TRANSPORT software and fit the chloride breakthrough curves perfectly. The ACUAINTRUSION code was improved, and its instabilities were studied in relation to the discretisation. The relative square errors were obtained using different combinations of the spatial and temporal steps: the global error for the total experimental data and the partial error for each element. Good simulations of the three experiments were obtained using the ACUAINTRUSION software with slight variations in the selectivity coefficients for both sediments determined in batch experiments with fresh water. The cation exchange parameters included in ACUAINTRUSION are those reported by the Gapon convention, with modified exponents for the Ca/Mg exchange. PHREEQC simulations performed using the Gaines-Thomas convention were unsatisfactory with the exchange coefficients from the PHREEQC database (or their range), while those determined with fresh water and natural sediment allowed only an approximation to be obtained. For the treated sediment, adjusted exchange coefficients were determined to improve the simulation; these are vastly different from the PHREEQC database values and from the batch experiment values, but they are of a similar order to the other coefficients determined under dynamic conditions. Different cation concentrations were simulated by the two software packages; this disparity could be attributed to the defined selectivity coefficients, which affect the gypsum equilibrium. Consequently, different calculated sulphate concentrations are obtained with each code, with a smaller mismatch predicted by ACUAINTRUSION. In general, the simulations presented by ACUAINTRUSION and PHREEQC produced similar results, making predictions consistent with the experimental data. However, the simulated results are not identical to the experimental data; sulphate (total S) is overpredicted by both models, most likely due to factors such as the kinetics of gypsum, possible variations in the exchange coefficients with salinity and the neglect of other processes.
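For context, the Gapon convention referred to above writes heterovalent exchange per equivalent of exchange sites. A generic statement for a Na/Ca pair is sketched below as an illustration of the convention; it is not the modified-exponent Ca/Mg variant actually fitted in the study:

\[
\mathrm{NaX} + \tfrac{1}{2}\,\mathrm{Ca}^{2+} \;\rightleftharpoons\; \mathrm{Ca}_{1/2}\mathrm{X} + \mathrm{Na}^{+},
\qquad
K_G \;=\; \frac{[\mathrm{Ca}_{1/2}\mathrm{X}]\,\{\mathrm{Na}^{+}\}}{[\mathrm{NaX}]\,\{\mathrm{Ca}^{2+}\}^{1/2}},
\]

with exchanger concentrations expressed as equivalent fractions and braces denoting aqueous activities.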

Relevance:

30.00%

Publisher:

Abstract:

The Lattice Solid Model has been used successfully as a virtual laboratory to simulate fracturing of rocks, the dynamics of faults, earthquakes and gouge processes. However, results from those simulations show that in order to make the next step towards more realistic experiments it will be necessary to use models containing a significantly larger number of particles than current models. Thus, those simulations will require a greatly increased amount of computational resources. Whereas the computing power provided by single processors can be expected to increase according to Moore's law, i.e., to double every 18-24 months, parallel computers can provide significantly larger computing power today. In order to make this computing power available for the simulation of the microphysics of earthquakes, a parallel version of the Lattice Solid Model has been implemented. Benchmarks using large models with several millions of particles have shown that the parallel implementation of the Lattice Solid Model can achieve a high parallel-efficiency of about 80% for large numbers of processors on different computer architectures.
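The parallel efficiency quoted above is conventionally defined from the speed-up relative to a single processor; the standard definition is recalled here for clarity:

\[
S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p} = \frac{T_1}{p\,T_p},
\]

so an efficiency of about 80% means that the runtime on \(p\) processors is roughly \(T_1/(0.8\,p)\).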

Relevance:

30.00%

Publisher:

Abstract:

Cereal-legume intercropping plays an important role in subsistence food production in developing countries, especially in situations of limited water resources. Crop simulation can be used to assess risk for intercrop productivity over time and space. In this study, a simple model for intercropping was developed for cereal and legume growth and yield under semi-arid conditions. The model is based on radiation interception and use, and incorporates a water stress factor. Total dry matter and yield are functions of photosynthetically active radiation (PAR), the fraction of radiation intercepted and radiation use efficiency (RUE). One of two PAR sub-models was used to estimate PAR from solar radiation: either PAR is 50% of solar radiation, or the ratio of PAR to solar radiation (PAR/SR) is a function of the clearness index (KT). The fraction of radiation intercepted was calculated either from Beer's law with crop extinction coefficients (K) from field experiments or from previous reports. RUE was calculated as a function of available soil water to a depth of 900 mm (ASW). Either the soil water balance method or the decay curve approach was used to determine ASW. Thus, two alternatives for each of three factors, i.e., PAR/SR, K and ASW, were considered, giving eight possible models (2^3 combinations). The model calibration and validation were carried out with maize-bean intercropping systems using data collected in a semi-arid region (Bloemfontein, Free State, South Africa) during seven growing seasons (1996/1997-2002/2003). The combination of PAR estimated from the clearness index, a crop extinction coefficient from the field experiment and the decay curve model gave the most reasonable and acceptable result. The intercrop model developed in this study is simple, so this modelling approach can be employed to develop other cereal-legume intercrop models for semi-arid regions.
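A minimal sketch of the radiation-driven growth step described above is given below. The function name and parameter values are illustrative assumptions, not the calibrated maize-bean model:

```python
import math

def daily_dry_matter(solar_rad_mj, lai, k=0.6, rue_g_per_mj=1.5,
                     asw_fraction=1.0, par_fraction=0.5):
    """Toy daily biomass increment for one intercrop component.

    solar_rad_mj   incident solar radiation (MJ m-2 day-1)
    lai            leaf area index of the component (-)
    k              canopy extinction coefficient (Beer's law)
    rue_g_per_mj   radiation use efficiency (g DM per MJ PAR)
    asw_fraction   0-1 water stress factor scaling RUE
    par_fraction   PAR/SR ratio (0.5, or a function of the clearness index)
    """
    par = par_fraction * solar_rad_mj        # PAR estimated from solar radiation
    f_int = 1.0 - math.exp(-k * lai)         # fraction intercepted (Beer's law)
    rue = rue_g_per_mj * asw_fraction        # water-limited RUE
    return par * f_int * rue                 # g DM m-2 day-1

# Example: 20 MJ m-2 of solar radiation, LAI = 2, mild water stress
print(daily_dry_matter(20.0, 2.0, asw_fraction=0.8))
```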

Relevance:

30.00%

Publisher:

Abstract:

The buffer allocation problem (BAP) is a well-known difficult problem in the design of production lines. We present a stochastic algorithm for solving the BAP, based on the cross-entropy method, a new paradigm for stochastic optimization. The algorithm involves the following iterative steps: (a) the generation of buffer allocations according to a certain random mechanism, followed by (b) the modification of this mechanism on the basis of cross-entropy minimization. Through various numerical experiments we demonstrate the efficiency of the proposed algorithm and show that the method can quickly generate (near-)optimal buffer allocations for fairly large production lines.
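A compact sketch of the two iterative steps (a) and (b) for the BAP is given below. The throughput function is a placeholder standing in for the production-line simulator, and the sampling scheme shown is one simple possibility rather than the authors' exact random mechanism:

```python
import random

def throughput(alloc):
    """Placeholder objective: stands in for a production-line simulation
    that returns the throughput of a given buffer allocation."""
    mean = sum(alloc) / len(alloc)
    return -sum((a - mean) ** 2 for a in alloc)   # toy surrogate, purely illustrative

def cross_entropy_bap(n_buffers=20, n_slots=6, n_samples=200, elite_frac=0.1,
                      n_iters=30, smoothing=0.7, seed=0):
    rng = random.Random(seed)
    p = [1.0 / n_slots] * n_slots                 # sampling distribution over slots
    best = None
    for _ in range(n_iters):
        # (a) generate allocations by dropping each buffer into a slot drawn from p
        samples = []
        for _ in range(n_samples):
            alloc = [0] * n_slots
            for _ in range(n_buffers):
                alloc[rng.choices(range(n_slots), weights=p)[0]] += 1
            samples.append((throughput(alloc), alloc))
        samples.sort(reverse=True)
        elite = [a for _, a in samples[: int(elite_frac * n_samples)]]
        if best is None or samples[0][0] > best[0]:
            best = samples[0]
        # (b) update the sampling distribution towards the elite allocations
        counts = [sum(a[j] for a in elite) for j in range(n_slots)]
        total = sum(counts)
        p = [smoothing * c / total + (1 - smoothing) * pj for c, pj in zip(counts, p)]
    return best

print(cross_entropy_bap())
```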

Relevance:

30.00%

Publisher:

Abstract:

A computer model was developed to simulate cake formation and growth in cake filtration at the individual particle level. The model was shown to be able to generate structural information and to quantify the cake thickness, the average cake solidosity, and the filtrate volume and filtrate flowrate for constant-pressure filtration, or the pressure drop across the filter unit for constant-rate filtration, as a function of filtration time. The effects of particle size distribution and of key operational variables such as the initial filtration flowrate, maximum pressure drop and initial solidosity were examined based on the simulated results. They are qualitatively comparable to those observed in physical experiments. The need for further development of the simulation was also discussed.
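For comparison, the macroscopic behaviour that such particle-level simulations of constant-pressure filtration are usually checked against is the classical cake-filtration relation; this is standard filtration theory quoted for context, not a result of the paper:

\[
\frac{t}{V} \;=\; \frac{\mu\,\alpha\,c}{2\,A^{2}\,\Delta P}\,V \;+\; \frac{\mu\,R_m}{A\,\Delta P},
\]

where \(V\) is the filtrate volume at time \(t\), \(\mu\) the filtrate viscosity, \(c\) the mass of cake solids deposited per unit filtrate volume, \(\alpha\) the specific cake resistance, \(R_m\) the medium resistance, \(A\) the filter area and \(\Delta P\) the applied pressure drop; a plot of \(t/V\) against \(V\) is linear when \(\alpha\) is constant.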

Relevance:

30.00%

Publisher:

Abstract:

This article first summarizes some available experimental results on the frictional behaviour of contact interfaces and briefly recalls typical frictional experiments and relationships applicable to rock mechanics; a unified description of the entire frictional behaviour is then obtained. The description is formulated on the basis of the experimental results and is applied with a stick and slip decomposition algorithm to describe stick-slip instability phenomena, so that the effects observed in rock experiments can be reproduced without using the so-called state variable, thus avoiding the related numerical difficulties. This has been implemented in our finite element code, which uses the node-to-point contact element strategy proposed by the authors to handle frictional contact between multiple finite-deformation bodies with stick and finite frictional slip, and it is applied here to simulate the frictional behaviour of rocks to show its usefulness and efficiency.
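Schematically, a stick and slip decomposition of the kind referred to above distinguishes the two states through a Coulomb-type criterion. The generic form is sketched below; the article's unified description additionally incorporates the experimentally observed friction relationships:

\[
\|\boldsymbol{\tau}\| < \mu\,\sigma_n \;\Rightarrow\; \text{stick (no relative tangential slip)},
\qquad
\|\boldsymbol{\tau}\| = \mu\,\sigma_n \;\Rightarrow\; \text{slip, with }\;
\boldsymbol{\tau} = -\,\mu\,\sigma_n\,\frac{\dot{\mathbf{g}}_t}{\|\dot{\mathbf{g}}_t\|},
\]

where \(\boldsymbol{\tau}\) is the tangential traction, \(\sigma_n\) the normal contact pressure, \(\mu\) the friction coefficient and \(\dot{\mathbf{g}}_t\) the tangential slip rate.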

Relevance:

30.00%

Publisher:

Abstract:

The Operator Choice Model (OCM) was developed to model the behaviour of operators attending to complex tasks involving interdependent concurrent activities, such as in Air Traffic Control (ATC). The purpose of the OCM is to provide a flexible framework for modelling and simulation that can be used for quantitative analyses in human reliability assessment, comparison between human-computer interaction (HCI) designs, and analysis of operator workload. The OCM virtual operator is essentially a cycle of four processes: Scan, Classify, Decide Action, Perform Action. Once a cycle is complete, the operator returns to the Scan process; it is also possible to truncate a cycle and return to Scan after any of the processes. These processes are described using Continuous Time Probabilistic Automata (CTPA). The details of the probability and timing models are specific to the domain of application and need to be specified using domain experts. We are building an application of the OCM for use in ATC. In order to develop a realistic model, we are calibrating the probability and timing models that comprise each process using experimental data from a series of experiments conducted with student subjects. These experiments have identified the factors that influence perception and decision making in simplified conflict detection and resolution tasks. This paper presents an application of the OCM approach to a simple ATC conflict detection experiment. The aim is to calibrate the OCM so that its behaviour resembles that of the experimental subjects when it is challenged with the same task; its behaviour should also interpolate when challenged with scenarios similar to those used to calibrate it. The approach illustrated here uses logistic regression to model the classifications made by the subjects. This model is fitted to the calibration data and provides an extrapolation to classifications in scenarios outside the calibration data. A simple strategy is used to calibrate the timing component of the model, and the resulting reaction times are compared between the OCM and the student subjects. While this approach to timing does not capture the full complexity of the reaction time distribution seen in the data from the student subjects, the mean and the tail of the distributions are similar.
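A minimal sketch of the calibration step described above: a logistic regression is fitted to the subjects' conflict/no-conflict classifications so that the virtual operator can reproduce them and extrapolate to new scenarios. The feature names and data values are illustrative assumptions, not the experimental variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative calibration data: each row is one presented scenario, the
# columns are hypothetical stimulus features (e.g. separation at closest
# approach, time to closest approach), and y records whether the student
# subject classified the aircraft pair as "in conflict".
X = np.array([[2.0, 60.0], [8.0, 120.0], [1.5, 30.0], [9.0, 300.0],
              [3.0, 90.0], [6.5, 200.0], [2.5, 45.0], [7.0, 150.0]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Probability that the virtual operator classifies an unseen scenario as a
# conflict; this is the extrapolation used when the OCM faces new scenarios.
new_scenario = np.array([[4.0, 100.0]])
print(model.predict_proba(new_scenario)[0, 1])
```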

Relevance:

30.00%

Publisher:

Abstract:

The following thesis describes the computer modelling of radio frequency capacitively coupled methane/hydrogen plasmas and the consequences for the reactive ion etching of (100) GaAs surfaces. In addition, a range of etching experiments was undertaken over a matrix of pressure, power and methane concentration. The resulting surfaces were investigated using X-ray photoelectron spectroscopy, and the results were discussed in terms of physical and chemical models of particle/surface interactions, in addition to the predictions for the energies, angles and relative fluxes of the various plasma species arriving at the substrate. The model consisted of a Monte Carlo code which followed electrons and ions through the plasma and sheath potentials whilst taking account of collisions with background neutral gas molecules. The ionisation profile output from the electron module was used as input for the ionic module. Momentum scattering interactions of ions with gas molecules were investigated via different models and compared against results given by a quantum mechanical code. The interactions were treated as central potential scattering events, and the resulting neutral cascades were followed. The resulting predictions for ion energies at the cathode compared well with experimental ion energy distributions, and this verified the particular form of the electrical potentials used and their applicability to the particular plasma cell geometry used in the etching experiments. The final code was used to investigate the effect of external plasma parameters on the mass distribution, energy and angles of all species impinging on the electrodes. Comparisons of electron energies in the plasma also agreed favourably with measurements made using a Langmuir electric probe. The surface analysis showed all the surfaces to be depleted in arsenic due to its preferential removal, and the resultant Ga:As ratio in the surface was found to be directly linked to the etch rate. The etch rate was determined by the methane flux, which was predicted by the code.
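A minimal illustration of the core sampling step in such a Monte Carlo plasma module is given below: drawing the free-flight distance of a particle between collisions with the neutral background. A constant mean free path is assumed for simplicity; the thesis code works with the actual cross sections and the plasma and sheath potentials:

```python
import math
import random

def free_flight_distance(mean_free_path, rng):
    """Sample the distance to the next collision for a particle moving
    through a uniform neutral background (exponentially distributed)."""
    # 1 - rng.random() lies in (0, 1], so the logarithm is always defined.
    return -mean_free_path * math.log(1.0 - rng.random())

rng = random.Random(1)
lam = 3.0e-3  # illustrative mean free path in metres
print([free_flight_distance(lam, rng) for _ in range(5)])
```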

Relevance:

30.00%

Publisher:

Abstract:

Computer programs have been developed to enable the coordination of fuses and overcurrent relays for radial power systems under estimated fault current conditions. The grading curves for these protection devices can be produced on a graphics terminal and a hard copy can be obtained. Additional programs have also been developed which could be used to assess the validity of relay settings (obtained under the above conditions) when the transient effect is included. Modelling of a current transformer is included because transformer saturation may occur if the fault current is high, and hence the secondary current is distorted. Experiments were carried out to confirm that distorted currents will affect the relay operating time, and it is shown that if the relay current contains only a small percentage of harmonic distortion, the relay operating time is increased. System equations were arranged to enable the model to predict fault currents with a generator transformer incorporated in the system, and also to include the effect of circuit breaker opening, arcing resistance, and earthing resistance. A fictitious field winding was included to enable more accurate prediction of fault currents when the system is operating at both lagging and leading power factors prior to the occurrence of the fault.
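For reference, the time grading described above is normally built on inverse-time overcurrent characteristics of the IEC form; the standard characteristic is quoted here as context rather than taken from the thesis:

\[
t \;=\; \mathrm{TMS} \times \frac{k}{\left( I / I_s \right)^{\alpha} - 1},
\]

where \(I\) is the measured (secondary) current, \(I_s\) the relay current setting, \(\mathrm{TMS}\) the time multiplier setting, and \((k, \alpha)\) select the curve, e.g. \(k = 0.14\), \(\alpha = 0.02\) for the standard inverse characteristic. Distortion of the secondary current changes the effective \(I\) seen by the relay and hence its operating time, which is consistent with the increase in operating time observed in the experiments.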