917 results for C. computational simulation


Relevance: 30.00%

Publisher:

Abstract:

Severe local storms, including tornadoes, damaging hail and wind gusts, frequently occur over the eastern and northeastern states of India during the pre-monsoon season (March-May). Forecasting thunderstorms is one of the most difficult tasks in weather prediction because of their rather small spatial and temporal extent and the inherent non-linearity of their dynamics and physics. In this paper, sensitivity experiments are conducted with the WRF-NMM model to test the impact of convective parameterization schemes on simulating the severe thunderstorms that occurred over Kolkata on 20 May 2006 and 21 May 2007, and the model results are validated against observations. In addition, a simulation without a convective parameterization scheme was performed for each case to determine whether the model could simulate the convection explicitly. A statistical analysis based on the mean absolute error, root mean square error and correlation coefficient is performed to compare the simulated and observed data for the different convective schemes. The study shows that the prediction of thunderstorm-affected parameters is sensitive to the convective scheme. The Grell-Devenyi cloud ensemble convective scheme simulated the thunderstorm activity well in terms of time, intensity and region of occurrence compared with the other convective schemes and with the explicit run.
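
The verification statistics mentioned (mean absolute error, root mean square error and correlation coefficient) can be computed as in this minimal sketch, where sim and obs are placeholder arrays of simulated and observed values.

# Verification statistics for comparing simulated and observed values
# (minimal sketch; 'sim' and 'obs' are placeholder arrays).
import numpy as np

def verify(sim, obs):
    mae = np.mean(np.abs(sim - obs))            # mean absolute error
    rmse = np.sqrt(np.mean((sim - obs) ** 2))   # root mean square error
    corr = np.corrcoef(sim, obs)[0, 1]          # correlation coefficient
    return mae, rmse, corr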

Relevance: 30.00%

Publisher:

Abstract:

Decision trees are very powerful tools for classification in data mining tasks that involve different types of attributes. When handling numeric data sets, the attributes are usually converted to categorical types first and then classified using information gain concepts. Information gain is a very popular and useful concept that indicates whether any benefit, in terms of information content, is obtained by splitting on a given attribute. However, this process is computationally intensive for large data sets, and popular decision tree algorithms such as ID3 cannot handle numeric data sets directly. This paper proposes statistical variance as an alternative to information gain, together with the statistical mean as the split point, for splitting attributes in completely numerical data sets. The new algorithm has been shown to be competitive with its information-gain counterpart C4.5 and with many existing decision tree algorithms on the standard UCI benchmark datasets, using the ANOVA test from statistics. The specific advantages of the proposed algorithm are that it avoids the computational overhead of information gain computation for large data sets with many attributes, and that it avoids converting huge numeric data sets to categorical data, which is also a time-consuming task. In summary, huge numeric datasets can be submitted directly to this algorithm without any attribute mappings or information gain computations. It also blends two closely related fields: statistics and data mining.
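
As a rough illustration of the splitting idea (not the paper's exact algorithm), the sketch below splits a numeric attribute at its statistical mean and scores the split by the reduction in variance of a numerically coded target; the names and the choice of impurity measure are assumptions.

# Rough sketch of mean/variance splitting for numeric data (illustrative only;
# not the paper's exact algorithm).
import numpy as np

def variance_reduction_at_mean(x, y):
    """Split attribute x at its mean; score by the drop in variance of the
    numerically coded target y (a variance-based impurity measure)."""
    threshold = x.mean()                          # split point: statistical mean
    left, right = y[x <= threshold], y[x > threshold]
    if left.size == 0 or right.size == 0:
        return 0.0, threshold
    weighted_child_var = (left.size * left.var() + right.size * right.var()) / y.size
    return y.var() - weighted_child_var, threshold

def best_attribute(X, y):
    """Return the index of the column whose mean-split reduces variance most."""
    scores = [variance_reduction_at_mean(X[:, j], y)[0] for j in range(X.shape[1])]
    return int(np.argmax(scores))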

Relevance: 30.00%

Publisher:

Abstract:

When simulation modeling is used for performance improvement studies of complex systems such as transport terminals, domain-specific conceptual modeling constructs can be used by modelers to create structured models. A two-stage procedure, comprising identification of the problem characteristics/cluster ('knowledge acquisition') and identification of standard models for the problem cluster ('model abstraction'), was found to be effective in creating structured models when applied to certain logistic terminal systems. In this paper we discuss methods and examples related to the knowledge acquisition and model abstraction stages for the development of three different categories of terminal system models.

Relevance: 30.00%

Publisher:

Abstract:

A stand-alone power system is an autonomous system that supplies electricity to the user load without being connected to the electric grid. This kind of decentralized system is frequently located in remote and inaccessible areas. It is essential for the roughly one third of the world's population living in developing or isolated regions without access to an electricity utility grid. Most of these people live in remote and rural areas with low population density, lacking even basic infrastructure; extending the utility grid to these locations is not cost-effective and is sometimes technically not feasible. The purpose of this thesis is the modelling and simulation of a stand-alone hybrid power system, referred to as the “hydrogen Photovoltaic-Fuel Cell (PVFC) hybrid system”. It couples a photovoltaic generator (PV), an alkaline water electrolyser, a gas storage tank, a proton exchange membrane fuel cell (PEMFC), and power conditioning units (PCU) in different system topologies. The system is intended to be an environmentally friendly solution, since it tries to maximise the use of a renewable energy source. Electricity is produced by the PV generator to meet the requirements of a user load. Whenever there is enough solar radiation, the user load can be powered entirely by the PV electricity. During periods of low solar radiation, auxiliary electricity is required. An alkaline high-pressure water electrolyser is powered by the excess energy from the PV generator to produce hydrogen and oxygen at a pressure of up to 30 bar. The gases are stored without compression for short-term (hourly or daily) and long-term (seasonal) use. A proton exchange membrane (PEM) fuel cell is used to keep the system's reliability at the same level as that of a conventional system while decreasing the environmental impact of the whole system. The PEM fuel cell consumes the gases produced by the electrolyser to meet the user load demand when the PV generator energy is insufficient, so it works as an auxiliary generator. Power conditioning units handle the conversion and dispatch of energy between the components of the system. No batteries are used in this system, since they represent the weakest link in PV systems owing to their need for sophisticated control and their short lifetime. The model library, the ISET Alternative Power Library (ISET-APL), was designed by the Institute of Solar Energy Supply Technology (ISET) and is used for the simulation of the hybrid system. The physical, analytical and/or empirical equations of each component are programmed and implemented separately in this library for the simulation software Simplorer in the C++ language. The model parameters are derived from manufacturers' performance data sheets or from measurements reported in the literature. The major hydrogen PVFC hybrid system component models are identified and validated against measured data for the components, taken from the manufacturer's data sheet or from actual system operation. The overall system is then simulated at one-hour intervals over one year of operation, using solar radiation as the primary energy input and hydrogen as energy storage. A comparison between different topologies, such as DC- and AC-coupled systems, is carried out from an energy point of view at two locations with different geographical latitudes: Kassel, Germany (Europe) and Cairo, Egypt (North Africa).
The main conclusion of this work is that the simulation method could successfully be used to visualize and compare the overall performance of these topologies under different conditions. The operational performance of the system depends not only on component efficiency but also on system design and consumption behaviour. The weakest point of the system is the low efficiency of the storage subsystem made up of the electrolyser, the gas storage tank and the fuel cell, which is around 25-34% in Cairo and 29-37% in Kassel. Research on this system should therefore concentrate on the development of the subsystem components, especially the fuel cell.
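
The dispatch logic described above (PV covers the load, surplus energy drives the electrolyser, deficits are met by the fuel cell from stored hydrogen) can be pictured with the following minimal hourly energy-balance sketch; the efficiencies, tank size and function name are placeholder assumptions, not values from the ISET-APL models.

# Illustrative hourly energy-balance dispatch for a PV-electrolyser-fuel-cell
# system (a minimal sketch, not the ISET-APL/Simplorer model; efficiencies and
# tank capacity are assumed placeholder values).
def simulate_year(pv_kwh, load_kwh, eta_ely=0.65, eta_fc=0.50,
                  tank_kwh_max=500.0, tank_kwh=250.0):
    unmet = 0.0
    for pv, load in zip(pv_kwh, load_kwh):        # one value per hour
        surplus = pv - load
        if surplus >= 0.0:
            # excess PV energy drives the electrolyser; store hydrogen (as kWh)
            tank_kwh = min(tank_kwh_max, tank_kwh + surplus * eta_ely)
        else:
            # deficit: the fuel cell reconverts stored hydrogen to electricity
            need_h2 = -surplus / eta_fc
            used = min(tank_kwh, need_h2)
            tank_kwh -= used
            unmet += -surplus - used * eta_fc
    return tank_kwh, unmet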

Relevance: 30.00%

Publisher:

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the Internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process whose solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process selects the most promising solution candidates step by step and modifies and combines them with mutation and crossover operators. In this way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP, eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs in this evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and was, in most cases, superior to the other representations.
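
The evolutionary loop described above can be pictured with the following generic sketch; the functions passed in (program generation, simulation-based scoring, crossover, mutation) are assumed stubs, and this is not the thesis's actual implementation.

# Generic evolutionary loop of the kind described: candidate programs are
# scored in randomized network simulations and refined by selection,
# crossover and mutation (a sketch; the callables are assumed stubs).
import random

def evolve(random_program, simulate_network, crossover, mutate,
           pop_size=100, generations=50, n_simulations=5):
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        # fitness: how closely the locally executed program reproduces the
        # specified global behavior, averaged over randomized simulations
        scored = [(sum(simulate_network(p) for _ in range(n_simulations))
                   / n_simulations, p) for p in population]
        scored.sort(key=lambda sp: sp[0], reverse=True)
        parents = [p for _, p in scored[:pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))
        population = parents + children
    return max(population, key=simulate_network)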

Relevance: 30.00%

Publisher:

Abstract:

Excimer lasers are pulsed gas lasers that generate laser emission in the UV in the form of line radiation, depending on the gas mixture. The first discharge-pumped excimer laser was demonstrated by Ischenko in 1977. All commercially available excimer lasers are discharge-pumped systems. Because of the short wavelength, very strong pumping is required to obtain the population inversion needed to make the laser oscillate. This pump power has to be delivered by a pulsed power module. Thyratrons, low-pressure switching tubes, are commonly used as the switching element, but their lifetime is very limited. For this reason, semiconductor switches with pulse compression stages have increasingly taken over in this application as well since the mid-1990s. In this work, an attempt is made to replace the pulse compression with a directly switching semiconductor stack, thereby reducing the losses and eliminating the effort required for the pulse compression. In addition, the maximum possible repetition rate can be increased. To calculate the stress on the devices, models that are as simple as possible but still powerful were developed for all components. Since the normally available device data refer to other applications, fundamental measurements in the time domain of the later application had to be carried out for all components. For the nonlinear inductors, a simple test procedure was developed to determine the losses at very high magnetization rates. These measurements form the basis of the model, which essentially describes a current-dependent inductance. This model was used for the "magnetic assist", which reduces the turn-on losses in the semiconductors. The pulse capacitors were also characterized, using a procedure developed in this work, close to the later operating parameters. It turned out that the very common Class II ceramic capacitors are not suitable for this application. Class I high-voltage multilayer capacitors, which show significantly better behaviour, were therefore used as the storage bank. The semiconductor devices were likewise characterized in a test procedure close to the later operating parameters. This showed that only modern power MOSFETs are suitable for this use. For the diodes, it was found that only silicon carbide (SiC) Schottky diodes can be used in this application. In principle, different topologies are possible. On closer examination, however, only the C-C transfer arrangement can deliver the desired results. This topology was implemented. It essentially consists of a storage bank that is charged by the power supply; from this bank, the energy is transferred into the laser head via the switch. Because of the high voltages and currents, 24 switching elements have to be connected in series, with 4 in parallel at each stage. The switches are driven via highly insulating gate transformers. It was found that carefully designed dynamic and static voltage grading is necessary for safe operation. In this work, operation with a real laser chamber as the load was achieved up to 6 kHz, limited only by the maximum possible repetition rate of the laser chamber.
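
The current-dependent inductance mentioned above can be pictured with a common saturable-inductor approximation; the functional form and parameter values in this sketch are assumptions for illustration only, not the model identified in this work.

# Simple current-dependent inductance L(i): a generic saturable-inductor
# approximation (illustrative; form and values are assumed, not measured here).
def inductance(i, L_unsat=10e-6, L_sat=0.5e-6, i_sat=200.0, p=4):
    """Interpolate smoothly between unsaturated and saturated inductance."""
    return L_sat + (L_unsat - L_sat) / (1.0 + (abs(i) / i_sat) ** p)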

Relevance: 30.00%

Publisher:

Abstract:

The Kineticist's Workbench is a program that simulates chemical reaction mechanisms by predicting, generating, and interpreting numerical data. Prior to simulation, it analyzes a given mechanism to predict that mechanism's behavior; it then simulates the mechanism numerically; and afterward, it interprets and summarizes the data it has generated. In performing these tasks, the Workbench uses a variety of techniques: graph-theoretic algorithms (for analyzing mechanisms), traditional numerical simulation methods, and algorithms that examine simulation results and reinterpret them in qualitative terms. The Workbench thus serves as a prototype for a new class of scientific computational tools: tools that provide symbiotic collaborations between qualitative and quantitative methods.
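
The traditional numerical-simulation part can be illustrated with a toy mechanism A → B → C integrated as ordinary differential equations; the rate constants below are arbitrary example values, not taken from the Workbench.

# Numerical simulation of a toy mechanism A -> B -> C (mass-action kinetics),
# illustrating only the numerical-integration step; rate constants are examples.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 1.0, 0.3                      # example rate constants (1/s)

def rhs(t, y):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 20.0, 5)
print(sol.sol(t).T)                    # concentrations of A, B, C over time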

Relevance: 30.00%

Publisher:

Abstract:

Modeling and simulation permeate all areas of business, science and engineering. With the increase in the scale and complexity of simulations, large amounts of computational resources are required, and collaborative model development is needed, since multiple parties may be involved in the development process. The Grid provides a platform for coordinated resource sharing and for application development and execution. In this paper, we survey existing technologies in modeling and simulation, focusing on the interoperability and composability of simulation components for both simulation development and execution. We also present our recent work on an HLA-based simulation framework on the Grid and discuss the issues involved in achieving composability.

Relevance: 30.00%

Publisher:

Abstract:

Electroosmotic flow is a convenient mechanism for transporting polar fluid in a microfluidic device. The flow is generated through the application of an external electric field that acts on the free charges that exist in a thin Debye layer at the channel walls. The charge on the wall is due to the chemistry of the solid-fluid interface, and it can vary along the channel, e.g. because of wall modification. This investigation focuses on the simulation of the electroosmotic flow (EOF) profile in a cylindrical microchannel with a step change in zeta potential. The modified Navier-Stokes equation governing the velocity field and a non-linear two-dimensional Poisson-Boltzmann equation governing the electrical double-layer (EDL) field distribution are solved numerically using a finite control-volume method. Continuity of flow rate and electric current is enforced, resulting in a non-uniform electric field and pressure gradient distribution along the channel. At the junction of the step change in zeta potential, a parabolic velocity distribution is obtained, which is more typical of a pressure-driven flow profile.
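
For reference, standard forms of the two governing equations referred to above are given below in common notation (symbols follow usual conventions and may differ from the paper's formulation).

% Poisson-Boltzmann equation for the EDL potential psi (symmetric electrolyte):
\nabla^{2}\psi = \frac{2 n_{0} z e}{\varepsilon}\,\sinh\!\left(\frac{z e \psi}{k_{B} T}\right)

% Navier-Stokes equation modified by the electric body force rho_e E:
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \rho_{e}\,\mathbf{E}

Here rho_e is the net charge density in the double layer and E is the local electric field.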

Relevance: 30.00%

Publisher:

Abstract:

This dissertation studies the transmission mechanisms that link the behavior of agents and firms with the asymmetries present in business cycles. To this end, three DSGE models were built. In the first chapter, the assumption of a symmetric quadratic investment adjustment cost function is removed, and the canonical RBC model is reformulated under the assumption that disinvesting one unit of physical capital is more costly than investing one. The second chapter presents the most important contribution of this dissertation: the construction of a general utility function that nests loss aversion, risk aversion and habit formation by means of a smooth transition function. The rationale is that individuals are loss averse in recessions and risk averse in booms. In the third chapter, asymmetries in business cycles are analyzed together with asymmetric price and wage adjustment in a New Keynesian framework, in order to find a theoretical explanation for the well-documented asymmetry of the Phillips curve.
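
One generic way to nest two preference specifications with a smooth transition function is sketched below; the functional form, the habit term and the logistic transition are purely illustrative assumptions, not the dissertation's actual utility function.

U_t = \bigl(1 - G(x_t)\bigr)\, u^{\mathrm{LA}}(c_t - h\, c_{t-1})
      + G(x_t)\, u^{\mathrm{RA}}(c_t - h\, c_{t-1}),
\qquad
G(x_t) = \frac{1}{1 + e^{-\gamma\,(x_t - \kappa)}}

Here u^{LA} and u^{RA} stand for loss-averse and risk-averse period utilities, h is a habit parameter, x_t is a transition variable (e.g. the cyclical state), and gamma and kappa control the smoothness and location of the transition.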

Relevance: 30.00%

Publisher:

Abstract:

The network theory of Johanson and Mattson (1988) explains how small firms, also known as SMEs, use business networks to develop their internationalization processes. Through networks they can overcome their size limitations and gain a certain fluidity and dynamism in their management, in order to take advantage of the benefits of internationalization. By developing and strengthening relationships within the network, the organization can move into an increasingly strong competitive position (Jarillo, 1988). According to Forsgren and Johanson (1992), it is important for managers to coordinate the interaction between the different actors in the network, since through these interactions their position within the network improves and the flow of resources increases. The purpose of this paper is to analyze, from a cultural perspective, the internationalization model according to network theory of e-Tech Simulation, a North American "born to be global" SME. This firm has minimized its internationalization risk through the development of agreements between the different actors. By improving its position within the network, that is, by further strengthening existing ties and creating new relationships, the firm has obtained greater benefits from the network and has managed to be even more flexible with its customers. On the basis of this analysis, a series of recommendations is proposed to improve negotiation processes within the network in a cultural context. The analysis also shows the importance of the manager's entrepreneurial role in internationalization processes, as well as his or her ability to combine resources obtained from different international markets to satisfy customer needs.

Relevance: 30.00%

Publisher:

Abstract:

Using a simulation carried out with GTAP, this paper presents a preliminary assessment of the potential impact that the Free Trade Area of the Americas would have on the Andean Community of Nations. Maintained by Purdue University, GTAP is a multi-regional general equilibrium model widely used for the analysis of international economic issues. The experiment is conducted in an environment of perfect competition and constant returns to scale and consists of the complete elimination of tariffs on imports of goods among the countries of the Western Hemisphere. The results show modest but positive net welfare gains for the Andean Community, generated mainly by improvements in resource allocation. Unfavourable movements in the terms of trade and the effect of trade diversion with respect to third countries considerably reduce the potential welfare gains. Likewise, the existence of economic distortions within the Andean Community has a negative effect on welfare. The pattern of trade becomes more concentrated in bilateral trade with the United States, and the real remuneration of productive factors improves with the implementation of the free trade area.

Relevance: 30.00%

Publisher:

Abstract:

Charge transfer properties of DNA depend strongly on the π-stack conformation. In the present paper, we identify conformations of homogeneous poly(G)-poly(C) stacks that should exhibit high charge mobility. Two different computational approaches were applied. First, we calculated the electronic coupling squared, V², between adjacent base pairs for all 1 ps snapshots extracted from a 15 ns molecular dynamics trajectory of the duplex G15. The average value of the coupling squared, ⟨V²⟩, is found to be 0.0065 eV². We then analyze the base-pair and step parameters of the configurations in which V² is at least an order of magnitude larger than ⟨V²⟩. To obtain more consistent data, ~65,000 configurations of the (G:C)2 stack were built using systematic screening of the step parameters shift, slide, and twist. We show that undertwisted structures (twist < 20°) are of special interest, because π-stack conformations with strong electronic couplings are found over a wide range of slide and shift values. Although effective hole transfer can also occur in configurations with twist = 30° and 35°, large mutual displacements of the neighboring base pairs are required. Overtwisted conformations (twist ≥ 38°) appear to be of limited interest in the context of effective hole transfer. The results may be helpful in the search for DNA-based elements for nanoelectronics.
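
The first analysis step described (averaging the coupling squared over the MD snapshots and keeping those at least an order of magnitude above the mean) might look like the following sketch; the input file name is hypothetical.

# Sketch of the snapshot analysis described above: average V^2 over all MD
# snapshots and keep those at least 10x the mean (illustrative only).
import numpy as np

v2 = np.loadtxt("couplings_eV2.dat")      # hypothetical file: one V^2 per 1 ps snapshot
mean_v2 = v2.mean()                       # compare with the reported <V^2> = 0.0065 eV^2
strong = np.where(v2 >= 10.0 * mean_v2)[0]
print(f"<V^2> = {mean_v2:.4f} eV^2, {strong.size} strongly coupled snapshots")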

Relevance: 30.00%

Publisher:

Abstract:

This thesis proposes a methodology for the probabilistic simulation of matrix failure in carbon-fibre-reinforced composite materials, based on the analysis of the random distribution of the fibres. The first chapters review the state of the art on the mathematical modelling of random materials, the computation of effective properties and transverse failure criteria for composite materials. The first step in the proposed methodology is the determination of the minimum size of a Statistical Representative Volume Element (SRVE). This determination is carried out by analyzing the fibre volume fraction, the effective elastic properties, the Hill condition, the statistics of the stress and strain components, the probability density function and the inter-fibre distance statistics of microstructure models of different sizes. Once this minimum size has been determined, a periodic model and a random model are compared in order to assess the magnitude of the differences observed between them. A methodology is also defined for the statistical analysis of the fibre distribution in the composite from digital images of the cross-section; this analysis is applied to four different materials. Finally, a two-scale computational method is proposed to simulate the transverse failure of unidirectional plies, which yields probability density functions for the mechanical variables. Some applications and possibilities of this method are described, and the simulation results are compared with experimental values.
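
One of the statistical descriptors mentioned, the distribution of distances between fibres, can be computed from fibre-centre coordinates as in this small sketch; the coordinates below are synthetic and the function name is illustrative.

# Nearest-neighbour distance statistics for fibre centres (illustrative sketch
# of one descriptor used to characterize the random fibre distribution).
import numpy as np

def nearest_neighbour_distances(centres):
    """centres: (N, 2) array of fibre-centre coordinates in the cross-section."""
    d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # ignore self-distances
    return d.min(axis=1)

centres = np.random.default_rng(0).uniform(0.0, 100.0, size=(200, 2))  # synthetic example
nnd = nearest_neighbour_distances(centres)
print(nnd.mean(), nnd.std())              # summary statistics of the distribution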