925 results for three-dimensional analytical solution


Relevance:

30.00%

Publisher:

Abstract:

The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas release. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of the hazardous material storages in an industrial area of central Kerala, India. A survey carried out in the major accident hazard (MAH) units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone and LPG. The damage potential of the above chemicals is assessed using consequence modelling. Modelling of pool fires for naphtha, cyclohexane, cyclohexanone, benzene and ammonia is carried out using the TNO model. Vapor cloud explosion (VCE) modelling of LPG, cyclohexane and benzene is carried out using the TNT equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals such as chlorine, ammonia and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling. The distance covered by the threat zone was found to be greatest for a chlorine release from a chlor-alkali industry located in the area. The results of the consequence modelling are useful for the estimation of individual risk and societal risk in the industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal and pressure loads. Individual and societal risks are also estimated at different locations. Mapping of threat zones due to the different incident outcome cases from the different MAH industries is done with the help of ArcGIS. Fault Tree Analysis (FTA) is an established technique for hazard evaluation. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate the failure probability of components precisely, owing to insufficient data or the vague character of the basic events. It has been reported that the availability of failure probability data pertaining to local conditions is surprisingly limited in India. This thesis outlines the generation of failure probability values for the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and fuzzy logic. A sensitivity analysis has been performed to evaluate the percentage contribution of each basic event that could lead to a chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
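
As a rough illustration of the quantitative side of FTA referred to above, the sketch below computes a top-event probability from independent basic events through AND/OR gates. All event names and probabilities are hypothetical, and the fuzzy/TDFFTA treatment of the thesis is not shown.

    # Minimal fault-tree evaluation sketch. Event names and probabilities are
    # hypothetical. Basic events are assumed independent: an OR gate gives
    # P = 1 - prod(1 - p_i), an AND gate gives P = prod(p_i).
    from math import prod

    def p_or(*ps):
        return 1.0 - prod(1.0 - p for p in ps)

    def p_and(*ps):
        return prod(ps)

    # Hypothetical basic-event failure probabilities for a chlorine release
    p_valve_leak = 1e-3
    p_gasket_fail = 5e-4
    p_operator_error = 2e-3
    p_interlock_fail = 1e-2

    # Top event: an initiating leak (OR of three causes) AND failure of the
    # protective interlock to intervene
    p_initiator = p_or(p_valve_leak, p_gasket_fail, p_operator_error)
    p_top = p_and(p_initiator, p_interlock_fail)
    print(f"top-event probability: {p_top:.2e}")

A sensitivity analysis in this setting amounts to recomputing p_top with each basic-event probability perturbed in turn and recording the relative change.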

Relevance:

30.00%

Publisher:

Abstract:

The study of simple chaotic maps for non-equilibrium processes in statistical physics has been one of the central themes in the theory of chaotic dynamical systems. Recently, many works have been carried out on deterministic diffusion in spatially extended one-dimensional maps. These can be related to real physical systems such as Josephson junctions in the presence of microwave radiation and parametrically driven oscillators. Transport due to chaos is an important problem in Hamiltonian dynamics as well. A recent approach is to evaluate the exact diffusion coefficient in terms of the periodic orbits of the system in the form of cycle expansions. However, chaotic motion in such spatially extended maps has two complementary aspects: diffusion and intermittency. These are related to the time evolution of the probability density function, which is approximately Gaussian by the central limit theorem. It is noticed that the characteristic function method introduced by Fujisaka and his co-workers is a very powerful tool for analysing both these aspects of chaotic motion. The theory based on the characteristic function actually provides a thermodynamic formalism for chaotic systems. It can also be applied to other types of chaos-induced diffusion, such as that arising in the statistics of trajectory separation. It was noted that there is a close connection between the cycle expansion technique and the characteristic function method, and it was found that this connection can be exploited to enhance the applicability of the cycle expansion technique. In this way, we found that cycle expansions can be used to analyse the probability density function in chaotic maps. In our research we have successfully applied the characteristic function method and the cycle expansion technique to the analysis of several chaotic maps. In this connection, we introduced two classes of chaotic maps with variable shape by generalising two types of maps well known in the literature.
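
For readers unfamiliar with deterministic diffusion, the following minimal sketch estimates a diffusion coefficient from the mean-square displacement of an ensemble of trajectories of a spatially extended map. The climbing-sine map used here is a standard example (assumed, not necessarily one of the maps studied in this work), and this brute-force estimate is exactly what the cycle-expansion approach is designed to replace.

    # Sketch: diffusion coefficient of a spatially extended 1D map from the
    # mean-square displacement (MSD) of an ensemble of trajectories.
    import numpy as np

    a = 1.2                 # map parameter (assumed chaotic, diffusive regime)
    n_steps = 5000
    rng = np.random.default_rng(0)
    x0 = rng.uniform(0.0, 1.0, size=10000)   # ensemble of initial conditions
    x = x0.copy()
    for _ in range(n_steps):
        x = x + a * np.sin(2.0 * np.pi * x)  # climbing-sine map iteration

    # For large n, <(x_n - x_0)^2> ~ 2 D n, so D is half the MSD growth rate.
    msd = np.mean((x - x0) ** 2)
    print(f"estimated diffusion coefficient D ~ {msd / (2 * n_steps):.4f}")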

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigates two aspects of boundary value problems in linear elasticity theory: the approximation of solutions on unbounded domains and the change of symmetry classes under special transformations. The starting point of the dissertation is the procedure introduced by Specovius-Neugebauer and Nazarov in "Artificial boundary conditions for Petrovsky systems of second order in exterior domains and in other domains of conical type" (Math. Meth. Appl. Sci., 2004; 27) for studying second-order Petrovsky systems in exterior domains and domains with conical exits by means of the method of artificial boundary conditions. There, in order to determine solutions of the boundary value problems, the unbounded domains are bounded by cutting off with a ball, and an artificial boundary condition is constructed so as to approximate the solution of the problem as well as possible. The procedure is modified here so that the truncating domain is a polyhedron, since it is advantageous for solving the approximation problem with standard finite element discretisations when the domain to be triangulated has a polygonal boundary. The thesis begins by presenting the most important functional-analytic notions and results from the theory of elliptic differential operators. This is followed by the main part of the thesis, which is divided into three areas. First, a formal construction of the artificial boundary conditions is given for truncating polyhedral domains. Then the existence and uniqueness of the solution of the approximate boundary value problem on the truncated domain is proved, and subsequently an estimate for the resulting truncation error is derived. The theoretical exposition is followed by a consideration of areas of application: plane crack problems and polarisation matrices of three-dimensional exterior problems of elasticity theory are discussed. The final section treats the second aspect of the thesis, the area of algebraic equivalences. This concerns the transformation of symmetry classes, with the aim of using knowledge of the fundamental solution of elasticity problems for transversely isotropic media also for media that are not of transversely isotropic structure. A general description of all classes could not be provided here; as an example of the procedure, a class of orthotropic media in the three-dimensional case is given that can be reduced to the transversely isotropic case.
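
The truncation idea can be illustrated on a radial model problem (a sketch only; the thesis treats Petrovsky/elasticity systems and polyhedral truncation, not this scalar case). The exterior solution of the 3D radial Laplace equation with u(1) = 1 and decay at infinity is u(r) = 1/r; truncating at r = R, a naive Dirichlet condition u(R) = 0 is compared with the Robin condition u'(R) + u(R)/R = 0, which encodes the correct decay and plays the role of an artificial boundary condition.

    # Sketch: artificial boundary conditions for a truncated exterior problem,
    # (r^2 u')' = 0 on (1, R), u(1) = 1, exact exterior solution u(r) = 1/r.
    import numpy as np

    def solve(R, n, robin):
        r = np.linspace(1.0, R, n)
        h = r[1] - r[0]
        A = np.zeros((n, n)); b = np.zeros(n)
        A[0, 0] = 1.0; b[0] = 1.0                 # u(1) = 1
        for i in range(1, n - 1):                 # centred conservative scheme
            A[i, i - 1] = (r[i] - h / 2) ** 2
            A[i, i]     = -((r[i] - h / 2) ** 2 + (r[i] + h / 2) ** 2)
            A[i, i + 1] = (r[i] + h / 2) ** 2
        if robin:                                 # one-sided u'(R) + u(R)/R = 0
            A[-1, -2] = -1.0 / h
            A[-1, -1] = 1.0 / h + 1.0 / R
        else:
            A[-1, -1] = 1.0                       # naive cut-off: u(R) = 0
        return r, np.linalg.solve(A, b)

    for robin in (False, True):
        r, u = solve(R=10.0, n=200, robin=robin)
        print(f"robin={robin}: max error vs 1/r = {np.max(np.abs(u - 1/r)):.2e}")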

Relevance:

30.00%

Publisher:

Abstract:

The finite element method (FEM) is now developed to solve two-dimensional Hartree-Fock (HF) equations for atoms and diatomic molecules. The method and its implementation are described, and results are presented for the atoms Be, Ne and Ar as well as the diatomic molecules LiH, BH, N_2 and CO as examples. Total energies and eigenvalues calculated with the FEM at the HF level are compared with results obtained with the standard numerical methods used for the solution of the one-dimensional HF equations for atoms, and, for diatomic molecules, with the traditional LCAO quantum chemical methods and the newly developed finite difference method at the HF level. In general, the accuracy increases from the LCAO method to the finite difference method to the finite element method.
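
As a toy analogue of grid-based electronic-structure solvers of this kind, the sketch below assembles linear finite elements for a one-dimensional model eigenvalue problem (the harmonic oscillator, lowest exact eigenvalue 0.5) and solves the resulting generalized eigenproblem; the actual two-dimensional HF solver of the paper is far more elaborate.

    # 1D FEM sketch: -(1/2) u'' + (1/2) x^2 u = E u on [-L, L], linear
    # elements, generalized eigenproblem K c = E M c.
    import numpy as np
    from scipy.linalg import eigh

    L_box, n = 8.0, 400
    x = np.linspace(-L_box, L_box, n + 1)
    h = x[1] - x[0]
    K = np.zeros((n + 1, n + 1)); M = np.zeros((n + 1, n + 1))
    for e in range(n):
        i, j = e, e + 1
        xm = 0.5 * (x[i] + x[j])                      # midpoint of the element
        k_el = (0.5 / h) * np.array([[1, -1], [-1, 1]])   # kinetic term
        m_el = (h / 6.0) * np.array([[2, 1], [1, 2]])     # element mass matrix
        v_el = 0.5 * xm**2 * m_el                         # midpoint potential
        for a, A_ in enumerate((i, j)):
            for b, B_ in enumerate((i, j)):
                K[A_, B_] += k_el[a, b] + v_el[a, b]
                M[A_, B_] += m_el[a, b]
    # Dirichlet boundary conditions: drop the first and last nodes
    E = eigh(K[1:-1, 1:-1], M[1:-1, 1:-1], eigvals_only=True)
    print("lowest FEM eigenvalues:", E[:3])           # ~ 0.5, 1.5, 2.5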

Relevance:

30.00%

Publisher:

Abstract:

The approximation procedure introduced by Maz'ya, the method of approximate approximations, can also be used for the numerical solution of boundary integral equations (boundary point method). In this case, the entries of the matrix of the resulting system of equations for computing the approximation of the density depend only on the positions of the boundary points and the direction of the outward unit normal at these points. This numerical method is studied here for the example of the Dirichlet problem for the Laplace equation and the Stokes equations in a bounded two-dimensional domain. The boundary point method comprises three steps: In the first step, the unknown density is approximated by a linear combination of radial, exponentially decaying basis functions. In the second step, the integration over the boundary is replaced by integration over the tangents at the boundary points; analytical expressions can even be obtained for the approximate potentials that arise. In the third step, the linear system of equations is solved, and an approximation of the unknown density, and hence of the solution of the boundary value problem, is constructed. The convergence of this method is proved for smooth convex domains.
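
The first step of the boundary point method rests on Maz'ya's approximate approximations. A one-dimensional sketch of that quasi-interpolation idea (with Gaussian basis functions and assumed parameters) is given below; note the characteristic saturation error, which does not vanish as h -> 0 but can be pushed below any prescribed tolerance via the shape parameter D.

    # Sketch of approximate approximations in 1D:
    # M_h f(x) = (pi*D)^(-1/2) * sum_j f(jh) * exp(-(x - jh)^2 / (D h^2))
    import numpy as np

    def quasi_interpolant(f, h, D, x):
        j = np.arange(np.floor(x.min() / h) - 10, np.ceil(x.max() / h) + 11)
        nodes = j * h
        w = np.exp(-((x[:, None] - nodes[None, :]) ** 2) / (D * h ** 2))
        return (w @ f(nodes)) / np.sqrt(np.pi * D)

    f = lambda t: np.sin(t)
    x = np.linspace(0.0, 2 * np.pi, 1000)
    for h in (0.2, 0.1, 0.05):
        err = np.max(np.abs(quasi_interpolant(f, h, D=2.0, x=x) - f(x)))
        print(f"h={h}: max error = {err:.2e}")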

Relevance:

30.00%

Publisher:

Abstract:

Council Regulation (EC) No 834/2007 recognises the consumer's right to a decision based on complete information regarding the ingredients contained in a product and their origin (quality of processing). The primary labelling regulation emphasises "organic" production standards as well as the need for control and supervision. However, no validated method is currently available for analytically discriminating between the "organic" and "conventional" origin of food products on offer. The aim of this dissertation was to examine whether selected analytical and holistic methods make it possible to objectively distinguish between organically and conventionally grown wheat. This comprised the determination of total nitrogen (protein) according to Dumas, two-dimensional fluorescence difference gel electrophoresis (2D DIGE) and copper chloride crystallisation. In addition, the number of kernels per ear (kernel number) was determined. All determinations were carried out on traceable winter wheat samples (Triticum aestivum L. cv. Cubus) collected in Belgium in the years 2005-2007. Statistically significant (p < 0.05) differences were found within the sample groups examined in kernel number, total nitrogen (protein content) and total yield, with the conventional samples in most cases showing higher kernel numbers and total nitrogen (protein contents). A sample preparation method for winter wheat compatible with 2D DIGE was developed and applied to an internal winter wheat standard as well as to the respective samples. Compared with their conventional counterparts, the organic samples were in all cases characterised by a smaller number of significantly (p < 0.05) more strongly expressed protein spots. Although certain tendencies towards particular regions of more strongly expressed protein spots on successive 2D images could be observed depending on the farming method, no universal marker protein for distinguishing conventionally and organically grown winter wheat could be identified. Computer-aided processing of the digitised crystallisation images by means of multivariate statistical analysis and partial least squares regression enabled a 100% correct prediction of the farming method of unknown samples as well as of the description of the crystallisation images. This prediction applies only to the data set used here (samples of one cultivar from three sites over two years) and cannot readily be generalised. The results indicate that the quantification of the parameters described has high potential for solving the task posed.
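
A hedged sketch of the chemometric step described above: partial least squares regression against a 0/1 coding of the farming method, with cross-validated prediction. The feature matrix here is synthetic with an injected class signal, standing in for the descriptors of the digitised crystallisation images; scikit-learn's PLSRegression is assumed as the PLS implementation.

    # PLS-based classification sketch on hypothetical feature data.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    n, p = 60, 50
    X = rng.normal(size=(n, p))            # hypothetical image features
    y = np.repeat([0.0, 1.0], n // 2)      # 0 = conventional, 1 = organic
    X[y == 1, :5] += 0.8                   # inject a weak class signal

    pls = PLSRegression(n_components=5)
    y_hat = cross_val_predict(pls, X, y, cv=10).ravel()
    accuracy = np.mean((y_hat > 0.5) == (y == 1))
    print(f"cross-validated accuracy: {accuracy:.2f}")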

Relevance:

30.00%

Publisher:

Abstract:

In this paper a precorrected-FFT Fast Multipole Tree (pFFT-FMT) method for solving the potential flow around arbitrary three-dimensional bodies is presented. The method takes advantage of the efficiency of the pFFT and FMT algorithms to facilitate more demanding computations such as automatic wake generation and hands-off steady and unsteady aerodynamic simulations. The velocity potential on the body surfaces and in the domain is determined using a pFFT Boundary Element Method (BEM) approach based on the Green's theorem boundary integral equation. The vorticity trailing all lifting surfaces in the domain is represented using a Fast Multipole Tree, time-advected, vortex particle method. Some simple steady-state flow solutions are presented to demonstrate the basic capabilities of the solver. Although this paper focuses primarily on steady-state solutions, it should be noted that the approach is designed to be a robust and efficient unsteady potential flow simulation tool, useful for rapid computational prototyping.
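
The following minimal sketch shows only the time-advected vortex particle ingredient, with direct (O(N^2)) desingularised Biot-Savart summation in 2D and forward Euler stepping; the paper's Fast Multipole Tree acceleration and coupling to the pFFT BEM are not shown, and all particle data are synthetic.

    # 2D vortex-particle sketch: direct Biot-Savart summation, Euler stepping.
    import numpy as np

    def biot_savart(pos, gamma, delta=1e-2):
        """Velocity induced at each particle by all others (desingularised)."""
        dx = pos[:, 0][:, None] - pos[:, 0][None, :]
        dy = pos[:, 1][:, None] - pos[:, 1][None, :]
        r2 = dx**2 + dy**2 + delta**2
        u = -(gamma[None, :] * dy / (2 * np.pi * r2)).sum(axis=1)
        v = (gamma[None, :] * dx / (2 * np.pi * r2)).sum(axis=1)
        return np.stack([u, v], axis=1)

    rng = np.random.default_rng(2)
    pos = rng.normal(size=(200, 2)) * 0.1     # particle positions
    gamma = rng.normal(size=200) * 0.01       # particle circulations
    dt = 0.01
    for _ in range(100):                      # time-advect the particles
        pos += dt * biot_savart(pos, gamma)
    print("centroid after advection:", pos.mean(axis=0))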

Relevance:

30.00%

Publisher:

Abstract:

Bimodal dispersal probability distributions with characteristic distances differing by several orders of magnitude have been derived and favorably compared to observations by Nathan [Nature (London) 418, 409 (2002)]. For such bimodal kernels, we show that two-dimensional molecular dynamics computer simulations are unable to yield accurate front speeds. Analytically, the usual continuous-space random walks (CSRWs) are applied to two dimensions. We also introduce discrete-space random walks and use them to check the CSRW results (because of the inefficiency of the numerical simulations). The physical results reported are shown to predict front speeds high enough to possibly explain Reid's paradox of rapid tree migration. We also show that, for a time-ordered evolution equation, fronts are always slower in two dimensions than in one dimension, and that this difference is important for both unimodal and bimodal kernels.
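
For orientation, a standard one-dimensional front-speed calculation for an integrodifference model (in the spirit of Kot, Lewis and van den Driessche; this is not the paper's two-dimensional time-ordered equation) is sketched below, using a bimodal mixture of two Laplace kernels with length scales differing by two orders of magnitude. With net growth R0 and kernel moment generating function M(s), the front speed is c = min_{s>0} (1/s) ln(R0 M(s)); all parameter values are hypothetical.

    # 1D front speed for a bimodal dispersal kernel (illustrative parameters).
    import numpy as np
    from scipy.optimize import minimize_scalar

    R0 = 1.5                       # hypothetical net reproductive rate
    p, a1, a2 = 0.95, 1.0, 100.0   # mixture weight and the two length scales

    def log_mgf(s):
        # Laplace kernel with scale a has M(s) = 1 / (1 - (a*s)^2), |s| < 1/a
        return np.log(p / (1 - (a1 * s) ** 2) + (1 - p) / (1 - (a2 * s) ** 2))

    res = minimize_scalar(lambda s: (np.log(R0) + log_mgf(s)) / s,
                          bounds=(1e-6, 1.0 / a2 - 1e-6), method="bounded")
    print(f"front speed c ~ {res.fun:.3f} (minimiser s* = {res.x:.5f})")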

Relevance:

30.00%

Publisher:

Abstract:

The characteristics of service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher-layer applications. Window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using the PC is compared with QoS parameters in bufferless environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for connection admission control (CAC) in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well; furthermore, this estimate is always conservative, allowing the retention of the network performance guarantees. Several experiments have been carried out and investigated to explain the deviation between the proposed method and simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments to determine the limits of the burst length with respect to the buffer size indicate that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay is a limit that cannot be ignored for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method for bandwidth allocation, and it gives sufficient accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations. To overcome these drawbacks, a new evaluation method is analysed: the Enhanced Convolution Approach (ECA). In the ECA, traffic is grouped into classes with identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each traffic class is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj), and an expression for evaluating CLRj is presented. We conclude that, by combining the ECA method with cut-off mechanisms, the use of the ECA in real-time CAC environments as a single-level scheme is always possible.
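
A minimal sketch of the formula-based convolution step (illustrative parameters only): each on-off source contributes a two-point rate distribution, the aggregate distribution is built by repeated convolution, and the probability of congestion is the tail mass above the link capacity. The homogeneous case shown is precisely where the ECA shortcut applies, since the partial state of a class of identical sources follows directly from a binomial/multinomial distribution.

    # Convolution-based congestion probability for on-off sources.
    import numpy as np

    def convolve_sources(sources):
        """sources: list of (peak_rate_in_units, activity_probability)."""
        dist = np.array([1.0])                  # P(aggregate rate = 0) = 1
        for peak, p_on in sources:
            s = np.zeros(peak + 1)
            s[0], s[peak] = 1.0 - p_on, p_on    # off / on at peak rate
            dist = np.convolve(dist, s)
        return dist

    # 30 homogeneous sources, peak rate 2 units, active 30% of the time
    dist = convolve_sources([(2, 0.3)] * 30)
    capacity = 25                               # link capacity in rate units
    p_congestion = dist[capacity + 1:].sum()
    print(f"P(congestion) = {p_congestion:.3e}")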

Relevance:

30.00%

Publisher:

Abstract:

Many of the new emerging Internet applications, such as TV over Internet, radio over Internet and multi-point video streaming, have requirements on resources such as consumed bandwidth, end-to-end delay and packet loss rate. It is therefore necessary to formulate a proposal that specifies and provides, for this type of application, the resources necessary for its proper operation. In this thesis we propose a multi-objective traffic engineering scheme that uses different distribution trees for many multicast flows. In this case, we use the multipath approach for each egress node, from which we obtain the multi-tree approach and in this way create different multicast trees. Our proposal also determines the fraction of the traffic to be split across the multiple trees. The proposal can be applied in MPLS networks by establishing explicit routes for multicast events. The first objective is to combine the following weighted objectives into a single aggregated metric: maximum link utilisation, hop count, total consumed bandwidth and total end-to-end delay. We have formulated this multi-objective function (the MHDB-S model), and the results obtained show that the various weighted objectives are reduced and the maximum link utilisation is minimised. The problem is NP-hard, so an algorithm is proposed to optimise the different objectives; the behaviour obtained with this algorithm is similar to that obtained with the model. During a multicast transmission, egress nodes can normally leave or join the tree, and for this reason we also propose a multi-objective traffic engineering scheme using different trees for dynamic multicast groups (in which the egress nodes can change during the lifetime of the connection). If a multicast tree is recomputed from scratch, this can consume considerable CPU time, and all communications using the multicast tree are temporarily interrupted. To alleviate these drawbacks, we propose an optimisation model (the dynamic MHDB-D model) that reuses the multicast trees previously computed by the static MHDB-S model, adding the new egress nodes. Using the weighted-sum method to solve the analytical model is not necessarily correct, because the solution space may be non-convex, so some solutions may not be found. In addition, other types of objectives appear in other research work. For these reasons, a new model called GMM is proposed, and to solve it a new algorithm based on multi-objective evolutionary algorithms (MOEAs) is proposed, inspired by the Strength Pareto Evolutionary Algorithm (SPEA). To address the dynamic case with this generalised model, we have proposed a new dynamic model and a computational solution using probabilistic breadth-first search (BFS). Finally, to evaluate our proposed optimisation scheme, we ran various tests and simulations.

The main contributions of this thesis are the taxonomy; the multi-objective optimisation models for the static and dynamic cases of multicast transmission (MHDB-S and MHDB-D); the algorithms providing computational solutions to these models; and, finally, the generalised models for the static and dynamic cases (GMM and dynamic GMM), together with the computational proposals for solving them using MOEAs and probabilistic BFS.
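
As a small illustration of the weighted-sum aggregation underlying an MHDB-S style metric, the sketch below scores candidate multicast trees against hypothetical weights and per-tree measurements (the real model optimises over tree construction and traffic splitting, not over a fixed candidate list, and the weighted sum carries the non-convexity caveat noted above). The names TreeMetrics and weighted_score are illustrative, not from the thesis.

    # Weighted-sum scoring of candidate multicast trees (hypothetical data).
    from dataclasses import dataclass

    @dataclass
    class TreeMetrics:
        max_link_util: float    # utilisation of the most loaded link
        hop_count: int
        total_bandwidth: float  # bandwidth consumed over all tree links
        total_delay: float      # summed end-to-end delay to egress nodes

    def weighted_score(m: TreeMetrics, w=(10.0, 0.1, 0.01, 0.05)):
        return (w[0] * m.max_link_util + w[1] * m.hop_count
                + w[2] * m.total_bandwidth + w[3] * m.total_delay)

    candidates = [
        TreeMetrics(0.80, 12, 340.0, 95.0),
        TreeMetrics(0.55, 15, 365.0, 110.0),   # lower peak utilisation
    ]
    best = min(candidates, key=weighted_score)
    print("preferred tree:", best)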

Relevance:

30.00%

Publisher:

Abstract:

In this paper a cell-by-cell anisotropic adaptive mesh technique is added to an existing staggered-mesh Lagrange-plus-remap finite element ALE code for the solution of the Euler equations. The quadrilateral finite elements may be subdivided isotropically or anisotropically, and a hierarchical data structure is employed. An efficient computational method is proposed which solves only on the finest level of resolution that exists for each part of the domain, with disjoint or hanging nodes being used at resolution transitions. The Lagrangian, equipotential mesh relaxation and advection (solution remapping) steps are generalised so that they may be applied on the dynamic mesh. It is shown that for a radial Sod problem and a two-dimensional Riemann problem the anisotropic adaptive mesh method runs over eight times faster.
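
A minimal sketch of the kind of hierarchical cell structure involved (data layout assumed, not taken from the paper): quadrilateral cells that can be subdivided isotropically into four children or anisotropically into two, with the solver visiting only leaf cells.

    # Hierarchical quadrilateral cells with isotropic/anisotropic subdivision.
    from dataclasses import dataclass, field

    @dataclass
    class Cell:
        x0: float; y0: float; x1: float; y1: float
        level: int = 0
        children: list = field(default_factory=list)

        def split(self, mode="iso"):
            xm, ym = 0.5 * (self.x0 + self.x1), 0.5 * (self.y0 + self.y1)
            if mode == "iso":     # four children
                boxes = [(self.x0, self.y0, xm, ym), (xm, self.y0, self.x1, ym),
                         (self.x0, ym, xm, self.y1), (xm, ym, self.x1, self.y1)]
            elif mode == "x":     # anisotropic: halve in x only
                boxes = [(self.x0, self.y0, xm, self.y1),
                         (xm, self.y0, self.x1, self.y1)]
            else:                 # anisotropic: halve in y only
                boxes = [(self.x0, self.y0, self.x1, ym),
                         (self.x0, ym, self.x1, self.y1)]
            self.children = [Cell(*b, level=self.level + 1) for b in boxes]

        def leaves(self):         # the solver works on leaf cells only
            if not self.children:
                yield self
            for c in self.children:
                yield from c.leaves()

    root = Cell(0.0, 0.0, 1.0, 1.0)
    root.split("iso")
    root.children[0].split("x")   # refine one child anisotropically
    print(sum(1 for _ in root.leaves()), "leaf cells")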

Relevance:

30.00%

Publisher:

Abstract:

A manipulated increase in acid deposition (15 kg S ha^-1), applied for three months in a mature Scots pine (Pinus sylvestris) stand on a podzol, acidified the soil and raised dissolved Al to concentrations above the critical level of 5 mg l^-1 previously determined in a controlled experiment with Scots pine seedlings. The induced soil acidification significantly reduced tree fine root density and biomass in the top 15 cm of soil in the field. The results suggested that the reduction in fine root growth was a response not simply to high Al in solution but to the depletion of exchangeable Ca and Mg in the organic layer, K deficiency, the increase in the NH4:NO3 ratio in solution and the high proton input to the soil from the acid manipulation. The results from this study could not support the hypothesis of Al-induced root damage under field conditions, at least not in the short term. However, the study suggests that even a short exposure to soil acidity may affect the fine root growth of mature Scots pine.

Relevance:

30.00%

Publisher:

Abstract:

We consider the problem of scattering of time-harmonic acoustic waves by an unbounded, sound-soft rough surface. Recently, a Brakhage-Werner-type integral equation formulation of this problem has been proposed, based on an ansatz as a combined single- and double-layer potential, but replacing the usual fundamental solution of the Helmholtz equation with an appropriate half-space Green's function. Moreover, it has been shown in the three-dimensional case that this integral equation is uniquely solvable in the space L^2(Gamma) when the scattering surface Gamma does not differ too much from a plane. In this paper, we show that this integral equation is uniquely solvable with no restriction on the surface elevation or slope. Moreover, we construct explicit bounds on the inverse of the associated boundary integral operator as a function of the wave number kappa, the parameter eta coupling the single- and double-layer potentials, and the maximum surface slope. These bounds show that the norm of the inverse operator is bounded uniformly in the wave number kappa, for kappa > 0, if the coupling parameter eta is chosen proportional to the wave number. In the case when Gamma is a plane, we show that the choice eta = kappa/2 is nearly optimal in terms of minimizing the condition number.
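
For reference, the combined-potential ansatz in question has, schematically, the following form (notation assumed rather than copied from the paper, with G_h the half-space Green's function replacing the usual fundamental solution; signs depend on the orientation convention for the normal n(y)):

    % Brakhage-Werner-type combined single- and double-layer ansatz
    u^{s}(x) \;=\; \int_{\Gamma} \left\{ \frac{\partial G_h(x,y)}{\partial n(y)}
      \;-\; i\,\eta\, G_h(x,y) \right\} \varphi(y)\, \mathrm{d}s(y),
    \qquad x \in D.

Taking the Dirichlet trace on Gamma then yields a second-kind integral equation for the density phi in L^2(Gamma), and it is the inverse of the resulting boundary integral operator that the bounds above control, uniformly in kappa when eta is chosen proportional to kappa.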

Relevance:

30.00%

Publisher:

Abstract:

One of the most influential and popular data mining methods is the k-Means algorithm for cluster analysis. Techniques for improving the efficiency of k-Means have been explored in two main directions. First, the amount of computation can be significantly reduced by adopting geometrical constraints and an efficient data structure, notably a multidimensional binary search tree (KD-Tree); these techniques reduce the number of distance computations the algorithm performs at each iteration. The second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient k-Means variants in parallel computing environments. In this work, we provide a parallel formulation of the KD-Tree based k-Means algorithm for distributed memory systems and address its load balancing issue. Three solutions have been developed and tested: two approaches based on a static partitioning of the data set, and a third solution incorporating a dynamic load balancing policy.
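
A sequential sketch of one way a KD-Tree cuts the distance work in k-Means is given below; here the tree is built over the centroids and queried by all points, which is simpler than (and not identical to) the pruning-based filtering variants the paper parallelises, and the distributed-memory and load balancing machinery is not shown.

    # KD-tree-accelerated nearest-centroid assignment in k-Means.
    import numpy as np
    from scipy.spatial import cKDTree

    def kmeans_kdtree(X, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        C = X[rng.choice(len(X), size=k, replace=False)]  # initial centroids
        for _ in range(iters):
            _, labels = cKDTree(C).query(X)    # nearest-centroid search
            for j in range(k):                 # recompute centroids
                members = X[labels == j]
                if len(members):
                    C[j] = members.mean(axis=0)
        return C, labels

    X = np.random.default_rng(3).normal(size=(10000, 4))
    C, labels = kmeans_kdtree(X, k=8)
    print("cluster sizes:", np.bincount(labels, minlength=8))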

Relevance:

30.00%

Publisher:

Abstract:

A new heuristic for the Steiner Minimal Tree problem is presented here. The method described is based on the detection of particular sets of nodes in networks, the "Hot Spot" sets, which are used to obtain better approximations of the optimal solutions. An algorithm is also proposed which is capable of improving the solutions obtained by classical heuristics by means of a stirring process of the nodes in solution trees. Classical heuristics and an enumerative method are used as comparison terms in the experimental analysis, which demonstrates the quality of the heuristic discussed in this paper.
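
For the kind of classical heuristic used as a comparison term, a usage sketch with networkx is shown below; its steiner_tree routine implements a metric-closure (KMB-style) approximation, while the "Hot Spot" heuristic of the paper is not publicly packaged and is therefore not shown. The graph and terminal set are hypothetical.

    # Classical Steiner-tree approximation via networkx (illustrative graph).
    import networkx as nx
    from networkx.algorithms.approximation import steiner_tree

    G = nx.Graph()
    edges = [("a", "b", 2), ("b", "c", 3), ("a", "d", 4),
             ("d", "c", 1), ("b", "d", 2), ("c", "e", 2)]
    G.add_weighted_edges_from(edges)

    terminals = ["a", "c", "e"]            # nodes that must be connected
    T = steiner_tree(G, terminals, weight="weight")
    print(sorted(T.edges()), "total weight:", T.size(weight="weight"))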