992 resultados para calculation models


Relevância:

30.00%

Publicador:

Resumo:

In this article, a tool for simulating the channel impulse response for indoor visible light communications using 3D computer-aided design (CAD) models is presented. The simulation tool is based on a previous Monte Carlo ray-tracing algorithm for indoor infrared channel estimation, extended to include wavelength response evaluation. The 3D scene, or simulation environment, can be defined using any CAD software in which the user specifies, in addition to the scene geometry, the reflection characteristics of the surface materials and the structures of the emitters and receivers involved in the simulation. In addition, two optimizations are proposed to improve computational efficiency. The first consists of dividing the scene into cubic regions of equal size, which reduces computation time by approximately 50% compared to simulating the undivided 3D scene. The second involves parallelizing the simulation algorithm, which provides a computational speed-up proportional to the number of processors used.
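
The cubic-region optimization lends itself to a compact illustration. Below is a minimal sketch in Python of a uniform spatial grid for ray tracing, assuming axis-aligned cubic cells and a simple ray-marching traversal; the class and function names are illustrative and not taken from the tool described above.

import math
from collections import defaultdict

class UniformGrid:
    """Divide an axis-aligned scene box into equal cubic cells (illustrative sketch)."""

    def __init__(self, lo, cell_size):
        self.lo, self.cell = lo, cell_size
        self.cells = defaultdict(list)          # cell index -> surface ids

    def index(self, p):
        return tuple(int((p[i] - self.lo[i]) // self.cell) for i in range(3))

    def insert(self, surface_id, bb_min, bb_max):
        """Register a surface in every cell overlapped by its bounding box."""
        i0, i1 = self.index(bb_min), self.index(bb_max)
        for ix in range(i0[0], i1[0] + 1):
            for iy in range(i0[1], i1[1] + 1):
                for iz in range(i0[2], i1[2] + 1):
                    self.cells[(ix, iy, iz)].append(surface_id)

    def candidates(self, origin, direction, t_max, step=None):
        """Surfaces in cells visited by a ray (direction assumed normalized).
        Simple fixed-step marching, not an exact DDA traversal."""
        step = step or self.cell / 2.0
        seen, found = set(), []
        t = 0.0
        while t <= t_max:
            p = [origin[i] + t * direction[i] for i in range(3)]
            idx = self.index(p)
            if idx not in seen:
                seen.add(idx)
                found.extend(self.cells.get(idx, ()))
            t += step
        return found

Only the surfaces returned by candidates() require exact intersection tests, which is the kind of saving behind the roughly 50% improvement reported above.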

Relevância:

30.00%

Publicador:

Resumo:

Civil buildings are not specifically designed to support blast loads, but these potential scenarios must be taken into account because of their catastrophic effects on people and structures. A practical way to account for explosions acting on reinforced concrete structures is therefore necessary. With this objective we propose a methodology to evaluate blast loads on large concrete buildings, using the LS-DYNA code with Lagrangian finite elements and explicit time integration. The methodology has three steps. First, individual structural elements of the building, such as columns and slabs, are studied using continuum 3D element models subjected to blast loads. In these models reinforced concrete is represented with high precision, using advanced material models such as the CSCM_CONCRETE model and segregated rebars constrained within the continuum mesh. However, this approach cannot be used for large structures because of its excessive computational cost. Second, models based on structural elements are developed, using shell and beam elements. In these models concrete is represented using the CONCRETE_EC2 model and segregated rebars with an offset formulation, calibrated against the continuum element models from step one to reproduce the same structural response: displacement, velocity, acceleration, damage and erosion. Third, the structural-element models are used to build large models of complete buildings, which are used to study the global response of buildings subjected to blast loads and progressive collapse. This article describes the techniques needed to properly calibrate the structural-element models, using shell and beam elements, so that they provide results of sufficient accuracy at moderate computational cost.
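
The calibration in step two can be pictured as a small optimization loop. The Python sketch below assumes the reduced model is wrapped in a callable run_shell_beam_model (a hypothetical stand-in for an LS-DYNA run) that returns a nodal displacement history to be matched against the continuum reference; it is an illustration of the idea, not the authors' procedure.

import numpy as np
from scipy.optimize import minimize

def calibrate(reference_disp, run_shell_beam_model, x0):
    """Fit reduced-model parameters so that its displacement history
    matches the continuum reference (least-squares, illustrative only)."""
    def mismatch(params):
        disp = run_shell_beam_model(params)   # hypothetical solver wrapper
        return np.sum((disp - reference_disp) ** 2)
    result = minimize(mismatch, x0, method="Nelder-Mead")
    return result.x

In practice the same matching would be repeated for velocity, acceleration, damage and erosion, as listed above.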

Relevância:

30.00%

Publicador:

Resumo:

Studies of earthquakes over the last 50 years and the examination of dynamic soil behavior reveal that soil behavior is highly nonlinear and hysteretic even at small strains. The nonlinear behavior of soils during a seismic event plays a predominant role in current site response analysis. One-dimensional seismic ground response analyses are often performed using equivalent-linear procedures, which require few, generally well-known parameters. Nonlinear analyses have the potential to simulate soil behavior more accurately, but their implementation in practice has been limited by poorly documented and unclear parameter selection, as well as by inadequate documentation of the benefits of nonlinear modeling relative to equivalent-linear modeling. In soil analysis, soil behavior is approximated as a Kelvin-Voigt solid with an elastic shear modulus and viscous damping. In both linear and nonlinear analysis, more complex geometries and more complex rheological models are being considered: the former by considering richer parametrizations of the linearized behavior, and the latter by using multi-mode spring-dashpot elements with eventual fractional damping. The use of fractional calculus is motivated in large part by the fact that fewer parameters are required to achieve an accurate approximation of experimental data. Starting from the Kelvin-Voigt model, viscoelasticity is revisited, from its most standard formulation to more advanced descriptions involving frequency-dependent damping (or viscosity), analyzing the effects of using fractional derivatives to represent such viscous contributions. We prove that this choice results in richer models that can accommodate different constraints related to the dissipated power, response amplitude and phase angle. Moreover, the use of fractional derivatives allows many dashpots to be accommodated in parallel within a generalized Kelvin-Voigt analog, increasing the modeling flexibility for describing experimental findings. These rich models obviously involve many parameters: those associated with the material behavior and those related to the fractional derivatives. The parametric analysis of these models requires efficient numerical techniques capable of simulating complex behaviors. The Proper Generalized Decomposition (PGD) is the perfect candidate for producing such parametric solutions. The parametric solution for the soil deposit can be computed off-line for all parameters of the model; once such a parametric solution is available, the problem can be solved in real time because no new calculation is needed: the solver only needs to particularize on-line the parametric solution computed off-line, which significantly alleviates the solution procedure. Within the PGD framework, material parameters and the different derivation orders can be introduced as extra-coordinates in the solution procedure. Fractional calculus and the new model reduction method called Proper Generalized Decomposition have been applied in this thesis to both the linear analysis and the nonlinear analysis of soil response using an equivalent-linear method.
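
For reference, a standard form of the fractional Kelvin-Voigt constitutive law discussed above, with the dashpot generalized through a fractional derivative of order $\alpha$ (this is the textbook form, not notation taken from the thesis):

\[ \sigma(t) = G\,\varepsilon(t) + \eta\,D^{\alpha}\varepsilon(t), \qquad 0 < \alpha \le 1, \]

where $G$ is the elastic shear modulus, $\eta$ a viscosity-like coefficient and $D^{\alpha}$ the fractional derivative; $\alpha = 1$ recovers the classical Kelvin-Voigt dashpot. In the frequency domain the complex modulus becomes $G^{*}(\omega) = G + \eta\,(i\omega)^{\alpha}$, which makes the frequency-dependent damping explicit and shows why a single fractional element can fit data that would otherwise require several classical dashpots in parallel.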

Relevância:

30.00%

Publicador:

Resumo:

Phase equilibrium data regression is an unavoidable task in obtaining appropriate parameter values for any model to be used in separation equipment design for chemical process simulation and optimization. The accuracy of this process depends on several factors, such as the quality of the experimental data, the selected model and the calculation algorithm. The present paper summarizes the results and conclusions of our research on the capabilities and limitations of existing GE models and on strategies that can be included in correlation algorithms to improve convergence and avoid inconsistencies. The NRTL model has been selected as a representative local composition model. New capabilities of this model, but also several relevant limitations, have been identified, and examples of the application of a modified NRTL equation are discussed. Furthermore, a regression algorithm has been developed that allows the advisable simultaneous regression of all the condensed-phase equilibrium regions present in ternary systems at constant T and P. It includes specific strategies designed to avoid some of the pitfalls frequently found in commercial regression tools for phase equilibrium calculations. Most of the proposed strategies are based on the geometrical interpretation of the lowest common tangent plane equilibrium criterion, which allows an unambiguous understanding of the behavior of the mixtures. The paper presents this work as a whole in order to show the effort that must still be devoted to overcoming the difficulties that remain in the phase equilibrium data regression problem.
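
For context, the standard NRTL excess Gibbs energy expression for a multicomponent mixture (the textbook form; the modified equation mentioned above is not reproduced here):

\[ \frac{G^{E}}{RT} = \sum_{i} x_{i}\, \frac{\sum_{j} \tau_{ji} G_{ji} x_{j}}{\sum_{k} G_{ki} x_{k}}, \qquad G_{ji} = \exp(-\alpha_{ji}\tau_{ji}), \]

where $x_i$ are mole fractions, $\tau_{ji}$ the binary interaction parameters and $\alpha_{ji}$ the non-randomness parameters; regression adjusts $\tau_{ji}$ (and possibly $\alpha_{ji}$) against the measured equilibrium data.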

Relevância:

30.00%

Publicador:

Resumo:

Wood is a natural and traditional building material, as popular today as ever, and it presents several advantages. Physically, wood is strong and stiff, but compared with other materials such as steel it is light and flexible. Wood can absorb sound very effectively and it is a relatively good heat insulator. However, dry wood burns quite easily and produces a great deal of heat energy. Its main disadvantage is its combustibility when exposed to fire: the point at which it catches fire lies between 200 and 400°C. After fire exposure, it is necessary to determine whether charred wooden structures are safe for future use. Design methods require computer modelling to predict the fire exposure and the capacity of structures to resist those actions. Large- or small-scale experimental tests are also necessary to calibrate and verify the numerical models. The thermal model is essential for wood structures exposed to fire, because it predicts the charring rate as a function of fire exposure. The charring rate of most structural wood elements can be obtained with simple calculations, but the situation is more complicated when the fire exposure is non-standard or when the wood elements are protected with other materials. In this work, the authors present different case studies using numerical models that will help professionals analyse wood elements, and the type of information needed to decide whether charred structures are adequate for further use. Different thermal models representing wooden cellular slabs, used in building construction for ceiling or flooring compartments, are analysed and subjected to different fire scenarios (with the standard fire curve exposure). The same numerical models, considering insulation material inside the wooden cellular slabs, are tested to compare and determine the fire resistance time and the charring rate.
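
The "simple calculations" referred to above can be illustrated with a Eurocode 5 style linear charring model, in which char depth grows at a constant design rate β. The Python sketch below uses that textbook relation; the β value in the comment is a common softwood figure, not one taken from the paper.

def char_depth(beta_mm_per_min, minutes):
    """One-dimensional char depth d_char = beta * t (linear charring model)."""
    return beta_mm_per_min * minutes

def residual_section(width_mm, height_mm, beta_mm_per_min, minutes, exposed_sides=3):
    """Residual cross-section of a beam charred on 3 sides
    (bottom plus both lateral faces) or on all 4 sides."""
    d = char_depth(beta_mm_per_min, minutes)
    if exposed_sides == 3:
        return width_mm - 2 * d, height_mm - d
    return width_mm - 2 * d, height_mm - 2 * d

# e.g. softwood at beta ~ 0.65 mm/min after 30 min of standard fire:
# char_depth(0.65, 30) -> 19.5 mm of char depth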

Relevância:

30.00%

Publicador:

Resumo:

Wood is a natural and traditional building material, as popular today as ever, and it presents several advantages. Physically, wood is strong and stiff, but compared with other materials such as steel it is light and flexible. Wood can absorb sound very effectively and it is a relatively good heat insulator. However, dry wood burns quite easily and produces a great deal of heat energy. Its main disadvantage is its combustibility when exposed to fire: the point at which it catches fire lies between 200 and 400°C. After fire exposure, it is necessary to determine whether charred wooden structures are safe for future use. Design methods require computer modelling to predict the fire exposure and the capacity of structures to resist those actions. Large- or small-scale experimental tests are also necessary to calibrate and verify the numerical models. The thermal model is essential for wood structures exposed to fire, because it predicts the charring rate as a function of fire exposure. The charring rate of most structural wood elements can be obtained with simple calculations, but the situation is more complicated when the fire exposure is non-standard or when the wood elements are protected with other materials.

Relevância:

30.00%

Publicador:

Resumo:

Mode of access: Internet.

Relevância:

30.00%

Publicador:

Resumo:

The precise evaluation of electromagnetic field (EMF) distributions inside biological samples is becoming an increasingly important design requirement for high field MRI systems. In evaluating the induced fields caused by magnetic field gradients and RF transmitter coils, a multilayered dielectric spherical head model is proposed to provide a better understanding of electromagnetic interactions than a traditional homogeneous head phantom. This paper presents Debye potential (DP) and dyadic Green's function (DGF)-based solutions for the EMFs inside a head-sized, stratified sphere with radial conductivity and permittivity profiles similar to those of a human head. The DP approach is formulated for the symmetric case in which the source is a circular loop carrying a time-harmonic current over a wide frequency range. The DGF method is developed for generic cases in which the source may be any kind of RF coil whose current distribution can be evaluated using the method of moments. The calculated EMFs can then be used to deduce MRI imaging parameters. The proposed methods, while not representing the full complexity of a head model, offer advantages in rapid prototyping, as the computation times are much lower than a full finite difference time domain calculation using a complex head model. Test examples demonstrate the capability of the proposed models/methods. It is anticipated that this model will be of particular value for high field MRI applications, especially the rapid evaluation of RF resonator (surface and volume coils) and high performance gradient set designs.
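
For context, the Debye potential construction in a source-free homogeneous layer takes the standard Mie-theory form (up to sign and normalization conventions; this is textbook notation, not the paper's). The fields derive from two scalar potentials satisfying the Helmholtz equation:

\[ (\nabla^{2} + k^{2})\,\pi_{e,m} = 0, \qquad \mathbf{E} = \nabla\times\nabla\times(\mathbf{r}\,\pi_{e}) - i\omega\mu\,\nabla\times(\mathbf{r}\,\pi_{m}). \]

In each spherical layer the potentials expand in spherical harmonics with spherical Bessel radial functions, and the continuity conditions at the layer interfaces fix the expansion coefficients, which is what makes the stratified-sphere solution semi-analytical and fast.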

Relevância:

30.00%

Publicador:

Resumo:

In most magnetic resonance imaging (MRI) systems, pulsed magnetic gradient fields induce eddy currents in the conducting structures of the superconducting magnet. The eddy currents induced in structures within the cryostat are particularly problematic as they are characterized by long time constants by virtue of the low resistivity of the conductors. This paper presents a three-dimensional (3-D) finite-difference time-domain (FDTD) scheme in cylindrical coordinates for eddy-current calculation in conductors. This model is intended to be part of a complete FDTD model of an MRI system including all RF and low-frequency field generating units and electrical models of the patient. The singularity apparent in the governing equations is removed by using a series expansion method and the conductor-air boundary condition is handled using a variant of the surface impedance concept. The numerical difficulty due to the asymmetry of Maxwell equations for low-frequency eddy-current problems is circumvented by taking advantage of the known penetration behavior of the eddy-current fields. A perfectly matched layer absorbing boundary condition in 3-D cylindrical coordinates is also incorporated. The numerical method has been verified against analytical solutions for simple cases. Finally, the algorithm is illustrated by modeling a pulsed field gradient coil system within an MRI magnet system. The results demonstrate that the proposed FDTD scheme can be used to calculate large-scale eddy-current problems in materials with high conductivity at low frequencies.
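
The core FDTD leapfrog update is easy to illustrate in one Cartesian dimension. The Python sketch below shows only that skeleton; the paper's cylindrical coordinates, axis-singularity treatment, surface impedance boundary and PML are all omitted.

import numpy as np

def fdtd_1d(n_cells=200, n_steps=500):
    """Leapfrog update of Ez/Hy on a staggered 1D grid (free space, normalized units)."""
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells - 1)
    c = 0.5                                   # Courant number, <= 1 for stability
    for t in range(n_steps):
        hy += c * (ez[1:] - ez[:-1])          # H update from the curl of E
        ez[1:-1] += c * (hy[1:] - hy[:-1])    # E update from the curl of H
        ez[n_cells // 2] += np.exp(-((t - 30) / 10) ** 2)   # soft Gaussian source
    return ez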

Relevância:

30.00%

Publicador:

Resumo:

In this paper, we evaluate the performance of the 1- and 5-site models of methane on the description of adsorption on graphite surfaces and in graphitic slit pores. These models have been known to perform well in the description of fluid-phase behavior and vapor-liquid equilibria. Their performance in adsorption is evaluated in this work for nonporous graphitized thermal carbon black, and simulation results are compared with the experimental data of Avgul and Kiselev (Chemistry and Physics of Carbon; Dekker: New York, 1970; Vol. 6, p 1). On this nonporous surface, it is found that these models perform as well on isotherms at various temperatures as they do on the experimental isosteric heat for adsorption on a graphite surface. They are then tested for their performance in predicting the adsorption isotherms in graphitic slit pores, in which we would like to explore the effect of confinement on the molecular packing. Pore widths of 10 and 20 angstroms are chosen in this investigation, and we also study the effects of temperature by choosing 90.7, 113, and 273 K. The first two are subcritical conditions, with 90.7 K being the triple point of methane and 113 K its boiling point. The last temperature is chosen to represent the supercritical condition, so that we can investigate the performance of these models at extremely high pressures. We have found that, for the slit pores investigated in this paper, although the two models yield comparable pore densities (provided the accessible pore width is used in the calculation of pore density), the number of particles predicted by the 1-site model is always greater than that predicted by the 5-site model, regardless of whether the temperature is subcritical or supercritical. This is due to the packing effect in the confined space: a methane molecule modeled as a single spherical particle in the 1-site model packs better than the fused five-sphere molecule of the 5-site model. Because the 5-site model better describes the liquid- and solid-phase behavior, we argue that the packing density in small pores is better described with the more detailed 5-site model, and care should be exercised when using the 1-site model to study adsorption in small pores.
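
A sketch of the ingredients such a comparison rests on, assuming the 1-site methane is a single Lennard-Jones sphere and the pore density is defined over the accessible width; the parameter values are common literature choices, not those used in the paper.

import numpy as np

# Common 1-site (united-atom) Lennard-Jones parameters for methane
SIGMA = 3.73      # angstrom
EPS_K = 148.0     # epsilon / k_B, in kelvin

def lj_potential(r, sigma=SIGMA, eps_k=EPS_K):
    """Lennard-Jones 12-6 pair potential, in kelvin."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps_k * (s6 ** 2 - s6)

def pore_density(n_particles, physical_width, area, dead_space=0.0):
    """Pore density over the accessible width: the physical (center-to-center)
    pore width minus the near-wall dead space excluded to fluid molecules.
    How dead_space is defined is a modelling convention (illustrative here)."""
    return n_particles / (area * (physical_width - dead_space))

Using the accessible rather than the physical width in pore_density is what makes the 1- and 5-site pore densities comparable, as noted above.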

Relevância:

30.00%

Publicador:

Resumo:

The paper is dedicated to the questions of modeling and grounding super-resolution measuring-calculating systems, in the context of the concept "device + PC = new possibilities". The authors have developed a new mathematical method for solving multi-criteria optimization problems. The method is based on the physico-mathematical formalism of reduction of fuzzy distorted measurements. It is shown that the decisive role is played by the mathematical properties of the physical models of the measured object, the surroundings, the measuring components of the measuring-calculating systems and their interaction, as well as by the developed mathematical method for processing and interpreting the measurements.

Relevância:

30.00%

Publicador:

Resumo:

The task of constructing smooth and stable decision rules in logical recognition models is considered. Logical regularities of classes are defined as conjunctions of one-place predicates that determine the membership of feature values in intervals of the real axis. The conjunctions are true on special non-extendable subsets of the reference objects of some class and are optimal. The standard approach to constructing linear decision rules from given sets of logical regularities consists of voting schemes, whose weighting coefficients are either chosen heuristically or obtained as solutions of a complex optimization task. Modifications of the linear decision rules are proposed that are based on searching for maximal estimates of the reference objects for their classes and that use approximations of the logical regularities by smooth sigmoid functions.
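
The sigmoid approximation admits a compact illustration. In the Python sketch below, a logical regularity "a_j <= x_j <= b_j for each selected feature j" is smoothed by replacing each interval predicate with a product of two sigmoids; the steepness parameter and names are illustrative, not taken from the paper.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hard_regularity(x, lo, hi):
    """Conjunction of interval predicates: 1 if every feature falls in its interval."""
    return float(np.all((x >= lo) & (x <= hi)))

def smooth_regularity(x, lo, hi, steepness=10.0):
    """Each predicate a_j <= x_j <= b_j is approximated by
    sigmoid(k*(x_j - a_j)) * sigmoid(k*(b_j - x_j)); the conjunction is their product."""
    terms = sigmoid(steepness * (x - lo)) * sigmoid(steepness * (hi - x))
    return float(np.prod(terms))

Because smooth_regularity is differentiable in x and in the interval bounds, the weighted vote over regularities can be tuned by gradient methods instead of heuristics.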

Relevância:

30.00%

Publicador:

Resumo:

Doychin Boyadzhiev, Galena Pelovska - This paper proposes an optimized algorithm that is faster than the previously described accelerated (modified STS) difference scheme for an age-structured population model with diffusion. While preserving the approximation of the modified STS algorithm, the computation time is reduced almost by half. This makes the optimized method preferable for problems with nonlinearity or higher dimension.
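
To fix ideas, a minimal explicit step for an age-structured population model with spatial diffusion, u_t + u_a = d*u_xx - mu*u, is sketched below in Python (upwind in age, central in space). This illustrates the class of scheme being accelerated, not the modified STS algorithm itself.

import numpy as np

def explicit_step(u, dt, da, dx, d, mu):
    """One forward-Euler step of u_t + u_a = d*u_xx - mu*u on an (age x space) grid."""
    age_flux = (u - np.roll(u, 1, axis=0)) / da        # upwind derivative in age
    diffusion = (np.roll(u, 1, axis=1) - 2 * u + np.roll(u, -1, axis=1)) / dx ** 2
    u_new = u + dt * (-age_flux + d * diffusion - mu * u)
    u_new[0, :] = 0.0      # age-zero boundary (births) would be imposed here
    return u_new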

Relevância:

30.00%

Publicador:

Resumo:

Human use of the oceans is increasingly in conflict with the conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as the military, fishing, transportation and offshore energy have historically been post hoc; i.e. the time and place of human activity is often already determined before environmental impacts are assessed. In this dissertation, I build robust species distribution models for two case study areas, the US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance respectively from scientific surveys. These models are then applied within novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species, weighted by OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify the months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
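
The site-scoring tradeoff can be reduced to a few lines. The Python sketch below assumes per-species bird density rasters, per-species sensitivity weights, and wind and grid-distance surfaces already on a common grid; all names and cost coefficients are illustrative, not the dissertation's.

import numpy as np

def site_tradeoff(density_by_species, sensitivity, wind_speed, grid_distance,
                  price_per_speed=1.0, cost_per_km=0.01):
    """Return (conservation_risk, profitability) arrays for tradeoff plotting."""
    # Conservation risk: sensitivity-weighted sum of species densities per site.
    risk = sum(w * d for w, d in zip(sensitivity, density_by_species))
    # Profitability: a simple stand-in combining wind resource and grid distance.
    profit = price_per_speed * wind_speed - cost_per_km * grid_distance
    return risk, profit

Plotting profit against risk per site yields the tradeoff view described above; the Pareto frontier of that scatter identifies sites no other site dominates.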

Routing ships to avoid whale strikes (chapter 5) can similarly be viewed as a tradeoff, but it is a different problem spatially. A cumulative cost surface is generated from the density surface maps and the conservation status of cetaceans, and then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance points to the study areas. Varying a multiplier on the cost surface enables calculation of multiple routes with different costs to cetacean conservation versus costs to the transportation industry, measured as distance. As in the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
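
The least-cost routing step is essentially a shortest path over a resistance grid. The Python sketch below uses plain Dijkstra on 4-connected cells; it illustrates the idea, not the dissertation's implementation.

import heapq
import numpy as np

def least_cost_route(resistance, start, end):
    """Dijkstra shortest path on a 2D grid; cell values are traversal costs."""
    rows, cols = resistance.shape
    dist = np.full((rows, cols), np.inf)
    prev = {}
    dist[start] = resistance[start]
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue                                  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + resistance[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], end                              # walk predecessors back to start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Building resistance = conservation_cost * m + 1.0 (a constant distance term)
# and sweeping the multiplier m traces out the conservation-versus-distance
# tradeoff curve described above.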

Essential inputs to these decision frameworks are the distributions of the species. The two preceding chapters comprise species distribution models for the two case study areas, U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., the necessary parameters, especially the distance and angle of observation, are less readily available across publicly mined datasets.

In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database, and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operator Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
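
The threshold selection step can be sketched directly with scikit-learn, reading the criterion above as maximizing Youden's J (sensitivity + specificity - 1), which is equivalent to minimizing the sum of the false positive and false negative rates:

import numpy as np
from sklearn.metrics import roc_curve

def optimal_threshold(y_true, y_prob):
    """Probability cutoff maximizing tpr - fpr (Youden's J) along the ROC curve."""
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    return thresholds[np.argmax(tpr - fpr)]

# presence_map = (predicted_probability >= optimal_threshold(obs, pred)).astype(int)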

For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven to be useful in cases where there are fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and increased oil spill and ocean noise issues associated with increases of container ship and oil tanker traffic in British Columbia’s continental shelf waters.
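
For reference, the conventional line-transect density estimator underlying CDS, in standard distance sampling notation (e.g. Buckland et al.), not reproduced from the dissertation:

\[ \hat{D} = \frac{n}{2wL\hat{p}}, \qquad \hat{N} = \hat{D}\,A, \]

where $n$ is the number of detections, $L$ the total transect length, $w$ the truncation half-width and $\hat{p}$ the estimated probability of detecting an animal within the surveyed strip; stratum abundance $\hat{N}$ follows by multiplying by the stratum area $A$. DSM replaces the single stratum-wide $\hat{D}$ with a spatial model of density fitted to environmental covariates.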

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite-derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance, which can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, allowing conservation interests, industry and stakeholders to game scenarios towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.