928 results for Modeling methods


Relevance: 60.00%

Abstract:

This letter presents an effective approach for selecting appropriate terrain modeling methods when forming a digital elevation model (DEM), balancing modeling accuracy against modeling speed. A terrain complexity index is defined to represent a terrain's complexity. A support vector machine (SVM) classifies terrain surfaces as either complex or moderate based on this index together with the terrain elevation range, and the classification result recommends a terrain modeling method for a given data set in accordance with its required modeling accuracy. Sample terrain data from the lunar surface are used to construct an experimental data set. The results show that the terrain complexity index properly reflects terrain complexity, and that the SVM classifier derived from both the terrain complexity index and the terrain elevation range is more effective and generic than one designed from either feature alone. The statistical results show that the average classification accuracy of the SVMs is about 84.3% ± 0.9% for the two terrain types (complex or moderate). For various ratios of complex to moderate terrain in a selected data set, the DEM modeling speed increases by up to 19.5% at a given DEM accuracy.
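
A minimal sketch of the classification step described in this abstract, assuming scikit-learn: an SVM is trained on two features, a terrain complexity index and the terrain elevation range, and its complex/moderate label drives the choice of modeling method. The training data and thresholds below are synthetic placeholders, not the lunar samples used in the letter.

```python
# Sketch: SVM classification of terrain tiles into "moderate" vs "complex"
# from two features, then method selection based on the predicted class.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic feature vectors: [complexity_index, elevation_range_m].
moderate = np.column_stack([rng.uniform(0.0, 0.4, 50), rng.uniform(0, 300, 50)])
complex_ = np.column_stack([rng.uniform(0.5, 1.0, 50), rng.uniform(200, 900, 50)])

X = np.vstack([moderate, complex_])
y = np.array([0] * 50 + [1] * 50)  # 0 = moderate, 1 = complex

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# The predicted class then recommends a DEM modeling method: a faster
# method for moderate terrain, a higher-accuracy one for complex terrain.
tile = np.array([[0.62, 450.0]])
method = "high-accuracy" if clf.predict(tile)[0] == 1 else "fast"
print(f"Recommended modeling method: {method}")
```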

Relevance: 60.00%

Abstract:

A data insertion method, in which a dispersion model is initialized from ash properties derived from a series of satellite observations, is used to model the 8 May 2010 Eyjafjallajökull volcanic ash cloud, which extended from Iceland to northern Spain. We also briefly discuss the application of this method to the April 2010 phase of the Eyjafjallajökull eruption and the May 2011 Grímsvötn eruption. An advantage of this method is that very little knowledge about the eruption itself is required, because some of the usual eruption source parameters are not used. The method may therefore be useful for remote volcanoes where good satellite observations of the erupted material are available but little is known about the properties of the actual eruption. It does, however, have a number of limitations related to the quality and availability of the observations. We demonstrate that, using certain configurations, the data insertion method is able to capture the structure of a thin filament of ash extending over northern Spain that is not fully captured by other modeling methods. It also verifies well against the satellite observations according to the quantitative object-based quality metric SAL (structure, amplitude, location) and the spatial coverage metric, the Figure of Merit in Space.
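
As a rough illustration of the spatial coverage metric named above, the Figure of Merit in Space can be computed as the area of overlap between observed and modeled ash divided by the area of their union. The gridded masks below are synthetic placeholders, not the Eyjafjallajökull fields.

```python
# Sketch: Figure of Merit in Space (FMS) on boolean ash masks over one grid.
import numpy as np

rng = np.random.default_rng(7)
observed = rng.random((100, 100)) > 0.7   # satellite-detected ash cells
modeled = rng.random((100, 100)) > 0.7    # dispersion-model ash cells

intersection = np.logical_and(observed, modeled).sum()
union = np.logical_or(observed, modeled).sum()
fms = intersection / union  # 1.0 = perfect spatial agreement, 0.0 = none

print(f"Figure of Merit in Space: {fms:.3f}")
```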

Relevance: 60.00%

Abstract:

Modeling ERP software means capturing the information necessary to support enterprise management. This modeling process descends through different abstraction layers, from enterprise modeling down to code generation; ERP is thus the kind of system on which enterprise engineering undoubtedly has, or should have, a strong influence. In the case of Free/Open Source ERP, the lack of proper modeling methods and tools can jeopardize the advantage brought by source code availability. The aim of this paper is therefore to present a development process proposal for the Open Source ERP5 system. The proposed development process covers different abstraction levels, taking into account well-established standards and common practices as well as platform issues. Its main goal is to provide an adaptable meta-process to ERP5 adopters.

Relevance: 60.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 60.00%

Abstract:

Graduate Program in Electrical Engineering - FEIS

Relevance: 60.00%

Abstract:

Malaria is a serious global public health problem, causing socioeconomic losses and contributing to the underdevelopment of affected countries. In this context, it is necessary to study the relationship between the electronic properties and the antioxidant capacity of quinoline derivatives in antimalarial activity, which will provide a basis for proposing effective prototypes for treating the disease. In this dissertation, molecular modeling techniques were used to study the relationship between structure and antioxidant activity, correlated with antimalarial activity, in order to select substituent groups and electronic and conformational parameters that optimize the pharmacological activity and reduce the toxicity of the derivatives. Analysis of the HOMO and ionization potential (IP) values indicated that the imino-quinoline tautomer is probably a better antioxidant than the amino-quinoline tautomer. It was also observed that the tautomeric equilibrium is shifted toward the amino-quinoline structure in the gas phase and, with the PCM method, in water and chloroform, with energy barriers of 10.78 kcal/mol, 21.65 kcal/mol, and 22.04 kcal/mol, respectively. Among the quinoline analogs, electron-donating groups stood out in reducing the ionization potential, such as the amine group at position 8 substituted by an alkylamine group. In the derivatives combining 4- and 8-amino-quinoline, the presence of a second nitrogen in the quinoline group was found to decrease the antioxidant potential, except at position 5, which stood out most in reducing the ionization potential and therefore probably has a high antioxidant activity.
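
A minimal sketch of the ionization potential screening described above, assuming Koopmans' theorem (IP ≈ -E_HOMO) as the link between HOMO energy and antioxidant capacity; the HOMO values below are hypothetical placeholders, not results from the dissertation.

```python
# Sketch: rank tautomers by ionization potential via Koopmans' theorem.
# A lower IP means the molecule donates an electron more readily, which is
# the criterion used here to read off antioxidant capacity.

HARTREE_TO_KCAL = 627.509  # 1 hartree in kcal/mol

# Hypothetical HOMO energies (hartree) for the two tautomers.
homo_energies = {
    "amino-quinoline": -0.210,
    "imino-quinoline": -0.195,
}

for tautomer, e_homo in homo_energies.items():
    ip_kcal = -e_homo * HARTREE_TO_KCAL  # Koopmans: IP ~ -E(HOMO)
    print(f"{tautomer}: IP ~ {ip_kcal:.1f} kcal/mol")

# The tautomer with the smaller IP is predicted to be the better antioxidant.
best = min(homo_energies, key=lambda t: -homo_energies[t])
print(f"Predicted better antioxidant: {best}")
```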

Relevance: 60.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 60.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 60.00%

Abstract:

Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: Multivariate Regression, Neural Networks, and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a "Simple Committee" technique that averaged predictions from a set of 10 pre-selected input spaces chosen using the training data, and a "Minimum Variance Committee" technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. The latter technique equalized the performance of the three modeling methods. The successively increasing improvements obtained from using a single best transformed input space (the "Best Combination" technique), the Simple Committee technique, and the Minimum Variance Committee technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or in the calibration of the underlying GT-Power model.
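
A minimal sketch of the two committee techniques, assuming each modeling method produces one smoke prediction per transformed input space. The prediction array is synthetic, and the minimum variance selection rule is one plausible reading of "chosen on the basis of disagreement between the three modeling methods".

```python
# Sketch: Simple Committee vs. Minimum Variance Committee over input spaces.
import numpy as np

rng = np.random.default_rng(0)
n_spaces, n_models = 10, 3  # 10 pre-selected input spaces, 3 modeling methods
predictions = rng.normal(loc=1.0, scale=0.1, size=(n_spaces, n_models))

# "Simple Committee": average the predictions from all pre-selected spaces.
simple_committee = predictions.mean()

# "Minimum Variance Committee": for each prediction, pick the input space
# where the three modeling methods disagree the least (lowest variance),
# then average that space's three predictions.
disagreement = predictions.var(axis=1)
best_space = int(np.argmin(disagreement))
min_variance_committee = predictions[best_space].mean()

print(f"Simple Committee prediction:           {simple_committee:.3f}")
print(f"Minimum Variance Committee prediction: {min_variance_committee:.3f}"
      f" (input space {best_space})")
```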

Relevance: 60.00%

Abstract:

A range of societal issues has been caused by fossil fuel consumption in the transportation sector in the United States (U.S.), including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuels produced from lignocellulosic biomass such as wood, forest residues, and agricultural residues have the potential to replace a substantial portion of total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize overall cost. For this purpose, an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling. As a precursor to the simulation and optimization modeling, the GIS-based methodology was used to preselect potential locations for biofuel production from forest biomass. Candidate locations were selected using a set of evaluation criteria, including county boundaries, the railroad transportation network, the state/federal road transportation network, water body (rivers, lakes, etc.) dispersion, city and village dispersion, population census data, biomass production, and no co-location with co-fired power plants. The simulation and optimization models were built around key supply activities: biomass harvesting/forwarding, transportation, and storage. On-site storage was included to cover the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited. Both models were evaluated using multiple performance indicators, including cost (delivered feedstock cost plus inventory holding cost), energy consumption, and GHG emissions; the impacts of energy consumption and GHG emissions were expressed in monetary terms to remain consistent with cost. Compared with the optimization model, the simulation model gives a more dynamic view of a 20-year operation by considering the impacts of building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated, and the inventory level was tracked year-round. Through the exchange of information across procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to locate multiple biofuel facilities simultaneously, with the size of each potential facility bounded between 30 and 50 MGY. It is a static, Mathematical Programming Language (MPL)-based application that allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
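
A minimal facility location sketch in the spirit of the optimization model described above, written with the PuLP library rather than the MPL application the study used. Sites, costs, and demand are illustrative placeholders; only the 30-50 MGY capacity bounds follow the abstract.

```python
# Sketch: choose which candidate sites (from a GIS screening) to open and
# how much biomass-derived fuel to route to each, minimizing total cost.
import pulp

sites = ["A", "B", "C"]          # candidate locations from the GIS screening
supply_regions = ["R1", "R2"]    # biomass supply regions
demand = 60                      # total annual biofuel demand (MGY)

fixed_cost = {"A": 100.0, "B": 120.0, "C": 90.0}   # $M to open a facility
transport_cost = {                                  # $M per MGY shipped
    ("R1", "A"): 1.0, ("R1", "B"): 2.0, ("R1", "C"): 3.0,
    ("R2", "A"): 2.5, ("R2", "B"): 1.2, ("R2", "C"): 2.0,
}

prob = pulp.LpProblem("biofuel_facility_location", pulp.LpMinimize)
open_site = pulp.LpVariable.dicts("open", sites, cat="Binary")
flow = pulp.LpVariable.dicts("flow", transport_cost, lowBound=0)

# Objective: fixed facility costs plus feedstock transportation costs.
prob += (pulp.lpSum(fixed_cost[s] * open_site[s] for s in sites)
         + pulp.lpSum(transport_cost[k] * flow[k] for k in transport_cost))

# Meet total annual demand.
prob += pulp.lpSum(flow.values()) >= demand

# Capacity bounds: an open facility produces between 30 and 50 MGY.
for s in sites:
    inflow = pulp.lpSum(flow[(r, s)] for r in supply_regions)
    prob += inflow <= 50 * open_site[s]
    prob += inflow >= 30 * open_site[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in sites:
    print(s, "open" if open_site[s].value() == 1 else "closed")
```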

Relevance: 60.00%

Abstract:

The aim of this paper is to provide a review of the general processes related to plasma sources, their transport, energization, and losses in planetary magnetospheres. We provide background information as well as the most up-to-date knowledge from comparative studies of planetary magnetospheres, with a focus on the plasma supply to each region of the magnetosphere. The review also includes the basic equations and modeling methods commonly used to simulate the plasma sources of planetary magnetospheres. Source processes are described in Sect. 1, and the transport and energization processes that supply those source plasmas to the various regions of the magnetosphere are described in Sect. 2. Loss processes are also important for understanding the plasma population in the magnetosphere, and Sect. 3 is dedicated to them. In Sect. 4, we briefly summarize the basic equations and modeling methods, with a focus on plasma supply processes for planetary magnetospheres.
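
As a generic illustration of the kind of balance equation such source/transport/loss reviews build on (not a formula quoted from this paper), the density of a plasma species evolves as:

```latex
% Continuity equation for a plasma species s: the local density change is set
% by transport (divergence of the flux n_s v_s) plus net sources S_s
% (ionization, charge exchange, sputtering, etc.) minus losses L_s
% (recombination, precipitation, escape).
\frac{\partial n_s}{\partial t} + \nabla \cdot \left( n_s \mathbf{v}_s \right) = S_s - L_s
```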

Relevance: 60.00%

Abstract:

This study is aimed at determining the spatial distribution, physical properties, and groundwater conditions of the Vashon advance outwash (Qva) in the Mountlake Terrace, WA area. The Qva is correlative with the Esperance Sand, as defined at its type section; however, local variations in the Qva are not well characterized (Mullineaux, 1965). While the Qva is a dense glacial unit with low compressibility and high frictional shear strength (Gurtowski and Boirum, 1989), the strength of this unit can be reduced when it becomes saturated (Tubbs, 1974). This can lead to caving or flowing in excavations and, on a larger scale, to slope failures and mass-wasting where the unit is intersected by steep slopes. By studying the Qva, we can better predict how it will behave under certain conditions, which will benefit geologists, hydrogeologists, engineers, and environmental scientists during site assessments and early phases of project planning. In this study, I use data from 27 geotechnical borings from previous field investigations and C-Tech Corporation's EnterVol software to create three-dimensional models of the subsurface geology in the study area. These models made it possible to visualize the spatial distribution of the Qva in relation to other geologic units. I also conducted a comparative study between data from the borings and generalized published data on the spatial distribution, relative density, soil classification, grain-size distribution, moisture content, groundwater conditions, and aquifer properties of the Qva. I found that the elevation of the top of the Qva ranges from 247 to 477 ft, and that the Qva is thickest where the modern topography is high and thinnest where the topography is low. The thickness of the Qva ranges from absent to 242 ft. Along the northern, east-west trending transect, the Qva thins to the east as it rises above a ridge composed of pre-Vashon glacial deposits. Along the southern, east-west trending transect, the Qva pinches out against a ridge composed of pre-Vashon interglacial deposits. Two plausible explanations for this ridge are paleotopography and active faulting associated with the Southern Whidbey Fault Zone; further investigations should be done using geophysical methods and the modeling methods described in this study to determine its nature. The relative density of the Qva in the study area ranges from loose to very dense, with the loose end of the spectrum probably reflecting heave in saturated sands. I found subtle correlations between density and depth. Volumetric analysis of the soil groups listed in the boring logs indicates that the Qva in the study area is composed of approximately 9.5% gravel, 89.3% sand, and 1.2% silt and clay. The natural moisture content ranges from 3.0 to 35.4% in select samples from the Qva and appears to increase with depth and fines content. The water table in the study area ranges in elevation from 231.9 to 458 ft, based on observations and measurements recorded in the boring logs. Results from rising-head and falling-head slug tests at a single well in the study area indicate that the geometric mean of hydraulic conductivity is 15.93 ft/d (5.62 × 10⁻³ cm/s), the storativity is 3.28 × 10⁻³, and the estimated transmissivity is 738.58 ft²/d in the vicinity of this observation well. At this location, there was 1.73 ft of seasonal variation in groundwater elevation between August 2014 and March 2015.
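
A minimal sketch of the aquifer property arithmetic reported above: the geometric mean of hydraulic conductivity from repeated slug tests, the ft/d to cm/s conversion, and transmissivity as T = K·b. The individual K values and the saturated thickness are assumptions for illustration, not data from the study.

```python
# Sketch: aggregate slug-test results into K, convert units, estimate T.
import math

k_estimates_ft_per_day = [14.2, 17.5, 16.1, 16.0]  # hypothetical slug-test K values

# The geometric mean is standard for hydraulic conductivity, which tends
# to be log-normally distributed.
k_geom = math.prod(k_estimates_ft_per_day) ** (1 / len(k_estimates_ft_per_day))

# Convert ft/d to cm/s: 1 ft = 30.48 cm, 1 d = 86400 s.
k_cm_per_s = k_geom * 30.48 / 86400

b_ft = 46.4  # assumed saturated thickness (ft), chosen for illustration only
transmissivity = k_geom * b_ft  # ft^2/d

print(f"K (geometric mean): {k_geom:.2f} ft/d = {k_cm_per_s:.2e} cm/s")
print(f"T = K*b:            {transmissivity:.1f} ft^2/d")
```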

Relevance: 60.00%

Abstract:

Current physiologically based pharmacokinetic (PBPK) models are inductive. We present an additional, different approach that is based on the synthetic rather than the inductive approach to modeling and simulation. It relies on object-oriented programming: a model of the referent system in its experimental context is synthesized by assembling objects that represent components such as molecules, cells, aspects of tissue architecture, catheters, etc. The single-pass perfused rat liver has been well described in evaluating hepatic drug pharmacokinetics (PK) and is the system on which we focus. In silico experiments begin with administration of objects representing actual compounds; data are collected in a manner analogous to that in the referent PK experiments. The synthetic modeling method allows for recognition and representation of discrete-event and discrete-time processes, as well as heterogeneity in organization, function, and spatial effects. An application is developed for sucrose and antipyrine, administered separately and together. PBPK modeling has made extensive progress in characterizing abstracted PK properties, but this has also been its limitation; now other important questions and possible extensions emerge. How are these PK properties and the observed behaviors generated? The inherent heuristic limitations of traditional models have hindered getting meaningful, detailed answers to such questions. Synthetic models of the type described here are specifically intended to help answer them. Analogous to wet-lab experimental models, they retain their applicability even when broken apart into sub-components. Having and applying this new class of models along with traditional PK modeling methods is expected to increase the productivity of pharmaceutical research at all levels that make use of modeling and simulation.
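
A minimal sketch of the synthetic, object-oriented style described above, assuming a toy discrete-time liver: the experiment is assembled from component objects, and compound objects are "administered" and tracked through a single pass. All class names, parameters, and transfer rules are illustrative, not the authors' implementation.

```python
# Sketch: a perfused-liver experiment assembled from component objects.
import random

class Compound:
    def __init__(self, name):
        self.name = name

class Sinusoid:
    """One flow path through the lobule; may extract compound into cells."""
    def __init__(self, extraction_prob):
        self.extraction_prob = extraction_prob

    def transit(self, dose):
        # Each unit of compound is independently extracted or passed through.
        return sum(1 for _ in range(dose)
                   if random.random() > self.extraction_prob)

class PerfusedLiver:
    def __init__(self, n_sinusoids, extraction_prob):
        self.sinusoids = [Sinusoid(extraction_prob) for _ in range(n_sinusoids)]

    def single_pass(self, compound, dose):
        # The dose is split across sinusoids, as perfusate is in the referent
        # system; outflow is collected like samples in the wet-lab experiment.
        per_path = dose // len(self.sinusoids)
        out = sum(s.transit(per_path) for s in self.sinusoids)
        print(f"{compound.name}: {dose} units in, {out} units out")
        return out

random.seed(1)
liver = PerfusedLiver(n_sinusoids=10, extraction_prob=0.3)
liver.single_pass(Compound("antipyrine"), dose=1000)
liver.single_pass(Compound("sucrose"), dose=1000)
```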

Relevance: 60.00%

Abstract:

The binding between antigenic peptides (epitopes) and the MHC molecule is a key step in the cellular immune response. Accurate in silico prediction of epitope-MHC binding affinity can greatly expedite epitope screening by reducing costs and experimental effort. Recently, we demonstrated the appealing performance of SVRMHC, an SVR-based quantitative modeling method for peptide-MHC interactions, when applied to three mouse class I MHC molecules. We have since greatly extended the construction of SVRMHC models and have established such models for more than 40 class I and class II MHC molecules. Here we present the SVRMHC web server for predicting peptide-MHC binding affinities using these models. Benchmarked percentile scores are provided for all predictions. The larger number of SVRMHC models available allowed for an updated evaluation of the performance of the SVRMHC method compared with other well-known linear modeling methods. SVRMHC is an accurate and easy-to-use prediction server for epitope-MHC binding with significant coverage of MHC molecules; we believe it will prove a valuable resource for T cell epitope researchers.
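
A minimal sketch of the SVR-based modeling step underlying SVRMHC, assuming scikit-learn: peptides are one-hot encoded and an epsilon-SVR regresses binding affinity on the encodings. The toy peptides, affinity values, and kernel settings are illustrative; the server's actual features and models may differ.

```python
# Sketch: SVR regression of peptide-MHC binding affinity from 9-mer peptides.
import numpy as np
from sklearn.svm import SVR

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(peptide):
    """Encode a 9-mer peptide as a flat 9x20 one-hot vector."""
    vec = np.zeros(len(peptide) * len(AMINO_ACIDS))
    for i, aa in enumerate(peptide):
        vec[i * len(AMINO_ACIDS) + AMINO_ACIDS.index(aa)] = 1.0
    return vec

# Toy training set: 9-mer peptides with hypothetical log-affinity values.
peptides = ["SIINFEKLA", "GILGFVFTL", "LLFGYPVYV", "KVAELVHFL"]
log_ic50 = [2.1, 1.4, 1.8, 3.0]

X = np.array([one_hot(p) for p in peptides])
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, log_ic50)

query = "NLVPMVATV"
print(f"Predicted log(IC50) for {query}: {model.predict([one_hot(query)])[0]:.2f}")
```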