982 results for Modeling problems
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, and their popularity shows no sign of diminishing; over the last few years, however, they have gone from mere entertainment to an interesting research area, interesting in two ways, in fact. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used in experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior, they can be used as benchmark problems for refining and testing solving algorithms and approaches. Moreover, thanks to their rich inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this end we define the Generalized Sudoku Problem (GSP), where regions can be rectangular, problems can be of any order, and the existence of a solution is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. To study the empirical hardness of GSP, we define a series of instance generators that differ in the balancing level they guarantee between the constraints of the problem, by finely controlling how the holes are distributed over the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables (variables that take the same value in all solutions of an instance) and the hardness of GSP.
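A minimal sketch can make the CSP formulation concrete. The backtracking solver below (an illustration only, not the solvers studied in the thesis) enforces the three all-different constraint families of a GSP with rectangular m × n blocks; the hole marker, function names, and the small example instance are choices of this sketch:

```python
# Minimal CSP-style backtracking solver for a Generalized Sudoku with
# rectangular m-by-n blocks (grid side s = m * n); 0 marks a hole.
# An illustration of the constraint structure only, not the solvers
# studied in the thesis.

def consistent(grid, m, n, r, c, v):
    s = m * n
    # all-different on the row and the column
    if v in grid[r] or any(grid[i][c] == v for i in range(s)):
        return False
    # all-different on the m x n block containing cell (r, c)
    r0, c0 = (r // m) * m, (c // n) * n
    return all(grid[i][j] != v
               for i in range(r0, r0 + m)
               for j in range(c0, c0 + n))

def solve_gsp(grid, m, n):
    """Fill holes in place; return True iff a solution exists."""
    s = m * n
    for r in range(s):
        for c in range(s):
            if grid[r][c] == 0:
                for v in range(1, s + 1):
                    if consistent(grid, m, n, r, c, v):
                        grid[r][c] = v
                        if solve_gsp(grid, m, n):
                            return True
                        grid[r][c] = 0
                return False      # no value fits this hole: backtrack
    return True                   # no holes left: a complete solution

# A 4x4 instance with 2x2 blocks and four givens:
puzzle = [[1, 0, 0, 0],
          [0, 0, 3, 0],
          [0, 4, 0, 0],
          [0, 0, 0, 2]]
print(solve_gsp(puzzle, 2, 2), puzzle)
```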
Abstract:
The Feller process is a one-dimensional diffusion process with linear drift and a state-dependent diffusion coefficient that vanishes at the origin. The process is positive, and it is this property, along with its linear character, that has made the Feller process a convenient candidate for modeling a number of phenomena ranging from single-neuron firing to the volatility of financial assets. While general properties of the process have long been well known, less known are properties related to level crossing, such as the first-passage and escape problems. In this work we thoroughly address these questions.
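As an illustration of the level-crossing questions addressed, the sketch below estimates a mean first-passage time for the Feller (square-root) process dX = (a − bX) dt + σ√X dW by Monte Carlo, using a full-truncation Euler-Maruyama scheme; all parameter values and the threshold are assumptions of this sketch, not results from the paper:

```python
import numpy as np

# Monte Carlo estimate of the mean first-passage time of the Feller
# (square-root) process  dX = (a - b*X) dt + sigma*sqrt(X) dW  to a
# threshold L, using a full-truncation Euler-Maruyama scheme.
# All parameter values are illustrative, not taken from the paper.

rng = np.random.default_rng(0)
a, b, sigma = 1.0, 0.5, 0.4       # drift and noise parameters (assumed)
x0, L = 0.5, 2.0                  # start below the threshold
dt, n_steps, n_paths = 1e-2, 20000, 2000

x = np.full(n_paths, x0)
hit = np.full(n_paths, np.nan)    # first-passage times (NaN = not yet hit)
for k in range(n_steps):
    alive = np.isnan(hit)
    if not alive.any():
        break
    xp = np.maximum(x[alive], 0.0)          # truncation keeps sqrt defined
    x[alive] = xp + (a - b * xp) * dt \
             + sigma * np.sqrt(xp * dt) * rng.standard_normal(xp.size)
    crossed = alive.copy()
    crossed[alive] = x[alive] >= L
    hit[crossed] = (k + 1) * dt

done = ~np.isnan(hit)
print(f"{done.sum()}/{n_paths} paths reached L={L}; "
      f"mean first-passage time ~ {hit[done].mean():.2f}")
```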
Abstract:
The transport of macromolecules, such as low-density lipoprotein (LDL), and their accumulation in the layers of the arterial wall play a critical role in the initiation and development of atherosclerosis. Atherosclerosis is a disease of large arteries (e.g., the aorta and the coronary, carotid, and other proximal arteries) that involves a distinctive accumulation of LDL and other lipid-bearing materials in the arterial wall. Over time, plaque hardens and narrows the arteries, and the flow of oxygen-rich blood to organs and other parts of the body is reduced. This can lead to serious problems, including heart attack, stroke, or even death. It has been shown that the accumulation of macromolecules in the arterial wall depends not only on the ease with which materials enter the wall, but also on the hindrance to the passage of materials out of the wall posed by underlying layers. Attention has therefore been drawn to the fact that the wall structure of large arteries differs from that of other, disease-resistant vessels. Atherosclerosis tends to be localized in regions of curvature and branching in arteries, where the fluid shear stress (shear rate) and other fluid mechanical characteristics deviate from their normal spatial and temporal distribution patterns in straight vessels. On the other hand, the smooth muscle cells (SMCs) residing in the media layer of the arterial wall respond to mechanical stimuli, such as shear stress. Shear stress may affect SMC proliferation and migration from the media layer to the intima, which occurs in atherosclerosis and intimal hyperplasia. The study of the flow of blood and other body fluids, and of heat transport through the arterial wall, is one of the advanced applications of porous media in recent years. The arterial wall may be modeled on both the macroscopic scale (as a continuous porous medium) and the microscopic scale (as a heterogeneous porous medium). In the present study, the governing equations of mass, heat, and momentum transport have been solved for different species and for the interstitial fluid within the arterial wall by means of computational fluid dynamics (CFD). The simulation models are based on the finite element (FE) and finite volume (FV) methods. The wall structure has been modeled by treating the wall layers as porous media with different properties. In order to study heat transport through human tissues, the simulations have been carried out for a non-homogeneous porous media model, in which the tissue is composed of blood vessels, cells, and an interstitium, the latter consisting of interstitial fluid and extracellular fibers. Numerical simulations are performed in a two-dimensional (2D) model to assess the effect of the shape and configuration of the discrete phase, e.g., the interstitium of biological tissues, on the convective and conductive features of heat transfer. In addition, the governing equations of momentum and mass transport have been solved in a heterogeneous porous media model of the media layer, which has a major role in the transport and accumulation of solutes across the arterial wall. The transport of adenosine 5′-triphosphate (ATP) across the media layer is simulated as a benchmark to observe how SMCs affect species mass transport. The transport of interstitial fluid has also been simulated while the deformation of the media layer (due to high blood pressure) and of its constituents, such as SMCs, is included in the model.
In this context, the effect of pressure variation on the shear stress induced over SMCs by the interstitial flow is investigated in both 2D and three-dimensional (3D) geometries of the media layer. The influence of hypertension (high pressure) on the transport of low-density lipoprotein (LDL) through deformable arterial wall layers, driven by the pressure-induced convective flow across the arterial wall, is also studied. The intima and media layers are assumed to be homogeneous porous media. The results of the present study reveal that the ATP concentration over the surface of SMCs and within the bulk of the media layer depends significantly on the distribution of cells. Moreover, the magnitude and distribution of shear stress over the SMC surface are affected by the transmural pressure and the deformation of the media layer of the aortic wall. This work shows that the second or even subsequent layers of SMCs may bear shear stresses of the same order of magnitude as the first layer does if the cells are arranged in an arbitrary manner. The study brings new insights into the simulation of the arterial wall, as simplifications made in previous studies have been avoided. The SMC configurations used here, with elliptic cross sections, closely resemble the physiological conditions of cells. Moreover, the deformation of SMCs under high transmural pressure, which accompanies the compaction of the media layer, has been studied for the first time. The results also demonstrate that the LDL concentration through the intima and media layers changes significantly as the wall layers are compressed by the transmural pressure. It was also noticed that the fraction of leaky junctions across the endothelial cells and the area fraction of fenestral pores over the internal elastic lamina dramatically affect the LDL distribution through the thoracic aorta wall. The simulation techniques introduced in this work may also trigger new ideas for simulating porous media in other biomedical, biomechanical, chemical, and environmental engineering applications.
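As a much-reduced illustration of the transport problem described above, the sketch below solves a steady one-dimensional convection-diffusion equation for a solute crossing a two-layer porous wall (intima and media) with a constant pressure-driven filtration velocity; the layer thicknesses, effective diffusivities, velocity, and boundary conditions are illustrative assumptions, not the values used in the study:

```python
import numpy as np

# Steady 1-D convection-diffusion of a solute (e.g., LDL) across a
# two-layer porous wall (intima + media) with a constant pressure-driven
# filtration velocity u. All values below are illustrative assumptions.

L1, L2 = 10e-6, 200e-6            # intima / media thickness [m]
D1, D2 = 5e-11, 5e-12             # effective diffusivities [m^2/s]
u = 2e-8                          # transmural filtration velocity [m/s]
N = 400                           # grid cells

L = L1 + L2
x = np.linspace(0.0, L, N + 1)
D = np.where(x < L1, D1, D2)      # piecewise-constant diffusivity
h = L / N

# Assemble u*dc/dx - d/dx(D*dc/dx) = 0 with central differences.
A = np.zeros((N + 1, N + 1)); rhs = np.zeros(N + 1)
A[0, 0] = 1.0; rhs[0] = 1.0       # lumen-side concentration (normalized)
A[N, N] = 1.0; rhs[N] = 0.0       # adventitia-side sink
for i in range(1, N):
    Dw = 0.5 * (D[i - 1] + D[i])  # west / east face diffusivities
    De = 0.5 * (D[i] + D[i + 1])
    A[i, i - 1] = -Dw / h**2 - u / (2 * h)
    A[i, i]     = (Dw + De) / h**2
    A[i, i + 1] = -De / h**2 + u / (2 * h)

c = np.linalg.solve(A, rhs)
print(f"concentration near the intima/media interface: {c[int(N * L1 / L)]:.3f}")
```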
Abstract:
The objective of this master's thesis is the investigation of a high-pressure pretreatment process for gold leaching. Gold ores and concentrates that cannot be easily treated by the leaching process are called "refractory". These ores or concentrates often have a high content of sulfur and arsenic, which renders the precious metal inaccessible to the leaching agents. Since refractory ores account for a considerable share of the gold manufacturing industry, the pressure oxidation (autoclave) method is considered one of the possible ways to overcome the related problems. Mathematical modeling is the main approach used in this thesis to investigate the high-pressure oxidation process. For this task, the available literature on the phenomenon, covering its chemistry, mass transfer and kinetics, reaction conditions, applied apparatus, and applications, was collected and studied. The modeling part comprises an investigation of pyrite oxidation kinetics in order to create a descriptive mathematical model. The following major steps were completed: creation of a process model using the available knowledge; estimation of the unknown parameters and determination of the goodness of fit; and a study of the reliability of the model and its parameters.
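The parameter-estimation step can be illustrated with a minimal sketch: fitting the rate constant of a shrinking-core pyrite-oxidation model to conversion-time data by nonlinear least squares. The model form (surface-reaction control) and the synthetic data are assumptions of this sketch, not the thesis model:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative parameter-estimation step: fit the rate constant k of a
# shrinking-core model for pyrite oxidation,
#     1 - (1 - X)**(1/3) = k * t   (surface-reaction control),
# to conversion-vs-time data. The data below are synthetic stand-ins
# for the experimental measurements used in the thesis.

def conversion(t, k):
    g = np.clip(k * t, 0.0, 1.0)          # model is valid until X = 1
    return 1.0 - (1.0 - g)**3

t_data = np.array([0., 10., 20., 30., 45., 60., 90.])      # time [min]
X_data = np.array([0., .28, .50, .66, .82, .91, .985])     # conversion

k_fit, k_cov = curve_fit(conversion, t_data, X_data, p0=[0.01])
resid = X_data - conversion(t_data, k_fit[0])
print(f"k = {k_fit[0]:.4f} 1/min (std {np.sqrt(k_cov[0, 0]):.4f}), "
      f"RMSE = {np.sqrt(np.mean(resid**2)):.3f}")
```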
Abstract:
Fireside deposits can be found in many types of utility and industrial furnaces. Deposits in furnaces are problematic because they can reduce heat transfer, block gas paths, and cause corrosion. To tackle these problems, it is vital to estimate the influence of deposits on heat transfer, to minimize deposit formation, and to optimize deposit removal. A good understanding of the mechanisms of fireside deposit formation is therefore beneficial. Numerical modeling is a powerful tool for investigating heat transfer in furnaces, and it can provide valuable information for understanding the mechanisms of deposit formation. In addition, a sub-model of deposit formation is generally an essential part of a comprehensive furnace model. This work investigates two specific processes of fireside deposit formation in two industrial furnaces. The first process is deposit formation on slagging walls, found in furnaces where molten deposits flow on the wall. A slagging wall model is developed to take into account the two-layer structure of the deposits. With the slagging wall model, the thickness and the surface temperature of the molten deposit layer can be calculated. The slagging wall model is used to predict the surface temperature of, and the heat transfer to, a specific section of a super-heater tube panel, with the boundary condition obtained from a Kraft recovery furnace model. The slagging wall model is also incorporated into the computational fluid dynamics (CFD)-based Kraft recovery furnace model and applied to the lower furnace walls. The implementation of the slagging wall model includes a grid simplification scheme. The wall surface temperature calculated with the slagging wall model is used as the heat transfer boundary condition. A simulation of a Kraft recovery furnace is performed and compared with two other cases and with measurements. In the two other cases, a uniform wall surface temperature and a wall surface temperature calculated with a char bed burning model are used as the heat transfer boundary conditions. In this particular furnace, the wall surface temperatures from the three cases are similar and fall within the range of the measurements. Nevertheless, the wall surface temperature profiles with the slagging wall model and the char bed burning model differ because the deposits are represented differently in the two models. In addition, the slagging wall model is shown to be computationally efficient. The second process is deposit formation due to thermophoresis of fine particles onto the heat transfer surface. This process is considered in the simulation of a heat recovery boiler of the flash smelting process. In order to determine whether the small dust particles stay on the wall, a criterion based on an analysis of the forces acting on a particle is applied. A time-dependent simulation of deposit formation in the heat recovery boiler is carried out, and the influence of the deposits on heat transfer is investigated. The locations prone to deposit formation in the heat recovery boiler are also identified. Modeling the two processes in the two industrial furnaces enhances the overall understanding of the processes. The sub-models developed in this work can be applied to other, similar deposit formation processes with carefully defined boundary conditions.
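A minimal sketch of the two-layer heat balance underlying such a model: for a given heat flux, the frozen slag layer grows until its hot face reaches the slag melting temperature, and the surface temperature then follows from conduction through the molten film. Unlike the thesis model, the molten-film thickness here is simply prescribed, and all property values are assumed:

```python
# Steady-state sketch of a two-layer (frozen + molten) slag deposit on a
# furnace wall. The frozen-layer thickness adjusts until its hot face
# reaches the slag melting temperature; the surface temperature follows
# from conduction across the molten film. All values are assumptions,
# not those of the furnace model in the thesis.

def slag_layers(q, T_wall, T_melt=1050.0, k_sol=1.0, k_liq=1.5, d_liq=2e-3):
    """Return frozen-layer thickness [m] and deposit surface temperature [K]."""
    d_sol = k_sol * (T_melt - T_wall) / q   # conduction: q = k * dT / d
    T_surf = T_melt + q * d_liq / k_liq     # temperature drop across the film
    return d_sol, T_surf

d_sol, T_surf = slag_layers(q=50e3, T_wall=580.0)
print(f"frozen layer: {d_sol * 1e3:.1f} mm, surface temperature: {T_surf:.0f} K")
```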
Abstract:
Biological systems exhibit rich and complex behavior through the orchestrated interplay of a large array of components. It is hypothesized that separable subsystems with some degree of functional autonomy exist; deciphering their independent behavior and functionality would greatly facilitate understanding the system as a whole. Discovering and analyzing such subsystems are hence pivotal problems in the quest to gain a quantitative understanding of complex biological systems. In this work, methods for the identification and analysis of such subsystems were developed using approaches from machine learning, physics, and graph theory. A novel methodology, based on a recent machine learning algorithm known as non-negative matrix factorization (NMF), was developed to discover such subsystems in a set of large-scale gene expression data. This set of subsystems was then used to predict functional relationships between genes, and the approach was shown to score significantly higher than conventional methods when benchmarked against existing databases. Moreover, a mathematical treatment was developed for simple network subsystems based only on their topology (independent of particular parameter values). Application to a problem of experimental interest demonstrated the need for extensions to the conventional model to fully explain the experimental data. Finally, the notion of a subsystem was evaluated from a topological perspective. A number of different protein networks were examined for their topological properties with respect to separability, seeking to find separable subsystems. These networks were shown to exhibit separability in a non-intuitive fashion, while the separable subsystems were of strong biological significance. It was demonstrated that the separability property found was not due to incomplete or biased data, but is likely to reflect biological structure.
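The subsystem-discovery step can be sketched with an off-the-shelf NMF, factoring a nonnegative gene-by-condition matrix into subsystem memberships and activities; the random matrix below merely stands in for the large-scale expression data analyzed in the work:

```python
import numpy as np
from sklearn.decomposition import NMF

# Sketch of the subsystem-discovery step: factor a nonnegative
# gene-by-condition expression matrix X into X ~ W @ H, where each of
# the k columns of W groups genes into a putative subsystem and H gives
# the subsystem activity per condition. Synthetic data stand in for the
# large-scale expression data used in the work.

rng = np.random.default_rng(0)
X = rng.random((500, 40))             # 500 genes x 40 conditions (synthetic)

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)            # gene memberships in 5 subsystems
H = model.components_                 # subsystem activities per condition

top = np.argsort(W[:, 0])[::-1][:10]  # genes most strongly in subsystem 0
print("top genes in subsystem 0:", top,
      f"(reconstruction error {model.reconstruction_err_:.2f})")
```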
The impact of deformation strain on the formation of banded clouds in idealized modeling experiments
Abstract:
Experiments are performed using an idealized version of an operational forecast model to determine the impact on banded frontal clouds of the strength of deformational forcing, low-level baroclinicity, and the model representation of convection. Line convection is initiated along the front, and slantwise bands extend from the top of the line-convection elements into the cold air. This banding is attributed primarily to M adjustment. The cross-frontal spreading of the cold pool generated by the line convection leads to further triggering of upright convection in the cold air that feeds into these slantwise bands. Secondary low-level bands form later in the simulations; these are attributed to the release of conditional symmetric instability. Enhanced deformation strain leads to an earlier onset of convection and more coherent line convection. A stronger cold pool is generated, but its speed is reduced relative to that seen in experiments with weaker deformation strain, because of inhibition by the strain field. Enhanced low-level baroclinicity leads to the generation of more inertial instability by the line convection (for a given capping height of convection), and consequently greater strength of the slantwise circulations formed by M adjustment. These conclusions are based on experiments without a convective-parametrization scheme. Experiments using the standard or a modified scheme for this model demonstrate known problems with the use of this scheme at the awkward 4 km grid length used in these simulations.
Abstract:
In this paper, Prony's method is applied to time-domain waveform data modelling in the presence of noise. The following three problems encountered in this work are studied: (1) determination of the order of the waveform; (2) determination of the number of multiple roots; (3) determination of the residues. Methods for solving these problems are given and simulated on a computer. Finally, an output pulse of a model PG-10N signal generator, and the distorted waveform obtained by transmitting this pulse through a piece of coaxial cable, are modelled, and satisfactory results are obtained. The effectiveness of Prony's method for waveform data modelling in the presence of noise is thus confirmed.
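A minimal implementation of the core fitting steps (model-order selection aside) might look as follows: least-squares linear prediction for the characteristic polynomial, root finding for the poles, and a Vandermonde least-squares solve for the residues. The test waveform is an illustrative damped two-mode signal, not the PG-10N pulse:

```python
import numpy as np

# Minimal Prony's method: model uniformly sampled data x[n] as a sum of
# p complex exponentials  sum_k c_k * z_k**n.  For noisy data the linear
# prediction step is solved in the least-squares sense.

def prony(x, p):
    N = len(x)
    # 1) linear prediction: x[n] = -a1*x[n-1] - ... - ap*x[n-p]
    A = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
    a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
    # 2) poles are the roots of the characteristic polynomial
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) residues from the Vandermonde system x[n] = sum_k c_k * z_k**n
    V = np.vander(z, N, increasing=True).T        # shape (N, p)
    c = np.linalg.lstsq(V, x, rcond=None)[0]
    return z, c

n = np.arange(200)
x = 1.5 * 0.98**n * np.cos(0.2 * n) + 0.8 * 0.95**n   # two damped modes
x += 0.01 * np.random.default_rng(0).standard_normal(n.size)
z, c = prony(x, p=3)          # order 3: one real mode + one complex pair
print("estimated pole magnitudes:", np.abs(z).round(3))
```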
Abstract:
In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained, comprising a considerably smaller number of parameters than those generated by the conventional PNN algorithm. Three benchmark examples are elaborated, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparisons with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
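The regressor-selection idea can be sketched as a greedy forward selection that, at each step, adds the candidate term giving the largest reduction in output error variance. For clarity this sketch refits by ordinary least squares rather than maintaining the explicit orthogonal decomposition of OLS, and the candidate polynomial terms are illustrative:

```python
import numpy as np

# Greedy forward selection of regressors in the spirit of OLS: at each
# step, add the candidate term that most reduces the residual error
# variance of the output. A plain least-squares refit replaces explicit
# orthogonalization here; candidate terms are illustrative PD-style
# polynomial terms of two inputs.

def forward_select(P, y, n_terms):
    """P: (N, M) candidate regressor matrix; returns chosen column indices."""
    chosen = []
    for _ in range(n_terms):
        best, best_err = None, np.inf
        for j in range(P.shape[1]):
            if j in chosen:
                continue
            Pj = P[:, chosen + [j]]
            theta = np.linalg.lstsq(Pj, y, rcond=None)[0]
            err = np.sum((y - Pj @ theta)**2)
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
    return chosen

rng = np.random.default_rng(1)
x1, x2 = rng.random(200), rng.random(200)
y = 2.0 * x1 * x2 + 0.5 * x2**2 + 0.05 * rng.standard_normal(200)
# candidate terms: 1, x1, x2, x1^2, x1*x2, x2^2
P = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])
print("selected term indices:", forward_select(P, y, n_terms=3))
```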
Abstract:
In 2007, the world reached the unprecedented milestone of half of its people living in cities, and that proportion is projected to be 60% in 2030. The combined effect of global climate change and rapid urban growth, accompanied by economic and industrial development, will likely make city residents more vulnerable to a number of urban environmental problems, including extreme weather and climate conditions, sea-level rise, poor public health and air quality, atmospheric transport of accidental or intentional releases of toxic material, and limited water resources. One fundamental aspect of predicting the future risks and defining mitigation strategies is to understand the weather and regional climate affected by cities. For this reason, dozens of researchers from many disciplines and nations attended the Urban Weather and Climate Workshop. Twenty-five students from Chinese universities and institutes also took part. The presentations by the workshop's participants span a wide range of topics, from the interaction between the urban climate and energy consumption in climate-change environments to the impact of urban areas on storms and local circulations, and from the impact of urbanization on the hydrological cycle to air quality and weather prediction.
Abstract:
Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from the one obtained using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
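A compact sketch of regularized downscaling in the spirit of 2DTR: recover a fine-grid field from coarse block means by minimizing a data-misfit term plus γ times a Laplacian smoothness penalty. The grid sizes, γ, the block-averaging operator, and the synthetic coarse field are assumptions of this sketch, not the paper's setup:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Downscaling via 2-D Tikhonov regularization: recover a fine-grid
# field x from coarse block means b by minimizing
#     ||A x - b||^2 + gamma * ||L x||^2,
# where A block-averages the fine grid and L is a discrete Laplacian
# enforcing smoothness. All sizes and values are illustrative.

nf, block = 32, 4                  # fine grid 32x32, coarse grid 8x8
nc = nf // block

# A: averages each (block x block) patch of the fine grid
rows, cols = [], []
for bi in range(nc):
    for bj in range(nc):
        for di in range(block):
            for dj in range(block):
                rows.append(bi * nc + bj)
                cols.append((bi * block + di) * nf + (bj * block + dj))
A = sp.csr_matrix((np.full(len(rows), 1.0 / block**2), (rows, cols)),
                  shape=(nc * nc, nf * nf))

# L: 2-D Laplacian on the fine grid (Kronecker construction)
D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nf, nf))
I = sp.identity(nf)
Lap = sp.kron(I, D) + sp.kron(D, I)

rng = np.random.default_rng(0)
b = rng.random(nc * nc)            # synthetic coarse NDVI-like field
gamma = 0.5                        # smoothness weight (Lagrange multiplier)

x = spsolve((A.T @ A + gamma * Lap.T @ Lap).tocsc(), A.T @ b)
print("fine field shape:", x.reshape(nf, nf).shape,
      "max block-mean misfit:", np.abs(A @ x - b).max().round(3))
```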
Abstract:
Artificial neural networks are dynamic systems consisting of highly interconnected and parallel nonlinear processing elements. Systems based on artificial neural networks have high computational rates due to the use of a massive number of these computational elements. Neural networks with feedback connections provide a computing model capable of solving a rich class of optimization problems. In this paper, a modified Hopfield network is developed for solving problems related to operations research. The internal parameters of the network are obtained using the valid-subspace technique. Simulated examples are presented as an illustration of the proposed approach.
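A minimal discrete Hopfield network illustrating energy-descent optimization is sketched below; it encodes a toy "exactly k active units" constraint in the weights and uses asynchronous threshold updates. The valid-subspace parameterization of the paper is not reproduced here, and the problem encoding is an assumption of this sketch:

```python
import numpy as np

# Minimal discrete Hopfield network performing energy descent on
#     E(v) = -0.5 * v.T @ W @ v - b @ v,   v_i in {0, 1},
# with asynchronous threshold updates (symmetric W with zero diagonal
# guarantees monotone descent). The weights encode a toy constraint
# "exactly k of n units active": the penalty (sum(v) - k)**2 expands to
# W_ij = -2 for i != j and b_i = 2k - 1.

rng = np.random.default_rng(0)
n, k = 4, 2
W = -2.0 * (np.ones((n, n)) - np.eye(n))
b = (2 * k - 1) * np.ones(n)

v = rng.integers(0, 2, n).astype(float)
for _ in range(50):                       # asynchronous update sweeps
    for i in rng.permutation(n):
        v[i] = 1.0 if W[i] @ v + b[i] > 0 else 0.0

print("state:", v, "active units:", int(v.sum()))
```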
Abstract:
The conventional Newton and fast decoupled power flow (FDPF) methods have been considered inadequate for obtaining the maximum loading point of power systems due to ill-conditioning problems at and near this critical point. It is well known that the P-theta and Q-V decoupling assumptions of the fast decoupled power flow formulation no longer hold in the vicinity of the critical point. Moreover, the Jacobian matrix of the Newton method becomes singular at this point. However, the maximum loading point can be efficiently computed through the parameterization techniques of continuation methods. In this paper it is shown that, by using either theta or V as a parameter, the new fast decoupled power flow versions (XB and BX) become adequate for the computation of the maximum loading point with only a few small modifications. The possible use of the reactive power injection at a selected PV bus (Q(PV)) as the continuation parameter (mu) for the computation of the maximum loading point is also shown. A trivial secant predictor, the modified zero-order polynomial, which uses the current solution and a fixed increment in the parameter (V, theta, or mu) as an estimate of the next solution, is used in the predictor step. These new versions are compared with each other in order to point out their features, as well as the influence of reactive power and transformer tap limits. The results obtained with the new approach for the IEEE test systems (14, 30, 57 and 118 buses) are presented and discussed in the companion paper. They show that the characteristics of the conventional method are enhanced and that the region of convergence around the singular solution is enlarged. In addition, it is shown that parameters can be switched during the tracing process in order to efficiently determine all the points of the PV curve with few iterations.
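The parameter-switching idea can be illustrated on a two-bus toy system: the load factor serves as the continuation parameter on the upper branch of the PV curve, and the voltage takes over near the maximum loading point, where the conventional Jacobian becomes singular. The line data, step sizes, and Newton tolerance are assumptions of this sketch, not the paper's FDPF formulation:

```python
import numpy as np

# Toy continuation power flow on a two-bus system (slack bus feeding a
# PQ load through a lossless line). Phase 1 uses the load factor lam as
# the continuation parameter; phase 2 switches to the voltage V to trace
# past the nose of the PV curve.

x_line, P0, Q0 = 0.1, 1.0, 0.5

def mismatches(delta, V, lam):
    f1 = (V / x_line) * np.sin(delta) - lam * P0         # active power
    f2 = (V * np.cos(delta) - V**2) / x_line - lam * Q0  # reactive power
    return np.array([f1, f2])

def newton(residual, z0, tol=1e-9, iters=30):
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        f = residual(z)
        if np.max(np.abs(f)) < tol:
            return z, True
        J = np.empty((2, 2))              # forward-difference Jacobian
        for j in range(2):
            dz = np.zeros(2); dz[j] = 1e-7
            J[:, j] = (residual(z + dz) - f) / 1e-7
        try:
            z = z - np.linalg.solve(J, f)
        except np.linalg.LinAlgError:
            return z, False               # singular near the nose
    return z, False

# Phase 1: lam is the parameter; solve for (delta, V) until Newton fails.
points, z, lam = [], np.array([0.1, 1.0]), 0.0
while True:
    z_try, ok = newton(lambda s: mismatches(s[0], s[1], lam + 0.1), z)
    if not ok:
        break
    lam += 0.1; z = z_try; points.append((lam, z[1]))

# Phase 2: switch the parameter to V; solve for (delta, lam) past the nose.
V, w = z[1], np.array([z[0], lam])
while V > 0.2:
    V -= 0.05
    w, ok = newton(lambda s: mismatches(s[0], V, s[1]), w)
    if ok:
        points.append((w[1], V))

print(f"traced {len(points)} PV-curve points; "
      f"maximum loading factor ~ {max(p[0] for p in points):.2f}")
```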
Abstract:
This paper describes an interactive environment built entirely upon public domain or free software, intended to be used as the preprocessor of a finite element package for the simulation of three-dimensional electromagnetic problems.