940 results for Non-linear anisotropic diffusion


Relevance:

100.00%

Publisher:

Abstract:

There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together some important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them being new. The fourth part handles spatially distributed networks and evolves a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is the possibility of an input (such as power, water, messages, goods, etc.), an output, or neither. Normally, the network equations describe the flows amongst nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the result required is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimise an objective function; the objective may be a cost function based on the inputs to be minimised, a loss function or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration. Stage one calculates the problem variables and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) occurs in stage two also.

A second solution procedure has also been embedded into the first one. This is called the total residue approach. It changes the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for distributed algorithms to solve these optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is faster and uses minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
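
The two-stage Lagrange multiplier iteration described above can be illustrated with a small sketch. The following is a minimal, generic version of such a scheme (a Newton solve for the flow variables in stage one, a multiplier update from the balance residual in stage two), not the paper's exact procedure; the toy three-arc network, the cost coefficients and the step size rho are illustrative assumptions.

```python
import numpy as np

# Toy network: 3 arcs, flow balance enforced at 2 independent nodes.
# Illustrative incidence matrix and demands (assumptions, not from the paper).
A = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, -1.0]])
b = np.array([1.0, 0.5])

# Separable nonlinear arc cost f(x) = sum(c_i x_i^2 + d_i x_i^4)
c = np.array([1.0, 2.0, 3.0])
d = np.array([0.1, 0.1, 0.1])

def grad_f(x):
    return 2 * c * x + 4 * d * x**3

def hess_f(x):
    return np.diag(2 * c + 12 * d * x**2)

x = np.zeros(3)
lam = np.zeros(2)
rho = 0.5          # multiplier step size (kept small enough for convergence)

for it in range(100):
    # Stage one: solve the necessary conditions grad_f(x) + A^T lam = 0 for x
    # by Newton's method; the Jacobian here is the Hessian of the cost.
    for _ in range(20):
        r = grad_f(x) + A.T @ lam
        x = x - np.linalg.solve(hess_f(x), r)
        if np.linalg.norm(r) < 1e-10:
            break
    # Stage two: update the multipliers from the flow-balance residual.
    residual = A @ x - b
    lam = lam + rho * residual
    if np.linalg.norm(residual) < 1e-8:
        break

print("arc flows:", x)
print("multipliers:", lam)
print("balance residual:", A @ x - b)
```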

Relevance:

100.00%

Publisher:

Abstract:

The amount and location of dead wood are of interest not only for habitat biodiversity but also for the storage of atmospheric carbon. The aim of this study was to develop an area-based model that uses airborne laser scanning data to locate dead-wood sites and to estimate the amount of dead wood. At the same time, the change in the model's explanatory power was examined as the size of the modelled grid cell was increased.

The study area was located in Sonkajärvi, eastern Finland, and consisted mainly of young, managed commercial forests. The study used low-pulse-density laser scanning data and strip-wise field measurements of dead wood. The data were split so that one quarter was used for modelling and the rest was reserved for testing the finished models. Both a parametric and a non-parametric modelling method were used to model dead wood. Logistic regression was used to predict the probability of dead-wood occurrence for grid cells of different sizes (0.04, 0.20, 0.32, 0.52 and 1.00 ha). The explanatory variables of the models were selected from among 80 laser-derived features and their transformations, in three stages. First, the variables were examined visually by plotting them against the amount of dead wood. The explanatory power of the variables judged most suitable in the first stage was then tested with single-variable models in the second stage. In the final multi-variable model, the criterion for the explanatory variables was statistical significance at the 5% risk level. The model created for the 0.20 ha cell size was re-parameterized for the other cell sizes. In addition to the parametric modelling with logistic regression, the data for the 0.04 and 1.0 ha cell sizes were classified with non-parametric CART (Classification and Regression Trees) modelling. The CART method was used to search the data for hard-to-detect non-linear dependencies between the laser features and the amount of dead wood. CART classification was carried out both for dead-wood presence and for dead-wood volume.

CART classification gave better results than logistic regression in classifying the cells by dead-wood presence. The classification produced with the logistic model improved as the cell size increased from 0.04 ha (kappa 0.19) up to 0.32 ha (kappa 0.38); at the 0.52 ha cell size the kappa value turned downwards (kappa 0.32) and declined further up to the one-hectare cell size (kappa 0.26). CART classification improved as the cell size grew, and its results were better than those of the logistic modelling at both the 0.04 ha (kappa 0.24) and the 1.0 ha (kappa 0.52) cell size. The relative RMSE of the cell-level dead-wood volumes determined with the CART models decreased with increasing cell size: at the 0.04 ha cell size the relative RMSE of the dead-wood amount over the whole data set was 197.1%, whereas at the one-hectare cell size the corresponding figure was 120.3%.

Based on the results of this study, the relationship between the field-measured amount of dead wood and the laser features used here is very weak at small cell sizes but strengthens somewhat as the cell size grows. However, as the cell size used in the modelling increases, detecting small concentrations of dead wood becomes more difficult. In this study, the dead-wood presence of a site could be mapped reasonably well with a large cell size, but mapping small-area sites was not successful with the methods used. Locating small-area sites by laser scanning calls for further research, in particular on the use of high-pulse-density laser data in dead-wood inventories.
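
As a rough illustration of the kind of modelling described above, the sketch below fits a logistic occurrence model and a CART classifier and compares them with the kappa statistic; the features, labels and train/test split are synthetic placeholders, not the actual ALS metrics or field data of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for cell-level ALS metrics (e.g. height percentiles,
# echo proportions) and a binary dead-wood presence label.
n = 400
X = rng.normal(size=(n, 3))
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2]
y = (logit + rng.normal(scale=1.0, size=n)) > 0

X_train, X_test = X[:100], X[100:]     # quarter for modelling, rest for testing
y_train, y_test = y[:100], y[100:]

# Parametric model: logistic regression for dead-wood occurrence probability.
logreg = LogisticRegression().fit(X_train, y_train)

# Non-parametric model: CART classification tree.
cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

for name, model in [("logistic", logreg), ("CART", cart)]:
    kappa = cohen_kappa_score(y_test, model.predict(X_test))
    print(f"{name}: kappa = {kappa:.2f}")
```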

Relevance:

100.00%

Publisher:

Abstract:

Methodologies are presented for minimization of risk in a river water quality management problem. A risk minimization model is developed to minimize the risk of low water quality along a river in the face of conflict among various stakeholders. The model consists of three parts: a water quality simulation model, a risk evaluation model with uncertainty analysis, and an optimization model. Sensitivity analysis, First Order Reliability Analysis (FORA) and Monte Carlo simulations are performed to evaluate the fuzzy risk of low water quality. Fuzzy multiobjective programming is used to formulate the multiobjective model. Probabilistic Global Search Lausanne (PGSL), a recently developed global search algorithm, is used for solving the resulting non-linear optimization problem. The algorithm is based on the assumption that better sets of points are more likely to be found in the neighborhood of good sets of points, therefore intensifying the search in regions that contain good solutions. Another model is developed for risk minimization, which deals only with the moments of the generated probability density functions of the water quality indicators. Suitable skewness values of the water quality indicators, which lead to low fuzzy risk, are identified. Results of the models are compared with the results of a deterministic fuzzy waste load allocation model (FWLAM) when the methodologies are applied to the case study of the Tunga-Bhadra river system in southern India, with a steady-state BOD-DO model. The fractional removal levels resulting from the risk minimization model are slightly higher, but result in a significant reduction in the risk of low water quality. (c) 2005 Elsevier Ltd. All rights reserved.
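
The fuzzy risk evaluation via Monte Carlo simulation mentioned above can be sketched roughly as follows; the dissolved-oxygen distribution, the membership function for "low water quality" and the desirable/permissible limits are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def low_quality_membership(do_mgL, desirable=6.0, permissible=4.0):
    """Fuzzy membership of 'low water quality' based on dissolved oxygen:
    0 above the desirable level, 1 below the permissible level, linear between."""
    return np.clip((desirable - do_mgL) / (desirable - permissible), 0.0, 1.0)

# Monte Carlo sampling of DO at a checkpoint; the distribution stands in for
# the output of the water quality simulation model under uncertain inputs.
do_samples = rng.normal(loc=5.5, scale=0.8, size=100_000)

memberships = low_quality_membership(do_samples)
fuzzy_risk = memberships.mean()              # expected membership = fuzzy risk
prob_violation = (do_samples < 4.0).mean()   # crisp probability, for comparison

print(f"fuzzy risk of low water quality: {fuzzy_risk:.3f}")
print(f"crisp probability DO < 4 mg/L:  {prob_violation:.3f}")
```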

Relevance:

100.00%

Publisher:

Abstract:

A method for total risk analysis of embankment dams under earthquake conditions is discussed and applied to selected embankment dams, i.e., Chang, Tapar, Rudramata, and Kaswati, located in the Kachchh region of Gujarat, India, to obtain the seismic hazard rating of each dam site and the risk rating of the structures. Based on the results of the total risk analysis of the dams, coupled non-linear dynamic numerical analyses of the dam sections are performed using the acceleration time history of the Bhuj (India) earthquake as well as five other major earthquakes recorded worldwide. The objective is to perform the numerical analysis of the dams over a range of amplitudes, frequency contents and durations of the input motions. The deformations calculated from the numerical analyses are also compared with other approaches available in the literature, viz. the Makdisi and Seed (1978) approach, Jansen's approach (1990), Swaisgood's method (1995), Bureau's method (1997), the Singh et al. approach (2007), and the Saygili and Rathje approach (2008), and the results are utilized to anticipate the stability of the dams in future earthquake scenarios. (C) 2010 Elsevier B.V. All rights reserved.
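
For context, the simplified procedures the computed deformations are compared against are largely Newmark-type sliding-block methods. The sketch below is a minimal rigid-block double integration assuming one-directional sliding and an illustrative yield acceleration and input motion; it is not the coupled non-linear dynamic analysis performed in the paper.

```python
import numpy as np

def newmark_displacement(acc_g, dt, yield_acc_g):
    """Permanent displacement (m) of a rigid block on a slope, assuming
    one-directional sliding whenever the driving acceleration exceeds
    the yield acceleration."""
    g = 9.81
    rel_vel = 0.0
    disp = 0.0
    for a in acc_g:
        if rel_vel > 0.0 or a > yield_acc_g:
            rel_acc = (a - yield_acc_g) * g
            rel_vel = max(rel_vel + rel_acc * dt, 0.0)  # sliding cannot reverse
            disp += rel_vel * dt
    return disp

# Illustrative synthetic input motion (a decaying sine), not a recorded history.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
acc_g = 0.35 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.1 * t)

print(f"permanent displacement: {newmark_displacement(acc_g, dt, 0.15):.3f} m")
```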

Relevance:

100.00%

Publisher:

Abstract:

Composites are finding increasing application in many advanced engineering fields, such as aerospace, marine engineering and high-tech sports equipment, due to their high specific strength and/or specific stiffness. The use of composite components in complex situations, such as airplane wing roots or locations of concentrated load transfer, is limited by the lack of a complete understanding of their behaviour in the region of joints. Joints are unavoidable in the design and manufacture of complex structures, and pin joints are one of the most commonly used methods of connection. In regions of high stress, such as airplane wing root joints, interference-fit pins are used to increase the fatigue life of the joint and thereby the reliability of the whole structure. The present contribution is a study of the behaviour of an interference-fit pin in a composite plate subjected to both pull and push types of load. The interference-fit pin exhibits partial contact/separation under these loads, and the contact region is a non-linear function of the load magnitude. This non-linear behaviour is studied by adopting an inverse technique, and some new results are presented in this paper.

Relevance:

100.00%

Publisher:

Abstract:

It has been suggested that materials with interesting and useful bulk non-linear optical properties might result from substituting vanadium, the lightest element in group V of the periodic table, for Nb or Ta atoms along with Li and three oxygens. It is with this motivation that we have been attempting to grow single crystals of LiNbO3 doped with various concentrations of V2O5. Unfortunately, the results obtained on the ceramic samples of this material have not been very encouraging, owing to their hygroscopic nature. However, our attempts to prepare both ceramic and single-crystalline samples of potassium lithium niobate (K3Li2Nb5O15; KLN) doped with V2O5 were successful. In this letter we report preliminary results of our studies on the effect of V2O5 doping on the structural as well as topographic features of both ceramic and single-crystalline samples of KLN.

Relevance:

100.00%

Publisher:

Abstract:

The concept of a short-range strong spin-two (f) field, mediated by massive f-mesons and interacting directly with hadrons, was introduced along with the infinite-range (g) field in the early seventies. In the present review of this growing area (often referred to as strong gravity) we give a general relativistic treatment in terms of Einstein-type (non-abelian gauge) field equations with a coupling constant G_f ≃ 10^38 G_N (G_N being the Newtonian constant) and a cosmological term λ_f f_μν (f_μν is the strong-gravity metric and λ_f ∼ 10^28 cm^-2 is related to the f-meson mass). The solutions of the field equations linearized over a de Sitter (uniformly curved) background are capable of having connections with the internal symmetries of hadrons and yielding mass formulae of SU(3) or SU(6) type. The hadrons emerge as de Sitter "microuniverses" intensely curved within (radius of curvature ∼ 10^-14 cm).

The study of spinor fields in the context of strong gravity has led to Heisenberg's non-linear spinor equation with a fundamental length ∼ 2 × 10^-14 cm. Furthermore, one finds a repulsive spin-spin interaction when two identical spin-1/2 particles are in a parallel configuration, and a connection between the weak interaction and strong gravity.

Various other consequences of strong gravity embrace black hole (solitonic) solutions representing hadronic bags with possible quark confinement, Regge-like relations between spins and masses, connections with monopoles and dyons, quantum geons and friedmons, hadronic temperature, prevention of gravitational singularities, a physical basis for Dirac's two-metric and large numbers hypotheses, and a projected unification with the other basic interactions through extended supergravity.
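
The Einstein-type field equations referred to above can be written schematically as below. This is just the generic form of an Einstein equation with a cosmological term applied to the f-field; the sign and coupling conventions are assumptions made for illustration rather than taken from the review itself.

```latex
R_{\mu\nu}(f) \;-\; \tfrac{1}{2}\, f_{\mu\nu}\, R(f) \;+\; \lambda_f\, f_{\mu\nu}
  \;=\; -\,8\pi G_f\, T^{(\mathrm{hadron})}_{\mu\nu},
\qquad G_f \simeq 10^{38}\, G_N .
```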

Relevance:

100.00%

Publisher:

Abstract:

Often the soil hydraulic parameters are obtained by the inversion of measured data (e.g. soil moisture, pressure head, cumulative infiltration). However, the inverse problem in the unsaturated zone is ill-posed for various reasons, and hence the parameters are non-unique. The presence of multiple soil layers brings additional complexity to the inverse modelling. Generalized likelihood uncertainty estimation (GLUE) is a useful approach for estimating the parameters and their uncertainty when dealing with soil moisture dynamics, which is a highly non-linear problem. Because the estimated parameters depend on the modelling scale, inverse modelling carried out on laboratory data and on field data may provide independent estimates. The objective of this paper is to compare the parameters and their uncertainty estimated through experiments in the laboratory and in the field, and to assess which of the soil hydraulic parameters are independent of the experiment. The first two layers at the field site are characterized as loamy sand and loam. For the laboratory experiment, mean soil moisture and pressure head at three depths are measured at half-hour intervals for a period of one week using the evaporation method, whereas for the field experiment soil moisture at three depths (60, 110, and 200 cm) is measured at 1-h intervals for 2 years. A one-dimensional soil moisture model based on the finite difference method was used. Calibration and validation each cover approximately one year. The model performance was found to be good, with root mean square error (RMSE) varying from 2 to 4 cm^3 cm^-3. It is found from the two experiments that the mean and uncertainty of the saturated soil moisture (θ_s) and the shape parameter (n) of the van Genuchten equation are similar for both soil types. Copyright (C) 2010 John Wiley & Sons, Ltd.
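
The GLUE procedure itself is simple to sketch: sample parameter sets from prior ranges, run the model, score each set with a likelihood measure, and keep only the "behavioural" sets to form parameter uncertainty bounds. The toy drying-curve model, the prior ranges and the likelihood threshold below are illustrative assumptions standing in for the actual one-dimensional soil moisture model of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_moisture_model(theta_s, n, t):
    """Toy stand-in for the 1-D soil moisture model: a drying curve whose
    shape depends on the two parameters of interest."""
    return theta_s * (1.0 + (0.3 * t) ** n) ** (-(1.0 - 1.0 / n))

t = np.linspace(0.0, 10.0, 50)
obs = toy_moisture_model(0.40, 1.6, t) + rng.normal(scale=0.01, size=t.size)

# GLUE: Monte Carlo sampling from uniform priors.
n_sets = 20_000
theta_s_samples = rng.uniform(0.30, 0.50, n_sets)
n_samples = rng.uniform(1.1, 2.5, n_sets)

# Likelihood measure: Nash-Sutcliffe efficiency of each simulation.
sims = toy_moisture_model(theta_s_samples[:, None], n_samples[:, None], t)
nse = 1.0 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()

# Keep behavioural sets (NSE above a subjective threshold) and weight by NSE.
behavioural = nse > 0.8
weights = nse[behavioural] / nse[behavioural].sum()

for name, samples in [("theta_s", theta_s_samples), ("n", n_samples)]:
    vals = samples[behavioural]
    order = np.argsort(vals)
    cdf = np.cumsum(weights[order])
    lo, hi = np.interp([0.05, 0.95], cdf, vals[order])
    print(f"{name}: 5-95% GLUE bounds = [{lo:.3f}, {hi:.3f}]")
```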

Relevance:

100.00%

Publisher:

Abstract:

Seepage through sand bed channels in the downward direction (suction) reduces the stability of the particles and initiates sand movement. The incipient motion condition of a sand bed channel with seepage cannot be obtained using the conventional approach. Metamodeling techniques, which employ a non-linear pattern analysis between input and output parameters and are based solely on experimental observations, can be used to model such phenomena. The traditional approach of finding non-dimensional parameters has not been used in the present work. Parameters which can influence incipient motion with seepage have been identified and non-dimensionalized. The non-dimensional stream power concept has been used to describe the process. Using these non-dimensional parameters, the present work describes a radial basis function (RBF) metamodel for prediction of the incipient motion condition affected by seepage. The coefficient of determination (R^2) of the model is 0.99; thus, the model predicts the phenomenon very well. With the help of the metamodel, design curves have been presented for designing sand bed channels affected by seepage. (C) 2010 Elsevier B.V. All rights reserved.
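
A radial basis function metamodel of this kind can be sketched in a few lines: fit RBF weights to the observed input-output pairs and then evaluate the surrogate at new points. The Gaussian kernel, the shape parameter and the synthetic data below are illustrative assumptions, not the non-dimensional parameters or measurements of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_kernel(X, centers, eps=1.0):
    """Matrix of Gaussian RBF values between points X and the centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-eps * d2)

# Synthetic training data: two non-dimensional inputs, one response.
X_train = rng.uniform(0.0, 1.0, size=(40, 2))
y_train = np.sin(3 * X_train[:, 0]) + 0.5 * X_train[:, 1] ** 2

# Fit the RBF metamodel: interpolation weights from the training points.
Phi = gaussian_kernel(X_train, X_train)
weights = np.linalg.solve(Phi + 1e-8 * np.eye(len(X_train)), y_train)

# Evaluate the surrogate on test points and report R^2.
X_test = rng.uniform(0.0, 1.0, size=(200, 2))
y_true = np.sin(3 * X_test[:, 0]) + 0.5 * X_test[:, 1] ** 2
y_pred = gaussian_kernel(X_test, X_train) @ weights

r2 = 1.0 - ((y_true - y_pred) ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()
print(f"R^2 on test points: {r2:.3f}")
```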

Relevance:

100.00%

Publisher:

Abstract:

This thesis studies the interest-rate policy of the ECB by estimating monetary policy rules using real-time data and central bank forecasts. The aim of the estimations is to characterize a decade of common monetary policy and to examine how different models perform at this task. The estimated rules include contemporaneous Taylor rules, forward-looking Taylor rules, non-linear rules and forecast-based rules. The non-linear models allow for the possibility of zone-like preferences and an asymmetric response to key variables. The models therefore encompass the most popular sub-group of simple models used for policy analysis as well as the more unusual non-linear approach. In addition to the empirical work, this thesis also contains a more general discussion of monetary policy rules, mostly from a New Keynesian perspective. This discussion includes an overview of some notable related studies, optimal policy, policy gradualism and several other related subjects. The regression estimations are performed with either least squares or the generalized method of moments, depending on the requirements of the estimation. The estimations use data from both the Euro Area Real-Time Database and the central bank forecasts published in ECB Monthly Bulletins; these sources represent some of the best data available for this kind of analysis. The main results of this thesis are that forward-looking behavior appears highly prevalent, but that standard forward-looking Taylor rules offer only ambivalent results with regard to inflation. Non-linear models are shown to work, but on the other hand do not have a strong rationale over a simpler linear formulation. However, the forecasts appear to be highly useful in characterizing policy and may offer the most accurate depiction of a predominantly forward-looking central bank. In particular, the inflation response appears much stronger, while the output response becomes highly forward-looking as well.
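
As a rough illustration of what estimating such a rule involves, the sketch below fits a simple forward-looking Taylor rule with interest-rate smoothing by ordinary least squares on synthetic data; the coefficients, the data and the single-equation OLS setup are illustrative assumptions (the thesis itself also uses GMM and real-time/forecast data).

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic quarterly data: expected inflation, output gap, and a policy rate
# generated from a smoothed Taylor rule (parameters invented for the demo).
T = 120
pi_exp = 2.0 + rng.normal(scale=0.5, size=T)      # expected inflation (%)
gap = rng.normal(scale=1.0, size=T)               # output gap (%)

rho_true, alpha_true, beta_true, gamma_true = 0.8, 0.5, 1.5, 0.5
i = np.zeros(T)
for t in range(1, T):
    target = alpha_true + beta_true * pi_exp[t] + gamma_true * gap[t]
    i[t] = rho_true * i[t - 1] + (1 - rho_true) * target + rng.normal(scale=0.1)

# OLS estimation of i_t = c + a*pi_exp_t + b*gap_t + rho*i_{t-1}.
X = np.column_stack([np.ones(T - 1), pi_exp[1:], gap[1:], i[:-1]])
coef, *_ = np.linalg.lstsq(X, i[1:], rcond=None)
c, a, b, rho = coef

# Recover the long-run responses implied by the smoothing parameter.
print(f"smoothing rho      = {rho:.2f}")
print(f"long-run inflation = {a / (1 - rho):.2f}")
print(f"long-run output    = {b / (1 - rho):.2f}")
```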