929 results for Function Model


Relevance:

30.00%

Publisher:

Abstract:

It is still an open question how equilibrium warming in response to increasing radiative forcing - the specific equilibrium climate sensitivity S - depends on background climate. We here present palaeodata-based evidence on the state dependency of S, by using CO2 proxy data together with a 3-D ice-sheet-model-based reconstruction of land ice albedo over the last 5 million years (Myr). We find that the land ice albedo forcing depends non-linearly on the background climate, while any non-linearity of CO2 radiative forcing depends on the CO2 data set used. This non-linearity has not, so far, been accounted for in similar approaches due to previously more simplistic approximations, in which land ice albedo radiative forcing was a linear function of sea level change. The latitudinal dependency of ice-sheet area changes is important for the non-linearity between land ice albedo and sea level. In our set-up, in which the radiative forcing of CO2 and of the land ice albedo (LI) is combined, we find a state dependence in the calculated specific equilibrium climate sensitivity, S[CO2,LI], for most of the Pleistocene (last 2.1 Myr). During Pleistocene intermediate glaciated climates and interglacial periods, S[CO2,LI] is on average ~ 45 % larger than during Pleistocene full glacial conditions. In the Pliocene part of our analysis (2.6-5 Myr BP) the CO2 data uncertainties prevent a well-supported calculation for S[CO2,LI], but our analysis suggests that during times without a large land ice area in the Northern Hemisphere (e.g. before 2.82 Myr BP), the specific equilibrium climate sensitivity, S[CO2,LI], was smaller than during interglacials of the Pleistocene. We thus find support for a previously proposed state change in the climate system with the widespread appearance of northern hemispheric ice sheets. This study points for the first time to a so far overlooked non-linearity in the land ice albedo radiative forcing, which is important for similar palaeodata-based approaches to calculate climate sensitivity. However, the implications of this study for a suggested warming under CO2 doubling are not yet entirely clear since the details of necessary corrections for other slow feedbacks are not fully known and the uncertainties that exist in the ice-sheet simulations and global temperature reconstructions are large.
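
As a rough illustration of the quantity being estimated, here is a minimal sketch in Python of a specific climate sensitivity calculation; the proxy values are invented placeholders, and the logarithmic CO2 forcing expression is the standard Myhre et al. (1998) approximation, not necessarily the forcing formulation used in the study:

```python
import numpy as np

# Hypothetical proxy-derived records (illustrative values only).
delta_T = np.array([-4.5, -2.0, 0.0])      # K, global temperature anomaly vs. pre-industrial
co2 = np.array([190.0, 240.0, 280.0])      # ppm, CO2 proxy values
delta_R_li = np.array([-3.2, -1.1, 0.0])   # W m^-2, land ice albedo radiative forcing

# Standard logarithmic approximation of CO2 radiative forcing (Myhre et al. 1998).
co2_ref = 280.0
delta_R_co2 = 5.35 * np.log(co2 / co2_ref)   # W m^-2

# Specific equilibrium climate sensitivity S[CO2,LI]: warming per unit of the
# combined CO2 + land ice albedo forcing (undefined where the forcing is ~0).
total_forcing = delta_R_co2 + delta_R_li
mask = np.abs(total_forcing) > 0.1
S_co2_li = delta_T[mask] / total_forcing[mask]   # K (W m^-2)^-1
print(S_co2_li)
```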

Relevance:

30.00%

Publisher:

Abstract:

A nested ice flow model was developed for eastern Dronning Maud Land to assist with the dating and interpretation of the EDML deep ice core. The model consists of a high-resolution higher-order ice dynamic flow model nested into a comprehensive 3-D thermomechanical model of the whole Antarctic ice sheet. As the drill site is in a flank position, the calculations specifically take into account the effects of horizontal advection, since deeper ice in the core originated from higher elevations further inland. First, the regional velocity field and ice sheet geometry are obtained from a forward experiment over the last 8 glacial cycles. The result is subsequently employed in a Lagrangian backtracing algorithm to provide particle paths back to their time and place of deposition. The procedure directly yields the depth-age distribution, surface conditions at particle origin, and a suite of relevant parameters such as initial annual layer thickness. This paper discusses the method and the main results of the experiment, including the ice core chronology, the non-climatic corrections needed to extract the climatic part of the signal, and the thinning function. The focus is on the upper 89% of the ice core (approx. 170 kyr), as the dating below that is increasingly less robust owing to the unknown value of the geothermal heat flux. It is found that the temperature biases resulting from variations of surface elevation are up to half of the magnitude of the climatic changes themselves.
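
A minimal sketch of a Lagrangian backtracing step of the kind described above, assuming a hypothetical helper `velocity(x, z, t)` that interpolates the forward-model velocity field; the scheme, sign conventions and placeholder values are illustrative, not the paper's implementation:

```python
def velocity(x, z, t):
    """Hypothetical interpolator returning (u, w) in m/yr from the forward-model
    output at horizontal position x, depth z (negative below the surface), time t."""
    return 1.0, -0.01  # placeholder values for illustration

def backtrace(x0, z0, t0, dt=10.0, t_min=-170e3):
    """Trace a particle backwards in time from (x0, z0) at time t0 until it
    reaches the surface (z >= 0, i.e. its deposition point) or t_min is hit."""
    x, z, t = x0, z0, t0
    path = [(t, x, z)]
    while t > t_min and z < 0.0:
        u, w = velocity(x, z, t)
        x -= u * dt          # reverse the flow: step upstream
        z -= w * dt          # reverse the (downward) vertical motion
        t -= dt
        path.append((t, x, z))
    return path              # last entry approximates the time and place of deposition

# Hypothetical usage: a particle currently 1500 m below the surface at the drill site.
path = backtrace(x0=0.0, z0=-1500.0, t0=0.0)
print(len(path), path[-1])
```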

Relevance:

30.00%

Publisher:

Abstract:

We analyze the effect of environmental uncertainties on optimal fishery management in a bio-economic fishery model. Unlike most of the literature on resource economics, but in line with ecological models, we allow the different biological processes of survival and recruitment to be affected differently by environmental uncertainties. We show that the overall effect of uncertainty on the optimal size of a fish stock is ambiguous, depending on the prudence of the value function. For the case of a risk-neutral fishery manager, the overall effect depends on the relative magnitude of two opposing effects, the 'convex-cost effect' and the 'gambling effect'. We apply the analysis to the Baltic cod and the North Sea herring fisheries, concluding that for risk-neutral agents the net effect of environmental uncertainties on the optimal size of these fish stocks is negative, albeit small in absolute value. Under risk aversion, the effect on optimal stock size is positive for sufficiently high coefficients of constant relative risk aversion.
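
The qualitative trade-off can be sketched with a toy one-period stock model under CRRA utility; all functional forms, parameters and shocks below are illustrative assumptions, not the Baltic cod or North Sea herring calibrations:

```python
import numpy as np

rng = np.random.default_rng(42)

def crra(x, eta):
    """Constant relative risk aversion utility; eta = 0 corresponds to risk neutrality."""
    return np.log(x) if eta == 1.0 else x ** (1.0 - eta) / (1.0 - eta)

def expected_payoff(escapement, eta, n_draws=200_000):
    """Expected utility of one harvest-plus-recruitment cycle for a given escapement
    (the stock left in the water). Survival and recruitment get separate shocks."""
    stock = 1.0                                    # normalised pre-harvest stock
    harvest = stock - escapement
    profit = harvest - 0.2 * harvest / escapement  # stock-dependent harvesting cost
    z_surv = rng.lognormal(0.0, 0.05, n_draws)     # environmental shock on survival
    z_recr = rng.lognormal(0.0, 0.30, n_draws)     # environmental shock on recruitment
    next_stock = z_surv * escapement + z_recr * 0.5 * escapement * (1.0 - escapement)
    return crra(profit + 0.95 * next_stock, eta).mean()

for eta in (0.0, 2.0):                             # risk-neutral vs. risk-averse manager
    best = max(np.arange(0.3, 0.8, 0.05), key=lambda s: expected_payoff(s, eta))
    print(f"eta={eta}: preferred escapement ~ {best:.2f}")
```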

Relevance:

30.00%

Publisher:

Abstract:

SIMBAA is a spatially explicit, individual-based simulation model. It was developed to analyse the response of populations of Antarctic benthic species, and their diversity, to iceberg scouring. This disturbance causes high local mortality, providing potential space for new colonisation. Traits can be attributed to model species, e.g. in terms of reproduction, dispersal, and life span. Physical disturbances can be designed in space and time, e.g. in terms of size, shape, and frequency. Environmental heterogeneity can be considered through cell-specific capacities to host a certain number of individuals. When grid cells become empty (after a disturbance event or due to natural mortality of an individual), a lottery decides which individual from which species, stored in a pool of candidates for that cell, will recruit in that cell. After a defined period the individuals become mature and their offspring are dispersed and stored in the pool of candidates. The biological parameters and disturbance regimes determine how long an individual lives. The temporal development of single populations of species, as well as Shannon diversity, is depicted graphically in the main window, and the primary values are listed. Example simulations can be loaded and saved as sgf-files. The results are also shown in an additional window as a dimensionless area of 50 x 50 cells containing single individuals depicted as circles; their colour indicates the assignment to the self-designed model species and their size represents their age. Dominant species per cell and disturbed areas can also be depicted. Output of simulation runs can be saved as images, which can be assembled into video clips with standard software (see the GIF examples, of which "Demo 1" represents the response of the Antarctic benthos to iceberg scouring and "Demo 2" represents a simulation of a deep-sea benthic habitat).
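
A minimal sketch of the lottery recruitment step described above; the data structures and names are assumptions for illustration, not SIMBAA's actual implementation:

```python
import random

# Each cell keeps a pool of candidate offspring, e.g. (species_id, traits) tuples,
# dispersed into it by mature individuals in previous time steps.
grid = {(x, y): {"occupant": None, "candidates": []}
        for x in range(50) for y in range(50)}

def disturb(cells):
    """An iceberg scour (or natural death) empties the affected cells."""
    for c in cells:
        grid[c]["occupant"] = None

def recruit():
    """Lottery: every empty cell draws one recruit uniformly from its candidate pool."""
    for cell in grid.values():
        if cell["occupant"] is None and cell["candidates"]:
            cell["occupant"] = random.choice(cell["candidates"])
            cell["candidates"].clear()
```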

Relevance:

30.00%

Publisher:

Abstract:

The Armington Assumption in the context of multi-regional CGE models is commonly interpreted as follows: the same commodities with different origins are imperfect substitutes for each other. In this paper, a static spatial CGE model that is compatible with this assumption and explicitly considers the transport sector and regional price differentials is formulated. Trade coefficients, which are derived endogenously from the optimization behaviors of firms and households, are shown to take the form of a potential function. To investigate how the elasticity of substitution affects equilibrium solutions, a simpler version of the model that incorporates three regions and two sectors (besides the transport sector) is introduced. The results indicate that: (1) if commodities produced in different regions are perfect substitutes, the regional economies will be either autarkic or completely symmetric; and (2) if they are imperfect substitutes, the impact of the elasticity on the price equilibrium system, as well as on the trade coefficients, will be nonlinear and sometimes very sensitive.
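
The paper's exact functional form is not reproduced here, but the "potential function" character of a trade coefficient can be illustrated with a share that decays in the delivered price; the formula and numbers below are assumptions for illustration only:

```python
import numpy as np

def trade_coefficients(mill_prices, transport_costs, sigma):
    """Share of region i's purchases sourced from each region j, decreasing in the
    delivered price p_j + tau_ij; sigma plays the role of the elasticity of
    substitution between origins (Armington assumption)."""
    delivered = mill_prices[np.newaxis, :] + transport_costs    # p_j + tau_ij
    weights = delivered ** (-sigma)                             # potential-style weights
    return weights / weights.sum(axis=1, keepdims=True)         # rows sum to 1

p = np.array([1.0, 1.2, 0.9])                 # mill prices in three regions
tau = np.array([[0.0, 0.3, 0.5],
                [0.3, 0.0, 0.4],
                [0.5, 0.4, 0.0]])             # symmetric transport costs
print(trade_coefficients(p, tau, sigma=4.0))  # larger sigma concentrates shares on cheap sources
```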

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a framework for an SCGE model that is compatible with the Armington assumption and explicitly considers transport activities. In the model, the trade coefficient takes the form of a potential function, and the equilibrium market price becomes similar to the price index of varietal goods in the context of new economic geography (NEG). The features of the model are investigated using a minimal setting that comprises two non-transport sectors and three regions. Because transport costs are given exogenously to facilitate the study of their impacts, commodity prices are also determined relative to them. The model can be described as a system of homogeneous equations, in which an output in one region can be fixed arbitrarily, much as a price is in a Walrasian equilibrium. The model closure is sensitive to formulation consistency: homogeneity of the system would be lost if an alternative form of the trade coefficients were used.
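
The homogeneity property mentioned above (one output can be fixed arbitrarily, like a numéraire price in a Walrasian equilibrium) can be illustrated with a small linear sketch; the coefficient matrix is an arbitrary example, not the model's actual equation system:

```python
import numpy as np

# A homogeneous system A @ x = 0 with rank deficiency 1: solutions are only
# determined up to scale, so one component can be normalized freely.
A = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])   # rows sum to zero, so the rank is 2

# Fix x[0] = 1 (the "numéraire" output) and solve for the remaining unknowns.
reduced = A[1:, 1:]                  # drop one (dependent) equation and one unknown
rhs = -A[1:, 0] * 1.0
x_rest = np.linalg.solve(reduced, rhs)
x = np.concatenate(([1.0], x_rest))
print(x, A @ x)                      # A @ x is ~0; any rescaling of x also solves the system
```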

Relevance:

30.00%

Publisher:

Abstract:

This paper shows that today's modelling of electrical noise as originating in noisy resistances is nonsensical, contradicting their nature as systems bearing electrical noise. We present a new model for electrical noise that encompasses the work of Johnson and Nyquist and also agrees with the quantum-mechanical description of noisy systems given by Callen and Welton, in which electrical energy fluctuates and is dissipated over time. Through the two currents that the admittance function links in the frequency domain with their common voltage, this new model shows the cause-effect connection that exists between fluctuation and dissipation of energy in the time domain. In spite of its radical departure from today's beliefs about electrical noise in resistors, this complex model for electrical noise is obtained from Nyquist's result using basic concepts of circuit theory and thermodynamics that also apply to capacitors and inductors.
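
For reference, the Nyquist result that the paper starts from can be evaluated numerically; the resistance, temperature and bandwidth below are arbitrary illustrative values:

```python
import numpy as np

k_B = 1.380649e-23        # Boltzmann constant, J/K

def thermal_noise_vrms(resistance_ohm, temperature_K, bandwidth_Hz):
    """Nyquist's result: one-sided voltage noise spectral density 4*k_B*T*R,
    integrated over a flat bandwidth to give an rms voltage."""
    return np.sqrt(4.0 * k_B * temperature_K * resistance_ohm * bandwidth_Hz)

# Example: a 1 kOhm resistor at room temperature over a 10 kHz bandwidth.
print(thermal_noise_vrms(1e3, 300.0, 1e4))   # about 0.4 microvolts rms
```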

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the problem of efficiently tracking 3D objects in sequences of images. We tackle the efficient 3D tracking problem by using direct image registration. This problem is posed as an iterative optimization procedure that minimizes a brightness error norm. We review the most popular iterative methods for image registration in the literature, turning our attention to those algorithms that use efficient optimization techniques. Two forms of efficient registration algorithms are investigated. The first type comprises the additive registration algorithms: these algorithms incrementally compute the motion parameters by linearly approximating the brightness error function. We centre our attention on Hager and Belhumeur's factorization-based algorithm for image registration. We propose a fundamental requirement that factorization-based algorithms must satisfy to guarantee good convergence, and introduce a systematic procedure that automatically computes the factorization. Finally, we also present two warp functions, satisfying the requirement, for registering rigid and nonrigid 3D targets. The second type comprises the compositional registration algorithms, where the brightness error function is written using function composition. We study the current approaches to compositional image alignment, and we emphasize the importance of the Inverse Compositional method, which is known to be the most efficient image registration algorithm. We introduce a new algorithm, Efficient Forward Compositional image registration: this algorithm avoids the need to invert the warping function and provides a new interpretation of the working mechanisms of inverse compositional alignment. Using this information, we propose two fundamental requirements that guarantee the convergence of compositional image registration methods. Finally, we support our claims with extensive experimental testing on synthetic and real-world data. We propose a distinction between image registration and tracking when using efficient algorithms. We show that, depending on whether the fundamental requirements hold, some efficient algorithms are eligible for image registration but not for tracking.
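
A minimal sketch of the additive, Gauss-Newton-style update underlying the first family of algorithms, restricted to a pure-translation warp for brevity; it illustrates the generic scheme only, not the factorization-based or compositional algorithms developed in the thesis (uses NumPy/SciPy):

```python
import numpy as np
from scipy.ndimage import map_coordinates, sobel

def register_translation(template, image, p=np.zeros(2), n_iters=50):
    """Estimate the translation p minimizing ||image(x + p) - template(x)||^2."""
    ys, xs = np.mgrid[0:template.shape[0], 0:template.shape[1]].astype(float)
    for _ in range(n_iters):
        warped = map_coordinates(image, [ys + p[1], xs + p[0]], order=1)
        error = (warped - template).ravel()
        # Image gradients evaluated at the warped coordinates (steepest-descent images).
        gx = map_coordinates(sobel(image, axis=1) / 8.0, [ys + p[1], xs + p[0]], order=1).ravel()
        gy = map_coordinates(sobel(image, axis=0) / 8.0, [ys + p[1], xs + p[0]], order=1).ravel()
        J = np.stack([gx, gy], axis=1)                  # Jacobian w.r.t. (px, py)
        dp = np.linalg.lstsq(J, -error, rcond=None)[0]  # Gauss-Newton step
        p = p + dp
        if np.linalg.norm(dp) < 1e-4:
            break
    return p
```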

Relevance:

30.00%

Publisher:

Abstract:

A new Relativistic Screened Hydrogenic Model has been developed to calculate atomic data needed to compute the optical and thermodynamic properties of high energy density plasmas. The model is based on a new set of universal screening constants, including nlj-splitting, that has been obtained by fitting to a large database of ionization potentials and excitation energies. This database was built with energies compiled from the National Institute of Standards and Technology (NIST) database of experimental atomic energy levels, and energies calculated with the Flexible Atomic Code (FAC). The screening constants have been computed up to the 5p3/2 subshell using a genetic algorithm technique with an objective function designed to minimize both the relative error and the maximum error. To select the best set of screening constants, some additional physical criteria have been applied, based on reproducing the filling order of the shells and on obtaining the correct ground state configuration. A statistical error analysis has been performed to test the model, indicating that approximately 88% of the data lie within a ±10% error interval. We validate the model by comparing its results with ionization energies, transition energies, and wave functions computed using sophisticated self-consistent codes, and with experimental data.
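
The basic screened hydrogenic idea can be sketched as follows; the screening constant is a placeholder, and the nlj-resolved relativistic corrections of the actual model are omitted:

```python
RYDBERG_EV = 13.605693    # eV

def screened_hydrogenic_energy(Z, n, sigma):
    """Non-relativistic screened hydrogenic level energy: the electron sees the
    nuclear charge Z reduced by a screening constant sigma from the other electrons."""
    z_eff = Z - sigma
    return -RYDBERG_EV * z_eff ** 2 / n ** 2   # eV, relative to the ionization limit

# Illustrative only: a 1s electron in neutral carbon with an assumed sigma of 0.3.
print(screened_hydrogenic_energy(Z=6, n=1, sigma=0.3))   # about -442 eV
```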

Relevance:

30.00%

Publisher:

Abstract:

This paper describes new approaches to improve the local and global approximation (matching) and modeling capability of the Takagi–Sugeno (T-S) fuzzy model. The main aim is to obtain high function-approximation accuracy and fast convergence. The main problem encountered is that the T-S identification method cannot be applied when the membership functions are overlapped by pairs. This restricts the application of the T-S method, because this type of membership function has been widely used during the last two decades in stability analysis and controller design of fuzzy systems and is popular in industrial control applications. The approach developed here can be considered a generalized version of the T-S identification method with optimized performance in approximating nonlinear functions. We propose a noniterative method based on a weighting-of-parameters approach, and an iterative algorithm that applies the extended Kalman filter based on the same parameter-weighting idea. We show that the Kalman filter is an effective tool in the identification of the T-S fuzzy model. A fuzzy-controller-based linear quadratic regulator is proposed to show the effectiveness of the estimation method developed here in control applications. An illustrative example of an inverted pendulum is chosen to evaluate the robustness and remarkable performance of the proposed method, locally and globally, in comparison with the original T-S model. Simulation results indicate the potential, simplicity, and generality of the algorithm. We also prove that these algorithms converge very fast, making them very practical to use.
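
A minimal sketch of the T-S structure being identified, i.e. a membership-weighted blend of local linear consequents, fitted here by plain least squares; the membership functions and data are illustrative assumptions, and the paper's weighting-of-parameters and extended-Kalman-filter procedures are not reproduced:

```python
import numpy as np

def memberships(x, centers, width):
    """Gaussian membership degrees, normalized to sum to one for each sample."""
    mu = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    return mu / mu.sum(axis=1, keepdims=True)

# Toy data from a nonlinear function we want the T-S model to approximate.
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.1 * x ** 2

centers = np.array([-2.0, 0.0, 2.0])      # three fuzzy rules
w = memberships(x, centers, width=1.5)    # (200, 3) normalized firing strengths

# T-S output: y_hat = sum_i w_i(x) * (a_i * x + b_i). Stack the weighted
# regressors and solve one global least-squares problem for all (a_i, b_i).
Phi = np.hstack([w * x[:, None], w])      # columns: w_i * x, then w_i
theta = np.linalg.lstsq(Phi, y, rcond=None)[0]
y_hat = Phi @ theta
print("max abs error:", np.abs(y - y_hat).max())
```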

Relevance:

30.00%

Publisher:

Abstract:

We have recently demonstrated a biosensor based on a lattice of SU8 pillars on a 1 μm SiO2/Si wafer, interrogated by measuring the vertical reflectivity as a function of wavelength. Biodetection has been proven with the combination of Bovine Serum Albumin (BSA) protein and its antibody (antiBSA). A BSA layer is attached to the pillars; the biorecognition of antiBSA produces a shift in the reflectivity curve that is related to the concentration of antiBSA. A detection limit on the order of 2 ng/ml is achieved for a rhombic lattice of pillars with a lattice parameter (a) of 800 nm, a height (h) of 420 nm and a diameter (d) of 200 nm. These results correlate with calculations using the 3D finite-difference time-domain (FDTD) method. A simplified 2D model is proposed, consisting of a multilayer model in which the pillars are replaced by a 420 nm layer with an effective refractive index obtained using a Beam Propagation Method (BPM) algorithm. Results provided by this model are in good agreement with the experimental data while reducing the computation time from one day to 15 minutes, giving a fast but accurate tool for optimizing the design and maximizing sensitivity, and allowing the influence of different variables (diameter, height and lattice parameter) to be analyzed. Sensitivity is obtained for a variety of configurations, reaching a limit of detection under 1 ng/ml. The optimum design is chosen not only for its sensitivity but also for its feasibility, from both the fabrication (limited by aspect ratio and proximity of the pillars) and fluidic points of view. (© 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
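
The kind of multilayer calculation behind the simplified 2D model can be sketched with normal-incidence transfer matrices; the effective index of the pillar layer and the substrate index below are placeholders, the real effective index coming from the BPM calculation described above:

```python
import numpy as np

def reflectivity(wavelength_nm, n_layers, d_layers_nm, n_in=1.0, n_sub=3.5):
    """Normal-incidence reflectivity of a thin-film stack via 2x2 transfer matrices."""
    k0 = 2 * np.pi / wavelength_nm
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers_nm):
        delta = k0 * n * d
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    num = n_in * M[0, 0] + n_in * n_sub * M[0, 1] - M[1, 0] - n_sub * M[1, 1]
    den = n_in * M[0, 0] + n_in * n_sub * M[0, 1] + M[1, 0] + n_sub * M[1, 1]
    return np.abs(num / den) ** 2

# Illustrative stack: effective-index "pillar" layer (420 nm) on 1000 nm SiO2 on Si.
wl = np.linspace(400, 900, 200)
R = [reflectivity(w, n_layers=[1.25, 1.45], d_layers_nm=[420, 1000]) for w in wl]
print(min(R), max(R))
```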

Relevance:

30.00%

Publisher:

Abstract:

An approximate analytic model of a shared-memory multiprocessor with a Cache Only Memory Architecture (COMA), the bus-based Data Diffusion Machine (DDM), is presented and validated. It describes the timing and interference in the system as a function of the hardware, the protocols, the topology and the workload. Model results have been compared to results from an independent simulator. The comparison shows good model accuracy, especially for non-saturated systems, where the errors in response times and device utilizations are independent of the number of processors and remain below 10% in 90% of the simulations. Therefore, the model can be used as an average performance prediction tool that avoids expensive simulations in the design of systems with many processors.
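
The flavour of such an analytic approach can be conveyed with a generic bus-contention sketch (an M/M/1-style approximation, not the DDM model developed in the paper); the request rates and service time are arbitrary illustrative values:

```python
def bus_model(n_processors, request_rate, service_time):
    """Generic open-queue sketch: processors issue bus requests at a given rate;
    waiting time grows nonlinearly as the bus approaches saturation."""
    utilization = n_processors * request_rate * service_time
    if utilization >= 1.0:
        return utilization, float("inf")                  # saturated bus
    response_time = service_time / (1.0 - utilization)    # M/M/1 approximation
    return utilization, response_time

for p in (4, 8, 16, 32):
    print(p, bus_model(p, request_rate=0.01, service_time=1.0))
```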