990 results for Continuous functions
Abstract:
In this article, a new methodology is presented for obtaining representation models of an a priori relation z = u(x1, x2, ..., xn) (1), from a known experimental dataset {(zi, x1i, x2i, ..., xni)}, i = 1, 2, ..., p. In this methodology, a potential energy is initially defined over each possible model for relationship (1), which allows Lagrangian mechanics to be applied to the derived system. Solving the Euler–Lagrange equations of this system yields the optimal solution according to the principle of least action. The defined Lagrangian corresponds to a continuous medium, to which an n-dimensional finite element model has been applied, so the problem can be solved via a compatible, determined, symmetric linear system of equations. The computational implementation of the methodology has improved on the representation models previously obtained and published by the authors.
Abstract:
This paper proposes an adaptive algorithm for clustering cumulative probability distribution functions (c.p.d.f.) of a continuous random variable, observed in different populations, into the minimum number of homogeneous clusters, making no parametric assumptions about the c.p.d.f.'s. The proposed distance function for clustering c.p.d.f.'s is based on the Kolmogorov–Smirnov two-sample statistic. This test is able to detect differences in position, dispersion or shape of the c.p.d.f.'s. In our context, this statistic allows us to cluster the recorded data with a homogeneity criterion based on the whole distribution of each data set, and to decide whether it is necessary to add more clusters. In this sense, the proposed algorithm is adaptive, as it automatically increases the number of clusters only as necessary; therefore, there is no need to fix the number of clusters in advance. The outputs of the algorithm are, for each cluster, the common c.p.d.f. of all observed data in the cluster (the centroid) and the Kolmogorov–Smirnov statistic between the centroid and the most distant c.p.d.f. The proposed algorithm has been used on a large data set of solar global irradiation spectra distributions. The results reduce all the information of more than 270,000 c.p.d.f.'s to only 6 clusters, corresponding to 6 different c.p.d.f.'s.
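As a rough illustration of the algorithmic idea, the following sketch clusters 1-D samples by their empirical distributions, opening a new cluster whenever a sample's Kolmogorov–Smirnov distance to every existing centroid exceeds a threshold. The pooling rule, names, and threshold handling are assumptions for illustration, not the authors' implementation.

```python
import bisect

def ks_statistic(a, b):
    """Two-sample KS statistic: max |F_a(x) - F_b(x)| over the pooled grid."""
    a, b = sorted(a), sorted(b)
    grid = sorted(set(a) | set(b))
    ecdf = lambda s, x: bisect.bisect_right(s, x) / len(s)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in grid)

def adaptive_cluster(samples, threshold):
    """Assign each sample to the nearest cluster, adding clusters as needed."""
    clusters = []  # each cluster: {"pool": pooled data (centroid), "members": samples}
    for s in samples:
        best, best_d = None, threshold
        for c in clusters:
            d = ks_statistic(s, c["pool"])
            if d < best_d:
                best, best_d = c, d
        if best is None:
            clusters.append({"pool": sorted(s), "members": [s]})  # open a new cluster
        else:
            best["pool"] = sorted(best["pool"] + list(s))
            best["members"].append(s)
    return clusters
```

Samples drawn from clearly different distributions end up in separate clusters, while similar samples are pooled into a common centroid.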
Abstract:
We investigate the role of local connectedness in utility theory and prove that any continuous total preorder on a locally connected separable space is continuously representable. This is a new simple criterion for the representability of continuous preferences, and is not a consequence of the standard theorems in utility theory that use conditions such as connectedness and separability, second countability, or path-connectedness. Finally we give applications to problems involving the existence of value functions in population ethics and to the problem of proving the existence of continuous utility functions in general equilibrium models with land as one of the commodities. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
Let (Φ_t)_{t ∈ ℝ+} be a Harris ergodic continuous-time Markov process on a general state space, with invariant probability measure π. We investigate the rates of convergence of the transition function P^t(x, ·) to π; specifically, we find conditions under which r(t)‖P^t(x, ·) − π‖ → 0 as t → ∞, for suitable subgeometric rate functions r(t), where ‖·‖ denotes the usual total variation norm for a signed measure. We derive sufficient conditions for the convergence to hold, in terms of the existence of suitable points on which the first hitting time moments are bounded. In particular, for stochastically ordered Markov processes, explicit bounds on subgeometric rates of convergence are obtained. These results are illustrated in several examples.
Abstract:
E. L. DeLosh, J. R. Busemeyer, and M. A. McDaniel (1997) found that when learning a positive, linear relationship between a continuous predictor (x) and a continuous criterion (y), trainees tend to underestimate y on items that ask the trainee to extrapolate. In 3 experiments, the authors examined the phenomenon and found that the tendency to underestimate y is reliable only in the so-called lower extrapolation region, that is, for new values of x that lie between zero and the edge of the training region. Existing models of function learning, such as the extrapolation-association model (DeLosh et al., 1997) and the population of linear experts model (M. L. Kalish, S. Lewandowsky, & J. Kruschke, 2004), cannot account for these results. The authors show that with minor changes, both models can predict the correct pattern of results.
Abstract:
In this paper, we address some issues related to evaluating and testing evolutionary algorithms. A landscape generator based on Gaussian functions is proposed for generating a variety of continuous landscapes as fitness functions. Through some initial experiments, we illustrate the usefulness of this landscape generator in testing evolutionary algorithms.
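A toy sketch of a Gaussian-function landscape generator follows; the value ranges, the max-combination rule and the parameter names are assumptions for illustration, not the paper's specification.

```python
import math
import random

def make_landscape(n_peaks, dim, seed=0):
    """Return a fitness function built from n_peaks Gaussian components."""
    rng = random.Random(seed)
    peaks = [
        ([rng.uniform(-5.0, 5.0) for _ in range(dim)],  # centre
         rng.uniform(0.5, 2.0),                          # width (sigma)
         rng.uniform(0.1, 1.0))                          # height
        for _ in range(n_peaks)
    ]
    def fitness(x):
        # Take the max over components so each Gaussian remains a distinct local optimum.
        return max(
            h * math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * w ** 2))
            for c, w, h in peaks
        )
    return fitness
```

Varying `n_peaks`, the widths and the heights changes the ruggedness of the landscape, which is what makes such generators useful for stress-testing evolutionary algorithms.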
Abstract:
A review is given of general chromatographic theory, the factors affecting the performance of chromatographic columns, and aspects of scale-up of the chromatographic process. The theory of gel permeation chromatography (g.p.c.) is reviewed, and the results of an experimental study to optimize the performance of an analytical g.p.c. system are reported. The design and construction of a novel sequential continuous chromatographic refining unit (SCCR3), for continuous liquid-liquid chromatography applications, is described. Counter-current operation is simulated by sequencing a system of inlet and outlet port functions around a connected series of fixed glass columns, each 5.1 cm internal diameter x 70 cm long. The number of columns may be varied, and, during this research, a series of either twenty or ten columns was used. Operation of the unit for continuous fractionation of a dextran polymer (M.W. ~ 30,000) by g.p.c. is reported using 200-400 µm diameter porous silica beads (Spherosil XOB07S) as packing, and distilled water as the mobile phase. The effects of feed concentration, feed flow rate, and mobile and stationary phase flow rates have been investigated by means of both product and on-column concentrations and molecular weight distributions. The ability to operate the unit successfully at on-column concentrations as high as 20% w/v dextran has been demonstrated, and removal of both high and low molecular weight ends of a polymer feed distribution, to produce products meeting commercial specifications, has been achieved. Equivalent throughputs have been as high as 2.8 tonnes per annum for ten columns, based on continuous operation for 8000 hours per annum. A concentration dependence of the equilibrium distribution coefficient K_D, observed during continuous fractionation studies, is related to evidence in the literature and to experimental results obtained on a small-scale batch column.
Theoretical treatments of the counter-current chromatographic process are outlined, and a preliminary computer simulation of the SCCR3 unit is presented.
Abstract:
The paper considers a vector discrete optimization problem with linear-fractional criterion functions on a feasible set that has the combinatorial properties of combinations. Structural properties of the feasible solution domain and of the Pareto-optimal (efficient), weakly efficient, and strictly efficient solution sets are examined. A relation between vector optimization problems on a combinatorial set of combinations and on a continuous feasible set is determined. One possible approach is proposed for solving a multicriteria combinatorial problem with linear-fractional criterion functions on a set of combinations.
Abstract:
Stability of nonlinear impulsive differential equations with "supremum" is studied. A special type of stability, combining two different measures and a dot product on a cone, is defined. Perturbing cone-valued piecewise continuous Lyapunov functions are applied. The method of Razumikhin, as well as the comparison method for scalar impulsive ordinary differential equations, is employed.
Abstract:
Dynamics of biomolecules over various spatial and time scales are essential for biological functions such as molecular recognition, catalysis and signaling. However, reconstruction of biomolecular dynamics from experimental observables requires the determination of a conformational probability distribution. Unfortunately, these distributions cannot be fully constrained by the limited information from experiments, making the problem an ill-posed one in the terminology of Hadamard. The ill-posed nature of the problem comes from the fact that it has no unique solution. Multiple or even an infinite number of solutions may exist. To avoid the ill-posed nature, the problem needs to be regularized by making assumptions, which inevitably introduce biases into the result.
Here, I present two continuous probability density function approaches to solve an important inverse problem called the RDC trigonometric moment problem. By focusing on interdomain orientations, we reduced the problem to the determination of a distribution on the 3D rotational space from residual dipolar couplings (RDCs). We derived an analytical equation that relates the alignment tensors of adjacent domains, which serves as the foundation of the two methods. In the first approach, the ill-posed nature of the problem is avoided by introducing a continuous distribution model, which enjoys a smoothness assumption. To find the optimal solution for the distribution, we also designed an efficient branch-and-bound algorithm that exploits the mathematical structure of the analytical solutions. The algorithm is guaranteed to find the distribution that best satisfies the analytical relationship. We observed good performance of the method when tested under various levels of experimental noise and when applied to two protein systems. The second approach avoids the use of any model by employing maximum entropy principles. This 'model-free' approach delivers the least biased result, which represents our state of knowledge. In this approach, the solution is an exponential function of Lagrange multipliers. To determine the multipliers, a convex objective function is constructed. Consequently, the maximum entropy solution can be found easily by gradient descent methods. Both algorithms can be applied to biomolecular RDC data in general, including data from RNA and DNA molecules.
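The maximum-entropy step admits a one-constraint caricature: the solution is p_i ∝ exp(λ·f_i), and λ is found by gradient descent on the convex dual log Z(λ) − λ·target. The toy below illustrates only this general recipe, not the RDC-specific formulation of the paper.

```python
import math

def maxent_distribution(values, target, lr=0.1, steps=5000):
    """Max-entropy distribution on `values` with its mean constrained to `target`."""
    lam = 0.0
    for _ in range(steps):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        mean = sum(wi * vi for wi, vi in zip(w, values)) / z
        lam -= lr * (mean - target)  # dual gradient: E_lambda[f] - target
    # Normalise with the converged multiplier.
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]
```

Because the dual is convex in λ, plain gradient descent suffices here; with several constraints one would descend over a vector of multipliers in the same way.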
Abstract:
The mixing regime of the upper 180 m of a mesoscale eddy in the vicinity of the Antarctic Polar Front at 47° S and 21° E was investigated during the R.V. Polarstern cruise ANT-XVIII/2 within the scope of the iron fertilization experiment EisenEx. On the basis of hydrographic CTD and ADCP profiles, we deduced the vertical diffusivity Kz from two different parameterizations. Since these parameterizations have the character of empirical functions, based on theoretical and idealized assumptions, they were, inter alia, compared with Cox-number and Thorpe-scale related diffusivities deduced from microstructure measurements, which supplied the first direct insights into the turbulence of this ocean region. Values of Kz in the range of 10^-4 to 10^-3 m^2/s appear to be a rather robust estimate of vertical diffusivity within the seasonal pycnocline. Values in the mixed layer above are more variable in time and reach 10^-1 m^2/s during periods of strong winds. The results confirm a close agreement between the microstructure-based eddy diffusivities and eddy diffusivities calculated after the parameterization of Pacanowski and Philander [1981, Journal of Physical Oceanography 11, 1443-1451, doi:10.1175/1520-0485(1981)011<1443:POVMIN>2.0.CO;2].
Abstract:
Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (gradient of the objective function with respect to surface movement) with the parametric design velocities (movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to the CAD variables.
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables or parameterisation scheme used for the model to be optimised plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history to preserve the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be directly used for downstream applications, including manufacturing and process planning.
This paper presents an approach to optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the "parametric design velocity" is calculated, defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advance in capability and robustness over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous ("real-valued") parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as it has an API which provides access to the values of the parameters which control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation procedure includes calculating the geometrical movement along a normal direction between two discrete representations of the original and perturbed geometry, respectively. The parametric design velocities can then be linked directly with the adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
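The finite-difference step above can be sketched geometrically: given matched samples of the original and perturbed boundary plus outward normals, the design velocity at each point is the normal component of the surface movement divided by the parameter perturbation dp. The point matching and data layout here are simplifying assumptions; a real workflow would re-export and re-sample the CAD geometry.

```python
def design_velocities(points, normals, perturbed_points, dp):
    """Normal movement of each boundary point per unit change in the parameter."""
    out = []
    for p, n, q in zip(points, normals, perturbed_points):
        disp = [qi - pi for qi, pi in zip(q, p)]          # surface movement
        vn = sum(di * ni for di, ni in zip(disp, n))      # project onto the normal
        out.append(vn / dp)
    return out
```

For a circle whose radius parameter grows from 1.0 to 1.01 with dp = 0.01, every boundary point gets design velocity 1.0, matching the intuition that the surface moves outward at unit rate per unit radius.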
A flow optimisation problem is presented, in which the power dissipation of the flow in an automotive air duct is to be reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and proceed with the optimisation process.
Abstract:
The lack of flexibility in the logistic systems currently on the market is leading to the development of new, innovative transportation systems. In order to find the optimal configuration of such a system with respect to the chosen goal functions, for example the minimization of transport times or the maximization of throughput, various mathematical methods of multi-criteria optimization are applicable. In this work, the concept of a complex transportation system is presented. Furthermore, the question of finding the optimal configuration of such a system through mathematical methods of optimization is considered.
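As a generic illustration of the multi-criteria comparison involved (the numbers and the two criteria below are hypothetical, not the paper's system model), a configuration is kept when no other configuration is at least as good in both criteria and strictly better in one, i.e. the Pareto-optimal set:

```python
def pareto_front(configs):
    """Keep (time, throughput) configurations not Pareto-dominated by any other.

    Time is to be minimised, throughput to be maximised.
    """
    def dominates(a, b):
        # a dominates b: no worse in both criteria, strictly better in at least one
        return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])
    return [c for c in configs if not any(dominates(o, c) for o in configs)]
```

A scalarisation or lexicographic method would then pick a single configuration from this front according to the priorities of the application.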