6 results for Computations

at Universidad de Alicante


Relevance: 10.00%

Abstract:

Different non-Fourier models of heat conduction have been considered in recent years, in a growing range of applications, to model microscale and ultrafast transient nonequilibrium responses in heat and mass transfer. In this work, using Fourier transforms, we obtain exact solutions for different lagging models of heat conduction in a semi-infinite domain, which allow the construction of analytic-numerical solutions with prescribed accuracy. Examples of numerical computations comparing the properties of the models considered are presented.
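As a rough, hedged illustration of the transform approach described in this abstract, the snippet below numerically inverts the Fourier sine-transform solution of the baseline (classical Fourier) heat model on a semi-infinite rod and compares it with the closed-form erfc solution; the lagging models only change the frequency-domain kernel, which is not reproduced here, and all parameter values are illustrative assumptions.

```python
# A minimal sketch (not the paper's code) of the transform approach: numerically invert
# the Fourier sine-transform solution of the baseline (classical Fourier) heat model
#     u_t = alpha * u_xx,  x > 0,  u(x, 0) = 0,  u(0, t) = u0,
# and check it against the closed-form erfc solution.  The lagging models modify the
# frequency-domain kernel; only the classical kernel is shown, and all values are
# illustrative assumptions.
import numpy as np
from scipy.special import erfc

def u_from_transform(x, t, u0=1.0, alpha=1.0, w_max=40.0, n=20000):
    """u(x,t) = u0 * (1 - (2/pi) * int_0^inf exp(-alpha w^2 t) sin(w x) / w dw),
    with the integral truncated at w_max and evaluated by the trapezoidal rule."""
    w = np.linspace(1e-12, w_max, n)                 # w = 0 is a removable point
    f = np.exp(-alpha * w**2 * t) * np.sin(w * x) / w
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w))
    return u0 * (1.0 - (2.0 / np.pi) * integral)

x, t, alpha = 0.5, 0.1, 1.0
print(u_from_transform(x, t, alpha=alpha))           # transform-based value
print(erfc(x / (2.0 * np.sqrt(alpha * t))))          # closed-form reference
```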

Relevance: 10.00%

Abstract:

Non-Fourier models of heat conduction are increasingly being considered in the modeling of microscale heat transfer in engineering and biomedical problems. The dual-phase-lagging (DPL) model, incorporating time lags in the heat flux and the temperature gradient, and some of its particular cases and approximations, result in heat conduction modeling equations in the form of delayed or hyperbolic partial differential equations. In this work, the application of difference schemes for the numerical solution of lagging models of heat conduction is considered. Numerical schemes for some DPL approximations are developed, and their convergence and stability properties are characterized. Examples of numerical computations are included.
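As a hedged sketch of the kind of scheme this abstract refers to, the snippet below implements one possible explicit, three-level finite difference discretization of a first-order dual-phase-lag approximation on a rod with Dirichlet boundaries; it is a generic construction, not the schemes developed in the paper, and the parameter values and start-up step are assumptions.

```python
# A minimal sketch of one possible explicit, three-level finite difference scheme for a
# first-order dual-phase-lag approximation
#     tau_q * T_tt + T_t = alpha * (T_xx + tau_T * T_xxt)
# on a rod with Dirichlet boundaries.  This is a generic construction, not the schemes
# developed in the paper; parameter values and the start-up step are assumptions.
import numpy as np

def dpl_explicit(u0, alpha, tau_q, tau_T, dx, dt, n_steps, bc=(0.0, 0.0)):
    """March a DPL approximation forward in time with an explicit scheme."""
    u_prev = u0.copy()
    u_curr = u0.copy()                     # simple start-up: zero initial time derivative
    a = tau_q / dt**2 + 1.0 / (2.0 * dt)   # coefficient of the new time level
    for _ in range(n_steps):
        lap_curr = np.zeros_like(u_curr)
        lap_prev = np.zeros_like(u_prev)
        lap_curr[1:-1] = (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]) / dx**2
        lap_prev[1:-1] = (u_prev[2:] - 2.0 * u_prev[1:-1] + u_prev[:-2]) / dx**2
        rhs = ((2.0 * tau_q / dt**2) * u_curr
               - (tau_q / dt**2 - 1.0 / (2.0 * dt)) * u_prev
               + alpha * lap_curr
               + alpha * tau_T * (lap_curr - lap_prev) / dt)
        u_next = rhs / a
        u_next[0], u_next[-1] = bc         # Dirichlet boundary conditions
        u_prev, u_curr = u_curr, u_next
    return u_curr

# Example: an initial hot spot in the middle of the rod, both ends held at zero.
x = np.linspace(0.0, 1.0, 101)
u_end = dpl_explicit(np.exp(-200.0 * (x - 0.5)**2), alpha=1.0, tau_q=0.05,
                     tau_T=0.02, dx=x[1] - x[0], dt=1e-5, n_steps=2000)
```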

Relevance: 10.00%

Abstract:

Different non-Fourier models of heat conduction, which incorporate time lags in the heat flux and/or the temperature gradient, have been increasingly considered in recent years to model microscale heat transfer problems in engineering. Numerical schemes to obtain approximate solutions of constant-coefficient lagging models of heat conduction have already been proposed. In this work, an explicit finite difference scheme for a model with time-variable coefficients is developed, and its convergence and stability properties are studied. Numerical computations showing examples of applications of the scheme are presented.
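The sketch below adapts the explicit DPL scheme shown earlier to coefficients that vary in time, simply re-evaluating alpha(t), tau_q(t) and tau_T(t) at each time level; the specific time dependence used in the example is an illustrative assumption, not taken from the paper.

```python
# A minimal adaptation of the explicit DPL sketch above to coefficients that vary in
# time: alpha(t), tau_q(t) and tau_T(t) are re-evaluated at every time level.  The
# particular time dependence used in the example is an illustrative assumption.
import numpy as np

def laplacian(u, dx):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return lap

def dpl_explicit_tdep(u0, alpha, tau_q, tau_T, dx, dt, n_steps, bc=(0.0, 0.0)):
    """alpha, tau_q, tau_T are callables of time t (coefficients variable in time)."""
    u_prev, u_curr = u0.copy(), u0.copy()        # zero initial time derivative assumed
    for n in range(n_steps):
        t = n * dt
        a_n, tq_n, tT_n = alpha(t), tau_q(t), tau_T(t)
        denom = tq_n / dt**2 + 1.0 / (2.0 * dt)
        rhs = ((2.0 * tq_n / dt**2) * u_curr
               - (tq_n / dt**2 - 1.0 / (2.0 * dt)) * u_prev
               + a_n * laplacian(u_curr, dx)
               + a_n * tT_n * (laplacian(u_curr, dx) - laplacian(u_prev, dx)) / dt)
        u_next = rhs / denom
        u_next[0], u_next[-1] = bc
        u_prev, u_curr = u_curr, u_next
    return u_curr

# Example: slowly decaying diffusivity with constant lags (illustrative values only).
x = np.linspace(0.0, 1.0, 101)
u_end = dpl_explicit_tdep(np.exp(-200.0 * (x - 0.5)**2),
                          alpha=lambda t: 1.0 / (1.0 + 5.0 * t),
                          tau_q=lambda t: 0.05, tau_T=lambda t: 0.02,
                          dx=x[1] - x[0], dt=1e-5, n_steps=2000)
```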

Relevance: 10.00%

Abstract:

Purpose – The purpose of this paper is to present a new geometric model based on the mathematical morphology paradigm, specialized to provide determinism to the classic morphological operations. This determinism is needed to model dynamic processes that require an order of application, as is the case when designing and manufacturing objects in CAD/CAM environments. Design/methodology/approach – The basic trajectory-based operation is the basis of the proposed morphological specialization. This operation allows the definition of morphological operators that obtain sequentially ordered sets of points from the boundary of the target objects, a determinism that does not exist in the classical morphological paradigm. From this basic operation, the complete set of morphological operators is redefined, incorporating the concepts of boundary and determinism: trajectory-based erosion and dilation, and other morphological filtering operations. Findings – This new morphological framework allows the definition of complex three-dimensional objects, providing arithmetical support for generating machining trajectories, one of the most complex problems currently found in CAD/CAM. Originality/value – The model proposes the integration of the design and manufacturing processes, avoiding the accuracy and integrity problems presented by other classic geometric models that divide these processes into two phases. Furthermore, the morphological operations are based on point sets, so the geometric data structures and the operations are intrinsically simple and efficient. Another important value is that no excessive computational resources are needed, because only the points on the boundary are processed.
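As a rough illustration of the determinism issue, and not of the paper's trajectory-based operators themselves, the sketch below extracts the classical morphological boundary of a discrete object (an unordered point set) and orders it into a trajectory by greedy nearest-neighbour chaining.

```python
# A rough illustration (not the paper's model) of the determinism issue: the classical
# morphological boundary of a discrete object is an unordered point set, and a simple
# greedy nearest-neighbour chaining step turns it into a sequentially ordered trajectory,
# which is the kind of ordering the trajectory-based operators are designed to provide.
import numpy as np
from scipy.ndimage import binary_erosion

def ordered_boundary(mask):
    """Return the boundary pixels of a binary mask ordered into a single trajectory."""
    boundary = mask & ~binary_erosion(mask)          # classical morphological boundary
    pts = np.argwhere(boundary).astype(float)
    path = [0]
    remaining = set(range(1, len(pts)))
    while remaining:                                 # greedy nearest-neighbour chaining
        last = pts[path[-1]]
        nxt = min(remaining, key=lambda i: np.sum((pts[i] - last) ** 2))
        path.append(nxt)
        remaining.remove(nxt)
    return pts[path]

# Example: ordered boundary trajectory of a filled disc on a 64 x 64 grid.
yy, xx = np.mgrid[0:64, 0:64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
trajectory = ordered_boundary(disc)
print(trajectory.shape)                              # (number of boundary pixels, 2)
```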

Relevance: 10.00%

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem of partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for the volumetric reconstruction of tomography data, robotics to reconstruct surfaces or scenes using range sensor information, industrial systems for the quality control of manufactured objects, or even biology to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points in the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature whose goal is to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the nearest-neighbor search. In spite of decreasing its complexity, some of these variants tend to have a negative impact on the final registration precision or the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems, among those described before, can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, taking into account distances with a lower computational cost than the Euclidean one, which is the de facto standard in implementations of the algorithm reported in the literature. In that analysis, the behavior of the algorithm in different topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method, in order to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to affect the overall performance of the method significantly and positively. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to that of the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
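As a hedged sketch of the idea of swapping the distance metric, and not of the specific variants analysed in this work, the snippet below implements a basic point-to-point ICP loop in which the nearest-neighbour search metric is a parameter (the Minkowski p-norm of the k-d tree query), so a cheaper metric such as the Manhattan distance (p = 1) can replace the Euclidean one (p = 2).

```python
# A minimal sketch of a point-to-point ICP loop in which the nearest-neighbour search
# metric is a parameter (the Minkowski p-norm of the k-d tree query), so a cheaper
# metric such as the Manhattan distance (p=1) can replace the Euclidean one (p=2).
# This is a generic illustration, not the specific variants analysed in this work.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iter=30, p=1):
    """Rigidly align `source` to `target` (N x 3 arrays); p selects the NN metric."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        _, idx = tree.query(src, p=p)                # correspondence search
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)        # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                               # SVD (Kabsch) rigid estimate
        if np.linalg.det(R) < 0.0:                   # avoid reflections
            Vt[-1] *= -1.0
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Only the correspondence search uses the selected metric in this sketch; the per-iteration rigid update remains the standard least-squares (SVD) estimate.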

Relevance: 10.00%

Abstract:

The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle, which cause the rotational axes of these layers to diverge slightly from each other, resulting in a wobble of the Earth's rotation axis comparable to the nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding-window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of the models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and the step size of its shift, is sought through a thorough experimental analysis using real data. These analyses lead to the derivation of a model with a higher temporal resolution than the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. Moreover, according to our computations, empirical models determined using USNO Finals as the a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than those using IERS 08 C04 over the whole period of VLBI observations. The model is also validated through comparisons with other recognized models, with a satisfactory level of agreement among them; our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
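As a minimal, hedged sketch of the sliding-window estimation, the snippet below fits a sinusoid of fixed period (about 430 days, an assumed value) inside a 400-day window shifted day by day, yielding time-variable amplitudes and phases; the synthetic input series is an illustrative assumption, whereas the actual analyses fit the celestial pole offsets derived from VLBI.

```python
# A minimal sketch of the sliding-window estimation: a sinusoid with a fixed period
# (about 430 days, assumed here) is fitted by least squares inside a 400-day window
# shifted day by day, giving time-variable amplitudes and phases.  The synthetic input
# series is an assumption for illustration; real analyses fit the dX/dY celestial pole
# offsets derived from VLBI.
import numpy as np

def sliding_fcn_fit(t, y, period=430.21, window=400.0, step=1.0):
    """Return window centres, amplitudes and phases of y(t) ~ A cos(2 pi t / P + phi)."""
    w = 2.0 * np.pi / period
    centres, amps, phases = [], [], []
    start = t[0]
    while start + window <= t[-1]:
        m = (t >= start) & (t < start + window)
        # Linear model y = a cos(w t) + b sin(w t), solved by ordinary least squares.
        G = np.column_stack([np.cos(w * t[m]), np.sin(w * t[m])])
        a, b = np.linalg.lstsq(G, y[m], rcond=None)[0]
        centres.append(start + window / 2.0)
        amps.append(np.hypot(a, b))                  # A = sqrt(a^2 + b^2)
        phases.append(np.arctan2(-b, a))             # a = A cos(phi), b = -A sin(phi)
        start += step
    return np.array(centres), np.array(amps), np.array(phases)

# Synthetic check: a 430.21-day oscillation with a slowly varying amplitude plus noise.
t = np.arange(0.0, 3000.0, 1.0)
y = (0.2 + 0.05 * np.sin(2.0 * np.pi * t / 3000.0)) * np.cos(2.0 * np.pi * t / 430.21) \
    + 0.01 * np.random.randn(t.size)
centres, amps, phases = sliding_fcn_fit(t, y)
```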