985 results for interpolation method
Abstract:
This paper presents a series of calculation procedures for the computer-aided design of ternary distillation columns that overcome the iterative equilibrium calculations usually required in this kind of problem, thus reducing calculation time. The proposed procedures include interpolation and intersection methods to solve the equilibrium equations together with the mass and energy balances. The proposed calculation programs also allow a rigorous solution of the mass and energy balances and the equilibrium relations.
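The central trick, replacing an iterative equilibrium (flash) solve with interpolation of precomputed vapour-liquid equilibrium data, can be sketched in a few lines. This is a minimal illustration of the idea under a constant-relative-volatility assumption, not the authors' procedure:

    import numpy as np

    # Precompute an equilibrium table once (here from a constant relative
    # volatility of 2.4, purely illustrative), then interpolate it instead
    # of iterating on K-value equations at every stage.
    x_liq = np.linspace(0.0, 1.0, 11)          # liquid mole fraction grid
    y_vap = 2.4 * x_liq / (1.0 + 1.4 * x_liq)  # equilibrium vapour fractions

    def equilibrium_y(x):
        """Interpolated vapour composition for a given liquid composition."""
        return np.interp(x, x_liq, y_vap)

    print(equilibrium_y(0.35))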
Abstract:
This paper presents a method to interpolate a periodic band-limited signal from samples lying at nonuniform positions on a regular grid; the method is based on the FFT and has the same complexity order as that algorithm. This kind of interpolation is usually termed "the missing samples problem" in the literature, and a wide variety of iterative and direct methods exist for its solution. The one presented in this paper is a direct method that exploits the properties of the so-called erasure polynomial and provides a significant improvement on the most efficient method in the literature, which appears to be Marvasti's burst error recovery (BER) technique. The paper includes numerical assessments of the method's stability and complexity.
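For contrast with the direct method of the paper, the iterative route to the missing samples problem can be sketched quickly. The following is a minimal Papoulis-Gerchberg-style recovery (band-limit, then re-impose the known samples), with an assumed grid size and bandwidth, not the erasure-polynomial algorithm itself:

    import numpy as np

    N, B = 64, 8                                  # grid size, one-sided bandwidth
    rng = np.random.default_rng(0)
    spec = np.zeros(N, complex)
    spec[:B+1] = rng.standard_normal(B+1) + 1j*rng.standard_normal(B+1)
    spec[-B:] = np.conj(spec[1:B+1][::-1])        # Hermitian symmetry
    x = np.fft.ifft(spec).real                    # periodic band-limited signal

    known = rng.random(N) > 0.3                   # ~70% of grid samples observed
    y = np.where(known, x, 0.0)
    for _ in range(200):
        s = np.fft.fft(y)
        s[B+1:N-B] = 0.0                          # project onto the band
        y = np.fft.ifft(s).real
        y[known] = x[known]                       # re-impose known samples
    print(np.max(np.abs(y - x)))                  # recovery error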
Abstract:
Superstructure approaches address the difficult problem of the rigorous economic design of a distillation column. However, these methods require complex initialization procedures and are hard to solve, and for this reason they have not been used extensively. In this work, we present a methodology for the rigorous optimization of chemical processes implemented on a commercial simulator using surrogate models based on kriging interpolation. Several examples were studied, but in this paper we perform the optimization of a superstructure for a non-sharp separation to show the efficiency and effectiveness of the method. It is noteworthy that sufficiently accurate surrogate models can be obtained with up to seven degrees of freedom.
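The surrogate-based loop can be illustrated with a toy problem: sample an expensive simulator at a few design points, fit a kriging (Gaussian-process) model, and optimise the cheap surrogate instead. A sketch, using scikit-learn's GP regressor as a stand-in for the kriging model and a toy function in place of the commercial simulator:

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def simulator(x):                    # placeholder for a costly flowsheet run
        return np.sin(3*x) + 0.5*x**2

    X = np.linspace(-2, 2, 12).reshape(-1, 1)     # sampling plan
    y = simulator(X).ravel()
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)

    # Optimise the surrogate; in practice one would re-sample and refit.
    res = minimize(lambda x: gp.predict(np.atleast_2d(x))[0],
                   x0=[0.0], bounds=[(-2, 2)])
    print(res.x, simulator(res.x))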
Abstract:
We have developed a statistical gap-filling method adapted to the specific coverage and properties of observed fugacity of surface ocean CO2 (fCO2). We have used this method to interpolate the Surface Ocean CO2 Atlas (SOCAT) v2 database on a 2.5°×2.5° global grid (south of 70°N) for 1985-2011 at monthly resolution. The method combines a spatial interpolation based on a 'radius of influence' to determine nearby similar fCO2 values with temporal harmonic and cubic spline curve-fitting, and it also fits long-term trends and seasonal cycles. Interannual variability is established using deviations of observations from the fitted trends and seasonal cycles. An uncertainty is computed for all interpolated values based on the spatial and temporal range of the interpolation. Tests of the method using model data show that it performs as well as or better than previous regional interpolation methods, while in addition providing near-global and interannual coverage.
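The curve-fitting ingredient (a long-term trend plus seasonal harmonics, with interannual variability read off the residuals) reduces to a linear least-squares problem for each grid cell. A sketch on synthetic monthly data, not SOCAT values:

    import numpy as np

    t = np.arange(120) / 12.0           # 10 years of monthly time steps (years)
    rng = np.random.default_rng(1)
    fco2 = 360 + 1.8*t + 8*np.sin(2*np.pi*t) + rng.normal(0, 2, t.size)

    A = np.column_stack([np.ones_like(t), t,                    # trend
                         np.sin(2*np.pi*t), np.cos(2*np.pi*t),  # annual cycle
                         np.sin(4*np.pi*t), np.cos(4*np.pi*t)]) # semi-annual
    coef, *_ = np.linalg.lstsq(A, fco2, rcond=None)
    anomaly = fco2 - A @ coef           # proxy for interannual variability
    print(coef[:2])                     # intercept and long-term trend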
Abstract:
In many Environmental Information Systems the actual observations arise from a discrete monitoring network which might be rather heterogeneous in both location and the types of measurements made. In this paper we describe the architecture and infrastructure of a system, developed as part of the EU FP6 funded INTAMAP project, that provides a service-oriented solution allowing the construction of an interoperable, automatic interpolation system. This system will be based on the Open Geospatial Consortium's Web Feature Service (WFS) standard. The essence of our approach is to extend the GML3.1 observation feature to include information about the sensor using SensorML, and to further extend this to incorporate observation error characteristics. Our extended WFS will accept observations and store them in a database. The observations will be passed to our R-based interpolation server, which will use a range of methods, including a novel sparse, sequential kriging method (only briefly described here), to produce an internal representation of the interpolated field resulting from the observations currently uploaded to the system. The extended WFS will then accept queries such as 'What is the probability distribution of the desired variable at a given point?', 'What is the mean value over a given region?', or 'What is the probability of exceeding a certain threshold at a given location?'. To support information-rich transfer of complex and uncertain predictions we are developing schemas to represent probabilistic results in a GML3.1 (object-property) style. The system will also offer more easily accessible Web Map Service and Web Coverage Service interfaces to allow users to access the system at the level of complexity they require for their specific application. Such a system will offer a very valuable contribution to the next generation of Environmental Information Systems in the context of real-time mapping for monitoring and security, particularly for systems that employ a service-oriented architecture.
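The threshold query, for instance, is cheap once the interpolated field is available: under a Gaussian predictive distribution, as kriging provides, the exceedance probability follows directly from the normal survival function. A minimal sketch with invented numbers:

    from scipy.stats import norm

    mean, sd, threshold = 42.0, 6.5, 50.0               # kriging prediction at a point
    p_exceed = norm.sf(threshold, loc=mean, scale=sd)   # P(Z > threshold)
    print(f"P(exceeding {threshold}) = {p_exceed:.3f}")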
Abstract:
Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced-rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced-rank approximation chosen such that there is minimal information loss. In this paper a sequential framework for inference in such projected processes is presented, where the observations are considered one at a time. We introduce a C++ library for carrying out such projected, sequential estimation; the library adds several novel features. In particular, we have incorporated the ability to use a generic observation operator, or sensor model, to permit data fusion. We can also cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the variogram parameters is based on maximum likelihood estimation. We illustrate the projected sequential method in application to synthetic and real data sets. We discuss the software implementation and suggest possible future extensions.
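The reduced-rank idea itself is compact: approximate the full covariance through a small set of projection (inducing) points. A minimal Nystrom-style sketch with an illustrative kernel, not the library's implementation:

    import numpy as np

    def rbf(a, b, ell=0.3):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / ell) ** 2)

    x = np.linspace(0, 1, 200)           # all prediction locations
    u = np.linspace(0, 1, 15)            # projection (inducing) points
    Kuu, Kxu = rbf(u, u), rbf(x, u)
    K_low_rank = Kxu @ np.linalg.solve(Kuu + 1e-8*np.eye(u.size), Kxu.T)
    print(np.abs(rbf(x, x) - K_low_rank).max())   # information loss of the projection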
Abstract:
Cardiotocographic data provide physicians with information about foetal development and permit assessment of conditions such as foetal distress. An incorrect evaluation of the foetal status can of course be very dangerous. To improve the interpretation of cardiotocographic recordings, great interest has been dedicated to spectral analysis of foetal heart rate variability. It is worth remembering, however, that the foetal heart rate (FHR) is intrinsically an uneven series, so a zero-order, linear or cubic spline interpolation can be employed to produce an evenly sampled series. This is problematic for frequency analyses because interpolation introduces alterations in the FHR power spectrum. In particular, the interpolation process can produce alterations of the power spectral density that, for example, affect the estimation of the sympatho-vagal balance (SVB, computed as the low-frequency/high-frequency ratio), which represents an important clinical parameter. In order to estimate the alterations of the frequency spectrum of the FHR variability signal due to interpolation and cardiotocographic storage rates, in this work we simulated uneven FHR series with set characteristics and their evenly spaced versions (with different orders of interpolation and storage rates), and computed the SVB values by power spectral density. For power spectral density estimation we chose the Lomb method, as suggested by other authors for studying uneven heart rate series in adults. In summary, the results show that evaluating SVB on the evenly spaced FHR series leads to its overestimation, due both to the interpolation process and to the storage rate; however, cubic spline interpolation produces more robust and accurate results. © 2010 Elsevier Ltd. All rights reserved.
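The Lomb approach needs no interpolation at all: the periodogram is evaluated directly on the uneven sample times, and SVB follows from the band powers. A sketch with an illustrative signal and the usual adult LF/HF bands (the paper's simulation protocol and foetal bands may differ):

    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0, 300, 600))     # uneven sample times (s)
    x = (np.sin(2*np.pi*0.08*t) + 0.5*np.sin(2*np.pi*0.3*t)
         + 0.1*rng.standard_normal(t.size))

    f = np.linspace(0.01, 0.5, 500)                  # Hz
    pxx = lombscargle(t, x - x.mean(), 2*np.pi*f)    # expects angular frequencies
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum()
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum()
    print("SVB (LF/HF) =", lf / hf)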
Abstract:
Heterogeneous datasets arise naturally in most applications due to the use of a variety of sensors and measuring platforms. Such datasets can be heterogeneous in terms of the error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of the dataset and require computationally expensive Monte Carlo based inference. Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced-rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced-rank approximation chosen such that there is minimal information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented. The observations are considered one at a time, which avoids the need for the high-dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced; it implements projected, sequential estimation and adds several novel features. In particular, the library includes the ability to use a generic observation operator, or sensor model, to permit data fusion. It is also possible to cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets. Limitations and extensions are discussed. © 2010 Elsevier Ltd.
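Considering one observation at a time keeps every update a rank-one operation, and a generic observation operator drops in naturally. A hedged sketch of such a sequential Gaussian update on a fixed grid (not gptk's API; the two sensor models below are invented for illustration):

    import numpy as np

    x = np.linspace(0, 1, 50)
    K = np.exp(-0.5*((x[:, None] - x[None, :])/0.2)**2)   # prior covariance
    mu = np.zeros_like(x)

    def assimilate(mu, K, h, y, r):
        """One observation y = h @ f + noise (variance r); rank-one update."""
        s = h @ K @ h + r
        g = K @ h / s                                     # gain
        return mu + g*(y - h @ mu), K - np.outer(g, h @ K)

    h_point = np.zeros_like(x); h_point[25] = 1.0   # direct reading at x[25]
    h_mean = np.ones_like(x) / x.size               # sensor averaging the region
    for h, y in [(h_point, 0.8), (h_mean, 0.1)]:
        mu, K = assimilate(mu, K, h, y, r=0.01)
    print(mu[25], K[25, 25])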
Abstract:
In the process of engineering design of structural shapes, flat plate analysis results can be generalized to predict the behavior of complete structural shapes. The purpose of this project is to analyze a thin flat plate under conductive heat transfer and to simulate the temperature distribution, thermal stresses, total displacements, and buckling deformations. The current approach in such cases has been the Finite Element Method (FEM), whose basis is the construction of a conforming mesh. In contrast, this project uses the mesh-free Scan Solve Method, which eliminates the meshing limitation by using a non-conforming mesh. I implemented this modeling process by developing numerical algorithms and software tools to model thermally induced buckling. In addition, a convergence analysis was carried out, and the results were compared with FEM. In conclusion, the results demonstrate that the method gives solutions similar in quality to those of FEM while being less computationally time consuming.
Abstract:
Energy saving, the reduction of greenhouse gases and the increased use of renewables are key policies for achieving the European 2020 targets. In particular, distributed renewable energy sources, integrated with spatial planning, require novel methods to optimise supply and demand. In contrast with large-scale wind turbines, small and medium wind turbines (SMWTs) have a less extensive impact on the use of space and the power system; nevertheless, a significant spatial footprint is still present and good spatial planning is a necessity. To optimise the location of SMWTs, detailed knowledge of the spatial distribution of the average wind speed is essential. Hence, in this article, wind measurements and roughness maps were used to create a reliable annual mean wind speed map of Flanders at 10 m above the Earth's surface. Via roughness transformation, the surface wind speed measurements were converted into meso- and macroscale wind data. The data were further processed using seven different spatial interpolation methods in order to develop regional wind resource maps. Based on statistical analysis, the transformation into mesoscale wind, in combination with Simple Kriging, was found to be the most adequate method to create reliable maps for decision-making on optimal production sites for SMWTs in Flanders (Belgium).
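The roughness transformation step can be pictured with the neutral logarithmic wind profile: extrapolate the measured 10 m wind up to a blending height with the local roughness length, then back down with the target site's roughness. The heights and roughness lengths below are illustrative assumptions, not the article's configuration:

    import numpy as np

    def log_profile(u_ref, z_ref, z, z0):
        """Neutral logarithmic wind profile between heights z_ref and z."""
        return u_ref * np.log(z / z0) / np.log(z_ref / z0)

    u10_measured = 4.2   # m/s at 10 m over roughness z0 = 0.03 m
    u_blend = log_profile(u10_measured, z_ref=10, z=60, z0=0.03)  # up to 60 m
    u10_site = log_profile(u_blend, z_ref=60, z=10, z0=0.10)      # down, rougher site
    print(u10_site)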
Abstract:
Measuring the extent to which a piece of structural timber has distorted at a macroscopic scale is fundamental to assessing its viability as a structural component. From the sawmill to the construction site, as structural timber dries, distortion can render it unsuitable for its intended purpose. This rejection of unusable timber is a considerable source of waste for the timber industry and the wider construction sector. As such, ensuring accurate measurement of distortion is a key step in addressing inefficiencies within timber processing. Currently, the FRITS frame method is the established approach used to gain an understanding of a timber surface profile. The method, while reliable, depends on relatively few measurements taken across a limited area of the overall surface, with a great deal of interpolation required. Further, the process is unavoidably slow and cumbersome, the immobile scanning equipment limiting where and when measurements can be taken and constricting the process as a whole. This thesis seeks to introduce LiDAR scanning as a new, alternative approach to distortion feature measurement. Although the technique is in its infancy within timber research, the practicalities of using LiDAR scanning as a measurement method are demonstrated here, exploiting many of the advantages the technology has over current approaches. LiDAR scanning creates a much more comprehensive image of a timber surface, generating input data several orders of magnitude larger than that of the FRITS frame. Set-up and scanning time for LiDAR is also much quicker and more flexible than for existing methods; with LiDAR scanning, the measurement process is freed from many of the constraints of the FRITS frame and can be carried out in almost any environment. For this thesis, surface scans were carried out on seven Sitka spruce samples of dimensions 48.5 × 102 × 3000 mm using both the FRITS frame and the LiDAR scanner. The samples used presented marked levels of distortion and were relatively free from knots. A computational measurement model was created to extract feature measurements from the raw LiDAR data, enabling an assessment of each piece of timber to be carried out in accordance with existing standards. Assessment of distortion features focused primarily on the measurement of twist, due to its strong prevalence in spruce and the considerable concern it generates within the construction industry. Additional measurements of surface inclination and bow were also made with each method to further establish LiDAR's credentials as a viable alternative. Overall, feature measurements generated by the new LiDAR method compared well with those of the established FRITS method. From these investigations, recommendations were made to address inadequacies within existing measurement standards, namely their reliance on generalised and interpretative descriptions of distortion. The potential for further uses of LiDAR scanning within timber research is also discussed.
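One plausible reduction from a dense LiDAR point cloud to a twist figure is the classical corner-lift measure: the deviation of the fourth corner from the plane through the other three, with small patches averaged to tame scanner noise. A sketch on synthetic data; this is not the thesis's computational measurement model:

    import numpy as np

    rng = np.random.default_rng(3)
    # synthetic 3000 x 102 mm surface with 2 mm of twist plus scanner noise
    xs, ys = np.meshgrid(np.linspace(0, 3000, 300), np.linspace(0, 102, 30))
    z = 2.0*(xs/3000)*(ys/102) + 0.05*rng.standard_normal(xs.shape)

    def corner(z, r, c, k=3):
        """Average a k-by-k patch at one corner of the height map."""
        rows = slice(0, k) if r == 0 else slice(-k, None)
        cols = slice(0, k) if c == 0 else slice(-k, None)
        return z[rows, cols].mean()

    twist = corner(z, 1, 1) - corner(z, 1, 0) - corner(z, 0, 1) + corner(z, 0, 0)
    print("twist estimate (mm):", twist)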
Abstract:
We propose a novel finite element formulation that significantly reduces the number of degrees of freedom necessary to obtain reasonably accurate approximations of the low-frequency component of the deformation in boundary-value problems. In contrast to the standard Ritz–Galerkin approach, the shape functions are defined on a Lie algebra—the logarithmic space—of the deformation function. We construct a deformation function based on an interpolation of transformations at the nodes of the finite element. In the case of the geometrically exact planar Bernoulli beam element presented in this work, these transformation functions at the nodes are given as rotations. However, due to an intrinsic coupling between rotational and translational components of the deformation function, the formulation provides for a good approximation of the deflection of the beam, as well as of the resultant forces and moments. As both the translational and the rotational components of the deformation function are defined on the logarithmic space, we propose to refer to the novel approach as the “Logarithmic finite element method”, or “LogFE” method.
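For planar rotations the log space is simply the rotation angle, which makes the contrast with interpolating matrix entries easy to see. A minimal sketch of log-space interpolation for SO(2) with linear shape functions (illustrative, and far simpler than the coupled interpolation in the paper):

    import numpy as np

    def exp_so2(theta):
        """Exponential map from the Lie algebra (angle) to a rotation matrix."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    theta_nodes = np.array([0.0, np.pi/2])    # nodal rotations of one element

    def interp_rotation(xi):                  # xi in [0, 1] along the element
        shape = np.array([1 - xi, xi])        # linear shape functions
        return exp_so2(shape @ theta_nodes)   # interpolate angles, then exp

    print(interp_rotation(0.5))               # an exact 45-degree rotation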
Abstract:
Fleck and Johnson (Int. J. Mech. Sci. 29 (1987) 507) and Fleck et al. (Proc. Inst. Mech. Eng. 206 (1992) 119) have developed foil rolling models which allow for large deformations in the roll profile, including the possibility that the rolls flatten completely. However, these models require computationally expensive iterative solution techniques. A new approach to the approximate solution of the Fleck et al. (1992) Influence Function Model has been developed using both analytic and approximation techniques. The numerical difficulties arising from solving an integral equation in the flattened region have been reduced by applying an Inverse Hilbert Transform to obtain an analytic expression for the pressure. The method described in this paper is applicable both to cases where a flattened region exists and to cases where it does not.
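As a small illustration of the machinery involved (not the paper's analytic inversion), a discrete Hilbert transform is readily computed from the FFT-based analytic signal:

    import numpy as np
    from scipy.signal import hilbert

    x = np.linspace(0, 2*np.pi, 256, endpoint=False)
    f = np.cos(3*x)
    Hf = np.imag(hilbert(f))                  # analytic signal: f + i*H(f)
    print(np.max(np.abs(Hf - np.sin(3*x))))   # H[cos(3x)] = sin(3x)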