768 results for trilinear interpolation
Abstract:
The analysis of heart rate variability (HRV) uses time series containing the intervals between successive heartbeats in order to assess the autonomic regulation of the cardiovascular system. These series are obtained from the electrocardiogram (ECG) signal, which can be affected by different types of artifacts that lead to incorrect interpretations in the analysis of the HRV signals. The classic approach to dealing with these artifacts is the use of correction methods, some of them based on interpolation, substitution or statistical techniques. However, few studies show the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of several linear and non-linear correction methods on HRV signals with induced artifacts, by quantifying their linear and nonlinear HRV parameters. As part of the methodology, ECG signals of rats measured by telemetry were used to generate real, error-free heart rate variability signals. Missing points (beats) were then simulated in these series in different quantities in order to emulate a real experimental situation as accurately as possible. To compare recovery efficiency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW) and nonlinear predictive interpolation (NPI) were used as correction methods for the series with induced artifacts. The accuracy of each correction method was assessed from the mean value of the series (AVNN), the standard deviation (SDNN), the root mean square of successive differences between heartbeats (RMSSD), Lomb's periodogram (LSP), Detrended Fluctuation Analysis (DFA), multiscale entropy (MSE) and symbolic dynamics (SD), measured on each HRV signal with and without artifacts. The results show that at low levels of missing points the performance of all correction techniques is very similar, with very close values for each HRV parameter. However, at higher levels of losses only the NPI method yields HRV parameters with low error values and few significant differences with respect to the values calculated for the same signals without missing points.
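The abstract does not include the authors' implementation; as a rough illustration of two of the correction methods it compares (LI and CI), the following Python sketch interpolates a synthetic RR-interval series at simulated missing-beat positions and recomputes basic time-domain parameters (AVNN, SDNN, RMSSD). All data and names are illustrative, and the artifact model is deliberately simplified.

```python
# Illustrative sketch only: linear (LI) and cubic spline (CI) correction of a
# synthetic RR series with simulated missing samples.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
rr = 0.2 + 0.02 * rng.standard_normal(300)     # synthetic RR intervals (s), rat-like heart rate
t = np.cumsum(rr)                              # beat occurrence times

# Simulate induced artifacts: remove 10% of the samples at random positions
# (a simplified artifact model; the study's missing-beat simulation may differ)
keep = np.sort(rng.choice(rr.size, size=int(0.9 * rr.size), replace=False))
t_obs, rr_obs = t[keep], rr[keep]

# Interpolate the observed RR values back onto all beat times
rr_li = np.interp(t, t_obs, rr_obs)            # linear interpolation (LI)
rr_ci = CubicSpline(t_obs, rr_obs)(t)          # cubic spline interpolation (CI)

def hrv_params(x):
    """AVNN, SDNN and RMSSD of an RR series."""
    return x.mean(), x.std(ddof=1), np.sqrt(np.mean(np.diff(x) ** 2))

for name, series in [("original", rr), ("LI", rr_li), ("CI", rr_ci)]:
    print(name, [round(float(v), 4) for v in hrv_params(series)])
```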
Abstract:
The energy demand for operating Information and Communication Technology (ICT) systems has been growing, leading to high operational costs and a consequent increase in carbon emissions. In both datacenters and telecom infrastructures, the networks account for a significant share of energy consumption. Given that, there is an increasing demand for energy efficiency solutions, and several capabilities to save energy have been proposed. However, it is very difficult to orchestrate such energy efficiency capabilities, i.e., coordinate or combine them in the same network, ensuring conflict-free operation and choosing the best one for a given scenario, so that a capability not suited to the current bandwidth utilization is not applied and does not lead to congestion or packet loss. Moreover, the literature offers no way to do this while taking business directives into account. In this regard, a method able to orchestrate different energy efficiency capabilities is proposed, considering the possible combinations and conflicts among them, as well as the best option for a given bandwidth utilization and network characteristics. In the proposed method, the business policies specified in a high-level interface are refined down to the network level in order to bring high-level directives into the operation, and a Utility Function is used to combine energy efficiency and performance requirements. A Decision Tree able to determine what to do in each scenario is deployed in a Software Defined Network environment. The proposed method was validated with different experiments, testing the Utility Function, checking the extra savings obtained when combining several capabilities, the decision tree interpolation, and dynamicity aspects. The orchestration proved to be a valid solution to the problem of finding the best combination for a given scenario, achieving additional savings due to the combination, besides ensuring conflict-free operation.
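The abstract does not specify the form of the Utility Function or the capability set; a minimal weighted-sum sketch with hypothetical capability profiles might look like the following, where combinations that would push link utilization past a congestion limit are discarded.

```python
# Hypothetical sketch of a weighted-sum utility trading off energy savings against a
# utilization penalty; the thesis's actual Utility Function and data are not given.
from itertools import combinations

# (name, fraction of energy saved, extra utilization it adds) -- made-up values
capabilities = [("sleep-ports", 0.20, 0.15),
                ("rate-adapt",  0.10, 0.05),
                ("reroute",     0.15, 0.25)]

def utility(combo, base_util, w_energy=0.6, w_perf=0.4, util_limit=0.8):
    """Reward energy savings, penalize utilization; forbid combinations that risk congestion."""
    saving = sum(c[1] for c in combo)
    util = base_util + sum(c[2] for c in combo)
    if util > util_limit:                      # would risk congestion / packet loss
        return float("-inf")
    return w_energy * saving - w_perf * util

def best_combination(base_util):
    combos = [c for r in range(1, len(capabilities) + 1)
              for c in combinations(capabilities, r)]
    return max(combos, key=lambda c: utility(c, base_util))

print([c[0] for c in best_combination(base_util=0.45)])
```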
Abstract:
Hydraulic conductivity (K) is one of the parameters controlling the magnitude of groundwater velocity and, consequently, one of the most important parameters affecting groundwater flow and solute transport, so knowledge of the distribution of K is of great importance. This work aims to estimate hydraulic conductivity values in two distinct areas, one in the Guarani Aquifer System (SAG) and the other in the Bauru Aquifer System (SAB), by means of three geostatistical techniques: ordinary kriging, cokriging and conditional simulation by turning bands. To enlarge the database of K values, a statistical treatment of the known data is applied. The mathematical interpolation method (ordinary kriging) and the stochastic one (conditional simulation by turning bands) are applied to estimate K values directly, while ordinary kriging combined with linear regression and cokriging make it possible to incorporate specific capacity (Q/s) values as a secondary variable. In addition, the cell declustering technique was applied to each geostatistical method in order to compare its ability to improve the performance of the methods, which can be evaluated by cross-validation. The results of these geostatistical approaches indicate that conditional simulation by turning bands with declustering and ordinary kriging combined with linear regression without declustering are the most suitable methods for the SAG (rho=0.55) and SAB (rho=0.44) areas, respectively. The statistical treatment and the declustering technique used in this work proved to be useful auxiliary tools for the geostatistical methods.
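As a rough illustration of the ordinary kriging estimator mentioned above (not the thesis workflow, which also involves cokriging, turning-bands simulation and declustering), here is a minimal hand-rolled sketch with an assumed exponential variogram and invented log-K samples.

```python
# Minimal ordinary kriging sketch (illustrative only): estimate log-K at a target
# location from scattered samples; variogram parameters and data are made up.
import numpy as np

def exp_variogram(h, sill=1.0, rng_=500.0, nugget=0.05):
    gamma = nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_))
    return np.where(h == 0, 0.0, gamma)        # gamma(0) = 0 by definition

def ordinary_kriging(xy, z, x0, **vparams):
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # sample-to-sample distances
    A = np.ones((n + 1, n + 1)); A[:n, :n] = exp_variogram(d, **vparams); A[-1, -1] = 0.0
    b = np.ones(n + 1); b[:n] = exp_variogram(np.linalg.norm(xy - x0, axis=1), **vparams)
    w = np.linalg.solve(A, b)                  # kriging weights + Lagrange multiplier
    return w[:n] @ z, b @ w                    # estimate and kriging variance

xy = np.array([[0.0, 0.0], [800.0, 100.0], [300.0, 650.0], [950.0, 900.0]])  # m
logK = np.array([-4.2, -3.8, -4.6, -3.9])      # hypothetical log10 K values (m/s)
est, var = ordinary_kriging(xy, logK, np.array([500.0, 500.0]))
print(round(float(est), 3), round(float(var), 3))
```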
Abstract:
This paper presents a series of calculation procedures for the computer design of ternary distillation columns that overcome the iterative equilibrium calculations required in this kind of problem, thus reducing the calculation time. The proposed procedures include interpolation and intersection methods to solve the equilibrium equations and the mass and energy balances. The proposed calculation programs also include the possibility of a rigorous solution of the mass and energy balances and equilibrium relations.
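The paper's actual procedures are not reproduced here; as a generic illustration of solving an equilibrium relation by interpolation and locating its intersection with an operating line (rather than iterating), the sketch below uses invented constant-relative-volatility data.

```python
# Illustrative only: tabulated equilibrium data interpolated with np.interp, and the
# equilibrium-curve / operating-line intersection found from the sign change of the
# difference. Data and the operating line are hypothetical.
import numpy as np

x_eq = np.linspace(0.0, 1.0, 11)               # tabulated liquid mole fraction
y_eq = 2.4 * x_eq / (1.0 + 1.4 * x_eq)         # vapour mole fraction (alpha = 2.4, made up)

def y_of_x(x):
    """Equilibrium vapour composition by linear interpolation of the tabulated data."""
    return np.interp(x, x_eq, y_eq)

m, b = 0.7, 0.25                               # hypothetical operating line y = m*x + b
diff = y_eq - (m * x_eq + b)
i = int(np.argmax(np.diff(np.sign(diff)) != 0))        # first interval bracketing the sign change
x_int = np.interp(0.0, diff[i:i + 2], x_eq[i:i + 2])   # diff is increasing across this bracket
print(round(float(x_int), 4), round(float(y_of_x(x_int)), 4))
```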
Abstract:
We present an algorithm to process images of reflected Placido rings captured by a commercial videokeratoscope. Raw data are obtained with no Cartesian-to-polar coordinate conversion, thus avoiding interpolation and the associated numerical artifacts. The method provides a characteristic equation for the device and is able to process around six times more corneal data than the commercial software. Our proposal allows complete control over the whole process, from the capture of corneal images to the computation of curvature radii.
Abstract:
Isobaric vapour–liquid and vapour–liquid–liquid equilibrium data for the water + 1-butanol + toluene ternary system were measured at 101.3 kPa with a modified VLE 602 Fischer apparatus. In addition, liquid–liquid equilibrium data at 313.15 K were measured and compared with data from other authors at different temperatures. The system exhibits a ternary heterogeneous azeotrope whose temperature and composition have been determined by interpolation. The thermodynamic consistency of the experimental vapour–liquid and vapour–liquid–liquid data was checked by means of Wisniak's Li/Wi consistency test. Moreover, the vapour–liquid and liquid–liquid equilibrium correlations for the ternary system with the NRTL and UNIQUAC models, together with the predictions made with the UNIFAC model, were studied and discussed.
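The abstract does not describe the interpolation used; one generic way to "determine by interpolation" an azeotrope from measured boiling points is to spline the temperature over composition and locate its minimum, as in this sketch with invented data (a heterogeneous azeotrope would in practice also involve the liquid–liquid split).

```python
# Hedged sketch (data invented): estimate an azeotropic point as the minimum of a
# cubic-spline interpolation of boiling temperature along a composition path.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

x1 = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])                 # mole fraction along the path
T = np.array([383.8, 374.5, 368.9, 367.2, 369.8, 373.0])      # boiling temperature, K (made up)

spline = CubicSpline(x1, T)
res = minimize_scalar(spline, bounds=(0.0, 1.0), method="bounded")
print(f"azeotrope near x1 = {float(res.x):.3f}, T = {float(res.fun):.2f} K")
```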
Abstract:
A new methodology is proposed to produce subsidence activity maps based on the geostatistical analysis of persistent scatterer interferometry (PSI) data. PSI displacement measurements are interpolated based on conditional Sequential Gaussian Simulation (SGS) to calculate multiple equiprobable realizations of subsidence. The result of this process is a series of interpolated subsidence values, with an estimate of the spatial variability and a confidence level on the interpolation. These maps complement the PSI displacement map, improving the identification of wide subsiding areas at a regional scale. At a local scale, they can be used to identify buildings susceptible to suffering subsidence-related damage. In order to do so, it is necessary to calculate the maximum differential settlement and the maximum angular distortion for each building of the study area. Based on these PSI-derived parameters, the buildings in which the serviceability limit state has been exceeded, and where in situ forensic analysis should be carried out, can be automatically identified. This methodology has been tested in the city of Orihuela (SE Spain) for the study of historical buildings damaged during the last two decades by subsidence due to aquifer overexploitation. The qualitative evaluation of the methodology's results for buildings where damage has been reported shows a success rate of 100%.
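As a rough sketch of the building-level parameters named above (not the authors' tool), the maximum differential settlement and the maximum angular distortion can be computed over all pairs of PSI points on a building footprint; the coordinates and settlements below are hypothetical.

```python
# Illustrative only: max differential settlement and max angular distortion
# (differential settlement divided by the distance between the two points).
import numpy as np
from itertools import combinations

points = np.array([[0.0, 0.0], [18.0, 0.0], [18.0, 12.0], [0.0, 12.0]])  # footprint corners, m
settlement = np.array([-0.012, -0.035, -0.041, -0.015])                  # m (negative = downward)

max_diff, max_beta = 0.0, 0.0
for i, j in combinations(range(len(points)), 2):
    delta = abs(settlement[i] - settlement[j])
    dist = np.linalg.norm(points[i] - points[j])
    max_diff = max(max_diff, delta)
    max_beta = max(max_beta, delta / dist)

print(f"max differential settlement = {max_diff*1000:.1f} mm, "
      f"max angular distortion = 1/{1/max_beta:.0f}")
```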
Abstract:
In this work, we propose a new methodology for the large-scale optimization and process integration of complex chemical processes that have been simulated using modular chemical process simulators. Units with significant numerical noise or large CPU times are substituted by surrogate models based on Kriging interpolation. Using a degree-of-freedom analysis, some of those units can be aggregated into a single unit to reduce the complexity of the resulting model. As a result, we solve a hybrid simulation-optimization model formed by units in the original flowsheet, Kriging models, and explicit equations. We present a case study of the optimization of a sour water stripping plant in which we simultaneously consider economics, heat integration and environmental impact using the ReCiPe indicator, which incorporates the recent advances made in Life Cycle Assessment (LCA). The optimization strategy guarantees convergence to a local optimum within the tolerance of the numerical noise.
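The paper's surrogate implementation is not given here; as an illustration of the idea, a Kriging (Gaussian process) surrogate can be fitted to sampled responses of a noisy "unit". The sketch uses scikit-learn's GaussianProcessRegressor as the interpolator and a toy function as a stand-in for the simulator unit.

```python
# Hedged sketch: Kriging surrogate of a noisy, expensive-to-evaluate unit.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_unit(x):                         # stand-in for a flowsheet unit with numerical noise
    return np.sin(3 * x[:, 0]) * x[:, 1] + 0.01 * np.random.randn(len(x))

rng = np.random.default_rng(1)
X_train = rng.uniform(0.0, 1.0, size=(40, 2))  # sampled unit inputs (e.g. flows, temperatures)
y_train = expensive_unit(X_train)              # simulator responses

kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                   alpha=1e-4, normalize_y=True).fit(X_train, y_train)

X_new = rng.uniform(0.0, 1.0, size=(5, 2))
y_hat, y_std = kriging.predict(X_new, return_std=True)   # cheap predictions + uncertainty
print(np.round(y_hat, 3), np.round(y_std, 3))
```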
Abstract:
Doctoral thesis, Belas-Artes (Design de Equipamento), Universidade de Lisboa, Faculdade de Belas-Artes, 2016
Abstract:
This package includes various Mata functions. kern(): various kernel functions; kint(): kernel integral functions; kdel0(): canonical bandwidth of kernel; quantile(): quantile function; median(): median; iqrange(): inter-quartile range; ecdf(): cumulative distribution function; relrank(): grade transformation; ranks(): ranks/cumulative frequencies; freq(): compute frequency counts; histogram(): produce histogram data; mgof(): multinomial goodness-of-fit tests; collapse(): summary statistics by subgroups; _collapse(): summary statistics by subgroups; gini(): Gini coefficient; sample(): draw random sample; srswr(): SRS with replacement; srswor(): SRS without replacement; upswr(): UPS with replacement; upswor(): UPS without replacement; bs(): bootstrap estimation; bs2(): bootstrap estimation; bs_report(): report bootstrap results; jk(): jackknife estimation; jk_report(): report jackknife results; subset(): obtain subsets, one at a time; composition(): obtain compositions, one by one; ncompositions(): determine number of compositions; partition(): obtain partitions, one at a time; npartitionss(): determine number of partitions; rsubset(): draw random subset; rcomposition(): draw random composition; colvar(): variance, by column; meancolvar(): mean and variance, by column; variance0(): population variance; meanvariance0(): mean and population variance; mse(): mean squared error; colmse(): mean squared error, by column; sse(): sum of squared errors; colsse(): sum of squared errors, by column; benford(): Benford distribution; cauchy(): cumulative Cauchy-Lorentz dist.; cauchyden(): Cauchy-Lorentz density; cauchytail(): reverse cumulative Cauchy-Lorentz; invcauchy(): inverse cumulative Cauchy-Lorentz; rbinomial(): generate binomial random numbers; cebinomial(): cond. expect. 
of binomial r.v.; root(): Brent's univariate zero finder; nrroot(): Newton-Raphson zero finder; finvert(): univariate function inverter; integrate_sr(): univariate function integration (Simpson's rule); integrate_38(): univariate function integration (Simpson's 3/8 rule); ipolate(): linear interpolation; polint(): polynomial inter-/extrapolation; plot(): Draw twoway plot; _plot(): Draw twoway plot; panels(): identify nested panel structure; _panels(): identify panel sizes; npanels(): identify number of panels; nunique(): count number of distinct values; nuniqrows(): count number of unique rows; isconstant(): whether matrix is constant; nobs(): number of observations; colrunsum(): running sum of each column; linbin(): linear binning; fastlinbin(): fast linear binning; exactbin(): exact binning; makegrid(): equally spaced grid points; cut(): categorize data vector; posof(): find element in vector; which(): positions of nonzero elements; locate(): search an ordered vector; hunt(): consecutive search; cond(): matrix conditional operator; expand(): duplicate single rows/columns; _expand(): duplicate rows/columns in place; repeat(): duplicate contents as a whole; _repeat(): duplicate contents in place; unorder2(): stable version of unorder(); jumble2(): stable version of jumble(); _jumble2(): stable version of _jumble(); pieces(): break string into pieces; npieces(): count number of pieces; _npieces(): count number of pieces; invtokens(): reverse of tokens(); realofstr(): convert string into real; strexpand(): expand string argument; matlist(): display a (real) matrix; insheet(): read spreadsheet file; infile(): read free-format file; outsheet(): write spreadsheet file; callf(): pass optional args to function; callf_setup(): setup for mm_callf().
Abstract:
Modeling of self-similar traffic is performed for a queuing system of G/M/1/K type using the Weibull distribution. To study the self-similar traffic, a simulation model is developed using the SIMULINK package in the MATLAB environment. The self-similar traffic is approximated on the basis of spline functions. Modeling of self-similar traffic is carried out for a QS of W/M/1/K type using the Weibull distribution. The initial data are: the Hurst parameter H=0.65, the shape parameter of the distribution curve α≈0.7 and the distribution parameter β≈0.0099. Considering that self-similar traffic is characterized by the presence of "splashes" (bursts) and long-term dependence between the moments of request arrivals, for the given initial data it is reasonable in this study to use linear interpolation splines.
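The SIMULINK model itself is not reproduced; as one interpretation of the setup, the sketch below draws Weibull interarrival times with shape ≈0.7 (the scale 1/β is an assumption about how β≈0.0099 enters the model) and builds a linear interpolation spline of the arrival counting process.

```python
# Interpretation of the abstract, not the authors' model: heavy-tailed Weibull
# interarrivals (bursty traffic) approximated by a linear interpolation spline.
import numpy as np
from scipy.interpolate import interp1d

rng = np.random.default_rng(42)
a, scale = 0.7, 1.0 / 0.0099                   # Weibull shape and an assumed scale (1/beta)
interarrival = scale * rng.weibull(a, size=5000)
arrival_times = np.cumsum(interarrival)

# Linear spline N(t): number of arrivals up to time t
counts = np.arange(1, len(arrival_times) + 1)
N = interp1d(arrival_times, counts, kind="linear",
             bounds_error=False, fill_value=(0, len(counts)))

t_grid = np.linspace(0, arrival_times[-1], 10)
print(np.round(N(t_grid)).astype(int))
```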
Abstract:
A stable isotope record from the eastern Weddell Sea at 69°S is presented. For the first time, a 250,000-yr record from the Southern Ocean can be correlated in detail to the global isotope stratigraphy. Together with magnetostratigraphic, sedimentological and micropalaeontological data, the stratigraphic control of this record can be extended back to 910,000 yrs B.P. A time scale is constructed by linear interpolation between confirmed stratigraphic data points. The benthic δ18O record (Epistominella exigua) reflects global continental ice volume changes during the Brunhes and late Matuyama chrons, whereas the planktonic isotopic record (Neogloboquadrina pachyderma) may be influenced by a meltwater lid caused by the nearby Antarctic ice shelf and icebergs. The worldwide climatic improvement during deglaciations is documented in the eastern Weddell Sea by an increase in production of siliceous plankton followed, with a time lag of approximately 10,000 yrs, by planktonic foraminifera production. Peak values in the difference between the planktonic and benthic δ13C records, which are 0.5 per mil higher during warm climatic periods than during times with expanded continental ice sheets, also suggest increased surface productivity during interglacials in the Southern Ocean.
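The age-model step described above reduces to linear interpolation between depth-age tie points; a minimal sketch with invented tie points follows.

```python
# Simple sketch: ages at arbitrary core depths by linear interpolation between
# confirmed stratigraphic tie points. Depth/age values below are invented.
import numpy as np

tie_depth = np.array([0.0, 2.1, 5.4, 8.9, 12.3])        # m below sea floor
tie_age = np.array([0.0, 71.0, 250.0, 620.0, 910.0])    # kyr B.P.

sample_depth = np.array([1.0, 4.0, 7.5, 11.0])
sample_age = np.interp(sample_depth, tie_depth, tie_age)  # linear interpolation between tie points
print(np.round(sample_age, 1))
```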
Abstract:
The quality of water level time series data varies strongly, with periods of high and of low sensor data quality. In this paper we present the processing steps used to generate high-quality water level data from water pressure measured at the Time Series Station (TSS) Spiekeroog. The TSS is positioned in a tidal inlet between the islands of Spiekeroog and Langeoog in the East Frisian Wadden Sea (southern North Sea). The processing steps cover sensor drift, outlier identification, interpolation of data gaps and quality control. A central step is the removal of outliers. For this process an absolute threshold of 0.25 m/10 min was selected, which still preserves the water level increase and decrease during extreme events, as shown during the quality control process. A second important feature of the data processing is the interpolation of data gaps, which is accomplished with high confidence of generating trustworthy data. Applying these methods, a 10-year dataset (December 2002 to December 2012) of water level information at the TSS was processed, resulting in a seven-year time series (2005-2011).
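As a hedged sketch of the two central processing steps (not the TSS production code), the 0.25 m/10 min outlier criterion and a time-based gap interpolation can be expressed with pandas as follows; the series is synthetic.

```python
# Illustrative only: flag samples whose change exceeds 0.25 m per 10-minute step,
# then fill short gaps by time-based interpolation.
import numpy as np
import pandas as pd

idx = pd.date_range("2005-01-01", periods=1000, freq="10min")
level = pd.Series(1.5 * np.sin(np.arange(1000) * 2 * np.pi / 74.5), index=idx)  # ~12.4 h tide
level.iloc[200] += 2.0                       # inject a spike (outlier)
level.iloc[500:520] = np.nan                 # inject a data gap

# 1) Outlier removal: absolute change > 0.25 m between consecutive 10-min samples
jump = level.diff().abs()
cleaned = level.mask(jump > 0.25)

# 2) Gap interpolation (linear in time here; the paper's gap-filling method may differ)
filled = cleaned.interpolate(method="time", limit=36)   # fill gaps up to 6 hours

print(int(cleaned.isna().sum()), int(filled.isna().sum()))
```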
Abstract:
The continuous plankton recorder (CPR) survey is an upper-layer plankton monitoring program that has regularly collected samples, at monthly intervals, in the North Atlantic and adjacent seas since 1946. Water from approximately 6 m depth enters the CPR through a small aperture at the front of the sampler and travels down a tunnel where it passes through a silk filtering mesh of 270 µm before exiting at the back of the CPR. The plankton filtered on the silk is analyzed in sections corresponding to 10 nautical miles (approx. 3 m³ of seawater filtered) and the plankton is microscopically identified (Richardson et al., 2006 and references therein). In the present study we used the CPR data to investigate the current basin-scale distribution of C. finmarchicus (C5-C6), C. helgolandicus (C5-C6), C. hyperboreus (C5-C6), Pseudocalanus spp. (C6), Oithona spp. (C1-C6), total Euphausiida, total Thecosomata and the presence/absence of Cnidaria, together with the Phytoplankton Colour Index (PCI). The PCI, which is a visual assessment of the greenness of the silk, is used as an indicator of the distribution of total phytoplankton biomass across the Atlantic basin (Batten et al., 2003). Monthly data collected between 2000 and 2009 were gridded using the inverse-distance interpolation method, in which the interpolated values were computed at the nodes of a 2 degree by 2 degree grid. The resulting twelve monthly matrices were then averaged within the year and, in the case of the zooplankton, the data were log-transformed (i.e. log10(x+1)).
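A minimal inverse-distance-weighting sketch of the gridding and log10(x+1) steps described above (not the authors' code), with invented sample positions and abundances:

```python
# Illustrative only: IDW interpolation of point abundances onto a 2-degree grid,
# followed by the log10(x+1) transform mentioned in the abstract.
import numpy as np

def idw(lon_pts, lat_pts, values, lon_grid, lat_grid, power=2.0, eps=1e-6):
    """Inverse-distance-weighted estimate at each grid node (simple degree-space distance)."""
    gx, gy = np.meshgrid(lon_grid, lat_grid)
    d = np.sqrt((gx[..., None] - lon_pts) ** 2 + (gy[..., None] - lat_pts) ** 2) + eps
    w = 1.0 / d ** power
    return (w * values).sum(axis=-1) / w.sum(axis=-1)

rng = np.random.default_rng(3)
lon_pts = rng.uniform(-60, 0, 200)             # hypothetical CPR sample positions
lat_pts = rng.uniform(40, 64, 200)
abund = rng.gamma(1.0, 20.0, 200)              # hypothetical abundances

lon_grid = np.arange(-60, 2, 2)                # 2-degree grid nodes
lat_grid = np.arange(40, 66, 2)
grid = idw(lon_pts, lat_pts, abund, lon_grid, lat_grid)
grid_log = np.log10(grid + 1)                  # log10(x+1) transform
print(grid_log.shape, round(float(grid_log.mean()), 3))
```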