993 results for Re-sampling


Relevance: 100.00%

Abstract:

Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT data have to be continuously distributed by an analytic function that respects the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high-order polynomial interpolations, which do not fulfill all the features mentioned above, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is introduced by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re-distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%–20%, while the parameter of the new algorithm can be adjusted to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re-sampling algorithms using high-order polynomial interpolation functions may produce significant artifacts in the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
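
A minimal one-dimensional sketch of the integral-conserving idea (assuming Python with NumPy/SciPy): interpolate the cumulative integral of the binned data with a monotone Hermitian spline and differentiate it on the new grid. SciPy's fixed PCHIP spline stands in for the paper's single-parameter Hermitian curve, so the tunable artifact control is not reproduced here.

    import numpy as np
    from scipy.interpolate import PchipInterpolator

    def rebin_conservative(edges_src, values, edges_dst):
        """Re-bin bin-averaged data onto a new grid, conserving the integral.

        The cumulative integral is interpolated with a monotone Hermitian
        (PCHIP) spline, so positive data cannot produce negative samples
        and the overall integral is preserved exactly.
        """
        widths = np.diff(edges_src)
        cumulative = np.concatenate(([0.0], np.cumsum(values * widths)))
        spline = PchipInterpolator(edges_src, cumulative)
        new_cumulative = spline(np.clip(edges_dst, edges_src[0], edges_src[-1]))
        return np.diff(new_cumulative) / np.diff(edges_dst)

    # example: a coarse 4-bin histogram re-sampled onto 8 bins
    src = np.linspace(0.0, 4.0, 5)
    dst = np.linspace(0.0, 4.0, 9)
    vals = np.array([1.0, 3.0, 2.0, 0.5])
    print(rebin_conservative(src, vals, dst))

Because the spline interpolates a monotone cumulative function, overshoot is bounded by construction, which is the property the abstract contrasts with high-order polynomial interpolation.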

Relevance: 60.00%

Abstract:

For dynamic closed loop control of a multilevel converter with a low pulse number (the ratio of switching frequency to synthesized fundamental), natural sampled pulse-width modulation (PWM) is the best form of modulation. Natural sampling does not introduce distortion or a delayed response to the modulating signal. However, previous natural sampled PWM implementations have generally been analog. For a modular multilevel converter, a digital implementation has advantages of accuracy and flexibility. Re-sampled uniform PWM is a novel digital modulation technique which approaches the performance of natural PWM. Both hardware and software implementations for a five-level multilevel converter phase are presented, demonstrating the improvement over uniform PWM.
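
One way to picture how re-sampling approaches natural PWM, as a hypothetical sketch for a single rising ramp of a triangular carrier with a modulating signal m(t) normalized to [0, 1]; the function names and iteration count are illustrative, not the paper's implementation.

    import numpy as np

    def resampled_pwm_crossing(m, t0, t1, n_resample=4):
        """Approximate the natural-sampled crossing on one rising carrier ramp.

        The carrier rises linearly from 0 to 1 over [t0, t1]. Uniform PWM
        samples m(t) once at t0; here the sample point is refreshed
        n_resample times, converging toward the natural (analog) crossing.
        """
        t_cross = t0 + m(t0) * (t1 - t0)           # uniform-sampled estimate
        for _ in range(n_resample - 1):
            t_cross = t0 + m(t_cross) * (t1 - t0)  # re-sample at latest estimate
        return t_cross

    # example: 50 Hz sine reference evaluated over a 1 ms carrier ramp
    m = lambda t: 0.5 + 0.4 * np.sin(2 * np.pi * 50 * t)
    print(resampled_pwm_crossing(m, 0.0, 1e-3))

Each extra re-sample moves the digital switching instant closer to the analog comparator crossing, which is why the technique reduces the distortion and delay of plain uniform PWM.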

Relevance: 60.00%

Abstract:

This is one of the few studies to have explored the value of baseline symptoms and health-related quality of life (HRQOL) in predicting survival in brain cancer patients. Baseline HRQOL scores (from the EORTC QLQ-C30 and the Brain Cancer Module (BN 20)) were examined in 490 newly diagnosed glioblastoma patients for their relationship with overall survival using Cox proportional hazards regression models. Refined techniques such as the bootstrap re-sampling procedure and the computation of C-indexes and R(2)-coefficients were used to validate the model. A classical analysis, controlling for major clinical prognostic factors, selected cognitive functioning (P=0.0001), global health status (P=0.0055) and social functioning (P<0.0001) as statistically significant prognostic factors of survival. However, several issues question the validity of these findings. C-indexes and R(2)-coefficients, which measure the predictive ability of the models, did not exhibit major improvements when selected or all HRQOL scores were added to the clinical factors. While the classical techniques led to positive results, the more refined analyses suggest that baseline HRQOL scores add relatively little to clinical factors for predicting survival. These results may have implications for the future use of HRQOL as a prognostic factor in cancer patients.
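
As an illustration of the validation machinery (not the EORTC analysis itself), here is a minimal sketch of a bootstrap for Harrell's C-index, resampling patients with replacement; the simple concordance count below ignores tied event times.

    import numpy as np

    def c_index(time, event, risk):
        """Harrell's concordance index for right-censored survival data."""
        concordant, comparable = 0.0, 0
        for i in range(len(time)):
            for j in range(len(time)):
                # a pair is comparable if the earlier time is an observed event
                if time[i] < time[j] and event[i]:
                    comparable += 1
                    if risk[i] > risk[j]:
                        concordant += 1
                    elif risk[i] == risk[j]:
                        concordant += 0.5
        return concordant / comparable

    def bootstrap_c_index(time, event, risk, n_boot=1000, seed=0):
        """95% bootstrap interval for the C-index (resampling patients)."""
        rng = np.random.default_rng(seed)
        stats = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(time), len(time))
            stats.append(c_index(time[idx], event[idx], risk[idx]))
        return np.percentile(stats, [2.5, 97.5])

Comparing the bootstrap distribution of the C-index for models with and without HRQOL scores is one way to judge whether the added covariates genuinely improve predictive ability.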

Relevance: 60.00%

Abstract:

This thesis studies the suitability of a recent data assimilation method, the Variational Ensemble Kalman Filter (VEnKF), for real-life fluid dynamics problems in hydrology. VEnKF combines a variational formulation of the data assimilation problem, based on minimizing an energy functional, with an ensemble Kalman filter approximation to the Hessian matrix that also serves as an approximation to the inverse of the error covariance matrix. One of the significant features of VEnKF is the very frequent re-sampling of the ensemble: the ensemble is re-sampled at every observation step. This unusual feature is taken further by observation interpolation, which is found to be beneficial for numerical stability; in that case the ensemble is re-sampled at every time step of the numerical model. VEnKF is applied in several configurations to data from a real laboratory-scale dam-break problem modelled with the shallow water equations. It is also tested on a two-layer quasi-geostrophic atmospheric flow problem. In both cases VEnKF proves to be an efficient and accurate data assimilation method that renders the analysis more realistic than the numerical model alone. It also proves robust against filter instability thanks to its adaptive nature.
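
The distinctive re-sampling step can be pictured with a schematic sketch (not the thesis code): instead of correcting a fixed set of perturbed members, the whole ensemble is re-drawn from the current analysis distribution at each assimilation step, with the covariance coming from the variational Hessian approximation.

    import numpy as np

    def venkf_resample(mean, cov, n_ens, rng):
        """Re-draw the full ensemble from the current analysis distribution.

        In VEnKF the analysis mean comes from minimizing the energy
        functional and cov approximates the error covariance; the ensemble
        is re-sampled from N(mean, cov) at every observation (or, with
        observation interpolation, every model) step.
        """
        return rng.multivariate_normal(mean, cov, size=n_ens)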

Relevance: 60.00%

Abstract:

We propose a novel, simple, efficient and distribution-free re-sampling technique for developing prediction intervals for returns and volatilities following ARCH/GARCH models. In particular, our key idea is to employ a Box–Jenkins linear representation of an ARCH/GARCH equation and then to adapt a sieve bootstrap procedure to the nonlinear GARCH framework. Our simulation studies indicate that the new re-sampling method provides sharp and well-calibrated prediction intervals for both returns and volatilities while reducing computational costs by up to 100 times compared to other available re-sampling techniques for ARCH/GARCH models. The proposed procedure is illustrated by an application to Yen/U.S. dollar daily exchange rate data.
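
A minimal sketch of the key idea, assuming a long autoregression fitted to the squared returns as the linear (sieve) representation; the function name, the AR order, the least-squares fit and the truncation at zero are simplifications of the authors' Box–Jenkins construction.

    import numpy as np

    def sieve_bootstrap_paths(x2, p, horizon, n_boot=500, seed=0):
        """Sieve bootstrap on squared returns via their linear AR representation.

        Fits an AR(p) to the squared-return series, then resamples the
        centered residuals to generate bootstrap future paths, from which
        percentile prediction intervals for volatility can be read off.
        """
        rng = np.random.default_rng(seed)
        # least-squares fit: x2[t] = c + a1*x2[t-1] + ... + ap*x2[t-p] + e[t]
        X = np.column_stack([np.ones(len(x2) - p)] +
                            [x2[p - k:len(x2) - k] for k in range(1, p + 1)])
        y = x2[p:]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        resid -= resid.mean()
        paths = np.empty((n_boot, horizon))
        for b in range(n_boot):
            hist = list(x2[-p:])
            for h in range(horizon):
                nxt = beta[0] + sum(beta[k] * hist[-k] for k in range(1, p + 1))
                nxt = max(nxt + rng.choice(resid), 0.0)  # squared returns >= 0
                paths[b, h] = nxt
                hist.append(nxt)
        return paths

Percentiles of paths[:, h] then give a distribution-free prediction interval for the squared return h steps ahead, which is where the method's computational advantage over full GARCH re-estimation per bootstrap replicate comes from.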

Relevance: 60.00%

Abstract:

During the past 15 years, a number of initiatives have been undertaken at national level to develop ocean forecasting systems operating at regional and/or global scales. The co-ordination between these efforts has been organized internationally through the Global Ocean Data Assimilation Experiment (GODAE). The French MERCATOR project is one of the leading participants in GODAE. The MERCATOR systems routinely assimilate a variety of observations such as multi-satellite altimeter data, sea-surface temperature and in situ temperature and salinity profiles, focusing on high-resolution scales of the ocean dynamics. The assimilation strategy in MERCATOR is based on a hierarchy of methods of increasing sophistication including optimal interpolation, Kalman filtering and variational methods, which are progressively deployed through the Système d'Assimilation MERCATOR (SAM) series. SAM-1 is based on a reduced-order optimal interpolation which can be operated using ‘altimetry-only’ or ‘multi-data’ set-ups; it relies on the concept of separability, assuming that the correlations can be separated into a product of horizontal and vertical contributions. The second release, SAM-2, is being developed to include new features from the singular evolutive extended Kalman (SEEK) filter, such as three-dimensional, multivariate error modes and adaptivity schemes. The third one, SAM-3, considers variational methods such as the incremental four-dimensional variational algorithm. Most operational forecasting systems evaluated during GODAE are based on least-squares statistical estimation assuming Gaussian errors. In the framework of the EU MERSEA (Marine EnviRonment and Security for the European Area) project, research is being conducted to prepare the next-generation operational ocean monitoring and forecasting systems. The research effort will explore nonlinear assimilation formulations to overcome limitations of the current systems. This paper provides an overview of the developments conducted in MERSEA with the SEEK filter, the Ensemble Kalman filter and the sequential importance re-sampling filter.
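
The sequential importance re-sampling (SIR) filter mentioned at the end admits a compact sketch; the likelihood function here is a hypothetical argument supplied by the observation-error model.

    import numpy as np

    def sir_step(particles, weights, observation, likelihood, rng):
        """One sequential importance re-sampling (SIR) update.

        Weights each particle by the observation likelihood, then
        re-samples the ensemble with probability proportional to the
        weights. No Gaussian-error assumption is needed, which is what
        makes the scheme attractive for nonlinear assimilation.
        """
        w = weights * likelihood(observation, particles)
        w /= w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))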

Relevance: 60.00%

Abstract:

This work analyzes the use of linear discriminant models, multi-layer perceptron neural networks and wavelet networks for corporate financial distress prediction. Although simple and easy to interpret, linear models require statistical assumptions that may be unrealistic. Neural networks are able to discriminate patterns that are not linearly separable, but the large number of parameters involved in a neural model often causes generalization problems. Wavelet networks are classification models that implement nonlinear discriminant surfaces as the superposition of dilated and translated versions of a single "mother wavelet" function. In this paper, an algorithm is proposed to select dilation and translation parameters that yield a wavelet network classifier with good parsimony characteristics. The models are compared in a case study involving failed and continuing British firms in the period 1997-2000. Problems associated with over-parameterized neural networks are illustrated and the Optimal Brain Damage pruning technique is employed to obtain a parsimonious neural model. The results, supported by a re-sampling study, show that both neural and wavelet networks may be a valid alternative to classical linear discriminant models.
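
The forward pass of such a classifier is easy to sketch: the discriminant value is a weighted superposition of dilated and translated copies of one mother wavelet. The Mexican-hat wavelet below is an assumed, common choice; the abstract does not fix the wavelet, and the parameter names are illustrative.

    import numpy as np

    def mexican_hat(u):
        """A common mother wavelet (second derivative of a Gaussian)."""
        return (1.0 - u ** 2) * np.exp(-0.5 * u ** 2)

    def wavelet_network(x, weights, dilations, translations, bias=0.0):
        """Discriminant surface as a weighted sum of dilated/translated wavelets.

        For input x of shape (n_samples, n_features), hidden unit j
        computes psi(||(x - t_j) / d_j||); the output is the weighted
        superposition of these responses plus a bias.
        """
        out = np.full(x.shape[0], bias)
        for w, d, t in zip(weights, dilations, translations):
            u = np.linalg.norm((x - t) / d, axis=1)
            out += w * mexican_hat(u)
        return out

    # example: two hidden units on 2-D inputs, sign(out) as class label
    x = np.array([[0.0, 0.0], [2.0, 2.0]])
    print(wavelet_network(x, [1.0, -0.5], [1.0, 2.0],
                          [np.zeros(2), np.ones(2) * 2]))

The dilation/translation selection algorithm the paper proposes would choose d_j and t_j to keep the number of hidden units small, which is the parsimony property being compared against over-parameterized neural networks.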

Relevance: 60.00%

Abstract:

Asset allocation decisions and value at risk calculations rely strongly on volatility estimates. Volatility measures such as rolling window, EWMA, GARCH and stochastic volatility are used in practice. GARCH and EWMA type models, which incorporate the dynamic structure of volatility and are capable of forecasting the future behavior of risk, should perform better than constant, rolling window volatility models. For the same asset, the model that is the ‘best’ according to some criterion can change from period to period. We use the reality check test to verify whether one model out-performs the others over a class of re-sampled time-series data. The test is based on re-sampling the data using stationary bootstrapping. For each re-sample we identify the ‘best’ model according to two criteria and analyze the distribution of the performance statistics. We compare constant volatility, EWMA and GARCH models using a quadratic utility function and a risk management measure as comparison criteria. No model consistently out-performs the benchmark.
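
The stationary bootstrap underlying the reality check can be sketched as follows (the Politis–Romano scheme: blocks of geometrically distributed length with wrap-around); the function and parameter names are illustrative.

    import numpy as np

    def stationary_bootstrap(x, mean_block, n_boot, seed=0):
        """Politis-Romano stationary bootstrap re-samples of a time series.

        Each re-sample is built from blocks whose lengths are geometrically
        distributed with mean mean_block; indices wrap around the series,
        preserving the local dependence structure of the data.
        """
        rng = np.random.default_rng(seed)
        n, p = len(x), 1.0 / mean_block
        samples = np.empty((n_boot, n))
        for b in range(n_boot):
            t = rng.integers(n)
            samples[b, 0] = x[t]
            for i in range(1, n):
                if rng.random() < p:
                    t = rng.integers(n)      # start a new block
                else:
                    t = (t + 1) % n          # continue the current block
                samples[b, i] = x[t]
        return samples

Re-estimating the performance criterion on each re-sample yields the distribution of the 'best model' statistic against which the benchmark is judged.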

Relevance: 60.00%

Abstract:

Coffee is one of the main products of Brazilian agriculture; Brazil is currently the largest producer and exporter. Knowing the growth pattern of a fruit can assist crop management by indicating, for example, the periods of greatest gain in fruit weight and the optimum harvest time, which is essential for improving the management and quality of coffee. Some authors indicate that the growth curve of the coffee fruit has a double-sigmoid shape; however, this claim rests on visual observation alone, without the use of regression models. The aims of this study were: i) to determine whether the growth pattern of the coffee fruit really is double sigmoidal; ii) to propose a new approach to weighted importance re-sampling to estimate the parameters of regression models and to select the most suitable double-sigmoidal model to describe the growth of coffee fruits; iii) to study the effect of the spatial arrangement of the crop on the growth curve of coffee fruits. The first article aimed to determine whether the growth pattern of the coffee fruit really is double sigmoidal. The double Gompertz and double logistic models showed a significantly better fit than simple sigmoidal models, confirming that the growth pattern of coffee fruits really is double sigmoidal. In the second article we propose using an approximation of the likelihood as the candidate distribution in weighted importance re-sampling, with the aim of simplifying the process of obtaining samples from the marginal distributions of each parameter. This technique was effective, since it provided parameters with practical interpretation at low computational effort, and it can therefore be used to estimate the parameters of double-sigmoidal growth curves. The nonlinear double logistic model was the most appropriate to describe the growth curve of coffee fruits. The third article aimed to verify the influence of different planting alignments and sun-exposure faces on the fruit growth curve. A difference between the growth rates in the two stages of fruit development was identified, regardless of the exposure face. Although differences in coffee productivity and quality have been demonstrated, there was no difference between the growth curves for the different planting alignments studied here.
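
For concreteness, one common parameterization of a double-logistic curve is sketched below; it is a plausible form, not necessarily the authors' exact equation, and the parameter values are invented for illustration.

    import numpy as np

    def double_logistic(t, a1, b1, c1, a2, b2, c2):
        """Double-sigmoid growth as the sum of two logistic phases.

        a: asymptote added by each phase, b: growth rate of the phase,
        c: time of the phase's inflection point.
        """
        return (a1 / (1.0 + np.exp(-b1 * (t - c1))) +
                a2 / (1.0 + np.exp(-b2 * (t - c2))))

    # illustrative curve with growth phases around days 60 and 180
    days = np.linspace(0, 250, 6)
    print(double_logistic(days, 0.5, 0.10, 60.0, 0.6, 0.08, 180.0))

A double Gompertz variant replaces each logistic phase with a Gompertz term; comparing the fits of the two forms is what lets the study confirm the double-sigmoid pattern statistically rather than visually.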

Relevance: 60.00%

Abstract:

The genera Pachymenes de Saussure and Santamenes Giordani Soika are revised and the phylogenetic relationships among their species, based on external morphology and male genitalia, are presented. The cladistic analysis, using 22 terminal species (19 ingroup and 3 outgroup species) and 44 characters, produced a single cladogram under implied weighting. Both genera were recovered as paraphyletic, although two major clades were formed and were well supported by the re-sampling analysis. We propose the synonymy of Pachymenes with Santamenes, and the description of two new species: P. saussurei Grandinete n.sp. and P. riograndensis Grandinete n.sp. New combinations are: Pachymenes novarae (de Saussure) n.comb., P. olympicus (Zavattari) n.comb., P. peregrinus (Zavattari) n.comb. and P. santanna (de Saussure) revised combination. We state the synonymy of P. obscurus orellanoides under P. obscurus consuetus, reviewing the status of the latter and raising P. consuetus to species level. Pachymenes orellanae vardyi is synonymized under P. orellanae; P. ghilianii olivaceus, P. ghilianii avissimus and P. peruanus are proposed as synonyms of P. ghilianii; P. picturatus obscuratus is synonymized under P. laeviventris; P. picturatus nigromaculatus and P. picturatus var. intermedia are synonymized under P. picturatus; and P. atra var. ornatissima has its lectotype designated and is proposed as a synonym of P. ater.

Relevance: 60.00%

Abstract:

The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
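
The quality index cited (Wang and Bovik 2002) has a closed form. The sketch below evaluates it over the whole image; the published index is computed over sliding windows and averaged, so this single-window version is a simplification.

    import numpy as np

    def universal_quality_index(x, y):
        """Wang-Bovik universal image quality index, computed globally.

        Combines loss of correlation, luminance distortion and contrast
        distortion in a single factor; 1.0 means the images are identical.
        """
        x = np.asarray(x, dtype=float).ravel()
        y = np.asarray(y, dtype=float).ravel()
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

Together with the mean square error, this index is the figure of merit by which the re-gridded CT images are compared with the source images.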

Relevance: 60.00%

Abstract:

Plant species distributions are expected to shift and diversity is expected to decline as a result of global climate change, particularly in the Arctic where climate warming is amplified. We have recorded the changes in richness and abundance of vascular plants at Abisko, sub-Arctic Sweden, by re-sampling five studies consisting of seven datasets: one in the mountain birch forest and six at open sites. The oldest study was initiated in 1977-1979 and the latest in 1992. Total species number increased at all sites except the birch forest site, where richness decreased. We found no general pattern in how the composition of vascular plants has changed over time. Three species, Calamagrostis lapponica, Carex vaginata and Salix reticulata, showed an overall increase in cover/frequency, while two Equisetum taxa decreased. Rather, we showed that the magnitude and direction of changes in species richness and composition differ among sites.

Relevance: 60.00%

Abstract:

We report on a 2009 revisit to sites on Disko Island, West Greenland, where vegetation was recorded in 1967 and 1970. Re-sampling of the same clones of the grass Phleum alpinum after 39 years showed complete stability in biometrics but a dramatically earlier onset of various phenological stages that was not related to changes in population density. In a fell-field community there was a net species loss, but in a herb-slope community species losses balanced gains. The type of species establishing and increasing in frequency and/or cover abundance at the fell-field site, particularly prostrate dwarf shrubs, indicates a possible start of a shift towards a heath, rather than a fell-field, community. At the herb-slope site, the species that established or increased markedly in frequency and/or cover abundance indicate a change to drier conditions. This is confirmed both by the decrease in abundance of Alchemilla glomerulans and Epilobium hornemanii and by the drying of a nearby pond. The causes of these changes are unknown, although the mean annual temperature has risen since 1984.

Relevance: 60.00%

Abstract:

Background: Oral itraconazole (ITRA) is used for the treatment of allergic bronchopulmonary aspergillosis in patients with cystic fibrosis (CF) because of its antifungal activity against Aspergillus species. ITRA has an active hydroxy-metabolite (OH-ITRA) with similar antifungal activity. ITRA is a highly lipophilic drug that is available in two different oral formulations, a capsule and an oral solution; the oral solution is reported to have a 60% higher relative bioavailability. The influence of the altered gastric physiology associated with CF on the pharmacokinetics (PK) of ITRA and its metabolite has not previously been evaluated. Objectives: 1) To estimate the population (pop) PK parameters of ITRA and its active metabolite OH-ITRA, including the relative bioavailability of the parent after administration by both capsule and solution, and 2) to assess the performance of the optimal design. Methods: The study was a cross-over design in which 30 patients received the capsule on the first occasion and, 3 days later, the solution formulation. The design was constrained to a maximum of 4 blood samples per occasion for estimation of the popPK of both ITRA and OH-ITRA. The sampling times for the population model were optimized previously using POPT v.2.0.[1] POPT is a series of applications that run under MATLAB and evaluate the information matrix for a nonlinear mixed effects model given a particular design; in addition, it can be used to optimize the design based on the determinant of the information matrix. The model details for the design were based on prior information from the literature, which suggested that ITRA may have either linear or nonlinear elimination. The optimal sampling times were chosen to provide information for both competing models, for both the parent and the metabolite, and for both capsule and solution simultaneously. Blood samples were assayed by validated HPLC.[2] PopPK modelling was performed using FOCE with interaction under NONMEM, version 5 (level 1.1; GloboMax LLC, Hanover, MD, USA). The PK of ITRA and OH-ITRA was modelled simultaneously using ADVAN 5. Subsequently, three methods were assessed for modelling concentrations below the limit of detection (LOD). These methods (corresponding to methods 5, 6 and 4 from Beal[3], respectively) were: (a) assigning all values below the LOD to half the LOD; (b) assigning the closest value below the LOD to half the LOD and deleting all previous (if during absorption) or subsequent (if during elimination) below-LOD samples; and (c) estimating the contribution of the expectation of each missing concentration to the likelihood. The LOD was 0.04 mg/L. The final model evaluation was performed via a bootstrap with re-sampling and a visual predictive check. The optimal design and the sampling windows of the study were evaluated for execution errors and for agreement between the observed and predicted standard errors. Dosing regimens were simulated for the capsules and the oral solution to assess their ability to achieve the ITRA target trough concentration (Cmin,ss of 0.5-2 mg/L) or a combined Cmin,ss for ITRA and OH-ITRA above 1.5 mg/L. Results and Discussion: A total of 241 blood samples were collected and analysed; 94% of them were taken within the defined optimal sampling windows, of which 31% were taken within 5 min of the exact optimal times. Forty-six per cent of the ITRA values and 28% of the OH-ITRA values were below the LOD.
The entire profile after administration of the capsule was below the LOD for five patients, and the data from this occasion were therefore omitted from estimation. A 2-compartment model with first-order absorption and elimination best described the ITRA PK, with first-order metabolism of the parent to OH-ITRA. For ITRA the clearance (ClItra/F) was 31.5 L/h; the apparent volumes of the central and peripheral compartments were 56.7 L and 2090 L, respectively. Absorption rate constants for the capsule (kacap) and solution (kasol) were 0.0315 h-1 and 0.125 h-1, respectively. The relative bioavailability of the capsule was 0.82. There was no evidence of nonlinearity in the popPK of ITRA. No screened covariate significantly improved the fit to the data. The parameter estimates from the final model were comparable across the different methods for accounting for missing data (M4, 5, 6)[3]. The prospective application of an optimal design was successful. Owing to the sampling windows, most of the samples could be collected within the daily hospital routine, yet still at times that were near optimal for estimating the popPK parameters. The final model was one of the potential competing models considered in the original design. The asymptotic standard errors provided by NONMEM for the final model and the empirical values from the bootstrap were similar in magnitude to those predicted from the Fisher information matrix associated with the D-optimal design. Simulations from the final model showed that the current dosing regimen of 200 mg twice daily (bd) would achieve the target Cmin,ss (0.5-2 mg/L) in only 35% of patients when administered as the solution and 31% when administered as capsules. The optimal dosing schedule was 500 mg bd for both formulations. The target success for this regimen was 87% for the solution, with an NNT of 4 compared to the capsules: for every 4 patients treated with the solution, one additional patient will achieve target success compared with the capsule, but at an additional cost of AUD $220 per day. The therapeutic target, however, remains uncertain, and the potential risks of these dosing schedules need to be assessed on an individual basis. Conclusion: A model was developed that describes the popPK of ITRA and its main active metabolite OH-ITRA in adults with CF after administration of both the capsule and the solution. The relative bioavailability of ITRA from the capsule was 82% of that from the solution, but considerably more variable. For incorporating missing data, the simple Beal method 5 (half the LOD for all samples below the LOD) provided results comparable to the more complex but theoretically better Beal method 4 (integration method). The optimal sparse design performed well for estimating the model parameters and provided a good fit to the data.
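
The difference between the simple substitution method and the likelihood-based handling of below-LOD data can be sketched for a Gaussian residual model; this is a schematic illustration, not the NONMEM implementation (Beal's method 4 additionally conditions the likelihood on the observation being non-negative, which is omitted here).

    import numpy as np
    from scipy.stats import norm

    def loglik_beal_m5(obs, pred, sigma, lod):
        """Beal method 5: replace every observation below the LOD by LOD/2."""
        y = np.where(obs < lod, lod / 2.0, obs)
        return norm.logpdf(y, loc=pred, scale=sigma).sum()

    def loglik_left_censored(obs, pred, sigma, lod):
        """Likelihood-based handling: below-LOD points contribute P(Y < LOD).

        Each below-LOD observation enters the likelihood as a cumulative
        probability rather than a density, which is the integration idea
        behind the more rigorous Beal method used in the abstract.
        """
        blq = obs < lod
        ll = norm.logpdf(obs[~blq], loc=pred[~blq], scale=sigma).sum()
        ll += norm.logcdf(lod, loc=pred[blq], scale=sigma).sum()
        return ll

With nearly half of the ITRA values below the LOD, the practical finding that the cheap substitution method matched the censored-likelihood approach is what makes this comparison worth reporting.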

Relevance: 60.00%

Abstract:

The acceptance and use of Information Technology (IT) by individuals has been studied through different conceptual models, generally derived from theories in psychology such as the TRA (Theory of Reasoned Action) and the TPB (Theory of Planned Behavior), the latter derived from the former. An important analytical model derived from these, the result of a detailed analysis of eight earlier models, is the UTAUT (Unified Theory of Acceptance and Use of Technology) of VENKATESH et al. (2003), which has been widely analyzed and validated across various technologies and settings. This work aims at a broader understanding of the antecedents of intention to use and usage behavior under the UTAUT model, of the factors that best explain intention and usage behavior, and of their moderators. In developing the model, Venkatesh et al. carried out comparisons across three implementation stages and two scenarios: mandatory adoption, in a corporate environment where the system is required for executing processes and making decisions, and voluntary adoption, where adoption is decided by the individual. In the latter case, the authors concluded that the social influence factor has low magnitude and significance, not proving to be an important factor in technology adoption. This work also examines whether the same phenomenon occurs for adoption that is voluntary but liable to be strongly shaped by social ties, as happens among users of social networks such as Orkut, Facebook, Twitter and Linkedin, especially for technologies that enable gains from exercising those ties, as in the case of collective-buying sites such as Peixe Urbano, Groupon and Clickon. Based on the UTAUT model, a survey was administered and the responses of 292 validated respondents, reached through e-mail and social networks, were subsequently analyzed. The analysis employed structural equation modelling based on the PLS (Partial Least Squares) algorithm, with a bootstrap of 1000 re-samples. The results showed high magnitude and predictive significance on intention to use the technology for the factors Performance Expectancy (0.288 at the 0.1% level) and Social Influence (0.176 at the 0.1% level). The former is consistent with previous studies, whereas the magnitude and significance of the latter is far higher than in the original study of Venkatesh et al. (2003), where it ranged from 0.02 to 0.04 and was not significant, depending on whether or not the data were pooled (p. 465). The main conclusion of this study is that, in the context of collective buying, a setting of voluntary adoption, the social factor is highly influential on intention to use the technology, in sharp contrast with the original UTAUT study (in which this factor was not significant), which opens several avenues for future research and possible managerial implications.
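
The significance testing described (PLS with a bootstrap of 1000 re-samples) follows the standard PLS-SEM recipe sketched below, assuming Python with NumPy; estimate_path is a hypothetical stand-in for the full PLS estimation of one path coefficient, e.g. Social Influence -> Intention to Use.

    import numpy as np

    def bootstrap_path_t(estimate_path, data, n_boot=1000, seed=0):
        """Bootstrap t-statistic for a PLS path coefficient.

        estimate_path(data) must re-estimate the structural model on a
        (re-sampled) data matrix with respondents in rows and return the
        path coefficient of interest. The t-statistic is the full-sample
        estimate divided by the bootstrap standard error.
        """
        rng = np.random.default_rng(seed)
        full = estimate_path(data)
        boot = np.array([estimate_path(data[rng.integers(0, len(data), len(data))])
                         for _ in range(n_boot)])
        return full / boot.std(ddof=1)

Re-sampling respondents rather than residuals is what makes the procedure distribution-free, which suits PLS's lack of multivariate-normality assumptions.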