899 results for the SIMPLE algorithm


Relevance: 90.00%

Publisher:

Abstract:

The purpose of this work was to study and quantify the differences in dose distributions computed with some of the newest dose calculation algorithms available in commercial planning systems. The study was done for clinical cases originally calculated with pencil beam convolution (PBC) in which large density inhomogeneities were present. Three other dose algorithms were used: a pencil-beam-like algorithm, the anisotropic analytical algorithm (AAA); a convolution-superposition algorithm, collapsed cone convolution (CCC); and a Monte Carlo program, voxel Monte Carlo (VMC++). The dose calculation algorithms were compared under static field irradiations at 6 MV and 15 MV using multileaf collimators and hard wedges where necessary. Five clinical cases were studied: three lung and two breast cases. We found that, in terms of accuracy against VMC++, the CCC algorithm performed better overall than AAA, but AAA remains an attractive option for routine clinical use because of its short computation times. Dose differences between the different algorithms and VMC++ for the median dose of the planning target volume (PTV) were typically 0.4% (range: 0.0 to 1.4%) in the lung and -1.3% (range: -2.1 to -0.6%) in the breast for the few cases we analysed. As expected, PTV coverage and dose homogeneity turned out to be more critical in the lung than in the breast cases with respect to the accuracy of the dose calculation. This was observed in the dose-volume histograms obtained from the Monte Carlo simulations.

Relevance: 90.00%

Publisher:

Abstract:

The modified American College of Cardiology/American Heart Association (ACC/AHA) lesion morphology classification scheme has prognostic impact for early and late outcomes when bare-metal stents are used. Its value after drug-eluting stent placement is unknown. The predictive value of this lesion morphology classification system in patients treated using sirolimus-eluting stents included in the German Cypher Registry was prospectively examined. The study population included 6,755 patients treated for 7,960 lesions using sirolimus-eluting stents. Lesions were classified as type A, B1, B2, or C. Lesion type A or B1 was considered simple (35.1%), and type B2 or C, complex (64.9%). The combined end point of all deaths, myocardial infarction, or target vessel revascularization was seen in 2.6% versus 2.4% in the complex and simple groups, respectively (p = 0.62) at initial hospital discharge, with a trend for higher rates of myocardial infarction in the complex group. At the 6-month clinical follow-up and after adjusting for other independent factors, the composite of cumulative death, myocardial infarction, and target vessel revascularization was nonsignificantly different between groups (11.4% vs 11.2% in the complex and simple groups, respectively; odds ratio 1.08, 95% confidence interval 0.8 to 1.46). This was also true for target vessel revascularization alone (8.3% of the complex group, 9.0% of the simple group; odds ratio 0.87, 95% confidence interval 0.72 to 1.05). In conclusion, the modified ACC/AHA lesion morphology classification system has some value in determining early complications after sirolimus-eluting stent implantation. Clinical follow-up results at 6 months were generally favorable and cannot be adequately differentiated on the basis of this lesion morphology classification scheme.
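
For orientation only, the sketch below shows how an unadjusted odds ratio and a Wald 95% confidence interval can be computed from 2x2 event counts; the registry figures quoted above are adjusted model estimates, and the counts used in the example are hypothetical.

```python
import math

def odds_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Unadjusted odds ratio of group A versus group B with a Wald 95% CI."""
    a, b = events_a, n_a - events_a      # group A: events, non-events
    c, d = events_b, n_b - events_b      # group B: events, non-events
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, (lower, upper)

# Hypothetical counts only (the registry figures are adjusted model estimates).
print(odds_ratio_ci(events_a=50, n_a=600, events_b=45, n_b=620))
```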

Relevance: 90.00%

Publisher:

Abstract:

In this dissertation, the problem of creating effective control algorithms for the large-scale Adaptive Optics (AO) systems of the new generation of giant optical telescopes is addressed. The effectiveness of AO control algorithms is evaluated in several respects, such as computational complexity, compensation error rejection and robustness, i.e. reasonable insensitivity to system imperfections. The results of this research are summarized as follows: 1. Robustness study of the Sparse Minimum Variance Pseudo Open Loop Controller (POLC) for multi-conjugate adaptive optics (MCAO). An AO system model that accounts for various system errors has been developed and applied to check the stability and performance of the POLC algorithm, which is one of the most promising approaches for future AO systems control. It has been shown through numerous simulations that, despite the initial assumption that exact system knowledge is necessary for the POLC algorithm to work, it is highly robust against various system errors. 2. Predictive Kalman Filter (KF) and Minimum Variance (MV) control algorithms for MCAO. The limiting performance of the non-dynamic Minimum Variance and dynamic KF-based phase estimation algorithms for MCAO has been evaluated through Monte Carlo simulations. The validity of a simple near-Markov autoregressive phase dynamics model has been tested, and its ability to adequately predict the turbulent phase has been demonstrated for both single-conjugate and multi-conjugate AO. It has also been shown that, in the case of MCAO, no performance improvement is gained from the more complicated KF approach in comparison with the much simpler MV algorithm. 3. Sparse predictive Minimum Variance control algorithm for MCAO. A temporal prediction stage has been added to the non-dynamic MV control algorithm in such a way that no additional computational burden is introduced. It has been confirmed through simulations that the use of phase prediction makes it possible to significantly reduce the system sampling rate, and thus the overall computational complexity, while keeping the system stable and effectively compensating for the measurement and control latencies.
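
As a hedged illustration of the static minimum-variance phase estimation mentioned in item 2 (not the sparse or predictive implementations developed in the dissertation), the sketch below builds the standard MV reconstructor R = C_phi G^T (G C_phi G^T + C_n)^-1 for a toy system; all matrices, dimensions and noise levels are made up.

```python
import numpy as np

def mv_reconstructor(G, C_phi, C_n):
    """Static minimum-variance reconstructor R with phi_hat = R @ s,
    where s = G @ phi + n are the wavefront-sensor slopes."""
    S = G @ C_phi @ G.T + C_n                  # covariance of the measurements
    return np.linalg.solve(S, G @ C_phi).T     # R = C_phi G^T S^-1 (S is symmetric)

# Toy system with made-up dimensions and covariances (illustrative only).
rng = np.random.default_rng(0)
n_phase, n_slopes = 8, 6
G = rng.standard_normal((n_slopes, n_phase))   # phase-to-slope measurement matrix
C_phi = np.eye(n_phase)                        # prior phase covariance
C_n = 0.01 * np.eye(n_slopes)                  # measurement-noise covariance
R = mv_reconstructor(G, C_phi, C_n)
phi = rng.standard_normal(n_phase)
s = G @ phi + 0.1 * rng.standard_normal(n_slopes)
phi_hat = R @ s                                # MV estimate of the phase
```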

Relevance: 90.00%

Publisher:

Abstract:

The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
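
A minimal sketch of the underlying idea, assuming the data are bin-averaged densities: re-bin by interpolating the cumulative integral of the source histogram and differencing it at the new bin edges. Plain linear interpolation is used here, whereas the algorithm described above uses a parametrized Hermitian curve precisely to control over- and undershoot.

```python
import numpy as np

def rebin_conservative(edges_in, values_in, edges_out):
    """Re-bin histogrammed data onto new bin edges while conserving the integral.

    Sketch only: interpolate the cumulative integral of the source histogram
    (linearly here; the paper uses a parametrized Hermitian curve) and
    difference it at the requested edges.
    """
    widths = np.diff(edges_in)
    cumulative = np.concatenate(([0.0], np.cumsum(values_in * widths)))
    cum_on_new = np.interp(edges_out, edges_in, cumulative)
    return np.diff(cum_on_new) / np.diff(edges_out)

# Example: a coarse 4-bin histogram resampled to 8 finer bins.
edges_in = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
values_in = np.array([2.0, 5.0, 1.0, 3.0])          # bin-averaged quantity
edges_out = np.linspace(0.0, 4.0, 9)
values_out = rebin_conservative(edges_in, values_in, edges_out)
# The total integral is conserved:
assert np.isclose((values_out * np.diff(edges_out)).sum(),
                  (values_in * np.diff(edges_in)).sum())
```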

Relevance: 90.00%

Publisher:

Abstract:

An important problem in computational biology is finding the longest common subsequence (LCS) of two nucleotide sequences. This paper examines the correctness and performance of a recently proposed parallel LCS algorithm that uses successor tables and pruning rules to construct a list of sets from which an LCS can be easily reconstructed. Counterexamples are given for two pruning rules that were given with the original algorithm. Because of these errors, performance measurements originally reported cannot be validated. The work presented here shows that speedup can be reliably achieved by an implementation in Unified Parallel C that runs on an Infiniband cluster. This performance is partly facilitated by exploiting the software cache of the MuPC runtime system. In addition, this implementation achieved speedup without bulk memory copy operations and the associated programming complexity of message passing.
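
For reference, the sketch below is the classic serial dynamic-programming LCS, not the parallel successor-table algorithm examined in the paper; it only fixes the definition of the problem being parallelized.

```python
def lcs(a: str, b: str) -> str:
    """Classic O(len(a)*len(b)) dynamic-programming LCS (serial reference,
    not the parallel successor-table algorithm analysed in the paper)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack to recover one LCS.
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ACCGGTCG", "GTCGTTCG"))  # one LCS of two short nucleotide strings
```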

Relevance: 90.00%

Publisher:

Abstract:

This dissertation discusses structural-electrostatic modeling techniques, genetic algorithm based optimization and control design for electrostatic micro devices. First, an alternative modeling technique, the interpolated force model, for electrostatic micro devices is discussed. The method provides improved computational efficiency relative to a benchmark model, as well as improved accuracy for irregular electrode configurations relative to a common approximate model, the parallel plate approximation model. For the configuration most similar to two parallel plates, expected to be the best case scenario for the approximate model, both the parallel plate approximation model and the interpolated force model maintained less than 2.2% error in static deflection compared to the benchmark model. For the configuration expected to be the worst case scenario for the parallel plate approximation model, the interpolated force model maintained less than 2.9% error in static deflection while the parallel plate approximation model is incapable of handling the configuration. Second, genetic algorithm based optimization is shown to improve the design of an electrostatic micro sensor. The design space is enlarged from published design spaces to include the configuration of both sensing and actuation electrodes, material distribution, actuation voltage and other geometric dimensions. For a small population, the design was improved by approximately a factor of 6 over 15 generations to a fitness value of 3.2 fF. For a larger population seeded with the best configurations of the previous optimization, the design was improved by another 7% in 5 generations to a fitness value of 3.0 fF. Third, a learning control algorithm is presented that reduces the closing time of a radiofrequency microelectromechanical systems switch by minimizing bounce while maintaining robustness to fabrication variability. Electrostatic actuation of the plate causes pull-in with high impact velocities, which are difficult to control due to parameter variations from part to part. A single degree-of-freedom model was utilized to design a learning control algorithm that shapes the actuation voltage based on the open/closed state of the switch. Experiments on 3 test switches show that after 5-10 iterations, the learning algorithm lands the switch with an impact velocity not exceeding 0.2 m/s, eliminating bounce.
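
As a hedged sketch of the optimization loop referred to above, the code below implements a generic genetic algorithm (tournament selection, uniform crossover, bit-flip mutation) over binary design vectors; the real fitness, a capacitance objective in fF evaluated through the structural-electrostatic model, is replaced by a toy function, and the population size, generation count and mutation rate are arbitrary.

```python
import random

def genetic_optimize(fitness, n_genes=8, pop_size=20, generations=15,
                     mutation_rate=0.05, seed=None):
    """Minimal generational GA (tournament selection, uniform crossover,
    bit-flip mutation) minimising `fitness` over binary design vectors."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        children = list(scored[:2])                          # keep the two best designs
        while len(children) < pop_size:
            p1, p2 = (min(rng.sample(scored, 3), key=fitness) for _ in range(2))
            child = [rng.choice(genes) for genes in zip(p1, p2)]       # uniform crossover
            child = [g ^ (rng.random() < mutation_rate) for g in child]  # bit-flip mutation
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Toy fitness standing in for the sensor capacitance objective (purely illustrative).
best = genetic_optimize(lambda genes: abs(sum(genes) - 5), seed=1)
print(best)
```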

Relevance: 90.00%

Publisher:

Abstract:

The numerical solution of the incompressible Navier-Stokes equations offers an effective alternative to the experimental analysis of fluid-structure interaction (FSI), i.e. the dynamic coupling between a fluid and a solid, which is otherwise very complex, time consuming and expensive. A method that can accurately model these types of mechanical systems numerically is therefore very attractive, and these advantages are even more obvious for huge structures such as bridges, high-rise buildings, or wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation, called the KLE, to solve the incompressible Navier-Stokes equations for such FSI problems. This scheme allows for the implementation of robust adaptive ODE time-integration schemes and thus allows the various multiphysics problems to be tackled as separate modules. The current algorithm for KLE employs a structured or unstructured mesh for spatial discretization and allows the use of a self-adaptive or fixed-time-step ODE solver for unsteady problems. This research deals with the analysis of the effects of the Courant-Friedrichs-Lewy (CFL) condition for KLE when applied to the unsteady Stokes problem. The objective is to conduct a numerical analysis for stability and, hence, for convergence. Our results confirm that the time step Δt is constrained by the CFL-like condition Δt ≤ const · h^α, where h denotes the characteristic spacing of the spatial discretization.
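
A trivial sketch of how such a bound is applied in practice, assuming placeholder values for the constant and the exponent α that the stability analysis would actually provide:

```python
def cfl_time_step(h, alpha, const):
    """Largest admissible time step under a CFL-like bound dt <= const * h**alpha.

    h is the characteristic mesh spacing; alpha and const are whatever the
    stability analysis of the scheme yields (placeholder values below).
    """
    return const * h ** alpha

# Illustrative only: with alpha = 2, halving h cuts the admissible step by a factor of 4.
for h in (0.1, 0.05, 0.025):
    print(h, cfl_time_step(h, alpha=2.0, const=1.0))
```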

Relevance: 90.00%

Publisher:

Abstract:

This dissertation has three separate parts: the first part deals with general pedigree association testing incorporating continuous covariates; the second part deals with association tests under population stratification using conditional likelihood tests; the third part deals with genome-wide association studies based on the real rheumatoid arthritis (RA) disease data sets from Genetic Analysis Workshop 16 (GAW16) Problem 1. Many statistical tests have been developed to test linkage and association using either case-control status or phenotype covariates for family data structures, but separately. Such univariate analyses may not use all the information coming from the family members in practical studies. Moreover, human complex diseases do not have a clear inheritance pattern; genes may interact or act independently. In Part I, the newly proposed approach, MPDT, focuses on how to use both the case-control information and the phenotype covariates, and it can be applied to detect multiple marker effects. Based on two existing popular statistics in family studies, for case-control and quantitative traits respectively, the new approach can be used with simple family structures as well as general pedigree structures. The combined statistic is calculated from these two statistics, and a permutation procedure is applied to assess the p-value, with a Bonferroni adjustment for the multiple markers. We use simulation studies to evaluate the type I error rates and the power of the proposed approach. Our results show that the combined test using both case-control information and phenotype covariates not only has the correct type I error rate but is also more powerful than the other existing methods. For multiple marker interactions, our proposed method is also very powerful. Selective genotyping is an economical strategy for detecting and mapping quantitative trait loci in the genetic dissection of complex disease. When the samples arise from different ethnic groups or an admixed population, all existing selective genotyping methods may result in spurious association due to different ancestry distributions. The problem can be more serious when the sample size is large, a general requirement for obtaining sufficient power to detect modest genetic effects for most complex traits. In Part II, I describe a useful strategy for selective genotyping in the presence of population stratification. Our procedure uses a principal-component-based approach to eliminate any effect of population stratification. We evaluate the performance of our procedure using both simulated data from an earlier study and HapMap data sets under a variety of population admixture models generated from empirical data. There are one binary trait and two continuous traits in the rheumatoid arthritis data set of Problem 1 of GAW16: RA status, anti-CCP and IgM. To allow for multiple traits, we propose a set of SNP-level F statistics, based on the concept of multiple correlation, to measure the genetic association between multiple trait values and SNP-specific genotypic scores, and we obtain their null distributions. We then perform six genome-wide association analyses using novel one- and two-stage approaches based on single, double and triple traits. Incorporating all six analyses, we successfully validate the SNPs that have been reported in the literature to be responsible for rheumatoid arthritis and detect additional disease-susceptibility SNPs for future follow-up studies. Except for chromosomes 13 and 18, each chromosome is found to harbour genetic regions susceptible to rheumatoid arthritis or related diseases, such as lupus erythematosus. This topic is discussed in Part III.
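
As a hedged sketch of the permutation procedure with Bonferroni adjustment described for Part I, the code below computes per-marker permutation p-values for a generic association statistic; the actual MPDT statistic combining case-control and quantitative-trait components is not reproduced, and the toy data are made up.

```python
import numpy as np

def permutation_pvalues(stat_fn, genotypes, phenotype, n_perm=1000, seed=0):
    """Permutation p-value per marker, Bonferroni-adjusted across markers.

    stat_fn(g, y) returns an association statistic for one marker; here it
    stands in for the combined case-control/quantitative-trait statistic.
    """
    rng = np.random.default_rng(seed)
    n_markers = genotypes.shape[1]
    observed = np.array([stat_fn(genotypes[:, j], phenotype) for j in range(n_markers)])
    exceed = np.zeros(n_markers)
    for _ in range(n_perm):
        perm = rng.permutation(phenotype)                    # break genotype-phenotype link
        perm_stats = np.array([stat_fn(genotypes[:, j], perm) for j in range(n_markers)])
        exceed += perm_stats >= observed
    p = (exceed + 1) / (n_perm + 1)
    return np.minimum(p * n_markers, 1.0)                    # Bonferroni adjustment

# Toy data: 200 individuals, 5 markers, one marker weakly associated with the trait.
rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(200, 5)).astype(float)
y = 0.5 * G[:, 2] + rng.standard_normal(200)
stat = lambda g, y: abs(np.corrcoef(g, y)[0, 1])
print(permutation_pvalues(stat, G, y))
```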

Relevance: 90.00%

Publisher:

Abstract:

An important problem in unsupervised data clustering is how to determine the number of clusters. Here we investigate how this can be achieved in an automated way by using interrelation matrices of multivariate time series. Two nonparametric and purely data driven algorithms are expounded and compared. The first exploits the eigenvalue spectra of surrogate data, while the second employs the eigenvector components of the interrelation matrix. Compared to the first algorithm, the second approach is computationally faster and not limited to linear interrelation measures.
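
A rough sketch of the first, surrogate-based idea, assuming the interrelation measure is linear correlation and that surrogates are generated by independently shuffling each channel (the authors' surrogate construction may differ): count the eigenvalues of the interrelation matrix that exceed the largest eigenvalue observed in the surrogates.

```python
import numpy as np

def estimate_n_clusters(X, n_surrogates=100, seed=0):
    """Estimate the number of clusters in multivariate time series X (channels x time)
    by comparing correlation-matrix eigenvalues against shuffled surrogates.

    Sketch of the surrogate-spectrum idea only; the second (eigenvector-based)
    algorithm of the paper is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    eigvals = np.linalg.eigvalsh(np.corrcoef(X))
    max_surrogate = 0.0
    for _ in range(n_surrogates):
        Xs = np.array([rng.permutation(row) for row in X])   # destroy interrelations
        max_surrogate = max(max_surrogate, np.linalg.eigvalsh(np.corrcoef(Xs)).max())
    return int(np.sum(eigvals > max_surrogate))

# Toy example: two groups of correlated channels, so the estimate should be near 2.
rng = np.random.default_rng(2)
base = rng.standard_normal((2, 500))
X = np.vstack([base[0] + 0.2 * rng.standard_normal((3, 500)),
               base[1] + 0.2 * rng.standard_normal((3, 500))])
print(estimate_n_clusters(X))
```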

Relevance: 90.00%

Publisher:

Abstract:

In this report, we attempt to define the capabilities of the infrared satellite remote sensor Multifunctional Transport Satellite-2 (MTSAT-2), a geosynchronous instrument, in characterizing volcanic eruptive behavior in the highly active region of Indonesia. Sulfur dioxide data from NASA's Ozone Monitoring Instrument (OMI), a polar-orbiting instrument, are presented here for validation of the processes interpreted using the thermal infrared datasets. Data from two case studies are analyzed specifically for eruptive products producing large thermal anomalies (e.g. lava flows and lava domes), volcanic ash, and SO2 clouds: three distinctly characteristic and abundant volcanic emissions. Two primary methods for the detection of heat signatures are used and compared in this report: single-channel thermal radiance (4 µm) and the normalized thermal index (NTI) algorithm. For automated purposes, fixed thresholds must be determined for these methods. Base minimum detection limits (MDLs) of 2.30 × 10^5 W m^-2 sr^-1 m^-1 for single-channel thermal radiance and -0.925 for NTI generate false alarm rates of 35.78% and 34.16%, respectively. A spatial comparison method, developed here specifically for use in Indonesia and used as a second parameter for detection, is implemented to address the high false alarm rate. For the single-channel thermal radiance method, the spatial comparison method eliminated 100% of the false alarms while retaining every true anomaly. The NTI algorithm showed similar results, with only 2 false alarms remaining. No definitive difference is observed between the two thermal detection methods for automated use; however, the single-channel thermal radiance method coupled with the SO2 mass abundance data can be used to interpret volcanic processes, including the identification of lava dome activity at Sinabung as well as the mechanism of dome emplacement (endogenous or exogenous). Only one technique, the brightness temperature difference (BTD) method, is used for the detection of ash. Trends of ash area, water/ice area, and their respective concentrations yield interpretations of increased ice formation, aggregation, and sedimentation processes that only a high-temporal-resolution instrument like MTSAT-2 can capture. A conceptual model of a secondary zone of aggregation occurring in the migrating Kelut ash cloud, which decreases the distal fine-ash component and the hazard to flight paths, is presented in this report. Unfortunately, the SO2 data were unable to definitively reinforce the concept of a secondary zone of aggregation because of insufficient temporal resolution. However, a detailed study of the Kelut SO2 cloud is used to determine that there were no climatic impacts from this eruption, given the atmospheric residence time and e-folding rate of ~14 days for the SO2. This report exploits the complementary assets of a high-temporal-resolution and a high-spatial-resolution satellite, and demonstrates that these two instruments together can provide unparalleled observations of dynamic volcanic processes.
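
As a hedged illustration of the two fixed-threshold heat-detection tests quoted above, the sketch below applies the single-channel 4 µm radiance threshold and the normalized thermal index, taken here as NTI = (L_MIR - L_TIR)/(L_MIR + L_TIR); the radiance values in the example are illustrative, and the spatial-comparison step used to suppress false alarms is omitted.

```python
def nti(radiance_mir, radiance_tir):
    """Normalized thermal index from mid-infrared (~4 um) and thermal-infrared
    (~11-12 um) radiances: NTI = (L_MIR - L_TIR) / (L_MIR + L_TIR)."""
    return (radiance_mir - radiance_tir) / (radiance_mir + radiance_tir)

def flag_hot_pixel(radiance_mir, radiance_tir,
                   radiance_threshold=2.30e5,   # W m^-2 sr^-1 m^-1, from the abstract
                   nti_threshold=-0.925):
    """Apply the two fixed-threshold tests to one pixel.
    The spatial-comparison step used to suppress false alarms is not included."""
    single_channel_hit = radiance_mir > radiance_threshold
    nti_hit = nti(radiance_mir, radiance_tir) > nti_threshold
    return single_channel_hit, nti_hit

# Illustrative radiances only (not actual MTSAT-2 values).
print(flag_hot_pixel(radiance_mir=3.1e5, radiance_tir=7.5e6))
```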

Relevance: 90.00%

Publisher:

Abstract:

A real-time battery impedance spectrum is acquired from a single time record. Fast Summation Transformation (FST) is a parallel method of acquiring a real-time battery impedance spectrum from one time record, which enables battery diagnostics. The excitation current applied to the battery is a sum of equal-amplitude sine waves whose frequencies are octave harmonics spread over the range of interest. The sample frequency is also octave- and harmonically related to all frequencies in the sum. The time profile of this signal has a duration of only a few periods of the lowest frequency. The voltage response of the battery, with its average removed, represents the impedance of the battery in the time domain. Since the excitation frequencies are known and octave-harmonically related, a simple algorithm, FST, processes the time record by rectifying it relative to the sine and cosine of each frequency. A further algorithm then yields the real and imaginary components for each frequency.
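
A minimal sketch of the synchronous-detection idea behind this processing, not the actual FST implementation: correlate the mean-removed voltage response with the sine and cosine of each known excitation frequency to obtain real and imaginary components. The frequencies, sample rate and stand-in response below are illustrative.

```python
import numpy as np

def impedance_spectrum(response, excitation_freqs, sample_rate):
    """Correlate a mean-removed voltage response with sin/cos at each excitation
    frequency to obtain complex components per frequency (synchronous detection).

    Sketch of the idea only; the actual FST algorithm exploits the octave
    relationship between the frequencies and the sample rate for a faster summation.
    """
    t = np.arange(len(response)) / sample_rate
    v = response - response.mean()                     # "average removed"
    spectrum = {}
    for f in excitation_freqs:
        re = 2.0 * np.mean(v * np.cos(2 * np.pi * f * t))
        im = -2.0 * np.mean(v * np.sin(2 * np.pi * f * t))
        spectrum[f] = complex(re, im)
    return spectrum

# Octave-harmonic excitation frequencies and sample rate (illustrative values).
freqs = [0.1 * 2**k for k in range(6)]                 # 0.1 ... 3.2 Hz
fs = 0.1 * 2**10                                       # octave-related sample rate
t = np.arange(int(4 * fs / 0.1)) / fs                  # a few periods of the lowest frequency
v = sum(np.cos(2 * np.pi * f * t + 0.3) for f in freqs)  # stand-in "response"
print(impedance_spectrum(v, freqs, fs))
```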

Relevance: 90.00%

Publisher:

Abstract:

In terms of atmospheric impact, the volcanic eruption of Mt. Pinatubo (1991) is the best characterized large eruption on record. We investigate here the model-derived stratospheric warming following the Pinatubo eruption as derived from SAGE II extinction data, including recent improvements in the processing algorithm. This method, termed SAGE_4λ, makes use of the four wavelengths (385, 452, 525 and 1024 nm) of the SAGE II data when available, and uses a data-filling procedure in the opacity-induced "gap" regions. Using SAGE_4λ, we derived aerosol size distributions that properly reproduce extinction coefficients also at much longer wavelengths. This provides a good basis for calculating the absorption of terrestrial infrared radiation and the resulting stratospheric heating. However, we also show that the use of this data set in a global chemistry-climate model (CCM) still leads to stronger aerosol-induced stratospheric heating than observed, with temperatures in places even higher than the already excessive values found by many models in recent general circulation model (GCM) and CCM intercomparisons. This suggests that the overestimation of the stratospheric warming after the Pinatubo eruption may not be ascribed to an insufficient observational database but rather to the use of outdated data sets, to deficiencies in the implementation of the forcing data, or to radiative or dynamical model artifacts. Conversely, the SAGE_4λ approach reduces the infrared absorption in the tropical tropopause region, resulting in significantly better agreement with the post-volcanic temperature record at these altitudes.

Relevance: 90.00%

Publisher:

Abstract:

In contrast to the treatment of avulsion lesions of the anterior cruciate ligament (ACL), the management of intrasubstance ACL tears in the skeletally immature patient remains controversial. Prospective studies have shown that conservative treatment results in severe instability with concomitant intra-articular damage and poor function of the knee. Reconstruction of a torn ACL always carries the risk of damaging the open growth plates, with a consequent effect on the longitudinal or axial growth of the lower extremity on either the femoral or the tibial side. Thus, several surgical procedures are available to prevent the adverse events mentioned above. The purpose of this study is to review the recent literature regarding the treatment algorithm for ACL injuries in skeletally immature patients. This review will (1) investigate the indications for ACL surgery in children; (2) determine whether any surgical procedure is clinically superior in skeletally immature patients; and (3) correlate adverse events with the surgical technique.

Relevance: 90.00%

Publisher:

Abstract:

The current article presents a novel physiological control algorithm for ventricular assist devices (VADs), which is inspired by the preload recruitable stroke work. This controller adapts the hydraulic power output of the VAD to the end-diastolic volume of the left ventricle. We tested this controller on a hybrid mock circulation where the left ventricular volume (LVV) is known, i.e., the problem of measuring the LVV is not addressed in the current article. Experiments were conducted to compare the response of the controller with the physiological and with the pathological circulation, with and without VAD support. A sensitivity analysis was performed to analyze the influence of the controller parameters and the influence of the quality of the LVV signal on the performance of the control algorithm. The results show that the controller induces a response similar to the physiological circulation and effectively prevents over- and underpumping, i.e., ventricular suction and backflow from the aorta to the left ventricle, respectively. The same results are obtained in the case of a disturbed LVV signal. The results presented in the current article motivate the development of a robust, long-term stable sensor to measure the LVV.
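
Purely as an illustration of a preload-based control law of this kind (the gain, offset and saturation limits of the published controller are not given here, so the numbers below are placeholders), a linear mapping from end-diastolic volume to a clipped hydraulic power setpoint might look as follows:

```python
def vad_power_setpoint(edv_ml, gain=0.02, edv_offset_ml=80.0,
                       p_min_w=0.2, p_max_w=2.0):
    """Preload-based control law sketch: the hydraulic power output rises linearly
    with left-ventricular end-diastolic volume (EDV), clipped to pump limits.

    The gain, offset and limits are placeholders, not the parameters of the paper.
    """
    power = p_min_w + gain * max(edv_ml - edv_offset_ml, 0.0)
    return min(max(power, p_min_w), p_max_w)

# Larger preload -> more support; small preload -> minimum support (suction protection).
for edv in (60.0, 100.0, 140.0, 180.0):
    print(edv, vad_power_setpoint(edv))
```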

Relevance: 90.00%

Publisher:

Abstract:

PURPOSE: To extend the capabilities of the Cone Location and Magnitude Index algorithm to include a combination of topographic information from the anterior and posterior corneal surfaces and corneal thickness measurements, to further improve our ability to correctly identify keratoconus using this new index: ConeLocationMagnitudeIndex_X.

DESIGN: Retrospective case-control study.

METHODS: Three independent data sets were analyzed: one development and two validation. The AnteriorCornealPower index was calculated to stratify the keratoconus data from mild to severe. The ConeLocationMagnitudeIndex algorithm was applied to all tomography data collected using a dual Scheimpflug-Placido-based tomographer. The ConeLocationMagnitudeIndex_X formula, resulting from analysis of the Development set, was used to determine the logistic regression model that best separates keratoconus from normal, and was applied to all data sets to calculate PercentProbabilityKeratoconus_X. The sensitivity/specificity of PercentProbabilityKeratoconus_X was compared with that of the original PercentProbabilityKeratoconus, which uses only anterior axial data.

RESULTS: The AnteriorCornealPower severity distribution for the combined data sets is 136 mild, 12 moderate, and 7 severe. The logistic regression model generated for ConeLocationMagnitudeIndex_X produces complete separation for the Development set. Validation Set 1 has 1 false negative and Validation Set 2 has 1 false positive. The overall sensitivity/specificity results for the logistic model produced using the ConeLocationMagnitudeIndex_X algorithm are 99.4% and 99.6%, respectively. The overall sensitivity/specificity results using the original ConeLocationMagnitudeIndex algorithm are 89.2% and 98.8%, respectively.

CONCLUSIONS: ConeLocationMagnitudeIndex_X provides a robust index that can detect the presence or absence of a keratoconic pattern in corneal tomography maps with improved sensitivity/specificity relative to the original, anterior-surface-only ConeLocationMagnitudeIndex algorithm.
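
As a hedged sketch of how such a logistic-regression index is thresholded and scored, the code below computes sensitivity and specificity from a generic logistic probability; the features, coefficients and data are synthetic placeholders, not the ConeLocationMagnitudeIndex_X model.

```python
import numpy as np

def sensitivity_specificity(y_true, prob, threshold=0.5):
    """Sensitivity and specificity of a probabilistic classifier, such as a
    logistic-regression keratoconus index, at a given probability cutoff."""
    pred = prob >= threshold
    tp = np.sum(pred & (y_true == 1))
    tn = np.sum(~pred & (y_true == 0))
    fn = np.sum(~pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

def logistic_probability(features, coefficients, intercept):
    """Generic logistic model p = 1 / (1 + exp(-(intercept + x . beta))).
    Stands in for PercentProbabilityKeratoconus_X; coefficients are placeholders."""
    return 1.0 / (1.0 + np.exp(-(intercept + features @ coefficients)))

# Toy data: one synthetic tomography-derived feature separating two groups.
rng = np.random.default_rng(3)
x_normal = rng.normal(0.0, 1.0, size=(100, 1))
x_keratoconus = rng.normal(3.0, 1.0, size=(20, 1))
X = np.vstack([x_normal, x_keratoconus])
y = np.concatenate([np.zeros(100, dtype=int), np.ones(20, dtype=int)])
p = logistic_probability(X, coefficients=np.array([2.0]), intercept=-3.0)
print(sensitivity_specificity(y, p))
```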