909 results for ALGORITHMS


Relevance:

20.00%

Publisher:

Abstract:

This paper presents parallel recursive algorithms for the computation of the inverse discrete Legendre transform and the inverse discrete Laguerre transform (IDLT). These recursive algorithms are derived using Clenshaw's recurrence formula, and they are implemented with a set of parallel digital filters with time-varying coefficients.
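As a concrete illustration of the recurrence at the core of such algorithms, here is a minimal sketch of Clenshaw's algorithm applied to a Legendre series. It evaluates the sum S(x) = sum_k c_k P_k(x), the per-sample operation of an inverse Legendre transform, sequentially; it is not the paper's parallel filter implementation, and the function name is my own.

```python
def legendre_clenshaw(coeffs, x):
    """Evaluate S(x) = sum_k coeffs[k] * P_k(x) with Clenshaw's recurrence.

    Uses the Legendre three-term recurrence
        P_{k+1}(x) = ((2k+1) x P_k(x) - k P_{k-1}(x)) / (k+1)
    run backward, so no Legendre polynomial is ever formed explicitly.
    """
    b1 = b2 = 0.0  # b_{k+1} and b_{k+2}
    for k in range(len(coeffs) - 1, -1, -1):
        alpha = (2 * k + 1) * x / (k + 1)  # multiplies P_k in P_{k+1}'s recurrence
        beta = -(k + 1) / (k + 2)          # multiplies P_k in P_{k+2}'s recurrence
        b1, b2 = coeffs[k] + alpha * b1 + beta * b2, b1
    return b1  # equals b_0, and S(x) = b_0 since P_0(x) = 1

# Example: coefficients [0, 0, 1] select P_2(x) = (3x^2 - 1) / 2.
print(legendre_clenshaw([0.0, 0.0, 1.0], 0.5))  # -> -0.125
```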

Relevance:

20.00%

Publisher:

Abstract:

Background: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have shown that a patient's antibody reaction in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score, Innogenetics) provides information on the duration of infection. Here, we sought to further investigate the diagnostic specificity of various Inno-Lia algorithms and to identify factors affecting it.

Methods: Plasma samples of 714 selected patients of the Swiss HIV Cohort Study, infected for longer than 12 months and representing all viral clades and stages of chronic HIV-1 infection, were tested blindly by Inno-Lia and classified as either incident (up to 12 months) or older infection by 24 different algorithms. Of the total, 524 patients received HAART, 308 had HIV-1 RNA below 50 copies/mL, and 620 were infected by an HIV-1 non-B clade. Using logistic regression analysis, we evaluated factors that might affect the specificity of these algorithms.

Results: HIV-1 RNA <50 copies/mL was associated with significantly lower reactivity to all five HIV-1 antigens of the Inno-Lia and impaired the specificity of most algorithms. Among the 412 patients who were either untreated or had HIV-1 RNA ≥50 copies/mL despite HAART, the median specificity of the algorithms was 96.5% (range 92.0-100%). The only factor that significantly promoted false-incident results in this group was age, with false-incident results increasing by a few percent per additional year. HIV-1 clade, HIV-1 RNA, CD4 percentage, sex, disease stage, and testing modalities had no significant effect. Results were similar among the 190 untreated patients.

Conclusions: The specificity of most Inno-Lia algorithms was high and not affected by HIV-1 variability, advanced disease, or other factors that promote false-recent results in other STARHS. Specificity should be good in any group of untreated HIV-1 patients.
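For readers unfamiliar with the Methods step, the sketch below shows the general shape of such a logistic-regression analysis. The data are entirely synthetic (generated so that only age carries an effect, echoing the reported result), and the covariate names are placeholders rather than the study's actual variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 412  # size of the untreated-or-viremic group in the abstract

# Placeholder covariates; the study also tested clade, RNA, sex, stage, etc.
age = rng.normal(40.0, 10.0, n)
cd4_pct = rng.normal(25.0, 8.0, n)
X = np.column_stack([age, cd4_pct])

# Synthetic outcome: 1 = false-incident classification; only age has an effect.
p = 1.0 / (1.0 + np.exp(-(-3.3 + 0.08 * (age - 40.0))))
y = (rng.random(n) < p).astype(int)

model = LogisticRegression().fit(X, y)
print(dict(zip(["age", "cd4_pct"], model.coef_[0].round(3))))
```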

Relevance:

20.00%

Publisher:

Abstract:

Background: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have previously demonstrated that a patient's antibody reaction pattern in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score) provides information on the duration of infection that is unaffected by clinical, immunological and viral variables. Here we set out to determine the diagnostic performance of Inno-Lia algorithms for identifying incident infections in patients with known duration of infection, and evaluated the algorithms in annual cohorts of HIV notifications.

Methods: Diagnostic sensitivity was determined in 527 treatment-naive patients infected for up to 12 months. Specificity was determined in 740 patients infected for longer than 12 months. Plasma was tested by Inno-Lia and classified as either incident (≤12 months) or older infection by 26 different algorithms. Incident infection rates (IIR) were calculated from the pre-determined diagnostic sensitivity and specificity of each algorithm and the rule that the total of incident results is the sum of true-incident and false-incident results.

Results: The 10 best algorithms had a mean raw sensitivity of 59.4% and a mean specificity of 95.1%. Adjustment for overrepresentation of patients in the first quarter-year of infection further reduced the sensitivity; in the preferred model, the mean adjusted sensitivity was 37.4%. Application of the 10 best algorithms to four annual cohorts of HIV-1 notifications totalling 2,595 patients yielded a mean IIR of 0.35 in 2005/6 (baseline) and of 0.45, 0.42 and 0.35 in 2008, 2009 and 2010, respectively. The increase between baseline and 2008 and the ensuing decreases were highly significant. Other adjustment models yielded different absolute IIRs, although the relative changes between the cohorts were identical for all models.

Conclusions: The method can be used to compare IIRs in annual cohorts of HIV notifications. The combined use of several algorithms, each with its own sensitivity and specificity for detecting incident infection, is advisable, as it reduces the impact of individual imperfections stemming primarily from relatively low sensitivities and sampling bias.
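The incidence rule quoted in the Methods implies the standard correction for imperfect sensitivity and specificity: if p is the observed fraction of incident-classified results, then p = sens · IIR + (1 − spec) · (1 − IIR), which solves to the estimator sketched below. The numeric inputs in the example are placeholders, not figures from the study.

```python
def adjusted_iir(p_observed, sensitivity, specificity):
    """Correct an observed incident fraction for test error.

    Observed fraction = sensitivity * IIR + (1 - specificity) * (1 - IIR);
    solving for IIR gives the Rogan-Gladen-style estimator below.
    """
    false_pos_rate = 1.0 - specificity
    denom = sensitivity - false_pos_rate
    if denom <= 0:
        raise ValueError("uninformative test: sensitivity <= 1 - specificity")
    iir = (p_observed - false_pos_rate) / denom
    return min(max(iir, 0.0), 1.0)  # clamp to a valid proportion

# Hypothetical observed fraction with the reported mean sens/spec:
print(adjusted_iir(p_observed=0.40, sensitivity=0.594, specificity=0.951))
```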

Relevance:

20.00%

Publisher:

Abstract:

The early detection of subjects with probable Alzheimer's disease (AD) is crucial for the effective application of treatment strategies. Here we explored the ability of a multitude of linear and non-linear classification algorithms to discriminate between the electroencephalograms (EEGs) of patients with varying degrees of AD and their age-matched control subjects. Absolute and relative spectral power, the distribution of spectral power, and measures of spatial synchronization were calculated from recordings of resting, eyes-closed continuous EEGs of 45 healthy controls, 116 patients with mild AD and 81 patients with moderate AD, recruited in two different centers (Stockholm, New York). The applied classification algorithms were: principal component linear discriminant analysis (PC LDA), partial least squares LDA (PLS LDA), principal component logistic regression (PC LR), partial least squares logistic regression (PLS LR), bagging, random forest, support vector machines (SVM) and feed-forward neural networks. Ten-fold cross-validation runs demonstrated that while modern computer-intensive classification algorithms such as random forests, SVM and neural networks show a slight superiority, the more classical classification algorithms performed nearly equally well. Using random forest classification, a sensitivity of up to 85% and a specificity of 78% were reached even for the test of only mild AD patients, whereas for the comparison of moderate AD vs. controls, SVM and neural networks achieved 89% sensitivity and 88% specificity. Such performance demonstrates the value of these classification algorithms for clinical diagnostics.
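The sketch below mirrors the evaluation protocol (10-fold cross-validation of a random forest, reporting sensitivity and specificity). The features are synthetic stand-ins for the spectral-power and synchronization measures; only the class sizes follow the controls vs. mild-AD comparison.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# Synthetic stand-in for EEG features: 45 controls vs. 116 mild-AD patients.
X = np.vstack([rng.normal(0.0, 1.0, (45, 20)), rng.normal(0.4, 1.0, (116, 20))])
y = np.array([0] * 45 + [1] * 116)  # 0 = control, 1 = mild AD

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=cv)

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```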

Relevance:

20.00%

Publisher:

Abstract:

In mid-July 2003, the U.S. Army Tank-Automotive & Armaments Command (TACOM) performed a series of experiments at the Keweenaw Research Center (KRC) with a remotely operated mine roller system. This system, named Panther Lite, consists of two M113 Armored Personnel Carriers (APCs) connected by a Tandem Vehicle Linkage Assembly (TVLA). The system has three sets of mine rollers: two connected to the front of the lead vehicle and one set trailing from the trail vehicle. Currently, the system requires two joystick controllers. One regulates the braking of the tracks, throttle, and transmission of the lead vehicle; the other controls the braking and throttle of the rear vehicle. A single operator controls both joysticks, attempting to maneuver the lead vehicle along a desired path while making compensation maneuvers to reduce lateral loads in the TVLA and to guide the rear mine rollers along the desired path. The purpose of this project is to create algorithms that allow the slave (trail) vehicle to operate using the inputs that maneuver the control (lead) vehicle. The project will proceed by first reconstructing the experimental data; kinematic models will then be generated, simulations created, and the models correlated with the reconstructions of the experimental data. The successful completion of this project will be a first step toward eliminating the need for the second joystick.
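As a starting point for the kinematic models mentioned above, a generic skid-steer (tracked-vehicle) model is sketched below. It is my own illustration of how joystick-style inputs (left/right track speeds) map to hull motion; it does not model the TVLA coupling or the actual Panther Lite dynamics.

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float        # position, meters
    y: float
    heading: float  # radians

def step(state: VehicleState, v_left: float, v_right: float,
         track_width: float, dt: float) -> VehicleState:
    """Advance a skid-steer kinematic model by one time step.

    The mean of the track speeds drives the hull forward;
    their difference yaws it.
    """
    v = 0.5 * (v_left + v_right)
    omega = (v_right - v_left) / track_width
    return VehicleState(
        x=state.x + v * math.cos(state.heading) * dt,
        y=state.y + v * math.sin(state.heading) * dt,
        heading=state.heading + omega * dt,
    )

lead = VehicleState(0.0, 0.0, 0.0)
for _ in range(100):  # 1 s of a gentle left turn
    lead = step(lead, v_left=1.8, v_right=2.0, track_width=2.5, dt=0.01)
print(round(lead.x, 3), round(lead.y, 3), round(lead.heading, 4))
```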

Relevance:

20.00%

Publisher:

Abstract:

In this dissertation, the problem of creating effective control algorithms for the large-scale Adaptive Optics (AO) systems of the new generation of giant optical telescopes is addressed. The effectiveness of AO control algorithms is evaluated in several respects, such as computational complexity, compensation error rejection, and robustness, i.e. reasonable insensitivity to system imperfections. The results of this research are summarized as follows (a toy sketch of the phase-prediction model in item 2 follows the list):

1. Robustness study of the Sparse Minimum Variance Pseudo Open Loop Controller (POLC) for multi-conjugate adaptive optics (MCAO). An AO system model that accounts for various system errors was developed and applied to check the stability and performance of the POLC algorithm, one of the most promising approaches for future AO systems control. Numerous simulations show that, despite the initial assumption that exact system knowledge is necessary for the POLC algorithm to work, it is highly robust against various system errors.

2. Predictive Kalman Filter (KF) and Minimum Variance (MV) control algorithms for MCAO. The limiting performance of the non-dynamic Minimum Variance and dynamic KF-based phase estimation algorithms for MCAO was evaluated through Monte Carlo simulations. The validity of a simple near-Markov autoregressive model of phase dynamics was tested, and its ability to predict the turbulence phase was demonstrated for both single- and multi-conjugate AO. It was also shown that the more complicated KF approach offers no performance improvement over the much simpler MV algorithm in the case of MCAO.

3. Sparse predictive Minimum Variance control algorithm for MCAO. A temporal prediction stage was added to the non-dynamic MV control algorithm in such a way that no additional computational burden is introduced. Simulations confirm that phase prediction makes it possible to significantly reduce the system sampling rate, and thus the overall computational complexity, while keeping the system stable and effectively compensating for measurement and control latencies.
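As a toy illustration of the near-Markov autoregressive phase model tested in item 2, the scalar sketch below generates an AR(1) phase sequence and applies its one-step minimum-variance prediction. The coefficient and noise level are invented, and a real MCAO model couples many spatial modes rather than a single scalar.

```python
import numpy as np

rng = np.random.default_rng(1)

# Near-Markov AR(1) phase dynamics: phi[t+1] = a * phi[t] + w[t], w ~ N(0, q).
a, q = 0.995, 1e-4  # assumed AR coefficient and process-noise variance
phi, phis, preds = 0.0, [], []
for _ in range(10_000):
    preds.append(a * phi)                        # one-step MV prediction
    phi = a * phi + rng.normal(0.0, np.sqrt(q))  # true phase evolution
    phis.append(phi)

mse = np.mean((np.array(phis) - np.array(preds)) ** 2)
print(f"one-step prediction MSE ~= {mse:.2e} (theoretical floor: q = {q:.0e})")
```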

Relevance:

20.00%

Publisher:

Abstract:

ABSTRACT:

BACKGROUND: Pelvic x-ray is a routine part of the primary survey of polytraumatized patients according to Advanced Trauma Life Support (ATLS) guidelines. However, pelvic CT is the gold-standard imaging technique for the diagnosis of pelvic fractures. This study was conducted to confirm the safety of a modified ATLS algorithm that omits pelvic x-ray in hemodynamically stable polytraumatized patients with a clinically stable pelvis, in favour of later pelvic examination by CT scan.

METHODS: We conducted a retrospective analysis of all polytraumatized patients in our emergency room between 01.07.2004 and 31.01.2006. Inclusion criteria were blunt abdominal trauma, initial hemodynamic stability, and a stable pelvis on clinical examination. We excluded patients requiring immediate intervention because of hemodynamic instability.

RESULTS: We reviewed the records of n = 452 polytraumatized patients, of which n = 91 fulfilled the inclusion criteria (56% male, mean age 45 years). The mechanism of trauma was road traffic accidents in 43% and falls in 47%. In 68/91 (75%) patients, both a pelvic x-ray and a CT examination were performed; the remainder had only pelvic CT. In 6/68 (9%) patients, a pelvic fracture was diagnosed by pelvic x-ray; none of these was a false positive, i.e. in all six the fracture was confirmed on pelvic CT. In 3/68 (4%) cases a fracture was missed on the pelvic x-ray but confirmed on CT (false negative on x-ray). None of the diagnosed fractures needed immediate therapeutic intervention. On computed tomography (AO classification), 5 fractures (56%) were classified as type A and another 4 (44%) as B 2.1. One A 2.1 fracture was found in a clinically stable patient who received only a CT scan (1/23).

CONCLUSION: While pelvic x-ray is an integral part of ATLS assessment, this retrospective study suggests that in hemodynamically stable patients with a clinically stable pelvis its sensitivity is only 67% (6 of 9 CT-confirmed fractures were detected), and it may safely be omitted in favor of a pelvic CT examination if such an examination is planned in the adjunct assessment and available. The results support the safety and utility of our modified ATLS algorithm; a randomized controlled trial using the algorithm can safely be conducted to confirm the results.
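The 67% figure follows directly from the reported counts (6 fractures seen on x-ray, 3 missed but confirmed on CT); a minimal check:

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Fraction of CT-confirmed fractures that pelvic x-ray detected."""
    return true_positives / (true_positives + false_negatives)

print(f"{sensitivity(6, 3):.0%}")  # -> 67%
```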

Relevance:

20.00%

Publisher:

Abstract:

The numerical solution of the incompressible Navier-Stokes equations offers an effective alternative to the experimental analysis of fluid-structure interaction (FSI), i.e. the dynamical coupling between a fluid and a solid, which is otherwise very complex, time consuming and expensive. A method that can accurately model these mechanical systems by numerical solution is therefore a great option, and the advantages become even more obvious for huge structures like bridges, high-rise buildings, or wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation, called the KLE, for solving the incompressible Navier-Stokes equations in such FSI problems. The scheme allows for the implementation of robust adaptive ODE time-integration schemes, and thus lets us tackle the various multiphysics problems as separate modules. The current KLE algorithm employs a structured or unstructured mesh for spatial discretization and allows the use of a self-adaptive or fixed-time-step ODE solver for unsteady problems. This research analyzes the effects of the Courant-Friedrichs-Lewy (CFL) condition for the KLE when applied to the unsteady Stokes problem. The objective is to conduct a numerical analysis for stability and, hence, for convergence. Our results confirm that the time step Δt is constrained by the CFL-like condition Δt ≤ const · h^α, where h denotes the spatial discretization size.
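A minimal sketch of how such a CFL-like bound is used in practice, assuming placeholder values for the constant and the exponent α (the abstract does not report them):

```python
def max_stable_dt(h: float, alpha: float, c: float) -> float:
    """CFL-like bound dt <= c * h**alpha; c and alpha are placeholders."""
    return c * h ** alpha

# Refining the mesh by a factor of 2 shrinks the admissible step by 2**alpha.
for h in (0.1, 0.05, 0.025):
    print(f"h = {h:6.3f}  ->  dt_max = {max_stable_dt(h, alpha=2.0, c=1.0):.6f}")
```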

Relevance:

20.00%

Publisher:

Abstract:

Spacecraft formation flying navigation continues to receive a great deal of interest. The research presented in this dissertation focuses on developing methods for estimating spacecraft absolute and relative positions, assuming measurements of only relative positions using wireless sensors. Implementing the extended Kalman filter for the spacecraft formation navigation problem results in high estimation errors and, at times, instabilities in state estimation, due to the high nonlinearities in the system dynamic model. Several approaches are attempted in this dissertation with the aim of increasing estimation stability and improving estimation accuracy.

A differential geometric filter is implemented for spacecraft position estimation. The differential geometric filter avoids the linearization step (always carried out in the extended Kalman filter) through a mathematical transformation that converts the nonlinear system into a linear one. A linear estimator is designed in the linear domain and then transformed back to the physical domain. As detailed in this dissertation, this approach demonstrated better estimation stability for spacecraft formation position estimation.

The constrained Kalman filter is also implemented for estimating the absolute positions of a spacecraft formation. The orbital motion of a spacecraft is characterized by two range extrema (perigee and apogee), at which the rate of change of the spacecraft's range vanishes. This motion constraint can be used to improve position estimation accuracy. Applying the constrained Kalman filter at only two points in the orbit causes filter instability, so two variables are introduced into the constrained Kalman filter to maintain stability and improve estimation accuracy. An extended Kalman filter is implemented as a benchmark, and simulation results show that the constrained Kalman filter provides better estimation accuracy than the extended Kalman filter.

A Weighted Measurement Fusion Kalman Filter (WMFKF) is proposed in this dissertation. In wireless localizing sensors, measurement error is proportional to the distance the signal travels and to sensor noise. The proposed WMFKF does not model the signal traveling time delay; instead, each measurement is weighted based on the measured signal travel distance. The estimation performance is compared to the standard Kalman filter in two scenarios: a wireless local positioning system (WLPS) in a GPS-denied environment, and the availability of both WLPS and GPS measurements. The simulation results show that the WMFKF has accuracy similar to the standard Kalman filter (KF) in the GPS-denied environment, but maintains the position estimation error within its expected error boundary when the WLPS detection range limit is above 30 km. In addition, the WMFKF has better accuracy and stability when GPS is available. A computational cost analysis shows that the WMFKF has a lower computational cost than the standard KF and a higher ellipsoid-error-probable percentage than the standard measurement fusion method.

Finally, a method to determine the relative attitudes between three spacecraft is developed. The method requires four direction measurements between the three spacecraft. Simulation results and covariance analysis show that the method's error falls within a three-sigma boundary without exhibiting any singularity issues. A study of the accuracy of the proposed method with respect to the shape of the spacecraft formation is also presented.
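A minimal sketch of the weighting idea behind the WMFKF: scale each wireless measurement's noise variance with the distance the signal traveled, so distant (noisier) measurements get less weight in the Kalman update. The linear scaling and sigma0 are my assumptions, not the dissertation's exact model.

```python
import numpy as np

def weighted_kf_update(x, P, z, H, sigma0, distances):
    """Kalman measurement update with distance-weighted noise.

    Each measurement's standard deviation is modeled as sigma0 * distance
    (an assumed scaling), so far-traveling signals are trusted less.
    """
    R = np.diag((sigma0 * np.asarray(distances)) ** 2)
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Two measurements of a 2-state position, with signals traveling 5 km and 30 km.
x, P, H = np.zeros(2), np.eye(2), np.eye(2)
x, P = weighted_kf_update(x, P, z=np.array([1.0, 1.2]), H=H,
                          sigma0=1e-4, distances=[5e3, 30e3])
print(x.round(3))  # the 5 km measurement pulls its state much harder
```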

Relevance:

20.00%

Publisher:

Abstract:

The problem of the optimal design of multi-gravity-assist space trajectories with a free number of deep space maneuvers (MGADSM) poses multi-modal cost functions. In the general form of the problem, the number of design variables is solution dependent. To handle global optimization problems in which the number of design variables varies from one solution to another, two novel genetic-based techniques are introduced: the hidden genes genetic algorithm (HGGA) and the dynamic-size multiple population genetic algorithm (DSMPGA). In HGGA, a fixed length for the design variables is assigned to all solutions. The independent variables of each solution are divided into effective and ineffective (hidden) genes; hidden genes are excluded from cost function evaluations, while full-length solutions undergo standard genetic operations. In DSMPGA, sub-populations of fixed-size design spaces are randomly initialized and standard genetic operations are carried out for a stage of generations. A new population is then created by reproduction from all members based on their relative fitness. The resulting sub-populations have sizes different from their initial sizes, and repeating the process increases the size of the sub-populations of more fit solutions. Both techniques are applied to several MGADSM problems. They can determine the number of swing-bys, the planets to swing by, the launch and arrival dates, and the number of deep space maneuvers, as well as their locations, magnitudes, and directions, in an optimal sense. The results show that solutions obtained using the developed tools match known solutions for complex case studies. The HGGA is also used to obtain the asteroid sequence and the mission structure in the Global Trajectory Optimization Competition (GTOC) problem. As an application of GA optimization to Earth orbits, the problem of visiting a set of ground sites within a constrained time frame is solved; the J2 perturbation and zonal coverage are considered in designing repeated Sun-synchronous orbits. Finally, a new set of orbits, the repeated shadow track orbits (RSTO), is introduced, whose parameters are optimized so that the shadow of a spacecraft on the Earth visits the same locations periodically, every desired number of days.
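A toy sketch of the hidden-genes idea: every chromosome has full length, but the cost function reads only the effective genes. In the real HGGA the set of effective genes is itself solution dependent (tied, for example, to the number of maneuvers); here it is fixed for brevity, and all names and the cost function are illustrative.

```python
import random

N_GENES, N_ACTIVE = 12, 5  # full chromosome length vs. effective genes

def cost(genes):
    """Toy cost that reads only the effective genes; hidden genes are
    carried along by the genetic operators but never affect fitness."""
    return sum((g - 0.5) ** 2 for g in genes[:N_ACTIVE])

def next_generation(population, mutation_rate=0.2):
    """Truncation selection plus Gaussian mutation on full-length solutions."""
    parents = sorted(population, key=cost)[: len(population) // 2]
    children = [[g + random.gauss(0.0, 0.1) if random.random() < mutation_rate
                 else g for g in p] for p in parents]
    return parents + children

random.seed(0)
population = [[random.random() for _ in range(N_GENES)] for _ in range(20)]
for _ in range(100):
    population = next_generation(population)
print(f"best cost: {min(cost(ind) for ind in population):.5f}")
```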

Relevance:

20.00%

Publisher:

Abstract:

All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Both military and civilian surveillance, gun-sighting, and target identification systems are interested in terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind deconvolution (MFBD) technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and is compared to other speckle-imaging techniques. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate and severe turbulence conditions, each set consisting of 1000 simulated, turbulence-degraded images. The mean-square-error (MSE) performance of the estimator is evaluated as a function of the number of input images and the number of Zernike polynomial terms used to characterize the point spread function. I compare the MSE performance of speckle-imaging methods and a maximum-likelihood MFBD method applied to long-path horizontal imaging scenarios, with both methods reconstructing a scene from simulated imagery featuring anisoplanatic turbulence-induced aberrations. The comparison shows that speckle-imaging techniques reduce the MSE by 46 percent, 42 percent and 47 percent on average for the low, moderate and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates; the MFBD method provides 40 percent, 29 percent and 36 percent improvements in MSE on average under the same conditions. Under low-light conditions (less than 100 photons per pixel), improvements of 39 percent, 29 percent and 27 percent are obtained using speckle-imaging methods with 25 input frames, and of 38 percent, 34 percent and 33 percent, respectively, for the MFBD method with 150 input frames. The MFBD estimator is also applied to three sets of field data and the results presented. Finally, a combined Bispectrum-MFBD hybrid estimator is proposed and investigated; this technique consistently provides a lower MSE and a smaller variance in the estimate under all three simulated turbulence conditions.
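For contrast with the blind case, the sketch below shows the simplest multi-frame deconvolution: a Wiener-style combination that assumes the per-frame PSFs are known. The maximum-likelihood MFBD estimator studied here additionally estimates the PSFs (via Zernike coefficients) and handles Gaussian or Poisson noise models, none of which this toy does.

```python
import numpy as np

def multiframe_wiener(frames, psfs, reg=1e-3):
    """Known-PSF, multi-frame Wiener combination in the Fourier domain:
        X_hat = sum_i conj(H_i) Y_i / (sum_i |H_i|^2 + reg)
    where H_i is the transfer function of frame i's PSF and reg is a
    small regularizer standing in for the noise-to-signal ratio."""
    num = np.zeros(frames[0].shape, dtype=complex)
    den = np.full(frames[0].shape, reg, dtype=float)
    for y, h in zip(frames, psfs):
        H = np.fft.fft2(np.fft.ifftshift(h))  # PSF assumed centered
        num += np.conj(H) * np.fft.fft2(y)
        den += np.abs(H) ** 2
    return np.real(np.fft.ifft2(num / den))
```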