72 results for RM algorithm
Abstract:
Cataloging geocentric objects can be put in the framework of Multiple Target Tracking (MTT). Current work tends to focus on the S = 2 MTT problem because of its favorable computational complexity of O(n²). The MTT problem becomes NP-hard for a dimension of S > 3. The challenge is to find an approximation to the solution within a reasonable computation time. To efficiently approximate this solution a Genetic Algorithm is used. The algorithm is applied to a simulated test case. These results represent the first steps towards a method that can treat the S > 3 problem efficiently and with minimal manual intervention.
Abstract:
Currently several thousand objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). In this context both the correct associations among the observations and the orbits of the objects have to be determined. The complexity of the MTT problem is defined by its dimension S, where S stands for the number of 'fences' used in the problem; each fence consists of a set of observations that all originate from different targets. For a dimension of S > 3 the MTT problem becomes NP-hard. As of now no algorithm exists that can solve an NP-hard problem in an optimal manner within a reasonable (polynomial) computation time. However, there are algorithms that can approximate the solution with a realistic computational effort. To this end an Elitist Genetic Algorithm is implemented to approximately solve the S > 3 MTT problem in an efficient manner. Its complexity is studied and it is found that an approximate solution can be obtained in polynomial time. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention.
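Neither abstract spells out the solver itself, so the following is a minimal, mutation-only sketch of an elitist genetic algorithm for the S-dimensional association problem, assuming a user-supplied cost function (e.g. summed orbit-fit residuals of the candidate tracks). The encoding (one permutation per fence, so that column j groups one observation from each fence into a track) and all names are illustrative, not the papers' implementation; a full GA would typically add a crossover operator.

```python
import random

def elitist_ga(cost, n_obs, n_fences, pop_size=50, generations=200,
               elite_frac=0.1, mut_rate=0.05):
    """Approximate the S-dimensional assignment: a candidate is one
    permutation per fence (fence 0 fixed as identity), so column j of
    the genome groups one observation from each fence into a track."""
    def random_candidate():
        return [list(range(n_obs))] + \
               [random.sample(range(n_obs), n_obs) for _ in range(n_fences - 1)]

    def mutate(cand):
        child = [perm[:] for perm in cand]
        for perm in child[1:]:                       # fence 0 stays fixed
            if random.random() < mut_rate:
                i, j = random.sample(range(n_obs), 2)
                perm[i], perm[j] = perm[j], perm[i]  # swap keeps a valid permutation
        return child

    pop = [random_candidate() for _ in range(pop_size)]
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(generations):
        pop.sort(key=cost)                           # lower cost = better orbit fit
        elites = pop[:n_elite]                       # elitism: the best survive intact
        pop = elites + [mutate(random.choice(elites))
                        for _ in range(pop_size - n_elite)]
    return min(pop, key=cost)
```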
Abstract:
Behavior is one of the most important indicators for assessing cattle health and well-being. The objective of this study was to develop and validate a novel algorithm to monitor the locomotor behavior of loose-housed dairy cows based on the output of the RumiWatch pedometer (ITIN+HOCH GmbH, Fütterungstechnik, Liestal, Switzerland). Locomotion data were acquired by pedometer measurements at a sampling rate of 10 Hz, with simultaneous video recordings for later manual observation. The study consisted of 3 independent experiments. Experiment 1 was carried out to develop and validate the algorithm for lying behavior, experiment 2 for walking and standing behavior, and experiment 3 for stride duration and stride length. The final version was validated using raw data collected from cows not included in the development of the algorithm. Spearman correlation coefficients were calculated between accelerometer variables and the respective data derived from the video recordings (gold standard). Dichotomous data were expressed as the proportion of correctly detected events, and the overall difference for continuous data was expressed as the relative measurement error. The proportions of correctly detected events or bouts were 1 for stand-ups, lie-downs, standing bouts, and lying bouts, and 0.99 for walking bouts. The relative measurement error and Spearman correlation coefficient for lying time were 0.09% and 1; for standing time, 4.7% and 0.96; for walking time, 17.12% and 0.96; for number of strides, 6.23% and 0.98; for stride duration, 6.65% and 0.75; and for stride length, 11.92% and 0.81, respectively. The strong to very high correlations of the variables between visual observation and converted pedometer data indicate that the novel RumiWatch algorithm may markedly improve automated livestock management systems for efficient health monitoring of dairy cows.
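As one concrete reading of the two agreement measures quoted above, the sketch below computes a pooled relative measurement error and a Spearman coefficient from paired per-animal totals. The numbers are invented, and this error formula is one common definition rather than necessarily the study's exact one.

```python
from scipy.stats import spearmanr

def relative_measurement_error(device, gold):
    """Pooled relative error (%) of device totals against the video
    gold standard, summed over all animals."""
    return 100.0 * sum(abs(d - g) for d, g in zip(device, gold)) / sum(gold)

# Illustrative (made-up) lying-time totals in minutes per cow:
pedometer = [712, 655, 803, 590]
video = [710, 660, 800, 595]
print(relative_measurement_error(pedometer, video))  # relative error in %
rho, _ = spearmanr(pedometer, video)                 # rank correlation vs. gold standard
print(rho)
```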
Abstract:
Any image-processing object detection algorithm somehow tries to integrate the object light (Recognition Step) and applies statistical criteria to distinguish objects of interest from other objects or from pure background (Decision Step). There are various ways in which these two basic steps can be realized, as can be seen in the different detection methods proposed in the literature. An ideal detection algorithm should provide high recognition sensitivity with high decision accuracy and require a reasonable computational effort. In reality, a gain in sensitivity is usually only possible with a loss in decision accuracy and with a higher computational effort. So, automatic detection of faint streaks is still a challenge. This paper presents a detection algorithm using spatial filters simulating the geometrical form of possible streaks on a CCD image. This is realized by image convolution. The goal of this method is to generate a more or less perfect match between a streak and a filter by varying the length and orientation of the filters. The convolution answers are accepted or rejected according to an overall threshold given by the background statistics. This approach yields as a first result a huge number of accepted answers, due to filters partially covering streaks or remaining stars. To avoid this, a set of additional acceptance criteria has been included in the detection method. All criteria parameters are justified by background and streak statistics, and they affect the detection sensitivity only marginally. Tests on images containing simulated streaks and on real images containing satellite streaks show a very promising sensitivity, reliability, and running speed for this detection method. Since all method parameters are based on statistics, both the true alarm and the false alarm probabilities are well controllable. Moreover, the proposed method does not pose any extraordinary demands on the computer hardware or on the image acquisition process.
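The core idea, convolving the image with oriented line filters and thresholding the responses against background statistics, can be sketched as follows. The kernel construction, filter lengths, and single global threshold are simplifying assumptions, and the paper's additional acceptance criteria are omitted here.

```python
import numpy as np
from scipy.ndimage import convolve

def line_kernel(length, angle_deg):
    """Unit-sum kernel approximating a straight streak of the given
    length and orientation on the pixel grid."""
    k = np.zeros((length, length))
    c = (length - 1) / 2.0
    t = np.deg2rad(angle_deg)
    for s in np.linspace(-c, c, 2 * length):   # oversample to avoid gaps
        r = int(round(c + s * np.sin(t)))
        q = int(round(c + s * np.cos(t)))
        k[r, q] = 1.0
    return k / k.sum()

def detect_streaks(image, lengths=(11, 21), angles=range(0, 180, 10), k_sigma=5.0):
    """Flag pixels whose response to any oriented filter exceeds a
    global threshold derived from crude background statistics."""
    bg_mean, bg_std = np.median(image), image.std()
    hits = np.zeros(image.shape, dtype=bool)
    for length in lengths:
        for angle in angles:
            response = convolve(image, line_kernel(length, angle), mode="nearest")
            hits |= response > bg_mean + k_sigma * bg_std
    return hits
```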
Abstract:
When considering data from many trials, it is likely that some of them present a markedly different intervention effect or exert an undue influence on the summary results. We develop a forward search algorithm for identifying outlying and influential studies in meta-analysis models. The forward search algorithm starts by fitting the hypothesized model to a small subset of likely outlier-free studies and proceeds by adding studies to the set one by one, at each step choosing the study closest to the model fitted to the existing set. As each study is added, plots of estimated parameters and measures of fit are monitored, and outliers are identified by sharp changes in these forward plots. We apply the proposed outlier detection method to two real data sets: a meta-analysis of 26 studies that examines the effect of writing-to-learn interventions on academic achievement, adjusting for three possible effect modifiers, and a meta-analysis of 70 studies that compares a fluoride toothpaste treatment to placebo for preventing dental caries in children. A simple simulated example is used to illustrate the steps of the proposed methodology, and a small-scale simulation study is conducted to evaluate the performance of the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
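A minimal sketch of the forward search idea, assuming a simple fixed-effect model (the paper's meta-analyses also involve random effects and effect modifiers): start from a likely outlier-free core, add the closest study at each step, and monitor the pooled estimate for sharp changes.

```python
import numpy as np

def forward_search(effects, variances, m0=5):
    """Return the pooled fixed-effect estimate after each step of the
    forward search; sharp jumps in this path flag outliers entering
    the set."""
    effects = np.asarray(effects, float)
    w = 1.0 / np.asarray(variances, float)
    pooled = np.sum(w * effects) / np.sum(w)
    # Initial subset: the m0 studies closest to the overall pooled mean.
    order = np.argsort(np.abs(effects - pooled) * np.sqrt(w))
    inside, outside = list(order[:m0]), list(order[m0:])
    path = []
    while outside:
        mu = np.sum(w[inside] * effects[inside]) / np.sum(w[inside])
        path.append(mu)
        # Add the outside study with the smallest standardized residual.
        resid = np.abs(effects[outside] - mu) * np.sqrt(w[outside])
        inside.append(outside.pop(int(np.argmin(resid))))
    path.append(np.sum(w[inside] * effects[inside]) / np.sum(w[inside]))
    return path
```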
Abstract:
OBJECTIVE In this study, the Progressive Resolution Optimizer PRO3 (Varian Medical Systems) is compared to the previous version, PRO2, with respect to its potential to improve dose sparing of the organs at risk (OAR) and dose coverage of the PTV for head and neck cancer patients. MATERIALS AND METHODS Volumetric modulated arc therapy (VMAT) treatment plans were generated for eight head and neck cancer patients. All cases have 2-3 phases, and the total prescribed dose (PD) was 60-72 Gy in the PTV. The study focuses mainly on the phase 1 plans, which all have an identical PD of 54 Gy and complex PTV structures overlapping the parotids. Optimization was performed based on planning objectives for the PTV according to ICRU 83, with minimal dose to the spinal cord and to the parotids outside the PTV. In order to assess the quality of the optimization algorithms, an identical set of constraints was used for both PRO2 and PRO3. The resulting treatment plans were investigated with respect to dose distribution based on an analysis of the dose-volume histograms. RESULTS For the phase 1 plans (PD = 54 Gy), the near-maximum dose D2% of the spinal cord could be reduced to 22±5 Gy with PRO3, as compared to 32±12 Gy with PRO2, averaged over all patients. The mean dose to the parotids was also lower in PRO3 plans than in PRO2 plans, but the differences were less pronounced. A PTV coverage of V95% = 97±1% could be reached with PRO3, as compared to 86±5% with PRO2. In clinical routine, these PRO2 plans would require modifications to obtain better PTV coverage at the cost of higher OAR doses. CONCLUSION A comparison between the PRO3 and PRO2 optimization algorithms was performed for eight head and neck cancer patients. In general, the quality of VMAT plans for head and neck patients is improved with PRO3 as compared to PRO2. The dose to OARs can be reduced significantly, especially for the spinal cord. These reductions are achieved together with better PTV coverage than PRO2. The improved spinal cord sparing offers new opportunities for all types of paraspinal tumors and for re-irradiation of recurrent tumors or second malignancies.
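The plan-quality metrics reported above, D2% and V95%, are standard dose-volume histogram statistics; a sketch of how they fall out of a structure's voxel doses follows, with synthetic example numbers.

```python
import numpy as np

def d_percent(doses_gy, p=2.0):
    """Near-maximum dose Dp%: the minimum dose received by the hottest
    p% of the structure's voxels (here the 98th percentile for D2%)."""
    return np.percentile(doses_gy, 100.0 - p)

def v_percent(doses_gy, prescribed_gy, p=95.0):
    """Vp%: fraction of voxels receiving at least p% of the prescription."""
    return np.mean(doses_gy >= prescribed_gy * p / 100.0)

# Synthetic voxel doses (Gy) for a PTV with PD = 54 Gy:
ptv_doses = np.random.normal(54.0, 1.5, size=10_000)
print(d_percent(ptv_doses))          # D2% in Gy
print(v_percent(ptv_doses, 54.0))    # V95% as a fraction
```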
Abstract:
BACKGROUND AND AIMS The Barcelona Clinic Liver Cancer (BCLC) staging system is the algorithm most widely used to manage patients with hepatocellular carcinoma (HCC). We aimed to investigate the extent to which the BCLC recommendations effectively guide clinical practice and to assess the reasons for any deviation from the recommendations. MATERIAL AND METHODS The first-line treatments assigned to patients included in the prospective Bern HCC cohort were analyzed. RESULTS Among 223 patients included in the cohort, 116 were not treated according to the BCLC algorithm. Eighty percent of the patients in BCLC stage 0 (very early HCC) and 60% of the patients in BCLC stage A (early HCC) received the recommended curative treatment. Only 29% of the BCLC stage B patients (intermediate HCC) and 33% of the BCLC stage C patients (advanced HCC) were treated according to the algorithm. Eighty-nine percent of the BCLC stage D patients (terminal HCC) were treated with best supportive care, as recommended. In 98 patients (44%), the performance status was disregarded in the stage assignment. CONCLUSION The management of HCC in clinical practice frequently deviates from the BCLC recommendations. Most of the curative therapy options, which have well-defined selection criteria, were allocated according to the recommendations, while the majority of the palliative therapy options were assigned to patients with tumor stages not aligned with the recommendations. The performance status, the only subjective parameter in the algorithm, is also the least respected.
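For readers unfamiliar with the algorithm, the stage-to-treatment mapping used when checking adherence can be rendered roughly as below. This is a deliberately coarse sketch: the actual BCLC algorithm also branches on liver function, performance status, and tumor burden within each stage.

```python
# Coarse first-line recommendations per BCLC stage (simplified sketch,
# not a substitute for the full algorithm).
BCLC_RECOMMENDATION = {
    "0": "curative: ablation / resection",
    "A": "curative: resection / transplantation / ablation",
    "B": "palliative: transarterial chemoembolization (TACE)",
    "C": "palliative: systemic therapy (sorafenib)",
    "D": "best supportive care",
}

def recommended(stage):
    """Look up the recommended first-line treatment for a BCLC stage."""
    return BCLC_RECOMMENDATION[stage]
```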
Abstract:
OBJECTIVE The aim of this study was to directly compare metal artifact reduction (MAR) of virtual monoenergetic extrapolations (VMEs) from dual-energy computed tomography (CT) with iterative MAR (iMAR) from single-energy CT in pelvic CT with hip prostheses. MATERIALS AND METHODS A human pelvis phantom with unilateral or bilateral metal inserts of different materials (steel and titanium) was scanned with third-generation dual-source CT using single-energy (120 kVp) and dual-energy (100/150 kVp) protocols at similar radiation dose (CT dose index, 7.15 mGy). Three image series were reconstructed for each phantom configuration: uncorrected, VME, and iMAR. Two independent, blinded radiologists assessed image quality quantitatively (noise and attenuation) and subjectively (5-point Likert scale). Intraclass correlation coefficients (ICCs) and Cohen κ were calculated to evaluate interreader agreement. Repeated measures analysis of variance and the Friedman test were used to compare quantitative and qualitative image quality. Post hoc testing was performed using a Bonferroni-corrected P < 0.017. RESULTS Agreement between readers was high for noise (all, ICC ≥ 0.975) and attenuation (all, ICC ≥ 0.986); agreement for the qualitative assessment was good to perfect (all, κ ≥ 0.678). Compared with uncorrected images, VME showed significant noise reduction only in the phantom with titanium (P < 0.017), whereas iMAR showed significantly lower noise in all regions and phantom configurations (all, P < 0.017). In all phantom configurations, deviations of attenuation were smallest in images reconstructed with iMAR. For VME, there was a tendency toward higher subjective image quality in phantoms with titanium compared with uncorrected images, though without reaching statistical significance (P > 0.017). Subjective image quality was rated significantly higher for images reconstructed with iMAR than for uncorrected images in all phantom configurations (all, P < 0.017). CONCLUSIONS Iterative MAR showed better MAR capabilities than VME in settings with bilateral hip prostheses or a unilateral steel prosthesis. In settings with a unilateral hip prosthesis made of titanium, VME and iMAR performed similarly well.
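The quantitative endpoints, noise and attenuation per region, reduce to simple region-of-interest statistics; a sketch follows, assuming HU images with boolean ROI masks and an artifact-free reference attenuation taken from the phantom.

```python
import numpy as np

def roi_stats(hu_image, roi_mask, reference_hu=0.0):
    """Return (attenuation, noise, attenuation deviation) for one ROI:
    mean CT number, its standard deviation, and the absolute deviation
    from an artifact-free reference value."""
    roi = hu_image[roi_mask]
    return roi.mean(), roi.std(), abs(roi.mean() - reference_hu)
```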
Abstract:
SOMS is a general surrogate-based multistart algorithm, which is used in combination with any local optimizer to find global optima for computationally expensive functions with multiple local minima. SOMS differs from previous multistart methods in that a surrogate approximation is used by the multistart algorithm to help reduce the number of function evaluations necessary to identify the most promising points from which to start each nonlinear programming local search. SOMS's numerical results are compared with four well-known methods, namely, Multi-Level Single Linkage (MLSL), MATLAB's MultiStart, MATLAB's GlobalSearch, and GLOBAL. In addition, we propose a class of wavy test functions that mimic the wavy nature of objective functions arising in many black-box simulations. Extensive comparisons of algorithms on the wavy test functions and on earlier standard global-optimization test functions are carried out for a total of 19 different test problems. The numerical results indicate that SOMS performs favorably in comparison to alternative methods and does especially well on wavy functions when the number of function evaluations allowed is limited.
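The central mechanism, using a surrogate to pick promising start points before spending expensive evaluations on local searches, might look like the sketch below. The RBF surrogate, candidate-pool size, and local optimizer are stand-in choices, not the authors' code.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def surrogate_multistart(f, bounds, n_init=20, n_starts=5, seed=None):
    """Multistart in the spirit of SOMS: fit a cheap surrogate to the
    points evaluated so far, rank a large random pool on the surrogate,
    and launch an expensive local search only from the best candidate."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))
    y = np.array([f(x) for x in X])               # expensive evaluations
    best = None
    for _ in range(n_starts):
        surrogate = RBFInterpolator(X, y)         # cheap stand-in for f
        pool = rng.uniform(lo, hi, size=(500, len(lo)))
        start = pool[np.argmin(surrogate(pool))]  # most promising start point
        res = minimize(f, start, bounds=list(zip(lo, hi)))
        X, y = np.vstack([X, res.x]), np.append(y, res.fun)
        if best is None or res.fun < best.fun:
            best = res
    return best
```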
Abstract:
Many attempts have already been made to detect exomoons around transiting exoplanets, but the first confirmed discovery is still pending. The experience gathered so far allows us to better optimize future space telescopes for this challenge already during the development phase. In this paper we focus on the forthcoming CHaracterising ExOPlanet Satellite (CHEOPS), describing an optimized decision algorithm with step-by-step evaluation, and calculating the number of transits required for an exomoon detection for various planet-moon configurations that can be observed by CHEOPS. We explore the most efficient way for such an observation to minimize the cost in observing time. Our study is based on PTV observations (photocentric transit timing variation) in simulated CHEOPS data, but the recipe does not depend on the actual detection method and can be substituted with, e.g., the photodynamical method for later applications. Using current state-of-the-art simulations of CHEOPS data, we analyzed transit observation sets for different star-planet-moon configurations and performed a bootstrap analysis to determine their detection statistics. We have found that the detection limit is around an Earth-sized moon. In the case of favorable spatial configurations, systems with at least a large moon and a Neptune-sized planet, an 80% detection chance requires at least 5-6 transit observations on average. There is also a nonzero chance for smaller moons, but the detection statistics deteriorate rapidly and the necessary number of transit measurements increases quickly. After the CoRoT and Kepler spacecraft, CHEOPS will be the next dedicated space telescope to observe exoplanetary transits and characterize systems with known Doppler planets. Although it has a smaller aperture than Kepler (the ratio of the mirror diameters is about 1/3) and is mounted with a CCD similar to Kepler's, it will observe brighter stars and operate with a higher sampling rate; therefore, the detection limit for an exomoon can be the same or even better, which will make CHEOPS a competitive instrument in the quest for exomoons.
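The bootstrap detection statistics described above can be sketched as follows, under the simplifying assumption that a detection means the mean PTV amplitude of the resampled transits clears a preset significance threshold; the actual analysis works on full simulated CHEOPS light curves.

```python
import numpy as np

def detection_probability(ptv_amplitudes, n_transits, threshold,
                          n_boot=2000, seed=None):
    """Bootstrap estimate of the chance that n_transits observations
    yield a significant photocentric transit timing variation."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_boot):
        sample = rng.choice(ptv_amplitudes, size=n_transits, replace=True)
        if abs(sample.mean()) > threshold:
            hits += 1
    return hits / n_boot

def transits_needed(ptv_amplitudes, threshold, target=0.80, n_max=50):
    """Smallest number of transits whose bootstrap detection probability
    reaches the target (e.g. the 80% chance quoted above)."""
    for n in range(1, n_max + 1):
        if detection_probability(ptv_amplitudes, n, threshold) >= target:
            return n
    return None
```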