7 results for Density-based Scanning Algorithm
at Brock University, Canada
Abstract:
This research focuses on generating aesthetically pleasing images in virtual environments using the particle swarm optimization (PSO) algorithm. PSO is a stochastic, population-based search algorithm inspired by the flocking behavior of birds. In this research, we implement swarms of cameras flying through a virtual world in search of an image that is aesthetically pleasing. Virtual world exploration using particle swarm optimization is a new research area, of interest to both the scientific and artistic communities. Aesthetic rules such as the rule of thirds, subject matter, colour similarity, and horizon line are combined into a single multi-objective problem to be evaluated and solved over rendered images. A new multi-objective PSO algorithm, the sum-of-ranks PSO, is introduced and empirically compared to other single-objective and multi-objective swarm algorithms. An advantage of the sum-of-ranks PSO is that it remains effective on the high-dimensional problems arising in this research. Across many experiments, we show that our approach automatically produces images satisfying a variety of supplied aesthetic criteria.
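As a rough illustration of the sum-of-ranks idea, the minimal Python sketch below (not the thesis's implementation; the function name, the two example objectives, and the toy scores are assumptions) collapses a multi-objective score matrix into one scalar fitness per particle by ranking each objective independently and summing the ranks, so that no single objective dominates by virtue of its scale:

```python
import numpy as np

def sum_of_ranks(scores):
    """Collapse multi-objective scores into one scalar per particle.

    scores: (n_particles, n_objectives) array; lower is better on every
    objective. Each column is ranked independently (0 = best) and the
    per-objective ranks are summed.
    """
    # argsort of argsort yields each entry's rank within its column
    ranks = np.argsort(np.argsort(scores, axis=0), axis=0)
    return ranks.sum(axis=1)

# Toy example: 4 camera particles scored on two hypothetical aesthetic
# error measures (rule-of-thirds error, colour-similarity error)
scores = np.array([[0.10, 0.80],
                   [0.30, 0.20],
                   [0.90, 0.10],
                   [0.50, 0.50]])
print(sum_of_ranks(scores))  # [3 2 3 4]; the lowest sum guides the swarm
```

Because ranks replace raw objective values, the scalarization behaves identically no matter how many objectives are added, which is consistent with the abstract's claim that the method suits high-dimensional problems.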
Abstract:
This thesis introduces the Salmon Algorithm, a search meta-heuristic that can be applied to a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. It has a number of tunable parameters, so experiments were conducted to find the best parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to outperform an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm improved on the best known values for five of the six edit-code test cases. It matched the best known results on four of the seven Hamming codes and on all three covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
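The abstract does not specify the Salmon Algorithm's update rules, so no faithful implementation can be given here. Instead, the hedged sketch below shows the kind of parameter-tuning harness the experiments describe, wrapped around a stand-in stochastic tour builder for the TSP; all names, parameter choices, and the toy distance matrix are assumptions for illustration only:

```python
import itertools
import random

def tour_length(tour, dist):
    """Total length of a closed tour over the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def stochastic_tour_search(dist, iterations, greediness):
    """Stand-in guided random search: repeatedly builds tours, biasing each
    city choice toward near neighbours with probability `greediness`."""
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for _ in range(iterations):
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            if random.random() < greediness:
                nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
            else:
                nxt = random.choice(tuple(unvisited))
            tour.append(nxt)
            unvisited.remove(nxt)
        length = tour_length(tour, dist)
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Hypothetical tuning sweep of the kind the thesis describes
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
grid = {"iterations": [100, 500], "greediness": [0.5, 0.8, 0.95]}
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid, combo))
    _, best = stochastic_tour_search(dist, **params)
    print(params, "-> best tour length:", best)
```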
Abstract:
In 2003, prostate cancer (PCa) was estimated to be the most commonly diagnosed cancer and the third leading cause of cancer death in Canada. During PCa population screening, approximately 25% of patients with a normal digital rectal examination (DRE) and an intermediate serum prostate specific antigen (PSA) level have PCa. Since all such patients typically undergo biopsy, approximately 75% of these procedures are expected to be unnecessary. The purpose of this study was to compare the efficacy of clinical tests and algorithms in stage II screening for PCa while preventing unnecessary biopsies. The sample consisted of 201 consecutive men who were suspected of PCa based on the results of a DRE and serum PSA. These men were referred for venipuncture and transrectal ultrasound (TRUS). Clinical tests included TRUS, age-specific reference range PSA (Age-PSA), prostate specific antigen density (PSAD), and free-to-total prostate specific antigen ratio (%fPSA). Clinical results were evaluated individually and within algorithms. Cutoffs of 0.12 and 0.15 ng/ml/cc were employed for PSAD. Cutoffs that would provide a minimum sensitivity of 0.90 and 0.95, respectively, were used for %fPSA. Statistical analysis included ROC curve analysis and calculation of sensitivity (Sens), specificity (Spec), and positive likelihood ratio (LR), with corresponding confidence intervals (CI). The %fPSA, at a 23% cutoff ({Sens=0.92; CI, 0.06}, {Spec=0.41; CI, 0.09}, {LR=1.56; CI, 0.11}), proved to be the most efficacious independent clinical test. The combination of PSAD (cutoff 0.15 ng/ml/cc) and %fPSA (cutoff 23%) ({Sens=0.93; CI, 0.06}, {Spec=0.38; CI, 0.08}, {LR=1.50; CI, 0.10}) was the most efficacious clinical algorithm. This study advocates the use of %fPSA at a cutoff of 23% when screening patients with an intermediate serum PSA and a benign DRE.
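For reference, summary statistics of this form can be reproduced from a 2x2 screening table. The sketch below is a minimal Python illustration, assuming Wald intervals for the proportions and the standard log method for the LR+ interval (the study's exact interval formulas are not stated); the counts are hypothetical, chosen only to be approximately consistent with the reported Sens/Spec/LR:

```python
import math

def screening_stats(tp, fn, fp, tn, z=1.96):
    """Sens, Spec, and LR+ with approximate 95% intervals from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)
    half = lambda p, n: z * math.sqrt(p * (1 - p) / n)   # Wald half-width
    se_log_lr = math.sqrt((1 - sens) / tp + spec / fp)   # SE of ln(LR+)
    lr_ci = tuple(lr_pos * math.exp(s * z * se_log_lr) for s in (-1, 1))
    return {"Sens": (round(sens, 2), round(half(sens, tp + fn), 2)),
            "Spec": (round(spec, 2), round(half(spec, tn + fp), 2)),
            "LR+": (round(lr_pos, 2), tuple(round(v, 2) for v in lr_ci))}

# Hypothetical counts (65 diseased + 136 healthy = 201 men), not study data
print(screening_stats(tp=60, fn=5, fp=80, tn=56))
```

Reporting a half-width alongside each proportion matches the "{Sens=0.92; CI, 0.06}" style used in the abstract.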
Abstract:
The effects of moisture, cation concentration, density, temperature, and grain size on the electrical resistivity of soils are examined using laboratory-prepared soils. An inexpensive method for preparing soils of different compositions was developed by mixing various size fractions in the laboratory. Moisture and cation concentration are related to soil resistivity by power functions, whereas soil resistivity and temperature, density, % gravel, sand, silt, and clay are related by exponential functions. A total of 1066 cases (8528 data) from all the experiments were used in a step-wise multiple linear regression to determine the effect of each variable on soil resistivity. Six of the eight variables studied account for 92.57% of the total variance in soil resistivity, with a correlation coefficient of 0.96. The other two variables (silt and gravel) did not increase the variance explained. Moisture content was found to be the most important variable affecting soil resistivity, followed by % clay. These two variables account for 90.81% of the total variance in soil resistivity, with a correlation coefficient of 0.95. Based on these results, an equation to predict soil resistivity using moisture and % clay is developed. To test the predictive equation, resistivity measurements were made on natural soils both in situ and in the laboratory. The data show that field and laboratory measurements are comparable. The predicted regression line closely coincides with resistivity data from area A and area B soils (clayey and silty-clayey sands). Resistivity data and the predicted regression line in the case of clayey soils (clay > 40%) do not coincide, especially at less than 15% moisture. The regression equation overestimates the resistivity of soils from area C and underestimates it for area D soils. Laboratory-prepared high-clay soils give similar trends. The deviations are probably caused by the heterogeneous distribution of moisture and differences in the type of clays present in these soils.
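As a hedged illustration of the power-function relationship described for moisture, the sketch below performs the usual log-log linear regression to recover the power-law parameters; the data values are invented for demonstration and are not the thesis's measurements:

```python
import numpy as np

# Invented demonstration data: % moisture and resistivity (ohm-m)
moisture = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
rho = np.array([480.0, 160.0, 85.0, 55.0, 40.0, 32.0])

# Fit rho = a * moisture**b by ordinary least squares on the log-log scale
b, log_a = np.polyfit(np.log(moisture), np.log(rho), 1)
a = np.exp(log_a)
print(f"rho ~= {a:.0f} * moisture^({b:.2f})")

# Correlation coefficient on the log-log scale, analogous to the r = 0.95
# reported for the two-variable (moisture, % clay) regression
r = np.corrcoef(np.log(moisture), np.log(rho))[0, 1]
print(f"r = {r:.3f}")
```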
Abstract:
Age-related differences in information processing have often been explained through deficits in older adults' ability to ignore irrelevant stimuli and suppress inappropriate responses through inhibitory control processes. Functional imaging work on young adults by Nelson and colleagues (2003) has indicated that inferior frontal and anterior cingulate cortex play a key role in resolving interference effects during a delay-to-match memory task. Specifically, inferior frontal cortex appeared to be recruited under conditions of context interference, while the anterior cingulate was associated with interference resolution at the stage of response selection. Related work has shown that specific neural activities related to interference resolution are not preserved in older adults, supporting the notion of age-related declines in inhibitory control (Jonides et al., 2000; West et al., 2004b). In this study, the time course and nature of these inhibition-related processes were investigated in young and old adults using high-density ERPs collected during a modified Sternberg task. Participants were presented with four target letters followed by a probe that either did or did not match one of the target letters held in working memory. Inhibitory processes were evoked by manipulating the nature of cognitive conflict in a particular trial. Conflict in working memory was elicited by presenting a probe letter that had appeared in immediately preceding target sets. Response-based conflict was produced by presenting a negative probe that had just been viewed as a positive probe on the previous trial. Younger adults displayed a larger orienting response (P3a and P3b) to positive probes relative to a non-target baseline. Older adults produced the orienting P3a and P3b waveforms, but their responses did not differentiate between target and non-target stimuli. This age-related change in response to targetness is discussed in terms of "early selection/late correction" models of cognitive ageing. Younger adults also showed sensitivity in their N450 response to different levels of interference. Source analysis of the N450 responses to the conflict trials of younger adults indicated an initial dipole in inferior frontal cortex and a subsequent dipole in anterior cingulate cortex, suggesting that inferior prefrontal regions may recruit the anterior cingulate to exert cognitive control functions. Individual older adults did show some evidence of an N450 response to conflict; however, this response was attenuated by a co-occurring positive deflection in the N450 time window. It is suggested that this positivity may reflect a form of compensatory activity in older adults to adapt to their decline in inhibitory control.
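For readers unfamiliar with how ERP components such as the N450 are typically quantified, the sketch below shows the conventional mean-amplitude-in-a-window measure computed on trial-averaged epochs. The sampling rate, epoch layout, window bounds, and stand-in data are all assumptions, not the study's analysis parameters:

```python
import numpy as np

fs = 500                                    # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.8, 1.0 / fs)      # epoch: -200 ms to 800 ms
rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, times.size))  # trials x samples, stand-in

erp = epochs.mean(axis=0)                   # trial-averaged waveform
window = (times >= 0.40) & (times <= 0.55)  # nominal N450 window (assumed)
print("mean amplitude in N450 window:", erp[window].mean())
```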
Abstract:
Second-rank tensor interactions, such as quadrupolar interactions between spin-1 deuterium nuclei and the electric field gradients created by chemical bonds, are affected by rapid random molecular motions that modulate the orientation of the molecule with respect to the external magnetic field. In biological and model membrane systems, where a distribution of dynamically averaged anisotropies (quadrupolar splittings, chemical shift anisotropies, etc.) is present and where, in addition, various parts of the sample may undergo a partial magnetic alignment, the numerical analysis of the resulting Nuclear Magnetic Resonance (NMR) spectra is a mathematically ill-posed problem. However, numerical methods (de-Pakeing, Tikhonov regularization) exist that allow for a simultaneous determination of both the anisotropy and orientational distributions. An additional complication arises when relaxation is taken into account. This work presents a method of obtaining the orientation dependence of the relaxation rates that can be used for the analysis of molecular motions on a broad range of time scales. An arbitrary set of exponential decay rates is described by a three-term truncated Legendre polynomial expansion in the orientation dependence, as appropriate for a second-rank tensor interaction, and a linear approximation to the individual decay rates is made. Thus a severe numerical instability caused by the presence of noise in the experimental data is avoided. At the same time, enough flexibility in the inversion algorithm is retained to achieve a meaningful mapping from raw experimental data to a set of intermediate, model-free
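A minimal sketch of the three-term (even-order) Legendre form described here, R(θ) = c0·P0 + c2·P2(cos θ) + c4·P4(cos θ), evaluated with illustrative coefficients rather than fitted values:

```python
import numpy as np
from numpy.polynomial import legendre

# Illustrative coefficients, not fitted values
c0, c2, c4 = 10.0, 3.0, 0.5

theta = np.linspace(0.0, np.pi / 2, 7)      # orientation w.r.t. B0
x = np.cos(theta)
# Coefficient vector indexed by Legendre degree: [P0, P1, P2, P3, P4];
# odd terms vanish for a second-rank tensor interaction
rates = legendre.legval(x, [c0, 0.0, c2, 0.0, c4])
for deg, r in zip(np.degrees(theta), rates):
    print(f"theta = {deg:5.1f} deg  ->  R = {r:6.3f} s^-1")
```

Truncating at three even terms is what keeps the inversion linear and small, which is how the method avoids the noise-driven instability mentioned above.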
Abstract:
Feature selection plays an important role in knowledge discovery and data mining. In traditional rough set theory, feature selection using a reduct (a minimal discerning set of attributes) is an important area. Nevertheless, the original definition of a reduct is restrictive, so previous research proposed taking into account not only the horizontal reduction of information by feature selection, but also a vertical reduction considering suitable subsets of the original set of objects. Following that work, a new approach to generating bireducts using a multi-objective genetic algorithm is proposed. Although genetic algorithms have been used to calculate reducts in some previous works, we did not find any work in which genetic algorithms were adopted to calculate bireducts. Compared to prior work in this area, the proposed method has less randomness in generating bireducts. The genetic algorithm estimated the quality of each bireduct by the values of two objective functions as evolution progressed, so a set of bireducts with optimized values of these objectives was obtained. Different fitness evaluation methods and genetic operators, such as crossover and mutation, were applied, and the resulting prediction accuracies were compared. Five datasets were used to test the proposed method and two datasets were used to perform a comparison study. Statistical analysis using the one-way ANOVA test was performed to determine the significance of differences between the results. The experiments showed that the proposed method was able to reduce the number of bireducts necessary to achieve good prediction accuracy. The influence of different genetic operators and fitness evaluation strategies on prediction accuracy was also analyzed. The prediction accuracies of the proposed method are comparable with the best results in the machine learning literature, and some of them outperform those results.
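As a hedged sketch of the two objective values such a multi-objective GA might assign to a candidate bireduct (a pair of an attribute subset and an object subset), the Python below checks the discernibility constraint and returns (attribute count, negated object count) so that both objectives are minimized. The function names, toy decision table, and exact objective formulation are assumptions, not the thesis's definitions:

```python
from itertools import combinations

def is_bireduct(data, decisions, attrs, objs):
    """Every retained pair of objects with different decisions must be
    discernible on at least one retained attribute."""
    for i, j in combinations(objs, 2):
        if decisions[i] != decisions[j] and all(data[i][a] == data[j][a] for a in attrs):
            return False
    return True

def objectives(data, decisions, attrs, objs):
    """Two-objective fitness: fewer attributes, more covered objects."""
    if not is_bireduct(data, decisions, attrs, objs):
        return None                      # infeasible individual
    return (len(attrs), -len(objs))      # minimize both components

# Toy decision table: 4 objects, 3 condition attributes, binary decision
data = [[1, 0, 2], [1, 1, 2], [0, 1, 1], [0, 0, 1]]
decisions = [0, 0, 1, 1]
print(objectives(data, decisions, attrs={0}, objs={0, 1, 2, 3}))  # (1, -4)
```

A Pareto-based GA would evolve such (attrs, objs) pairs, trading off the two returned values rather than collapsing them into one score.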