842 results for Graph Based Algorithms
Abstract:
The use of intensity-modulated radiotherapy (IMRT) has increased substantially in modern radiotherapy (RT) treatments over the past two decades. Radiation dose distributions can be delivered with higher conformality with IMRT than with conventional 3D-conformal radiotherapy (3D-CRT). Higher conformality and target coverage increase the probability of tumour control and decrease normal tissue complications. The primary goal of this work is to improve and evaluate the accuracy, efficiency and delivery techniques of RT treatments using IMRT. This study evaluated the dosimetric limitations and possibilities of IMRT in small volumes (treatments of head-and-neck, prostate and lung cancer) and large volumes (primitive neuroectodermal tumours). The dose coverage of target volumes and the sparing of critical organs were improved with IMRT compared to 3D-CRT. The developed split-field IMRT technique was found to be a safe and accurate method for craniospinal irradiations. By using IMRT for simultaneous integrated boosting of biologically defined target volumes in localized prostate cancer, high doses were achievable with only a small increase in treatment complexity. Biological plan optimization increased the probability of uncomplicated control on average by 28% when compared to standard IMRT delivery. Unfortunately, IMRT also has some drawbacks. In IMRT the beam modulation is realized by splitting a large radiation field into small apertures. The smaller the beam apertures, the larger the rebuild-up and rebuild-down effects at tissue interfaces. The limitations of using IMRT with small apertures in the treatment of small lung tumours were investigated with dosimetric film measurements. The results confirmed that the peripheral doses of small lung tumours decreased as the effective field size decreased. The studied calculation algorithms were not able to model the dose deficiency of the tumours accurately. The use of small sliding-window apertures of 2 mm and 4 mm decreased the tumour peripheral dose by 6% compared to a 3D-CRT treatment plan. A direct aperture based optimization (DABO) technique was examined as a solution to reduce treatment complexity. The DABO IMRT technique was able to achieve treatment plans equivalent to those of conventional fluence-based IMRT optimization techniques for concave head-and-neck target volumes. With DABO the effective field sizes were increased and the number of MUs was reduced by a factor of two. The optimality of a treatment plan and the therapeutic ratio can be further enhanced by using dose painting based on regional radiosensitivities imaged with functional imaging methods.
Abstract:
The Laboratory of Intelligent Machine researches and develops energy-efficient power transmissions and automation for mobile construction machines and industrial processes. The laboratory's particular areas of expertise include mechatronic machine design using virtual technologies and simulators, and demanding industrial robotics. The laboratory has collaborated extensively with industrial actors and has participated in significant international research projects, particularly in the field of robotics. For years, dSPACE tools were the only hardware used in the lab to develop different control algorithms in real time. dSPACE's hardware systems are in widespread use in the automotive industry and are also employed in drives, aerospace, and industrial automation. However, new competitors are developing sophisticated new systems whose features convinced the laboratory to test new products. One of these competitors is National Instruments (NI). In order to get to know the specifications and capabilities of NI tools, an agreement was made to test an NI evolutionary system. This system is used to control a 1-D hydraulic slider. The objective of this research project is to develop a control scheme for the teleoperation of a hydraulically driven manipulator, to implement control algorithms for human-machine interaction and machine-task-environment interaction on both the NI and dSPACE systems simultaneously, and to compare the results.
Abstract:
The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is rapidly growing, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is a method commonly used to make the original incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on eight publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as Bayesian Principal Component Analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are an example of the outcome of multiple biological experiments, such as gene microarray studies. Such networks are typically very large and highly connected, so there is a need for fast algorithms that produce visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within a regular force-directed graph layout algorithm.
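As an illustration of the imputation step discussed in the abstract above, the following is a minimal sketch of plain k-NN imputation for a genes-by-samples expression matrix. The function name, the Euclidean distance over observed columns, and the default k are assumptions for illustration; the thesis's own method additionally guides the imputation with curated external biological information, which is not modelled here.

```python
import numpy as np

def knn_impute(X, k=10):
    """Fill missing entries (NaN) of a genes-by-samples matrix X with the mean
    of the k most similar complete genes (rows); similarity is Euclidean
    distance over the columns the incomplete gene has observed.
    Assumes at least k rows of X are complete."""
    X = X.copy()
    complete = X[~np.isnan(X).any(axis=1)]           # rows with no missing values
    for i in np.where(np.isnan(X).any(axis=1))[0]:   # rows needing imputation
        row = X[i]
        observed = ~np.isnan(row)
        d = np.sqrt(((complete[:, observed] - row[observed]) ** 2).sum(axis=1))
        nearest = complete[np.argsort(d)[:k]]        # k most similar complete genes
        row[~observed] = nearest[:, ~observed].mean(axis=0)
    return X

# Usage: X holds np.nan wherever a microarray value is missing.
# X_imputed = knn_impute(X, k=10)
```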
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase of the classifier. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is here generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure. The total distance measure is applied in the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class vectors, selects optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures and their parameters have been found for the given data set, the resulting distances are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the previously proposed method, the differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
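The aggregation step described above can be made concrete with a small sketch of ordered weighted averaging over normalized per-measure distances; the two pool members and the weight handling below are illustrative assumptions, and the differential evolution optimization of prototypes, measure selection and parameters is omitted.

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: sort the arguments in descending order and
    take the weighted sum with a fixed weight vector (weights should sum to 1)."""
    ordered = np.sort(np.asarray(values, dtype=float))[::-1]
    return float(np.dot(np.asarray(weights, dtype=float), ordered))

def total_distance(sample, prototype, measures, weights):
    """Aggregate several normalized distance measures into one total distance."""
    return owa([m(sample, prototype) for m in measures], weights)

# Two simple (hypothetical) members of a distance pool; both stay in [0, 1]
# if the data has been normalized to the unit hypercube beforehand.
euclidean = lambda a, b: float(np.linalg.norm(a - b)) / np.sqrt(len(a))
chebyshev = lambda a, b: float(np.max(np.abs(a - b)))

# Nearest-prototype classification with the aggregated distance:
# label = min(prototypes,
#             key=lambda p: total_distance(x, p.vector, [euclidean, chebyshev], w)).label
```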
Abstract:
We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with that of the conventional one, in order to determine the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of anti-HCV ELISA samples, using an s/co ratio that shows ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, and IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, of which 283 were concordant with one another. Indeterminate results from algorithms A and C were resolved by PCR (expanded algorithm), which detected two more positive samples. The estimated cost of algorithms A and B was US$21,299.39 and US$32,397.40, respectively, which was 43.5 and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.
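Purely as a reading aid, the reflex-testing logic of algorithm B can be sketched as a small decision function; the result encodings below are hypothetical, and the s/co threshold used by algorithm A is not reproduced here.

```python
def algorithm_b(elisa, pcr_test, ib_test):
    """Sketch of algorithm B: ELISA-positive or -inconclusive samples receive
    reflex nucleic acid testing by PCR, and PCR-negative samples are then
    resolved by immunoblot (IB)."""
    if elisa == "negative":
        return "anti-HCV negative"
    if pcr_test() == "positive":          # reflex PCR on the same sample
        return "HCV RNA positive (viremic)"
    ib = ib_test()                        # IB only for PCR-negative samples
    if ib == "positive":
        return "anti-HCV confirmed, HCV RNA negative"
    return "indeterminate" if ib == "indeterminate" else "anti-HCV negative"
```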
Abstract:
The increasing performance of computers has made it possible to solve algorithmically problems for which manual and possibly inaccurate methods have previously been used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult. Two geographic problems are studied in the articles included in this thesis. In the first problem the goal is to determine distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure at different areas. In the second problem the input consists of a set of sites where water quality observations have been made and of the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost. Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total, and the distances may also be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted for serial operation or for a small number of CPU cores, whereas one, together with its further developments, is also suitable for parallel machines such as GPUs.
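The geometric core of the fetch length problem can be illustrated with a brute-force baseline: cast a ray from a study point in the given direction and return the distance to the nearest intersected shoreline segment. The function and variable names are assumptions; the thesis's algorithms are precisely about avoiding this test against every one of the millions of segments.

```python
import math

def fetch_length(point, angle, segments):
    """Distance from `point` to the nearest shoreline segment hit by a ray cast
    at `angle` (radians); returns math.inf if no segment is hit.
    `segments` is an iterable of ((x1, y1), (x2, y2)) endpoint pairs."""
    px, py = point
    dx, dy = math.cos(angle), math.sin(angle)          # unit direction of the ray
    best = math.inf
    for (x1, y1), (x2, y2) in segments:
        ex, ey = x2 - x1, y2 - y1
        denom = dx * ey - dy * ex                      # zero when ray and segment are parallel
        if abs(denom) < 1e-12:
            continue
        t = ((x1 - px) * ey - (y1 - py) * ex) / denom  # distance along the ray
        u = ((x1 - px) * dy - (y1 - py) * dx) / denom  # position along the segment
        if t >= 0.0 and 0.0 <= u <= 1.0:
            best = min(best, t)
    return best
```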
Abstract:
This study assessed the effectiveness of a reciprocal teaching program as a method of teaching reading comprehension, using narrative text material in a typical grade seven classroom. In order to determine the effectiveness of the reciprocal teaching program, this method was compared to two other reading instruction approaches that, unlike reciprocal teaching, did not include social interaction components. Two intact grade seven classes, and a grade seven teacher, participated in this study. Students were appropriately assigned to three treatment groups by reading achievement level as determined from a norm-referenced test. Training proceeded for a five-week intervention period during regularly scheduled English periods. Throughout the program, curriculum-based tests were administered. These tests were designed to assess comprehension in two distinct ways; namely, character analysis components as they relate to narrative text, and strategy use components as they contribute to student understanding of narrative and expository text. Pre, post, and maintenance tests were administered to measure overall training effects. Moreover, during intervention, training probes were administered in the last period of each week to evaluate treatment group performance. All curriculum-based tests were coded, and comparisons of pre, post, and maintenance tests and training probes were presented in graph form. Results showed that the reciprocal group achieved some improvement in reading comprehension scores in the strategy use component of the tests. No improvements were observed for the character analysis components of the curriculum-based tests and the norm-referenced tests. At pre and post intervention, interviews requiring students to respond to questions that addressed metacomprehension awareness of study strategies were administered. The interviews were coded and comparisons were made between the two interviews. No significant improvements were observed regarding student awareness of ten identified study strategies. This study indicated that reciprocal teaching is a viable approach that can be utilized to help students acquire more effective comprehension strategies. However, the maximum utility of the technique when administered to a population of grade seven students performing at average to above-average levels of reading achievement has yet to be determined. In order to explore this issue, the refinement of training materials and curriculum-based measurements needs to be investigated. As well, this study revealed that reciprocal teaching placed heavier demands on the classroom teacher when compared to other reading instruction methods. This may suggest that innovative and intensive teacher training techniques are required before it is feasible to use this method in the classroom.
Abstract:
This study assessed the usefulness of a cognitive behavior modification (CBM) intervention package with mentally retarded students in overcoming learned helplessness and improving learning strategies. It also examined the feasibility of instructing teachers in the use of such a training program for a classroom setting. A modified single subject design across individuals was employed using two groups of three subjects. Three students from each of two segregated schools for the mentally retarded were selected using a teacher questionnaire and pupil checklist of the most learned-helpless students enrolled there. Three additional learned helplessness assessments were conducted on each subject before and after the intervention in order to evaluate the usefulness of the program in alleviating learned helplessness. A classroom environment was created with the three students from each school engaged in three twenty-minute work sessions a week with the experimenter and a tutor experimenter (TE) as instructors. Baseline measurements were established on seven targeted behaviors for each subject: task-relevant speech, task-irrelevant speech, speech denoting a positive evaluation of performance, speech denoting a negative evaluation of performance, proportion of time on task, non-verbal positive evaluation of performance and non-verbal negative evaluation of performance. The intervention package combined a variety of CBM techniques such as Meichenbaum's (1977) Stop, Look and Listen approach, role rehearsal and feedback. During the intervention each subject met with his TE twice a week for an individual half-hour session and one joint twenty-minute session with all three students, the experimenter and one TE. Five weeks after the end of this experiment, one follow-up probe was conducted. All baseline, post-intervention and probe sessions were videotaped. The seven targeted behaviors were coded and comparisons of baseline, post-intervention, and probe testing were presented in graph form. Results showed a reduction in learned helplessness in all subjects. Improvement was noted in each of the seven targeted behaviors for each of the six subjects. This study indicated that mentally retarded children can be taught to reduce learned helplessness with the aid of a CBM intervention package. It also showed that CBM is a viable approach in helping mentally retarded students acquire more effective learning strategies. Because the TEs (tutor experimenters) had no trouble learning and implementing this program, it was considered feasible for teachers to use similar methods in the classroom.
Abstract:
A feature-based fitness function is applied in a genetic programming system to synthesize stochastic gene regulatory network models whose behaviour is defined by a time course of protein expression levels. Typically, when targeting time series data, the fitness function is based on a sum of errors involving the values of the fluctuating signal. While this approach is successful in many instances, its performance can deteriorate in the presence of noise. This thesis explores a fitness measure determined from a set of statistical features characterizing the time series' sequence of values, rather than the actual values themselves. Through a series of experiments involving symbolic regression with added noise and gene regulatory network models based on the stochastic π-calculus, it is shown to successfully target oscillating and non-oscillating signals. This practical and versatile fitness function offers an alternative approach, worthy of consideration for use in algorithms that evaluate noisy or stochastic behaviour.
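A minimal sketch of the idea of a feature-based fitness measure: both the target expression trace and a simulated candidate are reduced to a small vector of summary statistics, and fitness is the distance between the two feature vectors rather than a point-wise sum of errors. The particular features below (mean, standard deviation, lag-1 autocorrelation, dominant frequency) are illustrative assumptions, not the thesis's actual feature set.

```python
import numpy as np

def features(series):
    """Summarize a time series with a few noise-tolerant statistics."""
    x = np.asarray(series, dtype=float)
    lag1 = 0.0
    if len(x) > 2 and x.std() > 0:
        lag1 = float(np.corrcoef(x[:-1], x[1:])[0, 1])   # lag-1 autocorrelation
    spectrum = np.abs(np.fft.rfft(x - x.mean()))         # magnitude spectrum, DC removed
    dominant = float(np.argmax(spectrum)) / len(x) if len(spectrum) else 0.0
    return np.array([x.mean(), x.std(), lag1, dominant])

def feature_fitness(target_series, candidate_series):
    """Smaller is better: distance between feature vectors, not raw values."""
    return float(np.linalg.norm(features(target_series) - features(candidate_series)))
```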
Abstract:
The (n, k)-star interconnection network was proposed in 1995 as an attractive alternative to the n-star topology in parallel computation. The (n, k)-star has significant advantages over the n-star, which itself was proposed as an attractive alternative to the popular hypercube. The major advantage of the (n, k)-star network is its scalability, which makes it more flexible than the n-star as an interconnection network. In this thesis, we focus on finding graph theoretical properties of the (n, k)-star as well as developing parallel algorithms that run on this network. The basic topological properties of the (n, k)-star are first studied. These are useful since they can be used to develop efficient algorithms on this network. We then study the (n, k)-star network from an algorithmic point of view. Specifically, we investigate both fundamental and application algorithms for basic communication, prefix computation, sorting, etc. A literature review of the state of the art in relation to the (n, k)-star network, as well as some open problems in this area, is also provided.
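For readers unfamiliar with the topology, the sketch below generates the neighbours of a vertex under the usual definition of the (n, k)-star: vertices are the k-permutations of {1, ..., n}; swapping the first symbol with the symbol in position i (2 ≤ i ≤ k) gives an i-edge, and replacing the first symbol with any of the n − k unused symbols gives a 1-edge. This is the textbook definition, not code from the thesis.

```python
from itertools import permutations

def nk_star_vertices(n, k):
    """All n!/(n-k)! vertices of the (n, k)-star graph, as k-permutations of 1..n."""
    return list(permutations(range(1, n + 1), k))

def nk_star_neighbors(v, n):
    """Neighbours of vertex v (a tuple that is a k-permutation of 1..n)."""
    k = len(v)
    nbrs = []
    for i in range(1, k):                    # i-edges: swap positions 1 and i+1
        w = list(v)
        w[0], w[i] = w[i], w[0]
        nbrs.append(tuple(w))
    for s in set(range(1, n + 1)) - set(v):  # 1-edges: new leading symbol
        nbrs.append((s,) + v[1:])
    return nbrs

# Example: the (4, 2)-star has 12 vertices, each of degree n - 1 = 3;
# nk_star_neighbors((1, 2), 4) yields (2, 1), (3, 2) and (4, 2).
```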
Abstract:
The (n, k)-arrangement interconnection topology was first introduced in 1992. The (n, k)-arrangement graph is a class of generalized star graphs. Compared with the well-known n-star, the (n, k)-arrangement graph is more flexible in degree and diameter. However, few algorithms have been designed for the (n, k)-arrangement graph to date. In this thesis, we focus on finding graph theoretical properties of the (n, k)-arrangement graph and on developing parallel algorithms that run on this network. The topological properties of the arrangement graph are first studied. They include the cyclic properties. We then study the problems of communication: broadcasting and routing. Embedding problems are also studied later on. These are very useful for developing efficient algorithms on this network. We then study the (n, k)-arrangement network from an algorithmic point of view. Specifically, we investigate both fundamental and application algorithms such as prefix sums computation, sorting, merging and a basic geometric computation: finding the convex hull on the (n, k)-arrangement graph. A literature review of the state of the art in relation to the (n, k)-arrangement network is also provided, as well as some open problems in this area.
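Under the standard definition, the (n, k)-arrangement graph A(n, k) has the k-permutations of {1, ..., n} as vertices, two of them being adjacent exactly when they differ in a single position, which gives every vertex degree k(n − k). The small sketch below only illustrates that adjacency rule and is not taken from the thesis.

```python
from itertools import permutations

def arrangement_adjacent(u, v):
    """Two k-permutations are adjacent in A(n, k) iff they differ in exactly one position."""
    return sum(a != b for a, b in zip(u, v)) == 1

def arrangement_graph(n, k):
    """Build A(n, k) as an adjacency dictionary (feasible only for small n and k)."""
    vertices = list(permutations(range(1, n + 1), k))
    return {u: [v for v in vertices if arrangement_adjacent(u, v)] for u in vertices}

# Example: A(4, 2) has 12 vertices, each of degree k * (n - k) = 4.
```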
Abstract:
The hub location problem is an NP-hard problem that frequently arises in the design of transportation and distribution systems, postal delivery networks, and airline passenger flow. This work focuses on the Single Allocation Hub Location Problem (SAHLP). Genetic Algorithms (GAs) for the capacitated and uncapacitated variants of the SAHLP, based on new chromosome representations and crossover operators, are explored. The GAs are tested on two well-known sets of real-world problems with up to 200 nodes. The obtained results are very promising. For most of the test problems the GAs obtain improved or best-known solutions, and the computational time remains low. The proposed GAs can easily be extended to other variants of location problems arising in network design planning in transportation systems.
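The chromosome idea can be illustrated with a deliberately simplified sketch for the uncapacitated single allocation case: a chromosome is an array assigning every node to a hub (a node assigned to itself is an open hub), crossover mixes two parents position by position and repairs assignments to closed hubs, and the fitness is the classic SAHLP flow cost. Everything below is an illustrative assumption, not the new representations and operators proposed in the work.

```python
import random

def random_chromosome(n_nodes, n_hubs):
    """chrom[i] is the hub serving node i; hubs serve themselves."""
    hubs = random.sample(range(n_nodes), n_hubs)
    return [i if i in hubs else random.choice(hubs) for i in range(n_nodes)]

def uniform_crossover(a, b):
    """Take each assignment from either parent, then reroute any node pointing
    to a closed hub (the number of open hubs is not forced back to n_hubs here)."""
    child = [random.choice(pair) for pair in zip(a, b)]
    open_hubs = {h for h in set(child) if child[h] == h}
    if not open_hubs:                       # degenerate case: open one hub
        child[child[0]] = child[0]
        open_hubs = {child[0]}
    return [h if h in open_hubs else min(open_hubs) for h in child]

def cost(chrom, w, c, alpha=0.75):
    """Classic single-allocation flow cost: collection + alpha-discounted
    hub-to-hub transfer + distribution, for flow matrix w and distance matrix c
    (collection and distribution coefficients are set to 1 for simplicity)."""
    n = len(chrom)
    return sum(w[i][j] * (c[i][chrom[i]] + alpha * c[chrom[i]][chrom[j]] + c[chrom[j]][j])
               for i in range(n) for j in range(n))
```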
Abstract:
Complex networks can arise naturally and spontaneously from all things that act as a part of a larger system. From the patterns of socialization between people to the way biological systems organize themselves, complex networks are ubiquitous, but are currently poorly understood. A number of algorithms, designed by humans, have been proposed to describe the organizational behaviour of real-world networks, and breakthroughs in genetics, medicine, epidemiology, neuroscience, telecommunications and the social sciences have recently resulted. The algorithms, called graph models, represent significant human effort. Deriving accurate graph models is non-trivial, time-intensive, challenging and may only yield useful results for very specific phenomena. An automated approach can greatly reduce the human effort required and, if effective, provide a valuable tool for understanding the large decentralized systems of interrelated things around us. To the best of the author's knowledge, this thesis proposes the first method for the automatic inference of graph models for complex networks with varied properties, with and without community structure. Furthermore, to the best of the author's knowledge, it is the first application of genetic programming to the automatic inference of graph models. The system and methodology were tested against benchmark data and shown to be capable of reproducing close approximations to well-known algorithms designed by humans. Furthermore, when the system was used to infer a model for real biological data, the resulting model was more representative than models currently used in the literature.
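One way to make the inference loop concrete: a candidate graph model (however the genetic program encodes it) is grown to the size of the observed network and scored by how closely its summary statistics match. The statistics, the equal weighting and the use of networkx below are illustrative assumptions; the thesis's actual fitness design is not reproduced here.

```python
import networkx as nx

def network_features(G):
    """A few global statistics commonly used to compare complex networks."""
    degrees = [d for _, d in G.degree()]
    return {
        "avg_degree": sum(degrees) / max(len(degrees), 1),
        "clustering": nx.average_clustering(G),
        "components": nx.number_connected_components(G),
    }

def model_fitness(candidate_model, target_graph):
    """Smaller is better: grow a graph from the candidate model and compare its
    statistics with those of the observed network."""
    grown = candidate_model(target_graph.number_of_nodes())   # model returns a graph
    f_t, f_c = network_features(target_graph), network_features(grown)
    return sum(abs(f_t[k] - f_c[k]) for k in f_t)

# Usage sketch: score a preferential-attachment generator against an observed network.
# fitness = model_fitness(lambda n: nx.barabasi_albert_graph(n, 2), observed_network)
```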
Abstract:
The KCube interconnection topology was first introduced in 2010. The KCube graph is a compound graph of a Kautz digraph and hypercubes. Compared with the attractive Kautz digraph and the well-known hypercube graph, the KCube graph can accommodate as many nodes as possible for a given indegree (and outdegree) and diameter of the interconnection network. However, there are few algorithms designed for the KCube graph. In this thesis, we will concentrate on finding graph theoretical properties of the KCube graph and designing parallel algorithms that run on this network. We will explore several topological properties, such as bipartiteness, Hamiltonicity, and symmetry. These properties of the KCube graph are very useful for developing efficient algorithms on this network. We will then study the KCube network from an algorithmic point of view, and will give an improved routing algorithm. In addition, we will present two optimal broadcasting algorithms. They are fundamental algorithms for many applications. A literature review of state-of-the-art network designs in relation to the KCube network, as well as some open problems in this field, will also be given.
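As background for the compound construction, the sketch below builds the Kautz digraph, one of the two building blocks named above, following one common convention: vertices are strings of length n over an alphabet of d + 1 symbols in which no two consecutive symbols are equal, and each vertex x1 x2 ... xn has an arc to x2 ... xn y for every y ≠ xn. The hypercube part and the actual KCube compounding rule are not shown, and the function naming is an assumption.

```python
from itertools import product

def kautz_digraph(d, n):
    """Kautz digraph: vertices are length-n strings over d + 1 symbols with no
    two consecutive symbols equal; arcs go from x1 x2 ... xn to x2 ... xn y
    for every y != xn, so every vertex has out-degree d."""
    alphabet = range(d + 1)
    vertices = [v for v in product(alphabet, repeat=n)
                if all(a != b for a, b in zip(v, v[1:]))]
    return {v: [v[1:] + (y,) for y in alphabet if y != v[-1]] for v in vertices}

# Example: kautz_digraph(2, 2) has (d + 1) * d**(n - 1) = 6 vertices,
# each with out-degree 2.
```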