953 results for Machines


Relevance: 10.00%

Abstract:

In this paper, we introduce the Generalized Equality Classifier (GEC) for use as an unsupervised clustering algorithm in categorizing analog data. GEC is based on a formal definition of inexact equality originally developed for voting in fault-tolerant software applications. GEC is defined using a metric space framework. The only parameter in GEC is a scalar threshold which defines the approximate equality of two patterns. Here, we compare the characteristics of GEC to the ART2-A algorithm (Carpenter, Grossberg, and Rosen, 1991). In particular, we show that GEC with the Hamming distance performs the same optimization as ART2. Moreover, GEC has lower computational requirements than ART2 on serial machines.
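The clustering rule described here amounts to testing inexact equality against stored prototypes with a single scalar threshold. Below is a minimal sketch of that idea for binary patterns under the Hamming distance; the function names and the first-match assignment rule are illustrative, not taken from the paper.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance: the number of positions where two patterns differ."""
    return int(np.sum(np.asarray(a) != np.asarray(b)))

def gec_cluster(patterns, threshold):
    """Assign each pattern to the first prototype it approximately equals
    (distance <= threshold); otherwise it founds a new cluster.
    The threshold is the only parameter."""
    prototypes, labels = [], []
    for x in patterns:
        for k, proto in enumerate(prototypes):
            if hamming(x, proto) <= threshold:
                labels.append(k)
                break
        else:
            prototypes.append(x)
            labels.append(len(prototypes) - 1)
    return labels, prototypes

labels, protos = gec_cluster([[0, 0, 1], [0, 1, 1], [1, 1, 0]], threshold=1)
# -> labels [0, 0, 1]: the first two patterns are "approximately equal"
```

A smaller threshold produces more, tighter clusters; a larger one merges patterns more aggressively.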

Relevance: 10.00%

Abstract:

This thesis is focused on the design and synthesis of a diverse range of novel organosulfur compounds (sulfides, sulfoxides and sulfones), with the objective of studying their solid state properties and thereby developing an understanding of how the molecular structure of the compounds impacts upon their solid state crystalline structure. In particular, robust intermolecular interactions which determine the overall structure were investigated. These synthons were then exploited in the development of a molecular switch. Chapter One provides a brief overview of crystal engineering, the key hydrogen bonding interactions utilized in this work, and a general insight into “molecular machines” reported in the literature that are of relevance to this work. Chapter Two outlines the design and synthetic strategies for the development of two scaffolds suitable for incorporation of terminal alkyne, organosulfur and ether functionalities, in order to investigate the robustness and predictability of the S=O•••H-C≡C- and S=O•••H-C(α) supramolecular synthons. Crystal structures and a detailed analysis of the hydrogen bond interactions observed in these compounds are included in this chapter. The biological activities of four novel tertiary amines are also discussed. Chapter Three focuses on the design and synthesis of diphenylacetylene compounds bearing amide and sulfur functionalities, and the exploitation of the N-H•••O=S interactions to develop a “molecular switch”. The crystal structures, the hydrogen bonding patterns observed, NMR variable temperature studies and computer modelling studies are discussed in detail. Chapter Four provides the overall conclusions from Chapters Two and Three and also gives an indication of how the results of this work may be developed in the future. Chapter Five contains the full experimental details and spectral characterisation of all novel compounds synthesised in this project, while details of the NCI (National Cancer Institute) biological test results are included in the appendix.

Relevance: 10.00%

Abstract:

The electroencephalogram (EEG) is a medical technology that is used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up time, and is suitably unobtrusive and mobile to allow continuous monitoring of the patient, either in clinical or domestic environments. Consequently, the EEG is the current tool-of-choice for continuous monitoring of the brain where temporal resolution, ease-of-use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness, and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade automated neurological event detection performance. This thesis therefore contributes to the further improvement of automated neurological event detection systems by identifying some of the major obstacles to deploying these EEG systems in ambulatory and clinical environments, so that EEG technologies can emerge from the laboratory into real-world settings, where they can have a real impact on the lives of patients. In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) the problem of head-movement artefacts in ambulatory EEG, (ii) the high numbers of false detections in state-of-the-art, automated, epileptiform activity detection systems and (iii) false detections in state-of-the-art, automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system to automatically detect head-movement artefacts in ambulatory EEG, using supervised machine learning classifiers to do so. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional physiological signals, in the form of gyroscopes, are used to detect head movements and, in doing so, bring additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection which compares favourably with other state-of-the-art systems is achieved.
Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner; blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefact from the EEG. Using these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, improving the performance of the underlying diagnostic technology and bringing its deployment in the real-world, clinical domain one step closer.
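As a concrete illustration of the supervised approach described above, the sketch below classifies fixed-length windows as head-movement artefact or clean EEG with an SVM, letting gyroscope energy supplement simple EEG features. It assumes scikit-learn and labelled training windows; the feature choices are illustrative, not the thesis's exact feature set.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(eeg, gyro, fs, win_s=2.0):
    """Cut one EEG channel and one gyroscope channel into windows and
    compute per-window features: EEG variance, EEG line length, and
    gyroscope movement energy."""
    n = int(win_s * fs)
    feats = []
    for start in range(0, min(len(eeg), len(gyro)) - n + 1, n):
        e, g = eeg[start:start + n], gyro[start:start + n]
        feats.append([np.var(e),
                      np.sum(np.abs(np.diff(e))),  # line length
                      np.sum(np.square(g))])       # movement energy
    return np.array(feats)

# Hypothetical usage with labelled training data (1 = artefact, 0 = clean):
# X_train = window_features(eeg_train, gyro_train, fs=256)
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X_train, y_train)
# y_pred = clf.predict(window_features(eeg_test, gyro_test, fs=256))
```

Fusing gyroscope features with EEG features in this way gives the classifier a direct, EEG-independent view of head movement, which is the intuition behind the combined framework.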

Relevance: 10.00%

Abstract:

Traditionally, attacks on cryptographic algorithms looked for mathematical weaknesses in the underlying structure of a cipher. Side-channel attacks, however, look to extract secret key information based on the leakage from the device on which the cipher is implemented, be it smart-card, microprocessor, dedicated hardware or personal computer. Attacks based on the power consumption, electromagnetic emanations and execution time have all been practically demonstrated on a range of devices to reveal partial secret-key information from which the full key can be reconstructed. The focus of this thesis is power analysis, more specifically a class of attacks known as profiling attacks. These attacks assume a potential attacker has access to, or can control, a device identical to the one under attack, which allows the attacker to profile the power consumption of operations or data flow during encryption. This assumes a stronger adversary than traditional non-profiling attacks such as differential or correlation power analysis; however, the ability to model a device allows templates to be used post-profiling to extract key information from many different target devices using the power consumption of very few encryptions. This allows an adversary to overcome protocols intended to prevent secret-key recovery by restricting the number of available traces. In this thesis a detailed investigation of template attacks is conducted, examining how the selection of various attack parameters affects the efficiency of secret-key recovery in practice, as well as the underlying assumption of profiling attacks: that the power consumption of one device can be used to extract secret keys from another. Trace-only attacks, where the corresponding plaintext or ciphertext data is unavailable, are then investigated against both symmetric and asymmetric algorithms with the goal of key recovery from a single trace. This allows an adversary to bypass many of the currently proposed countermeasures, particularly in the asymmetric domain. An investigation into machine-learning methods for side-channel analysis as an alternative to template or stochastic methods is also conducted, with support vector machines, logistic regression and neural networks examined from a side-channel viewpoint. Both binary and multi-class classification attack scenarios are considered in order to explore the relative strengths of each algorithm. Finally, these machine-learning-based alternatives are empirically compared with template attacks, and their respective merits are examined with regard to attack efficiency.
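The profiling idea at the heart of a template attack can be sketched compactly: model the power consumption of each key-dependent class as a multivariate Gaussian on the controlled device, then score a target trace against every template. This is a minimal sketch assuming NumPy/SciPy and pre-selected points of interest, not a complete attack.

```python
import numpy as np
from scipy.stats import multivariate_normal

def build_templates(traces, labels):
    """Profiling phase: estimate a (mean, covariance) template per
    key-dependent class from traces of an identical, controlled device.
    `traces` is an (N, d) array of d points of interest per trace."""
    return {c: (traces[labels == c].mean(axis=0),
                np.cov(traces[labels == c], rowvar=False))
            for c in np.unique(labels)}

def classify_trace(trace, templates):
    """Attack phase: return the class whose Gaussian template assigns
    the target trace the highest log-likelihood."""
    scores = {c: multivariate_normal.logpdf(trace, mean=m, cov=S,
                                            allow_singular=True)
              for c, (m, S) in templates.items()}
    return max(scores, key=scores.get)
```

In a multi-class setting the classes might be, for instance, the 256 values of a key byte or of an intermediate such as an S-box output; the machine-learning alternatives studied in the thesis, such as SVMs and logistic regression, slot into this same profile-then-classify structure.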

Relevance: 10.00%

Abstract:

On-board image guidance, such as cone-beam CT (CBCT) and kV/MV 2D imaging, is essential in many radiation therapy procedures, such as intensity modulated radiotherapy (IMRT) and stereotactic body radiation therapy (SBRT). These imaging techniques provide predominantly anatomical information for treatment planning and target localization. Recently, studies have shown that treatment planning based on functional and molecular information about the tumor and surrounding tissue could potentially improve the effectiveness of radiation therapy. However, current on-board imaging systems are limited in their functional and molecular imaging capability. Single Photon Emission Computed Tomography (SPECT) is a candidate to achieve on-board functional and molecular imaging. Traditional SPECT systems typically take 20 minutes or more for a scan, which is too long for on-board imaging. A robotic multi-pinhole SPECT system was proposed in this dissertation to provide shorter imaging time by using a robotic arm to maneuver the multi-pinhole SPECT system around the patient in position for radiation therapy.

A 49-pinhole collimated SPECT detector and its shielding were designed and simulated in this work using computer-aided design (CAD) software. The trajectories of the robotic arm about the patient, treatment table and gantry in the radiation therapy room were investigated, along with several detector assemblies, such as parallel-hole, single-pinhole and 49-pinhole collimated detectors. The rail-mounted system was designed to enable a full range of detector positions and orientations relative to various crucial treatment sites, including head and torso, while avoiding collision with the linear accelerator (LINAC), patient table and patient.

An alignment method was developed in this work to calibrate the on-board robotic SPECT to the LINAC coordinate frame and to the coordinate frames of other on-board imaging systems such as CBCT. This alignment method utilizes line sources and a single pinhole projection of those line sources. The model consists of multiple alignment parameters which map line sources in 3-dimensional (3D) space to their 2-dimensional (2D) projections on the SPECT detector. Computer-simulation studies and experimental evaluations were performed as a function of the number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise and acquisition geometry. In computer-simulation studies, when there was no error in determining the angles (α) and offsets (ρ) of the measured projections, the six alignment parameters (3 translational and 3 rotational) were estimated perfectly using three line sources. When the angles (α) and offsets (ρ) were provided by the Radon transform, the estimation accuracy was reduced. The estimation error was associated with rounding errors of the Radon transform, finite line-source width, Poisson noise, the number of line sources, intrinsic camera resolution and detector acquisition geometry. The estimation accuracy was significantly improved by using 4 line sources rather than 3, and also by using thinner line-source projections (obtained by better intrinsic detector resolution). With 5 line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist.
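At the core of such an alignment model is the mapping from a 3D point to its 2D pinhole projection given the detector pose. The sketch below shows that mapping under a simple pinhole-camera assumption with rotation R, pinhole position t and focal length f; it illustrates the geometry only and is not the dissertation's exact parameterization.

```python
import numpy as np

def project(points, R, t, f):
    """Project 3D points (in LINAC/room coordinates) through a pinhole
    at position t, with detector orientation R and pinhole-to-detector
    distance f, onto 2D detector coordinates."""
    q = (np.asarray(points) - t) @ R.T   # express points in the detector frame
    return -f * q[:, :2] / q[:, 2:3]     # perspective division

# A line source is a 3D segment a + s*d; its projection is a 2D line on
# the detector. Fitting the six pose parameters (3 translations, 3
# rotations) then means matching the predicted line angles/offsets to
# the angles (alpha) and offsets (rho) measured by the Radon transform.
```

With three or more non-degenerate line sources, each projected line contributes two constraints (its angle and its offset), which is why a handful of lines suffices to estimate the six alignment parameters.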

Simulation studies were performed to investigate the improvement in imaging sensitivity and the accuracy of hot-sphere localization for breast imaging of patients in the prone position. A 3D XCAT phantom was simulated in the prone position with nine hot spheres of 10 mm diameter added to the left breast. A no-treatment-table case and two commercial prone breast boards, 7 and 24 cm thick, were simulated. Different pinhole focal lengths were assessed for root-mean-square error (RMSE). The pinhole focal lengths resulting in the lowest RMSE values were 12 cm, 18 cm and 21 cm for no table, the thin board and the thick board, respectively. In both the no-table and thin-board cases, all 9 hot spheres were easily visualized above background with 4-minute scans utilizing the 49-pinhole SPECT system, while seven of nine hot spheres were visible with the thick board. In comparison with a parallel-hole system, the 49-pinhole system showed reduced noise and bias in these simulation cases; these results correspond to the smaller radii of rotation achievable with no table or with the thinner prone board. Similarly, localization accuracy with the 49-pinhole system was significantly better than with the parallel-hole system for both the thin and thick prone boards. Median localization errors for the 49-pinhole system with the thin board were less than 3 mm for 5 of 9 hot spheres, and less than 6 mm for the other 4 hot spheres. Median localization errors of the 49-pinhole system with the thick board were less than 4 mm for 5 of 9 hot spheres, and less than 8 mm for the other 4 hot spheres.

Besides prone breast imaging, respiratory-gated region-of-interest (ROI) imaging of lung tumors was also investigated. A simulation study was conducted on the potential of multi-pinhole, ROI SPECT to alleviate the noise effects associated with respiratory-gated SPECT imaging of the thorax. Two 4D XCAT digital phantoms were constructed, with either a 10 mm or a 20 mm diameter tumor added in the right lung. The maximum diaphragm motion was 2 cm (for the 10 mm tumor) or 4 cm (for the 20 mm tumor) in the superior-inferior direction and 1.2 cm in the anterior-posterior direction. Projections were simulated with a 4-minute acquisition time (40 seconds for each of 6 gates) using either the ROI SPECT system (49-pinhole) or reference single and dual conventional broad cross-section, parallel-hole collimated SPECT systems. The SPECT images were reconstructed using OSEM with up to 6 iterations. Images were evaluated as a function of gate by profiles, noise-versus-bias curves, and a numerical observer performing a forced-choice localization task. Even for the 20 mm tumor, the 49-pinhole imaging ROI was found sufficient to fully encompass usual clinical ranges of diaphragm motion. Averaged over the 6 gates, noise at iteration 6 of 49-pinhole ROI imaging (10.9 µCi/ml) was approximately comparable to noise at iteration 2 of the dual and single parallel-hole, broad cross-section systems (12.4 µCi/ml and 13.8 µCi/ml, respectively). Corresponding biases were much lower for the 49-pinhole ROI system (3.8 µCi/ml), versus 6.2 µCi/ml and 6.5 µCi/ml for the dual and single parallel-hole systems, respectively. Median localization errors averaged over 6 gates, for the 10 mm and 20 mm tumors respectively, were 1.6 mm and 0.5 mm using the ROI imaging system and 6.6 mm and 2.3 mm using the dual parallel-hole, broad cross-section system. The results demonstrate substantially improved imaging via ROI methods. One important application may be gated imaging of patients in position for radiation therapy.

A robotic SPECT imaging system was constructed utilizing a gamma camera detector (Digirad 2020tc) and a robot (KUKA KR150-L110). An imaging study was performed with a phantom (PET CT Phantom™), which includes 5 spheres of 10, 13, 17, 22 and 28 mm diameter. The phantom was placed on a flat-top couch. SPECT projections were acquired with a parallel-hole collimator and a single-pinhole collimator, both without background activity in the phantom and with background at 1/10th the sphere activity concentration. The imaging trajectories of the parallel-hole and pinhole collimated detectors spanned 180 degrees and 228 degrees, respectively. The pinhole detector viewed a 14.7 cm diameter common volume which encompassed the 28 mm and 22 mm spheres. The common volume for the parallel-hole detector was a 20.8 cm diameter cylinder which encompassed all five spheres in the phantom. The maneuverability of the robotic system was tested by navigating the detector to trace the flat-top table while avoiding collision with the table and maintaining the closest possible proximity to the common volume. For image reconstruction, detector trajectories were described by the radius of rotation and the detector rotation angle θ. These reconstruction parameters were obtained from the robot base and tool coordinates. The robotic SPECT system was able to maneuver the parallel-hole and pinhole collimated SPECT detectors in close proximity to the phantom, minimizing the impact of the flat-top couch on the detector-to-center-of-rotation (COR) distance. In the no-background case, all five spheres were visible in the reconstructed parallel-hole and pinhole images. With background, three spheres of 17, 22 and 28 mm diameter were readily observed with parallel-hole imaging, and the targeted spheres (22 and 28 mm diameter) were readily observed in the pinhole ROI imaging.

In conclusion, the proposed on-board robotic SPECT can be aligned to the LINAC/CBCT coordinate frames using a single pinhole projection of a line-source phantom. This alignment method may be especially important for multi-pinhole SPECT, where relative pinhole alignment may vary during rotation; for both single-pinhole and multi-pinhole SPECT imaging on board radiation therapy machines, the method can provide alignment of SPECT coordinates with those of CBCT and the LINAC. In simulation studies of prone breast imaging and respiratory-gated lung imaging, the 49-pinhole detector showed better tumor contrast recovery and localization in a 4-minute scan than the parallel-hole detector. On-board SPECT could be achieved by a robot maneuvering a SPECT detector about patients in position for radiation therapy on a flat-top couch, and the robot's inherent coordinate frames could be an effective means to estimate detector pose for use in SPECT image reconstruction.

Relevance: 10.00%

Abstract:

Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges in scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.
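Flow-time is the thread running through the thesis, so it is worth pinning down: the flow-time (delay) of a job is its completion time minus its release time. A tiny worked sketch:

```python
def flow_times(jobs):
    """jobs: list of (release_time, completion_time) pairs.
    Flow-time of a job = completion time - release time."""
    return [c - r for r, c in jobs]

jobs = [(0, 4), (1, 3), (2, 9)]            # (release, completion)
fts = flow_times(jobs)                      # [4, 2, 7]
total_flow, max_flow = sum(fts), max(fts)   # 13 and 7
```

Average (total) flow-time and maximum flow-time, both minimized in the first contribution below, are exactly `total_flow / n` and `max_flow` here.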

The technically interesting aspects of our work are the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.

The main contributions of the thesis can be placed in one of the following categories.

1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces the iterated rounding technique for offline flow-time optimization, and gives the first framework for analyzing non-clairvoyant algorithms for unrelated machines.

2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP and its variants for the objectives of minimizing flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing-theoretic notion of stability and resource augmentation analysis.

3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.

4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first linear/convex programming duality based framework to bound the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.

Relevance: 10.00%

Abstract:

The paper considers the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence. We show that in the preemptive case the problem is polynomially solvable for an arbitrary number of machines. If preemption is not allowed, the problem is NP-hard in the strong sense if the number of machines is variable, and is NP-hard in the ordinary sense in the case of two machines. For the latter case we give a heuristic algorithm that runs in linear time and produces a schedule with a makespan that is at most 5/4 times the optimal value. We also show that the two-machine problem in the nonpreemptive case is solvable in pseudopolynomial time by a dynamic programming algorithm, and that the algorithm can be converted into a fully polynomial approximation scheme. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 705–731, 1998

Relevance: 10.00%

Abstract:

The paper presents an improved version of the greedy open shop approximation algorithm with pre-ordering of jobs. It is shown that the algorithm compares favorably with the greedy algorithm without pre-ordering, reducing either its absolute or its relative error. In the case of three machines, the new algorithm creates a schedule with a makespan that is at most 3/2 times the optimal value.
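For intuition, here is one way to realize the greedy open shop rule in code: whenever a machine and a job are simultaneously free, start the next such operation, consulting jobs in a fixed pre-order. The tie-breaking and the pre-ordering key below are illustrative; the paper's exact algorithm and analysis are not reproduced.

```python
def greedy_open_shop(p, order):
    """p[i][j] = processing time of job j on machine i; `order` is the
    job pre-ordering. Builds a schedule by always starting the earliest
    available (machine, job) operation; on ties, the lower machine index
    and the earlier job in `order` win. Returns the makespan."""
    m, n = len(p), len(p[0])
    machine_free = [0.0] * m                       # machine next idle
    job_free = [0.0] * n                           # job next available
    needs = [set(range(m)) for _ in range(n)]      # machines each job still needs
    makespan = 0.0
    for _ in range(m * n):                         # one operation per step
        best = None
        for i in range(m):
            for j in order:                        # respect the pre-ordering
                if i in needs[j]:
                    s = max(machine_free[i], job_free[j])
                    if best is None or s < best[0]:
                        best = (s, i, j)
        start, i, j = best
        finish = start + p[i][j]
        machine_free[i] = job_free[j] = finish
        needs[j].remove(i)
        makespan = max(makespan, finish)
    return makespan

# One plausible pre-ordering: jobs by decreasing total processing time, e.g.
# order = sorted(range(n), key=lambda j: -sum(p[i][j] for i in range(m)))
```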

Relevance: 10.00%

Abstract:

We consider two “minimum” NP-hard job shop scheduling problems to minimize the makespan. In one of the problems, every job has to be processed on at most two out of three available machines. In the other problem, there are two machines, and a job may visit one of the machines twice. For each problem, we define a class of heuristic schedules in which certain subsets of operations are kept as blocks on the corresponding machines. We show that for each problem the value of the makespan of the best schedule in that class cannot be less than 3/2 times the optimal value, and present algorithms that guarantee a worst-case ratio of 3/2.

Relevance: 10.00%

Abstract:

Temperature distributions involved in some metal-cutting or surface-milling processes may be obtained by solving a non-linear inverse problem. A two-level concept of parallelism is introduced to compute such temperature distributions. The primary level is based on a problem-partitioning concept driven by the nature and properties of the non-linear inverse problem. Such partitioning results in a coarse-grained parallel algorithm. A simplified 2-D metal-cutting process is used as an example to illustrate the concept. A secondary level exploiting further parallel properties, based on the concept of domain-data parallelism, is explained and implemented using MPI. Some experiments were performed on a network of loosely coupled machines consisting of SUN Sparc Classic workstations and on a network of tightly coupled processors, namely the Origin 2000.
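The secondary, domain-data level maps naturally onto MPI: each rank owns a block of the grid and exchanges halo rows with its neighbours each sweep. Below is a minimal mpi4py sketch of that level for a Jacobi-style temperature update; the grid size, boundary values and iteration count are illustrative, and the coarse-grained inverse-problem partitioning of the primary level is not reproduced.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 64                                    # global grid size (illustrative)
rows = N // size                          # assume size divides N evenly
u = np.zeros((rows + 2, N))               # local block plus two halo rows
if rank == 0:
    u[0, :] = 100.0                       # hot boundary (illustrative)

up = rank - 1 if rank > 0 else MPI.PROC_NULL
dn = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(500):
    # exchange halo rows with neighbouring ranks
    comm.Sendrecv(u[1, :], dest=up, recvbuf=u[rows + 1, :], source=dn)
    comm.Sendrecv(u[rows, :], dest=dn, recvbuf=u[0, :], source=up)
    # Jacobi sweep over the interior points of the local block
    u[1:rows + 1, 1:-1] = 0.25 * (u[:rows, 1:-1] + u[2:, 1:-1]
                                  + u[1:rows + 1, :-2] + u[1:rows + 1, 2:])
```

Run with, e.g., `mpirun -n 4 python jacobi_sketch.py`; the same row-block decomposition generalizes to 2D blocks with column halos.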

Relevance: 10.00%

Abstract:

The paper considers the job shop scheduling problem to minimize the makespan. It is assumed that each job consists of at most two operations, one of which is to be processed on one of m⩾2 machines, while the other operation must be performed on a single bottleneck machine, the same for all jobs. For this strongly NP-hard problem we present two heuristics with improved worst-case performance. One of them guarantees a worst-case performance ratio of 3/2. The other creates a schedule with a makespan that exceeds the largest machine workload by at most the length of the longest operation.

Relevance: 10.00%

Abstract:

This paper considers the problem of processing n jobs in a two-machine non-preemptive open shop to minimize the makespan, i.e., the maximum completion time. One of the machines is assumed to be non-bottleneck. It is shown that, unlike its flow shop counterpart, the problem is NP-hard in the ordinary sense. On the other hand, the problem is shown to be solvable by a dynamic programming algorithm that requires pseudopolynomial time. The latter algorithm can be converted into a fully polynomial approximation scheme. An O(n log n) approximation algorithm is also designed which finds a schedule with a makespan at most 5/4 times the optimal value, and this bound is tight.

Relevance: 10.00%

Abstract:

We study the special case of the m-machine flow shop problem in which the processing time of each operation of job j is equal to p_j; this variant of the flow shop problem is known as the proportionate flow shop problem. We show that for any number of machines and for any regular performance criterion we can restrict our search for an optimal schedule to permutation schedules. Moreover, we show that the problem of minimizing total weighted completion time is solvable in O(n²) time. © 1998 John Wiley & Sons, Ltd.
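The structure that makes proportionate flow shops tractable shows up in a standard identity: under a permutation schedule on m machines, the job in position k completes at the sum of the first k processing times plus (m − 1) times their maximum. A short sketch, offered as an illustration of that identity rather than of the paper's O(n²) algorithm:

```python
def completion_times(p, m):
    """Completion times of jobs processed in the given order in a
    proportionate flow shop (every machine takes p[j] on job j):
    C_k = sum(p[0..k]) + (m - 1) * max(p[0..k])."""
    times, total, biggest = [], 0, 0
    for pj in p:
        total += pj
        biggest = max(biggest, pj)
        times.append(total + (m - 1) * biggest)
    return times

print(completion_times([3, 1, 2], m=2))   # [6, 7, 9]
```

From these completion times, any regular criterion (makespan, total weighted completion time, and so on) of a candidate permutation can be evaluated in linear time.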

Relevance: 10.00%

Abstract:

This paper considers a special class of flow-shop problems, known as the proportionate flow shop. In such a shop, each job flows through the machines in the same order and has equal processing times on the machines. The processing times of different jobs may be different. It is assumed that all operations of a job may be compressed by the same amount which will incur an additional cost. The objective is to minimize the makespan of the schedule together with a compression cost function which is non-decreasing with respect to the amount of compression. For a bicriterion problem of minimizing the makespan and a linear cost function, an O(n log n) algorithm is developed to construct the Pareto optimal set. For a single criterion problem, an O(n²) algorithm is developed to minimize the sum of the makespan and compression cost. Copyright © 1999 John Wiley & Sons, Ltd.
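The bicriterion output here is a Pareto optimal set of (makespan, compression cost) pairs. As a generic illustration of what such a set is, not of the paper's O(n log n) construction, the sketch below filters a list of candidate pairs down to its non-dominated frontier:

```python
def pareto_front(points):
    """Keep the non-dominated (makespan, cost) pairs: a pair is dropped
    if another pair is at least as good in both coordinates and strictly
    better in one."""
    front, best_cost = [], float("inf")
    for mk, cost in sorted(points):       # ascending makespan
        if cost < best_cost:              # only a strictly better cost survives
            front.append((mk, cost))
            best_cost = cost
    return front

print(pareto_front([(10, 5), (8, 9), (10, 4), (12, 2), (9, 9)]))
# -> [(8, 9), (10, 4), (12, 2)]
```

Each point on the front is a schedule a decision-maker might reasonably pick, trading makespan against compression cost.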

Relevance: 10.00%

Abstract:

The paper deals with the determination of an optimal schedule for the so-called mixed shop problem when the makespan has to be minimized. In such a problem, some jobs have fixed machine orders (as in the job shop), while the operations of the other jobs may be processed in arbitrary order (as in the open shop). We prove binary NP-hardness of the preemptive problem with three machines and three jobs (two jobs have fixed machine orders and one may have an arbitrary machine order). We answer all other remaining open questions on the complexity status of mixed shop problems with the makespan criterion by presenting different polynomial and pseudopolynomial algorithms.