12 results for Slot-based task-splitting algorithms

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance: 100.00%

Abstract:

A model for representing music scores in a form suitable for general processing by a music-analyst-programmer is proposed and implemented. Typical input to the model consists of one or more pieces of music which are encoded in a file-based score representation. File-based representations are in a form unsuited for general processing, as they do not provide a suitable level of abstraction for a programmer-analyst. Instead, a representation is created giving a programmer's view of the score. This frees the analyst-programmer from implementation details that otherwise would form a substantial barrier to progress. The score representation uses an object-oriented approach to create a natural and robust software environment for the musicologist. The system is used to explore ways in which it could benefit musicologists. Methodologies for analysing music corpora are presented in a series of analytic examples which illustrate some of the potential of this model. Proving hypotheses or performing analysis on corpora involves the construction of algorithms. Some unique aspects of using this score model for corpus-based musicology are:
- Algorithms impose a discipline which arises from the necessity for formalism.
- Automatic analysis enables musicologists to complete tasks that otherwise would be infeasible because of limitations of their energy, attentiveness, accuracy and time.
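The abstract describes the representation only in prose. As a loose illustration of what a programmer-level, object-oriented view of a score might look like, the following Python sketch defines hypothetical Note, Part and Score classes and one trivial corpus query; all class and method names are invented here and are not taken from the thesis.

from collections import Counter
from dataclasses import dataclass, field
from typing import List

@dataclass
class Note:
    pitch: str       # e.g. "C4"
    duration: float  # in quarter notes

@dataclass
class Part:
    name: str
    notes: List[Note] = field(default_factory=list)

@dataclass
class Score:
    title: str
    parts: List[Part] = field(default_factory=list)

    def pitch_histogram(self) -> Counter:
        # A trivial analytic query: count pitch occurrences across all parts.
        return Counter(n.pitch for p in self.parts for n in p.notes)

# The analyst-programmer works against objects rather than a file format.
score = Score("Example", [Part("Soprano", [Note("C4", 1.0), Note("E4", 1.0), Note("C4", 2.0)])])
print(score.pitch_histogram())  # Counter({'C4': 2, 'E4': 1})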

Relevance: 100.00%

Abstract:

Prior work of our research group, which quantified the alarming levels of radiation dose to patients with Crohn’s disease from medical imaging and the notable shift towards CT imaging that makes these patients an at-risk group, provided context for this work. CT delivers some of the highest doses of ionising radiation in diagnostic radiology. Once a medical imaging examination is deemed justified, there is an onus on the imaging team to endeavour to produce diagnostic quality CT images at the lowest possible radiation dose to that patient. The fundamental limitation with conventional CT raw data reconstruction was the inherent coupling of administered radiation dose with observed image noise – the lower the radiation dose, the noisier the image. The renaissance, rediscovery and refinement of iterative reconstruction remove this limitation, allowing either an improvement in image quality without increasing radiation dose or maintenance of image quality at a lower radiation dose compared with traditional image reconstruction. This thesis is fundamentally an exercise in optimisation in clinical CT practice with the objectives of assessment of iterative reconstruction as a method for improvement of image quality in CT, exploration of the associated potential for radiation dose reduction, and development of a new split dose CT protocol with the aim of achieving and validating diagnostic quality submillisievert CT imaging in patients with Crohn’s disease. In this study, we investigated the interplay of user-selected parameters on radiation dose and image quality in phantoms and cadavers, comparing traditional filtered back projection (FBP) with iterative reconstruction algorithms. This resulted in the development of an optimised, refined and appropriate split dose protocol for CT of the abdomen and pelvis in clinical patients with Crohn’s disease, allowing contemporaneous acquisition of both modified and conventional dose CT studies. This novel protocol was then applied to 50 patients with a suspected acute complication of known Crohn’s disease, and the raw data were reconstructed with FBP, adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR). Conventional dose CT images with FBP reconstruction were used as the reference standard with which the modified dose CT images were compared in terms of radiation dose, diagnostic findings and image quality indices. As there are multiple possible user-selected strengths of ASiR available, these were compared in terms of image quality to determine the optimal strength for this modified dose CT protocol. Modified dose CT images with MBIR were also compared with contemporaneous abdominal radiographs, where performed, in terms of diagnostic yield and radiation dose. Finally, attenuation measurements in organs, tissues, etc. with each reconstruction algorithm were compared to assess the preservation of tissue characterisation capabilities. In the phantom and cadaveric models, both forms of iterative reconstruction examined (ASiR and MBIR) were superior to FBP across a wide variety of imaging protocols, with MBIR superior to ASiR in all areas other than reconstruction speed. We established that ASiR appears to work to a target percentage noise reduction whilst MBIR works to a target residual level of absolute noise in the image.
Modified dose CT images reconstructed with both ASiR and MBIR were non-inferior to conventional dose CT with FBP in terms of diagnostic findings, despite reduced subjective and objective indices of image quality. Mean dose reductions of 72.9-73.5% were achieved with the modified dose protocol, with a mean effective dose of 1.26 mSv. MBIR was again demonstrated to be superior to ASiR in terms of image quality. The overall optimal ASiR strength for the modified dose protocol used in this work is ASiR 80%, as this provides the most favourable balance of peak subjective image quality indices with less objective image noise than the corresponding conventional dose CT images reconstructed with FBP. Despite guidelines to the contrary, abdominal radiographs are still often used in the initial imaging of patients with a suspected complication of Crohn’s disease. We confirmed the superiority of modified dose CT with MBIR over abdominal radiographs at comparable doses in the detection of Crohn’s disease and non-Crohn’s disease related findings. Finally, we demonstrated (in phantoms, cadavers and in vivo) that attenuation values do not change significantly across reconstruction algorithms, meaning tissue characterisation capabilities are preserved with iterative reconstruction. Both adaptive statistical and model-based iterative reconstruction algorithms represent feasible methods of facilitating the acquisition of diagnostic quality CT images of the abdomen and pelvis in patients with Crohn’s disease at markedly reduced radiation doses. Our modified dose CT protocol allows dose savings of up to 73.5% compared with conventional dose CT, meaning submillisievert imaging is possible in many of these patients.
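The reported dose savings follow from simple arithmetic on the effective doses. The sketch below uses the quoted modified-protocol mean of 1.26 mSv and assumes, purely for illustration, a conventional-protocol mean of about 4.7 mSv, which would be consistent with the reported 72.9-73.5% reduction; that conventional figure is not stated in the abstract.

# Illustrative check of the reported dose reduction. Only the 1.26 mSv
# modified-protocol dose is quoted in the abstract; the conventional dose
# below is an assumed value chosen to be consistent with the reported range.
conventional_dose_mSv = 4.7   # assumed illustrative conventional effective dose
modified_dose_mSv = 1.26      # reported mean effective dose of the modified protocol

reduction = (conventional_dose_mSv - modified_dose_mSv) / conventional_dose_mSv
print(f"Dose reduction: {reduction:.1%}")  # ~73.2%, within the reported 72.9-73.5%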

Relevance: 40.00%

Abstract:

Constraint programming has emerged as a successful paradigm for modelling combinatorial problems arising from practical situations. In many of those situations, we are not provided with an immutable set of constraints. Instead, a user will modify his requirements, in an interactive fashion, until he is satisfied with a solution. Examples of such applications include, amongst others, model-based diagnosis, expert systems, and product configurators. The system he interacts with must be able to assist him by showing the consequences of his requirements. Explanations are the ideal tool for providing this assistance. However, existing notions of explanations fail to provide sufficient information. We define new forms of explanations that aim to be more informative. Although explanation generation is a very hard task, in the applications we consider we must provide a satisfactory level of interactivity and therefore cannot afford long computation times. We introduce the concept of representative sets of relaxations, a compact set of relaxations that shows the user at least one way to satisfy each of his requirements and at least one way to relax them, and present an algorithm that efficiently computes such sets. We introduce the concept of most soluble relaxations, which maximise the number of products they allow. We present algorithms to compute such relaxations in times compatible with interactivity, achieving this by making use of different types of compiled representations interchangeably. We propose to generalise the concept of prime implicates to constraint problems with the concept of domain consequences, and suggest generating them as a compilation strategy. This sets out a new approach to compilation and allows explanation-related queries to be addressed efficiently. We define ordered automata to compactly represent large sets of domain consequences, in a way that is orthogonal to existing compilation techniques, which represent large sets of solutions.
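To make the notion of a relaxation concrete, here is a toy Python sketch that brute-forces the maximal satisfiable subsets of an over-constrained requirement set over a tiny product catalogue. The catalogue, requirement names and the brute-force enumeration are invented for illustration; the thesis's representative sets and most soluble relaxations are computed far more efficiently.

from itertools import combinations

# Toy illustration of relaxing an over-constrained requirement set; the
# catalogue and requirements are invented and the enumeration is brute force.
catalogue = [
    {"price": 900,  "ram": 8,  "ssd": True},
    {"price": 1200, "ram": 16, "ssd": True},
    {"price": 700,  "ram": 16, "ssd": False},
]

requirements = {
    "cheap":   lambda p: p["price"] <= 800,
    "big_ram": lambda p: p["ram"] >= 16,
    "has_ssd": lambda p: p["ssd"],
}

def solutions(reqs):
    return [p for p in catalogue if all(requirements[r](p) for r in reqs)]

names = list(requirements)
if not solutions(names):  # the full set of requirements is over-constrained
    maximal = []          # a relaxation is a satisfiable subset; keep maximal ones
    for k in range(len(names) - 1, 0, -1):
        for subset in combinations(names, k):
            if solutions(subset) and not any(set(subset) < m for m in maximal):
                maximal.append(set(subset))
    print(maximal)  # two maximal relaxations: {cheap, big_ram} and {big_ram, has_ssd}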

Relevance: 40.00%

Abstract:

Choosing the right or the best option is often a demanding and challenging task for the user (e.g., a customer in an online retailer) when there are many available alternatives. In fact, the user rarely knows which offering will provide the highest value. To reduce the complexity of the choice process, automated recommender systems generate personalized recommendations. These recommendations take into account the preferences collected from the user in an explicit (e.g., letting users express their opinion about items) or implicit (e.g., studying some behavioral features) way. Such systems are widespread; research indicates that they increase customer satisfaction and lead to higher sales. Preference handling is one of the core issues in the design of every recommender system. This kind of system often aims at guiding users in a personalized way to interesting or useful options in a large space of possible options. Therefore, it is important for such systems to capture and model the user's preferences as accurately as possible. In this thesis, we develop a comparative preference-based user model to represent the user's preferences in conversational recommender systems. This type of user model allows the recommender system to capture several preference nuances from the user's feedback. We show that, when applied to conversational recommender systems, the comparative preference-based model is able to guide the user towards the best option while the system is interacting with her. We empirically test and validate the suitability and the practical computational aspects of the comparative preference-based user model and the related preference relations by comparing them to a sum-of-weights-based user model and the related preference relations. Product configuration, scheduling a meeting and the construction of autonomous agents are among several artificial intelligence tasks that involve a process of constrained optimization, that is, optimization of behavior or options subject to given constraints with regard to a set of preferences. When solving a constrained optimization problem, pruning techniques, such as the branch and bound technique, aim at directing the search towards the best assignments, thus allowing the bounding functions to prune more branches in the search tree. Several constrained optimization problems may exhibit dominance relations. These dominance relations can be particularly useful in constrained optimization problems as they can instigate new ways (rules) of pruning non-optimal solutions. Such pruning methods can achieve dramatic reductions in the search space while looking for optimal solutions. A number of constrained optimization problems can model the user's preferences using comparative preferences. In this thesis, we develop a set of pruning rules used in the branch and bound technique to efficiently solve this kind of optimization problem. More specifically, we show how to generate newly defined pruning rules from a dominance algorithm that refers to a set of comparative preferences. These rules include pruning approaches (and combinations of them) which can drastically prune the search space. They mainly reduce the number of (expensive) pairwise comparisons performed during the search while guiding constrained optimization algorithms to find optimal solutions. Our experimental results show that the pruning rules that we have developed and their different combinations have varying impact on the performance of the branch and bound technique.
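As background for the pruning discussion, here is a minimal branch-and-bound sketch in Python in which a branch is abandoned whenever an optimistic bound shows it cannot beat the incumbent. The product-configuration instance and the simple cost bound are invented for illustration; the thesis's dominance-based rules over comparative preferences are a more refined form of this idea, not reproduced here.

# Minimal branch-and-bound with bound-based pruning; the instance, constraint
# and bound are invented and do not reproduce the thesis's preference rules.
domains = {               # variable -> {value: cost}
    "cpu":  {"slow": 1, "fast": 4},
    "ram":  {"8GB": 1, "16GB": 3},
    "disk": {"hdd": 1, "ssd": 2},
}

def feasible(assignment):
    # Invented constraint: a fast cpu requires 16GB ram.
    if assignment.get("cpu") == "fast" and assignment.get("ram") == "8GB":
        return False
    return True

best = {"cost": float("inf"), "assignment": None}

def branch(assignment, cost, remaining):
    if not feasible(assignment):
        return                                   # constraint violation: prune
    bound = cost + sum(min(domains[v].values()) for v in remaining)
    if bound >= best["cost"]:
        return                                   # cannot beat the incumbent: prune
    if not remaining:
        best["cost"], best["assignment"] = cost, dict(assignment)
        return
    var, rest = remaining[0], remaining[1:]
    for value, c in sorted(domains[var].items(), key=lambda kv: kv[1]):
        assignment[var] = value
        branch(assignment, cost + c, rest)
        del assignment[var]

branch({}, 0, list(domains))
print(best)  # {'cost': 3, 'assignment': {'cpu': 'slow', 'ram': '8GB', 'disk': 'hdd'}}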

Relevance: 40.00%

Abstract:

In this thesis, extensive experiments are first conducted to characterize the performance of using the emerging IEEE 802.15.4-2011 ultra wideband (UWB) standard for indoor localization, and the results demonstrate the accuracy and precision of using time of arrival measurements for ranging applications. A multipath propagation control technique is synthesized which considers the relationship between transmit power, transmission range and signal-to-noise ratio. The methodology includes a novel bilateral transmitter output power control algorithm which is demonstrated to be able to stabilize the multipath channel and enable sub-5 cm instant ranging accuracy in line-of-sight conditions. A fully-coupled architecture is proposed for the localization system using a combination of IEEE 802.15.4-2011 UWB and inertial sensors. This architecture not only implements the position estimation of the object by fusing the UWB and inertial measurements, but also enables the nodes in the localization network to mutually share positional and other useful information via the UWB channel. The hybrid system has been demonstrated to be capable of simultaneous local positioning and remote tracking of the mobile object. Three fusion algorithms for relative position estimation are proposed, including an inertial navigation system (INS), INS with UWB ranging correction, and orientation plus ranging. Experimental results show that the INS with UWB correction algorithm achieves an average position accuracy of 0.1883 m, an improvement of 83% and 62% over the INS alone (1.0994 m) and the existing extended Kalman filter tracking algorithm (0.5 m), respectively.
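The quoted improvement percentages follow directly from the mean position errors reported in the abstract, as the short check below shows; it uses only the numbers given above.

# Check the reported accuracy improvements from the quoted mean position errors.
ins_only = 1.0994      # m, INS-only mean position error
existing_ekf = 0.5     # m, existing extended Kalman filter tracking algorithm
ins_plus_uwb = 0.1883  # m, proposed INS with UWB ranging correction

print(f"vs INS alone:    {(ins_only - ins_plus_uwb) / ins_only:.0%}")          # ~83%
print(f"vs existing EKF: {(existing_ekf - ins_plus_uwb) / existing_ekf:.0%}")  # ~62%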

Relevance: 40.00%

Abstract:

Many studies have shown the considerable potential for the application of remote-sensing-based methods for deriving estimates of lake water quality. However, the reliable application of these methods across time and space is complicated by the diversity of lake types, sensor configurations, and the multitude of different algorithms proposed. This study tested one operational and 46 empirical algorithms sourced from the peer-reviewed literature that have individually shown potential for estimating lake water quality properties in the form of chlorophyll-a (algal biomass) and Secchi disc depth (SDD) (water transparency) in independent studies. Nearly half (19) of the algorithms were unsuitable for use with the remote-sensing data available for this study. The remaining 28 were assessed using the Terra/Aqua satellite archive to identify the best performing algorithms in terms of accuracy and transferability within the period 2001–2004 in four test lakes, namely Vänern, Vättern, Geneva, and Balaton. These lakes represent the broad continuum of large European lake types, varying in terms of eco-region (latitude/longitude and altitude), morphology, mixing regime, and trophic status. All algorithms were tested for each lake separately and for all lakes combined to assess the degree of their applicability in ecologically different sites. None of the algorithms assessed in this study exhibited promise when all four lakes were combined into a single data set, and most algorithms performed poorly even for specific lake types. A chlorophyll-a retrieval algorithm originally developed for eutrophic lakes showed the most promising results (R2 = 0.59) in oligotrophic lakes. Two SDD retrieval algorithms, one originally developed for turbid lakes and the other for lakes with various characteristics, exhibited promising results in relatively less turbid lakes (R2 = 0.62 and 0.76, respectively). The results presented here highlight the complexity associated with remotely sensed lake water quality estimates and the high degree of uncertainty due to various limitations, including the lake water optical properties and the choice of methods.
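The R2 scores quoted above are coefficients of determination between algorithm estimates and in-situ measurements. The Python sketch below shows how such a score might be computed for a hypothetical band-ratio chlorophyll-a algorithm; the band names, coefficients and data are all invented, so the printed value says nothing about the algorithms tested in the study.

import numpy as np

# Hypothetical evaluation of a band-ratio chlorophyll-a retrieval algorithm;
# reflectances, coefficients and in-situ values are invented for illustration.
in_situ_chl = np.array([2.1, 4.8, 9.5, 15.2, 22.0])          # mg m^-3
band_red    = np.array([0.020, 0.028, 0.041, 0.055, 0.071])  # surface reflectance
band_nir    = np.array([0.010, 0.011, 0.013, 0.015, 0.017])

estimated_chl = 9.0 * (band_red / band_nir) - 16.0           # invented empirical form

ss_res = np.sum((in_situ_chl - estimated_chl) ** 2)
ss_tot = np.sum((in_situ_chl - in_situ_chl.mean()) ** 2)
print(f"R^2 = {1.0 - ss_res / ss_tot:.2f}")  # meaningful only for this toy data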

Relevance: 30.00%

Abstract:

Case-Based Reasoning (CBR) uses past experiences to solve new problems. The quality of the past experiences, which are stored as cases in a case base, is a major factor in the performance of a CBR system. The system's competence may be improved by adding problems to the case base after they have been solved and their solutions verified to be correct. However, from time to time, the case base may have to be refined to reduce redundancy and to remove any noisy cases that may have been introduced. Many case base maintenance algorithms have been developed to delete noisy and redundant cases. However, different algorithms work well in different situations and it may be difficult for a knowledge engineer to know which one is the best to use for a particular case base. In this thesis, we investigate ways to combine algorithms to produce better deletion decisions than the decisions made by individual algorithms, and ways to choose which algorithm is best for a given case base at a given time. We analyse five of the most commonly-used maintenance algorithms in detail and show how the different algorithms perform better on different datasets. This motivates us to develop a new approach: maintenance by a committee of experts (MACE). MACE allows us to combine maintenance algorithms to produce a composite algorithm which exploits the merits of each of the algorithms that it contains. By combining different algorithms in different ways we can also define algorithms that have different trade-offs between accuracy and deletion. While MACE allows us to define an infinite number of new composite algorithms, we still face the problem of choosing which algorithm to use. To make this choice, we need to be able to identify properties of a case base that are predictive of which maintenance algorithm is best. We examine a number of measures of dataset complexity for this purpose. These provide a numerical way to describe a case base at a given time. We use the numerical description to develop a meta-case-based classification system. This system uses previous experience about which maintenance algorithm was best to use for other case bases to predict which algorithm to use for a new case base. Finally, we give the knowledge engineer more control over the deletion process by creating incremental versions of the maintenance algorithms. These incremental algorithms suggest one case at a time for deletion rather than a group of cases, which allows the knowledge engineer to decide whether each case in turn should be deleted or kept. We also develop incremental versions of the complexity measures, allowing us to create an incremental version of our meta-case-based classification system. Since the case base changes after each deletion, the best algorithm to use may also change. The incremental system allows us to choose which algorithm is the best to use at each point in the deletion process.
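One natural way to combine maintenance algorithms is to let each member vote on deletions and act only when enough members agree. The Python sketch below illustrates that committee idea with two invented toy "experts"; the function names and the simple voting rule are assumptions for illustration and do not reproduce the MACE combinators developed in the thesis.

from typing import Callable, Iterable, List, Set

# Hypothetical committee-of-experts combinator for case-base maintenance: each
# member votes on case indices to delete; a case is deleted only with enough
# votes. Names and the voting rule are invented, not the thesis's MACE design.
MaintenanceAlgorithm = Callable[[List[dict]], Set[int]]

def committee_deletions(case_base: List[dict],
                        members: Iterable[MaintenanceAlgorithm],
                        min_votes: int) -> Set[int]:
    votes = {}
    for algorithm in members:
        for idx in algorithm(case_base):
            votes[idx] = votes.get(idx, 0) + 1
    return {idx for idx, n in votes.items() if n >= min_votes}

def drop_duplicates(cb):           # toy expert 1: flag exact duplicate cases
    seen, flagged = set(), set()
    for i, case in enumerate(cb):
        key = (case["problem"], case["solution"])
        if key in seen:
            flagged.add(i)
        else:
            seen.add(key)
    return flagged

def drop_noisy(cb):                # toy expert 2: flag cases marked as noisy
    return {i for i, case in enumerate(cb) if case.get("noisy")}

cases = [{"problem": "p1", "solution": "s1"},
         {"problem": "p1", "solution": "s1", "noisy": True},
         {"problem": "p2", "solution": "s2"}]
print(committee_deletions(cases, [drop_duplicates, drop_noisy], min_votes=2))  # {1}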

Relevance: 30.00%

Abstract:

The ever-increasing demand for broadband communications requires sophisticated devices. Photonic integrated circuits (PICs) are an approach that fulfills those requirements. PICs enable the integration of different optical modules on a single chip. Low-loss fiber coupling and simplified packaging are key issues in keeping the price of PICs at a low level. Integrated spot size converters (SSCs) offer an opportunity to accomplish this. Design, fabrication and characterization of SSCs based on an asymmetric twin waveguide (ATG) at a wavelength of 1.55 μm are the main elements of this dissertation. It is theoretically and experimentally shown that a passive ATG facilitates a polarization filter mechanism. A reproducible InP process guideline is developed that achieves vertical waveguides with smooth sidewalls. Birefringence and resonant coupling are used in an ATG to enable a polarization filtering and splitting mechanism. For the first time such a filter is experimentally shown. At a wavelength of 1610 nm a power extinction ratio of (1.6 ± 0.2) dB was measured for the TE-polarization in a single, approximately 372 μm long, TM-pass polarizer. A TE-pass polarizer with a similar length was demonstrated with a TM/TE power extinction ratio of (0.7 ± 0.2) dB at 1610 nm. The refractive indices of two different InGaAsP compositions, required for an SSC, are measured by the reflection spectroscopy technique. An SSC layout for dielectric-free fabricated compact photodetectors is adjusted to those index values. The development and the results of the final fabrication procedure for the ATG concept are outlined. The etch rate, sidewall roughness and selectivity of a Cl2/CH4/H2 based inductively coupled plasma (ICP) etch are investigated by a design-of-experiments approach. The passivation effect of CH4 is illustrated for the first time. Conditions are determined for etching smooth and vertical sidewalls up to a depth of 5 μm.
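As a point of reference for the figures above, a power extinction ratio in decibels maps to a linear power ratio via ER_dB = 10·log10(P_pass/P_blocked); the short conversion below uses only the reported values and this standard formula, nothing specific to the thesis.

# Convert the reported polarizer extinction ratios from dB to linear power ratios.
for label, er_db in [("TM-pass polarizer (TE suppression)", 1.6),
                     ("TE-pass polarizer (TM suppression)", 0.7)]:
    print(f"{label}: {er_db} dB ~= {10 ** (er_db / 10):.2f}x power ratio")
# 1.6 dB ~= 1.45x, 0.7 dB ~= 1.17x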

Relevance: 30.00%

Abstract:

Along with the growing demand for cryptosystems in systems ranging from large servers to mobile devices, suitable cryptographic protocols for use under certain constraints are becoming more and more important. Constraints such as calculation time, area, efficiency and security must be considered by the designer. Elliptic curves, since their introduction to public key cryptography in 1985, have challenged established public key and signature generation schemes such as RSA, offering more security per bit. Amongst elliptic curve based systems, pairing-based cryptography is thoroughly researched and can be used in many public key protocols such as identity-based schemes. For hardware implementations of pairing-based protocols, all components which calculate operations over elliptic curves can be considered. Designers of the pairing algorithms must choose calculation blocks and arrange the basic operations carefully so that the implementation can meet the constraints of time and hardware resource area. This thesis deals with different hardware architectures to accelerate pairing-based cryptosystems over fields of characteristic two. Using different top-level architectures, the hardware efficiency of operations that run at different times is first considered in this thesis. Security is another important aspect of pairing-based cryptography to be considered, particularly with respect to Side Channel Analysis (SCA) attacks. Naively implemented hardware accelerators for pairing-based cryptography can be vulnerable when physical analysis attacks are taken into consideration. This thesis considers the weaknesses in pairing-based public key cryptography and addresses the particular calculations in the systems that are insecure. In this case, countermeasures should be applied to protect the weak links of the implementation and to harden the pairing-based algorithms. Some important rules that the designers must obey to improve the security of the cryptosystems are proposed. According to these rules, three countermeasures that protect the pairing-based cryptosystems against SCA attacks are applied. The implementations of the countermeasures are presented and their performances are investigated.

Relevance: 30.00%

Abstract:

The electroencephalogram (EEG) is a medical technology that is used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up times, and is suitably unobtrusive and mobile to allow continuous monitoring of the patient, either in clinical or domestic environments. Consequently, the EEG is the current tool-of-choice with which to continuously monitor the brain where temporal resolution, ease-of-use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness, and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade the performance of automatic neurological event detection and classification. This thesis, therefore, contributes to the further improvement of automated neurological event detection systems by identifying some of the major obstacles in deploying these EEG systems in ambulatory and clinical environments, so that the EEG technologies can emerge from the laboratory towards real-world settings, where they can have a real impact on the lives of patients. In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) the problem of head-movement artefacts in ambulatory EEG, (ii) the high numbers of false detections in state-of-the-art, automated, epileptiform activity detection systems and (iii) false detections in state-of-the-art, automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system to automatically detect head-movement artefacts in ambulatory EEG and utilises supervised machine learning classifiers to do so. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional signals, in the form of gyroscope measurements, are used to detect head movements and, in doing so, bring additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection which compares favourably with other state-of-the-art systems is achieved.
Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner; blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefact from the EEG. In utilising these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, and in doing so, the performance of the underlying diagnostic technology is improved, bringing its deployment in the real-world clinical domain one step closer.
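The fusion of an event detector with an artefact-specific classifier can be pictured with a simple veto rule: suppress detections in epochs where the artefact classifier is confident. The Python sketch below is a hypothetical illustration of that idea only; the probabilities, thresholds and rule are invented, and the thesis fuses classifier information in a more principled way.

import numpy as np

# Hypothetical fusion rule: suppress epileptiform detections that coincide with
# confident artefact detections. Probabilities and thresholds are invented.
epileptiform_prob = np.array([0.92, 0.15, 0.88, 0.70])  # per-epoch detector output
artefact_prob     = np.array([0.10, 0.05, 0.95, 0.30])  # head-movement artefact classifier

raw_detections   = epileptiform_prob > 0.5
fused_detections = raw_detections & ~(artefact_prob > 0.8)  # veto artefact-dominated epochs

print(raw_detections)    # [ True False  True  True]
print(fused_detections)  # [ True False False  True] -> fewer false detections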

Relevance: 30.00%

Abstract:

The topic of this thesis is impulsivity. The meaning and measurement of impulse control is explored, with a particular focus on forensic settings. Impulsivity is central to many areas of psychology; it is one of the most common diagnostic criteria of mental disorders and is fundamental to the understanding of forensic personalities. Despite this widespread importance, there is little agreement as to the definition or structure of impulsivity, and its measurement is fraught with difficulty owing to a reliance on self-report methods. This research aims to address this problem by investigating the viability of using simple computerised cognitive performance tasks as complementary components of a multi-method assessment strategy for impulse control. Ultimately, the usefulness of this measurement strategy for a forensic sample is assessed. Impulsivity is found to be a multifaceted construct comprised of a constellation of distinct sub-dimensions. Computerised cognitive performance tasks are valid and reliable measures that can assess impulsivity at a neuronal level. Self-report and performance task methods assess distinct components of impulse control and, for the optimal assessment of impulse control, a multi-method battery of self-report and performance task measures is advocated. Such a battery is shown to have utility in a forensic sample, and recommendations for forensic assessment in the Irish context are discussed.

Relevance: 30.00%

Abstract:

Traditionally, attacks on cryptographic algorithms looked for mathematical weaknesses in the underlying structure of a cipher. Side-channel attacks, however, look to extract secret key information based on the leakage from the device on which the cipher is implemented, be it smart-card, microprocessor, dedicated hardware or personal computer. Attacks based on the power consumption, electromagnetic emanations and execution time have all been practically demonstrated on a range of devices to reveal partial secret-key information from which the full key can be reconstructed. The focus of this thesis is power analysis, more specifically a class of attacks known as profiling attacks. These attacks assume a potential attacker has access to, or can control, an identical device to that which is under attack, which allows him to profile the power consumption of operations or data flow during encryption. This assumes a stronger adversary than traditional non-profiling attacks such as differential or correlation power analysis; however, the ability to model a device allows templates to be used post-profiling to extract key information from many different target devices using the power consumption of very few encryptions. This allows an adversary to overcome protocols intended to prevent secret key recovery by restricting the number of available traces. In this thesis, a detailed investigation of template attacks is conducted, examining how the selection of various attack parameters practically affects the efficiency of secret key recovery, as well as the underlying assumption of profiling attacks that the power consumption of one device can be used to extract secret keys from another. Trace-only attacks, where the corresponding plaintext or ciphertext data is unavailable, are then investigated against both symmetric and asymmetric algorithms with the goal of key recovery from a single trace. This allows an adversary to bypass many of the currently proposed countermeasures, particularly in the asymmetric domain. An investigation into machine-learning methods for side-channel analysis as an alternative to template or stochastic methods is also conducted, with support vector machines, logistic regression and neural networks investigated from a side-channel viewpoint. Both binary and multi-class classification attack scenarios are examined in order to explore the relative strengths of each algorithm. Finally, these machine-learning-based alternatives are empirically compared with template attacks, and their respective merits are examined with regard to attack efficiency.
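At its core, a template attack fits one multivariate Gaussian per key-dependent class from profiling traces and then classifies attack traces by maximum likelihood. The Python sketch below illustrates only that core idea on synthetic data; the trace model, class structure and parameters are invented, and a real attack would operate on measured power traces and cipher intermediate values.

import numpy as np
from scipy.stats import multivariate_normal

# Minimal template-attack sketch on synthetic "power traces": one multivariate
# Gaussian template per class (e.g. per key-dependent Hamming weight), then
# maximum-likelihood classification of a single attack trace.
rng = np.random.default_rng(0)
n_classes, n_profiling, n_points = 4, 200, 5

# Profiling phase: traces from a device the attacker controls.
means = rng.normal(0, 1, size=(n_classes, n_points))
profiling = {c: means[c] + 0.3 * rng.normal(size=(n_profiling, n_points))
             for c in range(n_classes)}
templates = {c: (traces.mean(axis=0), np.cov(traces, rowvar=False))
             for c, traces in profiling.items()}

# Attack phase: classify one trace from the target device by maximum likelihood.
true_class = 2
attack_trace = means[true_class] + 0.3 * rng.normal(size=n_points)
log_likelihoods = {c: multivariate_normal(mean=mu, cov=cov).logpdf(attack_trace)
                   for c, (mu, cov) in templates.items()}
print(max(log_likelihoods, key=log_likelihoods.get))  # expected to recover class 2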