970 results for synchrotron-based techniques


Relevance:

30.00%

Publisher:

Abstract:

The problem of discovering frequent arrangements of temporal intervals is studied. It is assumed that the database consists of sequences of events, where each event occurs over a time interval. The goal is to mine temporal arrangements of event intervals that appear frequently in the database. The motivation for this work is the observation that in practice most events are not instantaneous but occur over a period of time, and different events may occur concurrently. Thus, many practical applications require mining such temporal correlations between intervals, including the linguistic analysis of annotated data from American Sign Language as well as network and biological data. Two efficient methods for finding frequent arrangements of temporal intervals are described: the first is tree-based and uses depth-first search to mine the set of frequent arrangements, whereas the second is prefix-based. Both methods apply efficient pruning techniques, including regular-expression and gap constraints that add user-controlled focus to the mining process. Moreover, a standard association-rule mining method is applied to the extracted patterns, using different interestingness measures to evaluate the significance of the discovered patterns and rules. The performance of the proposed algorithms is evaluated and compared with other approaches on real (American Sign Language annotations and network data) and large synthetic datasets.
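
A minimal Python sketch of the kind of pattern being mined is given below: it counts how often pairs of labelled event intervals appear in a given temporal relation across a database of sequences and keeps the pairs that meet a support threshold. The data model and the simplified Allen-style relations are assumptions for illustration; this is not the thesis's tree-based or prefix-based algorithm and it only considers pairwise arrangements.

```python
# Count frequent pairwise temporal relations between labelled event intervals.
from collections import Counter
from itertools import combinations

def relation(a, b):
    """Simplified Allen-style relation between intervals a = (start, end), b = (start, end)."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:
        return "before"
    if s1 == s2 and e1 == e2:
        return "equals"
    if s1 <= s2 and e2 <= e1:
        return "contains"
    return "overlaps"

def frequent_pairs(database, min_support):
    """database: list of sequences; each sequence is a list of (label, start, end) events."""
    counts = Counter()
    for sequence in database:
        seen = set()
        events = sorted(sequence, key=lambda ev: ev[1])          # order by start time
        for (la, sa, ea), (lb, sb, eb) in combinations(events, 2):
            pattern = (la, relation((sa, ea), (sb, eb)), lb)
            if pattern not in seen:                               # support = one count per sequence
                seen.add(pattern)
                counts[pattern] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

db = [[("A", 0, 5), ("B", 2, 8), ("C", 9, 12)],
      [("A", 1, 4), ("B", 3, 9), ("C", 10, 11)]]
print(frequent_pairs(db, min_support=2))   # ('A', 'overlaps', 'B') appears in both sequences
```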

Relevance:

30.00%

Publisher:

Abstract:

Constraint programming has emerged as a successful paradigm for modelling combinatorial problems arising from practical situations. In many of those situations, we are not provided with an immutable set of constraints. Instead, a user will modify his requirements, in an interactive fashion, until he is satisfied with a solution. Examples of such applications include, amongst others, model-based diagnosis, expert systems, and product configurators. The system the user interacts with must be able to assist him by showing the consequences of his requirements. Explanations are the ideal tool for providing this assistance. However, existing notions of explanations fail to provide sufficient information. We define new forms of explanations that aim to be more informative. Although explanation generation is a very hard task, the applications we consider demand a satisfactory level of interactivity, so we cannot afford long computation times. We introduce the concept of representative sets of relaxations, a compact set of relaxations that shows the user at least one way to satisfy each of his requirements and at least one way to relax them, and present an algorithm that efficiently computes such sets. We introduce the concept of most soluble relaxations, which maximise the number of products they allow. We present algorithms that compute such relaxations in times compatible with interactivity, making use interchangeably of different types of compiled representations. We propose to generalise the concept of prime implicates to constraint problems through the concept of domain consequences, and suggest generating them as a compilation strategy. This establishes a new approach to compilation and allows explanation-related queries to be addressed efficiently. We define ordered automata to compactly represent large sets of domain consequences, orthogonally to existing compilation techniques that represent large sets of solutions.
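
To give a concrete flavour of what a relaxation is, the sketch below greedily grows one satisfiable subset of the user's requirements per requirement, so that every requirement is kept by at least one returned set, a rough, assumed stand-in for the idea behind representative sets of relaxations. The is_satisfiable stub and its toy (variable, value) semantics are placeholders for a real constraint solver; the thesis's algorithms for representative sets and most soluble relaxations are not reproduced here.

```python
# Grow one maximal satisfiable subset (relaxation) per requirement.
def is_satisfiable(constraints):
    """Hypothetical solver call; replace with a real CP/SAT check.
    Toy semantics: constraints are (variable, value) pairs, and a set is
    satisfiable if no variable is assigned two different values."""
    assignment = {}
    for var, val in constraints:
        if assignment.get(var, val) != val:
            return False
        assignment[var] = val
    return True

def representative_relaxations(requirements):
    relaxations = []
    for anchor in requirements:
        relax = [anchor]                                   # keep this requirement...
        for other in requirements:
            if other is not anchor and is_satisfiable(relax + [other]):
                relax.append(other)                        # ...and add whatever else still fits
        relax_set = frozenset(relax)
        if relax_set not in relaxations:
            relaxations.append(relax_set)
    return relaxations

requirements = [("colour", "red"), ("colour", "blue"), ("engine", "diesel")]
for r in representative_relaxations(requirements):
    print(sorted(r))    # every requirement appears in at least one printed relaxation
```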

Relevance:

30.00%

Publisher:

Abstract:

A massive change is currently taking place in the manner in which power networks are operated. Traditionally, power networks consisted of large power stations controlled from centralised locations. The trend in modern power networks is for generated power to be produced by a diverse array of energy sources spread over a large geographical area. As a result, controlling these systems from a centralised controller is impractical. Thus, future power networks will be controlled by a large number of intelligent distributed controllers which must work together to coordinate their actions. Smart Grid is the umbrella term for this combination of power systems, artificial intelligence, and communications engineering. This thesis focuses on the application of optimal control techniques to Smart Grids, with a particular focus on iterative distributed MPC. A novel convergence and stability proof for iterative distributed MPC based on the Alternating Direction Method of Multipliers is derived. The performance of distributed MPC, centralised MPC, and an optimised PID controller is then compared on a highly interconnected, nonlinear, MIMO testbed based on a part of the Nordic power grid. Finally, a novel tuning algorithm is proposed for iterative distributed MPC which simultaneously optimises both the closed-loop performance and the communication overhead associated with the desired control.
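
Since the convergence proof rests on the Alternating Direction Method of Multipliers, a minimal consensus-ADMM iteration is sketched below in NumPy: each "local controller" minimises its own quadratic cost while all agents are driven to agree on a shared decision variable. The scalar costs and penalty parameter are assumptions chosen only to show the update structure, not the thesis's MPC formulation.

```python
# Consensus ADMM on a toy problem: minimise sum_i 0.5 * a_i * (x - c_i)^2 over a shared x.
import numpy as np

a = np.array([1.0, 2.0, 4.0])   # local curvatures (assumed)
c = np.array([0.0, 1.0, 3.0])   # local targets (assumed)
rho = 1.0                        # ADMM penalty parameter

x = np.zeros(3)                  # local copies of the decision variable
u = np.zeros(3)                  # scaled dual variables
z = 0.0                          # consensus variable

for _ in range(100):
    # local updates: argmin_x 0.5*a_i*(x - c_i)^2 + (rho/2)*(x - z + u_i)^2
    x = (a * c + rho * (z - u)) / (a + rho)
    # consensus update: average of x_i + u_i
    z = np.mean(x + u)
    # dual update
    u = u + x - z

# The analytic optimum is the a-weighted mean of the targets c.
print(z, np.dot(a, c) / np.sum(a))   # the two values should nearly coincide
```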

Relevance:

30.00%

Publisher:

Abstract:

There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique, and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start, and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
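
A minimal SymPy example of the role played by term orders is sketched below: the same ideal yields different reduced Gröbner bases under different orders, which is the flexibility the abstract alludes to. This is generic Gröbner machinery, not the congruence-solving algorithm developed in the thesis.

```python
# Groebner bases of the same ideal under two different term orders.
from sympy import symbols, groebner

x, y = symbols("x y")
polys = [x**2 + y**2 - 1, x*y - 1]             # generators of an ideal in Q[x, y]

print(groebner(polys, x, y, order="lex"))      # lexicographic order
print(groebner(polys, x, y, order="grevlex"))  # graded reverse lexicographic order
```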

Relevance:

30.00%

Publisher:

Abstract:

An aim of proactive risk management strategies is the timely identification of safety-related risks. One way to achieve this is by deploying early warning systems. Early warning systems aim to provide, in a timely manner, useful information on the presence of potential threats to the system, on the level of vulnerability of a system, or both. This information can then be used to take proactive safety measures. The United Nations has recommended that any early warning system have four essential elements: the risk knowledge element, a monitoring and warning service, dissemination and communication, and a response capability. This research deals with the risk knowledge element of an early warning system. The risk knowledge element of an early warning system contains models of possible accident scenarios. These accident scenarios are created using hazard analysis techniques, which are categorised as traditional or contemporary. Traditional hazard analysis techniques assume that accidents occur due to a sequence of events, whereas contemporary hazard analysis techniques assume that safety is an emergent property of complex systems. The problem is that no software editor is available that lets analysts create models of accident scenarios based on contemporary hazard analysis techniques and, at the same time, generate computer code that represents those models. This research aims to enhance the process of generating computer code from graphical models that associate early warning signs and causal factors with a hazard, based on contemporary hazard analysis techniques. For this purpose, the thesis investigates the use of Domain Specific Modeling (DSM) technologies. The contribution of this thesis is the design and development of a set of three graphical Domain Specific Modeling Languages (DSMLs) that, when combined, provide all of the constructs necessary to enable safety experts and practitioners to conduct hazard and early warning analysis based on a contemporary hazard analysis approach. The languages represent those elements and relations necessary to define accident scenarios and their associated early warning signs. The three DSMLs were incorporated into a prototype software editor that enables safety scientists and practitioners to create and edit hazard and early warning analysis models in a usable manner and, as a result, to generate executable code automatically. This research demonstrates that DSM technologies can be used to develop a set of three DSMLs which allow users to conduct hazard and early warning analysis in a more usable manner. Furthermore, the three DSMLs and their dedicated editor, which are presented in this thesis, may provide a significant enhancement to the process of creating the risk knowledge element of computer-based early warning systems.
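
The sketch below illustrates the general model-to-code idea behind Domain Specific Modeling: an assumed toy metamodel (a hazard with causal factors and early warning signs carrying thresholds) is captured in plain data classes, and a generator emits executable monitoring code from the model. All class, field and function names here are hypothetical; the three DSMLs and the editor developed in the thesis are far richer than this.

```python
# Generate executable monitoring code from a tiny, hypothetical hazard model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class WarningSign:
    name: str
    threshold: float

@dataclass
class Hazard:
    name: str
    causal_factors: List[str] = field(default_factory=list)
    warning_signs: List[WarningSign] = field(default_factory=list)

def generate_monitor(hazard: Hazard) -> str:
    """Emit Python source that checks each early warning sign against its threshold."""
    lines = [f"def monitor_{hazard.name}(readings):",
             f"    # causal factors: {', '.join(hazard.causal_factors)}",
             "    alerts = []"]
    for sign in hazard.warning_signs:
        lines.append(f"    if readings.get('{sign.name}', 0) > {sign.threshold}:")
        lines.append(f"        alerts.append('{sign.name}')")
    lines.append("    return alerts")
    return "\n".join(lines)

model = Hazard("overpressure",
               causal_factors=["valve_failure", "sensor_drift"],
               warning_signs=[WarningSign("tank_pressure", 8.5),
                              WarningSign("relief_valve_cycles", 3)])
source = generate_monitor(model)
print(source)                                  # inspect the generated code

namespace = {}
exec(source, namespace)                        # load the generated monitor
print(namespace["monitor_overpressure"]({"tank_pressure": 9.0}))   # -> ['tank_pressure']
```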

Relevance:

30.00%

Publisher:

Abstract:

Choosing the right or the best option is often a demanding and challenging task for the user (e.g., a customer in an online retailer) when there are many available alternatives. In fact, the user rarely knows which offering will provide the highest value. To reduce the complexity of the choice process, automated recommender systems generate personalized recommendations. These recommendations take into account the preferences collected from the user in an explicit (e.g., letting users express their opinion about items) or implicit (e.g., studying some behavioral features) way. Such systems are widespread; research indicates that they increase customer satisfaction and lead to higher sales. Preference handling is one of the core issues in the design of every recommender system. This kind of system often aims at guiding users in a personalized way to interesting or useful options in a large space of possible options. Therefore, it is important for them to capture and model the user's preferences as accurately as possible. In this thesis, we develop a comparative preference-based user model to represent the user's preferences in conversational recommender systems. This type of user model allows the recommender system to capture several preference nuances from the user's feedback. We show that, when applied to conversational recommender systems, the comparative preference-based model is able to guide the user towards the best option while the system is interacting with her. We empirically test and validate the suitability and the practical computational aspects of the comparative preference-based user model and the related preference relations by comparing them to a sum-of-weights-based user model and its preference relations. Product configuration, meeting scheduling and the construction of autonomous agents are among several artificial intelligence tasks that involve a process of constrained optimization, that is, optimization of behavior or options subject to given constraints and with regard to a set of preferences. When solving a constrained optimization problem, pruning techniques, such as the branch and bound technique, aim to direct the search towards the best assignments, thus allowing the bounding functions to prune more branches in the search tree. Several constrained optimization problems exhibit dominance relations. These dominance relations can be particularly useful in constrained optimization problems as they can suggest new ways (rules) of pruning non-optimal solutions. Such pruning methods can achieve dramatic reductions in the search space while looking for optimal solutions. A number of constrained optimization problems can model the user's preferences using comparative preferences. In this thesis, we develop a set of pruning rules used in the branch and bound technique to efficiently solve this kind of optimization problem. More specifically, we show how to generate newly defined pruning rules from a dominance algorithm that refers to a set of comparative preferences. These rules include pruning approaches (and combinations of them) which can drastically prune the search space. They mainly reduce the number of (expensive) pairwise comparisons performed during the search while guiding constrained optimization algorithms to find optimal solutions. Our experimental results show that the pruning rules we have developed, and their different combinations, have varying impact on the performance of the branch and bound technique.
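
For readers unfamiliar with the mechanism these pruning rules plug into, here is a minimal branch and bound sketch on a toy 0/1 selection problem: a partial assignment is abandoned whenever an optimistic bound shows it cannot beat the incumbent. The dominance-based rules derived from comparative preferences in the thesis would augment or replace this simple bound test; the toy objective and data are assumptions for illustration only.

```python
# Branch and bound over 0/1 assignments with a simple bounding-function prune.
def branch_and_bound(weights, values, capacity):
    n = len(weights)
    best = {"value": 0, "assignment": None}

    def optimistic_bound(i, value_so_far):
        # admissible bound: pretend every remaining item can still be taken
        return value_so_far + sum(values[i:])

    def search(i, weight, value, assignment):
        if weight > capacity:
            return                                   # infeasible branch
        if optimistic_bound(i, value) <= best["value"]:
            return                                   # pruned: cannot beat incumbent
        if i == n:
            best["value"], best["assignment"] = value, assignment
            return
        search(i + 1, weight + weights[i], value + values[i], assignment + [1])  # take item i
        search(i + 1, weight, value, assignment + [0])                            # skip item i

    search(0, 0, 0, [])
    return best

print(branch_and_bound(weights=[3, 4, 2], values=[5, 6, 3], capacity=6))
# -> {'value': 9, 'assignment': [0, 1, 1]}
```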

Relevance:

30.00%

Publisher:

Abstract:

Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are ubiquitously used in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both algorithmic and architectural standpoints. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is shown to outperform Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
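
A minimal sketch of evaluation-based Reed-Solomon encoding over a prime field is given below: the codeword is the evaluation, at n distinct points, of the unique degree < k polynomial passing through the message symbols, so any k intact symbols recover the message by Lagrange interpolation (erasure-only recovery, no error correction). The field size, code parameters and systematic construction are assumptions for illustration; the hardware-oriented encoder and decoder architectures of the thesis are not reproduced here.

```python
# Evaluation-based RS encoding over GF(P) with erasure recovery by Lagrange interpolation.
P = 929  # a small prime, chosen only for illustration

def lagrange_eval(points, x, p=P):
    """Value at x of the unique polynomial through the given (xi, yi) points, mod p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def rs_encode(message, n, p=P):
    """Systematic evaluation encoding: symbol x is the value at x of the degree < k
    polynomial through (0, m0), ..., (k-1, m_{k-1}); the first k symbols are the message."""
    base = list(enumerate(message))
    return [lagrange_eval(base, x, p) for x in range(n)]

def rs_recover(received, k, p=P):
    """Recover the message from any k intact (position, symbol) pairs."""
    return [lagrange_eval(received[:k], x, p) for x in range(k)]

msg = [3, 1, 4, 1]                          # k = 4 message symbols
code = rs_encode(msg, n=7)                  # n = 7 codeword symbols, minimum distance n - k + 1 = 4
survivors = list(zip(range(7), code))[2:6]  # any 4 of the 7 symbols suffice
print(rs_recover(survivors, k=4))           # -> [3, 1, 4, 1]
```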

Relevance:

30.00%

Publisher:

Abstract:

There are difficulties with utilising self-report and physiological measures of assessment amongst forensic populations. This study investigates implicit measures amongst sexual offenders, non-sexual offenders and low-risk samples. Implicit measurement is a term applied to measurement methods that make it difficult to influence responses through conscious control. The test battery includes the Implicit Association Test (IAT), Rapid Serial Visual Presentation (RSVP), Viewing Time (VT) and the Structured Clinical Interview for Disorders (SCID). The IAT proposes that people will perform better on a task when they depend on well-practiced cognitive associations. The RSVP task requires participants to identify a single target image that is presented amongst a series of rapidly presented visual images. RSVP operates on the premise that if two target images are presented within 500 milliseconds of each other, the possibility that the participant will recognize the second target is significantly reduced when the first target is of salience to the individual. This is the attentional blink phenomenon. VT is based on the principle that people will look longer at images that are of salience to them. Results showed that on the VT task, child sexual offenders took longer to view images of children than low-risk groups. Nude images induced a greater attentional blink than clothed images amongst low-risk and offending samples on the RSVP task. Sexual offenders took longer than low-risk groups on IAT word-pairing tasks where sexual words were paired with adult words. The SCID highlighted differences between the offending and non-offending groups on the subscales for personality disorders. More erotic stimulus items on the VT and RSVP measures are recommended to better differentiate sexual preference between offending and non-offending samples. A pictorial IAT is recommended. Findings provide the basis for further development of implicit measures within the assessment of sexual offenders.

Relevance:

30.00%

Publisher:

Abstract:

In developing a biosensor, the most important aspects to emphasise are the specificity and selectivity of the transducer. These two vital prerequisites are of paramount importance in ensuring a robust and reliable biosensor. Improvements in electrochemical sensors can be achieved by using microelectrodes and by modifying the electrode surface (using chemical or biological recognition layers to improve the sensitivity and selectivity). The fabrication and characterisation of silicon-based and glass-based gold microelectrode arrays with various geometries (band and disc) and dimensions (ranging from 10 μm to 100 nm) are reported. It was found that silicon-based transducers with a 10 μm gold microelectrode array exhibited the most stable and reproducible electrochemical measurements, hence this dimension was selected for further study. Chemical modification of both the 10 μm microband and microdisc arrays was found viable, by electro-assisted self-assembled sol-gel silica film deposition and nanoporous-gold electrodeposition respectively. The fabrication and characterisation of an on-chip electrochemical cell is also reported, with a fixed diameter/width dimension and varying interspacing. In this regard, the 10 μm microelectrode array with an interspacing distance of 100 μm exhibited the best electrochemical response. Surface functionalisation of single-chip planar gold macroelectrodes was also studied for the immobilisation of histidine-tagged protein and antibody. Imaging techniques such as atomic force microscopy, fluorescence microscopy or scanning electron microscopy were employed to complement the electrochemical characterisations. A long-chain thiol self-assembled monolayer with NTA-metal ligand coordination was selected for the histidine-tagged protein, while a silanisation technique was selected for the antibody immobilisation. The final part of the thesis describes the development of a label-free T-2 immunosensor using an impedimetric approach. Good antibody calibration curves were obtained for both the 10 μm microband and the 10 μm microdisc arrays. For the establishment of the T-2/HT-2 toxin calibration curve, it was found that a larger microdisc array dimension was required to produce a better calibration curve. The calibration curves established in buffer solution show that the microelectrode arrays were sensitive and able to detect levels of T-2/HT-2 toxin as low as 25 ppb (25 μg kg-1), with a limit of quantitation of 4.89 ppb for the 10 μm microband array and 1.53 ppb for the 40 μm microdisc array.

Relevance:

30.00%

Publisher:

Abstract:

The work presented in this thesis describes the development of low-cost sensing and separation devices with electrochemical detection for health applications. This research employs macro-, micro- and nanotechnology. The first sensing device developed was a toner-based microdevice. The initial development of microfluidic devices was based on glass or quartz devices that are often expensive to fabricate; however, the introduction of new types of materials, such as plastics, offered a new way for fast prototyping and the development of disposable devices. One such microfluidic device is based on the lamination of laser-printed polyester films using a computer, printer and laminator. The resulting toner-based microchips demonstrated potential viability for chemical assays coupled with several detection methods, particularly Chip-Electrophoresis-Chemiluminescence (CE-CL) detection, which had not previously been reported in the literature. Following on from the toner-based microchip, a three-electrode micro-configuration was developed on an acetate substrate. This is the first time that a micro-electrode configuration made from gold, silver and platinum has been fabricated onto acetate by means of patterning and deposition techniques using the central fabrication facilities in Tyndall National Institute. These electrodes have been designed to facilitate the integration of a 3-electrode configuration as part of the fabrication process. Since the electrodes are on acetate, the dicing step can automatically be eliminated. The stability of these sensors has been investigated using electrochemical techniques with excellent outcomes. Following on from the generalised testing of the electrodes, these sensors were coupled with capillary electrophoresis. The final sensing devices were on a macro scale and involved the modification of screen-printed electrodes. Screen-printed electrodes (SPEs) are generally seen as far less sensitive than more expensive electrodes, including gold, boron-doped diamond and glassy carbon electrodes. To enhance the sensitivity of these electrodes, they were treated with metal nanoparticles of gold and palladium. Following on from this, another modification was introduced: the carbonaceous material carbon monolith was drop-cast onto the SPE and the metal nanoparticles were then electrodeposited onto the monolith material.

Relevance:

30.00%

Publisher:

Abstract:

Quantitative analysis of penetrative deformation in sedimentary rocks of fold and thrust belts has largely been carried out using clast-based strain analysis techniques. These methods analyse the geometric deviations from an original state that populations of clasts, or strain markers, have undergone. The characterisation of these geometric changes, or strain, in the early stages of rock deformation is not entirely straightforward. This is due in part to the paucity of information on the original state of the strain markers, but also to the uncertainty in the relative rheological properties of the strain markers and their matrix during deformation, as well as to the interaction of two competing fabrics, such as bedding and cleavage. Furthermore, one of the largest setbacks for accurate strain analysis has been the methods themselves: they are traditionally time-consuming and labour-intensive, and results can vary between users. A suite of semi-automated techniques has been tested and found to work very well, but in low-strain environments the problems discussed above persist. Additionally, these techniques have been compared with Anisotropy of Magnetic Susceptibility (AMS) analysis, a particularly sensitive tool for the characterisation of low strain in sedimentary lithologies.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Outcome assessment can support the therapeutic process by providing a way to track symptoms and functionality over time, providing insights to clinicians and patients, as well as offering a common language to discuss patient behavior/functioning. OBJECTIVES: In this article, we examine the patient-based outcome assessment (PBOA) instruments that have been used to determine outcomes in acupuncture clinical research and highlight measures that are feasible, practical, economical, reliable, valid, and responsive to clinical change. The aims of this review were to assess and identify the commonly available PBOA measures, describe a framework for identifying appropriate sets of measures, and address the challenges associated with these measures and acupuncture. Instruments were evaluated in terms of feasibility, practicality, economy, reliability, validity, and responsiveness to clinical change. METHODS: This study was a systematic review. A total of 582 abstracts were reviewed using PubMed (from inception through April 2009). RESULTS: A total of 582 citations were identified. After screening of title/abstract, 212 articles were excluded. From the remaining 370 citations, 258 manuscripts identified explicit PBOA; 112 abstracts did not include any PBOA. The five most common PBOA instruments identified were the Visual Analog Scale, Symptom Diary, Numerical Pain Rating Scales, SF-36, and depression scales such as the Beck Depression Inventory. CONCLUSIONS: The way a questionnaire or scale is administered can have an effect on the outcome. Also, developing and validating outcome measures can be costly and difficult. Therefore, reviewing the literature on existing measures before creating or modifying PBOA instruments can significantly reduce the burden of developing a new measure.

Relevance:

30.00%

Publisher:

Abstract:

The opera ION serves as my Doctoral Dissertation at the University of Maryland School of Music. The librettist of the opera is Nick Olcott, Opera Assistant Director at the University. My interest in this little-known play of Euripides began with my work with Professor Lillian Doherty of the University's Classics Department. Since I am fluent in Greek, I was able to read the play in the original, becoming aware of nuances of meaning absent in the standard English translations. Professor Leon Major, Artistic Director of the University's Opera Studio, was enthusiastic about the choice of this play as the basis for an opera, and has been very generous with his time in showing me what must be done to turn a play into an opera. ION is my first complete stage work for voices and constitutes an ambitious project. The opera is scored for a small chamber orchestra consisting of saxophone, percussion (many types) and piano, together with a small chorus of six singers and five soloists. An orchestra of this size is adequate for the plot, and also provides support for various new vocal techniques, alternating between singing and speaking, as well as traditional arias. In ION, I incorporate Greek folk elements, which I know first-hand from my Balkan background, as well as contemporary techniques which I have absorbed during my graduate work at Boston University and the University of Maryland. Euripides' ION has fascinated me for two reasons in particular: its connection with the founding myth of Athens, and the suggestiveness of its plot, which turns on the relationship of parents to children. In my interpretation, the leading character Ion is seen as emblematic of today's teenagers. Using the setting of the classic play, I hope to create a modern transformation of a myth, not simply to retell it. To this end, I hope a new operatic form will arise, as valid for our times as those of Verdi and Wagner were for theirs.

Relevance:

30.00%

Publisher:

Abstract:

Protein engineering over the past four years has made rhodopsin-based genetically encoded voltage indicators a leading candidate to achieve the task of reporting action potentials from a population of genetically targeted neurons in vivo. Rational design and large-scale screening efforts have steadily improved the dynamic range and kinetics of the rhodopsin voltage-sensing domain, and coupling these rhodopsins to bright fluorescent proteins has supported bright fluorescence readout of the large and rapid rhodopsin voltage response. The rhodopsin-fluorescent protein fusions have the highest achieved signal-to-noise ratios for detecting action potentials in neuronal cultures to date, and have successfully reported single spike events in vivo. Given the rapid pace of current development, the genetically encoded voltage indicator class is nearing the goal of robust spike imaging during live-animal behavioral experiments.

Relevance:

30.00%

Publisher:

Abstract:

Much of the contemporary concert (i.e. “classical”) saxophone literature has connections to compositional styles found in other genres like jazz, rock, or pop. Although improvisation exists as a dominant compositional device in jazz, improvisation as a performance technique is not confined to a single genre. This study looks at twelve concert saxophone pieces that are grouped into three primary categories of compositional techniques: 1) those containing unmeasured phrases, 2) those containing limited relation to improvisation but a close relationship to jazz styles, and 3) those containing jazz improvisation. In concert saxophone music, specific crossover pieces use the compositional technique of jazz improvisation. Four examples of such jazz works were composed by Dexter Morrill, Phil Woods, Bill Dobbins, and Ramon Ricker, all of which provide a foundation for this study. In addition, pieces containing varying degrees of unmeasured phrases are highlighted. As this dissertation project is based in performance, the twelve pieces were divided into three recitals that summarize a pedagogical sequence. Any concert saxophonist interested in developing jazz improvisational skills can use the pieces in this study as a method to progress toward the performance of pieces that merge jazz improvisation with the concert format. The three compositional techniques examined here will provide the performer with the necessary material to develop this individualized approach to improvisation. Specific compositional and performance techniques vary depending on the stylistic content: this study examines improvisation in the context of concert saxophone repertoire.