35 results for Branch and bound algorithms
Abstract:
An appreciation of the quantity of streamflow derived from the main hydrological pathways involved in transporting diffuse contaminants is critical when addressing a wide range of water resource management issues. In order to assess hydrological pathway contributions to streams, it is necessary to provide feasible upper and lower bounds for flows in each pathway. An important first step in this process is to provide reliable estimates of the slower responding groundwater pathways and subsequently the quicker overland and interflow pathways. This paper investigates the effectiveness of a multi-faceted approach applying different hydrograph separation techniques, supplemented by lumped hydrological modelling, for calculating the Baseflow Index (BFI), with a view to developing an integrated approach to hydrograph separation. A semi-distributed, lumped and deterministic rainfall runoff model known as NAM has been applied to ten catchments (ranging from 5 to 699 km²). While this modelling approach is useful as a validation method, NAM itself is also an important tool for investigation. The separation techniques produce a large variation in BFI: a difference of 0.741 in the BFI predicted for one catchment when the less reliable fixed-interval, sliding-interval and local-minimum turning point methods are included. This variation is reduced to 0.167 when these methods are omitted. The Boughton and Eckhardt algorithms, while quite subjective in their use, provide quick and easily implemented approaches for obtaining physically realistic hydrograph separations. It is observed that, while the different separation techniques give varying BFI values for each of the catchments, a recharge coefficient approach developed in Ireland, when applied in conjunction with the Master Recession Curve Tabulation method, predicts estimates in agreement with those obtained using the NAM model, and these estimates are also consistent with the study catchments’ geology. These two separation methods, in conjunction with the NAM model, were selected to form an integrated approach to assessing BFI in catchments.
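The Eckhardt algorithm mentioned above is a recursive digital filter that is straightforward to implement. The sketch below shows the commonly published one-parameter-family form and a BFI computed from the separated baseflow; the parameter values (recession constant and maximum BFI) and the synthetic flow series are illustrative only and are not taken from the study.

```python
import numpy as np

def eckhardt_baseflow(q, alpha=0.98, bfi_max=0.80):
    """Eckhardt-type recursive digital filter for baseflow separation.

    q       : array of total streamflow
    alpha   : recession constant (illustrative default)
    bfi_max : maximum attainable baseflow index (illustrative default)
    Returns the estimated baseflow series, constrained so baseflow <= total flow.
    """
    q = np.asarray(q, dtype=float)
    b = np.empty_like(q)
    b[0] = q[0]  # simple initialisation; other choices are possible
    for t in range(1, len(q)):
        b[t] = ((1 - bfi_max) * alpha * b[t - 1]
                + (1 - alpha) * bfi_max * q[t]) / (1 - alpha * bfi_max)
        b[t] = min(b[t], q[t])          # baseflow cannot exceed total flow
    return b

def baseflow_index(q, **kwargs):
    """BFI = volume of separated baseflow / total streamflow volume."""
    b = eckhardt_baseflow(q, **kwargs)
    return b.sum() / np.asarray(q, dtype=float).sum()

if __name__ == "__main__":
    # Synthetic daily flows (arbitrary units), purely for demonstration.
    rng = np.random.default_rng(1)
    flows = 2.0 + np.convolve(rng.exponential(1.0, 365), np.ones(5) / 5, "same")
    print(f"BFI ~ {baseflow_index(flows):.2f}")
```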
Abstract:
PURPOSE: To evaluate the sensitivity and specificity of the screening mode of the Humphrey-Welch Allyn frequency-doubling technology (FDT), Octopus tendency-oriented perimetry (TOP), and the Humphrey Swedish Interactive Threshold Algorithm (SITA)-fast (HSF) in patients with glaucoma. DESIGN: A comparative consecutive case series. METHODS: This was a prospective study which took place in the glaucoma unit of an academic department of ophthalmology. One eye of 70 consecutive glaucoma patients and 28 age-matched normal subjects was studied. Eyes were examined with the program C-20 of FDT, G1-TOP, and 24-2 HSF in one visit and in random order. The gold standard for glaucoma was presence of a typical glaucomatous optic disk appearance on stereoscopic examination, which was judged by a glaucoma expert. The sensitivity and specificity, positive and negative predictive value, and receiver operating characteristic (ROC) curves of two algorithms for the FDT screening test, two algorithms for TOP, and three algorithms for HSF, as defined before the start of this study, were evaluated. The time required for each test was also analyzed. RESULTS: Values for area under the ROC curve ranged from 82.5% to 93.9%. The largest area (93.9%) under the ROC curve was obtained with the FDT criteria, defining abnormality as presence of at least one abnormal location. Mean test time was 1.08 ± 0.28 minutes, 2.31 ± 0.28 minutes, and 4.14 ± 0.57 minutes for the FDT, TOP, and HSF, respectively. The difference in testing time was statistically significant (P <.0001). CONCLUSIONS: The C-20 FDT, G1-TOP, and 24-2 HSF appear to be useful tools to diagnose glaucoma. The C-20 FDT and G1-TOP tests take approximately one quarter and one half, respectively, of the time taken by 24-2 HSF. © 2002 by Elsevier Science Inc. All rights reserved.
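For readers unfamiliar with the diagnostic metrics reported above, the short sketch below computes sensitivity and specificity for a single abnormality criterion against a gold-standard classification. The arrays and the ">= 1 abnormal location" outcome are fabricated for illustration and do not reproduce the study data.

```python
import numpy as np

def sensitivity_specificity(test_positive, has_glaucoma):
    """Contingency-table sensitivity and specificity for one screening
    criterion; the 'gold standard' here plays the role of the expert
    optic-disc assessment used in the study."""
    tp = np.sum(test_positive & has_glaucoma)
    fn = np.sum(~test_positive & has_glaucoma)
    tn = np.sum(~test_positive & ~has_glaucoma)
    fp = np.sum(test_positive & ~has_glaucoma)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative data only: 70 glaucomatous eyes followed by 28 normal eyes.
truth = np.array([True] * 70 + [False] * 28)
flagged = truth.copy()        # hypothetical ">= 1 abnormal location" outcome
flagged[:5] = False           # a few missed cases
flagged[70:73] = True         # a few false alarms
sens, spec = sensitivity_specificity(flagged, truth)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```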
Abstract:
Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results that show that even Ω(n) (n is the number of nodes in the network) is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious). To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously, and that D and n were not known).
We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms should work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm. An O(D) time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.
An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting). We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
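The algorithms in the paper are message-optimal; as a point of reference only, the toy simulation below illustrates the implicit leader election task itself by flooding a random rank over a synchronous network (a textbook scheme using O(mD) messages, not the paper's O(m) bound). The graph generator and round count are arbitrary choices.

```python
import random
import networkx as nx   # assumed available; used only to model the network graph

def implicit_leader_election(graph, rounds):
    """Toy synchronous simulation: every node draws a random rank and the
    maximum rank is flooded for `rounds` rounds. Afterwards exactly the node
    holding the global maximum recognises itself as leader (implicit election).
    Included only to illustrate the problem setting, not the paper's algorithm."""
    rank = {v: (random.getrandbits(64), v) for v in graph}   # ties broken by node id
    best = dict(rank)                                        # best rank seen so far
    for _ in range(rounds):
        inbox = {v: [best[u] for u in graph[v]] for v in graph}  # synchronous exchange
        for v in graph:
            best[v] = max([best[v]] + inbox[v])
    return [v for v in graph if best[v] == rank[v]]          # exactly one node

G = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=7)
print("leader:", implicit_leader_election(G, rounds=nx.diameter(G)))
```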
Abstract:
Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This article focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results, showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks, make the above bounds somewhat less obvious). To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was also required that nodes may not wake up spontaneously and that D and n were not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm. An O(D) time leader election algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.
An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
Abstract:
Background: Late-onset Alzheimer's disease (AD) is heritable with 20 genes showing genome-wide association in the International Genomics of Alzheimer's Project (IGAP). To identify the biology underlying the disease, we extended these genetic data in a pathway analysis.
Methods: The ALIGATOR and GSEA algorithms were used in the IGAP data to identify associated functional pathways and correlated gene expression networks in human brain.
Results: ALIGATOR identified an excess of curated biological pathways showing enrichment of association. Enriched areas of biology included the immune response (P = 3.27 × 10^-12 after multiple testing correction for pathways), regulation of endocytosis (P = 1.31 × 10^-11), cholesterol transport (P = 2.96 × 10^-9), and proteasome-ubiquitin activity (P = 1.34 × 10^-6). Correlated gene expression analysis identified four significant network modules, all related to the immune response (corrected P = .002-.05).
Conclusions: The immune response, regulation of endocytosis, cholesterol transport, and protein ubiquitination represent prime targets for AD therapeutics. (C) 2015 Published by Elsevier Inc. on behalf of The Alzheimer's Association.
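As a simplified illustration of pathway enrichment testing of the kind performed by ALIGATOR, the sketch below computes a one-sided hypergeometric P value for over-representation of association "hits" in a gene set. The gene counts are invented, and the real methods additionally correct for gene size, linkage disequilibrium and multiple testing.

```python
from scipy.stats import hypergeom   # assumed available

def pathway_enrichment_p(n_genome, n_pathway, n_hits_total, n_hits_in_pathway):
    """One-sided hypergeometric P value: probability of observing at least
    `n_hits_in_pathway` association 'hits' inside a pathway of `n_pathway`
    genes, given `n_hits_total` hits among `n_genome` genes overall."""
    return hypergeom.sf(n_hits_in_pathway - 1, n_genome, n_hits_total, n_pathway)

# Illustrative numbers only (not taken from IGAP):
print(pathway_enrichment_p(n_genome=20000, n_pathway=250,
                           n_hits_total=600, n_hits_in_pathway=25))
```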
Abstract:
In recent years, a wide variety of centralised and decentralised algorithms have been proposed for residential charging of electric vehicles (EVs). In this paper, we present a mathematical framework which casts the EV charging scenarios addressed by these algorithms as optimisation problems having either temporal or instantaneous optimisation objectives with respect to the different actors in the power system. Using this framework and a realistic distribution network simulation testbed, we provide a comparative evaluation of a range of different residential EV charging strategies, highlighting in each case positive and negative characteristics.
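A minimal sketch of one temporal charging objective of the kind compared in such studies is a greedy "valley-filling" heuristic: each EV places its required energy into the least-loaded slots of its availability window. The load profile, energy demands and rate limits below are hypothetical, and the paper evaluates a range of centralised and decentralised strategies rather than this particular heuristic.

```python
import numpy as np

def valley_fill_charging(base_load, energy_needed, max_rate, arrive, depart):
    """Greedy valley-filling sketch. Assumes unit-length time slots so that
    per-slot power and energy coincide numerically. Each EV repeatedly adds
    charge to the currently least-loaded slot of its window, subject to its
    own rate limit, until its energy requirement is met."""
    load = np.asarray(base_load, dtype=float).copy()
    schedules = []
    for e, r, a, d in zip(energy_needed, max_rate, arrive, depart):
        plan = np.zeros_like(load)
        remaining = e
        while remaining > 1e-9:
            window = np.arange(a, d)
            candidates = window[plan[window] < r]     # slots below this EV's rate limit
            slot = candidates[np.argmin(load[candidates])]
            inc = min(r - plan[slot], remaining)
            plan[slot] += inc
            load[slot] += inc
            remaining -= inc
        schedules.append(plan)
    return load, schedules

# Hypothetical 12-slot household base load (kW) and two EVs.
base = np.array([3, 3, 4, 6, 8, 9, 7, 5, 4, 3, 2, 2], dtype=float)
total, plans = valley_fill_charging(base, energy_needed=[6, 4],
                                    max_rate=[3, 3], arrive=[0, 2], depart=[12, 10])
print("combined load profile:", total)
```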
Abstract:
Motivated by the need for designing efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of what nodes join and leave and at what time and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees with high probability the maintenance of a constant degree graph with high expansion even under continuous high adversarial churn. Our protocol can tolerate a churn rate of up to O(n/polylog(n)) per round (where n is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only O(polylog(n)) overhead for topology maintenance: only polylogarithmic (in n) bits need to be processed and sent by each node per round and any node's computation cost per round is also polylogarithmic. The given protocol is a fundamental ingredient that is needed for the design of efficient fully-distributed algorithms for solving fundamental distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks and enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
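The sketch below is not the paper's protocol; it is a centralised toy model that replaces departed nodes with newcomers attached to uniformly sampled peers and then checks expansion via the second-smallest Laplacian eigenvalue, which is the kind of quantity a maintenance protocol aims to keep bounded away from zero. The network size, degree and churn rate are arbitrary.

```python
import random
import numpy as np
import networkx as nx   # assumed available

def churn_step(G, churn, degree):
    """Toy churn model: `churn` nodes leave and as many join; each newcomer
    connects to `degree` peers sampled uniformly at random (a centralised
    stand-in for the distributed sampling a real protocol would use)."""
    leaving = random.sample(list(G.nodes), churn)
    G.remove_nodes_from(leaving)
    next_id = max(G.nodes) + 1
    for i in range(churn):
        v = next_id + i
        G.add_node(v)
        for u in random.sample([w for w in G.nodes if w != v], degree):
            G.add_edge(v, u)
    return G

def algebraic_connectivity(G):
    """Second-smallest Laplacian eigenvalue; for bounded-degree graphs,
    keeping it bounded away from zero certifies good expansion."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return float(np.sort(np.linalg.eigvalsh(L))[1])

random.seed(0)
G = nx.random_regular_graph(8, 200, seed=0)
for _ in range(20):
    G = churn_step(G, churn=10, degree=8)
print("algebraic connectivity after churn:", round(algebraic_connectivity(G), 3))
```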
Abstract:
Nearly 4000 people died in Northern Ireland’s long-running conflict, 314 of them police officers (Brewer and Magee 1991, Brewer 1996, Hennessey 1999, Guelke and Milton-Edwards 2000). The republican and loyalist ceasefires of 1994 were the first significant signal that NI society was moving beyond the ‘troubles’ and towards a normalised political environment. The Belfast (Good Friday) Agreement of 1998 cemented that movement (Hennessey 1999). Policing was a key and seemingly unresolvable element of the conflict, seen as unrepresentative and partisan. Its reform or ‘recasting’ in a new dispensation was an integral part of the conflict transformation endeavour (Ellison 2010). As one of the most controversial elements of the conflicted past, it had remained outside the Agreement and was subject to a specific commission of interest (1999), generally known as the Patten Commission. The Commission’s far-reaching proposals included a change of name, badge and uniform, the introduction of 50/50 recruitment (50% Roman Catholic and 50% other), a new focus on human rights, a new district command and headquarter structure, a review of ‘Special Branch’ and covert techniques, a concern for ‘policing with the community’ and a significant voluntary severance process to make room for new recruits, unconnected with the past history of the organisation (Murphy 2013).
This paper reflects upon the first data collection phase of a long-term processual study of organisational change within the Royal Ulster Constabulary / Police Service of Northern Ireland. This phase (1996-2002) covers early organisational change initiation (including the pre-change period) and implementation, including the instigation of symbolic changes (name, badge, and crest) and structural changes (new HQ structure and District Command structure). It utilises internal documentation, including messages from the organisation’s leaders, interviews with forty key informants (identified through a combination of snowballing from referrals by initial contacts, and key interviews with significant individuals), as well as external documentation and commentary on public perceptions of the change. Using a processual lens (Langley, Smallman et al. 2013) it seeks to understand this initial change phase and its relative success in a highly politicised environment.
By engaging key individuals internally and externally, setting up a dedicated change team, adopting a non-normative, non-urgent, calming approach to dissent, communicating in orthodox and unorthodox ways with members, acknowledging the huge emotional strain of letting go of the organisation’s name and all it embodied, and re-emphasising the role of officers as ‘police first’, rather than ‘RUC first’, the organisation’s leadership remained in control of a volatile and unhappy organisational body and succeeded in moving it on through this initial phase, even while much of the political establishment lambasted them externally. Three years into this change process the organisation had a new name, a new crest, and new structures and procedures, and was deeply engaged in embedding the joint principles of human rights and community policing within its re-woven fabric. While significant problems remained, the new Police Service of Northern Ireland had successfully begun a long journey to full community acceptance in a post-conflict context.
This case illustrates the significant challenges of leading change under political pressure, with external oversight and no space for failure (Hannah, Uhl-Bien et al. 2009). It empirically reflects the reality of change implementation as messy, disruptive and unpredictable and highlights the significance of political skill and contextual understanding to success in the early stages (Buchanan and Boddy 1992). The implications of this for change theory and the practice of change implementation are explored (Eisenhardt and Graebner 2007) and some conclusions drawn about what such an extreme case tells us about change generally and change implementation under pressure.
Abstract:
Purpose
The Strengths and Difficulties Questionnaire (SDQ) is a behavioural screening tool for children. The SDQ is increasingly used as the primary outcome measure in population health interventions involving children, but it is not preference based; therefore, its role in allocative economic evaluation is limited. The Child Health Utility 9D (CHU9D) is a generic preference-based health-related quality-of-life measure. This study investigates the applicability of the SDQ outcome measure for use in economic evaluations and examines its relationship with the CHU9D by testing previously published mapping algorithms. The aim of the paper is to explore the feasibility of using the SDQ within economic evaluations of school-based population health interventions.
Methods
Data were available from children participating in a cluster randomised controlled trial of the school-based Roots of Empathy programme in Northern Ireland. Utility was calculated using the original and alternative CHU9D tariffs along with two SDQ mapping algorithms. t tests were performed for pairwise differences in utility values from the preference-based tariffs and mapping algorithms.
Results
Mean (standard deviation) SDQ total difficulties and prosocial scores were 12 (3.2) and 8.3 (2.1). Utility values obtained from the original tariff, alternative tariff, and mapping algorithms using five and three SDQ subscales were 0.84 (0.11), 0.80 (0.13), 0.84 (0.05), and 0.83 (0.04), respectively. Each method for calculating utility produced statistically significantly different values, except for the original tariff and the five-subscale SDQ mapping algorithm.
Conclusion
Initial evidence suggests the SDQ and CHU9D are related in some of their measurement properties. The mapping algorithm using five SDQ subscales was found to be optimal in predicting mean child health utility. Future research valuing changes in SDQ scores would build on this work.
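A published SDQ-to-CHU9D mapping algorithm of the kind tested here is essentially a linear function of the SDQ subscale scores. The sketch below shows only the form of such a mapping; the coefficients are hypothetical placeholders and would have to be replaced with the published values before any real use.

```python
def chu9d_utility_from_sdq(emotional, conduct, hyperactivity, peer, prosocial,
                           coef=None):
    """Linear mapping from the five SDQ subscale scores to a CHU9D utility.
    The default coefficients are HYPOTHETICAL placeholders used purely to
    illustrate the structure of a mapping algorithm."""
    if coef is None:
        coef = {"const": 0.95, "emotional": -0.010, "conduct": -0.008,
                "hyperactivity": -0.006, "peer": -0.009, "prosocial": 0.004}
    u = (coef["const"]
         + coef["emotional"] * emotional
         + coef["conduct"] * conduct
         + coef["hyperactivity"] * hyperactivity
         + coef["peer"] * peer
         + coef["prosocial"] * prosocial)
    return max(min(u, 1.0), 0.0)   # keep the utility on [0, 1] for illustration

# Example subscale scores (invented):
print(chu9d_utility_from_sdq(emotional=3, conduct=2, hyperactivity=4,
                             peer=3, prosocial=8))
```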
Abstract:
This paper provides a summary of our studies on robust speech recognition based on a new statistical approach – the probabilistic union model. We consider speech recognition given that part of the acoustic features may be corrupted by noise. The union model is a method for basing the recognition on the clean part of the features, thereby reducing the effect of the noise on recognition. In this respect, the union model is similar to the missing feature method. However, the two methods achieve this end through different routes. The missing feature method usually requires the identity of the noisy data for noise removal, while the union model combines the local features based on the union of random events, to reduce the dependence of the model on information about the noise. We previously investigated the applications of the union model to speech recognition involving unknown partial corruption in frequency band, in time duration, and in feature streams. Additionally, a combination of the union model with conventional noise-reduction techniques was studied, as a means of dealing with a mixture of known or trainable noise and unknown unexpected noise. In this paper, we provide a unified review of each of these applications in the context of dealing with unknown partial feature corruption, giving the appropriate theory and implementation algorithms, along with an experimental evaluation.
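The combination rule at the heart of the union model can be illustrated in a few lines: rather than multiplying all sub-band likelihoods, an order-m union sums the products over all subsets that leave out m bands, so a single badly corrupted band no longer dominates the score. The sketch below is a simplified, unnormalised version of that idea with made-up likelihood values; it omits the normalisation and log-domain details of the full model.

```python
from itertools import combinations
from math import prod

def union_score(band_likelihoods, order):
    """Combine per-band likelihoods assuming up to `order` bands may be
    corrupted: sum the products over all subsets that exclude `order` bands.
    order = 0 reduces to the usual product (all bands assumed clean)."""
    keep = len(band_likelihoods) - order
    return sum(prod(subset) for subset in combinations(band_likelihoods, keep))

# Toy example: band 2 of the correct model is heavily corrupted by noise.
# The product rule (order 0) is dragged down; the order-1 union is not.
clean_model = [0.8, 0.001, 0.7, 0.9]
wrong_model = [0.3, 0.3, 0.3, 0.3]
for order in (0, 1):
    better = union_score(clean_model, order) > union_score(wrong_model, order)
    print(f"order {order}: correct model wins -> {better}")
```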
Abstract:
Nineteen B-type stars, selected from the Palomar-Green Survey, have been observed at infrared wavelengths to search for possible infrared excesses, as part of an ongoing programme to investigate the nature of early-type stars at high Galactic latitudes. The resulting infrared fluxes, along with Strömgren photometry, are compared with theoretical flux profiles to determine whether any of the targets show evidence of circumstellar material, which may be indicative of post-main-sequence evolution. Eighteen of the targets have flux distributions in good agreement with theoretical predictions. However, one star, PG 2120+062, shows a small near-infrared excess, which may be due either to a cool companion of spectral type F5-F7, or to circumstellar material, indicating that it may be an evolved object such as a post-asymptotic giant branch star, in the transition region between the asymptotic giant branch and planetary nebula phase, with the infrared excess due to recent mass loss during giant branch evolution.
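A crude stand-in for the flux-profile comparison described above is to scale a blackbody at the star's effective temperature to a photospheric reference band and inspect the ratio of observed to predicted flux at longer wavelengths. The temperature, bands and flux values in the sketch are illustrative; the paper uses full theoretical flux profiles together with Strömgren photometry rather than a single blackbody.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck_nu(nu, T):
    """Planck function B_nu(T)."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def infrared_excess_ratios(wavelengths_um, observed_flux, T_eff, fit_band=0):
    """Scale a blackbody of temperature T_eff to the shortest-wavelength
    (photospheric) point and return observed/predicted flux ratios; ratios
    well above 1 in the near-infrared hint at a cool companion or dust."""
    nu = C / (np.asarray(wavelengths_um) * 1e-6)
    model = planck_nu(nu, T_eff)
    scale = observed_flux[fit_band] / model[fit_band]
    return np.asarray(observed_flux) / (scale * model)

# Illustrative J, H, K fluxes (arbitrary units) for a ~15000 K star:
print(infrared_excess_ratios([1.25, 1.65, 2.2],
                             np.array([1.00, 0.55, 0.40]), T_eff=15000.0))
```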
Abstract:
In this paper we concentrate on the direct semi-blind spatial equalizer design for MIMO systems with Rayleigh fading channels. Our aim is to develop an algorithm which can outperform the classical training-based method with the same training information used, and avoid the problems of low convergence speed and local minima due to pure blind methods. A general semi-blind cost function is first constructed which incorporates both the training information from the known data and some kind of higher order statistics (HOS) from the unknown sequence. Then, based on the developed cost function, we propose two semi-blind iterative and adaptive algorithms to find the desired spatial equalizer. To further improve the performance and convergence speed of the proposed adaptive method, we propose a technique to find the optimal choice of step size. Simulation results demonstrate the performance of the proposed algorithms in comparison with existing schemes.
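One common way to build a semi-blind cost of the kind described above is to mix a training mean-square-error term with a constant-modulus (HOS-style) blind term and descend its gradient. The sketch below does that for a single output stream; the constant-modulus choice, the fixed step size and the toy single-source channel are assumptions for illustration, not the paper's exact cost or its optimal step-size rule.

```python
import numpy as np

def semi_blind_equalizer(Y, train_x, train_y, lam=0.5, mu=5e-3, iters=3000, R2=1.0):
    """Gradient-descent sketch of a semi-blind spatial equalizer for one stream.
    Cost: J(w) = (1-lam)*E|x_t - w^H y_t|^2 + lam*E(|w^H y|^2 - R2)^2.
    Y: (n_rx, N) received block; train_y: (n_rx, Nt); train_x: (Nt,) pilots."""
    rng = np.random.default_rng(0)
    w = (rng.standard_normal(Y.shape[0]) + 1j * rng.standard_normal(Y.shape[0])) / 2
    for _ in range(iters):
        e_t = train_x - w.conj() @ train_y                     # pilot errors
        grad_t = -(train_y @ e_t.conj()) / train_y.shape[1]    # training-term gradient
        z = w.conj() @ Y                                       # blind outputs
        grad_b = (Y @ (z.conj() * (np.abs(z) ** 2 - R2))) / Y.shape[1]  # CM gradient
        w = w - mu * ((1 - lam) * grad_t + lam * grad_b)
    return w

# Toy demo: 3 receive antennas, one unit-modulus (QPSK-like) source.
rng = np.random.default_rng(1)
N, Nt = 400, 40
s = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, N))
h = (rng.standard_normal((3, 1)) + 1j * rng.standard_normal((3, 1))) / np.sqrt(2)
Y = h @ s[None, :] + 0.05 * (rng.standard_normal((3, N)) + 1j * rng.standard_normal((3, N)))
w = semi_blind_equalizer(Y, s[:Nt], Y[:, :Nt])
z = w.conj() @ Y
print("output/source correlation:",
      round(abs(np.vdot(z, s)) / (np.linalg.norm(z) * np.linalg.norm(s)), 3))
```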
Abstract:
This paper outlines how the immediate life support (ILS) course was incorporated into an undergraduate nursing curriculum in a university in Northern Ireland. It also reports on how the students perceived the impact of this course on their clinical practice. The aim was to develop the students’ ability to recognise the acutely ill patient and to determine the relevance of this to clinical practice. Prior to this, the ILS course was only available to qualified nurses, and this paper reports on the first time students were provided with an ILS course in an undergraduate setting. The ILS course was delivered to 89 third-year nursing students (Adult Branch) and comprised one full teaching day per week over two weeks. Recognised Advanced Life Support (ALS) instructors, in keeping with the United Kingdom Resuscitation Council guidelines, taught the students. Participants completed a 17-item questionnaire which included an open-ended section for student comment. Questionnaire data were analysed descriptively using SPSS version 15.0. Open-ended responses were analysed by content and thematic analysis. Results: Student feedback reported that the ILS course helped them understand what constituted the acutely ill patient and the role of the nurse in managing a deteriorating situation. Students also reported that they valued the experience as highlighting gaps in their knowledge. Conclusion: The inclusion of the ILS course provides students with the necessary skills to assess and manage the deteriorating patient. In addition, the data from this study suggest the ILS course should be delivered in an inter-professional setting, i.e. taught jointly with medical students.
Abstract:
We study the predictability of a theoretical model for earthquakes, using a pattern recognition algorithm similar to the CN and M8 algorithms known in seismology. The model, which is a stochastic spring-block model with both global correlation and local interaction, becomes more predictable as the strength of the global correlation or the local interaction is increased.
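For readers unfamiliar with spring-block models, the sketch below simulates a standard Olami-Feder-Christensen-style cellular automaton with slow uniform driving, threshold rupture and local stress redistribution, and records avalanche ("earthquake") sizes. The global correlation term and the CN/M8-style pattern-recognition predictor studied in the paper are not reproduced here; the lattice size and dissipation parameter are arbitrary.

```python
import numpy as np

def ofc_like_avalanches(L=32, alpha=0.2, steps=2000, seed=0):
    """OFC-style spring-block automaton: drive the most-loaded block to the
    rupture threshold, reset ruptured blocks to zero, and pass a fraction
    `alpha` of the released stress to each of the four neighbours.
    Returns the sequence of avalanche sizes."""
    rng = np.random.default_rng(seed)
    F = rng.uniform(0, 1, (L, L))               # initial stress field
    sizes = []
    for _ in range(steps):
        F += 1.0 - F.max()                      # uniform slow drive to threshold
        unstable = np.argwhere(F >= 1.0)
        size = 0
        while len(unstable):
            for i, j in unstable:
                f, F[i, j] = F[i, j], 0.0       # rupture: drop stress to zero
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < L and 0 <= nj < L:
                        F[ni, nj] += alpha * f  # local redistribution
            unstable = np.argwhere(F >= 1.0)
        sizes.append(size)
    return np.asarray(sizes)

sizes = ofc_like_avalanches()
print("largest avalanche:", sizes.max(), "mean size:", round(float(sizes.mean()), 2))
```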
Abstract:
Many scientific applications are programmed using hybrid programming models that use both message passing and shared memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared memory or message passing, in isolation. The potential solution space, and thus the challenge, increases substantially when optimizing hybrid models, since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74 percent on average and up to 13.8 percent) with some performance gain (up to 7.5 percent) or negligible performance loss.
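The core idea of prediction-driven DCT/DVFS selection can be sketched in a few lines: fit simple regression models of execution time and power over profiled (thread count, frequency) configurations, then pick the configuration with the lowest predicted energy subject to a performance constraint. The model form, profiling numbers and the 10 percent slowdown bound below are illustrative assumptions, not the paper's predictive models.

```python
import numpy as np

# Hypothetical profiled measurements: (threads, GHz) -> (time in s, power in W).
configs = np.array([[4, 2.4], [4, 1.8], [8, 2.4], [8, 1.8], [16, 2.4]])
times   = np.array([145.0, 186.7, 82.5, 103.3, 51.3])
powers  = np.array([109.1, 77.9, 178.2, 117.7, 316.5])

def fit(X, y, feats):
    """Least-squares fit of y against a small set of hand-chosen features."""
    A = np.array([feats(t, f) for t, f in X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda t, f: float(np.asarray(feats(t, f)) @ coef)

time_feats  = lambda t, f: [1.0, 1.0 / (t * f)]   # serial part + parallel/(threads*GHz)
power_feats = lambda t, f: [1.0, t * f ** 2]      # idle power + dynamic ~ threads*GHz^2
pred_time  = fit(configs, times, time_feats)
pred_power = fit(configs, powers, power_feats)

# Candidate DCT/DVFS configurations; allow at most 10% predicted slowdown
# relative to the all-threads, top-frequency configuration.
candidates = [(t, f) for t in (4, 8, 16) for f in (1.8, 2.1, 2.4)]
baseline = pred_time(16, 2.4)
feasible = [c for c in candidates if pred_time(*c) <= 1.10 * baseline]
best = min(feasible, key=lambda c: pred_time(*c) * pred_power(*c))  # energy = P * T
print("selected (threads, GHz):", best)
```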