178 results for intelligent algorithms
Abstract:
Rationale, aims and objectives: This study aimed to determine the value of using a mix of clinical pharmacy data and routine hospital admission spell data in the development of predictive algorithms. Exploration of risk factors in hospitalized patients, together with the targeting strategies devised, will enable the prioritization of clinical pharmacy services to optimize patient outcomes.
Methods: Predictive algorithms were developed in a series of detailed steps using a 75% sample of integrated medicines management (IMM) patients and validated using the remaining 25%. IMM patients receive targeted clinical pharmacy input throughout their hospital stay. The algorithms were applied to the validation sample, and a predicted risk probability was generated for each patient from the coefficients. Risk thresholds for the algorithms were determined by identifying the cut-off points of the risk scores at which the algorithms would have the highest discriminative performance. Clinical pharmacy staffing levels were obtained from the pharmacy department staffing database.
Results: The numbers of previous emergency admissions and of admission medicines, together with age-adjusted co-morbidity and diuretic receipt, formed a 12-month post-discharge mortality and/or readmission risk algorithm. Age-adjusted co-morbidity proved to be the best index for predicting mortality. Increased clinical pharmacy staffing at ward level was correlated with a reduction in the risk-adjusted mortality index (RAMI).
Conclusions: The algorithms created were valid in predicting the risk of in-hospital and post-discharge mortality and the risk of hospital readmission 3, 6 and 12 months post-discharge. The provision of ward-based clinical pharmacy services is a key component in reducing RAMI and in enabling the full benefits of pharmacy input to patient care to be realized.
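The risk-scoring and threshold-selection steps described in the Methods can be sketched generically. This is an illustrative Python sketch only, not the study's model: the coefficients below are hypothetical placeholders, and Youden's J is assumed as the measure of discriminative performance.

```python
import math

# Hypothetical coefficients (NOT from the study), for illustration only:
# intercept, previous emergency admissions, admission medicines,
# age-adjusted co-morbidity index, diuretic receipt (0/1).
COEFFS = {"intercept": -2.0, "prev_admissions": 0.35,
          "admission_meds": 0.10, "comorbidity": 0.25, "diuretic": 0.40}

def predicted_risk(patient):
    """Predicted risk probability from a logistic model's coefficients."""
    z = COEFFS["intercept"] + sum(COEFFS[k] * patient[k] for k in patient)
    return 1.0 / (1.0 + math.exp(-z))

def best_threshold(scores, outcomes):
    """Cut-off maximising Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = 0.0, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, outcomes) if s >= t and y)
        fn = sum(1 for s, y in zip(scores, outcomes) if s < t and y)
        tn = sum(1 for s, y in zip(scores, outcomes) if s < t and not y)
        fp = sum(1 for s, y in zip(scores, outcomes) if s >= t and not y)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        if sens + spec - 1 > best_j:
            best_t, best_j = t, sens + spec - 1
    return best_t
```

Each candidate cut-off is scanned and the one with the best combined sensitivity and specificity is kept, matching the abstract's description of choosing the score with the highest discriminative performance.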
Abstract:
As an important type of spatial keyword query, the m-closest keywords (mCK) query finds a group of objects such that they cover all query keywords and have the smallest diameter, which is defined as the largest distance between any pair of objects in the group. The query is useful in many applications such as detecting locations of web resources. However, the existing work does not study the intractability of this problem and only provides exact algorithms, which are computationally expensive.
In this paper, we prove that the problem of answering mCK queries is NP-hard. We first devise a greedy algorithm that has an approximation ratio of 2. Then, we observe that an mCK query can be approximately answered by finding the circle with the smallest diameter that encloses a group of objects together covering all query keywords. We prove that the group enclosed in the circle can answer the mCK query with an approximation ratio of 2/√3. Based on this, we develop an algorithm for finding such a circle exactly, which has a high time complexity. To improve efficiency, we propose two further algorithms that find such a circle approximately, with a ratio of 2/√3 + ε. Finally, we propose an exact algorithm that utilizes the group found by the (2/√3 + ε)-approximation algorithm to obtain the optimal group. We conduct extensive experiments using real-life datasets. The experimental results offer insights into both the efficiency and the accuracy of the proposed approximation algorithms, and they also demonstrate that our exact algorithm outperforms the best known algorithm by an order of magnitude.
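A greedy 2-approximation of the kind described above can be sketched as follows: anchor a candidate group at each object in turn and, for every query keyword, add the anchor's nearest object carrying that keyword. When the anchor happens to lie in the optimal group, every pick is within the optimal diameter of it, so the returned diameter is at most twice optimal. A hedged Python sketch, with hypothetical coordinates and keyword sets:

```python
import math
from itertools import combinations

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def diameter(points):
    """Largest pairwise distance in a group (0 for a single location)."""
    return max((dist(p, q) for p, q in combinations(points, 2)), default=0.0)

def greedy_mck(objects, query):
    """Greedy 2-approximation sketch for the mCK query.
    objects: list of ((x, y), {keywords}) pairs."""
    best, best_d = None, float("inf")
    for loc, _ in objects:                      # anchor the group here
        group = []
        for kw in query:                        # cover each keyword greedily
            carriers = [o for o in objects if kw in o[1]]
            if not carriers:
                return None                     # keyword cannot be covered
            group.append(min(carriers, key=lambda o: dist(loc, o[0])))
        d = diameter([g[0] for g in group])
        if d < best_d:
            best, best_d = group, d
    return best
```

The exact and circle-based algorithms in the paper are considerably more involved; this sketch only illustrates the ratio-2 baseline.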
Abstract:
CCTV (closed-circuit television) systems are broadly deployed in the present world. To ensure a timely reaction in intelligent surveillance, determining the gender of people of interest is a fundamental task for real-world applications. However, typical video algorithms for gender profiling (usually face profiling) have three drawbacks. First, the profiling result is always uncertain. Second, the profiling result is not stable: the degree of certainty usually varies over time, sometimes even to the extent that a male is classified as a female, and vice versa. Third, for a robust profiling result in cases where a person's face is not visible, other features, such as body shape, are required. These algorithms may provide different recognition results, and at the very least they will provide different degrees of certainty. To overcome these problems, in this paper we introduce a Dempster-Shafer (DS) evidential approach that makes use of profiling results from multiple algorithms over a period of time; in particular, Denoeux's cautious rule is applied to fuse mass functions along the time line. Experiments show that this approach provides better results than single profiling results and classic fusion results. Furthermore, we find that if severe mis-classification occurs at the beginning of the time line, the combination can yield undesirable results. To remedy this weakness, we further propose three extensions to the evidential approach, incorporating the notions of time-window, time-attenuation, and time-discounting, respectively. These extensions also apply Denoeux's rule along the time line and include the DS approach as a special case. Experiments show that these three extensions provide better results than their predecessor when mis-classifications occur.
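The fusion of gender-profiling verdicts can be illustrated with mass functions over the frame {male, female}. The toy Python sketch below uses the plain (unnormalised) conjunctive combination as a simplification; the paper's Denoeux cautious rule additionally works on conjunctive weights and takes their minimum to cope with non-independent sources, which is not reproduced here. The mass values are invented for illustration.

```python
from itertools import product

FRAME = frozenset({"male", "female"})

def conjunctive(m1, m2):
    """Unnormalised conjunctive combination of two mass functions.
    Mass functions are dicts mapping frozenset focal elements to masses;
    mass landing on the empty set measures conflict between the sources."""
    out = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        out[a & b] = out.get(a & b, 0.0) + ma * mb
    return out

# Two illustrative verdicts: a fairly confident "male" reading and a
# weaker, conflicting "female" reading from another algorithm or frame.
m1 = {frozenset({"male"}): 0.7, FRAME: 0.3}
m2 = {frozenset({"female"}): 0.4, FRAME: 0.6}
fused = conjunctive(m1, m2)
```

After fusion, most of the mass still supports "male" (0.7 × 0.6 = 0.42), while the mass on the empty set (0.7 × 0.4 = 0.28) quantifies the disagreement between the two sources.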
Abstract:
The ability of an agent to make quick, rational decisions in an uncertain environment is paramount for its applicability in realistic settings. Markov Decision Processes (MDPs) provide such a framework, but can only model uncertainty that can be expressed as probabilities. Possibilistic counterparts of MDPs allow imprecise beliefs to be modelled, yet they cannot accurately represent probabilistic sources of uncertainty and they lack the efficient online solvers found in the probabilistic MDP community. In this paper we advance the state of the art in three important ways. Firstly, we propose the first online planner for possibilistic MDPs by adapting the Monte-Carlo Tree Search (MCTS) algorithm. A key component is the development of efficient search structures to sample possibility distributions based on the DPY transformation introduced by Dubois, Prade, and Yager. Secondly, we introduce a hybrid MDP model that allows us to express both possibilistic and probabilistic uncertainty, where the hybrid model is a proper extension of both probabilistic and possibilistic MDPs. Thirdly, we demonstrate that MCTS algorithms can readily be applied to solve such hybrid models.
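The DPY transformation mentioned above turns a possibility distribution into a probability distribution that MCTS rollouts can sample from: with states ordered by decreasing possibility π₁ ≥ … ≥ πₙ (and πₙ₊₁ = 0), state i receives probability Σⱼ≥ᵢ (πⱼ − πⱼ₊₁)/j. A minimal Python sketch with invented state names; the paper's efficient search structures are not reproduced here.

```python
import random

def dpy_transform(poss):
    """DPY transformation of a normalised possibility distribution
    (dict state -> possibility, max value 1) into a probability dict."""
    items = sorted(poss.items(), key=lambda kv: kv[1], reverse=True)
    vals = [v for _, v in items] + [0.0]          # pad with pi_{n+1} = 0
    return {state: sum((vals[j] - vals[j + 1]) / (j + 1)
                       for j in range(i, len(items)))
            for i, (state, _) in enumerate(items)}

# Sample an outcome for a rollout (illustrative possibility values).
p = dpy_transform({"safe": 1.0, "risky": 0.5, "fatal": 0.2})
outcome = random.choices(list(p), weights=list(p.values()))[0]
```

The transformation preserves the ordering of the possibility degrees, so the most plausible outcome is also the most probable one in the sampled distribution.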
Abstract:
Purpose
The Strengths and Difficulties Questionnaire (SDQ) is a behavioural screening tool for children. The SDQ is increasingly used as the primary outcome measure in population health interventions involving children, but it is not preference based; therefore, its role in allocative economic evaluation is limited. The Child Health Utility 9D (CHU9D) is a generic preference-based health-related quality-of-life measure. This study investigates the applicability of the SDQ outcome measure for use in economic evaluations and examines its relationship with the CHU9D by testing previously published mapping algorithms. The aim of the paper is to explore the feasibility of using the SDQ within economic evaluations of school-based population health interventions.
Methods
Data were available from children participating in a cluster randomised controlled trial of the school-based Roots of Empathy programme in Northern Ireland. Utility was calculated using the original and alternative CHU9D tariffs, along with two SDQ mapping algorithms. t tests were performed for pairwise differences in utility values from the preference-based tariffs and mapping algorithms.
Results
Mean (standard deviation) SDQ total difficulties and prosocial scores were 12.0 (3.2) and 8.3 (2.1), respectively. Utility values obtained from the original tariff, alternative tariff, and mapping algorithms using five and three SDQ subscales were 0.84 (0.11), 0.80 (0.13), 0.84 (0.05), and 0.83 (0.04), respectively. Each method for calculating utility produced statistically significantly different values, except the original tariff and the five-subscale SDQ algorithm.
Conclusion
Initial evidence suggests the SDQ and CHU9D are related in some of their measurement properties. The mapping algorithm using five SDQ subscales was found to be optimal in predicting mean child health utility. Future research valuing changes in SDQ scores would build on these findings.
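The pairwise comparisons of utility values reported above can be sketched with a paired t statistic computed with the Python standard library. The utility values below are invented illustrative numbers, not data from the trial.

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic for two utility series on the same children:
    mean of the per-child differences over its standard error."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Illustrative (made-up) utilities for five children: one tariff
# versus one mapped-SDQ algorithm.
tariff = [0.86, 0.84, 0.82, 0.85, 0.83]
mapped = [0.84, 0.83, 0.84, 0.83, 0.84]
t_stat = paired_t(tariff, mapped)
```

A paired (rather than independent-samples) test is the natural choice here because both utility values are derived for the same child, so the per-child differences are the quantity of interest.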
Abstract:
Bridge weigh-in-motion (B-WIM) is a system that uses strain sensors mounted on a bridge to calculate the weights of the trucks passing overhead; it requires accurate axle location and speed information for effective performance. The success of a B-WIM system depends upon the accuracy of the axle detection method. It is widely recognised that any form of axle detector on the road surface is not ideal for B-WIM applications, as it can cause disruption to the traffic (Ojio & Yamada 2002; Zhao et al. 2005; Chatterjee et al. 2006). Sensors under the bridge, that is, Nothing-on-Road (NOR) B-WIM, can perform axle detection via data acquisition systems that detect a peak in strain as the axle passes. The method is often successful, although not all bridges are suitable for NOR B-WIM due to limitations of the system. Significant research has been carried out to further develop the method and the NOR algorithms, but beam-and-slab bridges with deep beams still present a challenge. With these bridges, the slabs are used for axle detection, but peaks in the slab strains are sensitive to the transverse position of wheels on the beam. This next-generation B-WIM research project extends the current B-WIM algorithm to the problem of axle detection and safety, thus overcoming the existing limitations of current state-of-the-art technology. In this paper, alternative strategies for axle detection were determined using Finite Element analysis, and the findings were then tested in the field. The site selected for testing was in Loughbrickland, Northern Ireland, along the A1 corridor connecting the cities of Belfast and Dublin. The structure is on a central route through the island of Ireland and has a high traffic volume, which made it an optimum location for the study.
A further benefit of the chosen location was its close proximity to a self-operated weigh station. To determine the accuracy of the proposed B-WIM system and to develop a knowledge base of the traffic load on the structure, a pavement WIM system was also installed on the northbound lane on the approach to the structure. The bridge structure selected for this B-WIM research comprises 27 pre-cast prestressed concrete Y4-beams and a cast in-situ concrete deck. The structure, a newly constructed integral bridge, spans 19 m and has a skew angle of 22.7°.
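At its simplest, NOR axle detection as described above amounts to finding peaks in the under-bridge strain record and converting peak spacings to axle spacings using the vehicle speed. A toy Python sketch, in which the threshold, sample rate, and speed are hypothetical and the project's FE-informed sensor placement is not modelled:

```python
def detect_axles(strain, threshold, min_gap):
    """Naive axle detection: local maxima in a strain record that exceed
    a threshold and are at least min_gap samples apart."""
    peaks = []
    for i in range(1, len(strain) - 1):
        if (strain[i] >= threshold
                and strain[i] >= strain[i - 1]
                and strain[i] > strain[i + 1]
                and (not peaks or i - peaks[-1] >= min_gap)):
            peaks.append(i)
    return peaks

def axle_spacings(peaks, sample_rate_hz, speed_m_s):
    """Convert peak sample indices to axle spacings in metres."""
    return [(b - a) / sample_rate_hz * speed_m_s
            for a, b in zip(peaks, peaks[1:])]
```

Real B-WIM processing must also cope with the transverse-position sensitivity noted above, which is precisely what simple thresholding like this cannot do on its own.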
Abstract:
Field programmable gate array (FPGA) technology is a powerful platform for implementing computationally complex digital signal processing (DSP) systems. Applications that are multi-modal, however, are conventionally designed for worst-case conditions. In this paper, genetic sequencing techniques are applied to give a more sophisticated decomposition of the algorithmic variations, allowing a unified hardware architecture that gives a 10-25% area saving and a 15% power saving for a digital radar receiver.
Abstract:
This paper presents an approach to developing an intelligent digital mock-up (DMU) through the integration of the design and manufacturing disciplines, to enable a better understanding of assembly-related issues during design evolution. The intelligent DMU will contain tolerance information related to manufacturing capabilities, so it can be used as a source for assembly simulations of realistic models to support manufacturing decision making within the design domain related to tolerance build-ups. A literature review of the contributing research areas is presented, from which the need for an intelligent DMU is identified. The proposed methodology, including the applications of cellular modelling and the potential features of the intelligent DMU, is presented and explained. Finally, a conclusion examines the work to date and the future work required to achieve an intelligent DMU.