992 results for automated dose dispensing service
Abstract:
The INTAMAP FP6 project has developed an interoperable framework for real-time automatic mapping of critical environmental variables by extending spatial statistical methods and employing open, web-based data exchange protocols and visualisation tools. This paper gives an overview of the underlying problem and of the project, discusses which problems it has solved, and identifies the open problems that seem most relevant to address next. The interpolation problem that INTAMAP solves is the generic problem of spatial interpolation of environmental variables without user interaction, based on measurements of e.g. PM10, rainfall or gamma dose rate, at arbitrary locations or over a regular grid covering the area of interest. It deals with problems of varying spatial resolution of measurements, with the interpolation of averages over larger areas, and with providing information on the interpolation error to the end user. In addition, monitoring network optimisation is addressed in a non-automatic context.
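As a rough illustration of the generic problem, the sketch below interpolates hypothetical point measurements onto a regular grid using inverse-distance weighting, a deliberately simple stand-in for the model-based (e.g. kriging) interpolation that INTAMAP automates; all locations, values and parameters are invented for illustration.

```python
import numpy as np

def idw_interpolate(obs_xy, obs_val, grid_xy, power=2.0):
    """Inverse-distance-weighted prediction at grid points.

    A simple stand-in for the automatic model-based interpolation
    (e.g. kriging) that INTAMAP performs without user interaction.
    """
    # pairwise distances, shape (n_grid, n_obs)
    d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # avoid division by zero at data points
    w = d ** -power
    return (w * obs_val).sum(axis=1) / w.sum(axis=1)

# hypothetical gamma dose rate measurements at arbitrary locations
obs_xy = np.array([[0.1, 0.2], [0.8, 0.3], [0.5, 0.9]])
obs_val = np.array([95.0, 110.0, 102.0])   # nSv/h

# regular grid covering the area of interest
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
predicted = idw_interpolate(obs_xy, obs_val, grid_xy)
```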
Abstract:
Background: Coronary heart disease (CHD) is a public health priority in the UK. The National Service Framework (NSF) has set standards for the prevention, diagnosis and treatment of CHD, which include the use of cholesterol-lowering agents aimed at achieving targets of blood total cholesterol (TC) < 5.0 mmol/L and low density lipoprotein-cholesterol (LDL-C) < 3.0 mmol/L. In order to achieve these targets cost effectively, prescribers need to make an informed choice from the range of statins available. Aim: To estimate the average and relative cost effectiveness of atorvastatin, fluvastatin, pravastatin and simvastatin in achieving the NSF LDL-C and TC targets. Design: Model-based economic evaluation. Methods: An economic model was constructed to estimate the number of patients achieving the NSF targets for LDL-C and TC at each dose of statin, and to calculate the average drug cost and incremental drug cost per patient achieving the target levels. The population baseline LDL-C and TC, drug efficacy and drug costs were taken from previously published data. Estimates of the distribution of patients receiving each dose of statin were derived from the UK national DIN-LINK database. Results: The estimated annual drug cost per 1000 patients treated was £289 000 with atorvastatin, £315 000 with simvastatin, £333 000 with pravastatin and £167 000 with fluvastatin. The percentages of patients achieving target were 74.4%, 46.4%, 28.4% and 13.2% for atorvastatin, simvastatin, pravastatin and fluvastatin, respectively. Incremental drug costs per extra patient treated to LDL-C and TC targets compared with fluvastatin were £198 and £226 for atorvastatin, £443 and £567 for simvastatin, and £1089 and £2298 for pravastatin, using 2002 drug costs. Conclusions: As a result of its superior efficacy, atorvastatin generates a favourable cost-effectiveness profile as measured by drug cost per patient treated to LDL-C and TC targets. For a given drug budget, more patients would achieve NSF LDL-C and TC targets with atorvastatin than with any of the other statins examined.
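The incremental figures can be reproduced from the reported costs and target-attainment rates; the sketch below redoes the LDL-C arithmetic using only values taken from the abstract (differences from the published numbers are rounding).

```python
# Annual drug cost per 1000 patients (GBP, 2002) and % reaching the
# LDL-C target, as reported in the abstract.
cost_per_1000 = {"atorvastatin": 289_000, "simvastatin": 315_000,
                 "pravastatin": 333_000, "fluvastatin": 167_000}
pct_to_target = {"atorvastatin": 74.4, "simvastatin": 46.4,
                 "pravastatin": 28.4, "fluvastatin": 13.2}

ref = "fluvastatin"
for drug in ("atorvastatin", "simvastatin", "pravastatin"):
    extra_cost = cost_per_1000[drug] - cost_per_1000[ref]
    extra_patients = (pct_to_target[drug] - pct_to_target[ref]) * 10  # per 1000
    print(f"{drug}: ~£{extra_cost / extra_patients:.0f} per extra patient to target")

# -> ~199, ~446 and ~1092 per extra patient, matching the reported
#    £198, £443 and £1089 to rounding
```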
Abstract:
Purpose: The aims of this study were to develop an algorithm to accurately quantify vigabatrin (VGB)-induced central visual field loss and to investigate the relationship between visual field loss and maximum daily dose, cumulative dose and duration of treatment. Methods: The sample comprised 31 patients (mean age 37.9 years; SD 14.4 years) diagnosed with epilepsy and exposed to VGB. Each participant underwent standard automated static visual field examination of the central visual field. Central visual field loss was determined using continuous scales quantifying severity in terms of area and depth of defect, and additionally by symmetry of defect between the two eyes. A simultaneous multiple regression model was used to explore the relationship between these visual field parameters and the drug predictor variables. Results: The regression model indicated that maximum VGB dose was the only factor significantly correlated with individual eye severity (right eye: p = 0.020; left eye: p = 0.012) and with symmetry of visual field defect (p = 0.024). Conclusions: Maximum daily dose was the single most reliable indicator of which patients are likely to exhibit visual field defects due to VGB. These findings suggest that a high maximum daily dose is more likely to result in visual field defects than a high cumulative dose or a long duration of treatment.
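A minimal sketch of the kind of simultaneous multiple regression described, using statsmodels with entirely hypothetical dose and severity data; the variable names, units and effect sizes are illustrative assumptions, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 31
# hypothetical predictors: max daily dose (g), cumulative dose (g), duration (months)
X = np.column_stack([rng.uniform(1, 4, n),
                     rng.uniform(100, 3000, n),
                     rng.uniform(6, 120, n)])
# toy severity outcome driven mainly by maximum daily dose
severity = 0.8 * X[:, 0] + rng.normal(0, 1, n)

# all predictors entered simultaneously, as in the study's model
model = sm.OLS(severity, sm.add_constant(X)).fit()
print(model.summary(xname=["const", "max_daily_dose",
                           "cumulative_dose", "duration"]))
```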
Abstract:
SMS (Short Message Service) is now a hugely popular and very powerful business communication technology for mobile phones. In order to respond correctly to a free-form factual question given a large collection of texts, one needs to understand the question at a level that allows determining some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought-after answer and may even suggest using different strategies when looking for and verifying a candidate answer. In this paper we focus on various attempts to overcome the central contradiction: the technical limitations of the SMS standard on the one hand, and the huge amount of information found for a possible answer on the other.
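One concrete face of that contradiction is the 160-character payload of a single GSM 7-bit encoded SMS; the sketch below is a hypothetical illustration of fitting a candidate answer into that budget (the paper's own compression strategies are not specified here).

```python
SMS_LIMIT = 160  # character budget of a single GSM 7-bit encoded SMS

def fit_answer_to_sms(answer: str, limit: int = SMS_LIMIT) -> str:
    """Trim a candidate answer to a single SMS, cutting at a word boundary."""
    if len(answer) <= limit:
        return answer
    head = answer[: limit - 3].rsplit(" ", 1)[0]  # leave room for the marker
    return head + "..."

# hypothetical candidate answer retrieved from a large text collection
answer = ("The Eiffel Tower, completed in 1889 as the entrance arch to the "
          "World's Fair, stands 330 metres tall and was for forty years the "
          "tallest man-made structure in the world.")
print(fit_answer_to_sms(answer))
```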
Abstract:
Objectives: To assess the association between the use of medications with anticholinergic activity and the subsequent risk of injurious falls in older adults. Design: Prospective, population-based study using data from The Irish Longitudinal Study on Ageing. Setting: Irish population. Participants: Community-dwelling men and women without dementia aged 65 and older (N = 2,696). Measurements: Self-reported injurious falls reported once approximately 2 years after baseline interview. Self-reported regular medication use at baseline interview. Pharmacy dispensing records from the Irish Health Service Executive Primary Care Reimbursement Service in a subset (n = 1,553). Results: Nine percent of men and 17% of women reported injurious falls. In men, the use of medications with definite anticholinergic activity was associated with greater risk of subsequent injurious falls (adjusted relative risk (aRR) = 2.55, 95% confidence interval (CI) = 1.33-4.88), but the risk of having any fall and the number of falls reported were not significantly greater. Greater anticholinergic burden was associated with greater injurious falls risk. No associations were observed for women. Findings were similar using pharmacy dispensing records. The aRR for medications with definite anticholinergic activity dispensed in the month before baseline and subsequent injurious falls in men was 2.53 (95% CI = 1.15-5.54). Conclusion: The regular use of medications with anticholinergic activity is associated with subsequent injurious falls in older men, although falls were self-reported after a 2-year recall and so may have been underreported. Further research is required to validate this finding in men and to consider the effect of duration and dose of anticholinergic medications.
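The reported aRRs come from adjusted regression models; for intuition only, the sketch below computes a crude (unadjusted) relative risk and 95% confidence interval from a hypothetical 2x2 table, using the standard log-RR standard error.

```python
import math

# hypothetical 2x2 table: exposure = anticholinergic use, outcome = injurious fall
a, b = 20, 80     # exposed: fell / did not fall
c, d = 90, 910    # unexposed: fell / did not fall

rr = (a / (a + b)) / (c / (c + d))
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"crude RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```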
Abstract:
ACM Computing Classification System (1998): D.2.5, D.2.9, D.2.11.
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015.
Abstract:
An Automatic Vehicle Location (AVL) system is a computer-based vehicle tracking system that is capable of determining a vehicle's location in real time. As a major technology of the Advanced Public Transportation System (APTS), AVL systems have been widely deployed by transit agencies for purposes such as real-time operation monitoring, computer-aided dispatching, and arrival time prediction. AVL systems make available a large amount of transit performance data that are valuable for transit performance management and planning purposes. However, the difficulty of extracting useful information from the huge spatial-temporal database has hindered off-line applications of the AVL data. In this study, a data mining process, including data integration, cluster analysis, and multiple regression, is proposed. The AVL-generated data are first integrated into a Geographic Information System (GIS) platform. A model-based cluster method is employed to investigate the spatial and temporal patterns of transit travel speeds, which may be easily translated into travel times. The transit speed variations along route segments are identified. Transit service periods such as morning peak, mid-day, afternoon peak, and evening are determined based on analyses of transit travel speed variations for different times of day. The seasonal patterns of transit performance are investigated using analysis of variance (ANOVA). Travel speed models based on the clustered time-of-day intervals are developed using factors identified as having significant effects on speed during different time-of-day periods. Transit performance was found to vary across seasons and time-of-day periods, and the geographic location of a transit route segment also plays a role in this variation. The results of this research indicate that advanced data mining techniques have good potential for providing automated means of assisting transit agencies in service planning, scheduling, and operations control.
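A minimal sketch of the model-based clustering step, fitting Gaussian mixtures over hypothetical (hour-of-day, speed) AVL records and using BIC to choose the number of service periods; the data, speeds and cluster counts are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# hypothetical AVL records: (hour of day, observed travel speed in mph)
hours = rng.uniform(5, 23, 500)
speeds = np.where((hours > 7) & (hours < 9), rng.normal(12, 2, 500),
          np.where((hours > 16) & (hours < 18), rng.normal(14, 2, 500),
                   rng.normal(22, 3, 500)))
X = np.column_stack([hours, speeds])

# model-based clustering; BIC selects the number of time-of-day periods
best = min((GaussianMixture(k, random_state=0).fit(X) for k in range(2, 7)),
           key=lambda m: m.bic(X))
periods = best.predict(X)
print(best.n_components, "time-of-day clusters found")
```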
Abstract:
This study explores factors related to prompt difficulty in Automated Essay Scoring. The sample comprised 6,924 students. Each student wrote 1-4 essays across 20 different writing prompts, for a total of 20,243 essays. The E-rater® v.2 essay scoring engine developed by the Educational Testing Service was used to score the essays. The scoring engine employs a statistical model that incorporates 10 predictors associated with writing characteristics, of which 8 were used. A Rasch partial credit analysis was applied to the scores to determine the difficulty levels of the prompts. In addition, the scores were used as outcomes in a series of hierarchical linear models (HLM) in which students and prompts constituted the cross-classification levels. This methodology was used to explore the partitioning of the essay score variance. The results indicated significant differences in prompt difficulty levels due to genre: descriptive prompts, as a group, were found to be more difficult than persuasive prompts. In addition, the essay score variance was partitioned between students and prompts. The amount of essay score variance lying between prompts was found to be relatively small (4 to 7 percent). When essay-level, student-level, and prompt-level predictors were included, the model was able to explain almost all of the variance lying between prompts. Since most high-stakes writing assessments use only 1-2 prompts per student, the essay score variance that lies between prompts represents an undesirable or "noise" variation. Identifying factors associated with this "noise" variance may prove important for prompt writing and for constructing Automated Essay Scoring mechanisms that weight prompt difficulty when assigning essay scores.
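For intuition, the sketch below partitions hypothetical essay scores with a simple random-intercept model in which prompts are the only grouping factor; this is a deliberate simplification of the cross-classified student-by-prompt HLM used in the study, with all data simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
# hypothetical essay scores: 20 prompts, 50 essays each
prompts = np.repeat(np.arange(20), 50)
prompt_effect = rng.normal(0, 0.25, 20)[prompts]   # small between-prompt variance
scores = 3.0 + prompt_effect + rng.normal(0, 1.0, prompts.size)
df = pd.DataFrame({"score": scores, "prompt": prompts})

# random-intercept model: score variance split into prompt and residual parts
m = smf.mixedlm("score ~ 1", df, groups=df["prompt"]).fit()
var_prompt = float(m.cov_re.iloc[0, 0])
var_resid = m.scale
print(f"between-prompt share of variance: {var_prompt / (var_prompt + var_resid):.1%}")
```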
Abstract:
Electronic Perception Technology (EPT) enables automated equipment to gain artificial sight, commonly referred to as "machine vision", by employing specialty software and embedded sensors to create a "visual" input field that can be used as a front-end application for transactional behavior. The authors review this new technology and present feasible future applications to the food service industry for enhancing guest services while providing a competitive advantage.
Abstract:
Purpose: To investigate the effect of incorporating a beam spreading parameter in a beam angle optimization algorithm and to evaluate its efficacy for creating coplanar IMRT lung plans in conjunction with machine learning generated dose objectives.
Methods: Fifteen anonymized patient cases were each re-planned with ten values over the range of the beam spreading parameter, k, and analyzed with a Wilcoxon signed-rank test to determine whether any particular value resulted in significant improvement over the initially treated plan created by a trained dosimetrist. Dose constraints were generated by a machine learning algorithm and kept constant for each case across all k values. Parameters investigated for potential improvement included mean lung dose, V20 lung, V40 heart, 80% conformity index, and 90% conformity index.
Results: At a significance level of 5%, treatment plans created with this method resulted in significantly better conformity indices. Dose coverage of the PTV was improved by an average of 12% over the initial plans. At the same time, these treatment plans showed no significant difference in mean lung dose, V20 lung, or V40 heart when compared to the initial plans; however, it should be noted that these results could be influenced by the small sample size of patient cases.
Conclusions: The beam angle optimization algorithm, with the inclusion of the beam spreading parameter k, increases the dose conformity of the automatically generated treatment plans over that of the initial plans without adversely affecting the dose to organs at risk. This parameter can be varied according to physician preference in order to control the tradeoff between dose conformity and OAR sparing without compromising the integrity of the plan.
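A minimal sketch of the paired comparison described in the Methods, with made-up 90% conformity-index values for 15 cases; scipy's wilcoxon stands in for whatever implementation the authors used.

```python
import numpy as np
from scipy.stats import wilcoxon

# hypothetical paired 90% conformity indices for 15 cases:
# initial dosimetrist plan vs. plan from the beam-angle optimizer at one k
ci_initial = np.array([0.62, 0.70, 0.66, 0.71, 0.64, 0.69, 0.73, 0.65,
                       0.68, 0.72, 0.63, 0.67, 0.70, 0.66, 0.69])
ci_optimized = ci_initial + np.array([0.05, 0.03, 0.06, 0.02, 0.07, 0.04,
                                      0.01, 0.06, 0.03, 0.02, 0.05, 0.04,
                                      0.03, 0.05, 0.02])

# paired nonparametric test on the per-case differences
stat, p = wilcoxon(ci_initial, ci_optimized)
print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.4f}")  # significant if p < 0.05
```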
Abstract:
Reputation, influenced by ratings from past clients, is crucial for providers competing for custom. For new providers with little track record, a few negative ratings can harm their chances of growing. In the JASPR project, we aim to ensure that automated reputation assessments are justified and informative. Even an honest, balanced review of a service provision may be an unreliable predictor of future performance if the circumstances differ: a service may previously have relied on different sub-providers than it does now, or been affected by season-specific weather events. A common way to discount ratings that may not reflect future performance is to weight them by recency. We argue that better results are obtained by querying provenance records of how services were provided, and under what circumstances, to determine the significance of past interactions. Informed by case studies in global logistics, taxi hire, and courtesy car leasing, we are going on to explore the generation of explanations for reputation assessments, which can be valuable both for clients and for providers wishing to improve their match to the market, and to apply machine learning to predict aspects of service provision that may influence decisions on the appropriateness of a provider. In this talk, I will give an overview of the research conducted and planned on JASPR.
Speaker biography: Dr Simon Miles is a Reader in Computer Science at King's College London, UK, and head of the Agents and Intelligent Systems group. He conducts research in the areas of normative systems, data provenance, and medical informatics at King's, has published widely, and manages a number of research projects in these areas. He was previously a researcher at the University of Southampton, after graduating with a PhD from Warwick. He has twice been an organising committee member for the Autonomous Agents and Multi-Agent Systems conference series, and was a member of the W3C working group that published standards on interoperable provenance data in 2013.
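For reference, the recency-weighting baseline that the talk argues is insufficient on its own might look like the sketch below; the half-life, rating scale and history are hypothetical.

```python
import math

def recency_weighted_rating(ratings, half_life_days=90.0):
    """Exponentially decayed mean rating: the common baseline that
    provenance-aware significance weighting aims to improve on."""
    num = den = 0.0
    for rating, age_days in ratings:
        w = math.exp(-math.log(2) * age_days / half_life_days)
        num += w * rating
        den += w
    return num / den

# hypothetical (rating in [0, 5], age in days) pairs for one provider
history = [(5.0, 10), (4.0, 30), (1.0, 300), (2.0, 400)]
print(f"{recency_weighted_rating(history):.2f}")  # recent good service dominates
```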
Abstract:
One of the main unresolved questions in science is how non-living matter became alive, in a process known as abiogenesis, which aims to explain how, starting from a primordial-soup scenario containing simple molecules and following a "bottom-up" approach, complex biomolecules emerged and formed the first living system, known as a protocell. A protocell is defined by the interplay of three sub-systems which are considered requirements for life: information molecules, metabolism, and compartmentalization. This thesis investigates the role of compartmentalization during the emergence of life, how simple membrane aggregates could evolve into entities able to develop "life-like" behaviours, and in particular how such evolution could happen without the presence of information molecules. Our ultimate objective is to create an autonomous evolvable system. To do so we try to engineer life following a "top-down" approach: an initial platform capable of evolving chemistry is constructed, with the chemistry dependent on the robotic adjunct, and the platform is then de-constructed in iterative steps until it is fully disconnected from the evolvable system, at which point the system is inherently autonomous.

The first project of this thesis describes how the initial platform was designed and built. The platform was based on the model of a standard liquid-handling robot, the main difference from other similar robots being that we used a 3D printer to prototype the robot and build its main equipment, such as the liquid dispensing system, tool movement mechanism, and washing procedures. The robot was able to mix different components and create populations of droplets in a Petri dish filled with an aqueous phase. The Petri dish was then observed by a camera, which analysed the behaviours exhibited by the droplets and fed this information back to the robot. Using this loop, the robot was able to implement an evolutionary algorithm in which populations of droplets were evolved towards defined life-like behaviours.

The second project of this thesis aimed to remove as many mechanical parts as possible from the robot while keeping the evolvable chemistry intact. To do so, we encapsulated the functionalities of the previous liquid-handling robot into a single monolithic 3D-printed device. This device was able to mix different components and generate populations of droplets in an aqueous phase, and was also equipped with a camera to analyse the experiments. Moreover, because the devices were fabricated entirely by 3D printing, we were also able to alter the experimental arena by adding different obstacles among which to evolve the droplets, enabling us to study how environmental changes can shape evolution. In doing so, we embodied evolutionary characteristics into our device, removed constraints from the physical platform, and took one step further towards a possible autonomous evolvable system.
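A minimal sketch of the kind of evolutionary loop the platform implements, with a placeholder fitness function standing in for the camera-based behaviour analysis; the population size, mutation scheme and all numbers are illustrative assumptions rather than the thesis's actual algorithm.

```python
import random

def mutate(formulation, sigma=0.05):
    """Perturb component ratios, then renormalise so they still sum to 1."""
    perturbed = [max(0.0, x + random.gauss(0, sigma)) for x in formulation]
    total = sum(perturbed)
    return [x / total for x in perturbed]

def evolve(fitness, n_components=4, pop_size=20, generations=50):
    """Elitist loop: make droplet formulations, score their observed
    behaviour (fitness stands in for the camera analysis), keep the
    best quarter, and mutate elites to refill the population."""
    pop = [[1.0 / n_components] * n_components for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[:pop_size // 4]
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

# placeholder fitness: stands in for droplet motility measured by the camera
best = evolve(lambda f: -sum((x - 0.25) ** 2 for x in f))
print(best)
```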