18 results for tree-free paper
in Aston University Research Archive
Abstract:
Free Paper Sessions. Design. Retrospective analysis. Purpose. To assess the prevalence of center-involving diabetic macular oedema (CIDMO) and associated risk factors. Methods. Retrospective review of patients who were screen positive for maculopathy (M1) during 2010 in East and North Birmingham. CIDMO was diagnosed by qualitative identification of definite foveal oedema on optical coherence tomography (OCT). Results. Out of a total of 15,234 patients screened, 1194 (7.8%) were screen positive for M1 (64% bilateral). A total of 137 (11.5% of M1s) were diagnosed with macular oedema after clinical assessment. OCT results were available for 123/137; 69 (56.1%) of these had CIDMO (30 bilateral), which is 0.5% of total screens and 5.8% of those screen positive for M1. Of those with CIDMO, 60.9% were male and 63.8% Caucasian; 90% had type 2 diabetes and mean diabetes duration was 20 years (SD 9.7, range 2-48). Mean HbA1c was 8.34±1.69%, with 25% having an HbA1c ≥9%. Furthermore, 62% were on insulin, 67% were on antihypertensive therapy, and 64% were on a cholesterol-lowering drug. A total of 37.7% had an eGFR between 30 and 60, and 5.8% had an eGFR <30. The only significant difference between the CIDMO and non-CIDMO groups was mean age (67.83±12.26 vs 59.69±15.82; p=0.002). A total of 65.2% of those with CIDMO also had proliferative or preproliferative retinopathy in the worst eye, and 68.1% had been treated with macular laser by the time of data review. Conclusions. The results show that the prevalence of CIDMO in our diabetic population was 0.5%. A significant proportion of macular oedema patients were found to have type 2 diabetes with long disease duration, suboptimal glycemic and hypertensive control, and low eGFR. The data support medical and diabetic review of CIDMO patients, particularly in the substantial number with poor glycemic control and where intravitreal therapies are indicated.
Abstract:
Free paper session. INTRODUCTION. Microaneurysms and haemorrhages within the macula area are a poor predictor of macular oedema as shown by optical coherence tomography (OCT). Our research suggests that it is safe and cost-effective to screen patients who present with these surrogate markers annually. PURPOSE. To determine whether microaneurysms (ma) and haemorrhages (hm) within one optic disc diameter of the fovea (ma/hm<1DD) are significant predictors of macular oedema. METHODS. Data were collected over a one-year period from patients attending digital diabetic retinopathy screening. Patients who presented with ma/hm<1DD also had an OCT scan. The fast macula scan on the Stratus OCT was used, and an ophthalmologist reviewed the scans to determine whether macular oedema was present. Macular oedema was identified by thickening on the OCT cross-sections. Patients were split into two groups: group one (325 eyes) included those with a best VA of 6/9 or better, and group two (30 eyes) those with a best VA of 6/12 or worse. Only patients who had no other referable features of diabetic retinopathy were selected. RESULTS. In group one, 6 (1.8%) of 325 eyes showed thickening on the OCT and were referred to the hospital eye service (HES) for further investigation. In group two, 6 (20%) of 30 eyes showed thickening and were referred to the HES. CONCLUSIONS. Ma/hm<1DD become more significant predictors of macular oedema when VA is reduced. The results confirm the grading criteria concerning microaneurysms predicting macular oedema for referable maculopathy in the English national screening programme. OCT is a useful method to accurately identify patients requiring referral to the HES.
Abstract:
Purpose - Managers at the case organization attempt to implement a knowledge management information system in order to avoid loss of expertise while improving control and efficiency. The paper seeks to explore the implications of this technological solution for employees within the company. Design/methodology/approach - The paper reports qualitative research conducted in a single organization in a knowledge-intensive primary industry. Research was conducted through observation and interviews, and evidence is presented in the form of interview extracts. Findings - The case section of the paper presents the accounts of organizational participants. These accounts reveal the workers' reactions to the technology-based system and something of their strategies of resistance to it. They also provide glimpses of the identity construction engaged in by these knowledge workers. Research limitations/implications - The issues identified are explored in a single case-study setting. Future research could examine the relevance of the findings to other settings. Practical implications - The case evidence presented indicates some of the complexity of implementing information systems in organizations. It can certainly be seen as further evidence of the uncertainty associated with organizational change and of the need for managers not to expect an easy adoption of intrusive IT solutions. Originality/value - This paper adds empirical insight to a largely conceptual literature. © Emerald Group Publishing Limited.
Abstract:
In inflammatory diseases, release of oxidants leads to oxidative damage to biomolecules. HOCl (hypochlorous acid), released by the myeloperoxidase/H2O2/Cl- system, can cause formation of phospholipid chlorohydrins, or alpha-chloro-fatty aldehydes from plasmalogens. It can attack several amino acid residues in proteins, causing post-translational oxidative modifications of proteins, but the formation of 3-chlorotyrosine is one of the most stable markers of HOCl-induced damage. Soft-ionization MS has proved invaluable for detecting the occurrence of oxidative modifications to both phospholipids and proteins, and characterizing the products generated by HOCl-induced attack. For both phospholipids and proteins, the application of advanced mass spectrometric methods such as product or precursor ion scanning and neutral loss analysis can yield information both about the specific nature of the oxidative modification and the biomolecule modified. The ideal is to be able to apply these methods to complex biological or clinical samples, to determine the site-specific modifications of particular cellular components. This is important for understanding disease mechanisms and offers potential for development of novel biomarkers of inflammatory diseases. In the present paper, we review some of the progress that has been made towards this goal.
Abstract:
Using current software engineering technology, the robustness required for safety-critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. For achieving high-reliability software, methods should be adopted which avoid introducing faults (fault avoidance); testing should then be carried out to identify any faults which persist (error removal); finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness of the system design specification and performance analysis of the model are basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is itself error-prone; by reducing human involvement in the tedious aspects of modelling and analysing the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language in which interprocess interaction takes place by communication; this may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state-space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software and to identify structures which may be prone to deadlock, so that they may be eliminated from the design before the program is ever run. The design tool consists of two parts. The first takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second part is a Petri-net simulator that takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential', which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples are given to show how the tool works in the early design phase for fault prevention before the program is ever run.
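As a rough illustration of the simulator's underlying idea, the Python sketch below enumerates the reachable markings of a small place/transition net and flags markings in which no transition is enabled. The example net and all place/transition names are invented for illustration; this is not the thesis tool, which works from translated Occam programs.

```python
# Minimal reachability analysis of a place/transition net (illustrative only).
# Two processes acquire two shared resources in opposite order -- a classic
# pattern that can reach a marking with no enabled transitions (deadlock).
from collections import deque

# Each transition is (name, consumes, produces); consumes/produces map place -> tokens.
transitions = [
    ("p_get_a", {"a": 1, "p_idle": 1}, {"p_has_a": 1}),
    ("p_get_b", {"b": 1, "p_has_a": 1}, {"p_done": 1, "a": 1, "b": 1}),
    ("q_get_b", {"b": 1, "q_idle": 1}, {"q_has_b": 1}),
    ("q_get_a", {"a": 1, "q_has_b": 1}, {"q_done": 1, "a": 1, "b": 1}),
]
initial = {"a": 1, "b": 1, "p_idle": 1, "q_idle": 1}

def enabled(marking, consumes):
    return all(marking.get(p, 0) >= n for p, n in consumes.items())

def fire(marking, consumes, produces):
    m = dict(marking)
    for p, n in consumes.items():
        m[p] -= n
        if m[p] == 0:
            del m[p]
    for p, n in produces.items():
        m[p] = m.get(p, 0) + n
    return m

def dead_markings(initial, transitions):
    """Breadth-first construction of the reachability set; returns markings
    in which no transition is enabled. Note: markings that correspond to
    proper termination also appear here; the user distinguishes them when
    exploring the tree."""
    seen, queue, dead = set(), deque([initial]), []
    while queue:
        m = queue.popleft()
        key = frozenset(m.items())
        if key in seen:
            continue
        seen.add(key)
        successors = [fire(m, c, p) for _, c, p in transitions if enabled(m, c)]
        if not successors:
            dead.append(m)
        queue.extend(successors)
    return dead

for d in dead_markings(initial, transitions):
    print("marking with no enabled transition:", d)
```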
Abstract:
Analyzing geographical patterns by collocating events, objects or their attributes has a long history in surveillance and monitoring, and is particularly applied in environmental contexts such as ecology or epidemiology. The identification of patterns or structures at some scales can be addressed using spatial statistics, particularly marked point process methodologies. Classification and regression trees are also related to this goal of finding "patterns" by deducing the hierarchy of influence of variables on a dependent outcome. Such variable selection methods have been applied to spatial data, but often without explicitly acknowledging the spatial dependence. Many methods routinely used in exploratory point pattern analysis are second-order statistics, used in a univariate context, though there is also a wide literature on modelling methods for multivariate point pattern processes. This paper proposes an exploratory approach for multivariate spatial data using higher-order statistics built from co-occurrences of events or marks given by the point processes. A spatial entropy measure, derived from these multinomial distributions of co-occurrences at a given order, constitutes the basis of the proposed exploratory methods. © 2010 Elsevier Ltd.
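A minimal sketch of how such a co-occurrence-based spatial entropy could be computed is given below. The neighbourhood rule (each point together with its k-1 nearest neighbours) and the function names are assumptions for illustration, not the authors' exact estimator.

```python
# Sketch (assumed definitions): form order-k co-occurrences from each point's
# k-1 nearest neighbours, count the resulting mark combinations, and take the
# Shannon entropy of that multinomial distribution as a spatial entropy.
import math
from collections import Counter

def spatial_entropy(points, marks, order=2):
    """points: list of (x, y); marks: one categorical mark per point."""
    counts = Counter()
    for i, (xi, yi) in enumerate(points):
        others = sorted(
            (j for j in range(len(points)) if j != i),
            key=lambda j: (points[j][0] - xi) ** 2 + (points[j][1] - yi) ** 2,
        )
        neighbours = others[: order - 1]
        # unordered combination of the point's mark and its neighbours' marks
        combo = tuple(sorted([marks[i]] + [marks[j] for j in neighbours]))
        counts[combo] += 1
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Toy example: two spatial clusters with different mark compositions.
pts = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
mks = ["A", "A", "B", "B", "B", "A"]
print(spatial_entropy(pts, mks, order=2))  # entropy of the order-2 combination distribution
```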
Abstract:
Retrospective clinical data presents many challenges for data mining and machine learning. The transcription of patient records from paper charts and the subsequent manipulation of data often result in high volumes of noise as well as a loss of other important information. In addition, such datasets often fail to represent expert medical knowledge and reasoning in any explicit manner. In this research we describe applying data mining methods to retrospective clinical data to build a prediction model for asthma exacerbation severity for pediatric patients in the emergency department. Difficulties in building such a model forced us to investigate alternative strategies for analyzing and processing retrospective data. This paper describes this process together with an approach to mining retrospective clinical data by incorporating formalized external expert knowledge (secondary knowledge sources) into the classification task. This knowledge is used to partition the data into a number of coherent sets, where each set is explicitly described in terms of the secondary knowledge source. Instances from each set are then classified in a manner appropriate to the characteristics of that particular set. We present our methodology and outline a set of experimental results that demonstrate some advantages and some limitations of our approach. © 2008 Springer-Verlag Berlin Heidelberg.
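As a loose sketch of the partition-then-classify idea (not the paper's actual secondary knowledge source, features or classifiers), the fragment below splits records by a hypothetical guideline-style rule and fits a separate decision tree to each coherent subset.

```python
# Sketch of partitioning data by an external expert rule, then classifying
# each partition separately. The rule, feature names and toy data are
# hypothetical placeholders.
from sklearn.tree import DecisionTreeClassifier

def guideline_partition(record):
    # Hypothetical expert rule, e.g. a severity band taken from a clinical guideline.
    return "high_risk" if record["resp_rate"] > 40 else "low_risk"

def fit_partitioned(records, labels):
    models, groups = {}, {}
    for rec, y in zip(records, labels):
        groups.setdefault(guideline_partition(rec), []).append((rec, y))
    for key, items in groups.items():
        X = [[r["resp_rate"], r["sats"]] for r, _ in items]
        y = [lab for _, lab in items]
        models[key] = DecisionTreeClassifier(max_depth=3).fit(X, y)
    return models

def predict(models, record):
    model = models[guideline_partition(record)]
    return model.predict([[record["resp_rate"], record["sats"]]])[0]

recs = [{"resp_rate": 50, "sats": 88}, {"resp_rate": 22, "sats": 97},
        {"resp_rate": 45, "sats": 91}, {"resp_rate": 18, "sats": 99}]
labs = ["severe", "mild", "severe", "mild"]
models = fit_partitioned(recs, labs)
print(predict(models, {"resp_rate": 48, "sats": 90}))
```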
Abstract:
This paper describes the development of a tree-based decision model to predict the severity of pediatric asthma exacerbations in the emergency department (ED) at 2 h following triage. The model was constructed from retrospective patient data abstracted from ED charts. The original data were preprocessed to eliminate questionable patient records and to normalize the values of age-dependent clinical attributes. The model uses attributes routinely collected in the ED and provides predictions even for incomplete observations. Its performance was verified on independent validation data (split-sample validation), where it demonstrated an AUC (area under the ROC curve) of 0.83, sensitivity of 84%, specificity of 71% and a Brier score of 0.18. The model is intended to supplement an asthma clinical practice guideline; however, it can also be used as a stand-alone decision tool.
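The evaluation pipeline can be illustrated generically. The sketch below uses scikit-learn on synthetic data (the ED chart data are not public) to reproduce the kind of split-sample assessment reported: fit a shallow decision tree, then compute AUC, sensitivity, specificity and the Brier score on held-out data. The feature count and tree depth are placeholders, not the paper's settings.

```python
# Generic split-sample validation of a decision-tree classifier with the
# same performance measures (AUC, sensitivity/specificity, Brier score).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score, brier_score_loss, confusion_matrix

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]     # predicted probability of the severe class
pred = (prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("AUC        :", roc_auc_score(y_te, prob))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("Brier score:", brier_score_loss(y_te, prob))
```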
Abstract:
Biofuels and chemicals from biomass imply the gasification of biogenic feedstocks and synthesis via methanol, dimethyl ether (DME) or Fischer-Tropsch products. To prevent the sensitive synthesis catalysts from poisoning, the syngas must be free of tar and particulates, and the trace concentrations of S-, Cl- and N-species, alkali and heavy metals must be of the order of a few ppb. Moreover, maximum conversion efficiency will be achieved by performing the gas cleaning above the synthesis conditions. The concept of an innovative dry high-temperature, high-pressure (HTHP) syngas cleaning is presented. Based on HT particle filtration and suitable sorption and catalysis processes for the relevant contaminants, an overall concept is derived which achieves the syngas quality required by synthesis catalysts in only two combined stages. The experimental setup for the HT gas cleaning downstream of the institute's 60 kWth entrained-flow gasifier REGA is described. Results from HT filter experiments at pilot scale are presented. The performance of two natural minerals for HCl and H2S sorption is discussed with respect to temperature, surface area and residence time. Finally, results from lab-scale investigations of the performance of low-temperature tar catalysts (commercial and proprietary developments) are discussed.
Abstract:
In this paper, free surface problems of Stefan-type for the parabolic heat equation are investigated using the method of fundamental solutions. The additional measurement necessary to determine the free surface could be a boundary temperature, a heat flux or an energy measurement. Both one- and two-phase flows are investigated. Numerical results are presented and discussed.
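For orientation, the sketch below illustrates only the fixed-boundary building block of the method of fundamental solutions for the 1D heat equation: expand the solution in fundamental solutions with sources placed outside the space-time domain and fit the coefficients to boundary and initial data by least squares. In the Stefan-type setting of the paper, the free surface and the additional measurement would contribute further unknowns and collocation equations, which this illustration omits; the test problem and source placement are arbitrary choices.

```python
# Fixed-boundary MFS sketch for u_t = u_xx on [0,1] x [0,1] (illustrative only).
import numpy as np

def F(x, t, y, tau):
    """Fundamental solution of the 1D heat equation (zero when t <= tau)."""
    if t <= tau:
        return 0.0
    return np.exp(-(x - y) ** 2 / (4.0 * (t - tau))) / np.sqrt(4.0 * np.pi * (t - tau))

# Exact solution used only to manufacture boundary and initial data.
exact = lambda x, t: np.exp(-np.pi ** 2 * t) * np.sin(np.pi * x)

# Collocation points on the parabolic boundary: x = 0, x = 1 and t = 0.
ts = np.linspace(0.0, 1.0, 20)
xs = np.linspace(0.0, 1.0, 20)
colloc = [(0.0, t) for t in ts] + [(1.0, t) for t in ts] + [(x, 0.0) for x in xs]

# Source points placed outside the space-time domain (x outside [0,1], t < 0).
src = [(y, -0.5) for y in np.linspace(-1.0, 2.0, 30)]

A = np.array([[F(x, t, y, tau) for (y, tau) in src] for (x, t) in colloc])
b = np.array([exact(x, t) for (x, t) in colloc])
coeff, *_ = np.linalg.lstsq(A, b, rcond=None)

u = lambda x, t: sum(c * F(x, t, y, tau) for c, (y, tau) in zip(coeff, src))
print("MFS vs exact at (0.5, 0.5):", u(0.5, 0.5), exact(0.5, 0.5))
```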
Abstract:
The appealing feature of the arbitrage-free Nelson-Siegel model of the yield curve is the ability to capture movements in the yield curve through readily interpretable shifts in its level, slope or curvature, all within a dynamic arbitrage-free framework. To ensure that the level, slope and curvature factors evolve so as not to admit arbitrage, the model introduces a yield-adjustment term. This paper shows how the yield-adjustment term can also be decomposed into the familiar level, slope and curvature elements plus some additional readily interpretable shape adjustments. This means that, even in an arbitrage-free setting, it continues to be possible to interpret movements in the yield curve in terms of level, slope and curvature influences. © 2014 Taylor & Francis.
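For reference, the familiar Nelson-Siegel level, slope and curvature loadings are easily written down. The sketch below computes the standard (non-arbitrage-free) expression with an illustrative decay parameter; it does not include the yield-adjustment term whose decomposition is the subject of the paper.

```python
# Standard Nelson-Siegel loadings: a well-known formula, not the paper's
# decomposition. The arbitrage-free variant adds a maturity-dependent
# yield-adjustment term on top of this expression.
import numpy as np

def nelson_siegel(tau, level, slope, curvature, lam=0.7):
    """Zero-coupon yield at maturity tau (years) for given factor values."""
    tau = np.asarray(tau, dtype=float)
    decay = np.exp(-lam * tau)
    slope_load = (1.0 - decay) / (lam * tau)
    curve_load = slope_load - decay
    return level + slope * slope_load + curvature * curve_load

maturities = np.array([0.25, 1, 2, 5, 10, 30])
print(nelson_siegel(maturities, level=0.04, slope=-0.02, curvature=0.01))
```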
Abstract:
In this paper, we present syllable-based duration modelling in the context of a prosody model for Standard Yorùbá (SY) text-to-speech (TTS) synthesis applications. Our prosody model is conceptualised around a modular holistic framework. This framework is implemented using Relational Tree (R-Tree) techniques. An important feature of our R-Tree framework is its flexibility, in that it facilitates the independent implementation of the different dimensions of prosody, i.e. duration, intonation and intensity, using different techniques and their subsequent integration. We applied the Fuzzy Decision Tree (FDT) technique to model the duration dimension. In order to evaluate the effectiveness of FDT in duration modelling, we also developed a Classification and Regression Tree (CART)-based duration model using the same speech data. Each of these models was integrated into our R-Tree based prosody model. We performed both quantitative (i.e. Root Mean Square Error (RMSE) and Correlation (Corr)) and qualitative (i.e. intelligibility and naturalness) evaluations on the two duration models. The results show that CART models the training data more accurately than FDT. The FDT model, however, shows a better ability to generalise beyond the training data, since it achieved better accuracy on the test data set. Our qualitative evaluation results show that our FDT model produces synthesised speech that is perceived to be more natural than that of our CART model. In addition, we observed that the expressiveness of FDT is much better than that of CART, because the representation in FDT is not restricted to a set of piecewise or discrete constant approximations. We therefore conclude that the FDT approach is a practical approach for duration modelling in SY TTS applications. © 2006 Elsevier Ltd. All rights reserved.
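The quantitative comparison relies on two standard measures. The small sketch below shows how RMSE and correlation would be computed over predicted and observed syllable durations; the numbers are invented placeholders, not the SY speech data or the paper's model outputs.

```python
# RMSE and Pearson correlation between predicted and observed syllable durations.
import numpy as np

def rmse(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.sqrt(np.mean((pred - obs) ** 2))

def corr(pred, obs):
    return np.corrcoef(pred, obs)[0, 1]

observed  = [0.18, 0.22, 0.15, 0.30, 0.26]   # seconds (placeholder values)
cart_pred = [0.17, 0.24, 0.16, 0.27, 0.25]
fdt_pred  = [0.19, 0.21, 0.14, 0.31, 0.27]

for name, pred in [("CART", cart_pred), ("FDT", fdt_pred)]:
    print(f"{name}: RMSE={rmse(pred, observed):.3f}  Corr={corr(pred, observed):.3f}")
```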
Abstract:
In this paper, we demonstrate the possibility of reaching a quasi-stable nonlinear transmission regime with carrier pulses of 12.5 ps width in multi-channel 40 Gbit/s systems. The quasi-stable pulses presented in this work for the first time are not dispersion-managed solitons; rather, they are supported by a large normal span-average dispersion and unbalanced optical amplification, and represent a new type of nonlinear carrier.
Abstract:
Mobile and wearable computers present input/output problems due to limited screen space and interaction techniques. When mobile, users typically focus their visual attention on navigating their environment - making visually demanding interface designs hard to operate. This paper presents two multimodal interaction techniques designed to overcome these problems and allow truly mobile, 'eyes-free' device use. The first is a 3D audio radial pie menu that uses head gestures for selecting items. An evaluation of a range of different audio designs showed that egocentric sounds reduced task completion time and perceived annoyance, and allowed users to walk closer to their preferred walking speed. The second is a sonically enhanced 2D gesture recognition system for use on a belt-mounted PDA. An evaluation of the system with and without audio feedback showed users' gestures were more accurate when dynamically guided by audio feedback. These novel interaction techniques demonstrate effective alternatives to visual-centric interface designs on mobile devices.