87 results for "data movement problem"


Relevance: 30.00%

Abstract:

The pathogenesis and medical management of diabetic retinopathy are reviewed. Good control of blood glucose and blood pressure remains a key element in the prevention and treatment of diabetic retinopathy, and a number of specific metabolic pathways have been identified that may be useful additional targets for therapeutic intervention. However, trial data aimed specifically at answering the questions of optimum medical management are limited, so the DIRECT study of renin-angiotensin system (RAS) blockade using oral candesartan 32 mg daily is a welcome addition to our knowledge. This arose from the promising improvement of retinopathy outcomes in the EUCLID study of lisinopril in type I diabetes. In DIRECT, 5 years of candesartan treatment in type I diabetes reduced the incidence of retinopathy, defined as progression by two or more steps in severity (ETDRS), by 18% (P = 0.0508) and, in a post hoc analysis, reduced the incidence of three-step progression by 35% (P = 0.034). In type I diabetes patients there was no effect on progression of established retinopathy. In contrast, in type II diabetes, 5 years of candesartan treatment increased regression of retinopathy by 34% (P ≤ 0.009). Importantly, an overall significant change towards less-severe retinopathy was noted in both type I and type II diabetes (P = 0.03). Although there is still no absolute proof that these effects were specific to RAS blockade, rather than simply an effect of lower blood pressure, it is reasonable to conclude that candesartan has earned a place in the medical management of diabetic retinopathy: to prevent the problem in type I diabetes and to treat the early stages in type II diabetes. © 2010 Macmillan Publishers Limited. All rights reserved.

Relevance: 30.00%

Abstract:

This thesis describes the design and development of an eye alignment/tracking system which allows self-alignment of the eye's optical axis with a measurement axis. Eye alignment is an area of research largely overlooked, yet it is a fundamental requirement in the acquisition of clinical data from the eye. New trends in the ophthalmic market, desiring portable hand-held apparatus, and the application of ophthalmic measurements in areas other than vision care have brought eye alignment under new scrutiny. Ophthalmic measurements taken in hand-held devices without a clinician present require alignment in an entirely new set of circumstances, demanding a novel solution. To solve this problem, the research has drawn upon eye tracking technology to monitor the eye, and a principle of self-alignment to perform alignment correction. A hand-held device naturally lends itself to the patient performing alignment; thus a technique has been designed to communicate raw eye tracking data to the user in a manner which allows the user to make the necessary corrections. The proposed technique is a novel methodology in which misalignment with the eye's optical axis can be quantified, corrected and evaluated. The technique uses Purkinje image tracking to monitor the eye's movement as well as the orientation of the optical axis. The use of two sets of Purkinje images allows quantification of the eye's physical parameters needed for accurate Purkinje image tracking, negating the need for prior anatomical data. An instrument employing the methodology was subsequently prototyped and validated, allowing a sample group to achieve self-alignment of their optical axis with an imaging axis within 16.5-40.8 s, and with a rotational precision of 0.03-0.043° (95% confidence intervals). By encompassing all these factors the technique facilitates self-alignment from an unaligned position on the visual axis to an aligned position on the optical axis. The consequence of this is that ophthalmic measurements, specifically pachymetric measurements, can be made in the absence of an optician, allowing the use of ophthalmic instrumentation and measurements in health professions other than vision care.

Relevance: 30.00%

Abstract:

A spatial object consists of data assigned to points in a space. Spatial objects, such as memory states and three-dimensional graphical scenes, are diverse and ubiquitous in computing. We develop a general theory of spatial objects by modelling abstract data types of spatial objects as topological algebras of functions. One useful algebra is that of continuous functions, with operations derived from operations on space and data, and equipped with the compact-open topology. Terms are used as abstract syntax for defining spatial objects, and conditional equational specifications are used for reasoning. We pose a completeness problem: given a selection of operations on spatial objects, do the terms approximate all the spatial objects to arbitrary accuracy? We give some general methods for solving the problem and consider their application to spatial objects with real number attributes. © 2011 British Computer Society.
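
As a loose, illustrative sketch of the functional view of spatial objects described above (not taken from the paper; the type alias SpatialObject and the pointwise operations are assumptions), a spatial object can be modelled as a function from points to data, with operations on objects derived pointwise from operations on the data:

    from typing import Callable, Tuple

    Point = Tuple[float, float, float]          # a point of a 3D space
    SpatialObject = Callable[[Point], float]    # data (a real number) assigned to each point

    def pointwise_add(f: SpatialObject, g: SpatialObject) -> SpatialObject:
        # Operation on spatial objects derived from addition on the data.
        return lambda p: f(p) + g(p)

    def scale(c: float, f: SpatialObject) -> SpatialObject:
        # Operation derived from scalar multiplication on the data.
        return lambda p: c * f(p)

    # A term over these operations denotes a new spatial object, e.g. 2*f + g.
    f: SpatialObject = lambda p: p[0] ** 2 + p[1]
    g: SpatialObject = lambda p: p[2]
    h = pointwise_add(scale(2.0, f), g)
    print(h((1.0, 2.0, 3.0)))                   # evaluates the term at a point: 9.0

The completeness question posed in the abstract asks whether terms built from such operations approximate every continuous spatial object to arbitrary accuracy.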

Relevance: 30.00%

Abstract:

Among the diseases that can affect binocular vision and reduce a subject's visual quality, congenital nystagmus (CN) is of particular interest. CN is an ocular-motor disorder characterized by involuntary, conjugated ocular oscillations and, although it was identified more than forty years ago, its pathogenesis is still under investigation. This kind of nystagmus is termed congenital (or infantile) since it may be present at birth or arise in the first months of life. The majority of CN patients show a considerable decrease in visual acuity: image fixation on the retina is disturbed by the continuous, mainly horizontal, oscillations of the nystagmus. However, the image of a given target can still be stable during short periods in which eye velocity slows down while the target image falls on the fovea (so-called foveation intervals). To quantify the extent of nystagmus, eye movement recordings are routinely employed, allowing physicians to extract and analyze its main features, such as waveform shape, amplitude and frequency. Suitably processed eye movement recordings also allow the computation of "estimated visual acuity" predictors: analytical functions that estimate expected visual acuity from signal features such as foveation time and foveation position variability. Hence, it is fundamental to develop robust and accurate methods to measure both of those parameters in order to obtain reliable values from the predictors. In this chapter the current methods to record eye movements in subjects with congenital nystagmus are discussed and the present techniques to accurately compute foveation time and eye position are presented. This study aims to disclose new methodologies for the analysis of congenital nystagmus eye movements, in order to identify nystagmus cycles and to evaluate foveation time, reducing the influence of repositioning saccades and data noise on the critical parameters of the estimation functions. Use of those functions extends the information acquired with typical visual acuity measurements (e.g., the Landolt C test) and could support treatment planning or therapy monitoring. © 2010 by Nova Science Publishers, Inc. All rights reserved.
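
As a rough sketch of how foveation intervals might be extracted from an eye-position recording (illustrative only; the thresholds and the function name foveation_time are assumptions, not the chapter's algorithm), one can mark the samples in which both eye velocity and position error stay below clinically motivated limits:

    import numpy as np

    def foveation_time(position_deg, fs, vel_thresh=4.0, pos_thresh=0.5):
        """Estimate total foveation time (s) from a 1-D eye-position trace.

        position_deg : eye position relative to the target, in degrees
        fs           : sampling rate in Hz
        vel_thresh   : maximum eye velocity (deg/s) during foveation (assumed value)
        pos_thresh   : maximum position error (deg) during foveation (assumed value)
        """
        position_deg = np.asarray(position_deg, dtype=float)
        velocity = np.gradient(position_deg) * fs                    # deg/s
        foveating = (np.abs(velocity) < vel_thresh) & (np.abs(position_deg) < pos_thresh)
        return foveating.sum() / fs                                  # seconds of foveation

The position variability over the detected samples could then be fed, together with the foveation time, into an estimated-visual-acuity predictor.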

Relevance: 30.00%

Abstract:

Vibration treatment by oscillating platforms is increasingly employed in the fields of exercise physiology and bone research. The rationale of this treatment is based on the neuromuscular response elicited by vibration loads. Surface electromyography (EMG) is widely used to assess the muscular response elicited by vibrations, and the root mean square (RMS) of the EMG signal is often used as a concise quantitative index of muscle activity; in general, the EMG envelope or RMS is expected to increase during vibration. However, it is well known that during surface bio-potential recording, motion artifacts may arise from relative motion between electrodes and skin and between skin layers. Skin stretch alone, by modifying the internal charge distribution, also results in a variation of the electrode potential. The aim of this study is to highlight the movements of muscles, and the consequent relevance of motion artifacts on electrodes, in subjects undergoing vibration treatments. EMGs from the quadriceps of fifteen subjects were recorded during vibration at different frequencies (15-40 Hz); triaxial accelerometers were placed on the quadriceps, as close as possible to the muscle belly, to monitor motion. The computed muscle belly displacements showed a peculiar behavior reflecting the mechanical properties of the structures involved. Motion artifacts related to the applied vibration were recognized and related to movement of the soft tissues. Indeed, large artifacts are visible in the EMG and patellar electrode recordings during vibration. Signal spectra also revealed sharp peaks at the vibration frequency and its harmonics, in accordance with the accelerometer data. © 2008 Springer-Verlag.
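
A minimal sketch of the kind of analysis described above (illustrative, not the authors' code; the window length, band width and function names are assumptions): a moving RMS envelope of an EMG channel, and the spectral power in a narrow band around the vibration frequency, where a sharp peak would point to motion artifact.

    import numpy as np
    from scipy.signal import welch

    def moving_rms(emg, fs, window_s=0.1):
        # Moving RMS envelope of a raw EMG signal (100 ms window assumed).
        n = max(1, int(window_s * fs))
        power = np.convolve(np.asarray(emg, float) ** 2, np.ones(n) / n, mode="same")
        return np.sqrt(power)

    def band_power_at(emg, fs, f_vib, bw=1.0):
        # Power of the EMG spectrum within +/- bw Hz of the vibration frequency f_vib.
        freqs, psd = welch(emg, fs=fs, nperseg=int(2 * fs))
        band = (freqs > f_vib - bw) & (freqs < f_vib + bw)
        return psd[band].sum() * (freqs[1] - freqs[0])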

Relevance: 30.00%

Abstract:

This paper considers the problem of low-dimensional visualisation of very high-dimensional information sources for the purpose of situation awareness in the maritime environment. In response to the requirement for human decision-support aids that reduce information overload (and, specifically, handle data amenable to inter-point relative similarity measures) in the below-water maritime domain, we are investigating a preliminary prototype topographic visualisation model. The focus of the current paper is the mathematical problem of exploiting a relative dissimilarity representation of signals in a visual informatics mapping model, driven by real-world sonar systems. A realistic noise model is explored and incorporated into non-linear and topographic visualisation algorithms building on the approach of [9]. Concepts are illustrated using a real-world dataset of 32 hydrophones monitoring a shallow-water environment in which targets are present and dynamic.
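
As a loose illustration of mapping a relative dissimilarity representation into a low-dimensional visualisation (not the paper's algorithm; scikit-learn's metric MDS stands in for the topographic model, and the additive noise term is an assumption):

    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for the sonar data: feature vectors for 32 hydrophone channels.
    signals = rng.normal(size=(32, 128))

    # Pairwise dissimilarities, perturbed to mimic a noisy measurement model.
    diss = np.linalg.norm(signals[:, None, :] - signals[None, :, :], axis=-1)
    diss += np.abs(rng.normal(scale=0.05, size=diss.shape))
    diss = (diss + diss.T) / 2.0
    np.fill_diagonal(diss, 0.0)

    # Two-dimensional layout that preserves inter-point dissimilarities.
    layout = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(diss)
    print(layout.shape)    # (32, 2)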

Relevance: 30.00%

Abstract:

The possibility to analyze, quantify and forecast epidemic outbreaks is fundamental when devising effective disease containment strategies. Policy makers are faced with the intricate task of drafting realistically implementable policies that strike a balance between risk management and cost. Two major techniques policy makers have at their disposal are epidemic modeling and contact tracing. Models are used to forecast the evolution of the epidemic both globally and regionally, while contact tracing is used to reconstruct the chain of people who have been potentially infected, so that they can be tested, isolated and treated immediately. However, both techniques may provide limited information, especially during an already advanced crisis when the need for action is urgent. In this paper we propose an alternative approach that goes beyond epidemic modeling and contact tracing, and leverages behavioral data generated by mobile carrier networks to evaluate contagion risk on a per-user basis. The individual risk represents the loss incurred by not isolating or treating a specific person, both in terms of how likely that person is to spread the disease and of how many secondary infections they would cause. To this end, we develop a model, named Progmosis, which quantifies this risk based on movement and regional aggregated statistics about infection rates. We develop and release an open-source tool that calculates this risk based on cellular network events. We simulate a realistic epidemic scenario, based on an Ebola virus outbreak, and find that gradually restricting the mobility of a subset of individuals reduces the number of infected people after 30 days by 24%.
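
A highly simplified sketch of a per-user risk score of the kind described (this is not the Progmosis model; the multiplicative combination and the argument names are assumptions): the time a user spends in each region is weighted by that region's infection rate, and the result is combined with the user's potential to generate secondary contacts.

    def contagion_risk(time_in_region, infection_rate, contacts_per_hour):
        """Toy per-user risk: expected exposure times expected onward contacts.

        time_in_region    : dict region -> hours spent there (from mobile network events)
        infection_rate    : dict region -> fraction of the population currently infectious
        contacts_per_hour : the user's average number of distinct contacts per hour
        """
        exposure = sum(hours * infection_rate.get(region, 0.0)
                       for region, hours in time_in_region.items())
        onward = contacts_per_hour * sum(time_in_region.values())
        return exposure * onward    # grows with both exposure and spreading potential

    # Example: a user splitting eight hours between two regions.
    print(contagion_risk({"A": 5.0, "B": 3.0}, {"A": 0.01, "B": 0.002}, 2.5))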

Relevance: 30.00%

Abstract:

Diabetes patients may suffer from an unhealthy lifestyle, long-term treatment and chronic complications. Decreasing the hospitalization rate is a crucial problem for health care centers. This study combines the bagging method with decision-tree base classifiers and cost-sensitive analysis to classify diabetes patients. Real patient data collected from a regional hospital in Thailand were analyzed. Relevant factors were selected and used to construct decision-tree base classifiers to separate diabetes from non-diabetes patients. The bagging method was then applied to improve accuracy. Finally, asymmetric classification cost matrices were used to provide alternative models for diabetes data analysis.
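
A minimal sketch of this kind of pipeline in scikit-learn (illustrative; the synthetic data, the cost ratio, and the use of class_weight as a stand-in for an explicit cost matrix are assumptions):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical stand-in for the hospital data: binary diabetes / non-diabetes labels.
    X, y = make_classification(n_samples=1000, n_features=10,
                               weights=[0.7, 0.3], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Decision tree made cost-sensitive by penalising missed diabetes cases more heavily
    # (class_weight approximates an asymmetric misclassification cost matrix).
    base = DecisionTreeClassifier(class_weight={0: 1, 1: 5}, random_state=0)

    # Bagging over bootstrap samples of the training data to improve accuracy.
    model = BaggingClassifier(estimator=base, n_estimators=50, random_state=0)
    model.fit(X_train, y_train)
    print(model.score(X_test, y_test))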

Relevance: 30.00%

Abstract:

For the treatment and monitoring of Parkinson's disease (PD) to be scientific, a key requirement is that measurement of disease stages and severity is quantitative, reliable, and repeatable. The last 50 years in PD research have been dominated by qualitative, subjective ratings obtained by human interpretation of the presentation of disease signs and symptoms at clinical visits. More recently, "wearable," sensor-based, quantitative, objective, and easy-to-use systems for quantifying PD signs for large numbers of participants over extended durations have been developed. This technology has the potential to significantly improve both clinical diagnosis and management in PD and the conduct of clinical studies. However, the large-scale, high-dimensional character of the data captured by these wearable sensors requires sophisticated signal processing and machine-learning algorithms to transform them into scientifically and clinically meaningful information. Such algorithms that "learn" from data have shown remarkable success in making accurate predictions for complex problems in which human skill has been required to date, but they are challenging to evaluate and apply without a basic understanding of the underlying logic on which they are based. This article contains a nontechnical tutorial review of relevant machine-learning algorithms, also describing their limitations and how these can be overcome. It discusses the implications of this technology and offers a practical road map for realizing its full potential in PD research and practice. © 2016 International Parkinson and Movement Disorder Society.
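
As a toy illustration of the wearable-data-to-prediction pipeline the review discusses (entirely hypothetical; the features, the random data and the classifier are not taken from the article): simple features are extracted from accelerometer segments and a standard classifier learns a mapping to disease labels.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def tremor_features(accel, fs):
        # Two simple features per segment: RMS amplitude and dominant frequency
        # (PD rest tremor typically lies near 4-6 Hz).
        accel = accel - accel.mean()
        rms = np.sqrt(np.mean(accel ** 2))
        spectrum = np.abs(np.fft.rfft(accel))
        freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
        return [rms, freqs[np.argmax(spectrum[1:]) + 1]]   # skip the DC bin

    # Hypothetical training set: one feature vector per segment, label 0 = control, 1 = PD.
    rng = np.random.default_rng(0)
    X = np.array([tremor_features(rng.normal(size=1000), fs=100.0) for _ in range(200)])
    y = rng.integers(0, 2, size=200)
    clf = RandomForestClassifier(random_state=0).fit(X, y)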

Relevance: 30.00%

Abstract:

A numerical method based on integral equations is proposed and investigated for the Cauchy problem for the Laplace equation in three-dimensional smooth bounded doubly connected domains. To numerically reconstruct a harmonic function from knowledge of the function and its normal derivative on the outer of the two closed boundary surfaces, the harmonic function is represented as a single-layer potential. Matching this representation against the given data, a system of boundary integral equations is obtained to be solved for two unknown densities. This system is rewritten over the unit sphere under the assumption that each of the two boundary surfaces can be mapped smoothly and one-to-one to the unit sphere. For the discretization of this system, Weinert's method (PhD, Göttingen, 1990) is employed, which generates a Galerkin-type procedure for the numerical solution, with the densities in the system of integral equations expressed in terms of spherical harmonics. Tikhonov regularization is incorporated, and numerical results are included, showing the efficiency of the proposed procedure.
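
For orientation, a hedged sketch of the representation described, in standard potential-theory notation (the exact choice of layer densities and surfaces is an assumption based on the abstract, not copied from the paper): with \(\Gamma_1\) the outer and \(\Gamma_2\) the inner boundary surface, the harmonic function is sought as a single-layer potential with densities \(\psi_1,\psi_2\), and matching the given Cauchy data on \(\Gamma_1\) yields the system of boundary integral equations.

    u(x) \;=\; \sum_{k=1}^{2} \int_{\Gamma_k} \psi_k(y)\,\Phi(x,y)\,\mathrm{d}s(y),
    \qquad \Phi(x,y) \;=\; \frac{1}{4\pi\,|x-y|},

    u\big|_{\Gamma_1} = f,
    \qquad
    \frac{\partial u}{\partial \nu}\Big|_{\Gamma_1} = g.

After the smooth one-to-one mapping of \(\Gamma_1\) and \(\Gamma_2\) to the unit sphere, the system is discretized by expanding \(\psi_1\) and \(\psi_2\) in spherical harmonics and stabilized by Tikhonov regularization.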

Relevance: 30.00%

Abstract:

The original contribution of this work is threefold. Firstly, this thesis develops a critical perspective on current evaluation practice for business support, with a focus on the timing of evaluation. The general time frame applied to business support policy evaluation is limited to one to two, seldom three, years post-intervention. This is despite calls for long-term impact studies by various authors concerned about time lags before effects are fully realised. This desire for long-term evaluation conflicts with the requirements of policy-makers and funders, who seek quick results. Moreover, current ‘best practice’ frameworks do not refer to timing or its implications, and data availability affects the ability to undertake long-term evaluation. Secondly, this thesis provides methodological value for follow-up and similar studies by linking scheme-beneficiary data with official performance datasets; data availability problems are thus avoided through the use of secondary data. Thirdly, this thesis builds the evidence base through a longitudinal impact study of small business support in England covering seven years of post-intervention data. This illustrates the variability of results across different evaluation periods and the value of using multiple years of data for a robust understanding of support impact. For survival, the impact of assistance is found to be immediate, but limited. Concerning growth, significant impact centres on a two- to three-year period post-intervention for the linear selection and quantile regression models – positive for employment and turnover, negative for productivity. Attribution of impact may present a problem for subsequent periods. The results clearly support the argument for the use of longitudinal data and analysis, and for a greater appreciation by evaluators of the time factor. This analysis recommends a time frame of four to five years post-intervention for soft business support evaluation.
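
As a purely illustrative sketch of the kind of growth analysis mentioned (a median quantile regression of post-intervention employment growth on assistance status; the variable names and synthetic data are hypothetical, not drawn from the linked datasets):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical linked dataset: one row per firm, growth measured some years after support.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "emp_growth": rng.normal(0.05, 0.2, 500),
        "assisted": rng.integers(0, 2, 500),     # 1 = firm received business support
        "firm_age": rng.integers(1, 30, 500),
    })

    # Median (q = 0.5) quantile regression of growth on assistance plus a control.
    model = smf.quantreg("emp_growth ~ assisted + firm_age", df).fit(q=0.5)
    print(model.params["assisted"])              # estimated median impact of assistance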
