876 results for semi-automatic method
Abstract:
Semi-implicit algorithms are commonly used to treat the gravitational term in numerical models. In this paper, we adopt the method of characteristics to compute solutions for gravity waves on a sphere directly, using a semi-Lagrangian advection scheme instead of the semi-implicit method in a shallow water model, thereby avoiding expensive matrix inversions. Adoption of the semi-Lagrangian scheme keeps the numerical model stable for any Courant number and saves CPU time. To illustrate the efficiency of the characteristic constrained interpolation profile (CIP) method, numerical results are shown for idealized test cases on a sphere in the Yin-Yang grid system.
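The stability claim follows from tracing each grid value back along the flow to its departure point. A minimal 1D sketch with plain linear interpolation (not the paper's CIP scheme on the Yin-Yang grid) illustrates why no Courant-number restriction arises:

```python
# Minimal sketch: 1D constant-wind semi-Lagrangian advection on a periodic domain.
# Each value is traced back to its departure point and interpolated, so the scheme
# remains stable even when the Courant number exceeds 1.
import numpy as np

def semi_lagrangian_step(q, u, dx, dt):
    """Advance q by one step: trace departure points and interpolate (linear here)."""
    n = q.size
    x = np.arange(n) * dx
    L = n * dx
    x_dep = (x - u * dt) % L               # departure points, wrapped periodically
    xp = np.concatenate([x, [L]])          # periodic linear interpolation table
    qp = np.concatenate([q, [q[0]]])
    return np.interp(x_dep, xp, qp)

if __name__ == "__main__":
    n, dx, u = 200, 1.0, 1.0
    q = np.exp(-0.5 * ((np.arange(n) * dx - 50.0) / 5.0) ** 2)  # Gaussian pulse
    dt = 5.0                               # Courant number u*dt/dx = 5, still stable
    for _ in range(40):
        q = semi_lagrangian_step(q, u, dx, dt)
    print("max after 40 steps:", q.max())  # bounded (diffused by linear interpolation)
```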
Abstract:
In this paper, the capabilities of laser-induced breakdown spectroscopy (LIBS) for rapid analysis of a multi-component plant sample are illustrated using a 1064 nm laser focused onto the surface of folium lycii. Under the assumption of a homogeneous plasma, nine essential micronutrients in folium lycii are identified. The electron density and plasma temperature are obtained using the Saha equation and the Boltzmann plot method, and the relative concentrations of Ca, Mg, Al, Si, Ti, Na, K, Li, and Sr are obtained employing a semi-quantitative method.
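For the plasma-temperature step, a Boltzmann plot fits ln(Iλ/(gA)) against the upper-level energy and takes the temperature from the slope. The sketch below uses placeholder line data, not the paper's folium lycii measurements:

```python
# Boltzmann-plot sketch: plasma temperature from the slope of ln(I*lambda/(g*A))
# versus upper-level energy E_k. Line data below are placeholders, not NIST values.
import numpy as np

k_B = 8.617333e-5  # Boltzmann constant in eV/K

def boltzmann_plot_temperature(intensity, wavelength_nm, g_upper, A_ul, E_upper_eV):
    """Linear fit of ln(I*lambda/(g*A)) vs E_k; T = -1/(k_B * slope)."""
    y = np.log(intensity * wavelength_nm / (g_upper * A_ul))
    slope, _ = np.polyfit(E_upper_eV, y, 1)
    return -1.0 / (k_B * slope)

# placeholder lines chosen to give a temperature near 1e4 K
I   = np.array([2590.0, 215.0, 320.0, 110.0])
lam = np.array([422.7, 445.5, 430.3, 443.5])        # nm
g   = np.array([3.0, 5.0, 5.0, 7.0])
A   = np.array([2.2e8, 8.7e7, 1.4e8, 6.6e7])        # s^-1
E   = np.array([2.93, 4.68, 4.78, 5.30])            # eV
print("T ~ %.0f K" % boltzmann_plot_temperature(I, lam, g, A, E))
```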
Abstract:
T. Boongoen and Q. Shen. Semi-Supervised OWA Aggregation for Link-Based Similarity Evaluation and Alias Detection. Proceedings of the 18th International Conference on Fuzzy Systems (FUZZ-IEEE'09), pp. 288-293, 2009. Sponsorship: EPSRC
Abstract:
Purpose
This study was designed to investigate methods to help patients suffering from unilateral tinnitus synthesize an auditory replica of their tinnitus.
Materials and methods
Two semi-automatic methods (A and B), derived from the patient's auditory threshold, and a method (C) combining a pure tone and a narrow band-pass noise centred on an adjustable frequency were devised and rated on their likeness to the patient's tinnitus over two test sessions. A third test evaluated the stability over time of the synthesized tinnitus replica built with method C, and its proneness to merge with the patient's tinnitus. Patients were then asked to try to control the lateralisation of this single percept through adjustment of the tinnitus replica level.
Results
The first two tests showed that seven out of ten patients chose the tinnitus replica built with method C as their preferred one. The third test, performed on twelve patients, revealed that pitch tuning was rather stable over a one-week interval. It showed that eight patients were able to consistently match the central frequency of the synthesized tinnitus (presented to the contralateral ear) to their own tinnitus, which led to a unique tinnitus percept. The lateralisation displacement was consistent across patients and revealed an average range of 29 dB to obtain a full lateral shift from the ipsilateral to the contralateral side.
Conclusions
Although spectrally simpler than the semi-automatic methods, method C could replicate patients' tinnitus to some extent. When the synthesized tinnitus and the patient's tinnitus merged into a unique percept, lateralisation of this percept was achieved.
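As a rough illustration of a method-C-style stimulus (the paper's exact synthesis chain, calibration and level units are not given here; parameter names are assumptions), a pure tone can be mixed with a narrow band-pass noise centred on an adjustable frequency:

```python
# Illustrative sketch of a tone-plus-narrow-band-noise stimulus with an adjustable
# centre frequency and level; not the authors' calibrated synthesis procedure.
import numpy as np

def synth_tinnitus_replica(fc_hz, bandwidth_hz, tone_noise_ratio=0.5,
                           level=0.1, dur_s=1.0, fs=44100):
    """Return a mono signal: sine at fc plus FFT-filtered narrow-band noise."""
    n = int(dur_s * fs)
    t = np.arange(n) / fs
    tone = np.sin(2 * np.pi * fc_hz * t)
    # narrow-band noise: white noise with FFT bins outside the band zeroed
    spec = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec[np.abs(freqs - fc_hz) > bandwidth_hz / 2] = 0.0
    noise = np.fft.irfft(spec, n)
    noise /= np.max(np.abs(noise)) + 1e-12
    mix = tone_noise_ratio * tone + (1 - tone_noise_ratio) * noise
    return level * mix / (np.max(np.abs(mix)) + 1e-12)

sig = synth_tinnitus_replica(fc_hz=6000, bandwidth_hz=400)  # e.g. a 6 kHz percept
```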
Abstract:
Coronary CT angiography is widely used in clinical practice for the assessment of coronary artery disease. Several studies have shown that the same exam can also be used to assess left ventricle (LV) function. LV function is usually evaluated using only the data from the end-systolic and end-diastolic phases, even though coronary CT angiography (CTA) provides data concerning multiple cardiac phases along the cardiac cycle. This unused wealth of data, left aside mostly due to its complexity and the lack of proper tools, has still to be explored in order to assess whether further insight is possible regarding regional LV functional analysis. Furthermore, different parameters can be computed to characterize LV function, and while some are well known to clinicians, others still need to be evaluated concerning their value in clinical scenarios. The work presented in this thesis covers two steps towards extended use of CTA data: LV segmentation and functional analysis. A new semi-automatic segmentation method is presented to obtain LV data for all cardiac phases available in a CTA exam, and a 3D editing tool was designed to allow users to fine-tune the segmentations. Regarding segmentation evaluation, a methodology is proposed to help choose the similarity metrics used to compare segmentations. This methodology allows the detection of redundant measures that can be discarded. The evaluation was performed with the help of three experienced radiographers, yielding low intra- and inter-observer variability. In order to allow exploration of the segmented data, several parameters characterizing global and regional LV function are computed for the available cardiac phases. The data thus obtained are shown using a set of visualizations allowing synchronized visual exploration. The main purpose is to provide means for clinicians to explore the data and gather insight into their meaning, as well as their correlation with each other and with diagnosis outcomes. Finally, an interactive method is proposed to help clinicians assess myocardial perfusion by providing automatic assignment of lesions, detected by clinicians, to a myocardial segment. This new approach has obtained positive feedback from clinicians and is not only an improvement over the current assessment method but also an important first step towards systematic validation of automatic myocardial perfusion assessment measures.
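The redundancy-detection idea can be sketched as follows (the thesis's actual metric set and thresholds are assumptions here): compute several overlap measures per case and flag metric pairs whose scores are almost perfectly correlated across cases.

```python
# Sketch: flag redundant segmentation-similarity metrics by correlating their scores
# over a set of (segmentation, reference) binary LV masks.
import numpy as np

def dice(a, b):    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
def jaccard(a, b): return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
def vol_sim(a, b): return 1 - abs(int(a.sum()) - int(b.sum())) / (a.sum() + b.sum())

METRICS = {"dice": dice, "jaccard": jaccard, "volume_similarity": vol_sim}

def redundant_pairs(mask_pairs, threshold=0.95):
    """mask_pairs: list of (segmentation, reference) boolean arrays."""
    scores = np.array([[m(a, b) for m in METRICS.values()] for a, b in mask_pairs])
    corr = np.corrcoef(scores, rowvar=False)      # correlation between metrics
    names = list(METRICS)
    return [(names[i], names[j], corr[i, j])
            for i in range(len(names)) for j in range(i + 1, len(names))
            if corr[i, j] > threshold]            # highly correlated => redundant
```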
Abstract:
Difficult tracheal intubation assessment is an important research topic in anesthesia, as failed intubations are an important cause of mortality in anesthetic practice. The modified Mallampati score is widely used, alone or in conjunction with other criteria, to predict the difficulty of intubation. This work presents an automatic method to assess the modified Mallampati score from an image of a patient with the mouth wide open. For this purpose we propose an active appearance model (AAM) based method and use linear support vector machines (SVM) to select a subset of relevant features obtained using the AAM. This feature selection step proves to be essential, as it drastically improves the performance of classification, which is obtained using an SVM with RBF kernel and majority voting. We test our method on images of 100 patients undergoing elective surgery, achieve 97.9% accuracy in the leave-one-out cross-validation test, and provide a key element of an automatic difficult intubation assessment system.
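A sketch of the classification stage only (the AAM feature extraction and the paper's exact hyper-parameters are not reproduced; file names are hypothetical): a linear SVM selects relevant AAM features and an RBF-kernel SVM classifies the reduced vectors under leave-one-out cross-validation.

```python
# Sketch: linear-SVM feature selection followed by an RBF-kernel SVM, evaluated with
# leave-one-out cross-validation. Hyper-parameters and file names are placeholders.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

X = np.load("aam_features.npy")        # hypothetical: one AAM feature vector per patient
y = np.load("mallampati_class.npy")    # hypothetical: class labels

clf = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(LinearSVC(C=0.1, dual=False, max_iter=10000))),
    ("rbf_svm", SVC(kernel="rbf", C=10.0, gamma="scale")),
])
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.3f}")
```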
Abstract:
Radiation-induced nephropathy (damage to renal tissue) is one of the limiting factors in treatment planning for abdominal radiotherapy. The current process, which assesses the relative functionality of the kidneys using two-dimensional gamma scintigraphy, does not identify the functional portions that could be spared during treatment planning. A method for mapping renal functionality in three dimensions and extracting a functional contour usable during planning was developed from iodine contrast-enhanced dual-energy CT scans. The contrast-agent concentration is considered to be related to renal functionality. The technique relies on three-material decomposition, which makes it possible to reconstruct iodine-concentration images. A semi-automated segmentation algorithm based on hierarchical and anamorphic surface deformation then extracts the functional contour of the kidneys. The first results obtained with patient images show that clinical use is feasible and could be beneficial.
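The three-material decomposition step can be sketched per voxel (the basis materials and attenuation values below are assumptions, not clinical calibration data): two measured attenuations plus volume conservation give a 3x3 linear system whose iodine fraction yields the concentration map.

```python
# Sketch of dual-energy three-material decomposition. Basis attenuation values are
# illustrative placeholders; columns = soft tissue, fat, iodine-enhanced blood.
import numpy as np

MU_LOW  = np.array([60.0, -100.0, 900.0])   # assumed attenuations at low tube voltage
MU_HIGH = np.array([50.0,  -90.0, 450.0])   # assumed attenuations at high tube voltage

def three_material_fractions(hu_low, hu_high):
    """Solve [mu_low; mu_high; 1 1 1] * f = [hu_low, hu_high, 1] for one voxel."""
    A = np.vstack([MU_LOW, MU_HIGH, np.ones(3)])
    b = np.array([hu_low, hu_high, 1.0])
    return np.linalg.solve(A, b)             # volume fractions of the three materials

def iodine_map(img_low, img_high):
    """Vectorised iodine-fraction map from co-registered low/high-energy images."""
    A = np.vstack([MU_LOW, MU_HIGH, np.ones(3)])
    b = np.stack([img_low.ravel(), img_high.ravel(), np.ones(img_low.size)])
    f = np.linalg.solve(A, b)                 # shape (3, n_voxels)
    return f[2].reshape(img_low.shape)        # iodine fraction per voxel
```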
Abstract:
Scoliosis is a 3D deformity of the spine and rib cage. Extensive validation of 3D reconstruction methods of the spine from biplanar radiography has already been published. In this article, we propose a novel method to reconstruct the rib cage, using the same biplanar views as for the 3D reconstruction of the spine, to allow clinical assessment of whole trunk deformities. This technique uses a semi-automatic segmentation of the ribs in the postero-anterior X-ray view and an interactive segmentation of partial rib edges in the lateral view. The rib midlines are automatically extracted in 2D and reconstructed in 3D using the epipolar geometry. For the ribs not visible in the lateral view, the method predicts their 3D shape. The accuracy of the proposed method has been assessed using data obtained from a synthetic bone model as a gold standard and has also been evaluated using data of real patients with scoliotic deformities. Results show that the reconstructed ribs enable a reliable evaluation of the rib axial rotation, which will allow a 3D clinical assessment of the spine and rib cage deformities.
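The epipolar reconstruction step amounts to triangulating matched 2D points from the two calibrated views. A generic DLT sketch follows (the calibration procedure and rib-midline matching of the paper are not reproduced):

```python
# Sketch: linear (DLT) triangulation of a point seen in the postero-anterior and
# lateral radiographs, given their 3x4 projection matrices.
import numpy as np

def triangulate_point(P_pa, P_lat, x_pa, x_lat):
    """x_pa, x_lat are (u, v) pixel coordinates of the same point in each view."""
    u1, v1 = x_pa
    u2, v2 = x_lat
    A = np.array([
        u1 * P_pa[2]  - P_pa[0],
        v1 * P_pa[2]  - P_pa[1],
        u2 * P_lat[2] - P_lat[0],
        v2 * P_lat[2] - P_lat[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]        # inhomogeneous 3D point

# a rib midline would then be reconstructed by triangulating its matched 2D samples
```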
Abstract:
The storage and processing capacity realised by computing has led to an explosion of data retention. We have now reached the point of information overload and must begin to use computers to process more complex information. In particular, the proposition of the Semantic Web has given structure to this problem, but it has yet to be realised practically. The largest of its problems is that of ontology construction; without a suitable automatic method, most ontologies will have to be encoded by hand. In this paper we discuss the current methods for semi- and fully automatic construction and their current shortcomings. In particular, we pay attention to the application of ontologies to products and the practical application of the ontologies.
Extraction of tidal channel networks from aerial photographs alone and combined with laser altimetry
Abstract:
Tidal channel networks play an important role in the intertidal zone, exerting substantial control over the hydrodynamics and sediment transport of the region and hence over the evolution of the salt marshes and tidal flats. The study of the morphodynamics of tidal channels is currently an active area of research, and a number of theories have been proposed which require for their validation measurement of channels over extensive areas. Remotely sensed data provide a suitable means for such channel mapping. The paper describes a technique that may be adapted to extract tidal channels from either aerial photographs or LiDAR data separately, or from both types of data used together in a fusion approach. Application of the technique to channel extraction from LiDAR data has been described previously. However, aerial photographs of intertidal zones are much more commonly available than LiDAR data, and most LiDAR flights now involve acquisition of multispectral images to complement the LiDAR data. In view of this, the paper investigates the use of multispectral data for semiautomatic identification of tidal channels, firstly from only aerial photographs or linescanner data, and secondly from fused linescanner and LiDAR data sets. A multi-level, knowledge-based approach is employed. The algorithm based on aerial photography can achieve a useful channel extraction, though may fail to detect some of the smaller channels, partly because the spectral response of parts of the non-channel areas may be similar to that of the channels. The algorithm for channel extraction from fused LiDAR and spectral data gives an increased accuracy, though only slightly higher than that obtained using LiDAR data alone. The results illustrate the difficulty of developing a fully automated method, and justify the semi-automatic approach adopted.
Abstract:
The study of the morphodynamics of tidal channel networks is important because of their role in tidal propagation and the evolution of salt-marshes and tidal flats. Channel dimensions range from tens of metres wide and metres deep near the low water mark to only 20-30 cm wide and 20 cm deep for the smallest channels on the marshes. The conventional method of measuring the networks is cumbersome, involving manual digitising of aerial photographs. This paper describes a semi-automatic knowledge-based network extraction method that is being implemented to work using airborne scanning laser altimetry (and later aerial photography). The channels exhibit a width variation of several orders of magnitude, making an approach based on multi-scale line detection difficult. The processing therefore uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels using a distance-with-destination transform. Breaks in the networks are repaired by extending channels in the direction of their ends to join with nearby channels, using domain knowledge that flow paths should proceed downhill and that any network fragment should be joined to a nearby fragment so as to connect eventually to the open sea.
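A simplified sketch of the low-level stage only (the distance-with-destination transform and knowledge-based repair are omitted): multi-scale Canny edge detection on the LiDAR surface, keeping the gradient orientation so that opposite-facing channel banks can later be paired as anti-parallel edges.

```python
# Sketch: union of Canny edges over several scales on a LiDAR DEM, plus a gradient
# orientation map to support later pairing of anti-parallel channel banks.
import numpy as np
from skimage.feature import canny
from scipy import ndimage

def multiscale_channel_edges(dem, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Return (edge mask, gradient orientation) for a 2D elevation array."""
    edges = np.zeros(dem.shape, dtype=bool)
    for s in sigmas:                       # small sigmas catch the 20-30 cm creeks,
        edges |= canny(dem, sigma=s)       # large sigmas the tens-of-metres channels
    gy, gx = np.gradient(ndimage.gaussian_filter(dem, 2.0))
    orientation = np.arctan2(gy, gx)       # banks of one channel face ~180 deg apart
    return edges, orientation
```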
Abstract:
Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models will be discussed. The first project, in conjunction with Bristol University, aims to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with a dense and accurate floodplain topography together with vegetation heights for parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model's finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model's finite element mesh to reflect floodplain features such as hedges and trees that have different frictional properties from their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods by fusing the LiDAR data with digital map data. The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, and a high-level processing stage then improves the network using domain knowledge. The approach adopted at the low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher-level processing includes a channel repair mechanism.
Abstract:
In any enterprise, decisions need to be made about the management of information throughout its life cycle. This requires information evaluation to take place, a little-understood process. For evaluation support to be both effective and resource-efficient, some sort of automatic or semi-automatic evaluation method would be invaluable. Such a method would require an understanding of the diversity of the contexts in which evaluation takes place so that evaluation support can have the necessary context-sensitivity. This paper identifies the dimensions influencing the information evaluation process and defines the elements that characterise them, thus providing the foundations for a context-sensitive evaluation framework.
Abstract:
This paper presents the development and evaluation of a method for enabling quantitative and automatic scoring of alternating tapping performance of patients with Parkinson’s disease (PD). Ten healthy elderly subjects and 95 patients in different clinical stages of PD have utilized a touch-pad handheld computer to perform alternate tapping tests in their home environments. First, a neurologist used a web-based system to visually assess impairments in four tapping dimensions (‘speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’) and a global tapping severity (GTS). Second, tapping signals were processed with time series analysis and statistical methods to derive 24 quantitative parameters. Third, principal component analysis was used to reduce the dimensions of these parameters and to obtain scores for the four dimensions. Finally, a logistic regression classifier was trained using a 10-fold stratified cross-validation to map the reduced parameters to the corresponding visually assessed GTS scores. Results showed that the computed scores correlated well to visually assessed scores and were significantly different across Unified Parkinson’s Disease Rating Scale scores of upper limb motor performance. In addition, they had good internal consistency, had good ability to discriminate between healthy elderly and patients in different disease stages, had good sensitivity to treatment interventions and could reflect the natural disease progression over time. In conclusion, the automatic method can be useful to objectively assess the tapping performance of PD patients and can be included in telemedicine tools for remote monitoring of tapping.
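A hedged sketch of the scoring pipeline described above (the extraction of the 24 parameters from the raw tapping signals and the number of retained components are assumptions; file names are hypothetical): the parameters are reduced with PCA and mapped to the visually assessed GTS scores with logistic regression under 10-fold stratified cross-validation.

```python
# Sketch: PCA dimension reduction of the tapping parameters followed by a logistic
# regression classifier, evaluated with 10-fold stratified cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X = np.load("tapping_parameters.npy")   # hypothetical file: shape (n_tests, 24)
y = np.load("gts_scores.npy")           # neurologist's global tapping severity labels

model = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=4)),       # assumed: one component per tapping dimension
    ("clf", LogisticRegression(max_iter=1000)),
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print("mean CV accuracy:", cross_val_score(model, X, y, cv=cv).mean())
```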
Abstract:
The Topliss method was used to guide a synthetic path in support of drug discovery efforts toward the identification of potent antimycobacterial agents. Salicylic acid and its derivatives, p-chloro, p-methoxy, and m-chlorosalicylic acid, exemplify a series of synthetic compounds whose minimum inhibitory concentrations for a strain of Mycobacterium were determined and compared to those of the reference drug, p-aminosalicylic acid. Several physicochemical descriptors (including Hammett's sigma constant, ionization constant, dipole moment, Hansch constant, calculated partition coefficient, Sterimol-L and -B-4 and molecular volume) were considered to elucidate structure-activity relationships. Molecular electrostatic potential and molecular dipole moment maps were also calculated using the AM1 semi-empirical method. Among the new derivatives, m-chlorosalicylic acid showed the lowest minimum inhibitory concentration. The overall results suggest that both physicochemical properties and electronic features may influence the biological activity of this series of antimycobacterial agents and thus should be considered in designing new p-aminosalicylic acid analogs.