921 results for Facial Object Based Method


Abstract:

The focus of this paper is on handling non-monotone information in the modelling process of a single-input target monotone system. On one hand, the monotonicity property is a piece of useful prior (or additional) information that can be exploited in modelling a monotone target system. On the other hand, it is difficult to model a monotone system if the available information is not monotonically ordered. In this paper, an interval-based method for analysing non-monotonically ordered information is proposed. The applicability of the proposed method to handling a non-monotone function, a non-monotone data set, and an incomplete and/or non-monotone fuzzy rule base is presented. The upper and lower bounds of the interval are first defined. The region governed by the interval is interpreted as a coverage measure, whose size represents the uncertainty pertaining to the available information. The proposed approach constitutes a new method to transform non-monotone information into an interval-valued monotone system, and its application to an incomplete and/or non-monotone fuzzy rule base constitutes a new fuzzy reasoning approach.
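A minimal sketch of the interval idea, under our own assumptions (a single-input data set and envelope-style bounds; the authors' exact construction may differ): two monotone bounds bracket the non-monotone data, and the area between them serves as the coverage measure.

    import numpy as np

    def monotone_interval(x, y):
        """Bracket non-monotone data (x, y) with two non-decreasing bounds:
        upper[i] = max of y over x <= x[i]  (running maximum),
        lower[i] = min of y over x >= x[i]  (reverse running minimum),
        so lower <= y <= upper everywhere and both bounds are monotone.
        This is our illustrative construction, not the paper's algorithm."""
        order = np.argsort(x)
        xs, ys = np.asarray(x)[order], np.asarray(y)[order]
        upper = np.maximum.accumulate(ys)
        lower = np.minimum.accumulate(ys[::-1])[::-1]
        return xs, lower, upper

    x = np.linspace(0.0, 1.0, 50)
    y = x + 0.2 * np.sin(6 * x)            # a mildly non-monotone function
    xs, lo, hi = monotone_interval(x, y)
    gap = hi - lo
    coverage = float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(xs)))
    print(f"coverage size: {coverage:.4f}")  # larger = more uncertainty

When the data are already monotone, the two bounds coincide and the coverage collapses to zero, which matches the reading of coverage size as the uncertainty carried by the non-monotone information.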

Abstract:

Cropping and random bending are two common attacks in image watermarking. In this paper, we propose a novel image-watermarking method to deal with these attacks, as well as other common attacks. In the embedding process, we first preprocess the host image with a Gaussian low-pass filter. Then, a secret key is used to randomly select a number of gray levels, and the histogram of the filtered image with respect to these selected gray levels is constructed. After that, a histogram-shape-related index is introduced to choose the pixel groups with the highest numbers of pixels, and a safe band is built between the chosen and nonchosen pixel groups. A watermark-embedding scheme is proposed to insert watermarks into the chosen pixel groups. The use of the histogram-shape-related index and the safe band results in good robustness. Moreover, a novel high-frequency component modification mechanism is utilized in the embedding scheme to further improve robustness. At the decoding end, based on the available secret key, the watermarked pixel groups are identified and the watermarks are extracted from them. The effectiveness of the proposed image-watermarking method is demonstrated by simulation examples.
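A hedged sketch of the group-selection stage only (function and parameter names are ours; the paper's embedding rule, shape-related index, and safe-band construction are more involved):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def select_embedding_groups(host, key, n_levels=64, n_groups=8):
        """Low-pass filter the host image, use the secret key to pick a
        random subset of gray levels, histogram the filtered image over
        those levels, and keep the most-populated levels, which is where
        the watermark bits would subsequently be embedded. Illustrative
        only; not the paper's exact scheme."""
        smoothed = np.rint(gaussian_filter(host.astype(float), sigma=1.0))
        rng = np.random.default_rng(key)          # key-seeded selection
        levels = rng.choice(256, size=n_levels, replace=False)
        counts = np.array([(smoothed == g).sum() for g in levels])
        chosen = levels[np.argsort(counts)[::-1][:n_groups]]
        return np.sort(chosen)

The same key reproduces the same gray-level selection at the decoder, which is what allows the watermarked pixel groups to be re-identified for extraction.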

Abstract:

OBJECTIVE: To assess the efficacy, with respect to participant understanding of information, of a computer-based approach to communication about complex, technical issues that commonly arise when seeking informed consent for clinical research trials. DESIGN, SETTING AND PARTICIPANTS: An open, randomised controlled study of 60 patients with diabetes mellitus, aged 27-70 years, recruited between August 2006 and October 2007 from the Department of Diabetes and Endocrinology at the Alfred Hospital and Baker IDI Heart and Diabetes Institute, Melbourne. INTERVENTION: Participants were asked to read information about a mock study via a computer-based presentation (n = 30) or a conventional paper-based information statement (n = 30). The computer-based presentation contained visual aids, including diagrams, video, hyperlinks and quiz pages. MAIN OUTCOME MEASURES: Understanding of information as assessed by quantitative and qualitative means. RESULTS: Assessment scores used to measure level of understanding were significantly higher in the group that completed the computer-based task than the group that completed the paper-based task (82% v 73%; P = 0.005). More participants in the group that completed the computer-based task expressed interest in taking part in the mock study (23 v 17 participants; P = 0.01). Most participants from both groups preferred the idea of a computer-based presentation to the paper-based statement (21 in the computer-based task group, 18 in the paper-based task group). CONCLUSIONS: A computer-based method of providing information may help overcome existing deficiencies in communication about clinical research, and may reduce costs and improve efficiency in recruiting participants for clinical trials.

Abstract:

In this paper, we address the problems of fully automatic localization and segmentation of 3D vertebral bodies from CT/MR images. We propose a learning-based, unified random forest regression and classification framework to tackle these two problems. More specifically, in the first stage, the localization of 3D vertebral bodies is solved with random forest regression, where we aggregate the votes from a set of randomly sampled image patches to get a probability map of the center of a target vertebral body in a given image. The resultant probability map is then further regularized by a hidden Markov model (HMM) to eliminate potential ambiguity caused by neighboring vertebral bodies. The output of the first stage allows us to define a region of interest (ROI) for the segmentation step, where we use random forest classification to estimate the likelihood of a voxel in the ROI being foreground or background. The estimated likelihood is combined with a prior probability, learned from a set of training data, to get the posterior probability of the voxel. The segmentation of the target vertebral body is then obtained by binary thresholding of the estimated probability. We evaluated the present approach on two openly available datasets: 1) 3D T2-weighted spine MR images from 23 patients and 2) 3D spine CT images from 10 patients. Taking manual segmentation as the ground truth (each MR image contains at least 7 vertebral bodies from T11 to L5, and each CT image contains 5 vertebral bodies from L1 to L5), we evaluated the present approach with leave-one-out experiments. Specifically, for the T2-weighted MR images, we achieved a mean localization error of 1.6 mm, a mean Dice metric of 88.7%, and a mean surface distance of 1.5 mm; for the CT images, we achieved a mean localization error of 1.9 mm, a mean Dice metric of 91.0%, and a mean surface distance of 0.9 mm.
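A simplified 2-D sketch of the first-stage vote aggregation (variable names are ours; the 3-D patch features and the HMM regularization are omitted):

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.ensemble import RandomForestRegressor

    def vote_center_map(forest, patch_features, patch_centers, shape, sigma=2.0):
        """Each sampled patch predicts a displacement to the vertebral-body
        center; the predictions are accumulated into a vote image and
        smoothed into a (pseudo-)probability map of the center location."""
        votes = np.zeros(shape)
        for (r, c), (dr, dc) in zip(patch_centers, forest.predict(patch_features)):
            rr, cc = int(round(r + dr)), int(round(c + dc))
            if 0 <= rr < shape[0] and 0 <= cc < shape[1]:
                votes[rr, cc] += 1.0
        prob = gaussian_filter(votes, sigma)
        return prob / max(prob.sum(), 1e-12)

    # Assumed usage: the forest is trained on (patch feature, offset) pairs.
    # forest = RandomForestRegressor(n_estimators=50).fit(train_feats, train_offsets)
    # prob_map = vote_center_map(forest, feats, centers, image.shape)

The second stage is a standard random forest classification of ROI voxels followed by thresholding of the posterior, so it is not repeated here.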

Abstract:

The paper presents a methodology to model three-dimensional reinforced concrete members by means of embedded discontinuity elements based on the Continuum Strong Discontinuity Approach (CSDA). Mixture theory concepts are used to model reinforced concrete as a 3D composite material: concrete with embedded long-fiber (rebar) bundles oriented in different directions. The effects of the rebars are modeled by phenomenological constitutive models devised to reproduce the axial nonlinear behavior, as well as the bond-slip and dowel action. The paper presents the constitutive models assumed for the components and the compatibility conditions chosen to constitute the composite. Numerical analyses of reinforced concrete members from existing experiments are presented, illustrating the applicability of the proposed methodology.
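In generic mixture-theory notation (ours; the paper's exact constitutive split may differ), the composite stress is the volume-fraction-weighted sum of the component responses:

    \[
    \boldsymbol{\sigma} = \Bigl(1 - \sum_{i=1}^{n} k_i\Bigr)\,
    \boldsymbol{\sigma}_c(\boldsymbol{\varepsilon})
    + \sum_{i=1}^{n} k_i\,\boldsymbol{\sigma}_r^{(i)}(\boldsymbol{\varepsilon})
    \]

where k_i is the volume fraction of the i-th rebar bundle, \sigma_c the concrete (matrix) stress, and \sigma_r^{(i)} collects the axial, bond-slip, and dowel contributions of bundle i; the compatibility condition shown here is the simplest one, all components sharing the composite strain \varepsilon.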

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Abstract:

A neural method is presented in this paper to identify the harmonic components of an ac controller. The components are identified by analyzing the single-phase current waveform. The method's effectiveness is verified by applying it to an active power filter (APF) model dedicated to selective harmonic compensation. Simulation results using theoretical and experimental data are presented to validate the proposed approach. © 2008 IEEE.
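The abstract does not spell out the network here; a common neural identifier for this task is an Adaline whose inputs are sine/cosine pairs at each harmonic, trained by LMS so its weights converge to the harmonic amplitudes of the current. The sketch below assumes exactly that:

    import numpy as np

    def adaline_harmonics(i_t, t, f0=60.0, n_harm=7, mu=0.05):
        """Adaline/LMS harmonic tracker (an assumption, not necessarily
        the paper's network): inputs are sin/cos at each harmonic of the
        line frequency f0; after convergence the weights are the Fourier
        coefficients of the single-phase current i(t)."""
        k = np.arange(1, n_harm + 1)
        w = np.zeros(2 * n_harm)
        for ti, ii in zip(t, i_t):
            x = np.concatenate([np.sin(2*np.pi*f0*k*ti),
                                np.cos(2*np.pi*f0*k*ti)])
            e = ii - w @ x                    # instantaneous estimation error
            w += mu * e * x                   # LMS weight update
        return np.hypot(w[:n_harm], w[n_harm:])  # harmonic amplitudes

The selective compensation step would then synthesize a reference current from only the harmonics chosen for the APF to cancel.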

Abstract:

Lipid peroxidation (LPO) has been associated with periodontal disease, and the evaluation of malondialdehyde (MDA) in the gingival crevicular fluid (GCF), an inflammatory exudate from the surrounding tissue of the periodontium, may be useful to clarify the role of LPO in the pathogenesis of periodontal disease. We describe the validation of a method to measure MDA in the GCF using high-performance liquid chromatography. MDA calibration curves were prepared with phosphate-buffered solution spiked with increasing known concentrations of MDA. Healthy and diseased GCF samples were collected from the same patient to avoid interindividual variability. MDA response was linear in the range measured, and excellent agreement was observed between added and detected concentrations of MDA. Samples' intra- and interday coefficients of variation were below 6.3% and 12.4%, respectively. The limit of quantitation (signal/noise = 5) was 0.03 µM. When the validated method was applied to the GCF, excellent agreement was observed in the MDA quantitation from healthy and diseased sites, and diseased sites presented more MDA than healthy sites (P < 0.05). In this study, a validated method for MDA quantitation in GCF was established with satisfactory sensitivity, precision, and accuracy. © 2012 Elsevier Inc. All rights reserved.
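The validation arithmetic is straightforward; a sketch with hypothetical numbers (the real curve used the authors' measured peak areas):

    import numpy as np

    # Hypothetical spiked-PBS calibration data (concentration in uM vs. peak
    # area), standing in for the authors' measurements.
    conc = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0])
    area = np.array([1.1, 2.1, 5.2, 10.3, 20.5, 41.2])

    slope, intercept = np.polyfit(conc, area, 1)   # linear calibration curve
    r = np.corrcoef(conc, area)[0, 1]
    print(f"area = {slope:.2f}*conc + {intercept:.2f}, r^2 = {r**2:.4f}")

    # Precision: coefficient of variation of replicate injections of a sample.
    replicates = np.array([5.1, 5.3, 4.9, 5.2, 5.0])
    cv = replicates.std(ddof=1) / replicates.mean() * 100
    print(f"intraday CV = {cv:.1f}%")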

Abstract:

Arrhythmia is a class of cardiovascular disease that accounts for a large number of deaths and can pose a danger that is difficult to treat. It is a life-threatening condition originating from disorganized propagation of electrical signals in the heart, resulting in desynchronization among its chambers. Fundamentally, synchronization means that the phase relationship of electrical activities between the chambers remains coherent, maintaining a constant phase difference over time. If desynchronization occurs due to arrhythmia, the coherent phase relationship breaks down, resulting in a chaotic rhythm that affects the regular pumping mechanism of the heart. This phenomenon was explored using phase space reconstruction, a standard technique for analysing time series generated by nonlinear dynamical systems. In this project, a novel index is presented for predicting the onset of ventricular arrhythmias. Continuously captured long-term ECG recordings were analysed up to the onset of arrhythmia by the phase space reconstruction method, yielding 2-dimensional images that were then analysed by the box-counting method. The method was tested on ECG data of three kinds, normal (NR), ventricular tachycardia (VT), and ventricular fibrillation (VF), extracted from the PhysioNet ECG database. Statistical measures, the mean (μ), standard deviation (σ), and coefficient of variation (σ/μ), of the box counts in the phase space diagrams are derived for a sliding window of 10 beats of the ECG signal. From these statistical analyses, a threshold was derived as an upper bound on the coefficient of variation (CV) of the box counts of ECG phase portraits, capable of reliably predicting the impending arrhythmia well before its actual occurrence. As future work, it is planned to validate this prediction tool on a wider population of patients affected by different kinds of arrhythmia, such as atrial fibrillation and bundle branch block, and to set different thresholds for them, in order to confirm its clinical applicability.
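A compact sketch of the analysis pipeline (the delay, grid size, and fixed-length windowing are our simplifications; the study windows by 10 beats):

    import numpy as np

    def phase_portrait(x, delay):
        """Time-delay embedding of a 1-D signal into a 2-D phase portrait."""
        return np.column_stack([x[:-delay], x[delay:]])

    def box_count(points, n_boxes=64):
        """Count occupied cells when the portrait is covered by an
        n_boxes x n_boxes grid (the box-counting measure per window)."""
        mins, maxs = points.min(axis=0), points.max(axis=0)
        span = np.where(maxs > mins, maxs - mins, 1.0)
        idx = np.floor((points - mins) / span * (n_boxes - 1)).astype(int)
        return len(set(map(tuple, idx)))

    def cv_over_windows(ecg, win_len, delay=8):
        """Coefficient of variation sigma/mu of the box counts over
        windows; beat segmentation is simplified to fixed-length chunks."""
        chunks = np.array_split(ecg, max(len(ecg) // win_len, 1))
        counts = np.asarray([box_count(phase_portrait(w, delay))
                             for w in chunks], dtype=float)
        return counts.std(ddof=1) / counts.mean()

An alarm would then be raised whenever the windowed CV crosses the derived threshold, i.e., when the box counts start fluctuating abnormally ahead of the arrhythmia.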

Abstract:

Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data is unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. It is generally accepted that hydrologic similarity results from similar physiographic characteristics, and thus these characteristics can be used to delineate regions and classify ungauged sites. However, as currently practiced, the delineation is highly subjective and dependent on the similarity measures and classification techniques employed. A standardized procedure for delineation of hydrologically homogeneous regions is presented herein. Key aspects are a new statistical metric to identify physically discordant sites, and the identification of an appropriate set of physically based measures of extreme hydrological similarity. A combination of multivariate statistical techniques applied to multiple flood statistics and basin characteristics for gauging stations in the Southeastern U.S. revealed that basin slope, elevation, and soil drainage largely determine the extreme hydrological behavior of a watershed. Use of these characteristics as similarity measures in the standardized approach for region delineation yields regions that are more homogeneous and more efficient for quantile estimation at ungauged sites than those delineated using alternative physically based procedures typically employed in practice. The proposed methods and key physical characteristics are also shown to be efficient for region delineation and quantile development in alternative areas composed of watersheds with statistically different physical composition. In addition, the use of aggregated values of key watershed characteristics was found to be sufficient for the regionalization of flood data; the added time and computational effort required to derive spatially distributed watershed variables does not increase the accuracy of quantile estimators for ungauged sites.

This dissertation also presents a methodology by which flood quantile estimates in Haiti can be derived using relationships developed for data-rich regions of the U.S. As currently practiced, regional flood frequency techniques can only be applied within the predefined area used for model development. However, results presented herein demonstrate that the regional flood distribution can successfully be extrapolated to areas of similar physical composition located beyond the extent of that used for model development, provided differences in precipitation are accounted for and the site in question can be appropriately classified within a delineated region.
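The dissertation's discordancy metric is its own contribution; for orientation, the classical Hosking-Wallis discordancy statistic, applied here to standardized basin characteristics rather than the usual L-moment ratios, has the following form:

    import numpy as np

    def discordancy(U):
        """Classical Hosking-Wallis discordancy statistic D_i, used here
        as a textbook analogue of the dissertation's new metric:
            D_i = (N/3) * (u_i - ubar)^T S^{-1} (u_i - ubar),
        with S the sum of outer products of the deviations. Rows of U are
        sites; columns are standardized basin characteristics."""
        U = np.asarray(U, dtype=float)
        n = U.shape[0]
        d = U - U.mean(axis=0)
        S_inv = np.linalg.pinv(d.T @ d)
        return np.array([n / 3.0 * di @ S_inv @ di for di in d])

    # Sites with D_i above roughly 3 (for large N) are flagged as discordant
    # and re-examined before the region is accepted as homogeneous.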

Abstract:

BACKGROUND AND OBJECTIVES: Nerve blocks using local anesthetics are widely used. High volumes are usually injected, which may predispose patients to associated adverse events. The introduction of ultrasound guidance facilitates the reduction of volume, but the minimal effective volume is unknown. In this study, we estimated the 50% effective dose (ED50) and 95% effective dose (ED95) volumes of 1% mepivacaine relative to the cross-sectional area of the nerve for an adequate sensory block. METHODS: To reduce the number of healthy volunteers, we used a volume-reduction protocol based on the up-and-down procedure according to the Dixon average method. The ulnar nerve was scanned at the proximal forearm, and its cross-sectional area was measured by ultrasound. In the first volunteer, a volume of 0.4 mL/mm² of nerve cross-sectional area was injected under ultrasound guidance in close proximity to and around the nerve using a multiple-injection technique. The volume in the next volunteer was reduced by 0.04 mL/mm² in the case of complete blockade and augmented by the same amount in the case of incomplete sensory blockade within 20 min. After 3 up-and-down cycles, ED50 and ED95 were estimated. Volunteers and the physicians performing the block were blinded to the volume used. RESULTS: A total of 17 volunteers were investigated. The ED50 volume was 0.08 mL/mm² (SD, 0.01 mL/mm²), and the ED95 volume was 0.11 mL/mm² (SD, 0.03 mL/mm²). The mean cross-sectional area of the nerves was 6.2 mm² (SD, 1.0 mm²). CONCLUSIONS: Based on the ultrasound-measured cross-sectional area and using ultrasound guidance, a mean volume of 0.7 mL represents the ED95 dose of 1% mepivacaine to block the ulnar nerve at the proximal forearm.
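A sketch of the Dixon up-and-down bookkeeping with a hypothetical outcome sequence (volumes in mL/mm² of cross-sectional area, step 0.04 as in the protocol; the outcomes below are invented for illustration):

    import numpy as np

    def up_down_ed50(responses, start=0.40, step=0.04):
        """Dixon up-and-down bookkeeping: decrease the dose after a
        complete block (True), increase it after a failure (False), then
        average the midpoints of the reversal pairs to estimate ED50."""
        doses = [start]
        for ok in responses[:-1]:
            doses.append(doses[-1] - step if ok else doses[-1] + step)
        # A reversal is a success followed by a failure, or vice versa.
        mids = [(doses[i] + doses[i + 1]) / 2
                for i in range(len(responses) - 1)
                if responses[i] != responses[i + 1]]
        return float(np.mean(mids))

    # Hypothetical sequence of block outcomes for successive volunteers:
    print(up_down_ed50([True, True, False, True, False, False, True]))

The abstract's closing arithmetic follows the same logic: the ED95 of 0.11 mL/mm² multiplied by the mean cross-sectional area of 6.2 mm² gives roughly the quoted 0.7 mL.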