46 results for Distance-based techniques
Abstract:
This paper analyzes the complexity-performance trade-off of several heuristic near-optimum multiuser detection (MuD) approaches applied to the uplink of synchronous single/multiple-input multiple-output multicarrier code division multiple access (S/MIMO MC-CDMA) systems. Genetic algorithm (GA), short term tabu search (STTS) and reactive tabu search (RTS), simulated annealing (SA), particle swarm optimization (PSO), and 1-opt local search (1-LS) heuristic multiuser detection algorithms (Heur-MuDs) are analyzed in detail, using a single-objective antenna-diversity-aided optimization approach. Monte Carlo simulations show that, after convergence, the performances reached by all near-optimum Heur-MuDs are similar. However, the computational complexities may differ substantially, depending on the system operation conditions. Their complexities are carefully analyzed in order to obtain a general complexity-performance comparison framework and to show that unitary Hamming distance search MuD (uH-ds) approaches (1-LS, SA, RTS and STTS) reach the best convergence rates, and among them, the 1-LS-MuD provides the best trade-off between implementation complexity and bit error rate (BER) performance.
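As a rough illustration of the unitary-Hamming-distance search behind the best-ranked detector, the sketch below implements a generic 1-opt local search for BPSK multiuser detection: starting from matched-filter hard decisions, it repeatedly takes the best single-symbol flip that lowers a least-squares decision metric. The channel matrix H, noise level, and metric are illustrative assumptions, not the paper's exact S/MIMO MC-CDMA formulation.

```python
import numpy as np

def one_opt_mud(H, y, b0):
    """1-opt local search: accept the best single-bit flip (a unitary
    Hamming distance move) that lowers ||y - H b||^2, until no flip helps."""
    b = b0.copy()
    cost = np.sum((y - H @ b) ** 2)
    while True:
        best_k, best_cost = -1, cost
        for k in range(len(b)):
            b[k] = -b[k]                      # try flipping one antipodal bit
            c = np.sum((y - H @ b) ** 2)
            if c < best_cost:
                best_k, best_cost = k, c
            b[k] = -b[k]                      # undo the trial flip
        if best_k < 0:
            return b                          # local optimum reached
        b[best_k] = -b[best_k]
        cost = best_cost

# toy usage: K users, random channel, BPSK symbols, noisy observation
rng = np.random.default_rng(0)
K = 8
H = rng.standard_normal((K, K))
b_true = rng.choice([-1.0, 1.0], size=K)
y = H @ b_true + 0.1 * rng.standard_normal(K)
b0 = np.sign(H.T @ y)                         # matched-filter initial guess
b0[b0 == 0] = 1.0
print("bit errors:", int(np.sum(one_opt_mud(H, y, b0) != b_true)))
```

The per-iteration cost is one metric evaluation per user, which is what gives the uH-ds family its favorable complexity compared with population-based searches such as GA and PSO.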
Abstract:
Common sense tells us that the future is an essential element in any strategy. In addition, there is a good deal of literature on scenario planning, an important tool for considering the future in strategy making. However, in many organizations there is serious resistance to the development of scenarios, and they are not broadly implemented by companies. Yet even organizations that do not rely heavily on the development of scenarios do, in fact, construct visions to guide their strategies. What happens, though, when this vision is not consistent with the future? To address this problem, the present article proposes a method for checking the content and consistency of an organization's vision of the future, no matter how it was conceived. The proposed method is grounded on theoretical concepts from the field of future studies, which are described in this article. This study was motivated by the search for new ways of improving and using scenario techniques as a method for making strategic decisions. The method was then tested on a company in the field of information technology in order to check its operational feasibility. The test showed that the proposed method is, in fact, operationally feasible and was capable of analyzing the vision of the company being studied, indicating both its shortcomings and points of inconsistency.
Abstract:
Recent advances in the control of molecular engineering architectures have allowed unprecedented ability of molecular recognition in biosensing, with a promising impact for clinical diagnosis and environment control. The availability of large amounts of data from electrical, optical, or electrochemical measurements requires, however, sophisticated data treatment in order to optimize sensing performance. In this study, we show how an information visualization system based on projections, referred to as Projection Explorer (PEx), can be used to achieve high performance for biosensors made with nanostructured films containing immobilized antigens. As a proof of concept, various visualizations were obtained with impedance spectroscopy data from an array of sensors whose electrical response could be specific toward a given antibody (analyte) owing to molecular recognition processes. In addition to discussing the distinct methods for projection and normalization of the data, we demonstrate that an excellent distinction can be made between real samples tested positive for Chagas disease and Leishmaniasis, which could not be achieved with conventional statistical methods. Such high performance probably arose from the possibility of treating the data in the whole frequency range. Through a systematic analysis, it was inferred that Sammon's mapping with standardization to normalize the data gives the best results, where distinction could be made of blood serum samples containing 10⁻⁷ mg/mL of the antibody. The method inherent in PEx and the procedures for analyzing the impedance data are entirely generic and can be extended to optimize any type of sensor or biosensor.
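Sammon's mapping itself is compact enough to sketch. The version below standardizes the feature matrix (the normalization the study found best) and minimizes the Sammon stress with a general-purpose optimizer; the impedance matrix X is a hypothetical stand-in for the sensor-array spectra.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

def sammon(X, n_components=2, seed=0):
    """Project standardized rows of X to low dimension by minimizing
    Sammon stress: sum over pairs of (d* - d)^2 / d*, normalized."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardization (z-score)
    d_star = pdist(X)                             # input-space distances
    d_star[d_star == 0] = 1e-12                   # guard identical rows
    scale = 1.0 / d_star.sum()

    def stress(y_flat):
        d = pdist(y_flat.reshape(-1, n_components))
        return scale * np.sum((d_star - d) ** 2 / d_star)

    rng = np.random.default_rng(seed)
    y0 = rng.standard_normal(X.shape[0] * n_components) * 1e-2
    res = minimize(stress, y0, method="L-BFGS-B")
    return res.x.reshape(-1, n_components)

# toy usage: 20 hypothetical spectra with 50 frequency points each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (10, 50)), rng.normal(3, 1, (10, 50))])
Y = sammon(X)
print(Y.shape)   # (20, 2): one projected point per measured sample
```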
Abstract:
The commercially available Jacobsen catalyst, Mn(salen), was occluded in hybrid polymeric membranes based on poly(dimethylsiloxane) (PDMS) and poly(vinyl alcohol) (PVA). The obtained systems were characterized by UV-vis spectroscopy and SEM techniques. The membranes were used as a catalytic barrier between two different phases: an organic substrate phase (cyclooctene or styrene) in the absence of solvent, and an aqueous solution of either t-BuOOH or H₂O₂. Membranes containing different percentages of PVA were prepared, in order to modulate their hydrophilic/hydrophobic swelling properties. The occluded complex proved to be an efficient catalyst for the oxidation of alkenes. The new triphasic system containing a cheap and easily available catalyst allowed substrate oxidation and easy product separation using "green" oxidants.
Abstract:
A long-standing challenge of content-based image retrieval (CBIR) systems is the definition of a suitable distance function to measure the similarity between images in an application context which complies with the human perception of similarity. In this paper, we present a new family of distance functions, called attribute concurrence influence distances (AID), which serve to retrieve images by similarity. These distances address an important aspect of the psychophysical notion of similarity in comparisons of images: the effect of concurrent variations in the values of different image attributes. The AID functions allow for comparisons of feature vectors by choosing one of two parameterized expressions: one targeting weak attribute concurrence influence and the other strong concurrence influence. This paper presents the mathematical definition and implementation of the AID family for a two-dimensional feature space and its extension to any dimension. The composition of the AID family with the L_p distance family is considered in order to propose a procedure for determining the best distance for a specific application. Experimental results involving several sets of medical images demonstrate that, taking as reference the perception of the specialist in the field (radiologist), the AID functions perform better than the general distance functions commonly used in CBIR.
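The abstract does not reproduce the AID expressions, so the sketch below shows only the surrounding retrieval framework: images are ranked by a plug-in distance over feature vectors, here the L_p (Minkowski) family with which AID is composed. An AID function would simply replace lp_distance.

```python
import numpy as np

def lp_distance(u, v, p=2.0):
    """Minkowski L_p distance between two feature vectors."""
    return np.sum(np.abs(u - v) ** p) ** (1.0 / p)

def retrieve(query, database, k=5, dist=lp_distance, **kw):
    """Return indices of the k database vectors nearest to the query
    under the supplied distance; an AID function would plug in here."""
    d = np.array([dist(query, x, **kw) for x in database])
    return np.argsort(d)[:k]

# toy usage with hypothetical 32-dimensional image feature vectors
rng = np.random.default_rng(2)
db = rng.random((100, 32))
q = rng.random(32)
print(retrieve(q, db, k=5, p=1.0))   # L1 (city-block) ranking
```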
Abstract:
Some patients are no longer able to communicate effectively or even interact with the outside world in ways that most of us take for granted. In the most severe cases, tetraplegic or post-stroke patients are literally 'locked in' their bodies, unable to exert any motor control after, for example, a spinal cord injury or a brainstem stroke, requiring alternative methods of communication and control. But we suggest that, in the near future, their brains may offer them a way out. Non-invasive electroencephalogram (EEG)-based brain-computer interfaces (BCI) can be characterized by the technique used to measure brain activity and by the way that different brain signals are translated into commands that control an effector (e.g., controlling a computer cursor for word processing and accessing the internet). This review focuses on the basic concepts of EEG-based BCI, the main advances in communication, motor control restoration and the down-regulation of cortical activity, and the mirror neuron system (MNS) in the context of BCI. The latter appears to be relevant for clinical applications in the coming years, particularly for severely limited patients. Hypothetically, MNS could provide a robust way to map neural activity to behavior, representing the high-level information about goals and intentions of these patients. Non-invasive EEG-based BCIs allow brain-derived communication in patients with amyotrophic lateral sclerosis and motor control restoration in patients after spinal cord injury and stroke. Epilepsy and attention deficit hyperactivity disorder patients were able to down-regulate their cortical activity. Given the rapid progression of EEG-based BCI research over the last few years and the swift ascent of computer processing speeds and signal analysis techniques, we suggest that emerging ideas (e.g., MNS in the context of BCI) related to clinical neuro-rehabilitation of severely limited patients will generate viable clinical applications in the near future.
Abstract:
There is no specific test to diagnose Alzheimer's disease (AD). Its diagnosis should be based upon clinical history, neuropsychological and laboratory tests, neuroimaging and electroencephalography (EEG). Therefore, new approaches are necessary to enable earlier and more accurate diagnosis and to follow treatment results. In this study we used a Machine Learning (ML) technique, named Support Vector Machine (SVM), to search for patterns in EEG epochs to differentiate AD patients from controls. As a result, we developed a quantitative EEG (qEEG) processing method for automatic differentiation of patients with AD from normal individuals, as a complement to the diagnosis of probable dementia. We studied EEGs from 19 normal subjects (14 females/5 males, mean age 71.6 years) and 16 patients with probable mild to moderate AD (14 females/2 males, mean age 73.4 years). The results obtained from analysis of EEG epochs were an accuracy of 79.9% and a sensitivity of 83.2%. The analysis considering the diagnosis of each individual patient reached 87.0% accuracy and 91.7% sensitivity.
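The classification step is standard enough to sketch. Below, a hypothetical epoch-feature matrix (the abstract does not specify the features; spectral features per electrode are a common choice) is classified with an RBF-kernel SVM under cross-validation, reporting the two figures of merit the study uses.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score

# hypothetical feature matrix: one row per EEG epoch (e.g., band powers
# per electrode); labels 1 = AD patient epoch, 0 = control epoch
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (200, 40)), rng.normal(0.6, 1, (200, 40))])
y = np.array([0] * 200 + [1] * 200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
pred = cross_val_predict(clf, X, y, cv=5)       # out-of-fold predictions
print("accuracy:    %.1f%%" % (100 * accuracy_score(y, pred)))
print("sensitivity: %.1f%%" % (100 * recall_score(y, pred)))  # true-positive rate
```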
Abstract:
Background: There are multitudes of procedures in plastic surgery used to correct hypertrophic and pendulous breasts in patients with heavy and ptotic breasts who need great resections of breast tissue, where the suprasternal notch-to-nipple distance is long and the use of nipple-areola transposition techniques is a challenge for the plastic surgeon. The purpose of this study is to present a technique of reduction mammaplasty that could solve these problems based on the following principles: mammary reduction utilizing a thin superior medial pedicle (0.8-1.5 cm thick) and the resection performed in two steps: (1) the base excess at a plane perpendicular to the breast (this determines the cone's height) and (2) central half keel (this determines the breast diameter reduction). Methods: Ninety patients with mammary hypertrophy were operated on at the "Hospital das Clinicas," Sao Paulo University Medical School, between January 2000 and November 2005. Inclusion in this study required a minimum of 12-cm change in nipple position and a 750-g breast resection. Results: The mean change in nipple position was 16 cm (range = 12-21 cm). The mean weight of each breast was 1400 g (range = 750-3000 g). Considering the great amount of volume removed and the size of the operated breasts, few complications were observed, and they were similar to those reported following other techniques described in the literature. Patient satisfaction following this procedure was high. Conclusion: The results of this study clearly demonstrate that thin superior medial pedicle reduction mammaplasty is a safe and reliable technique in cases of severe mammary hypertrophy.
Abstract:
Background/Aims: Cytokines have a significant role in the response to injury following liver transplantation, but the origin and course of such molecules are not completely known. The aim of this study was to evaluate the production and liver metabolism of the inflammatory cytokines interleukin (IL)-1 beta, IL-6, IL-8, interferon (IFN)-gamma and tumor necrosis factor (TNF)-alpha in orthotopic liver transplantation (OLT), comparing the conventional and the piggyback methods. Methodology: We performed a study of 30 patients who underwent elective OLT and were randomized to the conventional or piggyback technique at the beginning of the operation. The amount of cytokines and their hepatic metabolism were calculated based on plasma concentrations and vascular blood flow at 2, 5, 10, 15, 30, 60, 90, and 120 minutes after revascularization. Results: The amount of IL-1 beta in portal blood was higher in patients who underwent surgery using the conventional technique (estimate of interest = 63,783.9 ± 16,586.1 pg/min, versus 11,979.6 ± 16,585.7 pg/min in the piggyback group, p=0.035). There were no significant differences between the two operative methods for IL-6, IL-8, IFN-gamma and TNF-alpha production. The hepatic metabolism of cytokines was not different between groups. Although all the curves showed higher amounts of cytokines with the conventional technique, the differences were not statistically significant. Conclusion: The study shows the similarity between the two techniques concerning the stimuli for the production of inflammatory molecules.
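The per-minute amounts follow from multiplying plasma concentration by vascular blood flow; a minimal sketch with made-up numbers, just to fix the units:

```python
# cytokine amount delivered per minute = concentration x blood flow
# (units: pg/mL x mL/min = pg/min); the numbers below are illustrative only
concentration_pg_per_ml = 50.0      # hypothetical portal plasma IL-1 beta
flow_ml_per_min = 1200.0            # hypothetical portal blood flow
amount_pg_per_min = concentration_pg_per_ml * flow_ml_per_min
print(amount_pg_per_min)            # 60000.0 pg/min
```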
Abstract:
In this study, we evaluated the biodistribution and the elimination kinetics of a biocompatible magnetic fluid, Endorem™, based on dextran-coated Fe₃O₄ nanoparticles endovenously injected into Wistar rats. The iron content in blood and liver samples was recorded using electron paramagnetic resonance (EPR) and X-ray fluorescence (XRF) techniques. The EPR line intensity at g=2.1 was found to be proportional to the concentration of magnetic nanoparticles, and the best temperature for spectra acquisition was 298 K. Both EPR and XRF analyses indicated that the maximum concentration of iron in the liver occurred 95 min after the ferrofluid administration. The half-life of the magnetic nanoparticles (MNP) in the blood was (11.6 ± 0.6) min as measured by EPR and (12.6 ± 0.6) min as determined by XRF. These results indicate that both EPR and XRF are very useful and appropriate techniques for the study of the kinetics of ferrofluid elimination and biodistribution after its administration into the organism.
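Half-lives like these are what a mono-exponential elimination fit yields; a minimal sketch, assuming synthetic EPR intensities in place of the real time series:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a0, lam):
    """Single-compartment elimination: A(t) = A0 * exp(-lambda * t)."""
    return a0 * np.exp(-lam * t)

# hypothetical EPR line intensities (a.u.) sampled over time (min)
t = np.array([2, 5, 10, 15, 20, 30, 45, 60], dtype=float)
rng = np.random.default_rng(4)
signal = mono_exp(t, 100.0, np.log(2) / 11.6) * rng.normal(1, 0.03, t.size)

(p_a0, p_lam), _ = curve_fit(mono_exp, t, signal, p0=(signal[0], 0.05))
print("half-life: %.1f min" % (np.log(2) / p_lam))   # ~11.6 min expected
```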
Abstract:
Biocompatible superparamagnetic iron oxide nanoparticles of magnetite coated with dextran were magnetically characterized using the techniques of SQUID (superconducting quantum interference device) magnetometry and ferromagnetic resonance (FMR). The SQUID magnetometry characterization was performed by isothermal measurements under applied magnetic field using the methods of zero-field cooling (ZFC) and field cooling (FC). The magnetic behavior of the nanoparticles indicated their superparamagnetic nature, and it was assumed that they consisted exclusively of monodomains. The transition to a blocked state was observed at the temperature T_B = (43 ± 1) K for the frozen ferrofluid and at (52 ± 1) K for the lyophilized ferrofluid samples. The FMR analysis showed that the derivative peak-to-peak linewidth (ΔH_pp), gyromagnetic factor (g), number of spins (N_S), and spin-spin relaxation time (T_2) were strongly dependent on both temperature and super-exchange interaction. This information is important for possible nanotechnological applications, mainly those which are strongly dependent on the magnetic parameters.
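For reference, T_B in ZFC/FC measurements is commonly taken as the temperature of the ZFC magnetization maximum; a minimal sketch with an illustrative curve (not the paper's data):

```python
import numpy as np

def blocking_temperature(T, m_zfc):
    """Estimate T_B as the temperature of the ZFC magnetization maximum,
    a usual operational definition for superparamagnetic nanoparticles."""
    return T[np.argmax(m_zfc)]

# hypothetical ZFC curve peaking near 43 K (illustrative shape only)
T = np.linspace(5, 150, 200)
m_zfc = np.exp(-((T - 43.0) ** 2) / (2 * 15.0 ** 2))
print("T_B = %.1f K" % blocking_temperature(T, m_zfc))
```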
Abstract:
This study presents the results of Raman spectroscopy applied to the classification of arterial tissue, based on a simplified model using basal morphological and biochemical information extracted from the Raman spectra of arteries. The Raman system uses an 830-nm diode laser, an imaging spectrograph, and a CCD camera. A total of 111 Raman spectra from arterial fragments were used to develop the model, and those spectra were compared to the spectra of collagen, fat cells, smooth muscle cells, calcification, and cholesterol in a linear fit model. Non-atherosclerotic (NA), fatty and fibrous-fatty atherosclerotic plaque (A) and calcified (C) arteries exhibited different spectral signatures related to the different morphological structures present in each tissue type. Discriminant analysis based on Mahalanobis distance was employed to classify the tissue type with respect to the relative intensity of each compound. This model was subsequently tested prospectively on a set of 55 spectra. The simplified diagnostic model showed that cholesterol, collagen, and adipocytes were the tissue constituents that gave the best classification capability and that those changes were correlated with histopathology. The simplified model, using spectra obtained from a few tissue morphological and biochemical constituents, showed its feasibility by using a small number of variables, easily extracted from gross samples.
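The discrimination step lends itself to a short sketch: class means plus a pooled covariance define Mahalanobis distances, and a spectrum, here reduced to hypothetical relative-intensity features standing in for the linear-fit coefficients, is assigned to the nearest class.

```python
import numpy as np

def fit_mahalanobis(X, y):
    """Class means plus pooled covariance for Mahalanobis discrimination."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    pooled = sum(np.cov(X[y == c], rowvar=False) * (np.sum(y == c) - 1)
                 for c in classes) / (len(X) - len(classes))
    return means, np.linalg.inv(pooled)

def classify(x, means, inv_cov):
    """Assign x to the class with the smallest Mahalanobis distance."""
    def d2(mu):
        diff = x - mu
        return diff @ inv_cov @ diff
    return min(means, key=lambda c: d2(means[c]))

# toy usage: three tissue classes in a 3-D "relative intensity" space
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 0.3, (30, 3)) for m in (0.0, 1.5, 3.0)])
y = np.repeat(["NA", "A", "C"], 30)
means, inv_cov = fit_mahalanobis(X, y)
print(classify(rng.normal(1.5, 0.3, 3), means, inv_cov))   # likely "A"
```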
Abstract:
Purpose: Several attempts to determine the transit time of a high dose rate (HDR) brachytherapy unit have been reported in the literature, with controversial results. The determination of the source speed is necessary to accurately calculate the transient dose in brachytherapy treatments. In these studies, only the average speed of the source was measured as a parameter for transit dose calculation, which does not account for the realistic movement of the source and is therefore inaccurate for numerical simulations. The purpose of this work is to report the implementation and technical design of an optical-fiber-based detector to directly measure the instantaneous speed profile of a ¹⁹²Ir source in a Nucletron HDR brachytherapy unit. Methods: To accomplish this task, we have developed a setup that uses the Cerenkov light induced in optical fibers as a detection signal for the radiation source moving inside the HDR catheter. As the ¹⁹²Ir source travels between two optical fibers separated by a known distance, thresholding of the induced signals is used to extract the transit time and thus the velocity. The high resolution of the detector enables the measurement of the transit time at short separation distances of the fibers, providing the instantaneous speed. Results: Accurate and high-resolution speed profiles of the ¹⁹²Ir radiation source traveling from the safe to the end of the catheter and between dwell positions are presented. The maximum and minimum velocities of the source were found to be 52.0 ± 1.0 and 17.3 ± 1.2 cm/s. The authors demonstrate that the radiation source follows a uniformly accelerated linear motion with acceleration |a| = 113 cm/s². In addition, the authors compare the average speed measured using the optical fiber detector to those obtained in the literature, showing deviations of up to 265%. Conclusions: To the best of the authors' knowledge, the authors directly measured for the first time the instantaneous speed profile of a radiation source in an HDR brachytherapy unit traveling from the unit safe to the end of the catheter and between interdwell distances. The method is feasible and accurate to implement in quality assurance tests and provides a unique database for efficient computational simulations of the transient dose. [DOI: 10.1118/1.3483780]
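The core measurement reduces to distance over a difference of threshold-crossing times; a minimal sketch with synthetic Cerenkov-like pulses (the fiber gap, pulse shapes, and 50% threshold are assumptions):

```python
import numpy as np

def transit_speed(t, sig1, sig2, gap_cm, frac=0.5):
    """Speed between two fibers separated by gap_cm: threshold each
    pulse at a fraction of its peak and divide the separation by the
    difference of the first crossing times."""
    t1 = t[np.argmax(sig1 >= frac * sig1.max())]   # first crossing, fiber 1
    t2 = t[np.argmax(sig2 >= frac * sig2.max())]   # first crossing, fiber 2
    return gap_cm / (t2 - t1)

# hypothetical digitized pulses: source passes fiber 2 20 ms after fiber 1
t = np.linspace(0, 0.1, 10_000)                     # seconds
pulse = lambda t0: np.exp(-((t - t0) ** 2) / (2 * 0.002 ** 2))
print("%.1f cm/s" % transit_speed(t, pulse(0.030), pulse(0.050), gap_cm=1.0))
# ~50 cm/s for a 1 cm fiber separation
```

Shrinking the fiber separation is what turns this average over the gap into an effectively instantaneous speed, which is the design point of the detector.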
Abstract:
The dorsolateral prefrontal cortex (DLPFC) has been implicated in the pathophysiology of mental disorders. Previous region-of-interest MRI studies that attempted to delineate this region adopted various landmarks and measurement techniques, with inconsistent results. We developed a new region-of-interest measurement method to obtain morphometric data of this region from structural MRI scans, taking into account knowledge from cytoarchitectonic postmortem studies and the large inter-individual variability of this region. MRI scans of 10 subjects were obtained, and DLPFC tracing was performed in the coronal plane by two independent raters using the semi-automated software Brains2. The intra-class correlation coefficients between the two independent raters were 0.94 for the left DLPFC and 0.93 for the right DLPFC. The mean ± SD DLPFC volumes were 9.23 ± 2.35 ml for the left hemisphere and 8.20 ± 2.08 ml for the right hemisphere. Our proposed method has high inter-rater reliability and is easy to implement, permitting the standardized measurement of this region for clinical research applications.
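The reliability figure is an intra-class correlation; the abstract does not say which ICC variant was used, so the sketch below computes the common two-way random-effects, absolute-agreement ICC(2,1) from the ANOVA mean squares, on made-up volumes.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement ICC(2,1) for an
    n-subjects x k-raters matrix, from the ANOVA mean squares."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row = x.mean(axis=1, keepdims=True)                     # subject means
    col = x.mean(axis=0, keepdims=True)                     # rater means
    msr = k * np.sum((row - grand) ** 2) / (n - 1)          # subjects MS
    msc = n * np.sum((col - grand) ** 2) / (k - 1)          # raters MS
    mse = np.sum((x - row - col + grand) ** 2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# toy usage: 10 subjects traced by 2 raters (hypothetical volumes, ml)
rng = np.random.default_rng(6)
truth = rng.normal(9.2, 2.3, 10)
ratings = np.column_stack([truth + rng.normal(0, 0.3, 10),
                           truth + rng.normal(0, 0.3, 10)])
print("ICC(2,1) = %.2f" % icc_2_1(ratings))
```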
Abstract:
Human leukocyte antigen (HLA) haplotypes are frequently evaluated for population history inferences and association studies. However, the available typing techniques for the main HLA loci usually do not allow the determination of the allele phase and the constitution of a haplotype, which may be obtained by a very time-consuming and expensive family-based segregation study. Without the family-based study, computational inference by probabilistic models is necessary to obtain haplotypes. Several authors have used the expectation-maximization (EM) algorithm to determine HLA haplotypes, but high levels of erroneous inference are expected because of the genetic distance among the main HLA loci and the presence of several recombination hotspots. In order to evaluate the efficiency of computational inference methods, 763 unrelated individuals stratified into three different datasets had their haplotypes manually defined in a family-based study of HLA-A, -B, -DRB1 and -DQB1 segregation, and these haplotypes were compared with the data obtained by the following three methods: the EM and Excoffier-Laval-Balding (ELB) algorithms implemented in the Arlequin 3.11 software, and the PHASE method. When comparing the methods, we observed that all algorithms showed poor performance for haplotype reconstruction with distant loci, estimating incorrect haplotypes for 38%-57% of the samples across algorithms and datasets. We suggest that computational haplotype inferences involving low-resolution HLA-A, HLA-B, HLA-DRB1 and HLA-DQB1 haplotypes should be considered with caution.
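For intuition, the EM idea is sketched below for the simplest case of two multiallelic loci: double heterozygotes are phase-ambiguous, each phase resolution is weighted by the current haplotype frequencies (E-step), and frequencies are re-estimated from the expected counts (M-step). Allele names and data are made up; real HLA inference (as in Arlequin or PHASE) handles more loci and missing data.

```python
import numpy as np
from collections import Counter

def em_haplotypes(genotypes, n_iter=100):
    """EM haplotype-frequency estimation for two multiallelic loci;
    each genotype is ((a1, a2), (b1, b2)) with unknown phase."""
    # enumerate the (at most two) phase resolutions of each genotype
    resolutions = []
    for (a1, a2), (b1, b2) in genotypes:
        pair1 = ((a1, b1), (a2, b2))
        pair2 = ((a1, b2), (a2, b1))
        resolutions.append([pair1] if pair1 == pair2 else [pair1, pair2])

    haps = sorted({h for res in resolutions for pair in res for h in pair})
    freq = {h: 1.0 / len(haps) for h in haps}
    for _ in range(n_iter):
        counts = Counter()
        for res in resolutions:
            w = np.array([freq[h1] * freq[h2] for h1, h2 in res])
            w = w / w.sum() if w.sum() > 0 else np.full(len(res), 1 / len(res))
            for (h1, h2), wi in zip(res, w):        # E-step: expected counts
                counts[h1] += wi
                counts[h2] += wi
        total = sum(counts.values())
        freq = {h: counts[h] / total for h in haps} # M-step: new frequencies
    return freq

# toy usage: HLA-like two-locus genotypes with ambiguous phase
g = [(("A*01", "A*02"), ("B*07", "B*08"))] * 8 + \
    [(("A*01", "A*01"), ("B*07", "B*07"))] * 2
for h, f in sorted(em_haplotypes(g).items(), key=lambda kv: -kv[1]):
    print(h, round(f, 3))
```

Here the homozygotes anchor the A*01-B*07 haplotype, pulling the ambiguous double heterozygotes toward the A*01-B*07 / A*02-B*08 resolution; with distant, recombining loci such anchors weaken, which is one intuition for the poor performance the study reports.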