886 results for Search-based technique
Abstract:
OBJECTIVES: To assess inter-observer variability of renal blood oxygenation level-dependent MRI (BOLD-MRI) using a new method of analysis, called the concentric objects (CO) technique, in comparison with the classical ROI (region of interest)-based technique. METHODS: MR imaging (3T) was performed before and after furosemide in 10 chronic kidney disease (CKD) patients (mean eGFR 43±24ml/min/1.73m(2)) and 10 healthy volunteers (eGFR 101±28ml/min/1.73m(2)), and R2* maps were determined on four coronal slices. In the CO-technique, R2* values were based on a semi-automatic procedure that divided each kidney into six equal layers, whereas in the ROI-technique, all circles (ROIs) were placed manually in the cortex and medulla. The mean R2* values as assessed by two independent investigators were compared. RESULTS: With the CO-technique, inter-observer variability was 0.7%-1.9% across all layers in non-CKD, versus 1.6%-3.8% in CKD. With the ROI-technique, median variability for cortical and medullary R2* values was 3.6% and 6.8% in non-CKD, versus 4.7% and 12.5% in CKD; similar results were observed after furosemide. CONCLUSION: The CO-technique offers a new, investigator-independent, highly reproducible alternative to the ROI-based technique for estimating renal tissue oxygenation in CKD.
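The layer-division step of a CO-style analysis could be sketched as follows. This is a toy version on a binary kidney mask, assuming (as an illustration, not the authors' implementation) that the six equal layers are obtained by peeling the mask inward one boundary ring at a time and binning the peel depths:

```python
def concentric_layers(mask, n_layers=6):
    # Assign each foreground pixel of a binary mask (2D list of 0/1) to one
    # of n_layers concentric layers (0 = outermost). Depths are found by
    # iteratively peeling the boundary; depths are then binned into
    # n_layers equal-depth groups.
    rows, cols = len(mask), len(mask[0])
    remaining = {(r, c) for r in range(rows) for c in range(cols) if mask[r][c]}
    depth = {}
    d = 0
    while remaining:
        boundary = set()
        for (r, c) in remaining:
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if (r + dr, c + dc) not in remaining:
                    boundary.add((r, c))  # pixel touches the outside
                    break
        for p in boundary:
            depth[p] = d
        remaining -= boundary
        d += 1
    max_d = max(depth.values())
    return {p: depth[p] * n_layers // (max_d + 1) for p in depth}
```

Mean R2* per layer would then be a simple average of the R2* map over each layer's pixel set.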
Abstract:
Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainty being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainty being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of a significant difference between the two techniques when the ART-210 head phantom was used.
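The per-direction offsets can be reduced to the reported mean error magnitudes in a few lines. The paper's exact "combined uncertainty" formula is not stated in the abstract, so this sketch stops at the 3D error magnitude and its mean and sample standard deviation:

```python
import math

def error_magnitudes(offsets):
    # offsets: list of (dx, dy, dz) translational errors in mm.
    # Returns the mean and sample standard deviation of the 3D magnitudes.
    mags = [math.sqrt(dx * dx + dy * dy + dz * dz) for dx, dy, dz in offsets]
    n = len(mags)
    mean = sum(mags) / n
    sd = math.sqrt(sum((m - mean) ** 2 for m in mags) / (n - 1))
    return mean, sd
```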
Abstract:
Vehicle operations in underwater environments are often compromised by poor visibility conditions. For instance, the perception range of optical devices is heavily constrained in turbid waters, thus complicating navigation and mapping tasks in environments such as harbors, bays, or rivers. A new generation of high-definition forward-looking sonars providing acoustic imagery at high frame rates has recently emerged as a promising alternative for working under these challenging conditions. However, the characteristics of the sonar data introduce difficulties in image registration, a key step in mosaicing and motion estimation applications. In this work, we propose the use of a Fourier-based registration technique capable of handling the low resolution, noise, and artifacts associated with sonar image formation. When compared to a state-of-the-art region-based technique, our approach shows superior performance in the alignment of both consecutive and nonconsecutive views as well as higher robustness in featureless environments. The method is used to compute pose constraints between sonar frames that, integrated inside a global alignment framework, enable the rendering of consistent acoustic mosaics with high detail and increased resolution. An extensive experimental section is reported showing results in relevant field applications, such as ship hull inspection and harbor mapping.
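The core of a Fourier-based registration step is phase correlation: the inverse transform of the normalized cross-power spectrum of two images peaks at their relative translation. A minimal sketch (naive DFT, integer circular shifts only; the paper's method additionally copes with sonar-specific noise and artifacts):

```python
import cmath

def dft2(img, sign=-1):
    # Naive 2D DFT over a square image; sign=-1 forward, sign=+1 inverse
    # (without 1/n^2 scaling, which does not affect the peak location).
    n = len(img)
    out = [[0j] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0j
            for x in range(n):
                for y in range(n):
                    s += img[x][y] * cmath.exp(sign * 2j * cmath.pi * (u * x + v * y) / n)
            out[u][v] = s
    return out

def phase_correlation_shift(a, b):
    # Estimate the circular shift (dx, dy) such that
    # b[x][y] == a[(x-dx) % n][(y-dy) % n], from the peak of the inverse
    # transform of the normalized cross-power spectrum conj(A)*B / |conj(A)*B|.
    n = len(a)
    A, B = dft2(a), dft2(b)
    R = [[0j] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            c = A[u][v].conjugate() * B[u][v]
            R[u][v] = c / abs(c) if abs(c) > 1e-12 else 0j
    corr = dft2(R, sign=+1)
    _, dx, dy = max((corr[x][y].real, x, y) for x in range(n) for y in range(n))
    return dx, dy
```

In practice one would use an FFT and sub-pixel peak interpolation; the naive DFT keeps the sketch dependency-free.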
Abstract:
Adrenocortical autoantibodies (ACA), present in 60-80% of patients with idiopathic Addison's disease, are conventionally detected by indirect immunofluorescence (IIF) on frozen sections of adrenal glands. The large-scale use of IIF is limited in part by the need for a fluorescence microscope and the fact that histological sections cannot be stored for long periods of time. To circumvent these restrictions we developed a novel peroxidase-labelled protein A (PLPA) technique for the detection of ACA in patients with Addison's disease and compared the results with those obtained with the classical IIF assay. We studied serum samples from 90 healthy control subjects and 22 patients with Addison's disease, who had been clinically classified into two groups: idiopathic (N = 13) and granulomatous (N = 9). ACA-PLPA were detected in 10/22 (45%) patients: 9/13 (69%) with the idiopathic form and 1/9 (11%) with the granulomatous form, whereas ACA-IIF were detected in 11/22 patients (50%): 10/13 (77%) with the idiopathic form and 1/9 (11%) with the granulomatous form. Twelve of the 13 idiopathic addisonians (92%) were positive for either ACA-PLPA or ACA-IIF, but only 7 were positive by both methods. In contrast, none of the 90 healthy subjects was found to be positive for ACA. Thus, our study shows that the PLPA-based technique is useful and has technical advantages over the IIF method: it does not require a fluorescence microscope and it permits section storage for long periods of time. However, since it is only about 60% concordant with the ACA-IIF method, it should be considered complementary to, rather than an alternative to, the IIF method for the detection of ACA in human sera.
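The reported concordance follows directly from the counts in the abstract: of the 12 idiopathic patients positive by either method, 7 were positive by both, i.e. roughly 60%. A minimal helper for this positive-agreement figure:

```python
def concordance(positives_a, positives_b):
    # Fraction of samples positive by either method that are positive by both.
    a, b = set(positives_a), set(positives_b)
    either = a | b
    return len(a & b) / len(either) if either else 1.0
```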
Abstract:
Statistical tests in vector autoregressive (VAR) models are typically based on large-sample approximations, involving the use of asymptotic distributions or bootstrap techniques. After documenting that such methods can be very misleading even with fairly large samples, especially when the number of lags or the number of equations is not small, we propose a general simulation-based technique that allows one to control completely the level of tests in parametric VAR models. In particular, we show that maximized Monte Carlo tests [Dufour (2002)] can provide provably exact tests for such models, whether they are stationary or integrated. Applications to order selection and causality testing are considered as special cases. The technique developed is applied to quarterly and monthly VAR models of the U.S. economy, comprising income, money, interest rates and prices, over the period 1965-1996.
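The basic building block of such simulation-based tests is the Monte Carlo p-value, which ranks the observed statistic among statistics simulated under the null hypothesis; the maximized Monte Carlo test then maximizes this p-value over the nuisance parameters, a step omitted in this sketch:

```python
import random

def monte_carlo_pvalue(stat_obs, simulate_stat, n_rep=99, seed=1):
    # Finite-sample Monte Carlo p-value: simulate n_rep statistics under the
    # null and rank the observed one among them. With a continuous statistic
    # the test has exact level alpha whenever alpha * (n_rep + 1) is an
    # integer (e.g. alpha = 0.05 with n_rep = 99).
    rng = random.Random(seed)
    sims = [simulate_stat(rng) for _ in range(n_rep)]
    n_ge = sum(1 for s in sims if s >= stat_obs)
    return (n_ge + 1) / (n_rep + 1)
```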
Abstract:
Model transformation consists of transforming a source model into a target model in conformance with source and target meta-models. We distinguish two types of transformation. The first is exogenous, where the source and target meta-models represent different formalisms and all elements of the source model are transformed. When a transformation concerns a single formalism, it is endogenous. This type of transformation generally requires two steps: identifying the elements of the source model to be transformed, then transforming those elements. In this thesis, we propose three main contributions related to these transformation problems. The first contribution is the automation of model transformations. We propose to treat the transformation problem as a combinatorial optimization problem in which a target model can be generated automatically from a small number of transformation examples. This first contribution can be applied to exogenous or endogenous transformations (after the elements to be transformed have been detected). The second contribution concerns endogenous transformation, where the elements of the source model to be transformed must be detected. We propose an approach for detecting design defects as a preliminary step to refactoring. This approach is inspired by the way the human immune system detects viruses, a principle known as negative selection. The idea is to use good implementation practices to detect the risky parts of the code. The third contribution aims to test a transformation mechanism using an oracle function to detect errors. We adapted the negative selection mechanism, which treats as an error any deviation between the transformation traces under evaluation and a base of examples containing high-quality transformation traces. The oracle function computes this dissimilarity, and errors are ranked by this score. The contributions were evaluated on large projects, and the results obtained show their effectiveness.
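The negative-selection idea described above can be sketched as follows, using an r-contiguous matching rule as an illustrative stand-in for the thesis's actual deviation measure: candidate detectors that match any known-good ("self") example are discarded, and the surviving detectors flag risky samples:

```python
def r_contiguous_match(a, b, r):
    # Two equal-length strings match if they agree on r contiguous positions.
    return any(a[i:i + r] == b[i:i + r] for i in range(len(a) - r + 1))

def generate_detectors(self_samples, candidates, r):
    # Negative selection: keep only candidate detectors that match no
    # "self" (known-good) sample.
    return [d for d in candidates
            if not any(r_contiguous_match(d, s, r) for s in self_samples)]

def is_anomalous(detectors, sample, r):
    # A sample is flagged as risky if any surviving detector matches it.
    return any(r_contiguous_match(d, sample, r) for d in detectors)
```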
Abstract:
This paper describes the recent developments and improvements made to the variable radius niching technique called Dynamic Niche Clustering (DNC). DNC is a fitness-sharing-based technique that employs a separate population of overlapping fuzzy niches with independent radii, which operate in the decoded parameter space and are maintained alongside the normal GA population. We describe a speedup process that can be applied to the initial generation, greatly reducing the complexity of the initial stages. A split operator is also introduced that is designed to counteract the excessive growth of niches, and it is shown that this improves the overall robustness of the technique. Finally, the effect of local elitism is documented and compared to the performance of the basic DNC technique on a selection of 2D test functions. The paper concludes with a view to future work on the technique.
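For contrast with DNC's fuzzy niches, the underlying fitness-sharing principle can be stated in a few lines: each individual's raw fitness is divided by a niche count that grows with the number of near neighbours. The 1-D positions and triangular sharing function below are illustrative choices, not DNC's actual mechanism:

```python
def shared_fitness(fitness, positions, sigma, alpha=1.0):
    # Classic fitness sharing: divide each raw fitness by its niche count,
    # the summed sharing contributions sh(d) over all individuals, where
    # sh(d) = 1 - (d / sigma)**alpha for d < sigma and 0 otherwise.
    def sh(d):
        return 1.0 - (d / sigma) ** alpha if d < sigma else 0.0
    n = len(fitness)
    return [fitness[i] / sum(sh(abs(positions[i] - positions[j])) for j in range(n))
            for i in range(n)]
```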
Abstract:
This paper presents a non-model based technique to detect, locate, and characterize structural damage by combining the impedance-based structural health monitoring technique with an artificial neural network. The impedance-based structural health monitoring technique, which utilizes the electromechanical coupling property of piezoelectric materials, has shown engineering feasibility in a variety of practical field applications. Relying on high frequency structural excitations (typically >30 kHz), this technique is very sensitive to minor structural changes in the near field of the piezoelectric sensors. In order to quantitatively assess the state of structures, two sets of artificial neural networks, which utilize measured electrical impedance signals for input patterns, were developed. By employing high frequency ranges and by incorporating neural network features, this technique is able to detect the damage in its early stage and to estimate the nature of damage without prior knowledge of the model of structures. The paper concludes with an experimental example, an investigation on a massive quarter scale model of a steel bridge section, in order to verify the performance of this proposed methodology.
Abstract:
This paper presents a non-model based technique to detect, locate, and characterize structural damage by combining the impedance-based structural health monitoring technique with an artificial neural network. The impedance-based structural health monitoring technique, which utilizes the electromechanical coupling property of piezoelectric materials, has shown engineering feasibility in a variety of practical field applications. Relying on high frequency structural excitations (typically >30 kHz), this technique is very sensitive to minor structural changes in the near field of the piezoelectric sensors. In order to quantitatively assess the state of structures, multiple sets of artificial neural networks, which utilize measured electrical impedance signals for input patterns, were developed. By employing high frequency ranges and by incorporating neural network features, this technique is able to detect the damage in its early stage and to estimate the nature of damage without prior knowledge of the model of structures. The paper concludes with experimental examples, investigations on a massive quarter scale model of a steel bridge section and a space truss structure, in order to verify the performance of this proposed methodology.
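Before a neural-network stage, impedance-based SHM typically condenses the baseline and current impedance signatures into a scalar damage metric; a common choice (illustrative here, not necessarily the authors') is the normalized root-mean-square deviation between the two signatures:

```python
import math

def impedance_rmsd(baseline, measured):
    # Normalized root-mean-square deviation between a baseline impedance
    # signature and the current one, sampled at the same frequencies.
    # 0.0 means no change; larger values indicate structural change near
    # the piezoelectric sensor.
    return math.sqrt(sum((b - m) ** 2 for b, m in zip(baseline, measured))
                     / sum(b ** 2 for b in baseline))
```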
Abstract:
Pattern recognition in large amounts of data has been paramount in the last decade, since it is not straightforward to design interactive, real-time classification systems. Very recently, the Optimum-Path Forest (OPF) classifier was proposed to overcome such limitations, together with its training-set pruning algorithm, which requires a parameter that, to date, has been set empirically. In this paper, we propose a Harmony Search-based algorithm that can find near-optimal values for that parameter. The experimental results show that our algorithm is able to find proper values for the OPF pruning algorithm's parameter. © 2011 IEEE.
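A minimal Harmony Search loop for tuning a single bounded parameter might look like this. The memory size, rates, and bandwidth are illustrative defaults, and the quadratic objective stands in for the OPF pruning criterion, which is not specified in the abstract:

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=500, seed=0):
    # Minimal Harmony Search for one bounded parameter: keep a memory of
    # `hms` candidate values; each iteration improvises a new harmony from
    # memory (rate hmcr), optionally pitch-adjusted within bandwidth bw
    # (rate par), otherwise drawn at random; the new harmony replaces the
    # worst memory entry whenever it scores better (lower objective).
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [rng.uniform(lo, hi) for _ in range(hms)]
    for _ in range(iters):
        if rng.random() < hmcr:
            x = rng.choice(memory)
            if rng.random() < par:
                x = min(hi, max(lo, x + rng.uniform(-bw, bw)))
        else:
            x = rng.uniform(lo, hi)
        worst = max(memory, key=objective)
        if objective(x) < objective(worst):
            memory[memory.index(worst)] = x
    return min(memory, key=objective)

# Hypothetical usage: tune a pruning-rate-like parameter in [0, 1] against
# a stand-in validation objective with its minimum at 0.35.
best = harmony_search(lambda x: (x - 0.35) ** 2, (0.0, 1.0))
```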
Abstract:
Despite the efficacy of minutia-based fingerprint matching techniques for good-quality images captured by optical sensors, minutia-based techniques often do not perform well on poor-quality images or on fingerprint images captured by small solid-state sensors. Solid-state fingerprint sensors are being increasingly deployed in a wide range of applications for user authentication purposes. Therefore, it is necessary to develop new fingerprint-matching techniques that utilize other features to deal with fingerprint images captured by solid-state sensors. This paper presents a new fingerprint matching technique based on fingerprint ridge features. The technique was assessed on the MSU-VERIDICOM database, which consists of fingerprint impressions obtained from 160 users (4 impressions per finger) using a solid-state sensor. The combination of ridge-based matching scores computed by the proposed ridge-based technique with minutia-based matching scores leads to a reduction of the false non-match rate by approximately 1.7% at a false match rate of 0.1%. © 2005 IEEE.
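The score-combination step can be sketched as min-max normalization of each matcher's scores followed by a weighted sum rule. The equal weight below is an illustrative assumption, not the paper's fitted value, and the normalization assumes the scores are not all identical:

```python
def min_max_normalize(scores):
    # Map a batch of matcher scores onto [0, 1]; assumes max > min.
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(minutia_scores, ridge_scores, w=0.5):
    # Weighted sum-rule fusion of two matchers' scores after per-matcher
    # min-max normalization (w is an illustrative weight).
    m = min_max_normalize(minutia_scores)
    r = min_max_normalize(ridge_scores)
    return [w * a + (1 - w) * b for a, b in zip(m, r)]
```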
Abstract:
Several clinical tests have been developed to qualitatively describe complex motor tasks by functional testing, but these methods often depend on clinicians' interpretation, experience, and training, which makes the assessment results inconsistent and lacking the precision required to objectively assess the effect of a rehabilitative intervention. A more detailed characterization is required to fully capture the various aspects of motor control and performance during complex movements of the lower and upper limbs. Cost-effective and clinically applicable instrumented tests would enable quantitative assessment of performance on a subject-specific basis, overcoming the limitations due to the lack of objectiveness inherent in individual judgment, and possibly disclosing subtle alterations that are not clearly visible to the observer. Postural motion measurements at additional locations, such as the lower and upper limbs and the trunk, may be necessary in order to obtain information about inter-segmental coordination during the different functional tests used in clinical practice. With these considerations in mind, this Thesis aims: i) to suggest a novel quantitative assessment tool for the kinematic and dynamic evaluation of a multi-link kinematic chain during several functional motor tasks (i.e. squat, sit-to-stand, postural sway), using one single-axis accelerometer per segment; ii) to present a novel quantitative technique for upper limb joint kinematics estimation, considering a 3-link kinematic chain during the Fugl-Meyer Motor Assessment and using one inertial measurement unit per segment. The suggested methods could receive positive feedback from clinical practice.
The use of objective biomechanical measurements, provided by inertial sensor-based techniques, may help clinicians to: i) objectively track changes in motor ability; ii) provide timely feedback about the effectiveness of administered rehabilitation interventions; iii) enable intervention strategies to be modified or changed if found to be ineffective; and iv) speed up experimental sessions when several subjects are asked to perform different functional tests.
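With one single-axis accelerometer per segment, the quasi-static inclination of each segment can be recovered from the sensed gravity component. This sketch assumes slow movements in which dynamic acceleration is negligible; it is an illustration of the general principle, not the thesis's actual estimation procedure:

```python
import math

def segment_tilt(acc, g=9.81):
    # Quasi-static tilt (degrees) of a body segment from a single-axis
    # accelerometer reading `acc` (m/s^2): the measured component of
    # gravity gives the inclination angle. The clamp guards against
    # readings slightly outside [-g, g] due to noise.
    return math.degrees(math.asin(max(-1.0, min(1.0, acc / g))))
```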
Abstract:
Organic charge-transfer systems exhibit a variety of competing interactions between charge, spin, and lattice degrees of freedom. This leads to interesting physical properties such as metallic conductivity, superconductivity, and magnetism. This dissertation investigates the electronic structure of organic charge-transfer salts from three material families, using a range of photoemission and X-ray spectroscopy techniques. Some of the molecules studied were synthesized at the MPI for Polymer Research. They belong to the coronene family (donor hexamethoxycoronene HMC and acceptor coronene-hexaone COHON) and the pyrene family (donors tetra- and hexamethoxypyrene, TMP and HMP) in complexes with the classical strong acceptor tetracyanoquinodimethane (TCNQ). As a third family, charge-transfer salts of the k-(BEDT-TTF)2X family (X a monovalent anion) were investigated. These materials lie close to a bandwidth-controlled Mott transition in the phase diagram. For investigations by ultraviolet photoelectron spectroscopy (UPS), UHV-deposited thin films were produced using a new dual evaporator developed specifically for milligram quantities of material. In the charge-transfer complex, this method revealed energetic shifts of valence states of the order of a few hundred meV compared with the pure donor and acceptor species. An important aspect of the UPS measurements was the direct comparison with ab-initio calculations. The problem of unavoidable surface contamination of solution-grown 3D crystals was overcome by hard X-ray photoelectron spectroscopy (HAXPES) at photon energies around 6 keV (at the PETRA III storage ring in Hamburg). The large mean free path of the photoelectrons, around 15 nm, results in true bulk sensitivity.
The first HAXPES experiments on charge-transfer complexes worldwide revealed large chemical shifts (several eV). In the compound HMPx-TCNQy, the N1s line is a fingerprint of the cyano group in TCNQ and shows a splitting and a shift to higher binding energies of up to 6 eV with increasing HMP content. Conversely, the O1s line is a fingerprint of the methoxy group in HMP and shows a marked splitting and a shift to lower binding energies (up to about 2.5 eV chemical shift), i.e., an order of magnitude larger than the shifts in the valence region. As a further synchrotron-radiation-based technique, near-edge X-ray absorption fine structure (NEXAFS) spectroscopy was used extensively at the ANKA storage ring in Karlsruhe. The mean free path of the low-energy secondary electrons is around 5 nm. Strong intensity variations of specific pre-edge resonances (as a signature of the unoccupied density of states) directly reflect changes in the occupation numbers of the participating orbitals in the immediate vicinity of the excited atom. This made it possible to precisely identify the participation of specific orbitals in the charge-transfer mechanism. In the complex mentioned above, charge is transferred from the methoxy orbitals 2e(Pi*) and 6a1(σ*) to the cyano orbitals b3g and au(Pi*) and, to a lesser extent, to the b1g and b2u(σ*) of the cyano group. In addition, small energetic shifts with different signs occur for the donor and acceptor resonances, comparable to the shifts observed in UPS.
Abstract:
Vascular endothelial growth factor (VEGF) can induce normal angiogenesis or the growth of angioma-like vascular tumors depending on the amount secreted by each producing cell because it remains localized in the microenvironment. In order to control the distribution of VEGF expression levels in vivo, we recently developed a high-throughput fluorescence-activated cell sorting (FACS)-based technique to rapidly purify transduced progenitors that homogeneously express a specific VEGF dose from a heterogeneous primary population. Here we tested the hypothesis that cell-based delivery of a controlled VEGF level could induce normal angiogenesis in the heart, while preventing the development of angiomas. Freshly isolated human adipose tissue-derived stem cells (ASC) were transduced with retroviral vectors expressing either rat VEGF linked to a FACS-quantifiable cell-surface marker (a truncated form of CD8) or CD8 alone as control (CTR). VEGF-expressing cells were FACS-purified to generate populations producing either a specific VEGF level (SPEC) or uncontrolled heterogeneous levels (ALL). Fifteen nude rats underwent intramyocardial injection of 10(7) cells. Histology was performed after 4 weeks. Both the SPEC and ALL cells produced a similar total amount of VEGF, and both cell types induced a 50%-60% increase in both total and perfused vessel density compared to CTR cells, despite very limited stable engraftment. However, homogeneous VEGF expression by SPEC cells induced only normal and stable angiogenesis. Conversely, heterogeneous expression of a similar total amount by the ALL cells caused the growth of numerous angioma-like structures. These results suggest that controlled VEGF delivery by FACS-purified ASC may be a promising strategy to achieve safe therapeutic angiogenesis in the heart.
Abstract:
A new physics-based technique for correcting inhomogeneities present in sub-daily temperature records is proposed. The approach accounts for changes in the sensor-shield characteristics that affect the energy balance dependent on ambient weather conditions (radiation, wind). An empirical model is formulated that reflects the main atmospheric processes and can be used in the correction step of a homogenization procedure. The model accounts for short- and long-wave radiation fluxes (including a snow cover component for albedo calculation) of a measurement system, such as a radiation shield. One part of the flux is further modulated by ventilation. The model requires only cloud cover and wind speed for each day, but detailed site-specific information is necessary. The final model has three free parameters, one of which is a constant offset. The three parameters can be determined, e.g., using the mean offsets for three observation times. The model is developed using the example of the change from the Wild screen to the Stevenson screen in the temperature record of Basel, Switzerland, in 1966. It is evaluated based on parallel measurements of both systems during a sub-period at this location, which were discovered during the writing of this paper. The model can be used in the correction step of homogenization to distribute a known mean step-size to every single measurement, thus providing a reasonable alternative correction procedure for high-resolution historical climate series. It also constitutes an error model, which may be applied, e.g., in data assimilation approaches.
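A schematic stand-in for such an empirical bias model combines a constant offset with a cloud-modulated radiation term damped by ventilation. The functional form and the parameters p = (c0, c1, c2) below are illustrative assumptions, not the paper's fitted three-parameter model:

```python
def screen_bias(cloud_frac, wind, swr_clear, p):
    # Schematic daily screen-bias model: a constant offset c0 plus a
    # radiation term (clear-sky shortwave swr_clear in W/m^2, scaled down
    # by cloud fraction) damped by ventilation (wind in m/s).
    # p = (c0, c1, c2) are illustrative free parameters.
    c0, c1, c2 = p
    radiation = swr_clear * (1.0 - cloud_frac)
    return c0 + c1 * radiation / (1.0 + c2 * wind)
```

The correction step of homogenization would then subtract this bias from each sub-daily measurement, with the parameters constrained so that the known mean break size is reproduced.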