968 results for vector method
Abstract:
In this paper we develop new techniques for revealing geometrical structures in phase space that are valid for aperiodically time-dependent dynamical systems, which we refer to as Lagrangian descriptors. These quantities are based on the integration, for a finite time, along trajectories of an intrinsic bounded, positive geometrical and/or physical property of the trajectory itself. We discuss a general methodology for constructing Lagrangian descriptors, and we discuss a "heuristic argument" that explains why this method is successful for revealing geometrical structures in the phase space of a dynamical system. We support this argument by explicit calculations on a benchmark problem having a hyperbolic fixed point with stable and unstable manifolds that are known analytically. Several other benchmark examples are considered that allow us to assess the performance of Lagrangian descriptors in revealing invariant tori and regions of shear. Throughout the paper "side-by-side" comparisons of the performance of Lagrangian descriptors with both finite time Lyapunov exponents (FTLEs) and finite time averages of certain components of the vector field ("time averages") are carried out and discussed. In all cases Lagrangian descriptors are shown to be both more accurate and computationally efficient than these methods. We also perform computations for an explicitly three-dimensional, aperiodically time-dependent vector field and an aperiodically time-dependent vector field defined as a data set. Comparisons with FTLEs and time averages for these examples are also carried out, with similar conclusions as for the benchmark examples.
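As a concrete illustration of the construction, the arc-length descriptor (one common choice of bounded, positive property integrated along trajectories) can be computed for the saddle benchmark mentioned above. The Python sketch below is ours, not the authors' code; the integration time `tau` and step `dt` are arbitrary choices:

```python
import numpy as np

def rk4_step(f, x, dt):
    """One fourth-order Runge-Kutta step for dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def lagrangian_descriptor(f, x0, tau=2.0, dt=0.01):
    """Arc-length Lagrangian descriptor: integrate the speed |v|
    along the trajectory through x0, forward and backward for time tau."""
    M = 0.0
    for sign in (1.0, -1.0):
        g = lambda y, s=sign: s * f(y)   # reversed field for the backward leg
        x = np.array(x0, dtype=float)
        for _ in range(int(round(tau / dt))):
            M += np.linalg.norm(f(x)) * dt
            x = rk4_step(g, x, dt)
    return M

# Benchmark: linear saddle dx/dt = x, dy/dt = -y, whose stable (x = 0)
# and unstable (y = 0) manifolds are known analytically.
saddle = lambda z: np.array([z[0], -z[1]])
```

Scanning M along a line that crosses the stable manifold x = 0 produces a sharp minimum (a kink in the derivative) exactly on the manifold, which is how the descriptor makes the phase-space skeleton visible.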
Abstract:
Lagrangian descriptors are a recently proposed technique that reveals geometrical structures in phase space and is valid for aperiodically time-dependent dynamical systems. We discuss a general methodology for constructing them, and we discuss a "heuristic argument" that explains why this method is successful. We support this argument by explicit calculations on a benchmark problem. Several other benchmark examples are considered that allow us to compare the performance of Lagrangian descriptors with both finite time Lyapunov exponents (FTLEs) and finite time averages of certain components of the vector field ("time averages"). In all cases Lagrangian descriptors are shown to be both more accurate and computationally efficient than these methods.
Abstract:
Reliability analyses provide an adequate tool to consider the inherent uncertainties that exist in geotechnical parameters. This dissertation develops a simple linearization-based approach, using first or second order approximations, to efficiently evaluate the system reliability of geotechnical problems. First, reliability methods are employed to analyze two tunnel design aspects: face stability and the performance of support systems. Several reliability approaches (the first order reliability method, FORM; the second order reliability method, SORM; the response surface method, RSM; and importance sampling, IS) are employed, with results showing that the assumed distribution types and correlation structures of the random variables have a significant effect on the computed reliabilities. This emphasizes the importance of an adequate characterization of geotechnical uncertainties in practical applications. Results also show that both FORM and SORM can be used to estimate the reliability of tunnel-support systems, and that SORM can outperform FORM at an acceptable additional computational cost. A linearization approach is then developed to evaluate the system reliability of series geotechnical problems. The approach only needs information provided by FORM: the vector of reliability indices of the limit state functions (LSFs) composing the system, and their correlation matrix. Two common geotechnical problems (the stability of a slope in layered soil and a circular tunnel in rock) are employed to demonstrate the simplicity, accuracy and efficiency of the suggested procedure, and its advantages with respect to alternative computational tools are discussed. It is also found that, if necessary, SORM, which approximates the true LSF better than FORM, can be employed to compute better estimates of the system reliability. Finally, a new approach using Genetic Algorithms (GAs) is presented to identify the fully specified representative slip surfaces (RSSs) of layered soil slopes; such RSSs are then employed to estimate the system reliability of slopes, using the proposed linearization approach. Three typical benchmark slopes with layered soils are adopted to demonstrate the efficiency, accuracy and robustness of the suggested procedure, and its advantages with respect to alternative methods are discussed. Results show that the proposed approach provides reliability estimates that improve previously published results, emphasizing the importance of finding good RSSs, and especially good (probabilistic) critical slip surfaces that might be non-circular, to obtain good estimates of the reliability of soil slope systems.
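The core of the linearization idea can be sketched numerically. For a two-mode series system, FORM supplies a reliability index beta for each limit state function and a correlation coefficient rho between the two linearized margins; the system failure probability is then approximated with the bivariate normal CDF. A minimal Python sketch (ours, not the dissertation's implementation; the quadrature scheme and the example values of beta and rho are illustrative):

```python
import math
import numpy as np

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bivariate_normal_cdf(a, b, rho, n=4000):
    """P(Z1 <= a, Z2 <= b) for standard normals with correlation rho,
    obtained by conditioning on Z1 and integrating numerically."""
    z = np.linspace(-8.0, a, n)
    pdf = np.exp(-0.5 * z**2) / math.sqrt(2.0 * math.pi)
    cond = np.array([Phi((b - rho * zi) / math.sqrt(1.0 - rho**2)) for zi in z])
    y = pdf * cond
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))  # trapezoid rule

def series_pf(beta1, beta2, rho):
    """Linearized FORM estimate for a two-mode series system:
    P_f ~ 1 - Phi_2(beta1, beta2; rho), i.e. the probability that at
    least one linearized limit state function is violated."""
    return 1.0 - bivariate_normal_cdf(beta1, beta2, rho)
```

For rho = 0 the result reduces to 1 - Phi(beta1)*Phi(beta2); as rho approaches 1 it collapses to the failure probability of the single worst mode, so the estimate moves between the classical series-system bounds.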
Abstract:
Hyperspectral imaging allows us to collect high-resolution spectral information: hundreds of bands covering from the infrared to the ultraviolet spectrum. These images are having a strong impact in the medical field; in particular, their use in the detection of different types of cancer stands out. In this field, one of the main current problems is real-time analysis: these images have a great data volume and require a very high computational capacity. One of the main research lines that addresses this processing-time problem is based on distributing the analysis across several cores working in parallel. Following this research line, this work develops a library for RVC-CAL (a language especially designed for multimedia applications, which allows parallelization to be expressed in an intuitive way) that gathers the functions needed to implement the classifier known as the Support Vector Machine (SVM). This work complements the research conducted in [1] and [2], where the functions needed to implement a processing chain based on the unmixing method for hyperspectral images were developed. The document is divided into several parts. The first of them presents the motivation of this Master's Thesis and the main objectives to be achieved. After that, a broad study of the current state of the art is made, explaining hyperspectral images and their processing methods and, in particular, detailing the method used by the SVM classifier. Once the theoretical basis has been presented, we explain the methodology followed to port a Matlab version of the SVM classifier, optimized to analyze hyperspectral images, to the RVC-CAL language; an important point in this part is that the sequential version of the algorithm is developed and the bases for a future parallelization of the classifier are set. After explaining the methodology, the results obtained are presented, first comparing both versions and then analyzing, stage by stage, the version adapted to RVC-CAL. Finally, a series of conclusions drawn from the analysis of the two versions of the SVM classifier, regarding both the quality of the results and the processing times, is provided, and several possible future lines of work related to these results are proposed.
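The classifier being ported can be illustrated independently of RVC-CAL. Below is a small Python sketch, ours rather than the thesis code, of a linear SVM trained with the Pegasos stochastic subgradient method on synthetic two-class "spectra"; the dimensions, labels, bias handling and hyperparameters are illustrative choices only:

```python
import numpy as np

def pegasos_train(X, y, lam=0.01, epochs=50, seed=0):
    """Linear SVM via the Pegasos stochastic subgradient method.
    X: (n, d) feature matrix (e.g. one spectrum per pixel);
    y: labels in {-1, +1}. The unregularized bias update is a
    common simple variant, not part of the original algorithm."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            if y[i] * (X[i] @ w + b) < 1.0:       # margin violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                                 # only shrink (regularize)
                w = (1 - eta * lam) * w
    return w, b

def predict(X, w, b):
    return np.where(X @ w + b >= 0.0, 1, -1)
```

A dataflow port would split this loop into communicating actors (margin test, weight update, prediction); the sketch only fixes the sequential semantics that such a port must preserve.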
Abstract:
Glial-cell-line-derived neurotrophic factor (GDNF) is a potent neurotrophic factor for adult nigral dopamine neurons in vivo. GDNF has both protective and restorative effects on the nigro-striatal dopaminergic (DA) system in animal models of Parkinson disease. Appropriate administration of this factor is essential for the success of its clinical application. Since it cannot cross the blood–brain barrier, a gene transfer method may be appropriate for delivery of the trophic factor to DA cells. We have constructed a recombinant adenovirus (Ad) encoding GDNF and injected it into rat striatum to make use of its ability to infect neurons and to be retrogradely transported by DA neurons. Ad-GDNF was found to drive production of large amounts of GDNF, as quantified by ELISA. The GDNF produced after gene transfer was biologically active: it increased the survival and differentiation of DA neurons in vitro. To test the efficacy of the Ad-mediated GDNF gene transfer in vivo, we used a progressive lesion model of Parkinson disease. Rats received injections unilaterally into their striatum first of Ad and then 6 days later of 6-hydroxydopamine. We found that mesencephalic nigral dopamine neurons of animals treated with the Ad-GDNF were protected, whereas those of animals treated with the Ad-β-galactosidase were not. This protection was associated with a difference in motor function: amphetamine-induced turning was much lower in animals that received the Ad-GDNF than in the animals that received Ad-β-galactosidase. This finding may have implications for the development of a treatment for Parkinson disease based on the use of neurotrophic factors.
Abstract:
To formally test the hypothesis that the granulocyte/macrophage colony-forming unit (GM-CFU) cells can contribute to early hematopoietic reconstitution immediately after transplant, the frequency of genetically modified GM-CFU after retroviral vector transduction was measured by a quantitative in situ polymerase chain reaction (PCR), which is specific for the multidrug resistance-1 (MDR-1) vector, and by a quantitative GM-CFU methylcellulose plating assay. The results of this analysis showed no difference between the transduction frequency in the products of two different transduction protocols: “suspension transduction” and “stromal growth factor transduction.” However, when an analysis of the frequency of cells positive for the retroviral MDR-1 vector posttransplantation was carried out, 0 of 10 patients transplanted with cells transduced by the suspension method were positive for the vector MDR-1 posttransplant, whereas 5 of 8 patients transplanted with the cells transduced by the stromal growth factor method were positive for the MDR-1 vector transcription unit by in situ or in solution PCR assay (a difference that is significant at the P = 0.0065 level by the Fisher exact test). These data suggest that only very small subsets of the GM-CFU fraction of myeloid cells, if any, contribute to the repopulation of the hematopoietic tissues that occurs following intensive systemic therapy and transplantation of autologous hematopoietic cells.
Abstract:
We introduce a method of functionally classifying genes by using gene expression data from DNA microarray hybridization experiments. The method is based on the theory of support vector machines (SVMs). SVMs are considered a supervised computer learning method because they exploit prior knowledge of gene function to identify unknown genes of similar function from expression data. SVMs avoid several problems associated with unsupervised clustering methods, such as hierarchical clustering and self-organizing maps. SVMs have many mathematical features that make them attractive for gene expression analysis, including their flexibility in choosing a similarity function, sparseness of solution when dealing with large data sets, the ability to handle large feature spaces, and the ability to identify outliers. We test several SVMs that use different similarity metrics, as well as some other supervised learning methods, and find that the SVMs best identify sets of genes with a common function using expression data. Finally, we use SVMs to predict functional roles for uncharacterized yeast ORFs based on their expression data.
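The point about flexibility in choosing a similarity function can be made with a deliberately small stand-in: a kernel perceptron (not an SVM, but it shares the kernel mechanism) trained on XOR-style data, where a linear similarity cannot separate the classes and a Gaussian similarity can. A hedged Python sketch, entirely ours:

```python
import numpy as np

def kernel_perceptron(X, y, kernel, epochs=100):
    """Dual perceptron: the decision function is a kernel-weighted sum
    over training points, so swapping `kernel` swaps the notion of
    similarity without touching the learning algorithm."""
    n = len(X)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            if np.sign(np.sum(alpha * y * K[:, i])) != y[i]:
                alpha[i] += 1.0   # mistake-driven update
    return alpha

def predict(Xtrain, y, alpha, kernel, x):
    s = sum(a * yi * kernel(xi, x) for a, yi, xi in zip(alpha, y, Xtrain))
    return 1 if s >= 0 else -1

linear = lambda a, b: float(np.dot(a, b))
rbf = lambda a, b: float(np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2)))

# XOR-style labels: not linearly separable, but separable under the
# Gaussian similarity.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1, 1, 1, -1])
```

The same mechanism underlies the SVM experiments in the abstract: changing the similarity metric changes which gene-expression patterns count as "close" without changing the learner.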
Abstract:
Bacterial artificial chromosomes (BACs) and P1 artificial chromosomes (PACs), which contain large fragments of genomic DNA, have been successfully used as transgenes to create mouse models of dose-dependent diseases. They are also potentially valuable as transgenes for dominant diseases given that point mutations and/or small rearrangements can be accurately introduced. Here, we describe a new method to introduce small alterations in BACs, which results in the generation of point mutations with high frequency. The method involves homologous recombination between the original BAC and a shuttle vector providing the mutation. Each recombination step is monitored using positive and negative selection markers, which are the Kanamycin-resistance gene, the sacB gene and temperature-sensitive replication, all conferred by the shuttle plasmid. We have used this method to introduce four different point mutations and the insertion of the β-galactosidase gene in a BAC, which has subsequently been used for transgenic animal production.
Abstract:
Erythropoietin (Epo)-responsive anemia is a common and debilitating complication of chronic renal failure and human immunodeficiency virus infection. Current therapy for this condition involves repeated intravenous or subcutaneous injections of recombinant Epo. In this report, we describe the development of a novel muscle-based gene transfer approach that produces long-term expression of physiologically significant levels of Epo in the systemic circulation of mice. We have constructed a plasmid expression vector, pVRmEpo, that contains the murine Epo cDNA under the transcriptional control of the cytomegalovirus immediate early (CMV-IE) promoter, the CMV-IE 5' untranslated region, and intron A. A single intramuscular (i.m.) injection of as little as 10 micrograms of this plasmid into immunocompetent adult mice produced physiologically significant elevations in serum Epo levels and increased hematocrits from preinjection levels of 48 +/- 0.4% to levels of 64 +/- 3.3% 45 days after injection. Hematocrits in these animals remained elevated at greater than 60% for at least 90 days after a single i.m. injection of 10 micrograms of pVRmEpo. We observed a dose-response relationship between the amount of plasmid DNA injected and subsequent elevations in hematocrits. Mice injected once with 300 micrograms of pVRmEpo displayed 5-fold increased serum Epo levels and elevated hematocrits of 79 +/- 3.3% at 45 days after injection. The i.m. injected plasmid DNA remained localized to the site of injection as assayed by the PCR. We conclude that i.m. injection of plasmid DNA represents a viable nonviral gene transfer method for the treatment of acquired and inherited serum protein deficiencies.
Abstract:
A strategy of "sequence scanning" is proposed for rapid acquisition of sequence from clones such as bacteriophage P1 clones, cosmids, or yeast artificial chromosomes. The approach makes use of a special vector, called LambdaScan, that reliably yields subclones with inserts in the size range 8-12 kb. A number of subclones, typically 96 or 192, are chosen at random, and the ends of the inserts are sequenced using vector-specific primers. Then long-range spectrum PCR is used to order and orient the clones. This combination of shotgun and directed sequencing results in a high-resolution physical map suitable for the identification of coding regions or for comparison of sequence organization among genomes. Computer simulations indicate that, for a target clone of 100 kb, the scanning of 192 subclones with sequencing reads as short as 350 bp results in an approximate ratio of 1:2:1 of regions of double-stranded sequence, single-stranded sequence, and gaps. Longer sequencing reads tip the ratio strongly toward increased double-stranded sequence.
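The coverage simulation described above is easy to reproduce in outline. The Python sketch below is ours, under simplifying assumptions: insert lengths uniform on 8-12 kb, uniformly placed inserts, one 350 bp read per end, with the two end reads of a subclone landing on opposite strands:

```python
import numpy as np

def scan_simulation(target=100_000, nsub=192, read=350, seed=0):
    """Monte Carlo sketch of 'sequence scanning' coverage: nsub random
    subclones (8-12 kb inserts) are end-sequenced with `read` bp per end,
    one read per strand. Each base is classified as double-stranded
    (read on both strands), single-stranded (one strand) or gap."""
    rng = np.random.default_rng(seed)
    top = np.zeros(target, dtype=bool)   # bases read on the top strand
    bot = np.zeros(target, dtype=bool)   # bases read on the bottom strand
    for _ in range(nsub):
        ins = rng.integers(8_000, 12_001)           # insert length
        start = rng.integers(0, target - ins + 1)   # insert position
        top[start:start + read] = True              # left-end read
        bot[start + ins - read:start + ins] = True  # right-end read
    double = float(np.mean(top & bot))
    single = float(np.mean(top ^ bot))
    gap = float(np.mean(~top & ~bot))
    return double, single, gap
```

With these numbers (192 subclones, 350 bp reads, 100 kb target) the expected split comes out close to the 1:2:1 double:single:gap ratio quoted in the abstract.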
Abstract:
The brain amyloid of Alzheimer disease (AD) may potentially be imaged in patients with AD by using neuroimaging technology and a radiolabeled form of the 40-residue beta-amyloid peptide A beta 1-40 that is enabled to undergo transport through the brain capillary endothelial wall, which makes up the blood-brain barrier (BBB) in vivo. Transport of 125I-labeled A beta 1-40 (125I-A beta 1-40) through the BBB was found to be negligible by experiments with both an intravenous injection technique and an internal carotid artery perfusion method in anesthetized rats. In addition, 125I-A beta 1-40 was rapidly metabolized after either intravenous injection or internal carotid artery perfusion. BBB transport was increased and peripheral metabolism was decreased by conjugation of monobiotinylated 125I-A beta 1-40 to a vector-mediated drug delivery system, which consisted of a conjugate of streptavidin (SA) and the OX26 monoclonal antibody to the rat transferrin receptor, which undergoes receptor-mediated transcytosis through the BBB. The brain uptake, expressed as percent of injected dose delivered per gram of brain, of the 125I,bio-A beta 1-40/SA-OX26 conjugate was 0.15 +/- 0.01, a level that is 2-fold greater than the brain uptake of morphine. The binding of the 125I,bio-A beta 1-40/SA-OX26 conjugate to the amyloid of AD brain was demonstrated by both film and emulsion autoradiography performed on frozen sections of AD brain. Binding of the 125I,bio-A beta 1-40/SA-OX26 conjugate to the amyloid of AD brain was completely inhibited by high concentrations of unlabeled A beta 1-40. In conclusion, these studies show that BBB transport and access to amyloid within brain may be achieved by conjugation of A beta 1-40 to a vector-mediated BBB drug delivery system.
Abstract:
In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the result of an edge detector. After that, the position of the nodes, their relation to their neighbors and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is then presented, in which the calculation of the winning neuron and the recognition process are performed using a similarity function that takes into account both the geometric and texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.
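The combined similarity can be sketched abstractly. Below is a hedged Python illustration, ours rather than the paper's formulation: texture similarity is the mean normalized dot product of corresponding Gabor-jet magnitudes, and geometry enters as a penalty on node displacement, with a hypothetical weight `lam`:

```python
import numpy as np

def jet_similarity(j1, j2):
    """Normalized dot product (cosine similarity) of two jet
    magnitude vectors."""
    return float(np.dot(j1, j2) / (np.linalg.norm(j1) * np.linalg.norm(j2)))

def graph_similarity(jets_a, jets_b, pos_a, pos_b, lam=0.5):
    """Combined texture + geometry similarity of two facial graphs.
    jets_*: (n_nodes, jet_dim) Gabor-jet magnitudes per node;
    pos_*: (n_nodes, 2) node coordinates. `lam` (hypothetical) weights
    the geometric penalty against the texture term."""
    texture = float(np.mean([jet_similarity(a, b)
                             for a, b in zip(jets_a, jets_b)]))
    # geometric term: mean squared displacement of corresponding nodes
    geom = float(np.mean(np.sum((pos_a - pos_b) ** 2, axis=1)))
    return texture - lam * geom
```

Identical graphs score 1.0 (perfect texture match, zero displacement), and any node displacement lowers the score, which is the behavior a winning-neuron computation needs.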
Abstract:
We present a novel method, called the transform likelihood ratio (TLR) method, for estimation of rare event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare event probability via importance sampling, using either the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool, in the sense that it not only provides a unified view of heavy-tailed simulation but can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results which support the efficiency of the TLR method.
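The transformation idea can be sketched on the simplest heavy-tailed example. For a Pareto tail P(X > x) = x**(-alpha), x >= 1, the change of variables Z = alpha*ln(X) yields a light-tailed Exp(1) variable, after which a standard exponential change of measure applies. A minimal Python sketch (ours; the paper treats far more general models, and the tilt theta = 1/c used here is a convenient heuristic rather than the cross-entropy optimum):

```python
import numpy as np

def tlr_pareto_tail(alpha, gamma, n=200_000, seed=0):
    """TLR-style estimate of P(X > gamma) for a Pareto tail
    P(X > x) = x**(-alpha), x >= 1. The change of variables
    Z = alpha*ln(X) makes Z ~ Exp(1) (light-tailed), and the rare
    event {X > gamma} becomes {Z > c} with c = alpha*ln(gamma),
    handled by an exponential change of measure."""
    c = alpha * np.log(gamma)
    theta = 1.0 / c                          # tilted rate: proposal mean = c
    rng = np.random.default_rng(seed)
    z = rng.exponential(1.0 / theta, n)      # sample Z ~ Exp(theta)
    w = np.exp(-(1.0 - theta) * z) / theta   # likelihood ratio f(z)/g(z)
    return float(np.mean(w * (z > c)))
```

With alpha = 2 and gamma = 1000 the exact answer is gamma**(-alpha) = 1e-6; crude Monte Carlo would need on the order of 10**8 samples to observe the event at all, while the transformed importance sampler pins it down with a few hundred thousand.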
Abstract:
In this paper we propose a new identification method based on the residual white noise autoregressive criterion (Pukkila et al., 1990) to select the order of VARMA structures. Results from extensive simulation experiments based on different model structures, with varying numbers of observations and of component series, are used to demonstrate the performance of this new procedure. We also use economic and business data to compare the model structures selected by this order selection method with those identified in other published studies.
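The residual-whiteness idea behind the criterion can be illustrated in a stripped-down, univariate AR-only setting (the actual criterion for full VARMA structures is more elaborate): fit increasing AR orders and accept the first whose residuals pass a portmanteau whiteness check. A Python sketch under those simplifying assumptions, entirely ours:

```python
import numpy as np

CHI2_99_DF10 = 23.209   # 99% quantile of chi-square with 10 d.o.f.

def fit_ar(y, p):
    """Least-squares fit of a zero-mean AR(p) model; returns
    (coefficients, residuals)."""
    Y = y[p:]
    X = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef, Y - X @ coef

def ljung_box(res, m=10):
    """Portmanteau statistic on the first m residual autocorrelations;
    large values indicate the residuals are not white noise."""
    res = res - res.mean()
    n = len(res)
    denom = np.sum(res**2)
    r = np.array([np.sum(res[k:] * res[:-k]) / denom for k in range(1, m + 1)])
    return n * (n + 2) * np.sum(r**2 / (n - np.arange(1, m + 1)))

def select_order(y, max_p=6):
    """Smallest AR order whose residuals pass the whiteness check."""
    for p in range(1, max_p + 1):
        _, res = fit_ar(y, p)
        if ljung_box(res) < CHI2_99_DF10:
            return p
    return max_p
```

Fitting too low an order leaves structure in the residuals, which the portmanteau statistic flags; at the true order the residuals look white and the search stops.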
Abstract:
This paper evaluates a new, low-frequency finite-difference time-domain method applied to the problem of induced E-fields/eddy currents in the human body resulting from the pulsed magnetic field gradients in MRI. In this algorithm, a distributed equivalent magnetic current is proposed as the electromagnetic source and is obtained by quasistatic calculation of the empty coil's vector potential, or from measurements thereof. This technique circumvents the discretization of complicated gradient coil geometries into a mesh of Yee cells, and thereby enables modelling of any type of gradient coil or other complex low-frequency source. The proposed method has been verified against an example with an analytical solution. Results are presented showing the spatial distribution of gradient-induced electric fields in a multi-layered spherical phantom model and a complete body model.
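The quasistatic source computation can be illustrated for the simplest possible "coil": a circular loop discretized into straight segments and summed with Biot-Savart. This Python sketch is ours, not the paper's gradient-coil code, and it validates the magnetic field rather than the vector potential, since the field at the loop centre has a simple closed form:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def loop_bfield(point, radius=0.3, current=1.0, nseg=2000):
    """Quasistatic Biot-Savart field of a circular current loop in the
    z = 0 plane, centred at the origin: the loop is discretized into
    nseg segments and mu0*I/(4*pi) * dl x r / |r|^3 is summed."""
    phi = np.linspace(0.0, 2.0 * np.pi, nseg, endpoint=False)
    pts = radius * np.column_stack([np.cos(phi), np.sin(phi), np.zeros(nseg)])
    dphi = 2.0 * np.pi / nseg
    dl = radius * dphi * np.column_stack([-np.sin(phi), np.cos(phi),
                                          np.zeros(nseg)])
    r = np.asarray(point, dtype=float) - pts      # segment -> field point
    rmag = np.linalg.norm(r, axis=1)
    dB = np.cross(dl, r) / rmag[:, None] ** 3
    return MU0 * current / (4.0 * np.pi) * dB.sum(axis=0)
```

At the centre the analytic field is Bz = mu0*I/(2*R), which the discretized sum reproduces; the same segment-by-segment machinery extends to arbitrary coil windings, which is the convenience the equivalent-source formulation exploits.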