374 results for "fibrado vectorial" (vector bundle)


Relevance:

10.00%

Publisher:

Abstract:

Chagas disease is now active in urban centers of both endemic and nonendemic countries because of congenital transmission, transmission through blood transfusion and/or organ transplantation, and reactivation of the chronic disease, albeit on a smaller scale than vectorial transmission, which is reported as controlled in endemic countries. Oral transmission of Chagas disease has emerged in unpredictable situations in the Amazon region and, more rarely, in nonendemic areas where the domiciliary triatomine cycle was under control, owing to exposure of food to infected triatomines and to contaminated secretions of reservoir hosts. Oral transmission of Chagas disease is suspected when more than one acute case of febrile disease without other causes is linked to a suspected food, and should be confirmed by the presence of the parasite on direct microscopic examination of blood or another biological fluid sample from the patient.

Relevance:

10.00%

Publisher:

Abstract:

Doctoral program: Cybernetics and Telecommunications

Relevance:

10.00%

Publisher:

Abstract:

Vectorization is a very powerful form of data-parallelism exploitation which, when used well, yields better application performance. For this reason, many processors today include vector extensions in their instruction sets, and for machines based on these processors there are many compilers capable of exploiting vectorization. However, not all applications run faster when vectorized, and not all compilers extract the same vector performance from a given application. This work presents an exhaustive performance study of several numerical applications, with the goal of determining the degree of effective utilization of the vector unit. After selecting the Polyhedron, Mantevo, Sequoia, SPECfp, and NPB benchmarks, they were compiled with vectorization enabled and simulated on a modified version of the CMPSim cache simulator, extended with a core modeled on the Intel Xeon Phi coprocessor. In cases of low utilization, a software-level diagnosis of the source of the problem was carried out and improvements were proposed that could increase the effective use of the vector unit. For memory-bound applications, a hardware-level diagnosis was performed to determine to what extent the machine design affects application performance even when the vector unit is used well.
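As a minimal illustration of the kind of data parallelism such studies measure (a hypothetical NumPy sketch, not code from the work), the same multiply-add computation can be written as a scalar loop or as a single vectorized expression that a compiler or SIMD unit can execute in wide chunks:

```python
import numpy as np

def saxpy_scalar(a, x, y):
    # Scalar form: one multiply-add per loop iteration.
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vectorized(a, x, y):
    # Vector form: the whole operation maps onto the vector (SIMD) unit.
    return a * x + y

x = np.arange(4, dtype=np.float64)
y = np.ones(4)
print(saxpy_vectorized(2.0, x, y))  # same values as the scalar loop
```

Whether the vector form actually pays off depends on memory behavior, which is exactly the hardware-level question the study diagnoses for memory-bound benchmarks.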

Relevance:

10.00%

Publisher:

Abstract:

The goal of this final-year degree project (Trabajo Final de Grado, TFG) is the creation of a prototype web application for managing geospatial resources. The proposal arose from the need for a tool that would not have to be installed on a device but instead be served by a web server, allowing access from anywhere and from any device. The result is the Gestor Web de Recursos Geoespaciales con Tecnología OpenLayers, an application that combines several tools (OpenLayers, GeoServer, PostgreSQL, jQuery, …), all based on free software, to provide features such as creating vector primitives on a map, managing and visualizing the associated information, editing styles, and modifying coordinates. All of these are characteristic features of a Geographic Information System (GIS), offered through a comfortable and efficient interface that shields the user from complex internal details. The software has the potential to become a solution to the geospatial information management needs of the ULPGC, especially on the Tafira campus, on which its use has been demonstrated. Moreover, unlike the tools offered by companies such as Google or Microsoft, this application is released entirely under the GNU GPL v3 license, which allows anyone interested to inspect its code, improve it, and add new features.

Relevance:

10.00%

Publisher:

Abstract:

Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among the learning techniques for dealing with structured data, kernel methods are recognized as having a strong theoretical background and as being effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain: the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information for making correct predictions on unseen data; in fact, it tends to produce a discriminating function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the kernel's application in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. The first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing their sparsity with respect to traditional tree kernel functions.
Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. The second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. The third contribution is devoted to reducing the computational burden of calculating a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels, and we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
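To make the sparsity issue concrete, here is a toy sketch of the subtree-kernel idea (ours, not the thesis code): the kernel counts pairs of identical complete subtrees, so two trees with no matching labels score exactly zero.

```python
def subtrees(t):
    # All complete subtrees of a tree written as (label, child, child, ...).
    found = [t]
    for child in t[1:]:
        found.extend(subtrees(child))
    return found

def subtree_kernel(t1, t2):
    # K(t1, t2) = number of pairs of identical complete subtrees.
    s2 = subtrees(t2)
    return sum(s2.count(s) for s in subtrees(t1))

a = ('S', ('NP', ('D',), ('N',)), ('VP', ('V',)))
b = ('S', ('NP', ('D',), ('N',)), ('VP', ('V',), ('NP', ('D',), ('N',))))
print(subtree_kernel(a, b))                # shared fragments found
print(subtree_kernel(a, ('X', ('Y',))))   # disjoint labels: K = 0 (sparsity)
```

Real implementations use dynamic programming over node pairs rather than this quadratic enumeration, which is precisely the cost issue the thesis addresses.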

Relevance:

10.00%

Publisher:

Abstract:

The performance of multichromophoric systems often goes beyond that of the individual chromophore units. The goal of this dissertation, entitled „Multichromophore Systeme auf Basis von Rylencarbonsäureimiden" (Multichromophoric systems based on rylene carboximides), was therefore the synthesis and characterization of multichromophoric molecular architectures. The rylene dyes employed are distinguished by high photochemical stability and nearly quantitative fluorescence quantum yields. The optical and electronic properties of multichromophoric systems depend strongly on the geometric arrangement of the dyes relative to one another; shape-persistent scaffold structures were therefore chosen for their incorporation. The first part of the thesis deals with the incorporation of a single chromophore type and aims, beyond an understanding of chromophore interactions, above all at increasing the absorption cross-section and the fluorescence intensity. Polyphenylene dendrimers, ethynyl-bridged dendrimers, and transition-metal-mediated supramolecular structures serve as scaffolds. Owing to the large number of dyes, the site-defined incorporation, and the high fluorescence quantum yields, these multichromophoric systems are suitable as fluorescence probes and as single-photon emitters. In the second part of the thesis, different chromophore types are linked into multichromophoric systems that enable vectorial energy transfer. With a view to use in photovoltaic cells, a dendritic triad was prepared. A linear variant of a rylene triad constitutes a molecular wire whose bridge element can be lengthened by a suitable synthetic route, so that the energy transport can be studied as a function of distance.

Relevance:

10.00%

Publisher:

Abstract:

This thesis is mainly concerned with a model calculation for generalized parton distributions (GPDs). We calculate vectorial and axial GPDs for the N → N and N → Δ transitions in the framework of a light-front quark model. This requires elaborating a connection between transition amplitudes and GPDs. We provide the first quark model calculations for N → Δ GPDs. The examination of transition amplitudes leads to various model-independent consistency relations. These relations are not exactly obeyed by our model calculation, since the use of the impulse approximation in the light-front quark model leads to a violation of Poincaré covariance. We explore the impact of this covariance breaking on the GPDs and form factors determined in our model calculation and find large effects. The reference-frame dependence of our results, which originates from the breaking of Poincaré covariance, can be eliminated by introducing spurious covariants. We extend this formalism in order to obtain frame-independent results from our transition amplitudes.

Relevance:

10.00%

Publisher:

Abstract:

In many application domains data can be naturally represented as graphs. When the application of analytical solutions to a given problem is unfeasible, machine learning techniques can be a viable way to solve it. Classical machine learning techniques are defined for data represented in vectorial form; recently, some of them have been extended to deal directly with structured data. Among those techniques, kernel methods have shown promising results from both the computational-complexity and the predictive-performance point of view. Kernel methods make it possible to avoid an explicit mapping into vectorial form by relying on kernel functions, which informally are functions calculating a similarity measure between two entities. However, the definition of good kernels for graphs is a challenging problem because of the difficulty of finding a good tradeoff between computational complexity and expressiveness. Another problem we face is learning on data streams, where a potentially unbounded sequence of data is generated by some source. There are three main contributions in this thesis. The first contribution is the definition of a new family of kernels for graphs based on Directed Acyclic Graphs (DAGs). We analyzed two kernels from this family, achieving state-of-the-art results, from both the computational and the classification point of view, on real-world datasets. The second contribution consists in making the application of learning algorithms to streams of graphs feasible; moreover, we defined a principled way to manage memory. The third contribution is the application of machine learning techniques for structured data to non-coding RNA function prediction. In this setting, the secondary structure is thought to carry relevant information. However, existing methods considering the secondary structure have prohibitively high computational complexity. We propose to apply kernel methods in this domain, obtaining state-of-the-art results.
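As a generic illustration of a kernel that compares graphs directly, without an explicit vectorial representation (this is a standard node-label histogram kernel, not one of the DAG kernels proposed in the thesis):

```python
from collections import Counter

def label_kernel(g1, g2):
    # Inner product of node-label histograms: the base case (iteration 0)
    # of Weisfeiler-Lehman-style graph kernels. No explicit feature
    # vector is ever materialized for the caller.
    c1, c2 = Counter(g1['labels']), Counter(g2['labels'])
    return sum(c1[label] * c2[label] for label in c1)

# Two small molecular-style graphs (labels are atom types; edges unused here).
g1 = {'labels': ['C', 'C', 'O', 'H'], 'edges': [(0, 1), (1, 2), (1, 3)]}
g2 = {'labels': ['C', 'O', 'O'],      'edges': [(0, 1), (0, 2)]}
print(label_kernel(g1, g2))
```

Richer kernels, such as the DAG-based ones described above, refine this by comparing neighborhoods and substructures rather than isolated labels, trading computation for expressiveness.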

Relevance:

10.00%

Publisher:

Abstract:

A previously presented algorithm for the reconstruction of bremsstrahlung spectra from transmission data has been implemented in MATHEMATICA. Vectorial algebra has been used to solve the matrix system A · F = T. The new implementation has been tested by reconstructing photon spectra from transmission data acquired in narrow-beam conditions, for nominal energies of 6, 15, and 25 MV. The results were in excellent agreement with the original calculations. Our implementation has the advantage of being based on a well-tested mathematical kernel; furthermore, it offers a comfortable user interface.
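The reconstruction step can be sketched in a few lines (with a made-up response matrix and test spectrum, not the published data): given a matrix A whose entries describe the transmission of each spectral bin through increasing attenuator thicknesses, the fluence spectrum F is recovered from the measurements T by solving A · F = T in the least-squares sense.

```python
import numpy as np

# Hypothetical response matrix: rows = attenuator thicknesses,
# columns = spectral bins (harder bins are attenuated less).
A = np.array([[0.9, 0.7, 0.5],
              [0.8, 0.5, 0.3],
              [0.7, 0.3, 0.12],
              [0.6, 0.2, 0.05]])
F_true = np.array([1.0, 2.0, 0.5])   # known test spectrum
T = A @ F_true                       # simulated transmission measurements

# Reconstruct the spectrum: least-squares solution of A F = T.
F_rec, *_ = np.linalg.lstsq(A, T, rcond=None)
print(np.allclose(F_rec, F_true))
```

In practice the system is noisy and ill-conditioned, so real implementations add regularization or constraints; this sketch only shows the algebraic core.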

Relevance:

10.00%

Publisher:

Abstract:

PRINCIPLES: Cardiogoniometry is a non-invasive technique for quantitative three-dimensional vectorial analysis of myocardial depolarization and repolarization. We describe a method of surface electrophysiological cardiac assessment using cardiogoniometry performed at rest to detect variables helpful in identifying coronary artery disease. METHODS: Cardiogoniometry was performed on 793 patients prior to diagnostic coronary angiography. Using 13 variables in men and 10 in women, values from 461 patients were retrospectively analyzed to derive a diagnostic score identifying patients with coronary artery disease. This score was then prospectively validated on 332 patients. RESULTS: Cardiogoniometry showed a prospective diagnostic sensitivity of 64% and a specificity of 82%. ECG diagnostic sensitivity was significantly lower at 53%, with a similar specificity of 75%. CONCLUSIONS: Cardiogoniometry is a new, noninvasive, quantitative electrodiagnostic technique helpful in identifying patients with coronary artery disease. It can easily be performed at rest and delivers an accurate, automated diagnostic score.
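For reference, sensitivity and specificity of a thresholded diagnostic score are computed as follows (hypothetical scores and labels, purely illustrative; not the study's 13/10-variable score):

```python
def sensitivity_specificity(scores, labels, threshold):
    # labels: 1 = coronary artery disease present, 0 = absent.
    # A patient tests positive when the score reaches the threshold.
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.8, 0.4, 0.7, 0.2, 0.3, 0.6, 0.1]   # made-up diagnostic scores
labels = [1,   1,   1,   0,   0,   0,   1,   0]
sens, spec = sensitivity_specificity(scores, labels, threshold=0.5)
print(sens, spec)
```

Moving the threshold trades one quantity against the other, which is why the retrospective derivation cohort and the prospective validation cohort are kept separate.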

Relevance:

10.00%

Publisher:

Abstract:

We investigate the operation of optical isolators based on magneto-optic waveguide arrays beyond the coupled-mode analysis. Semi-vectorial beam propagation simulations demonstrate that evanescent tail coupling and the effects of radiation are responsible for degrading the device's performance. Our analysis suggests that these effects can be mitigated when the array size is scaled up. In addition, we propose the use of radiation blockers to offset some of these effects, and we show that they provide a dramatic improvement in performance. Finally, we study the robustness of the system with respect to fabrication tolerances using coupled-mode theory, and show that small random variations in the system's parameters tend to average out as the number of optical guiding channels increases.
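The averaging-out of fabrication tolerances can be illustrated with a generic Monte Carlo sketch (our own toy model, unrelated to the paper's beam propagation simulations): the mean of n independent per-channel perturbations shrinks roughly as 1/sqrt(n).

```python
import random

def mean_perturbation_magnitude(n_channels, sigma, trials=2000, seed=1):
    # Average, over many fabricated devices, of |mean per-channel
    # deviation|: random variations across n_channels tend to cancel,
    # so this shrinks roughly as 1/sqrt(n_channels).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        deviations = [rng.gauss(0.0, sigma) for _ in range(n_channels)]
        total += abs(sum(deviations) / n_channels)
    return total / trials

small_array = mean_perturbation_magnitude(4, sigma=0.05)
large_array = mean_perturbation_magnitude(64, sigma=0.05)
print(small_array > large_array)
```

This is only the statistical intuition; the paper's actual robustness result follows from coupled-mode theory applied to the specific isolator geometry.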

Relevance:

10.00%

Publisher:

Abstract:

Cholesterol in milk is derived from the circulating blood through a complex transport process involving the mammary alveolar epithelium; the details of the mechanisms involved in this transfer are unclear. Apolipoprotein A-I (apoA-I) is an acceptor of cellular cholesterol effluxed by the ATP-binding cassette (ABC) transporter A1 (ABCA1). We aimed to 1) determine the binding characteristics of (125)I-apoA-I and (3)H-cholesterol to enriched plasma membrane vesicles (EPM) isolated from lactating and non-lactating bovine mammary glands (MG), 2) optimize the components of an in vitro model describing cellular (3)H-cholesterol efflux in primary bovine mammary epithelial cells (MeBo), and 3) assess vectorial cholesterol transport in MeBo using Transwell® plates. The amounts of isolated EPM and the maximal binding capacity of (125)I-apoA-I to EPM differed depending on the MG's physiological state, while the kinetics of (3)H-cholesterol and (125)I-apoA-I binding were similar. (3)H-cholesterol incorporated maximally into EPM after 25 ± 9 min. The time to achieve half-maximal binding of (125)I-apoA-I at equilibrium was 3.3 ± 0.6 min. The dissociation constant (KD) of (125)I-apoA-I ranged between 40 and 74 nmol/L. Cholesterol loading of EPM increased both cholesterol content and (125)I-apoA-I binding. The ABCA1 inhibitor probucol displaced (125)I-apoA-I binding to EPM and reduced (3)H-cholesterol efflux in MeBo. Time-dependent (3)H-cholesterol uptake and efflux showed inverse patterns. The defined binding characteristics of cholesterol and apoA-I served to establish an efficient and significantly shorter cholesterol efflux protocol for MeBo. Applying this protocol in Transwell® plates, with the upper chamber mimicking the apical (milk-facing) and the bottom chamber corresponding to the basolateral (blood-facing) side of the cells, showed that the degree of (3)H-cholesterol efflux in MeBo differed significantly between the apical and basolateral aspects.
Our findings support the importance of the apoA-I/ABCA1 pathway in MG cholesterol transport and suggest its role in influencing milk composition and directing cholesterol back into the bloodstream.
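The reported dissociation constant (KD of 40-74 nmol/L) can be read through the standard one-site binding isotherm (a generic textbook model, not the paper's fitting procedure): binding is half-maximal when the ligand concentration equals KD.

```python
def fraction_bound(ligand_nM, kd_nM):
    # One-site (Langmuir) binding isotherm:
    # bound/max = [L] / (KD + [L]); half-maximal binding at [L] = KD.
    return ligand_nM / (kd_nM + ligand_nM)

kd = 57.0  # nmol/L, midpoint of the reported 40-74 nmol/L range (assumption)
print(fraction_bound(57.0, kd))    # 0.5: half-maximal at [L] = KD
print(fraction_bound(570.0, kd))   # approaches saturation at 10x KD
```

A ten-fold excess over KD already gives about 91% occupancy, which is why saturating ligand concentrations are used when measuring maximal binding capacity.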

Relevance:

10.00%

Publisher:

Abstract:

MATERNO-FETAL NUTRIENT TRANSFER ACROSS A PRIMARY HUMAN TROPHOBLAST MONOLAYER. Objectives: Polarized trophoblasts represent the transport and metabolic barrier between the maternal and fetal circulation. Currently, human placental nutrient transfer in vitro is mainly investigated unidirectionally on cultured primary trophoblasts, or bidirectionally on the Transwell® system using BeWo cells treated with forskolin. As forskolin can induce various gene alterations (e.g. in cAMP response element genes), we aimed to establish a physiological primary trophoblast model for materno-fetal nutrient exchange studies without forskolin application. Methods: Human term cytotrophoblasts were isolated by enzymatic digestion and Percoll® gradient separation. The purity of the primary cells was assessed by flow cytometry using the trophoblast-specific marker cytokeratin-7. After screening different coating matrices, we optimized the growth conditions for the primary cytotrophoblasts on Transwell® inserts. The morphology of trophoblasts cultured for 5 days was determined by scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Membrane markers were visualized using confocal microscopy. Additionally, transport studies were performed on the polarized trophoblasts in the Transwell® system. Results: Over 5 days of culture, the trophoblasts (>90% purity) developed a modest trans-epithelial electrical resistance (TEER) and a size-dependent apparent permeability coefficient (Papp) to fluorescently labeled compounds (MW ~400–70,000 Da). SEM analyses confirmed a confluent trophoblast layer with numerous microvilli at day six, and TEM revealed a monolayer with tight junctions. Immunocytochemistry on the confluent trophoblasts showed positivity for the cell-cell adhesion molecule E-cadherin, the tight junction protein ZO-1, and the membrane proteins ABCA1 and Na+/K+-ATPase. Vectorial glucose and cholesterol transport studies confirmed the functionality of the cultured trophoblast barrier.
Conclusion: Evidence from cell morphology, biophysical parameters, and cell marker expression indicates the successful and reproducible establishment of a primary trophoblast monolayer model suitable for transport studies. Applying this model to pathological trophoblasts will help to better understand the mechanisms underlying gestational diseases and to define the consequences of placental pathology for materno-fetal nutrient transport.
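The apparent permeability coefficient mentioned in the Results is conventionally computed as Papp = (dQ/dt) / (A · C0). A minimal sketch with made-up numbers (the 1.12 cm² area is a typical 12-well insert size, assumed here, not taken from the study):

```python
def apparent_permeability(dq_dt, area_cm2, c0):
    # Standard Transwell formula: P_app = (dQ/dt) / (A * C0), in cm/s.
    #   dq_dt    : transport rate into the receiver chamber (e.g. nmol/s)
    #   area_cm2 : insert membrane area (cm^2)
    #   c0       : initial donor-chamber concentration (nmol/cm^3)
    return dq_dt / (area_cm2 * c0)

# Hypothetical numbers for illustration only:
papp = apparent_permeability(dq_dt=2e-4, area_cm2=1.12, c0=10.0)
print(papp)  # cm/s
```

The size dependence reported above appears as a lower dq/dt, hence a lower Papp, for the larger fluorescent tracers.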

Relevance:

10.00%

Publisher:

Abstract:

The fact that the natural remanent magnetization (NRM) intensity of mid-ocean ridge basalt (MORB) samples shows systematic variations as a function of age has long been recognized: maximum as well as average intensities are generally high for very young samples, fall off rather rapidly to less than half the recent values in samples between 10 and 30 Ma, and then slowly rise through the early Tertiary and Cretaceous to values approaching those of the very young samples. NRM intensities measured in this study follow the same trends observed in previous publications. Here we take a statistical approach and examine whether this pattern can be explained by variations in one or more of the previously proposed mechanisms: the chemical composition of the magnetic minerals, the abundance of these magnetization carriers, vectorial superposition of parallel or antiparallel components of magnetization, magnetic grain- or domain-size patterns, low-temperature oxidation to titanomaghemite, or geomagnetic field behavior. We find that the samples do not show any compositional, petrological, rock-magnetic, or paleomagnetic patterns that can explain the trends. Geomagnetic field intensity is the only effect that cannot be tested directly on the same samples, but it shows a pattern similar to our measured NRM intensities. We therefore conclude that the geomagnetic field strength was, on average, significantly greater during the Cretaceous than during the Oligocene and Miocene.

Relevance:

10.00%

Publisher:

Abstract:

Learning as a process can be considered an important evolutionary advance for all living systems that acquired it in the early stages of the development of life. The perception of an environment that has a "past" and a "present" allowed primitive animals to acquire a more complete view of the world around them. The cognitive use of the information available in a living system is now recognized as "learning". Although many years have passed and many researchers have been deeply engaged in the study of memory and learning, their intricate nature is still not well understood. In this work, many terms common in research on this topic, such as memory, learning, and rehearsal, are elaborated and redefined in a narrower context with the intention of standardizing their understanding. The learning model is revisited in terms of a "learning circuit". The unifying concept of the "engram of the vectorial unit of memory" for the learning process and the storage of information, described previously, is also expanded. Finally, the implications of the proposed model are considered in the context of pathologies that produce memory deficits, with the model's predictions evaluated against behavioural evidence from patients with lesions localized in certain parts of the brain.