869 results for Intergenerational relations in literature.


Relevance:

100.00%

Publisher:

Abstract:

New trends in biometrics are oriented to mobile devices in order to increase overall security in daily actions such as bank account access, e-commerce or even document protection within the mobile device. However, applying biometrics to mobile devices implies challenging aspects of biometric data acquisition, feature extraction and private data storage. Concretely, this paper deals with the problem of hand segmentation given a picture of the hand against an unknown background, requiring an accurate result in terms of hand isolation. For the sake of user acceptability, no restrictions are imposed on the background, so hand images can be taken without any constraint, which makes segmentation a demanding task. Multiscale aggregation strategies are proposed to solve this problem, owing to their accurate results in unconstrained and complicated scenarios together with their time performance. The method is evaluated on a public synthetic database with 480,000 images covering different backgrounds and illumination environments. The results obtained in terms of accuracy and time performance highlight its capability as a suitable solution to the problem of hand segmentation in contact-less environments, outperforming competitive methods in the literature such as Lossy Data Compression image segmentation (LDC).
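The abstract only names the multiscale-aggregation strategy; as a rough, hypothetical illustration of the general idea (coarsen the image, decide at the coarse level, propagate labels back to full resolution), a toy two-class segmenter might look like the sketch below. The paper's actual aggregation scheme is more elaborate; everything here is an invented simplification.

```python
import numpy as np

def multiscale_segment(image, levels=4):
    """Toy multiscale aggregation: coarsen by 2x2 averaging, make a two-class
    decision at the coarsest level, then upsample the labels back to full
    size. Illustrative only -- not the paper's algorithm."""
    pyramid = [image.astype(float)]
    for _ in range(levels):
        im = pyramid[-1]
        h, w = (im.shape[0] // 2) * 2, (im.shape[1] // 2) * 2
        im = im[:h, :w]  # crop odd row/column before 2x2 averaging
        coarse = 0.25 * (im[0::2, 0::2] + im[1::2, 0::2] +
                         im[0::2, 1::2] + im[1::2, 1::2])
        pyramid.append(coarse)
    coarsest = pyramid[-1]
    labels = (coarsest > coarsest.mean()).astype(int)  # coarse 2-class decision
    # Propagate the coarse decision back down to the original resolution.
    for level in reversed(pyramid[:-1]):
        labels = np.kron(labels, np.ones((2, 2), dtype=int))
        labels = labels[:level.shape[0], :level.shape[1]]
        pad_h = level.shape[0] - labels.shape[0]
        pad_w = level.shape[1] - labels.shape[1]
        if pad_h or pad_w:  # restore rows/columns cropped on the way down
            labels = np.pad(labels, ((0, pad_h), (0, pad_w)), mode='edge')
    return labels
```

In a real pipeline the coarse-level decision would be driven by skin-colour and texture cues rather than a global mean threshold.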


This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out using a publicly available synthetic database with 408,000 hand images on different backgrounds, comparing performance in terms of accuracy and computational cost to two competitive segmentation methods in the literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results show that the proposed method outperforms these competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.
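The evaluation described above scores segmenters on both accuracy and computational cost. A minimal harness for that kind of comparison (the segmenter callables and the pixel-accuracy score are generic placeholders, not the paper's exact metric) could be:

```python
import time
import numpy as np

def pixel_accuracy(pred, truth):
    """Fraction of pixels labelled correctly -- one common segmentation score."""
    return float((pred == truth).mean())

def benchmark(method, images, truths):
    """Run a segmentation callable over a dataset, returning mean accuracy
    and total wall-clock time in seconds."""
    t0 = time.perf_counter()
    scores = [pixel_accuracy(method(im), gt) for im, gt in zip(images, truths)]
    return float(np.mean(scores)), time.perf_counter() - t0

# Hypothetical usage with two competing segmenters:
# acc_prop, t_prop = benchmark(gaussian_multiscale, images, truths)
# acc_ldc,  t_ldc  = benchmark(ldc_segment,         images, truths)
```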


This paper seeks to analyze the political dimension of the body, and consequently the inherently political dimension of space, through the instrumental notion of situation, understood as a spatio-temporal mesh configured by bodies, practices and discourses. The political, understood as the potential for action (or non-action) underlying the individual body, implies a renewed definition of landscape as something that results from the body's doing. Landscape becomes a multiple corporeality, a field of relations in which we discover ourselves enmeshed, not just placed; a field in which the limit is not a frontier but a bond and a common dimension. A disquieting, ambiguous zone appears where individual spatiality is born out of the body through the actualization of its political potential and entangles with others to constitute a common spatiality, the political action of the multitude. The article is organized as the description of a back-and-forth movement between the revolts of Tehran in 2009 and the Iranian revolution of 1979. A detour into the works of Robert Morris and Trisha Brown is also required in order to understand the link between the body and the constitution of a common spatiality.


Diabetes encompasses a series of metabolic diseases characterized by abnormally high blood glucose concentrations. In the case of type 1 diabetes (T1D), this situation is caused by a total absence of endogenous insulin secretion, which prevents most tissues from using glucose. In these circumstances, exogenous insulin supplies are necessary to preserve the patient's life, although caution is always needed to avoid acute falls of glycaemia below safe levels. In addition to insulin administration, meal intakes and physical activity are fundamental factors influencing glucose homoeostasis. Consequently, a successful management of T1D should incorporate these two physiological phenomena, based on an appropriate identification and modelling of these events and of their corresponding effects on the glucose-insulin balance. In particular, artificial pancreas systems, designed to perform automated control of the patient's glycaemia levels, may benefit from the integration of this type of information. The first part of this PhD thesis covers the characterization of the acute effect of physical activity on glucose profiles. With this aim, a systematic literature review and meta-analyses were conducted to determine the responses to various exercise modalities in patients with T1D, assessed via rate-of-change magnitudes that quantify temporal variations in glycaemia.
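The rate-of-change magnitudes mentioned above can be computed directly from a uniformly sampled glucose trace; a minimal sketch (the 5-minute sampling interval and mg/dL units are assumptions, typical of CGM data, not stated in the abstract):

```python
import numpy as np

def glucose_rate_of_change(glucose_mgdl, dt_min=5.0):
    """Finite-difference rate of change of glycaemia (mg/dL per minute)
    for a uniformly sampled glucose trace, e.g. 5-minute CGM readings."""
    glucose = np.asarray(glucose_mgdl, dtype=float)
    return np.gradient(glucose, dt_min)

# Example: a steady drop of 2 mg/dL per minute during exercise
# trace = 180 - 2.0 * np.arange(0, 60, 5)   # one reading every 5 minutes
# glucose_rate_of_change(trace)             # approximately -2.0 throughout
```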
On the other hand, a reliable identification of physical activity periods is an essential prerequisite for feeding artificial pancreas systems with information about exercise in ambulatory, free-living conditions. For this reason, the second part of this thesis focuses on the proposal and evaluation of an automatic system devised to recognize physical activity, classifying its intensity level (light, moderate or vigorous) and, for vigorous periods, also identifying the exercise modality (aerobic, mixed or resistance); both aspects have a distinctive influence on the predominant metabolic pathway involved in fuelling exercise and, therefore, on the glycaemic responses in T1D. Various combinations of machine learning and pattern recognition techniques are applied to the fusion of multi-modal signal sources, namely accelerometry and heart rate measurements, which describe both the mechanical aspects of movement and the physiological response of the cardiovascular system to exercise. An additional temporal filtering module is incorporated after recognition in order to exploit the considerable temporal coherence (i.e. redundancy) present in the data, which stems from the fact that, in practice, physical activity trends often remain stable over time instead of fluctuating rapidly and repeatedly. The third block of this PhD thesis addresses meal intakes in the context of T1D. In particular, a number of compartmental models are proposed and compared in terms of their ability to describe mathematically the remote effect of exogenous plasma insulin concentrations on the disposal rates of meal-attributable glucose, an aspect which had not yet been incorporated into the prevailing T1D patient models in the literature. Data were acquired in an experiment conducted at the Institute of Metabolic Science (University of Cambridge, UK) on 16 young patients.
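The temporal filtering module applied after activity recognition exploits the fact that activity labels rarely flip between adjacent windows. One simple way to realise that idea, a sliding-window majority vote (a generic smoothing sketch, not necessarily the thesis's exact filter), is:

```python
from collections import Counter

def temporal_mode_filter(labels, window=5):
    """Smooth a sequence of per-window activity labels by majority vote over
    a sliding window, exploiting temporal coherence: isolated spurious labels
    surrounded by a stable activity are voted away."""
    labels = list(labels)
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        smoothed.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return smoothed
```

A classifier's raw output such as `['light', 'light', 'vigorous', 'light', 'light']` would be smoothed to all-`'light'`, removing the one-window glitch while leaving sustained activity changes intact.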
A variable-target glucose clamp replicated their individual glucose profiles, as observed during a preliminary visit after ingesting either a high or a low glycaemic-load evening meal. The six mechanistic models under evaluation comprised: a) two-compartment submodels for the glucose tracer masses, b) a single-compartment submodel for insulin's remote effect, c) two types of activation of this remote effect (either linear or with a 'cut-off' point), and d) diverse forms of initial conditions.
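As a hedged illustration of the kind of structure being compared, the sketch below Euler-integrates a generic two-compartment glucose subsystem coupled to a single-compartment remote insulin effect with linear activation. All rate constants, initial masses and the constant insulin input are invented placeholders, not the thesis's fitted values or exact equations.

```python
import numpy as np

def simulate(t_end=180.0, dt=0.1, k12=0.05, k21=0.03, k_out=0.02,
             p_act=0.01, s_i=0.002, insulin=lambda t: 15.0):
    """Euler integration of a toy two-compartment glucose model (masses q1,
    q2) with a remote insulin effect x that enhances glucose disposal.
    Placeholder parameters; illustrative structure only."""
    n = int(t_end / dt)
    q1, q2, x = 250.0, 0.0, 0.0          # glucose masses and remote effect
    out = np.empty((n, 3))
    for i in range(n):
        t = i * dt
        dx  = p_act * (s_i * insulin(t) - x)        # linear remote activation
        dq1 = -(k12 + k_out * (1.0 + x)) * q1 + k21 * q2
        dq2 = k12 * q1 - k21 * q2
        q1 += dt * dq1; q2 += dt * dq2; x += dt * dx
        out[i] = (q1, q2, x)
    return out
```

The 'cut-off' activation variant mentioned in c) would replace the linear `s_i * insulin(t)` term with one that is zero below a threshold insulin concentration.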


Electrical power obtained from floating offshore wind turbines is one of the most promising resources for reducing fossil fuel consumption and covering worldwide energy demand. The concept is especially competitive in countries such as Spain, where the continental shelf is narrow and provides little space for fixed structures. Among the different floating-structure concepts, this thesis deals with the semisubmersible one. Platforms of this kind may experience resonant motions both in surge and in heave. In surge, since the platform's natural period is long, such resonance can be excited by second-order slow-drift forces and may have a substantial influence on mooring loads. In heave, first-order forces can induce significant motion, whose damping is a crucial factor for the platform's downtime. These two topics have been investigated in this thesis. To this end, a design developed during the HiPRWind EU project has been selected as the reference case study.
The platform is composed of three cylindrical legs linked together by a set of structural braces. The cylinders provide buoyancy and restoring forces and moments. Large circular heave plates have been attached to their bases. The design is similar to others documented in the literature (e.g. Windfloat), which implies the outcomes may have general value. A large-scale model of one of the legs has been built in order to study heave damping through forced oscillations. The final dimensions of the specimen (one-metre-diameter discs) make it, to the candidate's knowledge, the largest for which data have been published. The model design allows the fitting of either a plain solid heave plate or a flapped, reinforced one; both have been built. The latter is a model-scale reproduction of the prototype heave plate and includes some distinctive features, the most important being a vertical flap on its perimeter. The forced oscillation tests have been conducted for a range of frequencies and amplitudes, with both the solid plain model and the vertical-flap one. Forces have been measured, from which added mass and damping coefficients have been obtained. These are necessary for accurate time-domain simulations in mooring design. The coefficients have been compared with the literature and with potential-flow and CFD predictions. In order to provide information for the structural design of the platform, pressure measurements on the top and bottom sides of the heave discs have been recorded and the pressure differences analyzed. In addition, in order to conduct a detailed investigation of the numerical estimations of the slow-drift forces on the HiPRWind platform, an experimental campaign involving captive (fixed) model tests of the whole platform in bichromatic waves has been carried out.
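In forced-oscillation tests of this kind, added mass and damping are commonly extracted by fitting the measured force record to acceleration- and velocity-proportional components. A minimal least-squares sketch of that reduction (assuming a prescribed harmonic motion and that restoring and mean force components have already been removed; this is a standard identification scheme, not necessarily the thesis's exact procedure):

```python
import numpy as np

def added_mass_damping(t, force, amp, omega):
    """Fit a measured force record from a forced heave oscillation
    z(t) = amp*sin(omega*t) to F = -Ma*z'' - B*z', returning the added
    mass Ma and linearized damping coefficient B."""
    acc = -amp * omega**2 * np.sin(omega * t)   # z''(t)
    vel =  amp * omega    * np.cos(omega * t)   # z'(t)
    A = np.column_stack([-acc, -vel])           # force ~ A @ [Ma, B]
    (ma, b), *_ = np.linalg.lstsq(A, force, rcond=None)
    return ma, b
```

With amplitude-dependent damping (as reported in the conclusions below), the fit would be repeated per test amplitude, yielding B as a function of `amp`.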
Although these tests do not reproduce a fully realistic sea state, they allowed a preliminary verification of the numerical model based directly on the forces measured on the structure. The following outcomes can be enumerated:
1. Damping and added mass coefficients show, on the one hand, a small dependence on frequency and, on the other, a large dependence on motion amplitude, which is coherent with previously published research.
2. Measurements with the prototype plate, equipped with the vertical flap, show that damping drops significantly compared with the plain one. This implies that, for tank tests of the whole floater and turbine, the prototype plate, equipped with the flap, should be incorporated into the model.
3. Added mass values do not change greatly between the plain plate and the one equipped with a vertical flap.
4. A conservative damping coefficient equal to 6% of the critical damping can be considered adequate for the prototype heave plate in frequency-domain analysis. A corresponding drag coefficient equal to 4.0 can be used in time-domain simulations to define Morison elements.
5. When comparing with published data, some discrepancies in the added mass and damping coefficients for the solid plain plate have been found. Explanations have been suggested, focusing mainly on differences in thickness ratio and distance to the free surface, and on possible scale effects.
6. Pressures on the plate equipped with the vertical flap are similar in magnitude to those on the plain plate, although substantial differences are present close to the edge, where the flap induces a larger pressure difference in the reinforced case.
7. The maximum pressure difference scales coherently with the force equivalent to the acceleration of the added mass distributed over the disc surface.
8. Added mass coefficient values predicted with the potential-flow solver (WADAM) are not accurate enough.
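The correspondence in conclusion 4 between a fraction of critical damping (frequency domain) and a quadratic Morison drag coefficient (time domain) rests on the standard harmonic linearization of quadratic drag: for motion of amplitude A at frequency omega, equating dissipated energy per cycle gives an equivalent linear coefficient B_eq = (8 / 3*pi) * (1/2) * rho * Cd * S * omega * A. A hedged numerical sketch (the textbook linearization, with illustrative inputs; the thesis's exact conversion is not given here):

```python
import numpy as np

def equivalent_linear_damping(cd, rho, area, omega, amp):
    """Energy-equivalent linearization of quadratic Morison drag
    F = -0.5*rho*Cd*S*|v|*v for harmonic motion of amplitude amp at
    frequency omega: B_eq = (8/(3*pi)) * 0.5 * rho * Cd * S * omega * amp."""
    return (8.0 / (3.0 * np.pi)) * 0.5 * rho * cd * area * omega * amp

def damping_ratio(b_eq, mass_total, omega_n):
    """Fraction of critical damping for a 1-DOF system: zeta = B/(2*m*wn),
    where mass_total includes the added mass."""
    return b_eq / (2.0 * mass_total * omega_n)
```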
The solver used does not support modeling thin plates with doublets (dipoles). The relatively low accuracy of the results highlights the importance of such elements when performing potential-flow simulations of offshore platforms that include thin plates.
9. For the full CFD solver (Ansys CFX), the accuracy of the computations is reasonable for the plain plate. That accuracy diminishes for the disc equipped with a vertical flap, an expected result considering the greater complexity of the flow.
10. Regarding second-order effects, the results show that, although the main trend in the behavior of the second-order forces is well captured by the numerical predictions, some underprediction of the experimental values is visible. The gap between experimental and numerical results is more pronounced when Newman's approximation is considered, which makes use exclusively of the mean drift forces calculated in the first-order solution.
11. It should be observed that the trends observed in the fixed-model tests may change when the body is free to float, and the impact that possible errors in the estimation of the second-order forces may have on the mooring system depends on the sea conditions that will ultimately impose the maximum loads on the mooring lines.
Nevertheless, the preliminary results obtained in this research confirm that a more detailed investigation of the methods adopted for the estimation of nonlinear wave forces on floating offshore wind turbines (FOWTs) would be welcome and may provide further guidance for the design of such systems. As a final remark, the candidate hopes this research can benefit the offshore wind industry by improving the hydrodynamic design of the semisubmersible concept.


Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning.
Directed hypergraphs have also been presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes; as a result, the size of the original hypergraph can be reduced significantly if the strongly-connected components have many nodes. This approach may contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms for computing the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram where more than 70% of the nodes are distributed into three large sets, one of which is a large strongly-connected component. This particular structure has also been observed in complex networks in other fields such as biology. Similar studies could not previously be conducted on directed hypergraphs because no algorithm existed for computing the strongly-connected components of a hypergraph. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and show their correctness and computational complexity. One of these algorithms is inspired by Tarjan's algorithm for directed graphs.
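To make the notion of a hyperedge relating two node sets concrete, the sketch below computes forward reachability in a directed hypergraph under so-called B-connectivity semantics, where a hyperedge can be traversed only once its entire tail set has been reached. Whether this matches the connectivity notion used in the thesis is an assumption; the representation and fixed-point iteration are illustrative.

```python
def b_reachable(hyperedges, source):
    """Forward reachability in a directed hypergraph under B-connectivity.
    `hyperedges` is a list of (tail_set, head_set) pairs; a hyperedge fires
    only when every node in its tail set is already reached."""
    reached = {source}
    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for tail, head in hyperedges:
            if tail <= reached and not head <= reached:
                reached |= head
                changed = True
    return reached
```

For example, with edges `({'a'}, {'b'})` and `({'b', 'c'}, {'d'})`, node `'d'` is unreachable from `'a'` until `'c'` also becomes reachable, illustrating how hyperedge semantics differ from plain directed edges.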
The second algorithm follows a simple approach to compute the strongly-connected components. This approach is based on the fact that two nodes of a graph that are strongly connected reach exactly the same nodes; in other words, their reachable sets coincide. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by existing and well-known random graph models like Erdős–Rényi and Newman–Watts–Strogatz. Besides the application examples that we mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships; and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can succinctly be represented using the Atomic Decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The Atomic Decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology. 
A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to the strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and, in this manner, compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO Bioportal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
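The reachability-based second algorithm can be illustrated concretely: two nodes are in the same strongly-connected component exactly when their forward-reachable sets coincide. The following minimal Python sketch assumes B-connectivity semantics (a hyperedge fires only once its entire tail set has been reached) and a toy data representation of our own; it is not the HyS implementation.

```python
def reachable(hyperedges, start):
    """Forward-reachable set of `start` under B-connectivity: a hyperedge
    (tails, heads) fires only once all of its tail nodes are reached."""
    reached = {start}
    changed = True
    while changed:
        changed = False
        for tails, heads in hyperedges:
            if tails <= reached and not heads <= reached:
                reached |= heads
                changed = True
    return frozenset(reached)

def strongly_connected_components(nodes, hyperedges):
    """Group nodes whose reachable sets coincide: such nodes are mutually
    reachable, hence in the same strongly-connected component."""
    buckets = {}
    for v in nodes:
        buckets.setdefault(reachable(hyperedges, v), set()).add(v)
    return list(buckets.values())
```

This naive fixpoint sketch is quadratic or worse; the thesis algorithms are the efficient alternatives, but the grouping criterion is the same.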

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Radon gas (Rn) is a natural radioactive gas present in some soils and able to penetrate buildings through the parts of the building envelope in contact with the soil. Radon can accumulate within buildings and consequently be inhaled by their occupants. Because it is a radioactive gas, its disintegration process produces alpha particles that, in contact with the lung epithelium, can produce alterations potentially giving rise to cancer. Many international organizations related to health protection, such as the WHO, confirm this causality. One way to avoid the accumulation of radon in buildings is to use the building envelope as a radon barrier. The extent to which concrete provides such a barrier is described by its radon diffusion coefficient (DRn), a parameter closely related to porosity (ɛ) and tortuosity factor (τ). The measurement of the radon diffusion coefficient presents challenges, due to the absence of standard procedures, the requirement to establish adequate airtightness in the testing apparatus (referred to here as the diffusion cell), and the fact that measurement has to be carried out in an environment certified for the use of calibrated radon sources. In addition, calibrated radon sources are costly. The measurement of the diffusion coefficient of a non-radioactive gas is less complex, but nevertheless retains a degree of difficulty due to the need to provide reliably airtight apparatus for all tests. Other parameters that can characterize and describe the process of gas transport through concrete include the permeability coefficient (K) and the electrical resistivity (ρe), both of which can be measured relatively easily with standardized procedures. The use of these parameters would simplify the characterization of concrete behaviour as a radon barrier. Although earlier studies exist describing correlations among these parameters, there is, as has been observed in the literature, little common ground between the various research efforts. 
For precisely this reason, prior to any attempt to measure radon diffusion, it was deemed necessary to carry out further research in this area, as a foundation for the current work, to explore potential relationships among the following parameters: porosity-tortuosity, oxygen diffusion coefficient, permeability coefficient and resistivity. Permeability coefficient measurement (m²) presents a more straightforward challenge than diffusion coefficient measurement. Some authors identify a relationship between both coefficients, including Gaber (1988), who proposes: k = a·D^n (Equation 1), where a = A/(8π·D0_O2), A = sample cross-section, and D0_O2 = diffusion coefficient in air (m²/s). Other studies (Klink et al. 1999, Gaber and Schlattner 1997, Gräf and Grube et al. 1986) experimentally relate both coefficients for different types of concrete, confirming that this relationship exists, as represented by the simplified expression: k ≈ D^n (Equation 2). Each study established a different value for n, varying from 1.3 to 2.5, so a more general way of determining n is needed, since these models by themselves cannot estimate the diffusion coefficient; and if the diffusion coefficient has to be measured anyway in order to establish n, these relationships lose their practical interest. The measurement of electrical resistivity is easier than diffusion coefficient measurement. Correlation between the parameters can be established via Einstein's law, which relates the movement of electrical charges to the conductivity of the medium according to the expression: D_e = k/ρ_e (Equation 3), where D_e = diffusion coefficient (cm²/s), k = a constant, and ρ_e = electrical resistivity (Ω·cm). The tortuosity factor is used to represent the uneven geometry of concrete pores, which are not straight but tortuous. This factor was first introduced in the literature to relate global porosity with fluid transport in a porous medium, and can be formulated in a number of different ways. 
For example, it can take the form of Equation 4 (Mason and Malinauskas), which combines molecular and Knudsen diffusion using the tortuosity factor: D = ε^τ·[(3/(2r))·√(πM/(8RT)) + 1/D_0]^(−1) (Equation 4), where r = mean pore radius obtained from MIP (µm), M = molecular mass of the gas, R = ideal gas constant, T = temperature (K), and D_0 = diffusion coefficient in air (m²/s). Few studies provide any insight as to how to obtain the tortuosity factor. The work of Andrade (2012) is exceptional in this sense, as it outlines how the tortuosity factor can be deduced from the pore-size distribution (from MIP) through the equation: Ø_th = Ø_0·ε^(−τ) (Equation 5), where Ø_th = threshold diameter (µm), Ø_0 = minimum diameter (µm), ɛ = global porosity, and τ = tortuosity factor. Alternatively, the following equation may be used to obtain the tortuosity factor: D_O2 = D0_O2·ε^τ (Equation 6), where D_O2 = oxygen diffusion coefficient obtained experimentally (m²/s), and D0_O2 = oxygen diffusion coefficient in air (m²/s). This equation has been inferred from Archie's law, ρ_e = a·ρ_0·ε^(−m), and from the Einstein law mentioned above, using the values of the oxygen diffusion coefficient obtained experimentally. The principal objective of the current study was to establish correlations between the different parameters that characterize gas transport through concrete. The achievement of this goal will facilitate the assessment of the useful life of concrete, as well as open the door to the proactive use of concrete as a radon barrier. Two further objectives were formulated within the current study: 1. To develop a method for the measurement of the gas diffusion coefficient in concrete. 2. To model an analytic estimation of the radon diffusion coefficient from parameters related to concrete porosity and tortuosity factor. In order to assess the possible correlations, the parameters were measured using standardized procedures, or procedures purpose-built in the laboratory, for the study of Equations 1, 2 and 3. 
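Equations 5 and 6 can each be solved for the tortuosity factor in closed form, and the analytic radon estimate then mirrors Equation 6. A small Python sketch of these rearrangements; the default value for the radon diffusion coefficient in open air is an assumed order-of-magnitude figure, not a value taken from this study:

```python
import math

def tortuosity_from_pores(d_threshold, d_min, porosity):
    """Eq. 5: Ø_th = Ø_0·ε^(−τ)  →  τ = −ln(Ø_th/Ø_0)/ln(ε)."""
    return -math.log(d_threshold / d_min) / math.log(porosity)

def tortuosity_from_diffusion(d_o2, d_o2_air, porosity):
    """Eq. 6: D_O2 = D0_O2·ε^τ  →  τ = ln(D_O2/D0_O2)/ln(ε)."""
    return math.log(d_o2 / d_o2_air) / math.log(porosity)

def radon_diffusion_estimate(porosity, tau, d0_rn=1.1e-5):
    """Analytic estimate D_Rn ≈ D0_Rn·ε^τ, mirroring Eq. 6 for radon.
    d0_rn (radon diffusion coefficient in open air, m²/s) is an assumed
    illustrative value, not one reported in the study."""
    return d0_rn * porosity ** tau
```

Since ε < 1, ln ε < 0, so both rearrangements yield a positive τ whenever Ø_th > Ø_0 and D_O2 < D0_O2, consistent with the factors reported in the conclusions.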
To measure the gas diffusion coefficient, a diffusion cell was designed and manufactured, with the design evolving over several cycles of research, leading ultimately to a unit that is reliably airtight. The analytic estimation of the radon diffusion coefficient DRn in concrete is based on the concrete's global porosity (ɛ), whose values may be experimentally obtained from a mercury intrusion porosimetry (MIP) test, and on its tortuosity factor (τ), derived using the relations expressed in Equations 5 and 6. The conclusions of the study are as follows. Several regression-based models, for concrete conditioned at a relative humidity of 50%, have been proposed to obtain the diffusion coefficient following the relations K = D^n, K = a·D^n and D = n/ρe. The last of these three relations is the one with the determination coefficient closest to 1: D = (19.997·ln ɛ + 59.354)/ρe (Equation 7). The values of the oxygen diffusion coefficient so obtained agree quite well with those experimentally measured. The proposed method for the measurement of the gas diffusion coefficient is considered to be adequate. The values obtained for the oxygen diffusion coefficient are within the range of those reported in the literature (10⁻⁷ to 10⁻⁸ m²/s), and are consistent with the other studied parameters. Tortuosity factors obtained using the pore distribution and the expression Ø = Ø_0·ε^(−τ) are lower than the factor from resistivity (ρ = ρ_0·ε^(−τ)). The closest relationship to the latter is the one for the porosity at a pore diameter of 1 µm (τ = 2.07), being 7.21% lower. Tortuosity factors obtained from the expression D_O2 = D0_O2·ε^τ are similar to the factor from resistivity: for the global porosity τ = 2.26 and for the remaining porosities τ = 0.7. Estimated radon diffusion coefficients are within the range of those consulted in the literature (10⁻⁸ to 10⁻¹⁰ m²/s). ABSTRACT Radon (Rn) is a natural radioactive gas present in some soils that can penetrate buildings through the envelope elements in contact with the ground. 
Indoors, it can accumulate and be inhaled by the occupants. Being a radioactive gas, its decay process emits alpha particles that, on contact with the lung epithelium, can cause alterations leading to cancer. Many international organizations related to health protection, such as the WHO, confirm this causality. One way to prevent radon from entering buildings is to exploit the radon-barrier properties of the building envelope itself in contact with the ground. The main characteristic of concrete that provides the radon-barrier property when it forms this envelope is its permeability, which can be characterized by its diffusion coefficient (DRn). The diffusion coefficient of a gas in concrete is a parameter closely related to its porosity (ɛ) and its tortuosity (τ). Measuring the radon diffusion coefficient is rather complicated because the procedure is not standardized, because adequate airtightness of the diffusion measurement cell must be ensured, and because the measurement has to be performed in a laboratory qualified for the use of calibrated radon sources, which are moreover very expensive. Measuring the diffusion coefficient of non-radioactive gases is less complex, but it still has a high degree of difficulty, since it is not standardized either and the problem of achieving adequate airtightness of the diffusion cell remains. Other parameters that can characterize the process are the permeability coefficient (K) and the electrical resistivity (ρe), which are easier to determine through tests that are indeed standardized. 
The use of these parameters would facilitate the characterization of concrete as a radon barrier, but although some studies propose correlations among these parameters, there is in general disagreement among researchers, as the literature review carried out has shown. Therefore, before attempting to measure radon diffusion, it was considered necessary to conduct further studies that could clarify the possible relations among the parameters: porosity-tortuosity, oxygen diffusion coefficient, permeability coefficient and resistivity. Measuring the permeability coefficient (m²) is simpler than measuring diffusion. Some authors relate the permeability coefficient to the diffusion coefficient. Gaber (1988) proposes the following relation: k = a·D^n (Equation 1), where a = A/(8π·D0_O2), A = sample cross-section, and D0_O2 = diffusion coefficient in air (m²/s). Other studies (Klink et al. 1999, Gaber and Schlattner 1997, Gräf and Grube et al. 1986) experimentally relate the radon diffusion and permeability coefficients of different concretes, confirming that a relation between the two parameters exists, using the simplified expression: k ≈ D^n (Equation 2). Each particular study found a different value of n, ranging from 1.3 to 2.5, which leads to the need to determine n, since there are no methods that avoid determining the diffusion coefficient; and if diffusion is measured anyway, the indirect measurement through permeability is no longer of interest. Measuring electrical resistivity is far simpler than measuring diffusion. The relation between the two parameters can be established through one of Einstein's laws, which relates the movement of electric charges to the conductivity of the medium according to the expression: D_e = k/ρ_e (Equation 3), where D_e = diffusion coefficient (cm²/s), k = a constant, and ρ_e = electrical resistivity (Ω·cm). 
The tortuosity factor is a shape factor that represents the irregular geometry of concrete pores, which are not straight but tortuous. This factor was introduced in the literature to relate total porosity to the transport of a fluid in a porous medium, and it can be formulated in different ways. For example, Equation 4 (Mason and Malinauskas) stands out, combining molecular and Knudsen diffusion using the tortuosity factor: D = ε^τ·[(3/(2r))·√(πM/(8RT)) + 1/D_0]^(−1) (Equation 4), where r = mean pore radius obtained from MIP (µm), M = molecular mass of the gas, R = ideal gas constant, T = temperature (K), and D_0 = diffusion coefficient of the gas in air (m²/s). Not many studies provide a way to obtain this tortuosity factor. The study by Andrade (2012) stands out, in which the tortuosity factor is deduced from the pore-size distribution (mercury intrusion porosimetry curve) from the equation: Ø_th = Ø_0·ε^(−τ) (Equation 5), where Ø_th = threshold diameter (µm), Ø_0 = minimum diameter (µm), ɛ = global porosity, and τ = tortuosity factor. Alternatively, the tortuosity factor could also be obtained from the relation: D_O2 = D0_O2·ε^τ (Equation 6), where D_O2 = experimental oxygen diffusion coefficient (m²/s), and D0_O2 = oxygen diffusion coefficient in air (m²/s). This equation is inferred from Archie's law, ρ_e = a·ρ_0·ε^(−m), and from the Einstein law mentioned above, using experimentally obtained values of the oxygen diffusion coefficient D_O2. The fundamental objective of the thesis is to find correlations among the different parameters that characterize gas transport through concrete. 
Achieving this objective will facilitate the assessment of the service life of concrete, as well as other possibilities, such as evaluating concrete as an element that can be used in the construction of new buildings as a barrier against the radon gas present in the ground. The thesis also sets out the following partial objectives: 1. To develop a methodology for measuring the diffusion coefficient of gases in concrete. 2. To propose an analytic estimation of the radon diffusion coefficient from parameters related to its porosity and its tortuosity factor. To study the possible correlations, the parameters were measured with standardized procedures or procedures developed in the Institute itself, and the relations reflected in Equations 1, 2 and 3 were studied. To measure the gas diffusion coefficient, a cell was manufactured that required a great variety of experimental refinements in order to make it airtight. The analytic estimation of the radon diffusion coefficient DRn in concrete starts from its global porosity (ɛ), obtained experimentally from the mercury intrusion porosimetry (MIP) test, and from its tortuosity factor (τ), obtained from the relations reflected in Equations 5 and 6. The main conclusions obtained are the following. Regression-based models, for conditioning at 50% relative humidity, are proposed to obtain the oxygen diffusion coefficient according to the relations K = D^n, K = a·D^n and D = n/ρe. The proposal for the last of these relations is the one with the best fit, with R² = 0.999: D = (19.997·ln ɛ + 59.354)/ρe (Equation 7). The values of the oxygen diffusion coefficient thus estimated agree with those obtained experimentally. The proposed method for measuring the diffusion coefficient of gases is considered adequate. 
The results obtained for the oxygen diffusion coefficient fall within the range found in the literature (10⁻⁷ to 10⁻⁸ m²/s) and are consistent with the other parameters studied. The tortuosity factors obtained from the relation Ø = Ø_0·ε^(−τ) are lower than the factor from resistivity (ρ = ρ_0·ε^(−τ)). The relation that best matches the latter, being 7.21% lower, is that of the porosity corresponding to the 1 µm diameter, with τ = 2.07. The tortuosity factors obtained from the relation D_O2 = D0_O2·ε^τ are similar to the factor from resistivity: for the global porosity τ = 2.26 and for the remaining porosities τ = 0.7. The radon diffusion coefficients estimated through these tortuosity factors are within the range found in the literature (10⁻⁸ to 10⁻¹⁰ m²/s).

Relevância:

100.00% 100.00%

Publicador:

Resumo:

We explore charge migration in DNA, advancing two distinct mechanisms of charge separation in a donor (d)–bridge ({Bj})–acceptor (a) system, where {Bj} = B1,B2, … , BN are the N-specific adjacent bases of B-DNA: (i) two-center unistep superexchange induced charge transfer, d*{Bj}a → d∓{Bj}a±, and (ii) multistep charge transport involves charge injection from d* (or d+) to {Bj}, charge hopping within {Bj}, and charge trapping by a. For off-resonance coupling, mechanism i prevails with the charge separation rate and yield exhibiting an exponential dependence ∝ exp(−βR) on the d-a distance (R). Resonance coupling results in mechanism ii with the charge separation lifetime τ ∝ Nη and yield Y ≃ (1 + δ̄ Nη)−1 exhibiting a weak (algebraic) N and distance dependence. The power parameter η is determined by charge hopping random walk. Energetic control of the charge migration mechanism is exerted by the energetics of the ion pair state d∓B1±B2 … BNa relative to the electronically excited donor doorway state d*B1B2 … BNa. The realization of charge separation via superexchange or hopping is determined by the base sequence within the bridge. Our energetic–dynamic relations, in conjunction with the energetic data for d*/d− and for B/B+, determine the realization of the two distinct mechanisms in different hole donor systems, establishing the conditions for “chemistry at a distance” after charge transport in DNA. The energetic control of the charge migration mechanisms attained by the sequence specificity of the bridge is universal for large molecular-scale systems, for proteins, and for DNA.
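The two distance laws stated above, exponential for unistep superexchange and algebraic for multistep hopping, can be illustrated numerically. A hedged Python sketch; the values of β, δ̄, η and the prefactor below are placeholders chosen for illustration, not parameters reported in this work:

```python
import math

def superexchange_rate(r_donor_acceptor, beta=0.7, k0=1e12):
    """Unistep superexchange: rate falls off as exp(−βR) with the
    donor-acceptor distance R. beta (1/Å) and k0 (1/s) are assumed
    illustrative values."""
    return k0 * math.exp(-beta * r_donor_acceptor)

def hopping_yield(n_bases, delta_bar=0.1, eta=1.7):
    """Multistep hopping: yield Y ≈ (1 + δ̄·N^η)^(−1), a weak algebraic
    dependence on the number N of bridge bases. delta_bar and eta are
    assumed illustrative values (eta is set by the hopping random walk)."""
    return 1.0 / (1.0 + delta_bar * n_bases ** eta)
```

The contrast between the two regimes is the point: doubling R suppresses the superexchange rate exponentially, while doubling N reduces the hopping yield only algebraically.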

Relevância:

100.00% 100.00%

Publicador:

Resumo:

In both humans and animals, the hippocampus is critical to memory across modalities of information (e.g., spatial and nonspatial memory) and plays a critical role in the organization and flexible expression of memories. Recent studies have advanced our understanding of the cellular basis of hippocampal function, showing that N-methyl-d-aspartate (NMDA) receptors in area CA1 are required in both the spatial and nonspatial domains of learning. Here we examined whether CA1 NMDA receptors are specifically required for the acquisition and flexible expression of nonspatial memory. Mice lacking CA1 NMDA receptors were impaired in solving a transverse patterning problem that required the simultaneous acquisition of three overlapping odor discriminations, and their impairment was related to an abnormal strategy by which they failed to adequately sample and compare the critical odor stimuli. By contrast, they performed normally, and used normal stimulus sampling strategies, in the concurrent learning of three nonoverlapping odor discriminations. These results suggest that CA1 NMDA receptors play a crucial role in the encoding and flexible expression of stimulus relations in nonspatial memory.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The leaves and especially the roots of chicory (Cichorium intybus L.) contain high concentrations of bitter sesquiterpene lactones such as the guaianolides lactupicrin, lactucin, and 8-deoxylactucin. Eudesmanolides and germacranolides are present in smaller amounts. Their postulated biosynthesis through the mevalonate-farnesyl diphosphate-germacradiene pathway has now been confirmed by the isolation of a (+)-germacrene A synthase from chicory roots. This sesquiterpene cyclase was purified 200-fold using a combination of anion-exchange and dye-ligand chromatography. It has a Km value of 6.6 μM, an estimated molecular mass of 54 kD, and a (broad) pH optimum around 6.7. Germacrene A, the enzymatic product, proved to be much more stable than reported in the literature. Its heat-induced Cope rearrangement into (−)-β-elemene was utilized to determine its absolute configuration on an enantioselective gas chromatography column. To our knowledge, until now in sesquiterpene biosynthesis, germacrene A has only been reported as a (postulated) enzyme-bound intermediate, which, instead of being released, is subjected to additional cyclization(s) by the same enzyme that generated it from farnesyl diphosphate. In chicory, however, germacrene A is released from the sesquiterpene cyclase. Apparently, subsequent oxidations and/or glucosylation of the germacrane skeleton, together with a germacrene cyclase, determine whether guaiane- or eudesmane-type sesquiterpene lactones are produced.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The structure of solute transporters is understood largely from analysis of their amino acid sequences, and more direct information is greatly needed. Here we report work that applies cysteine scanning mutagenesis to describe structure-function relations in UhpT, a bacterial membrane transporter. By using an impermeant SH-reactive agent to probe single-cysteine variants, we show that UhpT transmembrane segment 7 spans the membrane as an alpha-helix and that the central portion of this helix is exposed to both membrane surfaces, forming part of the translocation pathway through this transporter.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Abstract Mass spectrometry (MS), born in the 1970s, today finds, thanks to Matrix-Assisted Laser Desorption Ionization-Time of Flight (MALDI-TOF) technology, important applications in several fields: biotechnology (for the characterization and quality control of recombinant proteins and other macromolecules), clinical medicine (for the laboratory diagnosis of diseases and for the development of new targeted therapeutic treatments), food and the environment. In recent years, this technology has also become a powerful tool for laboratory diagnosis in clinical microbiology, revolutionizing the workflow towards a rapid identification of bacteria and fungi and replacing conventional phenotypic identification based on biochemical assays. Two different approaches are currently possible for characterizing microorganisms by MALDI-TOF MS: (1) comparison of mass spectra against databases containing reference profiles (database fingerprints), and (2) matching of biomarkers against proteome databases. Recently, beyond its classical application to the identification of microorganisms, MALDI-TOF technology has been used to detect, indirectly, mechanisms of antibiotic resistance. The first aim of this study was to verify and demonstrate the identification efficacy of the MALDI-TOF MS spectrum-comparison approach for different microorganisms of medical interest whose identification was impossible because of the complete absence, or limited presence, of reference spectra in the commercial database supplied with the instrument. In particular, this aim was achieved for bacteria belonging to the spirochete genera Borrelia and Leptospira, for filamentous fungi (dermatophytes) and for protozoa (Trichomonas vaginalis). 
The second aim of this study was to evaluate the second identification approach, based on the search for specific markers, to differentiate intestinal parasites of medical interest for which no commercial reference database is available and whose creation would be particularly difficult and complex, owing to the complexity of the starting biological material analysed and of the culture media in which these protozoa are isolated. The third and final aim of this study was to evaluate the applicability of MALDI-TOF mass spectrometry to the study of bacterial resistance to carbapenems. In particular, a carbapenem hydrolysis assay detected by MALDI-TOF MS was developed, capable of indirectly determining carbapenemase production in Enterobacteriaceae. The identification efficacy of the MALDI-TOF spectrum-comparison approach was demonstrated first for bacteria of the genus Borrelia. The commercial database of the MALDI-TOF MS instrument in use in our laboratory included only 3 reference spectra, belonging to the species B. burgdorferi ss, B. spielmani and B. garinii. Extending the database with species other than those already present filled the identification gaps due to the lack of reference spectra for some of the Borrelia species most widespread in Europe (B. afzelii) and worldwide (such as B. hermsii and B. japonica). Moreover, adding spectra derived from reference strains of species already present in the database further improved the identification efficacy of the system. As expected, the clinical isolate of B. lusitaniae (a species not present in the database) was identified only at the genus level, corroborating, thanks to the absence of mis-identification, the robustness of the "new" database. 
The results obtained by analysing the protein profiles of clinical Borrelia spp. isolates after extending the commercial database indicate that MALDI-TOF technology could be used as a rapid, economical and reliable alternative to the methods currently used for identifying strains of this genus. Similarly, for the genus Leptospira, after creating ex novo a home-made database built with the 20 spectra derived from the 20 reference strains used, correct species-level identification of the same strains was obtained when they were re-analysed in an independent experiment conducted double-blind. The dendrogram built with the 20 MSP spectra added to the database consists of two main branches: the first formed by the non-pathogenic species L. biflexa and the intermediately pathogenic species L. fainei, and the second grouping together the pathogenic species L. interrogans, L. kirschneri, L. noguchii and L. borgpetersenii. The second group is further divided into two branches, containing respectively L. borgpetersenii in one and L. interrogans, L. kirschneri and L. noguchii in the other. The latter is in turn divided into two further branches: the first comprising only the species L. noguchii, the second the species L. interrogans and L. kirschneri, which cannot be separated from each other. Moreover, the dendrogram built with the MSP spectra of the Borrelia and Leptospira strains acquired in this study, together with those of the genus Brachyspira (added in previous work), shows three main groups separated from one another, one per genus, excluding possible mis-identifications among the 3 different spirochete genera. 
A more in-depth analysis of the protein profiles showed small differences among strains of the same species, probably due to the different protein patterns of distinct serotypes, as confirmed by the subsequent statistical analysis, which highlighted serotype-specific peaks. Indeed, by creating a dedicated statistical model it was possible to obtain a pattern of discriminating peaks able to differentiate at the serotype level both the L. interrogans strains and the L. borgpetersenii strains tested, respectively. However, we cannot conclude that the discriminating peaks we report are universally able to identify the serotype of L. interrogans and L. borgpetersenii strains; the peaks found are the result of an analysis conducted on a specific panel of serotypes. It was thus demonstrated that, by making small changes to the standardized parameters, such as the use of a statistical model and a dedicated program applied in the diagnostic routine, MALDI-TOF mass spectrometry can be used for a rapid and economical identification even at the serotype level. This can significantly improve the approaches currently used to monitor the emergence of epidemic outbreaks and for pathogen surveillance. As demonstrated for Borrelia and Leptospira, extending the mass spectrometer database with reference spectra of filamentous fungi (dermatophytes) proved particularly important, not only for identifying all the species circulating in our area but also for identifying species whose frequency in our country is increasing because of migration flows from endemic areas (M. audouinii, T. violaceum and T. sudanense). In addition, updating the database made it possible to overcome the mis-identification, observed before the extension of the commercial database, of strains of the T. mentagrophytes complex (T. interdigitale and T. mentagrophytes) as T. tonsurans. The dendrogram obtained from the 24 added spectra, belonging to 13 dermatophyte species, revealed groupings that reflect those built on a phylogenetic basis. Based on the results obtained by sequencing the ITS region of the fungal genome, it was not possible to distinguish T. interdigitale from T. mentagrophytes; consistently, the spectra of these two species showed peaks of the same molecular weight. It should be emphasized that the dendrogram built with the 12 protein profiles already included in the commercial database and the 24 added to the new database does not reproduce the phylogenetic tree for some species of the genus Trichophyton: the MSP spectra already present in the database and the added spectra of the species T. interdigitale and T. mentagrophytes group separately. This could explain the mis-identifications of T. interdigitale and T. mentagrophytes as T. tonsurans obtained before the database extension. The efficacy of the MALDI-TOF identification system was also demonstrated for microorganisms other than the bacteria and fungi for which the original method was developed. Although this identification system was successfully applied to Trichomonas vaginalis, it was necessary to modify the standard parameters used for the identification of bacteria and fungi. The interference observed between the protein profiles of the two media used to culture this protozoan and those of the T. vaginalis strains made it necessary to use new parameters for the creation of the reference spectra (MSP spectra). 
The importance of developing the new method lies in the fact that it allows identification, on the basis of the whole protein profile (rather than of single markers), of microorganisms grown on complex media that may show peaks in the molecular-weight range used for identification: metabolites, pigments and nutrients present in the medium can interfere with the crystallization process and lead to a low identification score. For T. vaginalis in particular, the "subtraction" of peaks attributable to molecules derived from the growth medium was achieved by excluding from the identification the molecular-weight range between 3 and 6 kDa, allowing the correct identification of clinical isolates on the basis of their protein profile. However, the high parasite concentration required for correct identification (10^5 trophozoites/ml), which is difficult to reach in vivo, prevented the identification of T. vaginalis strains directly in clinical samples. The identification approach based on the detection of specific protein markers (the second identification approach) was tested and adopted in this study for the identification and differentiation of strains of Entamoeba histolytica (pathogenic amoeba) and Entamoeba dispar (non-pathogenic amoeba), species that are morphologically identical and distinguishable only by molecular assays (PCR) targeting the 18S rDNA, which encodes the RNA of the small ribosomal subunit. The development of this application made it possible to overcome the impossibility of creating a dedicated database, due to the complexity of the starting faecal material and of the culture medium used for isolation, and to identify 5 protein peaks able to differentiate E. histolytica from E. dispar. In particular, the statistical analysis showed 2 peaks specific for E. histolytica and 3 peaks specific for E. dispar.
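The peak "subtraction" described above is, operationally, a mass-range filter applied before matching. A minimal sketch, with invented peak values:

```python
# Minimal sketch of the peak "subtraction" used for T. vaginalis: peaks in
# the 3-6 kDa window, where the culture media interfere, are dropped before
# profile matching. The peak values below are invented for illustration.
def exclude_range(peaks, lo=3000.0, hi=6000.0):
    """Drop peaks (m/z in Da) that fall inside the excluded window [lo, hi]."""
    return [p for p in peaks if not (lo <= p <= hi)]

raw = [2100.0, 3500.0, 4800.0, 6550.0, 7200.0, 9100.0]
print(exclude_range(raw))  # the peaks at 3500 and 4800 Da are removed
```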
The absence of the 5 discriminant peaks found for E. histolytica and E. dispar from the profiles of the 3 different culture media used in this study (axenic LYI-S-2 medium and Robinson's medium with and without E. coli) allows these peaks to be considered good markers for differentiating the two species. The correspondence of the peaks with the molecular weights of two specific E. histolytica proteins reported in the literature (Amoebapore A and an unknown putative protein of the E. histolytica reference strain HM-1:IMSS-A) confirms the specificity of the E. histolytica peaks identified by MALDI-TOF MS analysis. The same confirmation was not possible for the E. dispar peaks, since no protein of the relevant molecular weight is present in GenBank; it should be remembered, however, that not all E. dispar proteins have so far been characterized and deposited in the literature. The 5 markers allowed the differentiation of 12 of the 13 strains isolated from stool samples and grown in Robinson's medium, confirming the results obtained by Real-Time PCR. For a single clinical isolate of E. histolytica, whose identity was confirmed by sequencing of the 18S rDNA region, identification by MALDI-TOF MS was not achieved, since neither the peaks corresponding to E. histolytica nor those corresponding to E. dispar were found. For this strain, geno/phenotypic mutations in the proteins identified as E. histolytica-specific markers can be hypothesized; to confirm this hypothesis, a larger number of clinical isolates with a similar protein profile would need to be analysed. The analysis conducted at different incubation times on the stool sample positive for E. histolytica and E. dispar showed that the 5 discriminant peaks were found only 12 hours after inoculation of the sample into the initial Robinson's medium.
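The marker-based identification (the same logic also underlies the serotype-discriminant peaks described earlier) reduces to checking whether all species-specific peaks are present within a mass tolerance. In the sketch below the marker m/z values and the tolerance are hypothetical; the study reports 2 E. histolytica-specific and 3 E. dispar-specific peaks but the code does not reproduce their actual masses.

```python
# Sketch of the marker-peak approach: a spectrum is assigned to a species
# when all of that species' marker peaks are found within a mass tolerance.
# Marker masses and tolerance are invented placeholders.
TOL = 5.0  # Da, assumed matching tolerance

MARKERS = {
    "E. histolytica": [8250.0, 10310.0],          # hypothetical 2 peaks
    "E. dispar":      [7120.0, 9480.0, 11050.0],  # hypothetical 3 peaks
}

def has_peak(spectrum, mz, tol=TOL):
    return any(abs(p - mz) <= tol for p in spectrum)

def classify(spectrum):
    """Return the species whose markers are all present, else None."""
    for species, markers in MARKERS.items():
        if all(has_peak(spectrum, m) for m in markers):
            return species
    return None  # mirrors the one isolate showing neither peak set

print(classify([8248.0, 10313.0, 12000.0]))  # matches the E. histolytica markers
print(classify([5000.0, 6000.0]))            # matches neither species
```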
This result suggests that the MALDI-TOF MS system could be applied to identify clinical isolates of E. histolytica and E. dispar despite the presence of faecal material, which can physically disturb and complicate the interpretation of the spectrum obtained by MALDI-TOF MS analysis. Finally, this study evaluated the applicability of MALDI-TOF MS technology as a rapid phenotypic assay for the detection of carbapenemase-producing strains, verifying the hydrolysis of meropenem (the reference carbapenem used in this study) in contact with reference strains and with potentially carbapenemase-producing clinical isolates, after a dedicated analytical protocol had been developed. The meropenem hydrolysis assay by MALDI-TOF MS demonstrated, indirectly, the presence or absence of carbapenemases in the 3 reference strains and in the 1219 clinical isolates (1185 Enterobacteriaceae and 34 non-Enterobacteriaceae) included in the study. No interference was observed for the Enterobacteriaceae strains with varying resistance to the three carbapenems that nevertheless proved non-carbapenemase-producing by the phenotypic assays commonly used in routine laboratory diagnostics: no hydrolysis of the drug was observed in the MALDI-TOF MS hydrolysis assay. In only one case (K. pneumoniae strain No. 1135) was an anomalous profile obtained, showing both the peaks of the intact drug and those of the hydrolysed drug. For this strain, which was resistant to the three carbapenems tested and negative in the phenotypic assays for carbapenemase production, the presence of the blaKPC gene was demonstrated by Real-Time PCR.
For this strain, mutations in the blaKPC gene can be hypothesized that, although they do not interfere with its detection by PCR (positive Real-Time PCR), could affect the activity of the protein produced (negative modified Hodge test and synergy test), reducing its functionality, as shown by the MALDI-TOF MS analysis through the presence of peaks corresponding both to the hydrolysed and to the intact drug. This hypothesis should be confirmed by sequencing the blaKPC gene and by structural analysis of the deduced amino-acid sequence. The use of MALDI-TOF MS technology to verify meropenem hydrolysis proved to be an indirect phenotypic assay able to distinguish, like the modified Hodge test commonly used in routine diagnostic microbiology, a carbapenemase-producing strain from a non-producing one, both for diagnostic purposes and for epidemiological surveillance. Indeed, MALDI-TOF MS showed several advantages over the conventional methods (modified Hodge test and synergy test) used in routine laboratory diagnostics, which require experienced staff to interpret the result and involve long execution, and hence reporting, times. The simplicity and ease of sample preparation and the immediate data acquisition make this technique an accurate and rapid method. Moreover, the method is economically convenient, with an estimated total cost of 1.00 euro per strain analysed. All these considerations place this methodology in a central position in microbiology, including for the detection of carbapenemase-producing strains.
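The read-out of the hydrolysis assay can be sketched as a simple decision on which drug-related peaks are present. The masses below are approximations (meropenem [M+H]+ near 384 Da, hydrolysed form shifted by +18 Da on ring opening) and the peak lists are invented; real assays also track salt adducts, which this sketch omits.

```python
# Sketch of interpreting the meropenem hydrolysis assay: a strain is called
# a carbapenemase producer when the intact-drug peak disappears and the
# hydrolysed (+18 Da) form appears. Masses are approximate and the peak
# lists are invented for illustration.
INTACT = 384.0       # approx. [M+H]+ of meropenem
HYDROLYSED = 402.0   # intact + 18 Da (water addition on ring opening)
TOL = 1.0            # Da, assumed tolerance

def has_peak(spectrum, mz, tol=TOL):
    return any(abs(p - mz) <= tol for p in spectrum)

def call_strain(spectrum):
    intact = has_peak(spectrum, INTACT)
    hydro = has_peak(spectrum, HYDROLYSED)
    if hydro and not intact:
        return "carbapenemase producer"
    if intact and not hydro:
        return "non-producer"
    if intact and hydro:
        return "anomalous profile"  # both forms, as for K. pneumoniae No. 1135
    return "no drug peaks: repeat assay"

print(call_strain([402.1, 424.0]))  # only the hydrolysed form is present
print(call_strain([383.9, 406.0]))  # the drug is still intact
print(call_strain([383.9, 402.2]))  # both forms present
```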
Regardless of the identification approach used, compared with conventional methods MALDI-TOF MS in many cases yields a saving in technical working time (pre-analytical sample-preparation procedure) and in time to result (automated analytical procedure). This time saving becomes more pronounced when a larger number of isolates are analysed simultaneously. Moreover, the simplicity and ease of sample preparation and the immediate data acquisition make this an accurate and rapid identification method that is also more economically convenient, with a total cost of 0.50 euro (consumables) per strain analysed. The results obtained show that MALDI-TOF mass spectrometry is becoming an important tool in clinical and experimental microbiology, given its high identification efficacy, thanks to the availability of new commercial databases and of user-contributed updates to them, and its ability to detect antibiotic resistance successfully, albeit indirectly.


The increasing economic competition drives industry to implement tools that improve process efficiency. Process automation is one of these tools, and Real Time Optimization (RTO) is an automation methodology that takes economic aspects into account to update the process control in accordance with market prices and disturbances. Basically, RTO uses a steady-state phenomenological model to predict the process behavior and then optimizes an economic objective function subject to this model. Although widely implemented in industry, there is no general agreement about the benefits of RTO, because of some limitations discussed in the present work: structural plant/model mismatch, identifiability issues and the low frequency of set-point updates. Some alternative RTO approaches have been proposed in the literature to handle the problem of structural plant/model mismatch. However, there has been no systematic comparison evaluating the scope and limitations of these RTO approaches under different aspects. For this reason, the classical two-step method is compared to more recent derivative-based methods (Modifier Adaptation; Integrated System Optimization and Parameter Estimation; and Sufficient Conditions of Feasibility and Optimality) using a Monte Carlo methodology. The results of this comparison show that the classical RTO method is consistent provided that the model is flexible enough to represent the process topology, the parameter estimation method is suited to the measurement noise characteristics, and a method to improve the quality of the sample information is available. At each iteration, the RTO methodology updates some key parameters of the model; here, identifiability issues caused by a lack of measurements and by measurement noise can arise, resulting in poor prediction ability.
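The classical two-step loop described above can be sketched numerically: estimate the model parameter from plant measurements at the current operating point, then optimize the economic objective on the updated model. The plant, model and prices below are invented toy functions, not the systems studied in the thesis; the deliberate structural mismatch between plant and model shows why the loop can settle away from the true plant optimum.

```python
# Toy two-step RTO loop: (1) fit the model parameter to the latest plant
# measurement, (2) optimize profit subject to the updated model. All
# functions and numbers are invented; the model is deliberately
# structurally mismatched with the plant.
from scipy.optimize import minimize_scalar

def plant(u):
    """'True' plant: yield as a function of the set point u (unknown to RTO)."""
    return 2.0 * u - 0.15 * u**2

def model(u, k):
    """Mismatched model: adjustable gain k, but a wrong quadratic loss term."""
    return k * u - 0.1 * u**2

def profit(y, u, price=1.0, cost=0.3):
    return price * y - cost * u

u = 2.0  # initial set point
for _ in range(10):
    y_meas = plant(u)                # step 1: measure the plant
    k = (y_meas + 0.1 * u**2) / u    # step 1: fit k so the model matches the data
    res = minimize_scalar(           # step 2: economic optimization on the model
        lambda v: -profit(model(v, k), v), bounds=(0.1, 10.0), method="bounded")
    u = res.x

print(f"converged set point: {u:.2f}, plant profit: {profit(plant(u), u):.3f}")
```

The loop converges to u ≈ 6.8, while the true plant optimum is at u ≈ 5.67: despite a perfect data fit at each iteration, the structural mismatch leaves the process at a suboptimal point, which is exactly the first RTO limitation discussed above.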
Therefore, four different parameter estimation approaches (Rotational Discrimination; Automatic Selection and Parameter Estimation; Reparametrization via Differential Geometry; and classical nonlinear Least Squares) are evaluated with respect to their prediction accuracy, robustness and speed. The results show that the Rotational Discrimination method is the most suitable for implementation in an RTO framework, since it requires less a priori information, is simple to implement and avoids the overfitting caused by the Least Squares method. The third RTO drawback discussed in the present thesis is the low frequency of set-point updates, which increases the period in which the process operates at suboptimal conditions. An alternative is proposed in this thesis by integrating classic RTO and Self-Optimizing Control (SOC) through a new Model Predictive Control strategy. The new approach demonstrates that it is possible to mitigate the problem of infrequent set-point updates, improving the economic performance. Finally, the practical aspects of RTO implementation are examined in an industrial case study, a Vapor Recompression Distillation (VRD) process located in the Paulínea refinery of Petrobras. The conclusions of this study suggest that the model parameters are successfully estimated by the Rotational Discrimination method; that RTO is able to improve the process profit by about 3%, equivalent to 2 million dollars per year; and that the integration of SOC and RTO may be an interesting control alternative for the VRD process.


A major problem related to the treatment of ecosystems is that they have no available mathematical formalization. This implies that many of their properties are not presented as short, rigorous modalities, but rather as long expressions which, from a biological standpoint, fully capture the significance of the property, but which have the disadvantage of not being sufficiently manageable from a mathematical standpoint. The interpretation of ecosystems through networks allows us to employ the concepts of coverage and invariance alongside other related concepts, and these in turn allow us to present the two most important relations in an ecosystem, predator–prey and competition, in a different way. Biological control, defined as "the use of living organisms, their resources or their products to prevent or reduce loss or damage caused by pests", is now considered the environmentally safest and most economically advantageous method of pest control (van Lenteren, 2011). A guild includes all those organisms that share a common food resource (Polis et al., 1989), which in the context of biological control means all the natural enemies of a given pest. There are several types of intraguild interactions, but the one that has received most research attention is intraguild predation, which occurs when two organisms share the same prey while at the same time participating in some kind of trophic interaction. However, this is not the only possible intraguild relationship, and others, such as oviposition deterrence, are now being studied. In this article, we apply the previously developed concepts of structural functions, coverage, invariant sets, etc. (Lloret et al., 1998; Esteve and Lloret, 2006a, 2006b, 2007) to a tritrophic system that includes aphids, one of the most damaging pests and a current bottleneck for the success of biological control in Mediterranean greenhouses.
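As a numerical companion to the network view of predator–prey relations, a tritrophic chain (plant, aphid pest, natural enemy) can be simulated with a standard Lotka–Volterra-type model. This is a generic textbook formulation with invented parameters, not the structural-function formalism of the article itself.

```python
# Toy tritrophic chain (plant -> aphid -> natural enemy) simulated as a
# Lotka-Volterra-type ODE system. All parameters are invented for
# illustration; the article's own formalism is network-based, not this ODE.
import numpy as np
from scipy.integrate import solve_ivp

def tritrophic(t, state, r=1.0, K=10.0, a=0.4, b=0.3, c=0.25, e=0.5, d=0.1, m=0.2):
    x, y, z = state  # plant, aphid and natural-enemy densities
    dx = r * x * (1 - x / K) - a * x * y        # logistic growth minus herbivory
    dy = b * a * x * y - c * y * z - d * y      # aphid gain minus predation, death
    dz = e * c * y * z - m * z                  # enemy gain minus mortality
    return [dx, dy, dz]

sol = solve_ivp(tritrophic, (0.0, 200.0), [5.0, 1.0, 0.5])
x, y, z = sol.y[:, -1]
print(f"final densities: plant={x:.2f}, aphid={y:.2f}, enemy={z:.2f}")
```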