896 results for Pattern-based interaction models


Relevance:

100.00%

Abstract:

This paper is an excerpt from the doctoral thesis "Multi-Layer Insulation as Contribution to Orbital Debris", written at the Institute of Aerospace Systems of the Technische Universität Braunschweig. The Multi-Layer Insulation (MLI) population included in ESA's MASTER-2009 (Meteoroid and Space-Debris Terrestrial Environment Reference) software is based on models for two mechanisms: one model simulates the release of MLI debris during fragmentation events, while another estimates the continuous release of larger MLI pieces due to aging-related deterioration of the material. The aim of the thesis was to revise the MLI models from the ground up and then re-validate the simulated MLI debris population. The validation is based on comparison with measurement data of the GEO and GTO debris environment obtained by the Astronomical Institute of the University of Bern (AIUB) using ESA's Space Debris Telescope (ESASDT), the 1-m Zeiss telescope located at the Optical Ground Station (OGS) at the Teide Observatory on Tenerife, Spain. The re-validation led to the conclusion that MLI may account for a much smaller portion of the observed objects than previously published. Further investigation of the resulting discrepancy revealed that the contribution of the altogether nine known Ariane H-10 upper-stage explosion events, which occurred between 1984 and 2002, has very likely been underestimated in past simulations.

Relevance:

100.00%

Abstract:

Linear- and unimodal-based inference models for mean summer temperatures (partial least squares, weighted averaging, and weighted averaging partial least squares models) were applied to a high-resolution pollen and cladoceran stratigraphy from Gerzensee, Switzerland. The time window of investigation included the Allerød, the Younger Dryas, and the Preboreal. Characteristic major and minor oscillations in the oxygen-isotope stratigraphy, such as the Gerzensee oscillation, the onset and end of the Younger Dryas stadial, and the Preboreal oscillation, were identified by isotope analysis of bulk-sediment carbonates of the same core and were used as independent indicators of hemispheric- or global-scale climatic change. In general, the pollen-inferred mean summer temperature reconstruction using all three inference models follows the oxygen-isotope curve more closely than the cladoceran curve does. The cladoceran-inferred reconstruction suggests generally warmer summers than the pollen-based reconstructions, which may be an effect of terrestrial vegetation not being in equilibrium with climate due to migrational lags during the Late Glacial and early Holocene. Allerød summer temperatures range between 11 and 12°C based on pollen, whereas the cladoceran-inferred temperatures lie between 11 and 13°C. Pollen- and cladoceran-inferred reconstructions both suggest a drop to 9–10°C at the beginning of the Younger Dryas. Although the Allerød–Younger Dryas transition lasted 150–160 years in the oxygen-isotope stratigraphy, the pollen-inferred cooling took 180–190 years and the cladoceran-inferred cooling lasted 250–260 years. The pollen-inferred summer temperature rise to 11.5–12°C at the transition from the Younger Dryas to the Preboreal preceded the oxygen-isotope signal by several decades, whereas the cladoceran-inferred warming lagged behind it. Major discrepancies between the pollen- and cladoceran-inference models are observed for the Preboreal, where the cladoceran-inference model suggests mean summer temperatures of up to 14–15°C. Both pollen- and cladoceran-inferred reconstructions suggest a cooling that may be related to the Gerzensee oscillation, but there is no evidence for a cooling synchronous with the Preboreal oscillation as recorded in the oxygen-isotope record. For the Gerzensee oscillation the inferred cooling was ca. 1°C based on pollen and ca. 0.5°C based on cladocera, which lies well within the inherent prediction errors of the inference models.
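For orientation, the weighted-averaging (WA) model named above reduces to two abundance-weighted means: taxon optima are estimated from a modern training set, and a fossil sample's temperature is inferred as the abundance-weighted mean of those optima. A minimal sketch on toy data, assuming classical WA without the deshrinking and cross-validation steps the published models also apply:

```python
import numpy as np

def wa_fit(Y, x):
    """Estimate taxon temperature optima by weighted averaging.
    Y: (n_samples, n_taxa) abundances of the modern training set.
    x: (n_samples,) observed mean summer temperatures."""
    return (Y * x[:, None]).sum(axis=0) / Y.sum(axis=0)

def wa_reconstruct(Y0, optima):
    """Infer temperatures for fossil samples as the abundance-weighted
    mean of the taxon optima (classical WA, no deshrinking)."""
    return (Y0 * optima[None, :]).sum(axis=1) / Y0.sum(axis=1)

# toy example: 3 training samples, 2 taxa, 1 fossil pollen spectrum
Y = np.array([[0.8, 0.2], [0.5, 0.5], [0.1, 0.9]])
x = np.array([9.0, 11.0, 13.0])
Y0 = np.array([[0.6, 0.4]])
print(wa_reconstruct(Y0, wa_fit(Y, x)))   # about 10.8 degrees C
```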

Relevance:

100.00%

Abstract:

PURPOSE: To differentiate diabetic macular edema (DME) from pseudophakic cystoid macular edema (PCME) based solely on spectral-domain optical coherence tomography (SD-OCT). METHODS: This cross-sectional study included 134 participants: 49 with PCME, 60 with DME, and 25 with diabetic retinopathy (DR) and macular edema after cataract surgery. First, two unmasked experts classified the 25 DR patients after cataract surgery as DME, PCME, or mixed pattern based on SD-OCT and color fundus photography. Then all 134 patients were divided into two datasets and graded by two masked readers according to a standardized reading protocol. The accuracy of the masked readers in differentiating the diseases based on SD-OCT parameters was tested. In parallel to the masked readers, a computer-based algorithm using support vector machine (SVM) classifiers was established to automatically differentiate the disease entities. RESULTS: The masked readers assigned 92.5% of the SD-OCT images to the correct clinical diagnosis. The classifier accuracy trained and tested on dataset 1 was 95.8%. The classifier accuracy trained on dataset 1 and tested on dataset 2 to differentiate PCME from DME was 90.2%, and the classifier accuracy trained and tested on dataset 2 to differentiate all three diseases was 85.5%. In particular, a higher central retinal thickness to retinal volume ratio, the absence of an epiretinal membrane, and cysts solely in the inner nuclear layer (INL) indicated PCME, whereas a higher outer nuclear layer (ONL) to INL ratio, the absence of subretinal fluid, and the presence of hard exudates, microaneurysms, and ganglion cell layer and/or retinal nerve fiber layer cysts strongly favored DME in this model. CONCLUSIONS: Based on SD-OCT evaluation, PCME can be differentiated from DME by masked reader evaluation and by automated analysis, even in DR patients with macular edema after cataract surgery. The automated classifier may help to independently differentiate these two disease entities and is made publicly available.
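A minimal sketch of this kind of SVM classifier, using scikit-learn: the feature columns follow the parameters named in the abstract, but the data, kernel, and train/test split are illustrative assumptions, not the authors' published pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# columns: central retinal thickness/volume ratio, ONL/INL ratio,
# epiretinal membrane (0/1), subretinal fluid (0/1), hard exudates (0/1)
X = rng.random((134, 5))                 # placeholder feature matrix
y = rng.integers(0, 2, 134)              # dummy labels: 0 = DME, 1 = PCME

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```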

Relevance:

100.00%

Abstract:

Recently, it has been shown that water fluxes across biological membranes occur not only through the lipid bilayer but also through specialized water-conducting proteins, the so-called aquaporins. In the present study, we investigated, in young and mature leaves of Brassica napus L., the expression and localization of a vacuolar aquaporin homologous to the radish γ-tonoplast intrinsic protein/vacuolar-membrane integral protein of 23 kDa (γ-TIP/VM 23). In situ hybridization showed that these tonoplast aquaporins are highly expressed not only in developing leaves but also in mature, photosynthate-exporting leaves. No substantial differences could be observed between different tissues of young and mature leaves. However, independent of the developmental stage, an immunohistochemical approach revealed that the vacuolar membrane of bundle-sheath cells contained more protein cross-reacting with antibodies raised against radish γ-TIP/VM 23 than that of mesophyll cells; the lowest labeling was detected in phloem cells. We compared these results with the distribution of plasma-membrane aquaporins cross-reacting with antibodies against a domain conserved among members of the plasma-membrane intrinsic protein 1 (PIP1) subfamily and observed the same pattern as for the vacuolar aquaporins. Furthermore, a high density of gold particles labeling proteins of the PIP1 group could be observed in plasmalemmasomes of the vascular parenchyma. Our results indicate that γ-TIP/VM 23 and PIP1 homologous proteins show similar expression patterns. Based on these results, it is tempting to speculate that bundle-sheath cells play an important role in facilitating water fluxes between the apoplastic and symplastic compartments in close proximity to the vascular tissue.

Relevance:

100.00%

Abstract:

Background: Most studies have looked at breastfeeding practices from the point of view of maternal behavior only; however, in counseling women who choose to breastfeed, it is important to be aware of general infant feeding patterns in order to provide adequate information about what to expect. The available literature on differences in infant breastfeeding behavior by sex is minimal and therefore requires further investigation. Objectives: This study determined whether, at the age of 2 months, there were differences in the amount of breast milk consumed, the duration of breastfeeding, and infant satiety by infant sex. It also assessed whether infant sex is an independent predictor of the initiation of breastfeeding. Methods: This is a secondary analysis of data obtained from the Infant Feeding Practices Study II (IFPS II), a longitudinal study carried out from May 2005 through June 2007 by the Food and Drug Administration and the Centers for Disease Control and Prevention. The questionnaires asked about demography, prenatal care, mode of delivery, birth weight, infant sex, and breastfeeding patterns. A total of 3,033 and 2,552 mothers completed the neonatal and post-neonatal questionnaires, respectively. Results: There was no significant difference in the initiation of breastfeeding by infant sex: about 85% of male infants initiated breastfeeding compared with 84% of female infants. The odds ratio of ever initiating breastfeeding for male infants was 0.93, but the difference was not significant (p = 0.49). None of the other infant feeding patterns differed by infant sex. Conclusion: This study found no evidence that male infants feed more or that their mothers are more likely to initiate breastfeeding. Each baby is an individual and will therefore have a unique feeding pattern. Based on these findings, the major determining factors for breastfeeding continue to be maternal factors, so more effort should be invested in promoting breastfeeding among mothers of all ethnic groups and social classes.

Relevance:

100.00%

Abstract:

ALINE is a pedagogical model developed to help nursing faculty transition from passive to active learning. Based on constructionist theory, ALINE serves as a tool for organizing curriculum for online and classroom-based interaction, positioning the student as the active player and the instructor as the facilitator of nursing competency.

Relevance:

100.00%

Abstract:

The ecological theory of adaptive radiation predicts that the evolution of phenotypic diversity within species is generated by divergent natural selection arising from different environments and competition between species. Genetic connectivity among populations is likely also to have an important role in both the origin and maintenance of adaptive genetic diversity. Our goal was to evaluate the potential roles of genetic connectivity and natural selection in the maintenance of adaptive phenotypic differences among morphs of Arctic charr, Salvelinus alpinus, in Iceland. At a large spatial scale, we tested the predictive power of geographic structure and phenotypic variation for patterns of neutral genetic variation among populations throughout Iceland. At a smaller scale, we evaluated the genetic differentiation between two morphs in Lake Thingvallavatn relative to historically explicit, coalescent-based null models of the evolutionary history of these lineages. At the large spatial scale, populations are highly differentiated, but weakly structured, both geographically and with respect to patterns of phenotypic variation. At the intralacustrine scale, we observe modest genetic differentiation between two morphs, but this level of differentiation is nonetheless consistent with strong reproductive isolation throughout the Holocene. Rather than a result of the homogenizing effect of gene flow in a system at migration-drift equilibrium, the modest level of genetic differentiation could equally be a result of slow neutral divergence by drift in large populations. We conclude that contemporary and recent patterns of restricted gene flow have been highly conducive to the evolution and maintenance of adaptive genetic variation in Icelandic Arctic charr.

Relevance:

100.00%

Abstract:

We report a measurement of the $\nu_\mu$ charged-current quasi-elastic cross sections on carbon in the T2K on-axis neutrino beam. The measured charged-current quasi-elastic cross sections on carbon at mean neutrino energies of 1.94 GeV and 0.93 GeV are $(11.95 \pm 0.19\,(\mathrm{stat.})\,{}^{+1.82}_{-1.47}\,(\mathrm{syst.})) \times 10^{-39}\ \mathrm{cm}^2/\mathrm{neutron}$ and $(10.64 \pm 0.37\,(\mathrm{stat.})\,{}^{+2.03}_{-1.65}\,(\mathrm{syst.})) \times 10^{-39}\ \mathrm{cm}^2/\mathrm{neutron}$, respectively. These results agree well with the predictions of neutrino interaction models. In addition, we investigated the effects of the nuclear model and of multi-nucleon interactions.

Relevance:

100.00%

Abstract:

Tree-reweighted belief propagation is a message-passing method that has certain advantages over traditional belief propagation (BP). However, it fails to outperform BP in a consistent manner, does not lend itself well to distributed implementation, and has not been applied to distributions with higher-order interactions. We propose a method called uniformly reweighted belief propagation that mitigates these drawbacks. Having shown in previous work that this method can substantially outperform BP in distributed inference with pairwise interaction models, in this paper we extend it to higher-order interactions and apply it to LDPC decoding, leading to performance gains over BP.
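To make the reweighting concrete, below is a minimal sketch of the uniform variant on a toy pairwise binary model (the setting of the earlier work; the higher-order/LDPC extension is not shown). The graph, potentials, and fixed-point schedule are illustrative assumptions; setting rho = 1 recovers standard BP.

```python
import numpy as np

def urw_bp(unary, pair, rho=0.8, iters=50):
    """Uniformly-reweighted BP on a pairwise model.
    unary[i]: (2,) node potentials; pair[(i, j)]: (2, 2) edge potentials.
    With a single edge appearance probability rho, the message update is
    m_ij(x_j) = sum_xi psi_i(x_i) * psi_ij(x_i, x_j)^(1/rho)
                * prod over k in N(i), k != j of m_ki(x_i)^rho,
                divided by m_ji(x_i)^(1 - rho)."""
    msgs, nbrs = {}, {}
    for i, j in pair:
        msgs[(i, j)], msgs[(j, i)] = np.ones(2), np.ones(2)
        nbrs.setdefault(i, []).append(j)
        nbrs.setdefault(j, []).append(i)
    for _ in range(iters):
        new = {}
        for i, j in msgs:
            # psi[x_i, x_j] for this message direction
            psi = pair[(i, j)] if (i, j) in pair else pair[(j, i)].T
            b = unary[i].copy()
            for k in nbrs[i]:
                if k != j:
                    b = b * msgs[(k, i)] ** rho
            b = b / msgs[(j, i)] ** (1 - rho)
            m = (psi ** (1.0 / rho)).T @ b
            new[(i, j)] = m / m.sum()
        msgs = new
    beliefs = {}
    for i in nbrs:
        b = unary[i].copy()
        for k in nbrs[i]:
            b = b * msgs[(k, i)] ** rho
        beliefs[i] = b / b.sum()
    return beliefs

# toy 3-node cycle with weakly attractive couplings
unary = {0: np.array([0.7, 0.3]), 1: np.array([0.4, 0.6]),
         2: np.array([0.5, 0.5])}
coupling = np.array([[1.2, 0.8], [0.8, 1.2]])
pair = {(0, 1): coupling, (1, 2): coupling, (0, 2): coupling}
print(urw_bp(unary, pair))
```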

Relevance:

100.00%

Abstract:

Service-Oriented Computing (SOC) is a widely accepted paradigm for the development of flexible, distributed and adaptable software systems, in which service compositions perform more complex, higher-level, often cross-organizational tasks using atomic services or other service compositions. In such systems, Quality of Service (QoS) properties, such as performance, cost, availability or security, are critical for the usability of services and their compositions in concrete applications. Analysis of these properties can become more precise and richer in information if it employs program analysis techniques, such as complexity and sharing analyses, which are able to take into account simultaneously the control and data structures, dependencies, and operations in a composition.
Computation cost analysis for service composition can support predictive monitoring and proactive adaptation by automatically inferring upper and lower bounds on computation cost as functions of the value or size of the input messages. These cost functions can be used for adaptation by selecting the service candidates that minimize the total cost of the composition, based on the actual data passed to them. They can also be combined with empirically collected infrastructural parameters to produce QoS bound functions of the input data, which can be used to predict, at the moment of invocation, potential or imminent Service Level Agreement (SLA) violations. In mission-critical compositions, effective and accurate continuous QoS prediction can be achieved by constraint modeling of composition QoS based on its structure, empirical data known at runtime, and (when available) the results of complexity analysis. This approach can be applied to service orchestrations with centralized flow control as well as to choreographies with multiple participants engaging in complex stateful interactions. Sharing analysis can support adaptation actions, such as parallelization, fragmentation, and component selection, which are based on the functional dependencies and information content of the composition's messages, internal data, and activities, in the presence of complex control constructs such as loops, branches, and sub-workflows. Both the functional dependencies and the information content (described through user-defined attributes) can be expressed using a first-order logic (Horn clause) representation, and the analysis results can be interpreted as lattice-based conceptual models.
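As a small illustration of the adaptation-by-selection idea described above, the sketch below assumes hypothetical candidate services whose upper and lower computation-cost bounds are functions of input-message size; the names and bound shapes are invented for illustration, not inferred cost functions from the thesis.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    name: str
    lower: Callable[[int], float]   # lower cost bound (ms) vs. message size
    upper: Callable[[int], float]   # upper cost bound (ms) vs. message size

def pick(candidates, msg_size, sla_ms=None):
    """Choose the candidate with the smallest upper bound for this input;
    warn when even its lower bound would already violate the SLA."""
    best = min(candidates, key=lambda c: c.upper(msg_size))
    if sla_ms is not None and best.lower(msg_size) > sla_ms:
        print(f"imminent SLA violation: {best.name} needs at least "
              f"{best.lower(msg_size):.0f} ms > {sla_ms} ms")
    return best

services = [
    Candidate("fast-small", lambda n: 2 * n, lambda n: 3 * n),
    Candidate("slow-start", lambda n: 50 + n, lambda n: 80 + n),
]
print(pick(services, msg_size=10).name)    # fast-small wins on small inputs
print(pick(services, msg_size=100).name)   # slow-start wins on large inputs
pick(services, msg_size=400, sla_ms=300)   # triggers the SLA warning
```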

Relevance:

100.00%

Abstract:

University education in Peru is based on models of teacher-centered teaching and on a closed, static conception of knowledge, under the dominance of an information model now overwhelmed by multiple factors hastened by international change. The world's most prestigious universities have chosen cultural diversity as a sign of quality and are hence interested in the mobility of teachers and students through exchange and cooperation with foreign educational institutions. These universities respond more effectively to pressure from the international business sector, better satisfy training demands, introduce new information and communication technologies into education and research, and have improved administration and management structures. While there is progress, the university system in Peru still lacks a planning model, understood "as a discipline that seeks to respond to the needs of an organization defined by new cultural and social models" (A. Cazorla et al., 2007). This paper studies the non-Euclidean thinking on planning and development of John Friedmann (2001) and, based on its four domains of social practice, proposes a planning model for Peruvian universities that meets international requirements.

Relevance:

100.00%

Abstract:

Underspanned suspension bridges are structures with important economic and aesthetic advantages due to their high structural efficiency. However, road bridges of this typology are still uncommon because of limited knowledge about this structural system. In particular, there remains some uncertainty over the dynamic behaviour of these bridges due to their extreme lightness, and the vibrations produced by vehicles crossing the viaduct are one of the main concerns. In this work, traffic-induced dynamic effects on this kind of viaduct are addressed by means of vehicle-bridge dynamic interaction models. The finite element method is used for the structure and multibody dynamic models for the vehicles, while the interaction is represented by means of the penalty method. Road roughness is included in this model in such a way that the profiles under the left and right tyres are different, but not independent. In addition, free software (PRPgenerator) to generate these profiles is presented in this paper. The structural dynamic sensitivity of underspanned suspension bridges was found to be considerable, as were the dynamic amplification factors and deck accelerations, and vehicle speed was found to have a relevant influence on the results. In addition, the impact of bridge deformation on vehicle vibration was addressed, and the effect on the comfort of vehicle users was shown to be negligible.
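The "different but not independent" left/right profiles can be sketched compactly: below, each track blends a shared random component with an independent one on top of an ISO 8608-style displacement PSD. The PSD level, blending constant, and method are illustrative assumptions, not the actual PRPgenerator implementation.

```python
import numpy as np

def road_profiles(length=200.0, dx=0.1, Gd=16e-6, n0=0.1, gamma=0.7, seed=0):
    """Spectral-representation road profiles from G(n) = Gd * (n/n0)^-2.
    gamma blends a shared component into both tracks, so the left and
    right profiles are correlated (corr about gamma^2) but not equal."""
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, length, dx)                # positions along the road
    n = np.arange(0.01, 2.0, 0.01)                # spatial freq., cycles/m
    amp = np.sqrt(2.0 * Gd * (n / n0) ** -2 * 0.01)

    def track(phases):
        return (amp * np.cos(2 * np.pi * np.outer(x, n) + phases)).sum(axis=1)

    common = track(rng.uniform(0, 2 * np.pi, n.size))

    def blended():
        indep = track(rng.uniform(0, 2 * np.pi, n.size))
        return gamma * common + np.sqrt(1 - gamma**2) * indep

    return x, blended(), blended()

x, z_left, z_right = road_profiles()
print(np.corrcoef(z_left, z_right)[0, 1])   # about gamma^2 = 0.49
```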

Relevance:

100.00%

Abstract:

The concept of the algorithm is one of the core subjects in computer science. It is therefore extremely important for students to get a good grasp of this concept from the very start of their training, and a tool that guides students through the learning process can make a huge difference to their instruction. Much has been written about how helpful algorithm visualization tools can be, and most authors agree that the key factor is how students use them: learners who are actively involved in visualization consistently outperform those who view algorithms passively. We therefore believe that one of the best exercises for learning an algorithm is for the user to simulate its execution with a visualization tool, i.e., to perform a visual algorithm simulation.
The first part of this thesis presents the results of in-depth research into the features an effective e-learning system for teaching mathematical concepts and algorithms should have. This led to the definition of the eMathTeacher concept, which has materialized in the eMathTeacher set of requirements, and to a learning environment that satisfies them: GRAPHs. An e-learning tool is eMathTeacher-compliant if it acts as a virtual math teacher, i.e., if it is a self-assessment tool that helps students learn mathematical concepts or algorithms actively and autonomously, correcting their mistakes and providing clues to find the right answer without giving it away explicitly. In such a tool, the algorithm simulation does not continue until the user enters the correct answer. GRAPHs is an extensible, visual-simulation-based environment designed for the active and independent learning of graph algorithms, built so that simulators of different algorithms can be integrated into it. Apart from options for creating and editing the graph and visualizing the changes made to it during simulation, the environment includes step-by-step correction, animation of the algorithm's pseudocode, pop-up questions, handling of the algorithm's data structures, and creation of an XML interaction log.
Assessment, and in particular formative assessment, is a key part of any learning process. E-learning environments output huge amounts of data about this process, which must be interpreted to arrive at an assessment that is not confined to merely counting mistakes. This includes establishing relationships between the available data and generating linguistic descriptions that inform the student about the evolution of their learning; formative assessment should also specify the level of attainment of each learning goal defined by the instructor. Until now, only a human expert was capable of making this kind of assessment. Our goal has been to create a computational model that simulates the instructor's reasoning and generates a report on learning evolution in natural language.
The second part of this thesis presents the granular linguistic model of learning assessment, which models the assessment process and automatically generates formative assessment reports. The model is a particularization, to the assessment phenomenon, of the granular linguistic model of a phenomenon (GLMP), in whose development and formalization we collaborated; it is based on fuzzy logic and the computational theory of perceptions. This technique, which uses inference systems based on linguistic rules and can implement complex assessment criteria, has been applied to two cases: the criterion-based assessment of the interaction logs generated by GRAPHs and of Moodle quizzes. As a result, expert systems that assess both types of exercise have been implemented, tested, and used in the classroom. Apart from the numerical grade, the systems generate natural-language assessment reports on the proficiency levels achieved, using only objective data on correct and incorrect responses. In addition, two configurable applications implementing these expert systems have been developed: one processes the log files produced by GRAPHs, and the other, a Moodle plug-in, performs assessment based on quiz results.
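To give a flavor of the rule-based linguistic assessment described above, here is a minimal sketch: triangular membership functions map a correct-answer ratio to linguistic labels, and the strongest label drives a one-sentence report. The labels, breakpoints, and report template are illustrative assumptions, far simpler than the thesis's GLMP.

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

LABELS = {
    "low":    lambda r: tri(r, -0.01, 0.0, 0.5),
    "medium": lambda r: tri(r, 0.25, 0.5, 0.75),
    "high":   lambda r: tri(r, 0.5, 1.0, 1.01),
}

def report(correct, total, goal="simulating Dijkstra's algorithm"):
    ratio = correct / total
    degrees = {name: mu(ratio) for name, mu in LABELS.items()}
    best = max(degrees, key=degrees.get)
    return (f"The level of attainment of the goal '{goal}' is {best} "
            f"({correct}/{total} correct steps).")

print(report(17, 20))   # ... is high (17/20 correct steps).
```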

Relevance:

100.00%

Abstract:

Safety and reliability of industrial processes are the main concerns of the engineers in charge of industrial plants. From an economic point of view, the main goal is thus to reduce maintenance cost, downtime, and the losses caused by failures. Moreover, the safety of the operators, which affects both social and economic aspects, is the most relevant factor to consider in any system. Because of this, fault diagnosis has become a major focus of interest for researchers worldwide and for engineers in industry. The main works on failure detection are based on models of the processes, and there are different techniques for modelling industrial processes, such as state machines, decision trees, and Petri nets (PN). This thesis focuses on modelling processes using interpreted Petri nets.
Petri nets are a graphical and mathematical modelling tool able to describe system information in a concurrent, parallel, asynchronous, distributed, and non-deterministic or stochastic manner. They are also useful visual communication tools, like flow charts or block diagrams; in addition, the markings of a PN simulate the dynamics and concurrency of systems, and PNs can define specific state equations, algebraic equations, and other models that represent the common behaviour of systems. Among the different types of Petri nets (interpreted, coloured, etc.), this research deals with interpreted Petri nets, mainly because of features such as synchronization capabilities and timed places, apart from their capability for processing data.
The research begins with the process of designing and building the model and diagnoser to detect permanent faults; subsequently, temporal dynamics were added to detect intermittent faults. Two industrial processes, namely an HVAC (Heating, Ventilation and Air Conditioning) system and a Liquids Packaging Process, were used as testbeds for implementing the fault diagnosis (FD) tool created, and its diagnostic capability was then enhanced to detect faults in hybrid systems. Finally, a small unmanned helicopter was chosen as an example of a system where safety is a challenge and where the fault detection techniques developed in this thesis become a valuable tool, since accidents of unmanned aerial vehicles (UAVs) involve a high economic cost and are the main reason for restricting flight over populated areas. This work thus introduces a systematic process for building a fault diagnoser for the mentioned system based on Petri nets. This novel tool is able to detect both permanent and intermittent faults.
The work carried out is discussed from both a theoretical and a practical point of view. The procedure begins with a division of the system into subsystems, which are then integrated into a global PN diagnoser that monitors the whole system and shows the critical variables to the operator in order to determine the health of the UAV and thereby prevent accidents. A Data Acquisition System (DAQ) was also designed to collect data during the flights and feed the PN diagnoser. Real flights carried out under nominal and failure conditions were required to set up the diagnoser and verify its performance. It is worth noting that a high risk was assumed in generating faults during the flights; nevertheless, this allowed basic data to be collected for developing fault diagnosis and isolation techniques, maintenance protocols, behaviour models, etc. Finally, a summary of the validation results obtained during real flight tests is included. Extensive use of this tool will improve preventive maintenance protocols for UAVs (especially helicopters) and allow recommendations for regulations to be established. The use of a Petri-net-based diagnoser is considered a novel approach.
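For intuition, a toy version of the model-based detection idea can be written in a few lines: observed events fire transitions of an ordinary Petri net, and an event whose transition is not enabled in the current marking is flagged as a fault symptom. The net, labels, and fault rule below are illustrative assumptions, far simpler than the interpreted PNs used in the thesis.

```python
import numpy as np

class PetriNetDiagnoser:
    def __init__(self, pre, post, m0, labels):
        self.pre = np.array(pre)      # places x transitions: tokens consumed
        self.post = np.array(post)    # places x transitions: tokens produced
        self.m = np.array(m0)         # current marking
        self.t = {name: k for k, name in enumerate(labels)}

    def observe(self, event):
        k = self.t[event]
        if np.all(self.m >= self.pre[:, k]):          # transition enabled?
            self.m = self.m - self.pre[:, k] + self.post[:, k]
            return f"ok: '{event}', marking {self.m.tolist()}"
        return f"FAULT: '{event}' not enabled in marking {self.m.tolist()}"

# two places (idle, running), two transitions (start, stop)
net = PetriNetDiagnoser(pre=[[1, 0], [0, 1]], post=[[0, 1], [1, 0]],
                        m0=[1, 0], labels=["start", "stop"])
print(net.observe("start"))   # ok: token moves from idle to running
print(net.observe("start"))   # FAULT: a second 'start' while running
```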

Relevance:

100.00%

Abstract:

Automated Valuation Models (AVMs) are mathematically based valuation models that estimate the value of a property, or a portfolio of properties, using market information previously collected and analyzed. The main advantages of AVMs over traditional valuations are their objectivity, speed, and economy. Other countries have regulation and professional standards governing the use of AVMs; in Spain, however, the criteria that apply to AVM valuations are still too basic, since they neither define specific processes nor distinguish between the different statistical approaches available and their correct use. On the other hand, since the publication of Bank of Spain Circular 3/2008, the use of this kind of valuation has spread in Spain for updating the value of properties securing mortgage loans. The current deregulation in Spain regarding automated valuation makes it possible to put forward proposals and methodologies that offer a new point of view drawing on research, experience in other countries, and the most recent professional practice. This research aims to lay the foundations of a future Spanish regulation on mass appraisal adapted to the Spanish mortgage framework.
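As a rough illustration of the statistical core of many AVMs, the sketch below fits a hedonic regression that maps property attributes to price and then values a subject property. The attributes, data, and linear form are assumptions made for illustration; production AVMs add comparables selection, spatial effects, and confidence scoring.

```python
import numpy as np

# columns: floor area (m2), rooms, age (years); one row per comparable sale
X = np.array([[70, 3, 20], [90, 4, 5], [55, 2, 40], [120, 5, 10]], float)
prices = np.array([210_000, 320_000, 140_000, 450_000], float)

# fit price = b0 + b1*area + b2*rooms + b3*age by least squares
A = np.hstack([np.ones((len(X), 1)), X])
beta, *_ = np.linalg.lstsq(A, prices, rcond=None)

subject = np.array([1, 80, 3, 15], float)     # property to be valued
print(f"estimated value: {subject @ beta:,.0f} EUR")
```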