22 results for Decomposition of Ranked Models
at Universidad Politécnica de Madrid
Abstract:
Directed hypergraphs have been employed in problems related to propositional logic, relational databases, computational linguistics and machine learning. Directed hypergraphs have also been used as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot easily be modelled using binary relations alone. In this context, this kind of representation is known as a hyper-network. A directed hypergraph is a generalization of a directed graph that is especially well suited to representing many-to-many relationships. While an edge in a directed graph defines a relation between two of its nodes, a hyperedge in a directed hypergraph defines a relation between two sets of its nodes. Strong connectivity is an equivalence relation that divides the set of nodes of a directed hypergraph into partitions, and each partition defines an equivalence class known as a strongly-connected component. Studying the strongly-connected components of a directed hypergraph can help to achieve a better understanding of the structure of this kind of hypergraph when its size is considerable. For directed graphs, there are very efficient algorithms for computing the strongly-connected components of large graphs. Thanks to these algorithms, it has been found that the structure of the WWW has the shape of a "bow tie", in which more than 70% of the nodes are distributed into three large sets, one of which is a strongly-connected component. This kind of structure has also been observed in complex networks in other areas, such as biology. Studies of a similar nature could not be conducted on directed hypergraphs because no algorithms existed for computing the strongly-connected components of this kind of hypergraph. In this doctoral thesis, we have investigated how to compute the strongly-connected components of a directed hypergraph. Specifically, we have developed two algorithms for this problem, proved that they are correct, and determined their computational complexity. Both algorithms have been evaluated empirically to compare their running times. For the evaluation, we produced a selection of randomly generated directed hypergraphs inspired by well-known random graph models such as Erdős–Rényi, Newman–Watts–Strogatz and Barabási–Albert. Several optimizations of both algorithms have been implemented and analysed in the thesis. In particular, collapsing the strongly-connected components of the directed graph that can be built by removing certain complex hyperedges from the original directed hypergraph notably improves the running times of the algorithms on several of the hypergraphs used in the evaluation. Besides the application examples mentioned above, directed hypergraphs have also been employed in the area of knowledge representation. In particular, this kind of hypergraph has been used to compute modules of ontologies. An ontology can be defined as a set of axioms that formally specifies a set of symbols and their relationships, while a module can be understood as a subset of the axioms of the ontology that captures all the knowledge the ontology holds about a specific set of symbols and their relationships.
In the thesis we focus only on modules that have been computed using the technique of syntactic locality. Since ontologies can be very large, the computation of modules can facilitate their reuse and maintenance. However, analysing all possible modules of an ontology is, in general, very costly, because the number of modules grows exponentially with the number of symbols and axioms of the ontology. Fortunately, the axioms of an ontology can be divided into partitions known as atoms. Each atom represents a maximal set of axioms that always appear together in a module. The atomic decomposition of an ontology is defined as a directed graph in which each node corresponds to an atom and each edge defines a dependency between a pair of atoms. In this thesis we introduce the notion of an "axiom dependency hypergraph", which generalizes the atomic decomposition of an ontology. A module of an ontology corresponds to a connected component in this kind of hypergraph, and an atom of an ontology to a strongly-connected component. We have adapted the implementation of our algorithms so that they also work on axiom dependency hypergraphs and can thereby compute the atoms of an ontology. To demonstrate the viability of this idea, we have incorporated our algorithms into an application we developed for extracting modules and computing the atomic decomposition of ontologies. We have named the application HyS, and we have studied its running times using a selection of well-known ontologies from the biomedical domain, most of which are available on the NCBO web portal. The results of the evaluation show that the running times of HyS are much better than those of the fastest known applications.
ABSTRACT
Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning. Directed hypergraphs are also presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes. As a result, the size of the original hypergraph can be significantly reduced if the strongly-connected components have many nodes. This approach can contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms that can be used to compute the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram in which more than 70% of the nodes are distributed into three large sets, one of which is a large strongly-connected component.
This particular structure has also been observed in complex networks in other fields, such as biology. Similar studies could not be conducted on directed hypergraphs because no algorithm existed for computing the strongly-connected components of such hypergraphs. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and show their correctness and computational complexity. One of these algorithms is inspired by Tarjan's algorithm for directed graphs. The second algorithm follows a simple approach to compute the strongly-connected components, based on the fact that two nodes of a graph that are strongly connected also reach the same set of nodes; in other words, the connected component of each node is the same. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by existing and well-known random graph models such as Erdős–Rényi and Newman–Watts–Strogatz. Besides the application examples mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible, as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can be succinctly represented using the atomic decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The atomic decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology. A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to its strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and in this manner compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO Bioportal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
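The second algorithm described above groups nodes by mutual reachability. As a rough illustration only, the sketch below computes strongly-connected components of a tiny directed hypergraph under the common B-connectivity semantics (a hyperedge "fires" only once its entire tail set has been reached); this is a minimal reading of the idea, not the thesis's actual algorithms or data structures.

```python
# A hyperedge is a pair (tail, head) of frozensets of nodes.

def b_reachable(hyperedges, source):
    """Nodes B-reachable from {source}: a hyperedge fires only once
    every node of its tail set has already been reached."""
    reached = {source}
    changed = True
    while changed:
        changed = False
        for tail, head in hyperedges:
            if tail <= reached and not head <= reached:
                reached |= head
                changed = True
    return reached

def strongly_connected_components(nodes, hyperedges):
    reach = {v: b_reachable(hyperedges, v) for v in nodes}
    components = {}
    for v in nodes:
        # u and v are strongly connected iff each B-reaches the other.
        scc = frozenset(u for u in reach[v] if v in reach[u])
        components[scc] = scc
    return list(components.values())

# Example: a and b reach each other, so they form one component;
# the hyperedge ({a, b}, {c}) relates the *set* {a, b} to {c}.
nodes = {"a", "b", "c"}
hyperedges = [(frozenset("a"), frozenset("b")),
              (frozenset("b"), frozenset("a")),
              (frozenset({"a", "b"}), frozenset("c"))]
print(strongly_connected_components(nodes, hyperedges))
# -> two components: {a, b} and {c}
```

Note that under B-connectivity a single node in a tail set may reach fewer nodes than the set as a whole, which is precisely what makes strong connectivity harder for hypergraphs than for graphs.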
Abstract:
Ozone stomatal fluxes were modeled over a 3-year period following different approaches for a commercial variety of durum wheat (Triticum durum Desf. cv. Camacho) at the phenological stage of anthesis. All models performed in the same range, although not all of them afforded equally significant results. Nevertheless, all of them suggest that stomatal conductance accounts for the main share of ozone deposition fluxes. A new modeling approach was tested, based on a 3-D architectural model of the wheat canopy, and fairly accurate results were obtained. Species-specific measurements, as well as measurements of stomatal conductance and environmental parameters, were required. The method proposed for calculating ozone stomatal fluxes (FO3_3-D) from experimental gs data, modeling them as a function of certain environmental parameters in conjunction with the YPLANT model, seems adequate: it provides realistic estimates of the canopy FO3_3-D and integrates, rather than neglects, the contribution of the lower leaves with respect to the flag leaf, although further development of the model is needed.
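As a rough sketch of the quantity being modeled here: stomatal ozone flux is commonly taken as the product of the ambient ozone concentration and the stomatal conductance to O3, the latter derived from conductance to water vapour via a diffusivity ratio. Everything below (the 1.51 ratio, the units, the layer weighting) is a generic assumption, not the paper's formulation.

```python
DIFFUSIVITY_RATIO_H2O_O3 = 1.51  # commonly used value; an assumption here

def stomatal_o3_flux(gs_h2o, o3_ppb):
    """gs_h2o: stomatal conductance to water vapour [mmol m-2 s-1];
    o3_ppb: ambient O3 mixing ratio [ppb] ~ [nmol mol-1].
    Returns a flux in nmol O3 m-2 s-1."""
    gs_o3 = gs_h2o / DIFFUSIVITY_RATIO_H2O_O3   # conductance to O3
    return gs_o3 * o3_ppb / 1000.0              # mmol -> mol conversion

def canopy_flux(layers):
    """Leaf-area-weighted sum over canopy layers (flag leaf plus lower
    leaves), integrating rather than neglecting the lower canopy.
    layers: iterable of (leaf_area_index, gs_h2o, o3_ppb) per layer."""
    return sum(lai * stomatal_o3_flux(gs, o3) for lai, gs, o3 in layers)

# Hypothetical two-layer canopy: flag leaf layer, then shaded leaves.
print(canopy_flux([(1.0, 300.0, 40.0), (2.5, 150.0, 35.0)]))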
Abstract:
Software architectural evaluation is a key discipline used to identify, at early stages of real-time system (RTS) development, the problems that may arise during operation. Typical mechanisms supporting concurrency, such as semaphores, mutexes or monitors, often lead to run-time concurrency problems that are difficult to identify, reproduce and solve. For this reason, it is crucial to understand the root causes of these problems and to provide support for identifying and mitigating them at early stages of the system lifecycle. This paper presents the results of research work oriented to the development of a tool called 'Deadlock Risk Evaluation of Architectural Models' (DREAM) to assess deadlock risk in architectural models of an RTS. A particular architectural style, Pipelines of Processes in Object-Oriented Architectures–UML (PPOOA), was used to represent platform-independent models of an RTS architecture, supported by the PPOOA–Visio tool. We validated the technique presented here by using several case studies related to RTS development and comparing our results with those from other deadlock detection approaches supported by different tools. Here we present two of these case studies, one related to avionics and the other to planetary exploration robotics. Copyright © 2011 John Wiley & Sons, Ltd.
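A classic necessary condition for deadlock is a cycle of circular waits. The sketch below is a generic illustration of that check rather than DREAM's actual analysis: it looks for a cycle in a wait-for graph that could be extracted from an architectural model.

```python
def find_cycle(wait_for):
    """wait_for: dict mapping each process/resource node to the nodes
    it waits on. Returns one cycle as a list of nodes, or None."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {n: WHITE for n in wait_for}
    stack = []

    def dfs(n):
        colour[n] = GREY
        stack.append(n)
        for m in wait_for.get(n, ()):
            if colour.get(m, WHITE) == GREY:          # back edge: cycle
                return stack[stack.index(m):] + [m]
            if colour.get(m, WHITE) == WHITE:
                found = dfs(m)
                if found:
                    return found
        stack.pop()
        colour[n] = BLACK
        return None

    for n in list(wait_for):
        if colour[n] == WHITE:
            cycle = dfs(n)
            if cycle:
                return cycle
    return None

# Hypothetical model: two processes each holding one mutex and
# requesting the other (the textbook deadly embrace).
model = {"P1": ["mtxA"], "mtxA": ["P2"], "P2": ["mtxB"], "mtxB": ["P1"]}
print(find_cycle(model))  # -> ['P1', 'mtxA', 'P2', 'mtxB', 'P1']
```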
Abstract:
We present a method for the static resource usage analysis of MiniZinc models. The analysis can infer upper bounds on the usage that a MiniZinc model will make of some resources, such as the number of constraints of a given type (equality, disequality, global constraints, etc.), the number of variables (search variables or temporary variables), or the size of the expressions before calling the solver. These bounds are obtained from the models independently of the concrete input data (the instance data) and are in general functions of the sizes of such data. In our approach, MiniZinc models are translated into Ciao programs, which are then analysed by the CiaoPP system. CiaoPP includes a parametric analysis framework for resource usage in which the user can define resources and express the resource usage of library procedures (and certain program constructs) by means of a language of assertions. We present the approach and report on a preliminary implementation, which shows the feasibility of the approach and provides encouraging results.
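As a toy illustration of the kind of bound such an analysis yields (the model and bounds below are hypothetical, and this is not CiaoPP's interface): for a MiniZinc-style model over an array x[1..n] whose "all different" constraint is decomposed into pairwise disequalities, the number of constraints emitted grows quadratically in the instance parameter n.

```python
def disequality_bound(n: int) -> int:
    # one x[i] != x[j] constraint per unordered pair -> n*(n-1)/2
    return n * (n - 1) // 2

def variable_bound(n: int) -> int:
    # n search variables, no temporaries in this hypothetical model
    return n

for n in (10, 100, 1000):
    print(f"n={n}: <= {disequality_bound(n)} disequalities, "
          f"<= {variable_bound(n)} variables")
```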
Abstract:
We present two approaches to cluster dialogue-based information obtained by the speech understanding module and the dialogue manager of a spoken dialogue system. The purpose is to estimate a language model related to each cluster, and use them to dynamically modify the model of the speech recognizer at each dialogue turn. In the first approach we build the cluster tree using local decisions based on a Maximum Normalized Mutual Information criterion. In the second one we take global decisions, based on the optimization of the global perplexity of the combination of the cluster-related LMs. Our experiments show a relative reduction of the word error rate of 15.17%, which helps to improve the performance of the understanding and the dialogue manager modules.
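As a toy illustration of the second approach's objective (assumed unigram LMs for brevity; the paper's models and clustering criterion are richer): combine cluster-related LMs by linear interpolation and choose the weights that minimise perplexity on held-out text.

```python
import math

def interpolate(lms, weights):
    """lms: list of dicts word -> prob; weights sum to 1."""
    vocab = set().union(*lms)
    return {w: sum(lam * lm.get(w, 1e-9) for lam, lm in zip(weights, lms))
            for w in vocab}

def perplexity(lm, text):
    logp = sum(math.log(lm.get(w, 1e-9)) for w in text)
    return math.exp(-logp / len(text))

# Two hypothetical cluster LMs and a held-out word sequence.
lm_flights = {"book": 0.4, "flight": 0.5, "hotel": 0.1}
lm_hotels = {"book": 0.3, "flight": 0.1, "hotel": 0.6}
held_out = ["book", "flight", "flight"]

# Pick the interpolation weight that minimises global perplexity.
best = min(((w, perplexity(interpolate([lm_flights, lm_hotels], [w, 1 - w]),
                           held_out)) for w in (0.1, 0.3, 0.5, 0.7, 0.9)),
           key=lambda t: t[1])
print(best)  # (weight, perplexity) of the best combination tried
```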
Abstract:
There is evidence that the climate is changing and that the change is now influenced and accelerated by the increase of CO2 in the atmosphere due to human combustion of fuels. Such "climate change" is on the policy agenda at the global level, with the aim of understanding and reducing its causes and mitigating its consequences. In most countries and international organisms (UNO, e.g. Rio de Janeiro 1992; OECD; EC; etc.) the efforts and debates have been directed at understanding the possible causes, predicting the future evolution of some conditioning variables, and carrying out studies to fight the effects or to delay their negative evolution. The Kyoto Protocol of 1997 set international efforts regarding CO2 emissions, but it was partial and not followed by, e.g., the USA and China, and in Durban 2011 the ineffectiveness of humanity on such global challenges became evident. Amid all this, the elaboration of a global model that can help to choose the best alternative among the feasible ones, to elaborate strategies and to evaluate costs has not been undertaken, and the authors propose to enter that frame of study. As in all natural, technological and social changes, the best-prepared countries will bear the change best and recover most rapidly. The alternative will not be the same in every geographic area, but the model must help us make the appropriate decision. It is essential to know the areas that are most sensitive to the negative effects of climate change, the parameters to take into account for its evaluation, and comprehensive plans to deal with it. The objective of this paper is to elaborate a mathematical decision-support model that will allow developing and evaluating alternatives of adaptation to climate change for different communities in Europe and Latin America, mainly in areas especially vulnerable to climate change, considering all the intervening factors. The models will consider criteria of a physical type (meteorological, edaphic, water resources), of land use (agricultural, forest, mining, industrial, urban, tourist, livestock), economic (income, costs, benefits, infrastructures), social (population), political (implementation, legislation), educational (educational programs, diffusion) and environmental, at the present moment and in the future. The intention is to obtain tools that help reach a realistic position on these challenges, which are an important part of the problems humanity faces in the coming decades.
Abstract:
Climate change is on the policy agenda at the global level, with the aim of understanding and reducing its causes and mitigating its consequences. In most countries and international organisms (UNO, OECD, EC, etc.) the efforts and debates have been directed at understanding the possible causes, predicting the future evolution of some conditioning variables, and carrying out studies to fight the effects or to delay their negative evolution. Nevertheless, the elaboration of a global model that can help to choose the best alternative among the feasible ones, to elaborate strategies and to evaluate costs has not been undertaken. As in all natural, technological and social changes, the best-prepared countries will bear the change best and recover most rapidly. The alternative will not be the same in every geographic area, but the model should help us make the appropriate decision. It is essential to know the areas that are most sensitive to the negative effects of climate change, the parameters to take into account for its evaluation, and comprehensive plans to deal with it. The objective of this paper is to elaborate a mathematical decision-support model that will allow developing and evaluating alternatives of adaptation to climate change for different communities in Europe and Latin America, mainly in areas vulnerable to climate change, considering all the intervening factors. The models will take into consideration criteria of a physical type (meteorological, edaphic, water resources), of land use (agricultural, forest, mining, industrial, urban, tourist, livestock), economic (income, costs, benefits, infrastructures), social (population), political (implementation, legislation), educational (educational programs, diffusion), health and environmental, at the present moment and in the future.
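Neither abstract specifies how the criteria families would be aggregated. As a purely illustrative sketch, a weighted-sum multi-criteria scoring of adaptation alternatives over the stated criteria might look as follows; all names, weights and scores are hypothetical.

```python
CRITERIA = ["physical", "land_use", "economic", "social",
            "political", "educational", "environmental"]

def rank_alternatives(alternatives, weights):
    """alternatives: dict name -> dict criterion -> score in [0, 1];
    weights: dict criterion -> non-negative weight."""
    total = sum(weights.values())
    scored = {name: sum(weights[c] * scores.get(c, 0.0)
                        for c in CRITERIA) / total
              for name, scores in alternatives.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

weights = {c: 1.0 for c in CRITERIA}   # equal weights, an assumption
alternatives = {
    "coastal defences": {"physical": 0.8, "economic": 0.4, "social": 0.6},
    "crop substitution": {"physical": 0.6, "economic": 0.7, "land_use": 0.8},
}
print(rank_alternatives(alternatives, weights))
```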
Abstract:
In recent years the missing fourth circuit element, the memristor, was successfully synthesized. However, the mathematical complexity and variety of the models behind this component, in addition to the existence of convergence problems in simulations, make the design of memristor-based applications long and difficult. In this work we present a memristor model characterization framework which supports the automated generation of subcircuit files. The proposed environment allows the designer to choose and parameterize the memristor model that best suits a given application. The framework carries out characterizing simulations in order to study possible non-convergence problems, resolving the dependence on the simulation conditions and guaranteeing the functionality and performance of the design. Additionally, the occurrence of undesirable effects related to PVT variations is also taken into account. By performing a Monte Carlo or a corner analysis, the designer is made aware of the safety margins that assure correct device operation.
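For context, one widely used memristor model is the linear ion drift model of Strukov et al.; the sketch below integrates it with forward Euler as a minimal illustration of why such models are parameter-sensitive, and is not one of the framework's generated subcircuits. Parameter values are illustrative.

```python
import math

R_ON, R_OFF = 100.0, 16e3     # ohms (illustrative values)
D = 10e-9                     # device thickness [m]
MU_V = 1e-14                  # dopant mobility [m^2 s^-1 V^-1]

def simulate(v_of_t, t_end, dt=1e-6, w0=0.1 * D):
    """Integrate dw/dt = mu_v * R_on / D * i(t): the state w is the
    doped-region width, and memristance varies between R_on and R_off."""
    w, t, trace = w0, 0.0, []
    while t < t_end:
        x = w / D
        m = R_ON * x + R_OFF * (1.0 - x)      # instantaneous memristance
        i = v_of_t(t) / m
        w += MU_V * R_ON / D * i * dt
        w = min(max(w, 0.0), D)               # hard state bounds
        trace.append((t, i, m))
        t += dt
    return trace

# Drive with a 1 V, 1 kHz sine; the i-v trace pinches at the origin.
trace = simulate(lambda t: math.sin(2 * math.pi * 1e3 * t), t_end=2e-3)
print(trace[0], trace[-1])
```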
Abstract:
This paper presents a new methodology to build parametric models that estimate global solar irradiation adjusted to specific on-site characteristics, based on the evaluation of variable importance. Those variables highly correlated with solar irradiation at a site are implemented in the model, and therefore different models may be proposed under different climates. The methodology is applied to a case study in the La Rioja region (northern Spain). A new model is proposed and evaluated for stability and accuracy against a review of twenty-two existing parametric models based on temperatures and rainfall, using seventeen meteorological stations in La Rioja. The model evaluation methodology is based on bootstrapping, which achieves a high level of confidence in model calibration and validation from short time series (in this case five years, from 2007 to 2011). The proposed model improves on the estimates of the other twenty-two models, with an average mean absolute error (MAE) of 2.195 MJ/m² day and an average confidence interval width (95% C.I., n=100) of 0.261 MJ/m² day. 41.65% of the daily residuals in the case of SIAR, and 20.12% in that of SOS Rioja, fall within the uncertainty tolerance of the pyranometers of the two networks (10% and 5%, respectively). Relative differences between measured and estimated irradiation on an annual cumulative basis are below 4.82%. Thus, the proposed model may be useful for estimating annual sums of global solar irradiation, reaching insignificant differences from pyranometer measurements.
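The paper's concrete model is not reproduced here; as a hedged sketch of the evaluation scheme, the snippet below bootstraps the MAE of a generic Hargreaves-type temperature-based model. The coefficient, data and interval handling are illustrative only.

```python
import random

def hargreaves(a, h0, tmax, tmin):
    """Generic temperature-based estimate H = a * H0 * sqrt(Tmax - Tmin)."""
    return a * h0 * (tmax - tmin) ** 0.5

def bootstrap_mae(records, a, n_boot=100):
    """records: list of (H0, Tmax, Tmin, measured H) daily tuples.
    Returns the mean bootstrapped MAE and a rough 95% interval."""
    maes = []
    for _ in range(n_boot):
        sample = [random.choice(records) for _ in records]
        errs = [abs(hargreaves(a, h0, tx, tn) - h) for h0, tx, tn, h in sample]
        maes.append(sum(errs) / len(errs))
    maes.sort()
    return sum(maes) / len(maes), (maes[2], maes[97])

random.seed(1)
data = [(35.0, 28.0, 12.0, 22.0), (30.0, 20.0, 10.0, 14.0),
        (38.0, 31.0, 15.0, 25.0), (25.0, 15.0, 7.0, 10.0)]
print(bootstrap_mae(data, a=0.17))
```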
Abstract:
Several authors have analysed the changes in the probability density function of solar radiation at different time resolutions. Others have studied the significance of these changes for produced-energy calculations. We have applied different transformations to four Spanish databases in order to clarify the interrelationship between radiation models and produced-energy estimations. Our contribution is straightforward: the complexity of a solar radiation model needed for yearly energy calculations is very low. Twelve values of monthly mean solar radiation are enough to estimate energy with errors below 3%. Time resolutions finer than hourly samples do not significantly improve the results of energy estimations.
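To make the headline claim concrete, a minimal sketch: twelve monthly means of daily global irradiation suffice to estimate a yearly energy figure with the usual performance-ratio approximation. The site data, system size and performance ratio below are hypothetical.

```python
DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

# Monthly mean daily global irradiation [kWh/m2 day], hypothetical site.
MONTHLY_MEAN = [2.1, 2.9, 4.2, 5.3, 6.4, 7.1, 7.3, 6.5, 5.0, 3.4, 2.3, 1.8]

def yearly_energy(monthly_mean, pr=0.75, p_stc_kw=1.0, g_stc=1.0):
    """Common performance-ratio estimate E = PR * P_stc * H_year / G_stc,
    with H_year the annual irradiation summed from monthly means."""
    h_year = sum(h * d for h, d in zip(monthly_mean, DAYS))  # kWh/m2 year
    return pr * p_stc_kw * h_year / g_stc

print(f"{yearly_energy(MONTHLY_MEAN):.0f} kWh/kWp per year")
```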
Abstract:
An elliptic computational fluid dynamics wake model based on the actuator disk concept is used to simulate a wind turbine, approximated by a disk upon which a distribution of forces, defined as axial momentum sources, is applied to an incoming non-uniform shear flow. The rotor is assumed to be uniformly loaded, with the exerted forces estimated as a function of the incident wind speed, thrust coefficient and rotor diameter. The model is assessed in terms of wind speed deficit and added turbulence intensity for different turbulence models, and is validated against experimental measurements from the Sexbierum wind turbine experiment.
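The force term an actuator disk implies follows from standard 1-D momentum relations; the sketch below shows those textbook relations rather than the paper's CFD setup, with illustrative turbine numbers.

```python
import math

RHO = 1.225  # air density [kg/m3]

def disk_thrust(ct, diameter, u_inf):
    """Total thrust on the flow, T = 0.5 * rho * Ct * A * U_inf^2 [N]."""
    area = math.pi * diameter ** 2 / 4.0
    return 0.5 * RHO * ct * area * u_inf ** 2

def axial_source(ct, diameter, u_inf, n_cells):
    """Per-cell axial momentum sink for a uniformly loaded disk [N]."""
    return -disk_thrust(ct, diameter, u_inf) / n_cells

def far_wake_deficit(ct):
    """1-D momentum theory: induction a = (1 - sqrt(1 - Ct)) / 2, giving
    a fractional far-wake speed deficit of 2a."""
    a = (1.0 - math.sqrt(1.0 - ct)) / 2.0
    return 2.0 * a

u = 8.0   # incident wind speed [m/s], hypothetical
print(disk_thrust(0.75, 30.0, u), far_wake_deficit(0.75) * u)
```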
Abstract:
We present an approach to dynamically adapt the language models (LMs) used by a speech recognizer that is part of a spoken dialogue system. We have developed a grammar generation strategy that automatically adapts the LMs using the semantic information that the user provides (represented as dialogue concepts), together with information regarding the intentions of the speaker (inferred by the dialogue manager and represented as dialogue goals). We carry out the adaptation as a linear interpolation between a background LM and one or more of the LMs associated with the dialogue elements (concepts or goals) addressed by the user. The interpolation weights between those models are automatically estimated on each dialogue turn, using measures such as the posterior probabilities of concepts and goals, estimated as part of the inference procedure that determines the actions to be carried out. We propose two approaches to handle the LMs related to concepts and goals. Whereas in the first one we estimate an LM for each of them, in the second one we apply several clustering strategies to group together those elements that share common properties, and estimate an LM for each cluster. Our evaluation shows how the system can estimate a dynamic model adapted to each dialogue turn, which helps to improve the performance of speech recognition (up to 14.82% relative improvement), leading in turn to improvements in both the language understanding and dialogue management tasks.
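A minimal sketch of the adaptation step described above, assuming unigram LMs and made-up posteriors (the system's models and weight estimation are richer): interpolate a background LM with the LMs of the concepts or goals the dialogue manager believes are active, weighting by their posteriors.

```python
def adapt_lm(background, element_lms, posteriors, floor=1e-9):
    """element_lms/posteriors: parallel lists for the active
    concepts/goals. Weighting rule: the background keeps at least half
    the probability mass (an assumption, not the paper's scheme)."""
    z = sum(posteriors) or 1.0
    lambdas = [0.5 * p / z for p in posteriors]
    lam_bg = 1.0 - sum(lambdas)
    vocab = set(background).union(*element_lms)
    return {w: lam_bg * background.get(w, floor)
               + sum(l * lm.get(w, floor)
                     for l, lm in zip(lambdas, element_lms))
            for w in vocab}

bg = {"the": 0.5, "flight": 0.2, "hotel": 0.2, "please": 0.1}
concept_flight = {"flight": 0.7, "the": 0.3}   # hypothetical concept LM
adapted = adapt_lm(bg, [concept_flight], posteriors=[0.8])
print(adapted["flight"])  # boosted relative to the background LM
```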
Abstract:
An asymptotic analysis of the Eberstein-Glassman kinetic mechanism for the thermal decomposition of hydrazine is carried out. It is shown that at temperatures near 800 K and near 1000 K, and for hydrazine molar fractions of the order of unity and 10^-2 respectively, the entire kinetics reduces to a single, overall reaction. Characteristic times for the chemical relaxation of all active, intermediate species produced in the decomposition, and for the overall reaction, are obtained. Explicit expressions for the overall reaction rate and stoichiometry are given as functions of temperature, total molar concentration (or pressure) and hydrazine molar fraction. Approximate, patched expressions can then be obtained for values of temperature and hydrazine molar fraction between 750 and 1000 K, and 1 and 10^-3, respectively.
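What a one-step overall reduction provides, in sketch form, is a single global rate law usable across the patched range. The Arrhenius parameters below are hypothetical placeholders, not the paper's fitted values.

```python
import math

R = 8.314  # gas constant [J mol^-1 K^-1]

def overall_rate(c_n2h4, temp_k, a=1.0e10, ea=180e3, order=1):
    """Global one-step rate w = A * exp(-Ea / (R*T)) * c^n for hydrazine
    decomposition [mol m^-3 s^-1]; a, ea and order are illustrative."""
    return a * math.exp(-ea / (R * temp_k)) * c_n2h4 ** order

for t in (750.0, 800.0, 1000.0):     # the patched temperature range
    print(t, overall_rate(10.0, t))
```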
Abstract:
During launch, satellites and their equipment are subjected to loads of a random nature over a wide frequency range. Their vibro-acoustic response is an important issue to analyse, for example for folded solar arrays and antennas. The main issue at low modal density is the modelling of combinations involving air layers, structures and the external fluid. Depending on the modal density, different methodologies, such as FEM, BEM and SEA, should be considered. This work focuses on the analysis of different combinations of the methodologies stated previously, used to characterise the vibro-acoustic response of two rectangular sandwich-structure panels, both in isolation and coupled through an air layer between them, under a diffuse acoustic field. Focusing on the modelling of air layers, different models are proposed. To illustrate the phenomenology described and studied, experimental results from an acoustic test on an ARA-MKIII solar array in folded configuration are presented along with numerical results.
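The method-selection rule this passage implies can be sketched as a simple threshold on modal density; the thresholds below are illustrative assumptions, not the paper's criteria.

```python
def pick_method(modes_in_band, low=5, high=30):
    """Deterministic methods (FEM/BEM) at low modal density, SEA when
    many modes fall in the analysis band, hybrids in between."""
    if modes_in_band < low:
        return "FEM/BEM (deterministic)"
    if modes_in_band > high:
        return "SEA (statistical)"
    return "hybrid FEM-SEA"

for n in (2, 12, 80):
    print(n, "->", pick_method(n))
```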