53 results for multivehicle interaction directed-graph model

at Universidad Politécnica de Madrid


Relevance: 100.00%

Abstract:

We present a novel framework for encoding latency analysis of arbitrary multiview video coding prediction structures. This framework avoids the need to consider a specific encoder architecture for encoding latency analysis by assuming unlimited processing capacity on the multiview encoder. Under this assumption, only the influence of the prediction structure and the processing times has to be considered, and the encoding latency is computed systematically by means of a graph model. The results obtained with this model are valid for a multiview encoder with sufficient processing capacity and serve as a lower bound otherwise. Furthermore, with the objective of designing low-latency encoders with a low penalty on rate-distortion performance, the graph model allows us to identify the prediction relationships that add the most encoding latency. Experimental results for JMVM prediction structures illustrate how low-latency prediction structures with a low rate-distortion penalty can be derived in a systematic manner using the new model.
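
As an illustration of the graph-based latency analysis described above (this is not the authors' formulation; the frame names, dependencies and processing times below are invented), the following sketch computes a lower bound on the encoding latency of each frame as a longest path through the prediction-dependency DAG, which is the quantity that is valid under the unlimited-processing-capacity assumption.

    # Minimal sketch (not the paper's implementation): with unlimited processing
    # capacity, a frame can start encoding as soon as all frames it predicts from
    # are finished, so its completion time is a longest path in the dependency DAG.
    from functools import lru_cache

    # Hypothetical toy prediction structure: frame -> frames it predicts from.
    deps = {
        "I0": [],
        "P1": ["I0"],
        "B2": ["I0", "P1"],
        "P3": ["P1"],
    }
    processing_time = {"I0": 10.0, "P1": 6.0, "B2": 4.0, "P3": 6.0}  # ms, illustrative

    @lru_cache(maxsize=None)
    def completion(frame):
        # Completion time = processing time + latest completion among references.
        start = max((completion(r) for r in deps[frame]), default=0.0)
        return start + processing_time[frame]

    encoding_latency = {f: completion(f) for f in deps}
    print(encoding_latency)               # per-frame lower bounds on encoding latency
    print(max(encoding_latency.values()))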

Relevance: 100.00%

Abstract:

Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning. They have also been proposed as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations; in this context they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships: while an edge in a directed graph defines a relation between two nodes, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong connectivity is an equivalence relation that partitions the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes, so the size of the original hypergraph can be reduced significantly when the strongly-connected components contain many nodes; this helps in understanding how the nodes of a hypergraph are connected, in particular when the hypergraph is large. For directed graphs there are efficient algorithms for computing the strongly-connected components of large graphs. Thanks to these algorithms it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram in which more than 70% of the nodes are distributed into three large sets, one of which is a large strongly-connected component. This structure has also been observed in complex networks in other fields, such as biology. Similar studies could not be conducted on directed hypergraphs because no algorithm existed for computing the strongly-connected components of such hypergraphs. In this thesis we investigate how to compute the strongly-connected components of directed hypergraphs. We present two new algorithms, prove their correctness and establish their computational complexity. One of them is inspired by Tarjan's algorithm for directed graphs; the second follows a simpler approach based on the fact that two strongly-connected nodes of a graph reach exactly the same set of nodes. Both algorithms are evaluated empirically to compare their running times. For the evaluation we have produced a selection of randomly generated directed hypergraphs inspired by well-known random graph models such as Erdős-Rényi, Newman-Watts-Strogatz and Barabási-Albert. Several optimizations of both algorithms have been implemented and analysed in the thesis; in particular, collapsing the strongly-connected components of the directed graph obtained by removing certain complex hyperedges from the original directed hypergraph notably improves the running times for several of the hypergraphs used in the evaluation. Besides the application examples mentioned above, directed hypergraphs have also been employed in the field of knowledge representation, in particular to compute the modules of an ontology. An ontology is a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. We focus on modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates their reuse and maintenance. Analysing all modules of an ontology, however, is in general not feasible, because the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can be represented succinctly using the atomic decomposition of the ontology: the axioms are partitioned into atoms, which are maximal sets of axioms that co-occur in every module, and the atomic decomposition is defined as a directed graph in which each node corresponds to an atom and each edge represents a dependency between two atoms. In this thesis we introduce the notion of an axiom dependency hypergraph, which generalizes the atomic decomposition of an ontology: a module of the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to its strongly-connected components. We have adapted the implementation of our algorithms so that they also work with axiom dependency hypergraphs and can thereby compute the atoms of an ontology. To demonstrate the viability of this approach we have implemented the algorithms in HyS, an application that extracts modules and computes the atomic decomposition of ontologies. The thesis provides an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO BioPortal; HyS outperforms the fastest state-of-the-art implementations in these tasks.
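
As a point of reference for the strong-connectivity notion used above, the sketch below (illustrative only, for ordinary directed graphs, not the thesis's hypergraph algorithms) groups nodes by their reachable sets. This is the directed-graph analogue of the second, reachability-based algorithm: nodes with identical reachable sets form one strongly-connected component. Tarjan-style algorithms are far more efficient; the point is only to make the idea concrete.

    from collections import defaultdict

    def reachable(graph, source):
        # Reflexive reachability by depth-first search.
        seen, stack = {source}, [source]
        while stack:
            node = stack.pop()
            for nxt in graph.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return frozenset(seen)

    def strongly_connected_components(graph):
        groups = defaultdict(set)
        for node in graph:
            groups[reachable(graph, node)].add(node)
        return list(groups.values())

    # Toy example: {a, b} form one component, c is on its own.
    print(strongly_connected_components({"a": ["b"], "b": ["a", "c"], "c": []}))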

Relevance: 100.00%

Abstract:

This work presents a systematic method for generating and processing alarm graphs, with the final objective of finding the root-cause alarm of the massive alarm bursts produced in dispatching centres. Although many works have already addressed this subject, alarm management in industry remains essentially unsolved. In this paper, a simple statistical analysis of the historical database is conducted. The results obtained from the alarm acquisition systems are used to generate a directed graph from which the most significant alarms are extracted, after first analysing the situations in which large quantities of alarms are produced.
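
A hedged sketch of the general idea only, not the paper's statistical analysis: build a directed graph whose edges link alarms that fire within a short time window of each other, then rank alarms by how many distinct alarms they appear to trigger. The alarm tags, the window and the toy log below are invented.

    from collections import defaultdict

    WINDOW = 5.0  # seconds; hypothetical correlation window

    def build_alarm_graph(events):
        """events: list of (timestamp, alarm_tag) sorted by timestamp."""
        edges = defaultdict(set)
        for i, (t_i, tag_i) in enumerate(events):
            for t_j, tag_j in events[i + 1:]:
                if t_j - t_i > WINDOW:
                    break
                if tag_j != tag_i:
                    edges[tag_i].add(tag_j)   # directed edge: tag_i precedes tag_j
        return edges

    log = [(0.0, "BREAKER_TRIP"), (0.8, "VOLT_LOW"), (1.1, "LOAD_SHED"), (9.0, "VOLT_LOW")]
    graph = build_alarm_graph(log)
    candidates = sorted(graph, key=lambda tag: len(graph[tag]), reverse=True)
    print(candidates[0], graph[candidates[0]])   # most plausible root-cause alarm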

Relevance: 100.00%

Abstract:

This article presents a cartographic system to facilitate cooperative manoeuvres among autonomous vehicles in a well-known environment. The main objective is to design an extended cartographic system to help in the navigation of autonomous vehicles. This system has to allow the vehicles to access not only the reference points needed for navigation, but also relevant information such as the location and type of traffic signals, the proximity to a crossing, the streets en route, etc. To do this, a hierarchical representation of the information has been chosen, where the information is stored in two levels. The lower level contains the files with the Universal Transverse Mercator (UTM) coordinates of the points that define the reference segments to follow. The upper level contains a directed graph, with an associated relational database, in which streets, crossings, roundabouts and other points of interest are represented. Using this new system it is possible to know when the vehicle approaches a crossing, what other paths arrive at that crossing, and, should there be other vehicles circulating on those paths and arriving at the crossing, which one has the highest priority. The data obtained from the cartographic system are used by the autonomous vehicles for cooperative manoeuvres.
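
An illustrative sketch of the two-level layout described above (all identifiers, attributes and coordinates are invented, and this is not the authors' schema): the lower level stores the UTM way-points of each reference segment, and the upper level is a directed graph relating streets and crossings, which is what a vehicle queries when approaching a crossing.

    lower_level = {
        # segment id -> list of (easting, northing) UTM points defining the path
        "SEG_A1": [(440512.3, 4474810.7), (440530.1, 4474862.4)],
        "SEG_B7": [(440530.1, 4474862.4), (440601.9, 4474880.0)],
    }

    upper_level = {
        "nodes": {
            "CROSS_12": {"type": "crossing", "signal": "yield"},
            "STREET_A": {"type": "street", "speed_limit": 30},
            "STREET_B": {"type": "street", "speed_limit": 50},
        },
        # directed edges: (from, to, segment carrying the manoeuvre)
        "edges": [
            ("STREET_A", "CROSS_12", "SEG_A1"),
            ("CROSS_12", "STREET_B", "SEG_B7"),
        ],
    }

    def paths_into(crossing):
        """Which segments arrive at a crossing (used to reason about priority)."""
        return [seg for src, dst, seg in upper_level["edges"] if dst == crossing]

    print(paths_into("CROSS_12"))    # -> ['SEG_A1']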

Relevance: 100.00%

Abstract:

The growing complexity, heterogeneity and dynamism inherent in telecommunications networks, distributed systems and emerging advanced information and communication services, as well as their increased criticality and strategic importance, call for the adoption of increasingly sophisticated technologies for their management, coordination and integration by network operators, service providers and end-user companies, in order to ensure adequate levels of functionality, performance and reliability. The management strategies adopted traditionally follow models that are excessively static and centralised, have a high supervision component and are difficult to scale. The pressing need to make management more flexible and, at the same time, more scalable and robust has in recent years generated considerable interest in developing new paradigms based on hierarchical and distributed models, as a natural evolution from the first weakly distributed hierarchical models that succeeded the centralised paradigm. Thus new models based on management by delegation, the mobile code paradigm, distributed object technologies and web services came into being. These alternatives have proved enormously robust, flexible and scalable compared with traditional management strategies, but many problems remain unsolved. Current research lines start from the observation that the hierarchical-distributed paradigm has as yet failed to solve many of the problems of robustness, scalability and flexibility, and advocate migration towards a strongly distributed cooperative paradigm. These lines of research have their origin in Distributed Artificial Intelligence (DAI) and, specifically, in the autonomous agent paradigm and Multi-Agent Systems (MAS). They revolve around a set of objectives that can be summarised as achieving a greater degree of autonomy in management functionality and a greater self-configuration capability, in order to solve the scalability problems and the need for supervision present in current systems, evolving towards strongly distributed, goal-driven cooperative control techniques, and giving information models a richer semantics. More and more researchers are starting to use agents for network and distributed systems management. However, the boundaries established in their work between mobile agents (which follow the mobile code paradigm) and autonomous agents (which really follow the cooperative paradigm) are fuzzy. Many of these works focus on the use of mobile agents, which, as with the mobile code techniques mentioned above, allows them to inject more dynamism into the traditional concept of management by delegation. In this way they make management more flexible, distribute the management logic close to the data and distribute control; however, they remain within the hierarchical-distributed paradigm. While a management architecture faithful to the strongly distributed cooperative paradigm has yet to be defined, these lines of research have revealed serious shortcomings in the information, communication and organisational models of existing management architectures. In this context, this dissertation presents an architectural model for the holonic management of distributed systems and services through societies of autonomous agents. Its main objectives are to raise the degree of automation of management tasks, to increase the scalability of management solutions, to support delegation both by domains and by macro-tasks, and to achieve a high degree of interoperability in open environments. On the basis of these objectives, a formal semantic information model based on description logic has been developed, which increases management automation through the use of rational autonomous agents capable of reasoning about, inferring and dynamically integrating knowledge and services conceptualised by means of the CIM model and formalised at the semantic level using description logic. The information model also includes a mapping, at the level of the CIM meta-model, to the OWL ontology specification language, which represents a significant advance in the area of XML-based representation and exchange of models and meta-information. At the interaction level, the model contributes a formal specification language (ACSL) for conversations between agents based on speech act theory, together with an operational semantics for this language that eases the verification of formal properties associated with the interaction protocol. A holonic, role-oriented organisational model has also been developed whose main features are aligned with those demanded by emerging distributed services, including the absence of central control, dynamic restructuring capabilities, cooperation capabilities and facilities for adaptation to different organisational cultures. The model includes a normative sub-model suited to the autonomous character of the management holons and based on deontic and action modal logics.
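
As a rough illustration of the kind of CIM-to-OWL mapping mentioned above (the thesis's actual meta-model mapping is not reproduced here), the sketch below expresses a small CIM class hierarchy and one association as OWL axioms, assuming the third-party rdflib package is available; all class, property and namespace names are invented.

    from rdflib import Graph, Namespace, RDF, RDFS, OWL

    CIM = Namespace("http://example.org/cim-owl#")   # hypothetical namespace
    g = Graph()

    # CIM_ManagedElement and a subclass CIM_Service become OWL classes.
    g.add((CIM.CIM_ManagedElement, RDF.type, OWL.Class))
    g.add((CIM.CIM_Service, RDF.type, OWL.Class))
    g.add((CIM.CIM_Service, RDFS.subClassOf, CIM.CIM_ManagedElement))

    # A CIM association becomes an object property between the two classes.
    g.add((CIM.hostedService, RDF.type, OWL.ObjectProperty))
    g.add((CIM.hostedService, RDFS.domain, CIM.CIM_ManagedElement))
    g.add((CIM.hostedService, RDFS.range, CIM.CIM_Service))

    for triple in g:
        print(triple)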

Relevance: 40.00%

Abstract:

The present research work concerns the study of vertical vortex-induced vibrations (VIV) in bridges whose geometric characteristics and dynamic properties make them sensitive to this type of aeroelastic phenomenon. It focuses on the analysis of the wind-structure interaction mechanism on bluff sections of simple geometry, with the objective of properly characterising the problem before addressing the analysis of sections with more complex geometries, representative of the main structural elements of bridges such as arches, decks, towers and piers. This issue is important during the bridge design phase, since details of these elements can significantly influence their sensitivity to aerodynamic problems. The shape and main dimensions of the deck cross section, the addition of safety barriers and windshields, the presence of braces connecting structural elements, the use of cross sections in a tandem arrangement, or the erection of a new bridge in the vicinity of an existing one are some of the aspects to be considered with regard to sensitivity to aeroelastic effects. The study has been carried out mainly through numerical simulations that reproduce the interaction between the airflow and representative cross sections of structural bridge models, using a CFD code based on the vortex particle method (VPM) and therefore following a Lagrangian scheme. The results have been validated against existing experimental data, values from wind tunnel tests and full-scale observations from several case studies: Alconétar (2006), Niterói (1980), Trans-Tokyo Bay (1995) and Volgograd (2010). Finally, a semi-empirical model is proposed for estimating the range of critical wind velocities and the oscillation amplitudes, based on the use of Scanlan's flutter derivatives and the power spectral density of the aerodynamic forces in the frequency domain.
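
For orientation only, the sketch below time-marches a generic single-degree-of-freedom vertical section model with a Scanlan-type self-excited force. The equation of motion is the standard textbook form and every numerical value, including the flutter derivatives, is invented, so this is not the semi-empirical model proposed in the thesis.

    import numpy as np

    rho, U, B = 1.225, 8.0, 3.0                       # air density, wind speed, deck width
    m, zeta, omega = 2.0e3, 0.003, 2 * np.pi * 0.5    # mass/length, damping ratio, rad/s
    K = B * omega / U                                 # reduced frequency
    H1, H4 = 2.0, 0.5                                 # flutter derivatives at this K (made up)

    dt, steps = 1e-3, 60000
    h, hdot = 0.01, 0.0                               # small initial displacement (m)
    history = []
    for _ in range(steps):
        # Scanlan-type self-excited vertical force per unit length.
        f_se = 0.5 * rho * U**2 * B * (K * H1 * hdot / U + K**2 * H4 * h / B)
        hddot = (f_se - 2 * m * zeta * omega * hdot - m * omega**2 * h) / m
        hdot += hddot * dt
        h += hdot * dt
        history.append(h)

    print(max(np.abs(history)))   # growth or decay indicates aeroelastic (in)stability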

Relevance: 40.00%

Abstract:

Accessibility is an essential concept widely used to evaluate the impact of transport and land-use strategies in urban planning and policy making. Accessibility is typically evaluated using either a transport model or a land-use model separately. This paper embeds two accessibility indicators (potential and adaptive accessibility) in a land use and transport interaction (LUTI) model in order to assess the implementation of transport policies. The first aim is to define adaptive accessibility, which considers the competition factor at the territorial level (e.g. workplaces and workers). The second aim is to identify the optimal implementation scenario of policy measures using the potential and adaptive accessibility indicators. The analysis of the results in terms of social welfare and accessibility changes closes the paper. Two transport policy measures are applied in the Madrid region: a cordon toll and an increase in bus frequency. They have been simulated with the MARS model (Metropolitan Activity Relocation Simulator, a LUTI model). An optimisation procedure is performed by MARS to maximise the value of an objective function and thus find the optimal policy implementation (first best). Both policy measures are evaluated in terms of accessibility. Results show that the introduction of the accessibility indicators (potential and adaptive) influences the optimal value of the toll price and the bus frequency level, generating different results in terms of social welfare. Mapping the difference between the potential and adaptive accessibility indicators shows that the main changes occur in areas where there is strong competition among different land-use opportunities.
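
A common way of writing the two indicators is sketched below (not necessarily the exact MARS equations): potential accessibility sums opportunities weighted by a travel-cost decay function, while adaptive accessibility additionally discounts each destination by the competition it attracts from all origins. All zone data and the decay parameter are invented.

    import numpy as np

    cost = np.array([[5.0, 20.0],        # travel cost c_ij between zones i and j
                     [20.0, 5.0]])
    jobs = np.array([1000.0, 400.0])     # opportunities O_j in each zone
    workers = np.array([600.0, 900.0])   # potential demand D_i in each zone
    beta = 0.1                           # cost-decay parameter (illustrative)

    impedance = np.exp(-beta * cost)             # f(c_ij)
    potential = impedance @ jobs                 # A_i = sum_j O_j f(c_ij)

    # Competition term: demand reaching each destination zone j.
    competition = impedance.T @ workers          # C_j = sum_i D_i f(c_ij)
    adaptive = impedance @ (jobs / competition)  # opportunities per competing worker

    print(potential, adaptive)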

Relevance: 40.00%

Abstract:

Identified neurons that control eye movements offer an excellent experimental target for the study of information coding and neuronal interaction processes within the central nervous system. Here we present some preliminary results on motoneuron behaviour during steady eye fixation, obtained by regression and analysis of variance techniques. A flexible information system intended for the systematic acquisition and analysis of simultaneous records of neuronal activity and the angular position of both eyes in a large number of cells, oriented towards the definition of mathematical models, is also briefly outlined.
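
As an illustration of the kind of regression involved (the data below are invented), ocular motoneuron firing rate during fixation is commonly described as approximately linear in eye position, R = k(E - E_T), so a least-squares fit yields the slope k and the recruitment threshold E_T.

    import numpy as np

    eye_position = np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0])    # degrees
    firing_rate = np.array([18.0, 42.0, 61.0, 83.0, 101.0, 124.0])  # spikes/s

    k, intercept = np.polyfit(eye_position, firing_rate, 1)
    threshold = -intercept / k            # position at which the cell falls silent
    residuals = firing_rate - (k * eye_position + intercept)

    print(f"k = {k:.2f} spikes/s per deg, E_T = {threshold:.1f} deg")
    print("residual variance:", residuals.var(ddof=2))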

Relevance: 30.00%

Abstract:

Object Kinetic Monte Carlo models allow for the study of the evolution of the damage created by irradiation up to time scales that are comparable to those achieved experimentally. Therefore, the essential Object Kinetic Monte Carlo parameters can be validated through comparison with experiments. However, this validation is not trivial since a large number of parameters is necessary, including migration energies of point defects and their clusters, binding energies of point defects in clusters, as well as the interaction radii. This is particularly cumbersome when describing an alloy, such as the Fe–Cr system, which is of interest for fusion energy applications. In this work we describe an Object Kinetic Monte Carlo model for Fe–Cr alloys in the dilute limit. The parameters used in the model come either from density functional theory calculations or from empirical interatomic potentials. This model is used to reproduce isochronal resistivity recovery experiments on electron-irradiated dilute Fe–Cr alloys performed by Abe and Kuramoto. The comparison between the calculated results and the experiments reveals that an important parameter is the capture radius between substitutional Cr and self-interstitial Fe atoms. A parametric study is presented on the effect of the capture radius on the simulated recovery curves.
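
For context, a minimal residence-time kinetic Monte Carlo step of the kind an OKMC code performs is sketched below; the object list, migration energies and attempt frequency are placeholders, not the Fe–Cr parameterisation used in the paper.

    import math, random

    KB = 8.617e-5          # Boltzmann constant, eV/K
    T = 300.0              # temperature, K
    NU0 = 6.0e12           # attempt frequency, 1/s

    # Hypothetical mobile objects with migration energies (eV).
    objects = [
        {"name": "SIA_Fe", "E_m": 0.30},
        {"name": "vacancy", "E_m": 0.65},
        {"name": "Cr_interstitial_pair", "E_m": 0.25},
    ]

    def kmc_step(objs):
        rates = [NU0 * math.exp(-o["E_m"] / (KB * T)) for o in objs]
        total = sum(rates)
        # Pick an event with probability proportional to its rate.
        r, acc, chosen = random.random() * total, 0.0, objs[-1]
        for obj, rate in zip(objs, rates):
            acc += rate
            if r <= acc:
                chosen = obj
                break
        dt = -math.log(random.random()) / total    # residence-time increment
        return chosen["name"], dt

    print(kmc_step(objects))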

Relevance: 30.00%

Abstract:

The study of matter under conditions of high density, pressure, and temperature is a valuable subject for inertial confinement fusion (ICF), astrophysical phenomena, high-power laser interaction with matter, etc. In all these cases, matter is heated and compressed by strong shocks to high pressures and temperatures, becomes partially or completely ionized via thermal or pressure ionization, and takes the form of a dense plasma. The thermodynamics and the hydrodynamics of hot dense plasmas cannot be predicted without knowledge of the equation of state (EOS), which describes how a material reacts to pressure and how much energy is involved. Therefore, the equation of state often takes the form of pressure and energy as functions of density and temperature. Furthermore, EOS data must be obtained in a timely manner in order to be useful as input to hydrodynamic codes. For this reason, the use of fast, robust and reasonably accurate atomic models is necessary for computing the EOS of a material.
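
As a toy illustration of the form such EOS data take (pressure and specific energy tabulated against density and temperature), the sketch below uses an ideal, fully ionized plasma. Real EOS models for ICF (ionization equilibrium, degeneracy, Coulomb corrections) are far more elaborate, and the element choice and grids here are arbitrary.

    import numpy as np

    KB = 1.380649e-23      # J/K
    MP = 1.6726e-27        # kg
    Z, A = 13.0, 27.0      # aluminium, assumed fully ionized for simplicity

    def eos(rho, T):
        """rho in kg/m^3, T in K -> pressure (Pa) and specific energy (J/kg)."""
        n_ion = rho / (A * MP)
        pressure = (1.0 + Z) * n_ion * KB * T
        energy = 1.5 * (1.0 + Z) * KB * T / (A * MP)
        return pressure, energy

    densities = np.logspace(1, 4, 4)       # kg/m^3
    temperatures = np.logspace(4, 7, 4)    # K
    table = [[eos(r, t) for t in temperatures] for r in densities]
    print(table[0][0])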

Relevance: 30.00%

Abstract:

A number of thrombectomy devices using a variety of methods have now been developed to facilitate clot removal. We present research involving one such experimental device recently developed in the UK, the 'GP' Thrombus Aspiration Device (GPTAD). This device has the potential to extract a thrombus. Although the device is at a relatively early stage of development, the results look encouraging. In this work, we present an analysis and model of the GPTAD by means of the bond graph technique, which appears to be a highly effective method of simulating the device under a variety of conditions. Such modeling is useful for optimizing the GPTAD and predicting the outcome of clot extraction. The aim of this simulation model is to obtain the minimum pressure necessary to extract the clot and to verify that both the pressure and the time required to complete the extraction are realistic for use in clinical situations and are consistent with experimentally obtained data. We therefore consider aspects of rheology and mechanics in our modeling.
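
Purely to illustrate the lumped, effort-and-flow style of reasoning behind a bond graph model (this is not the published GPTAD model), the sketch below drives flow through a catheter-like fluid resistance and inertance with a suction pressure source and compares the resulting force with a hypothetical clot adhesion threshold; every number is invented.

    import math

    MU, RHO = 3.5e-3, 1060.0                  # blood viscosity (Pa s), density (kg/m^3)
    L, R_C = 0.30, 1.0e-3                     # catheter length (m) and radius (m)
    R_HYD = 8 * MU * L / (math.pi * R_C**4)   # Poiseuille resistance (R element)
    I_HYD = RHO * L / (math.pi * R_C**2)      # fluid inertance (I element)

    P_SUCTION = -40e3                         # applied suction (Pa), illustrative
    ADHESION_FORCE = 0.05                     # N, hypothetical clot holding force

    q, dt = 0.0, 1e-4                         # volumetric flow state, time step
    for _ in range(2000):
        # The I element integrates the net effort: I dq/dt = Se - R q
        q += dt * (abs(P_SUCTION) - R_HYD * q) / I_HYD

    force_on_clot = abs(P_SUCTION) * math.pi * R_C**2
    print(force_on_clot > ADHESION_FORCE, q)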

Relevance: 30.00%

Abstract:

Interaction with smart objects can be accomplished with different technologies, such as tangible interfaces or touch computing, among others. Some of them require the object to be specially designed to be 'smart', and others are limited in the variety and complexity of the possible actions. This paper describes a user-smart object interaction model and prototype based on the well-known event-condition-action (ECA) reasoning, which can work, to a degree, independently of the intelligence embedded in the smart object. It has been designed for mobile devices, which act as mediators between users and smart objects, and provides an intuitive means of personalizing an object's behavior. When the user is close to an object, the object publishes its 'event & action' capabilities to the user's device. The user may accept the object's module offering, which enables the user to configure and control that object, and also its actions with respect to other elements of the environment or the virtual world. The modular ECA interaction model facilitates the integration of different types of objects in a smart space, giving the user full control of their capabilities and facilitating creative mash-ups to build customized functionalities that combine physical and virtual actions.
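
A minimal sketch of event-condition-action rule evaluation of the kind the paper builds on is shown below; the event names, condition and action are made up, and the real prototype runs on a mobile mediator device rather than in a standalone script like this.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class EcaRule:
        event: str                        # event type published by the smart object
        condition: Callable[[dict], bool]
        action: Callable[[dict], None]

    rules = [
        EcaRule(
            event="door_opened",
            condition=lambda ctx: ctx.get("lux", 0) < 50,      # only when it is dark
            action=lambda ctx: print("turning hallway lamp on"),
        ),
    ]

    def dispatch(event_type, context):
        for rule in rules:
            if rule.event == event_type and rule.condition(context):
                rule.action(context)

    # The mediator device receives an event from a nearby object:
    dispatch("door_opened", {"lux": 12, "source": "front_door"})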

Relevance: 30.00%

Abstract:

Tree-reweighted belief propagation is a message passing method that has certain advantages compared to traditional belief propagation (BP). However, it fails to outperform BP in a consistent manner, does not lend itself well to distributed implementation, and has not been applied to distributions with higher-order interactions. We propose a method called uniformly reweighted belief propagation that mitigates these drawbacks. Having shown in previous work that this method can substantially outperform BP in distributed inference with pairwise interaction models, in this paper we extend it to higher-order interactions and apply it to LDPC decoding, leading to performance gains over BP.
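
The sketch below shows a uniformly reweighted message update on a toy pairwise binary MRF (setting rho = 1 recovers standard BP); the graph, potentials and rho value are arbitrary, and this is not the authors' higher-order or LDPC formulation.

    import numpy as np

    rho = 0.8                                        # uniform edge appearance probability
    nodes = [0, 1, 2]
    edges = [(0, 1), (1, 2), (0, 2)]                 # small loopy graph
    phi = {i: np.array([0.6, 0.4]) for i in nodes}   # unary potentials
    psi = {e: np.array([[1.2, 0.6], [0.6, 1.2]]) for e in edges}   # pairwise potentials

    def pairwise(i, j):
        # Return psi indexed as [x_i, x_j] regardless of stored edge orientation.
        a, b = (i, j) if (i, j) in psi else (j, i)
        m = psi[(a, b)]
        return m if (a, b) == (i, j) else m.T

    msgs = {(i, j): np.ones(2) for a, b in edges for i, j in [(a, b), (b, a)]}
    for _ in range(50):
        new = {}
        for (i, j) in msgs:
            neigh = [k for k in nodes if (k, i) in msgs and k != j]
            prod = phi[i].copy()
            for k in neigh:
                prod = prod * msgs[(k, i)] ** rho
            prod = prod / msgs[(j, i)] ** (1.0 - rho)
            m = pairwise(i, j) ** (1.0 / rho)        # psi_ij(x_i, x_j)^(1/rho)
            new_msg = m.T @ prod                     # sum over x_i
            new[(i, j)] = new_msg / new_msg.sum()
        msgs = new

    beliefs = {}
    for i in nodes:
        b = phi[i].copy()
        for k in nodes:
            if (k, i) in msgs:
                b = b * msgs[(k, i)] ** rho
        beliefs[i] = b / b.sum()
    print(beliefs)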

Relevance: 30.00%

Abstract:

We show a procedure for constructing a probabilistic atlas based on affine moment descriptors. It uses a normalization procedure over the labeled atlas. The proposed linear registration is defined by closed-form expressions involving only geometric moments. This procedure applies both to atlas construction and to atlas-based segmentation. We model the likelihood term for each voxel and each label using parametric or nonparametric distributions, and the prior term is determined by applying the vote rule. The probabilistic atlas is built from the variability of our linear registration. We consider two segmentation strategies: (a) the proposed affine registration brings the target image into the coordinate frame of the atlas, or (b) the probabilistic atlas is non-rigidly aligned with the target image, after first being aligned to it with our affine registration. Finally, we adopt a graph-cut Bayesian framework for implementing the atlas-based segmentation.
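
To make the moments-based linear registration idea concrete (these are not the paper's exact closed-form expressions), the sketch below normalizes a 2-D image by its intensity centroid and second-order central moments, so that two images end up in a comparable normalized frame; the test images are synthetic.

    import numpy as np

    def moment_normalization(image):
        ys, xs = np.indices(image.shape)
        mass = image.sum()
        cx, cy = (xs * image).sum() / mass, (ys * image).sum() / mass
        dx, dy = xs - cx, ys - cy
        cov = np.array([
            [(dx * dx * image).sum(), (dx * dy * image).sum()],
            [(dx * dy * image).sum(), (dy * dy * image).sum()],
        ]) / mass
        # Whitening transform: A maps image coordinates into the normalized frame.
        vals, vecs = np.linalg.eigh(cov)
        A = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
        t = -A @ np.array([cx, cy])
        return A, t          # x_norm = A @ (x, y) + t

    atlas = np.zeros((64, 64)); atlas[20:40, 25:45] = 1.0
    target = np.zeros((64, 64)); target[10:50, 20:40] = 1.0
    print(moment_normalization(atlas)[0])
    print(moment_normalization(target)[0])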

Relevance: 30.00%

Abstract:

The solubility parameters of two commercial SBS rubbers with different structures (linear and radial) and with slightly different styrene contents have been determined by the inverse gas chromatography technique. The Flory–Huggins interaction parameters of several polymer–solvent mixtures have also been calculated. The influence of the polymer composition, the solvent molecular weight and the temperature on these parameters is discussed; in addition, these parameters have been compared with previous ones obtained from intrinsic viscosity measurements. From the Flory–Huggins interaction parameters, the infinite dilution activity coefficients of the solvents have been calculated and fitted to the well-known NRTL model. These NRTL binary interaction parameters are of great importance in modelling the separation steps in the process of obtaining the rubber.
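
As an illustration of the last step (the gamma values below are invented and alpha is fixed at the customary 0.3, so this is not the paper's data or fit), the sketch below recovers the NRTL binary parameters from a pair of infinite-dilution activity coefficients using their closed-form NRTL limits.

    import numpy as np
    from scipy.optimize import fsolve

    alpha = 0.3
    gamma1_inf, gamma2_inf = 6.2, 4.8     # hypothetical infinite-dilution values

    def residuals(taus):
        t12, t21 = taus
        # NRTL limits: ln g1_inf = t21 + t12*exp(-alpha*t12); symmetric for g2_inf.
        r1 = t21 + t12 * np.exp(-alpha * t12) - np.log(gamma1_inf)
        r2 = t12 + t21 * np.exp(-alpha * t21) - np.log(gamma2_inf)
        return [r1, r2]

    tau12, tau21 = fsolve(residuals, x0=[1.0, 1.0])
    print(f"tau12 = {tau12:.3f}, tau21 = {tau21:.3f}")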