43 results for Agent-Based model


Relevance: 80.00%

Abstract:

Planning sustainable urban mobility is a complex task involving a high degree of uncertainty due to the long-term planning horizon, the wide spectrum of potential policy packages, the need for effective and efficient implementation, the large geographical scale, the necessity to consider economic, social and environmental goals, and the traveller's response to the various courses of action and their political acceptability (Shiftan et al., 2003). Moreover, with the inevitable trends in motorisation and urbanisation, the demand for land and mobility in cities is growing dramatically. Consequently, the problems of traffic congestion, environmental deterioration, air pollution, energy consumption, community inequity, etc. are becoming more and more critical for society (EU, 2011). Certainly, this course is not sustainable in the long term. To address this challenge and achieve sustainable development, a long-term strategic urban plan, with its potentially important implications, should be established.
This thesis contributes to the long-term assessment of urban mobility by establishing an innovative methodology for optimizing and evaluating two types of transport demand management (TDM) measures. The new methodology aims at relaxing the utility-based decision-making assumption by embedding anticipated-regret and combined utility-regret decision mechanisms in an integrated transport planning framework. The proposed methodology includes two major aspects: 1) Construction of policy scenarios, with a single measure or combined TDM policy packages, using a survey method that incorporates regret theory. The purpose of building the TDM scenarios in this work is to address the specific implementation of each TDM measure in terms of time frame and geographic scale. In total, 13 TDM scenarios are built in terms of the most desirable, the most expected and the least-regret choice by means of a two-round Delphi-based expert survey. 2) Development of a combined utility-regret analysis framework based on multicriteria decision analysis (MCDA). This assessment framework is used to compare the contribution of each TDM scenario towards sustainable mobility and to determine the best scenario, considering not only the objective utility value obtained from the utility-based MCDA, but also a regret value calculated via a regret-based MCDA. The objective function of the utility-based MCDA is integrated in a land use and transport interaction model and is used for optimizing and assessing the long-term impacts of the constructed TDM scenarios. A regret-based model, the reference-dependent regret model (RDRM), is adapted to analyse the contribution of each TDM scenario from a subjective point of view. The methodology is implemented and validated in a case study of the province of Madrid. It defines a comprehensive technical procedure for assessing the strategic effects of transport demand management measures, which can be a useful, transparent and flexible planning tool for both planners and decision-makers.
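
To make the combined utility-regret ranking concrete, the following is a minimal sketch of an MCDA step in which scenarios are scored by a weighted utility and by a reference-dependent regret term. The scenario scores, criterion weights and the simple additive combination are illustrative assumptions, not the calibrated model of the thesis, where the utility term comes from the land use and transport interaction model and the regret term from the RDRM.

```python
import numpy as np

# Rows: TDM scenarios, columns: sustainability criteria (e.g. economic, social, environmental).
scores = np.array([
    [0.7, 0.5, 0.6],   # scenario A
    [0.4, 0.8, 0.7],   # scenario B
    [0.6, 0.6, 0.9],   # scenario C
])
weights = np.array([0.4, 0.3, 0.3])          # illustrative criterion weights

utility = scores @ weights                    # utility-based MCDA: weighted sum per scenario

# Reference-dependent regret: loss against the best achievable value on each criterion.
reference = scores.max(axis=0)
regret = (weights * (reference - scores)).sum(axis=1)

combined = utility - regret                   # one simple way to merge both views
best = combined.argmax()
print("utility:", utility, "regret:", regret, "best scenario index:", best)
```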

Relevance: 80.00%

Abstract:

We investigate optimal strategies to defend valuable goods against the attacks of a thief. Given the value of the goods and the probability of success for the thief, we look for the strategy that assures the largest benefit to each player irrespective of the strategy of his opponent. Two complementary approaches are used: agent-based modeling and game theory. It is shown that the compromise between the value of the goods and the probability of success defines the mixed Nash equilibrium of the game, which is then compared with the results of the agent-based simulations and discussed in terms of the system parameters.
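
As an illustration of the game-theoretic side, the sketch below computes the interior mixed Nash equilibrium of a 2x2 guard-versus-thief game by imposing the usual indifference conditions. The payoff matrices, built here from a goods value and a success probability, are invented for the example and are not the paper's parameterization.

```python
import numpy as np

# Illustrative 2x2 guard-versus-thief game; payoffs are invented.
# Rows: defender guards good 1 or good 2. Columns: thief attacks good 1 or good 2.
v, s = 1.0, 0.6               # value of the goods, thief's success probability when unguarded
D = np.array([[0.0, -s * v],  # defender's payoff: she loses s*v only if the attacked good is unguarded
              [-s * v, 0.0]])
A = -D                        # zero-sum version: the thief gains what the defender loses

def mixed_equilibrium_2x2(D, A):
    """Interior mixed Nash equilibrium of a 2x2 bimatrix game via indifference conditions."""
    # The thief mixes so that the defender is indifferent between her two rows.
    q = (D[1, 1] - D[0, 1]) / (D[0, 0] - D[0, 1] - D[1, 0] + D[1, 1])
    # The defender mixes so that the thief is indifferent between his two columns.
    p = (A[1, 1] - A[1, 0]) / (A[0, 0] - A[1, 0] - A[0, 1] + A[1, 1])
    return p, q               # P(guard good 1), P(attack good 1)

print(mixed_equilibrium_2x2(D, A))   # symmetric example -> (0.5, 0.5)
```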

Relevance: 80.00%

Abstract:

In this paper we describe the specification of a model for the semantically interoperable representation of language resources for sentiment analysis. The model integrates "lemon", an RDF-based model for the specification of ontology-lexica (Buitelaar et al., 2009), which is increasingly used for the representation of language resources as Linked Data, with Marl, an RDF-based model for the representation of sentiment annotations (Westerski et al., 2011; Sánchez-Rada et al., 2013).
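
A minimal sketch of how such an integrated resource could be expressed with rdflib is shown below. The namespaces and property names (e.g. marl:hasPolarity, lemon:LexicalEntry) follow the public lemon and Marl vocabularies as an assumption and may differ in detail from the model specified in the paper.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

# Namespaces and property names below are assumptions based on the public
# lemon and Marl vocabularies; the paper's actual model may differ in detail.
LEMON = Namespace("http://lemon-model.net/lemon#")
MARL = Namespace("http://www.gsi.dit.upm.es/ontologies/marl/ns#")
EX = Namespace("http://example.org/lexicon#")

g = Graph()
g.bind("lemon", LEMON)
g.bind("marl", MARL)

entry = EX["excellent"]
g.add((entry, RDF.type, LEMON.LexicalEntry))          # lexical entry from the ontology-lexicon

opinion = EX["excellent_opinion"]
g.add((opinion, RDF.type, MARL.Opinion))              # sentiment annotation attached to the entry
g.add((opinion, MARL.describesObject, entry))
g.add((opinion, MARL.hasPolarity, MARL.Positive))
g.add((opinion, MARL.polarityValue, Literal(0.9)))

print(g.serialize(format="turtle"))
```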

Relevance: 80.00%

Abstract:

In this paper we present an innovative technique to tackle the problem of automatic road sign detection and tracking using an on-board stereo camera. It involves a continuous 3D analysis of the road sign during the whole tracking process. Firstly, a color and appearance based model is applied to generate road sign candidates in both stereo images. A sparse disparity map between the left and right images is then created for each candidate by using contour-based and SURF-based matching in the far and short range, respectively. Once the map has been computed, the correspondences are back-projected to generate a cloud of 3D points, and the best-fit plane is computed through RANSAC, ensuring robustness to outliers. Temporal consistency is enforced by means of a Kalman filter, which exploits the intrinsic smoothness of the 3D camera motion in traffic environments. Additionally, the estimated plane allows perspective deformations to be corrected, thus easing subsequent sign classification.
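
The plane-fitting step can be sketched as a plain RANSAC over the back-projected 3D points, as below. The inlier tolerance, iteration count and the synthetic point cloud are illustrative choices, not the values used in the paper.

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_tol=0.05, rng=None):
    """Fit a plane to a 3D point cloud with RANSAC; returns (normal, d, inlier mask)."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p1
        dist = np.abs(points @ normal + d)    # point-to-plane distances
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

# Toy cloud: a tilted plane plus a few outliers (stand-in for back-projected sign correspondences).
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, (200, 2))
cloud = np.column_stack([xy, 0.3 * xy[:, 0] - 0.1 * xy[:, 1] + 2.0])
cloud[:10] += rng.normal(0, 1.0, (10, 3))     # outliers
normal, d, inliers = ransac_plane(cloud)
print(normal, d, inliers.sum())
```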

Relevance: 80.00%

Abstract:

Currently there are applications that simulate the behaviour of bacteria in different habitats and the processes taking place inside them, making it possible to study and experiment with them without the need for an actual laboratory. One of the most widely used open-source applications for simulating bacterial populations is iDynoMiCS (individual-based Dynamics of Microbial Communities Simulator), an agent-based simulator that allows working with several computational models of 2D and 3D bacteria in biofilms. The simulator offers great freedom through a large number of configurable variables regarding the environment, chemical reactions and other important details of the simulation, and it includes a very basic framework for simulating plasmid conjugation between bacteria.
Plasmids are DNA molecules physically distinct from the cell's chromosome, commonly found as small circular, double-stranded DNA molecules that replicate, conjugate and transcribe independently of the chromosomal DNA. They are normally present in prokaryotes and occasionally in eukaryotes, where they are called episomes. Plasmids are mechanisms external to the cell's basic operation and, in most cases, confer various evolutionary advantages on the host cell, such as antibiotic resistance, which makes their study and subsequent manipulation important. However, the operational framework for plasmid simulation in iDynoMiCS is too simple and does not allow operations more complex than the analysis of the spread of a plasmid in the community. This project was conceived to resolve this deficiency of iDynoMiCS. The necessary changes to the iDynoMiCS simulation software are analysed, developed and implemented so that it can simulate plasmid conjugation satisfactorily and more realistically, and so that it can solve various logic operations, such as plasmid-based genetic circuits. The results obtained are analysed against relevant studies and compared with those obtained with the original iDynoMiCS code. An additional study comparing the efficiency with which a substance is sensed by two different genetic circuits is also presented. This work may also be of interest to the LIA group of the Faculty of Informatics of the Universidad Politécnica de Madrid, which is participating in the European project BACTOCOM, focused on the study of plasmid conjugation and genetic circuits.
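
The kind of conjugation rule discussed above can be illustrated with a deliberately simplified agent-based sketch. The grid neighbourhood and the per-step transfer probability are assumptions for illustration and do not reproduce iDynoMiCS's actual conjugation mechanics.

```python
import random

# Minimal agent-based sketch of plasmid conjugation on a grid of bacteria.
SIZE, STEPS, P_TRANSFER = 20, 30, 0.1

grid = [[False] * SIZE for _ in range(SIZE)]   # True = cell carries the plasmid
grid[SIZE // 2][SIZE // 2] = True              # a single initial donor

def neighbours(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < SIZE and 0 <= nj < SIZE:
            yield ni, nj

for _ in range(STEPS):
    new_grid = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            if grid[i][j]:                      # donors try to conjugate with their neighbours
                for ni, nj in neighbours(i, j):
                    if not grid[ni][nj] and random.random() < P_TRANSFER:
                        new_grid[ni][nj] = True
    grid = new_grid

print("donors + transconjugants:", sum(map(sum, grid)))
```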

Relevance: 80.00%

Abstract:

The idea of giving a language to a group of robots or artificial agents has been the subject of intense study in recent decades. The first attempts focused on the emergence of a conventionally shared vocabulary. The advantages a common vocabulary can provide are evident, and a language with a more complex structure, in which words can be combined, would be even more beneficial. Several proposals have therefore been put forward aiming at the emergence of a consensual language with a syntactic structure similar to that of human language; this work follows that line. Taking human language as a model means adopting some of the assumptions and theories that disciplines such as philosophy, psychology and linguistics, among others, have proposed. According to these theoretical positions, language has a double dimension, formal and functional. On its formal side it seems clear that language follows rules, so the use of a grammar has been considered essential for its representation, but also because grammars are a very simple and powerful device for generating symbolic structures. As for the functional dimension, perhaps the most influential theory of recent times, the Theory of Speech Acts, has been taken into account. This theory builds on Wittgenstein's idea that meaning lies in the use of language, to the point that language is understood as a way of acting and behaving, in short, as a form of life. With these premises in mind, this thesis experiments with computational models that allow a group of robots to reach a shared language autonomously, simply through individual interactions among the robots in the form of language games. Three different language models are proposed:
• A model based on probabilistic grammars and reinforcement learning, in which interaction and language use are key to its emergence, using a static generative grammar designed beforehand. The model is applied to two different groups: one formed exclusively by robots and another combining robots and a human, so that in the second case learning is supervised by the human.
• A model based on grammatical evolution that allows studying not only syntactic consensus but also questions related to the very genesis of language, using a universal grammar from which the robots can evolve by themselves the grammar best suited to the linguistic situation they face at each moment.
• A model based on grammatical evolution and reinforcement learning that takes aspects of the previous models and extends the robots' possibilities, allowing them to develop a language adapted to dynamic linguistic situations that may change over time, and also allowing the imposition of order restrictions, which are very common in complex syntactic structures.
All models involve a decentralised and self-organised approach, so that none of the robots owns the language and all must cooperate and collaborate in a coordinated manner to reach syntactic consensus. In each case, experiments are presented to validate the proposed models, both in terms of the successful emergence of language and with regard to important parallel issues such as human-robot interaction and the very genesis of language.
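
For intuition about how local language games can yield a global convention, the sketch below implements a minimal naming game, a much simpler setting than the grammar-based models described above; population size, number of rounds and the invented word alphabet are arbitrary choices.

```python
import random

# Minimal naming game: a population of agents converges on a shared word for a
# single object through pairwise interactions. It only illustrates how local
# language games can produce a global convention, not the thesis' grammar models.
N_AGENTS, N_ROUNDS = 20, 4000
agents = [set() for _ in range(N_AGENTS)]      # each agent's inventory of words

def invent_word():
    return "".join(random.choice("abcdefgh") for _ in range(4))

for _ in range(N_ROUNDS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    if not agents[speaker]:
        agents[speaker].add(invent_word())
    word = random.choice(sorted(agents[speaker]))
    if word in agents[hearer]:                  # success: both collapse to the winning word
        agents[speaker] = {word}
        agents[hearer] = {word}
    else:                                       # failure: the hearer learns the word
        agents[hearer].add(word)

print("distinct words left in the population:", len(set().union(*agents)))
```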

Relevance: 80.00%

Abstract:

The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modeled shape tuning in visual AIP neurons and its relationship with curvature and gradient information from the caudal intraparietal area (CIP). The main goal was to gain insight into the kinds of shape parameterizations that can account for AIP tuning and that are consistent with both the inputs to AIP and the role of AIP in grasping. We first experimented with superquadric shape parameters. We considered superquadrics because they occupy a role in robotics similar to that of AIP, in that superquadric fits are derived from visual input and used for grasp planning. We also experimented with an alternative shape parameterization that was based on an Isomap dimension reduction of spatial derivatives of depth (i.e., distance from the observer to the object surface). We considered an Isomap-based model because its parameters lacked discontinuities between similar shapes. When we matched the dimension of the Isomap to the number of superquadric parameters, the superquadric model fit the AIP data somewhat more closely. However, higher-dimensional Isomaps provided excellent fits. Also, we found that the Isomap parameters could be approximated much more accurately than superquadric parameters by feedforward neural networks with CIP-like inputs. We conclude that Isomaps, or perhaps alternative dimension reductions of visual inputs to AIP, provide a promising model of AIP electrophysiology data. Further work is needed to test whether such shape parameterizations actually provide an effective basis for grasp control.
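
A rough sketch of the Isomap-based parameterization with off-the-shelf tools is given below: spatial derivatives of synthetic depth maps are embedded with Isomap and then approximated by a feedforward network. The synthetic data, embedding dimension and network size are illustrative, not those of the study.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neural_network import MLPRegressor

# Sketch: reduce spatial derivatives of depth to a low-dimensional embedding and
# approximate that embedding with a feedforward network (CIP-like gradient inputs).
rng = np.random.default_rng(0)
n_shapes, grid = 300, 16
depth = rng.normal(size=(n_shapes, grid, grid)).cumsum(axis=1).cumsum(axis=2)

# First-order spatial derivatives of depth, flattened into feature vectors.
dx = np.diff(depth, axis=2).reshape(n_shapes, -1)
dy = np.diff(depth, axis=1).reshape(n_shapes, -1)
features = np.hstack([dx, dy])

embedding = Isomap(n_neighbors=10, n_components=5).fit_transform(features)

# Feedforward approximation of the Isomap coordinates from the same inputs.
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(features, embedding)
print("R^2 of the network's fit to the Isomap parameters:", net.score(features, embedding))
```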

Relevance: 80.00%

Abstract:

With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is generally characterized only by purely mechanistic criteria, as functions of quantities such as stress, strain or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has rarely been explored. In particular, a quantitative, mechanically based model of electrophysiological impairment in neuronal cells has only very recently been proposed (Jerusalem et al., 2013). In this paper, we present the implementation details of Neurite: the parallel finite-difference program used in that reference. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of simulated cells grows. The solvers implemented in Neurite, explicit and implicit, were therefore parallelized using graphics processing units in order to reduce the simulation cost of large-scale scenarios. Cable Theory and Hodgkin-Huxley models were implemented to account for the passive and active electrophysiological regions of a neurite, respectively, whereas a coupled mechanical model accounting for the mechanical behavior of the neurite within its surrounding medium was adopted as a link between electrophysiology and mechanics (Jerusalem et al., 2013). This paper provides the details of the parallel implementation of Neurite, along with three application examples: a long myelinated axon, a segmented dendritic tree, and a damaged axon. The capabilities of the program to deal with large-scale scenarios, segmented neuronal structures, and functional deficits under mechanical loading are specifically highlighted.
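
As an indication of the numerics involved, the following is a minimal explicit finite-difference sketch of a passive cable, the passive part of the solvers described above. Parameters are textbook-order values, and the active Hodgkin-Huxley channels, the mechanical coupling and the GPU parallelization are deliberately omitted.

```python
import numpy as np

# Explicit finite-difference sketch of a passive cable. Parameter values are
# generic order-of-magnitude choices, not the ones used in the paper.
n, dx, dt, steps = 100, 1e-4, 1e-6, 5000       # compartments, m, s, time steps
a      = 2e-6                                  # neurite radius (m)
R_i    = 1.0                                   # axial resistivity (ohm*m)
c_m    = 1e-2                                  # membrane capacitance (F/m^2)
g_m    = 3.0                                   # membrane leak conductance (S/m^2)
E_rest = -65e-3                                # resting potential (V)

V = np.full(n, E_rest)
I_inj = np.zeros(n)
I_inj[0] = 0.5                                 # injected current density at one end (A/m^2)

for _ in range(steps):
    lap = np.zeros(n)
    lap[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    lap[0]    = (V[1] - V[0]) / dx**2          # sealed-end boundary conditions
    lap[-1]   = (V[-2] - V[-1]) / dx**2
    # Cable equation: c_m dV/dt = (a / 2 R_i) d2V/dx2 - g_m (V - E_rest) + I_inj
    dVdt = ((a / (2 * R_i)) * lap - g_m * (V - E_rest) + I_inj) / c_m
    V = V + dt * dVdt

print("membrane potential (mV) at the injected end:", V[0] * 1e3)
```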

Relevance: 80.00%

Abstract:

The concept of the IoT (Internet of Things) is accepted in academia and industry as the future direction of the Internet. It will enable people and things to be connected at any time and any place, with anything and anyone. The IoT has been proposed for application in many areas such as healthcare, transportation, logistics and smart environments. This thesis, however, focuses on the home healthcare area, a promising healthcare model for problems that the traditional model cannot cope with, such as limited medical resources and the increasing demand for healthcare from elderly and chronically ill patients. A remarkable feature of the IoT under the semantic-oriented vision is that a vast number of sensors and devices are involved, which can generate enormous amounts of data, so methods to manage the data, including acquiring, interpreting, processing and storing it, need to be implemented. Apart from this, other abilities the IoT still lacks are identified, namely interoperation, context awareness, and security and privacy. Context awareness is an emerging technology to manage and take advantage of context in order to enable any type of system to provide personalized services. The aim of this thesis is to explore ways to facilitate context awareness in the IoT. To this end, preliminary research is carried out. The most basic premise for realizing context awareness is to collect, model, understand, reason about and make use of context. A literature review of existing context modelling and context reasoning techniques is conducted, concluding that ontology-based context modelling and ontology-based context reasoning are the most promising and efficient techniques for managing context. In order to bring ontologies into the IoT, a specific ontology-based context awareness framework is proposed for IoT applications. The framework is composed of eight components: hardware, UI (User Interface), context modelling, context fusion, context reasoning, context repository, security unit and context dissemination. Moreover, on the basis of TOVE (Toronto Virtual Enterprise), a formal ontology development methodology is proposed and illustrated, consisting of four stages: specification and conceptualization, competency formulation, implementation, and validation and documentation. In addition, a home healthcare scenario is elaborated by listing its well-defined functionalities. To represent this specific scenario, the proposed ontology development methodology is applied and the ontology-based model is developed in Protégé, a free and open-source ontology editor. Finally, the accuracy and completeness of the proposed ontology are validated to show that it is able to accurately represent the scenario of interest.
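
A toy version of the ontology-based context modelling and reasoning step, written with rdflib, is sketched below. The class names, property names and the tachycardia rule are invented for illustration and are not the thesis' ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, XSD

# Minimal sketch of ontology-based context modelling for a home healthcare scenario.
CTX = Namespace("http://example.org/home-healthcare#")
g = Graph()
g.bind("ctx", CTX)

# Terminology (TBox): context classes and a property linking observations to patients.
for cls in ("Patient", "Sensor", "Observation"):
    g.add((CTX[cls], RDF.type, RDFS.Class))
g.add((CTX.observedOn, RDF.type, RDF.Property))
g.add((CTX.observedOn, RDFS.domain, CTX.Observation))
g.add((CTX.observedOn, RDFS.range, CTX.Patient))

# Assertions (ABox): one patient and one heart-rate observation.
g.add((CTX.maria, RDF.type, CTX.Patient))
g.add((CTX.hr_reading_1, RDF.type, CTX.Observation))
g.add((CTX.hr_reading_1, CTX.observedOn, CTX.maria))
g.add((CTX.hr_reading_1, CTX.heartRate, Literal(112, datatype=XSD.integer)))

# A trivial "context reasoning" step: derive a higher-level context from raw data.
high_hr = [s for s, _, o in g.triples((None, CTX.heartRate, None)) if int(o) > 100]
for obs in high_hr:
    g.add((obs, CTX.indicates, CTX.Tachycardia))

print(g.serialize(format="turtle"))
```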

Relevance: 80.00%

Abstract:

This work addresses a central issue in the ultimate load design of reinforced concrete and masonry structures: whether the plastic strains needed to reach an ultimate state can actually be developed by the regions of the structure that must exhaust their capacity for that state to be verified. The starting point is a set of design decisions that, by statics alone, ensure equilibrium of the structure under the prescribed ultimate loads, but the value of the strains required to reach that state is determined directly. The ultimate load theorems are therefore not taken at face value; the problem is formulated from an elasto-plastic point of view, so the path the structure must follow under a monotonically increasing load is not neglected, and the regions that have not yielded restrain the plastic strains that limit analysis assumes to be free. In terms of work and energy, the part of the system that has not yielded is introduced into the balance between the work of the external forces and the strain energy. Once this energy balance is established as the potential of the system, its stationarity condition determines the displacement field and hence the plastic strain field as well. In short, this provides a way to verify whether, and to what extent, the ductility of a given design is sufficient to reach the intended ultimate state under the imposed loads.
Several important points emerge from the theoretical development. Among them is the verification that the ultimate state reached through the elasto-plastic energy balance satisfies the conditions predicted by the ultimate load theorems, so the determined solution (uniqueness of the elastic problem) coincides with the uniqueness theorem for the collapse load, while additionally identifying the equilibrium system and the collapse mechanism, aspects the limit theorems cannot provide, since they only establish the value of the ultimate load. Another point concerns the particular case in which the yield surface of the system is flat, which makes the equilibrium possibilities for a given collapse mechanism infinite; this underlies the apparent freedom to choose where plastic hinges form in beams and arches. From this approach it is then found that, once the internal constitutive laws are defined, every system has an inherent condition that allows it to reach the onset of the ultimate state without demanding any plastic strain, with all regions that have reached their strength yielding simultaneously. In a certain sense, the collapse would appear brittle. In such a case the system retains its full ductile capacity up to the end, and this state acts as a canonical representative of any other equilibrium solution posed for the same structure with the same internal design criteria. The closer a design is to the canonical solution, the lower the ductility demand needed to verify the ultimate load; the farther it is, the higher that demand. Solutions departing too far from the canonical solution will not verify the intended ultimate state for lack of ductility: the plastic strain demand in some yielded region will exceed its capacity, revealing a collapse load, limited by ductility, lower than the one predicted by equilibrium alone.
To determine the plastic strains at the hinges, a model formulated with the Boundary Element Method is used, providing a continuous displacement field, and hence continuous strain and stress fields, even in the presence of cracks on the boundary. A relevant point is the considerable difference in the plastic rotation capacity of reinforced concrete sections with and without shear. For masonry hinges, the difference is set by the eccentricity conditions, associated with the relative value of the compression, where the differences between yielded regions under high and low relative axial force are notable. In addition, although somewhat secondarily, serviceability conditions also impose a limit on the intended ultimate load design. Yielding involves considerable deformations, both local and global, so if the yielding of some region entails cracking that is excessive for the surrounding environment, the solution becomes unfeasible for that reason. Likewise, structural deflections impose a severe limit on design possibilities; especially in building structures, active deflections are a critical factor when choosing one solution or another. Therefore, to the limit imposed by ductility, the limit imposed by serviceability conditions must be added. In this way, considering the ductility and serviceability conditions in each case, every design decision can be assessed by predicting its consequences at the ultimate and serviceability states: once the limits are known, it is possible to bound which designs can a priori satisfy the intended ductility and serviceability conditions, to what extent they do so, and, when they cannot, which corrections should be made to the initial design so that they do. Finally, from the conclusions drawn, several lines of study and experimentation are proposed in order to complete or extend the results obtained in a practical way.
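
The stationarity argument can be stated schematically as follows; the functional and the notation below are chosen here for illustration and are only a compact paraphrase of the balance described above, not the thesis' exact formulation.

```latex
% Schematic elasto-plastic potential: elastic energy of the non-yielded part,
% plastic dissipation in the yielded regions, and the work of the ultimate loads.
\[
  \Pi(u,\varepsilon^{p})
  =
  \int_{\Omega} \tfrac{1}{2}
      \bigl(\varepsilon(u)-\varepsilon^{p}\bigr) : \mathsf{C} :
      \bigl(\varepsilon(u)-\varepsilon^{p}\bigr)\, d\Omega
  \;+\;
  \int_{\Omega} D\!\left(\varepsilon^{p}\right) d\Omega
  \;-\;
  \int_{\partial\Omega} \bar{t}\cdot u \, dS
\]
% Imposing stationarity determines both fields, instead of assuming free plastic strains:
\[
  \delta \Pi = 0
  \quad\Longrightarrow\quad
  u \ \text{and} \ \varepsilon^{p} \ \text{are determined.}
\]
```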

Relevance: 60.00%

Abstract:

In this paper, we introduce the B2DI model, which extends the BDI model to perform Bayesian inference under uncertainty. For scalability and flexibility purposes, Multiply Sectioned Bayesian Network (MSBN) technology has been selected and adapted to BDI agent reasoning. A belief update mechanism has been defined for agents whose belief models are connected by public shared beliefs, and the certainty of these beliefs is updated based on the MSBN. The classical BDI agent architecture has been extended in order to manage uncertainty using Bayesian reasoning, and the resulting extended model, called B2DI, proposes a new control loop. The proposed B2DI model has been evaluated in a network fault diagnosis scenario and compared with two previously developed agent models. The evaluation has been carried out on a real testbed diagnosis scenario using JADEX. As a result, the proposed model exhibits significant improvements in the cost and time required to carry out a reliable diagnosis.
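
The belief-update idea can be illustrated with a single-variable Bayesian update, a drastic simplification of the MSBN machinery used in the paper; the prior, the likelihoods and the fault hypotheses are invented for the example.

```python
# Minimal sketch of the kind of Bayesian belief update a B2DI-style agent could
# perform during fault diagnosis. All numbers below are illustrative assumptions.
prior = {"link_down": 0.05, "congestion": 0.15, "healthy": 0.80}

# P(observed packet loss | fault hypothesis), assumed values.
likelihood = {"link_down": 0.99, "congestion": 0.60, "healthy": 0.02}

def update_belief(prior, likelihood):
    """Bayes' rule over a set of mutually exclusive hypotheses."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

posterior = update_belief(prior, likelihood)
print(posterior)   # the agent's revised certainty about each shared belief
```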

Relevance: 60.00%

Abstract:

Cooperative systems are suitable for many types of applications, and nowadays such systems are widely used to improve a previously defined system or to coordinate multiple devices working together. This paper provides an alternative approach to improving the reliability of a previous intelligent identification system. The proposed approach implements a cooperative model based on a multi-agent architecture. The new system is composed of several radar-based subsystems, each of which identifies a detected object and transmits its own partial result, implemented by several agents communicating over a wireless network. The proposed topology is a centralized architecture in which a coordinator device is in charge of providing the final identification result depending on the group behavior. In order to reach the final outcome, three different mechanisms are introduced: the simplest one is based on majority voting, whereas the other two use different weighted voting procedures, both providing the system with learning capabilities. Using an appropriate network configuration, the success rate can be improved from the initial 80% to more than 90%.
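
The decision mechanisms reduce, in essence, to voting rules like the ones sketched below; the agent labels, votes and reliability weights are illustrative, and the learning of the weights is omitted.

```python
from collections import Counter

# Sketch of the coordinator's decision step: simple majority voting and a
# weighted variant. Votes and weights are invented for the example.
votes = {"radar1": "car", "radar2": "truck", "radar3": "car", "radar4": "car"}
weights = {"radar1": 0.9, "radar2": 0.6, "radar3": 0.8, "radar4": 0.7}   # learned reliabilities

def majority_vote(votes):
    return Counter(votes.values()).most_common(1)[0][0]

def weighted_vote(votes, weights):
    scores = {}
    for agent, label in votes.items():
        scores[label] = scores.get(label, 0.0) + weights[agent]
    return max(scores, key=scores.get)

print(majority_vote(votes), weighted_vote(votes, weights))
```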

Relevance: 50.00%

Abstract:

Runtime management of distributed information systems is a complex and costly activity. One of the main challenges that must be addressed is obtaining a complete and up-to-date view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model copes with the complexity and variability of these systems and enables abstraction over non-relevant details. The infrastructure uses this information model to monitor and manage the modelled environment, performing and detecting changes at runtime. The agent infrastructure is further detailed, explaining its components and the relationships between them. Moreover, the proposal is validated through a set of agents that instrument the JEE Glassfish application server, paying special attention to supporting distributed configuration scenarios.
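
As a loose illustration of the agent side of the architecture, the sketch below shows an agent that keeps an information model of a managed resource and reports the changes it detects at runtime. The polled state and class names are invented; the actual system instruments the JEE Glassfish application server through its management interfaces.

```python
import time

# Toy monitoring agent: it maintains a model of a managed resource and reports
# the differences it detects between polls. Names and the polled "resource"
# are illustrative only.
def read_resource_state():
    # Stand-in for querying a managed server (e.g. via its management API).
    return {"threads": 42, "deployed_apps": 3}

class MonitoringAgent:
    def __init__(self):
        self.model = {}                        # the agent's view of the resource

    def poll(self):
        state = read_resource_state()
        changes = {k: v for k, v in state.items() if self.model.get(k) != v}
        self.model.update(state)
        return changes                         # what changed since the last poll

agent = MonitoringAgent()
for _ in range(2):
    print("detected changes:", agent.poll())
    time.sleep(0.1)
```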