17 results for Computer models

at Universidad Politécnica de Madrid


Relevance:

60.00%

Abstract:

There are currently applications that simulate the behaviour of bacteria in different habitats, and the processes taking place in them, so that they can be studied and experimented with without the need for an actual laboratory. One of the most widely used open-source applications for simulating bacterial populations is iDynoMiCS (individual-based Dynamics of Microbial Communities Simulator), an agent-based simulator that can work with several computer models of 2D and 3D bacteria in biofilms. The simulator offers great freedom through a large number of configurable variables concerning the environment, chemical reactions and other important details of the simulation, and among these features there is a basic framework for simulating plasmid conjugation between bacteria. Plasmids are DNA molecules physically distinct from the cell's chromosome, commonly found as small circular, double-stranded molecules that replicate, transcribe and conjugate independently of the chromosomal DNA. They are normally present in prokaryotes and occasionally in eukaryotes, in which case they are called episomes. Given the complex behaviour of plasmids and the range of possibilities they offer as mechanisms external to the cell's basic operation, in most cases conferring evolutionary advantages on the host such as antibiotic resistance, their study and subsequent manipulation are important.
However, the plasmid-simulation framework of iDynoMiCS is too simple and does not allow operations more complex than analysing the spread of a plasmid through the community. This project was conceived to resolve that deficiency. It analyses, develops and implements the changes needed for iDynoMiCS to simulate plasmid conjugation satisfactorily and more realistically, and to solve various plasmid-based logic operations, such as genetic circuits. The results obtained are analysed against relevant studies and compared with those produced by the original iDynoMiCS code, and an additional study comparing the efficiency with which a substance is detected by two different genetic circuits is presented. This work may also be of interest to the LIA group of the Facultad de Informática of the Universidad Politécnica de Madrid, which is participating in the European project BACTOCOM, focused on the study of plasmid conjugation and genetic circuits.
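
The extended conjugation mechanism lends itself to a compact illustration. iDynoMiCS itself is written in Java; the snippet below is only a minimal Python sketch of the kind of per-timestep rule an individual-based conjugation model applies, where every plasmid-bearing donor attempts a transfer to neighbouring plasmid-free cells with a fixed probability. All names and the transfer probability are illustrative and are not part of the iDynoMiCS API.

```python
import random

# Minimal illustrative sketch of individual-based plasmid conjugation
# (hypothetical names; not the actual iDynoMiCS API).

class Cell:
    def __init__(self, cell_id, has_plasmid=False):
        self.cell_id = cell_id
        self.has_plasmid = has_plasmid

def conjugation_step(cells, neighbours, transfer_prob=0.1):
    """One timestep: each donor tries to conjugate with each plasmid-free neighbour."""
    newly_infected = []
    for cell in cells:
        if not cell.has_plasmid:
            continue
        for other in neighbours[cell.cell_id]:
            if not other.has_plasmid and random.random() < transfer_prob:
                newly_infected.append(other)
    # Apply transfers after scanning so iteration order does not bias the result.
    for recipient in newly_infected:
        recipient.has_plasmid = True

# Toy population: 20 cells on a line, cell 0 carries the plasmid.
cells = [Cell(i, has_plasmid=(i == 0)) for i in range(20)]
neighbours = {i: [cells[j] for j in (i - 1, i + 1) if 0 <= j < 20] for i in range(20)}

for t in range(50):
    conjugation_step(cells, neighbours)
print("plasmid-bearing cells after 50 steps:", sum(c.has_plasmid for c in cells))
```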

Relevance:

60.00%

Abstract:

Kinépolis Madrid is one of the largest cinema complexes in the world, holding the Guinness World Record for the cinema complex with the most seats. It consists of 25 theatres with capacities between 220 and 996 spectators. All of them are equipped with the latest sound and image technology and are acoustically conditioned so that their acoustic characteristics are optimal; nevertheless, the complex has no information about these characteristics. This final degree project (PFG) measures some of those acoustic parameters, such as clarity, definition and intelligibility, paying special attention to the reverberation time, since it is one of the most significant parameters for characterising a room acoustically. The study focuses on theatre number 3, with capacity for 327 spectators, which makes it one of the medium-sized rooms of the complex. In addition to measuring the acoustic characteristics of the room, its dimensions are measured in order to build two virtual models of it: one detailed and one simpler. Simulations are then run on these models to obtain the same parameters measured in the real room. Once the acoustic parameters have been obtained in both ways, the measured and simulated values are compared, studying whether their differences exceed certain thresholds, which indicates whether or not the computer models can really represent the real room.
Finally, conclusions are drawn about which of the two models comes closer to the real measurements, how to perform the simulations, which types of signal to use in the measurements and which parameters to take into account, so that future work can be carried out more easily and in less time.
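
The comparison step described above can be illustrated with a small sketch. The snippet below checks whether simulated reverberation times stay within a tolerance of the measured ones; the 5% figure is only an assumed just-noticeable-difference threshold in the spirit of ISO 3382-1, and all values are invented for illustration.

```python
# Hypothetical measured vs. simulated reverberation times (seconds) per octave band.
measured_rt = {125: 1.10, 250: 0.95, 500: 0.80, 1000: 0.75, 2000: 0.70, 4000: 0.65}
simulated_rt = {125: 1.18, 250: 0.97, 500: 0.78, 1000: 0.73, 2000: 0.74, 4000: 0.66}

JND_FRACTION = 0.05  # assumed ~5% just-noticeable difference for reverberation time

def bands_exceeding_jnd(measured, simulated, jnd=JND_FRACTION):
    """Return the octave bands where the simulation deviates more than the JND."""
    failing = []
    for band, m in measured.items():
        deviation = abs(simulated[band] - m) / m
        if deviation > jnd:
            failing.append((band, round(deviation * 100, 1)))
    return failing

failures = bands_exceeding_jnd(measured_rt, simulated_rt)
if failures:
    print("Bands exceeding the tolerance (Hz, % deviation):", failures)
else:
    print("The model reproduces the measured reverberation times within tolerance.")
```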

Relevance:

30.00%

Abstract:

In the presence of a river flood, the operators in charge of control must take decisions based on imperfect and incomplete sources of information (e.g., data provided by a limited number of sensors) and on partial knowledge about the structure and behavior of the river basin. This is a case of reasoning about a complex dynamic system under uncertainty and real-time constraints, where Bayesian networks can provide effective support. In this paper we describe a solution based on spatio-temporal Bayesian networks to be used in the context of emergencies produced by river floods. We first describe a set of types of causal relations for hydrologic processes, with spatial and temporal references, to represent the dynamics of the river basin. We then describe how this was included in a computer system called SAIDA to assist the operators in charge of controlling a river basin. Finally, the paper presents experimental results on the performance of the model.
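
A single causal relation of the kind used in such a network can be sketched as follows; the variables and probabilities are purely illustrative and are not taken from SAIDA.

```python
# Minimal sketch of one spatio-temporal causal relation: rainfall at an upstream
# sub-basin at time t influencing the water level at a downstream gauge at t+1.

p_heavy_rain = 0.2  # illustrative prior P(heavy_rain_t)

# Conditional probability table: P(high_level_{t+1} | heavy_rain_t)
p_high_given_rain = {True: 0.7, False: 0.05}

# Predictive marginal obtained by summing over the parent variable.
p_high_level = (p_heavy_rain * p_high_given_rain[True]
                + (1 - p_heavy_rain) * p_high_given_rain[False])

print(f"P(high water level at t+1) = {p_high_level:.3f}")  # 0.2*0.7 + 0.8*0.05 = 0.18
```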

Relevance:

30.00%

Abstract:

The term "Logic Programming" refers to a variety of computer languages and execution models which are based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears as a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.

Relevance:

30.00%

Abstract:

We present two approaches to cluster dialogue-based information obtained by the speech understanding module and the dialogue manager of a spoken dialogue system. The purpose is to estimate a language model (LM) related to each cluster and to use these models to dynamically modify the model of the speech recognizer at each dialogue turn. In the first approach we build the cluster tree using local decisions based on a Maximum Normalized Mutual Information criterion. In the second we take global decisions, based on the optimization of the global perplexity of the combination of the cluster-related LMs. Our experiments show a relative reduction of the word error rate of 15.17%, which helps to improve the performance of the understanding and dialogue manager modules.
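
The cluster-related LMs are combined at run time; as a rough illustration, the sketch below linearly interpolates two toy unigram models and scores a word sequence by perplexity. The vocabularies, probabilities and interpolation weights are invented and do not come from the paper.

```python
import math

# Toy cluster-related unigram LMs (illustrative probabilities only).
lm_cluster_a = {"book": 0.30, "flight": 0.40, "to": 0.20, "madrid": 0.10}
lm_cluster_b = {"book": 0.10, "hotel": 0.50, "in": 0.25, "madrid": 0.15}

def interpolate(lms, weights):
    """Linear interpolation of several LMs into a single distribution."""
    vocab = set().union(*lms)
    return {w: sum(lam * lm.get(w, 1e-6) for lam, lm in zip(weights, lms))
            for w in vocab}

def perplexity(lm, sentence):
    """Per-word perplexity of a word sequence under a unigram LM."""
    log_prob = sum(math.log2(lm.get(w, 1e-6)) for w in sentence)
    return 2 ** (-log_prob / len(sentence))

combined = interpolate([lm_cluster_a, lm_cluster_b], weights=[0.6, 0.4])
print(round(perplexity(combined, ["book", "flight", "to", "madrid"]), 2))
```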

Relevance:

30.00%

Abstract:

Many cities in Europe have difficulty meeting the air quality standards set by European legislation, most particularly the annual mean Limit Value for NO2. Road transport is often the main source of air pollution in urban areas and there is therefore an increasing need to estimate current and future traffic emissions as accurately as possible. As a consequence, a number of specific emission models and emission factor databases have been developed recently. They present important methodological differences, may result in largely diverging emission figures and may thus lead to alternative policy recommendations. This study compares two approaches to estimate road traffic emissions in Madrid (Spain): the COmputer Programme to calculate Emissions from Road Transport (COPERT4 v.8.1) and the Handbook Emission Factors for Road Transport (HBEFA v.3.1), representative of the ‘average-speed’ and ‘traffic situation’ model types respectively. The input information (e.g. fleet composition, vehicle kilometres travelled, traffic intensity, road type) was provided by the traffic model developed by the Madrid City Council along with observations from field campaigns. Hourly emissions were computed for nearly 15 000 road segments distributed in 9 management areas covering the city of Madrid and its surroundings. Total annual NOX emissions predicted by HBEFA were 21% higher than those of COPERT. The discrepancies for NO2 were lower (13%) since the resulting average NO2/NOX ratios are lower for HBEFA. The largest differences are related to diesel vehicle emissions under “stop & go” traffic conditions, very common on distributor/secondary roads of the Madrid metropolitan area. In order to understand the representativeness of these results, the resulting emissions were integrated into an urban-scale inventory used to drive mesoscale air quality simulations with the Community Multiscale Air Quality (CMAQ) modelling system (1 km2 resolution). Modelled NO2 concentrations were compared with observations through a series of statistics. Although there are no remarkable differences between the two model runs, the results suggest that HBEFA may overestimate traffic emissions. However, the results are strongly influenced by methodological issues and limitations of the traffic model. This study was useful to provide a first alternative estimate to the official emission inventory in Madrid and to identify the main features of the traffic model that should be improved to support the application of an emission system based on “real world” emission factors.
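
The ‘average-speed’ approach can be summarised in a few lines: for each road segment, emissions are the vehicle-kilometres travelled multiplied by a speed-dependent emission factor. The sketch below illustrates this structure only; the polynomial used as the emission factor is an invented placeholder, not an actual COPERT curve.

```python
def nox_emission_factor(speed_kmh):
    """Placeholder speed-dependent NOx emission factor in g/km (NOT a real COPERT curve)."""
    return 1.2 - 0.015 * speed_kmh + 0.0001 * speed_kmh ** 2

def segment_emissions(traffic_volume_veh_h, length_km, avg_speed_kmh, hours=1):
    """NOx emissions (g) for one road segment: vehicle-km travelled times EF(speed)."""
    vkm = traffic_volume_veh_h * hours * length_km
    return vkm * nox_emission_factor(avg_speed_kmh)

# Two illustrative segments: a free-flowing motorway vs. a congested secondary road.
print(round(segment_emissions(3000, 1.5, 90), 1))
print(round(segment_emissions(800, 0.6, 15), 1))
```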

Relevance:

30.00%

Abstract:

The aim is to propose an automated, patient-specific algorithm for creating accurate and smooth meshes of the aortic anatomy, to be used for evaluating rupture risk factors of abdominal aortic aneurysms (AAA). Finite element (FE) analyses and simulations require meshes that are smooth and anatomically accurate, capturing both the artery wall and the intraluminal thrombus (ILT). The two main difficulties are the modeling of the arterial bifurcations and of the ILT, which has an arbitrary shape conforming to the aortic wall.

Relevance:

30.00%

Abstract:

Information integration is a very important topic. Reusing knowledge and having common representations have been (and still are) active research topics in the process systems community. However, only conventional (structural) models have been dealt with so far. In this paper the issue of integration is addressed for two different types of knowledge, functional and structural. Functional representation and analysis have proved very useful, but they are still developed and presented in complete isolation from the classic structural description of the process. This paper presents an architecture to integrate both representations.

Relevance:

30.00%

Abstract:

The aim of this thesis is the subjective and objective evaluation of angle-dependent absorption coefficients. Since the assumption of an absorption coefficient that is constant over the angle of incidence does not always hold, a model acknowledging angle-dependent reflection must be considered in order to obtain a more accurate prediction of the sound field. The study provides information about the behaviour of different materials in several rooms, depending on how the reflection of incident sound waves is modelled. Because of the difficulties involved in measuring them, and the resulting lack of data, angle-dependent absorption coefficients are often ignored in simulations, and there is still no general tendency to apply them to improve reflection models. Only a few methods exist for measuring angle-dependent absorption satisfactorily: current measurement techniques are very time-consuming, and some materials, conditions and angles cannot be reproduced and therefore cannot be measured. In the present study, however, the angles of incidence of the sound waves are known and stored in a database for each material, so that the absorption coefficient for a given angle can be returned whenever the user requires it.
An objective evaluation was run for an implementation of angle-dependent reflection factors in the image source and ray tracing simulation models, and the results were analysed after comparison with diffuse-field averaged data. The simulations were run once a number of materials, created by the author from data in the literature and manufacturers' catalogues, had been configured. The Komatsu and Mechel models served as references for porous materials (setting the airflow resistivity or the thickness) and for perforated panels (setting the hole radius and the distance between centres), respectively. These materials were placed on the wall opposite the one assumed to hold the sound source, while the remaining surfaces were modelled with the same material, varying its absorption and/or scattering coefficient. A series of rooms was also modelled in order to reproduce different scenarios from which to obtain results.
However, changes in the acoustic characteristics of a room do not always mean a change in the listener's perception. An additional subjective evaluation therefore allowed the simulation results to be compared with the responses of the individuals who took part in a listening test. The test was designed following a three-alternative forced-choice (3AFC) paradigm, with thirty-two different questions. In each trial the subjects were presented with an alternating sequence of three signals, two of which were identical; the stimuli were either pink-noise bursts or natural signals, in this case a fragment of a classical piece played on the piano. The question blocks were randomised before each session and the mix was different for every trial, so that subjects never repeated the same test and learning-effect bias was avoided; the blocks were shuffled while keeping track of the initial order so that the results could later be stored in their reordered form. The listening test was taken by twenty-three people, all with a background in acoustics, who received an instruction sheet before carrying out the test in a suitable environment. The test was intended to show the influence and perception of the two different ways of implementing surface reflection, with either diffuse or angle-dependent absorption properties.
The objective results of the simulations show the mean values obtained, in order to understand the behaviour of the different materials according to the reflection model used in the case study; the tables provided in the report give the reverberation time, clarity and early decay time, and the room characteristics obtained in this analysis depend strongly on the absorption coefficients of the materials covering the room surfaces. In the subjective results, the subjects' mean ability to distinguish the signals was significantly below the threshold marked by the inflection point of the psychometric function, although most individuals tended to detect some difference between the stimuli presented in the 3AFC test. In conclusion, the hypothesis that angle-dependent absorption coefficient values differ is confirmed, but the subjective responses show only slightly audible effects, and only when the material properties are exaggerated; if the acoustic parameters of the materials are not exaggerated, or the coefficient varies only within the small intervals handled in the simulation, the subjects perceive no variation. These first results on angle dependence lead to new considerations in room acoustics and in future projects. Future work should include simulations with other types of rooms, looking for scenarios with irregular geometries, the implementation of further materials to obtain more accurate results, the consideration of a scattering coefficient that also depends on the angle of incidence, and additional listening tests with different individuals, including people without a background in acoustic engineering.
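
The central implementation detail, returning a reflection factor as a function of the angle of incidence, can be sketched as a per-material lookup table with interpolation, as below. The material name and tabulated values are invented for illustration and are not taken from the thesis.

```python
import bisect

# Hypothetical per-material table: angle of incidence (degrees) -> absorption coefficient.
porous_absorber = {0: 0.55, 30: 0.60, 45: 0.68, 60: 0.78, 75: 0.85, 89: 0.90}

def absorption(material_table, angle_deg):
    """Linearly interpolate the angle-dependent absorption coefficient."""
    angles = sorted(material_table)
    if angle_deg <= angles[0]:
        return material_table[angles[0]]
    if angle_deg >= angles[-1]:
        return material_table[angles[-1]]
    i = bisect.bisect_left(angles, angle_deg)
    a0, a1 = angles[i - 1], angles[i]
    frac = (angle_deg - a0) / (a1 - a0)
    return material_table[a0] + frac * (material_table[a1] - material_table[a0])

def reflection_factor(material_table, angle_deg):
    """Energy reflection factor used when a ray or image source hits the surface."""
    return 1.0 - absorption(material_table, angle_deg)

print(round(reflection_factor(porous_absorber, 52.5), 3))
```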

Relevance:

30.00%

Abstract:

A semi-automatic segmentation algorithm for abdominal aortic aneurysms (AAA), based on Active Shape Models (ASM) and texture models, is presented in this work. The texture information is provided by a set of four 3D magnetic resonance (MR) images, composed of axial slices of the abdomen, in which the lumen, the wall and the intraluminal thrombus (ILT) are visible. Owing to the reduced number of images in the MRI training set, an ASM and a custom texture model based on border intensity statistics are constructed. For the same reason, the shape is characterised from a set of 35 computed tomography angiography (CTA) images, so that the shape variations are better represented. For the evaluation, leave-one-out experiments have been carried out over the four-image MRI set.
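
With only four MR images, leave-one-out evaluation means training on three images and testing on the held-out one. The sketch below shows that protocol with a Dice overlap score; the training and segmentation functions are placeholders, not the actual ASM/texture-model code.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def leave_one_out(images, ground_truths, train_fn, segment_fn):
    """Train on all-but-one image, segment the held-out one, collect Dice scores."""
    scores = []
    for i in range(len(images)):
        train_imgs = images[:i] + images[i + 1:]
        train_gts = ground_truths[:i] + ground_truths[i + 1:]
        model = train_fn(train_imgs, train_gts)      # e.g. fit ASM + texture model
        prediction = segment_fn(model, images[i])    # segment the held-out image
        scores.append(dice(prediction, ground_truths[i]))
    return scores

# Dummy data and dummy model functions, purely to show the protocol.
rng = np.random.default_rng(0)
imgs = [rng.random((32, 32)) for _ in range(4)]
gts = [rng.random((32, 32)) > 0.5 for _ in range(4)]
scores = leave_one_out(imgs, gts,
                       train_fn=lambda x, y: None,
                       segment_fn=lambda m, img: img > 0.5)
print([round(s, 3) for s in scores])
```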

Relevance:

30.00%

Abstract:

In this article, a tool for simulating the channel impulse response for indoor visible light communications using 3D computer-aided design (CAD) models is presented. The simulation tool is based on a previous Monte Carlo ray-tracing algorithm for indoor infrared channel estimation, but adds wavelength response evaluation. The 3D scene, or simulation environment, can be defined with any CAD software in which the user specifies, in addition to the setting geometry, the reflection characteristics of the surface materials as well as the structures of the emitters and receivers involved in the simulation. In an effort to improve computational efficiency, two optimizations are also proposed. The first consists of dividing the setting into cubic regions of equal size, which offers a calculation improvement of approximately 50% compared to not dividing the 3D scene into sub-regions. The second involves parallelizing the simulation algorithm, which provides a computational speed-up proportional to the number of processors used.
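
The first optimization amounts to a uniform spatial subdivision: surface patches are grouped by the cubic region that contains them, so each ray sample only tests the patches in its own cell. The sketch below illustrates the idea with invented coordinates; it is not the tool's actual data structure.

```python
from collections import defaultdict

def cell_index(point, cube_size):
    """Map a 3D point to the index of the cubic sub-region that contains it."""
    return tuple(int(coord // cube_size) for coord in point)

def build_grid(patch_centres, cube_size):
    """Group surface patches by cubic region so a ray only tests nearby patches."""
    grid = defaultdict(list)
    for patch_id, centre in enumerate(patch_centres):
        grid[cell_index(centre, cube_size)].append(patch_id)
    return grid

# Toy scene: a few reflecting patches in a 5 m x 5 m x 3 m room, 1 m cubes.
patches = [(0.5, 0.5, 0.0), (0.7, 0.4, 0.0), (4.9, 2.5, 1.5), (2.0, 0.0, 2.9)]
grid = build_grid(patches, cube_size=1.0)

# A ray sample lands near the first two patches; only that cell is inspected.
hit_point = (0.6, 0.5, 0.0)
candidates = grid[cell_index(hit_point, 1.0)]
print("patches tested for this ray sample:", candidates)  # [0, 1]

# The second optimization (parallelism) would, for instance, split the Monte Carlo
# rays across processes and sum the per-process impulse-response contributions.
```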

Relevance:

30.00%

Abstract:

Enabling Subject Matter Experts (SMEs) to formulate knowledge without the intervention of Knowledge Engineers (KEs) requires providing SMEs with methods and tools that abstract away the underlying knowledge representation and allow them to focus on modeling activities. Bridging the gap between SME-authored models and their representation is challenging, especially for complex knowledge types like processes, where aspects such as frame management, data, and control flow need to be addressed. In this paper, we describe how SME-authored process models can be given an operational semantics and grounded in a knowledge representation language like F-logic in order to support process-related reasoning. The main results of this work include a formalism for process representation and a mechanism for automatically translating process diagrams into executable code that follows this formalism. Of all the process models authored by SMEs during the evaluation, 82% were well-formed, and all of the well-formed models executed correctly. Additionally, the two optimizations applied to the code generation mechanism produced performance improvements at reasoning time of 25% and 30% with respect to the base case, respectively.
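
The translation from SME-authored diagrams targets F-logic; purely as an illustration of the general idea of executing a declarative process description (steps, a shared data context and control flow), the sketch below interprets a tiny hand-written process model. All step names and the interpreter itself are hypothetical and unrelated to the paper's formalism.

```python
# Illustrative only: a tiny declarative process model and a naive interpreter,
# standing in for the diagram-to-executable-code translation described above.

process_model = {
    "start": "read_order",
    "steps": {
        "read_order":   {"action": lambda ctx: ctx.update(amount=120) or ctx,
                         "next": "check_amount"},
        "check_amount": {"action": lambda ctx: ctx,
                         "next": lambda ctx: "approve" if ctx["amount"] < 1000 else "review"},
        "approve":      {"action": lambda ctx: ctx.update(status="approved") or ctx,
                         "next": None},
        "review":       {"action": lambda ctx: ctx.update(status="manual review") or ctx,
                         "next": None},
    },
}

def run(model, context):
    """Walk the control flow, applying each step's action to the shared data context."""
    step_name = model["start"]
    while step_name is not None:
        step = model["steps"][step_name]
        context = step["action"](context)
        nxt = step["next"]
        step_name = nxt(context) if callable(nxt) else nxt
    return context

print(run(process_model, {}))  # {'amount': 120, 'status': 'approved'}
```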

Relevance:

30.00%

Abstract:

Purely data-driven approaches for machine learning present difficulties when data are scarce relative to the complexity of the model or when the model is forced to extrapolate. On the other hand, purely mechanistic approaches need to identify and specify all the interactions in the problem at hand (which may not be feasible) and still leave the issue of how to parameterize the system. In this paper, we present a hybrid approach using Gaussian processes and differential equations to combine data-driven modeling with a physical model of the system. We show how different, physically inspired, kernel functions can be developed through sensible, simple, mechanistic assumptions about the underlying system. The versatility of our approach is illustrated with three case studies from motion capture, computational biology, and geostatistics.
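
The paper's kernels are derived from differential equations; the sketch below only shows where such a physically inspired covariance would plug into standard Gaussian process regression, using an ordinary squared-exponential kernel as a stand-in rather than the paper's actual latent-force kernels. All data are synthetic.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance; a physically derived kernel would replace this."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """Standard GP regression posterior mean with the chosen kernel."""
    k = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf_kernel(x_test, x_train)
    alpha = np.linalg.solve(k, y_train)
    return k_star @ alpha

x = np.linspace(0, 2 * np.pi, 15)
y = np.sin(x) + 0.05 * np.random.default_rng(0).normal(size=x.size)
x_new = np.array([1.0, 3.0, 5.0])
print(np.round(gp_posterior_mean(x, y, x_new), 3))  # close to sin(x_new)
```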

Relevance:

30.00%

Abstract:

The basic equations for modelling two-dimensional hydrodynamics and transport in estuaries and coastal regions have been developed. By using the finite element method, it is possible to transform the model into a discretized counterpart. The model has been applied in order to study the dispersion of an effluent within the Bay of Santander. The results obtained by means of a computer program are discussed.
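
The abstract does not reproduce the governing equations. For orientation, the depth-averaged shallow-water and advection-diffusion equations commonly used for this kind of 2D estuary model are shown below in their standard textbook form; this is an assumption and not necessarily the exact formulation of the paper.

```latex
\begin{aligned}
&\frac{\partial h}{\partial t}
  + \frac{\partial (hu)}{\partial x}
  + \frac{\partial (hv)}{\partial y} = 0,\\[4pt]
&\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y}
  = -g\frac{\partial \eta}{\partial x} - \frac{\tau_{bx}}{\rho h},\qquad
 \frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y}
  = -g\frac{\partial \eta}{\partial y} - \frac{\tau_{by}}{\rho h},\\[4pt]
&\frac{\partial (hc)}{\partial t}
  + \frac{\partial (huc)}{\partial x}
  + \frac{\partial (hvc)}{\partial y}
  = \frac{\partial}{\partial x}\!\left(hD\frac{\partial c}{\partial x}\right)
  + \frac{\partial}{\partial y}\!\left(hD\frac{\partial c}{\partial y}\right) + S,
\end{aligned}
```

where h is the water depth, (u, v) the depth-averaged velocities, η the free-surface elevation, τ_b the bed shear stress, c the transported concentration (here the effluent), D the dispersion coefficient and S a source term; the finite element method then discretizes these equations over a mesh of the bay.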

Relevance:

30.00%

Abstract:

A novel GPU-based nonparametric moving object detection strategy for computer vision tools requiring real-time processing is proposed. An alternative, efficient Bayesian classifier that combines nonparametric background and foreground models increases correct detections while avoiding false detections. Additionally, an efficient region-of-interest analysis significantly reduces the computational cost of the detections.
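
The combination of nonparametric background and foreground models in a Bayesian classifier can be sketched per pixel as follows; the kernel bandwidth, prior and sample histories are invented, and the real system evaluates this massively in parallel on the GPU.

```python
import numpy as np

def kde_likelihood(value, samples, bandwidth=10.0):
    """Nonparametric (Gaussian kernel) likelihood of a pixel value given stored samples."""
    samples = np.asarray(samples, dtype=float)
    k = np.exp(-0.5 * ((value - samples) / bandwidth) ** 2)
    return k.mean() / (bandwidth * np.sqrt(2.0 * np.pi))

def is_foreground(value, bg_samples, fg_samples, prior_fg=0.1):
    """Bayesian decision: foreground if its (unnormalised) posterior beats the background's."""
    p_fg = prior_fg * kde_likelihood(value, fg_samples)
    p_bg = (1.0 - prior_fg) * kde_likelihood(value, bg_samples)
    return p_fg > p_bg

# Toy models for one pixel: background grey values near 100, recent foreground near 200.
background_history = [98, 101, 99, 102, 100, 97]
foreground_history = [195, 205, 200]
print(is_foreground(103, background_history, foreground_history))  # False
print(is_foreground(198, background_history, foreground_history))  # True
```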