887 results for LINK-BASED AND MULTIDIMENSIONAL QUERY LANGUAGE (LMDQL)


Relevance:

100.00%

Publisher:

Abstract:

Throughout the 1990s and up to 2005, the adoption of an open-door policy substantially increased the volume of Myanmar's external trade. Imports grew more rapidly than exports in the 1990s owing to the release of pent-up consumer demand during the transition to a market economy. Accordingly, trade deficits expanded. Confronted by a shortage of foreign currency, the government resorted, from the late 1990s onward, to rigid controls over the private sector's trade activities. Despite this tightening of policy, Myanmar's external sector has improved since 2000, largely because of the emergence of new export commodities, namely garments and natural gas. Foreign direct investment in Myanmar contributed significantly to the exploration and development of new gas fields. As trade volume grew, Myanmar strengthened its trade relations with neighboring countries such as China, Thailand and India. Although the development of external trade and foreign investment inflows exerted a considerable impact on the Myanmar economy, the external sector has not yet begun to function as a vigorous engine of broad-based and sustainable development.

Relevance:

100.00%

Publisher:

Abstract:

Since the introduction of the Doi Moi ('renovation') economic reform in 1986, Vietnam has experienced a transformation of its economic management, from a centrally planned economy to a market-oriented economy. High economic growth, created by the liberalization of activities in all sectors of the economy, has changed the economic structure of the country, and the once agriculture-based and poverty-stricken land now generates a mid-level income and possesses many industrial bases. Economic growth has also changed the landscape of the country. Business complexes have been built in metropolises like Ho Chi Minh City and Hanoi, and rice fields have been converted into industrial zones. As the number of enterprises increased, areas began to emerge where many enterprises agglomerated. Some of these 'clusters' were groups of cottage industry households, while many others were large-scale industrial clusters. As Porter [1998] argues, industrial clusters are the source of a nation's 'competitive advantage'. McCarty et al. [2005] indicate that in some key industries in Vietnam, clusters of enterprises have been created, although the degree of agglomeration differs from one industry to another. Using industry census data from 2001, they include dot density maps for the 12 leading manufacturing industries in Vietnam. They show that most of the industries analyzed are clustered either in Hanoi or in Ho Chi Minh City (or both). Among these 12 industries, the garments industry has the greatest tendency to cluster, followed by the textile, rice, seafood, and paper industries. The fact that industrial clusters have begun to form in some areas could be a positive sign for Vietnam's future economic development. What is lacking in McCarty et al. [2005], however, is the identification of the participants in the industrial clusters. Some argue for the importance of small and medium enterprises (SMEs) in Vietnam's economic development (e.g. Nguyen Tri Thanh [2007], Tran Tien Cuong et al. [2008]), while others stress the impact of foreign direct investment (FDI) (for example, Tuan Bui [2009]). Adding information about the participants in the above cluster study (and in other studies of the spatial patterns of enterprise location) may broaden the scope for analysis of economic development in Vietnam. This study aims to reveal the characteristics of industrial clusters in terms of their participants and locations. The findings of the study may provide basic information for evaluating the effects of agglomeration, and the robustness of those effects, in the industrial clusters in Vietnam. Section 1 describes the characteristics of economic entities in Vietnam such as ownership, size of enterprise, and location. Section 2 examines qualitative aspects of the industrial clusters identified in McCarty et al. [2005], using information on the size and ownership of the clusters. Three key industries (garments, consumer electronics, and motor vehicles) are selected for the study. Section 3 identifies another type of cluster commonly seen in Vietnam, composed of local industries and called 'craft villages'. Many such villages have developed since the early 1990s. The study points out that some of these villages have become industrialized (or are becoming industrialized) by introducing modern modes of production and by employing thousands of laborers.

Relevance:

100.00%

Publisher:

Abstract:

Next Generation Networks (NGN) provide telecommunications operators with the possibility to share their resources and infrastructure, to facilitate interoperability with other networks, and to simplify and unify the management, operation and maintenance of service offerings, thus enabling the fast and cost-effective creation of new personal, broadband, ubiquitous services. Unfortunately, service creation over NGN is far from matching the success of service creation on the Web, especially when it comes to Web 2.0. This paper presents a novel approach to service creation and delivery, with a platform that gives non-technically skilled users the possibility to create, manage and share their own convergent (NGN-based and Web-based) services. To this end, the business approach to user-generated services is analyzed and the technological bases supporting the proposal are explained.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the 2005 MIRACLE team's approach to the Ad-Hoc Information Retrieval tasks. The goal for this year's experiments was twofold: to continue testing the effect of combination approaches on information retrieval tasks, and to improve our basic processing and indexing tools, adapting them to new languages with unusual encoding schemes. The starting point was a set of basic components: stemming, transforming, filtering, proper noun extraction, paragraph extraction, and pseudo-relevance feedback. Some of these basic components were used in different combinations and orders of application for document indexing and for query processing. Second-order combinations were also tested, by averaging or by selective combination of the documents retrieved by different approaches for a particular query. In the multilingual track, we concentrated our work on the process of merging the results of the monolingual runs to get the overall multilingual result, relying on available translations. In both cross-lingual tracks, we used the available translation resources, and in some cases a combination approach.
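As a rough illustration of the second-order combination step described above, the following Python sketch merges the ranked lists produced by two different retrieval approaches by averaging normalized scores. It is an assumption on my part, not the MIRACLE code: the min-max normalization and the equal weights are illustrative choices the abstract does not specify.

# Illustrative sketch: merging ranked lists from several retrieval approaches
# by averaging min-max normalized scores (documents missing from a run score 0).
from collections import defaultdict

def normalize(run):
    """Min-max normalize a {doc_id: score} run to [0, 1]."""
    lo, hi = min(run.values()), max(run.values())
    span = (hi - lo) or 1.0
    return {doc: (score - lo) / span for doc, score in run.items()}

def average_fusion(runs):
    """Average the normalized scores over all runs and rank the documents."""
    fused = defaultdict(float)
    for run in runs:
        for doc, score in normalize(run).items():
            fused[doc] += score / len(runs)
    return sorted(fused.items(), key=lambda item: item[1], reverse=True)

run_a = {"d1": 12.0, "d2": 7.5, "d3": 3.1}   # e.g. run over a stemmed index
run_b = {"d2": 0.9, "d4": 0.7, "d1": 0.2}    # e.g. run over an n-gram index
print(average_fusion([run_a, run_b])[:3])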

Relevance:

100.00%

Publisher:

Abstract:

The main goal of the bilingual and monolingual participation of the MIRACLE team in CLEF 2004 was to test the effect of combination approaches on information retrieval. The starting point was a set of basic components: stemming, transformation, filtering, generation of n-grams, weighting and relevance feedback. Some of these basic components were used in different combinations and orders of application for document indexing and for query processing. A second-order combination was also tested, mainly by averaging or by selective combination of the documents retrieved by different approaches for a particular query.
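For instance, the "generation of n-grams" component could look like the hedged sketch below. The padding character and the choice of n = 4 are assumptions for illustration, not the values the MIRACLE team actually used.

# Illustrative sketch of character n-gram generation for indexing terms.
def char_ngrams(term, n=4):
    """Return the set of overlapping character n-grams of a term."""
    padded = f"_{term.lower()}_"            # pad to mark word boundaries
    if len(padded) <= n:
        return {padded}
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

print(sorted(char_ngrams("retrieval")))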

Relevance:

100.00%

Publisher:

Abstract:

We propose a level set based variational approach that incorporates shape priors into edge-based and region-based models. The evolution of the active contour depends on both local and global information. It has been implemented using an efficient narrow band technique. For each boundary pixel we calculate its dynamics according to its gray level, its neighborhood, and geometric properties established by training shapes. We also propose a criterion for shape alignment based on an affine transformation, using an image normalization procedure. Finally, we illustrate the benefits of our approach on liver segmentation from CT images.
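A minimal sketch of one narrow-band level set update step is given below, assuming a signed distance function and a per-pixel speed F that would combine the edge, region and shape-prior terms; the actual per-pixel dynamics used in the paper are not reproduced here.

# Minimal sketch of a narrow-band level set update (F is assumed given).
import numpy as np

def narrow_band_step(phi, F, dt=0.5, band=3.0):
    """Advance phi by dt along speed F, but only inside the narrow band |phi| < band."""
    gy, gx = np.gradient(phi)
    grad_norm = np.sqrt(gx**2 + gy**2) + 1e-12
    update = dt * F * grad_norm                  # level set equation: phi_t + F*|grad phi| = 0
    mask = np.abs(phi) < band                    # restrict the work to the narrow band
    phi_new = phi.copy()
    phi_new[mask] -= update[mask]
    return phi_new

phi = np.fromfunction(lambda i, j: np.sqrt((i - 32)**2 + (j - 32)**2) - 10, (64, 64))
F = np.ones_like(phi)                            # toy speed: uniform outward expansion
phi = narrow_band_step(phi, F)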

Relevance:

100.00%

Publisher:

Abstract:

Ciao is a public domain, next-generation, multi-paradigm programming environment with a unique set of features: Ciao offers a complete Prolog system, supporting ISO-Prolog, but its novel modular design allows both restricting and extending the language. As a result, it allows working with fully declarative subsets of Prolog and also extending these subsets (or ISO-Prolog) both syntactically and semantically. Most importantly, these restrictions and extensions can be activated separately on each program module, so that several extensions can coexist in the same application for different modules. Ciao also supports (through such extensions) programming with functions, higher-order features (with predicate abstractions), constraints, and objects, as well as feature terms (records), persistence, several control rules (breadth-first search, iterative deepening, ...), concurrency (threads/engines), a good base for distributed execution (agents), and parallel execution. Libraries also support WWW programming, sockets, external interfaces (C, Java, TclTk, relational databases, etc.), etc. Ciao offers support for programming in the large, with a robust module/object system, module-based separate/incremental compilation (automatic, with no need for makefiles), an assertion language for declaring (optional) program properties (including types and modes, but also determinacy, non-failure, cost, etc.), and automatic static inference and static/dynamic checking of such assertions. Ciao also offers support for programming in the small, producing small executables (including only those builtins used by the program), and support for writing scripts in Prolog. The Ciao programming environment includes a classical top level and a rich Emacs interface with an embeddable source-level debugger and a number of execution visualization tools. The Ciao compiler (which can be run outside the top-level shell) generates several forms of architecture-independent and stand-alone executables, whose speed, efficiency and executable size are very competitive with those of other commercial and academic Prolog/CLP systems. Library modules can be compiled into compact bytecode or C source files, and linked statically, dynamically, or autoloaded. The novel modular design of Ciao enables, in addition to modular program development, effective global program analysis and static debugging and optimization via source-to-source program transformation. These tasks are performed by the Ciao preprocessor (ciaopp, distributed separately). The Ciao programming environment also includes lpdoc, an automatic documentation generator for LP/CLP programs. It processes Prolog files adorned with (Ciao) assertions and machine-readable comments and generates manuals in many formats, including PostScript, PDF, texinfo, info, HTML and man pages, as well as on-line help, ASCII README files and entries for indices of manuals (info, WWW, ...), and it maintains WWW distribution sites.

Relevance:

100.00%

Publisher:

Abstract:

Abstract interpreters rely on the existence of a fixpoint algorithm that calculates a least upper bound approximation of the semantics of the program. Usually, that algorithm is described in terms of the particular language under study and therefore it is not directly applicable to programs written in a different source language. In this paper we introduce a generic, block-based, and uniform representation of the program control flow graph and a language-independent fixpoint algorithm that can be applied to a variety of languages and, in particular, Java. Two major characteristics of our approach are accuracy (obtained through a top-down, context-sensitive approach) and reasonable efficiency (achieved by means of memoization and dependency tracking techniques). We have also implemented the proposed framework and show some initial experimental results for standard benchmarks, which further support the feasibility of the solution adopted.
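A hedged sketch of such a generic, block-based fixpoint computation is shown below. The names, the worklist strategy and the toy lattice are assumptions for illustration; they are not the paper's algorithm, which is top-down and context-sensitive rather than this simple forward propagation.

# Generic worklist fixpoint over a block-based control flow graph:
# abstract states are joined at block entries, exit states are memoized,
# and a block is re-analyzed only when a predecessor's exit state changes.
def fixpoint(blocks, successors, transfer, join, bottom, entry):
    """blocks: iterable of block ids; successors(b): list of block ids;
    transfer(b, state): abstract exit state of b; join(a, b): least upper bound."""
    in_state = {b: bottom for b in blocks}
    out_state = {b: bottom for b in blocks}        # memoized exit states
    worklist = [entry]
    while worklist:
        b = worklist.pop()
        new_out = transfer(b, in_state[b])
        if new_out == out_state[b]:
            continue                               # dependency tracking: nothing changed
        out_state[b] = new_out
        for s in successors(b):
            joined = join(in_state[s], new_out)
            if joined != in_state[s]:
                in_state[s] = joined
                worklist.append(s)
    return in_state, out_state

# Toy use: collect the set of blocks reaching each block in a 3-block chain.
blocks = ["b0", "b1", "b2"]
succ = {"b0": ["b1"], "b1": ["b2"], "b2": []}.__getitem__
transfer = lambda b, s: s | {b}
join = lambda a, b: a | b
print(fixpoint(blocks, succ, transfer, join, frozenset(), "b0"))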

Relevance:

100.00%

Publisher:

Abstract:

We present and evaluate a compiler from Prolog (and extensions) to JavaScript which makes it possible to use (constraint) logic programming to develop the client side of web applications while being compliant with current industry standards. Targeting JavaScript makes (C)LP programs executable in virtually every modern computing device with no additional software requirements from the point of view of the user. In turn, the use of a very high-level language facilitates the development of high-quality, complex software. The compiler is a back end of the Ciao system and supports most of its features, including its module system and its rich language extension mechanism based on packages. We present an overview of the compilation process and a detailed description of the run-time system, including the support for modular compilation into separate JavaScript code. We demonstrate the maturity of the compiler by testing it with complex code such as a CLP(FD) library written in Prolog with attributed variables. Finally, we validate our proposal by measuring the performance of some LP and CLP(FD) benchmarks running on top of major JavaScript engines.

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within the industry to model fluid flow phenomena. Several fluid flow model equations (potential, Euler, Navier-Stokes, etc.) have been employed over the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy are strongly dependent on the fluid flow model equation and on the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or potential flow models, can be solved very quickly, they usually lack the accuracy needed to simulate real (transonic, viscous) flows. On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be found for engineering applications. A discretisation technique widely used within the industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh, concentrating its elements in those regions where they are strictly necessary. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, as well as prior knowledge of the solution, and in most cases it is impossible to simply guess which regions of the computational domain affect the accuracy the most. Thus, it is desirable to have an automated remeshing tool that adapts the mesh to the flow solution, an approach which is more flexible with unstructured meshes than with their structured counterpart. However, the adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing sensors based on the numerical error, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations (the flow solution and its adjoint) and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretising a partial differential equation and consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of mesh quality, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so a careful estimation actually gives information on how well a given equation is being solved, which may be useful in the context of τ-extrapolation (where the local error acts as a source term in a corrected system) or zonal modelling. The work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as of numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec.
1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches for predicting them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as to a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume.
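As a hedged illustration of the kind of truncation error estimation discussed here, the sketch below uses a generic coarse-grid construction for a 1D model problem; it is a textbook-style example, not the thesis' finite volume implementation, and the grid sizes are arbitrary.

# Coarse-grid truncation error (tau) estimation for u'' = f: apply the coarse
# operator to the restricted (here, injected) fine-grid solution; if the fine
# solution is converged, the resulting residual approximates the coarse-grid
# truncation error.
import numpy as np

def laplace_residual(u, f, h):
    """Residual of the centred finite-difference discretisation of u'' = f."""
    r = np.zeros_like(u)
    r[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - f[1:-1]
    return r

n_fine = 129
x_f = np.linspace(0.0, 1.0, n_fine)
u_f = np.sin(np.pi * x_f)                    # exact solution used as a converged fine solution
f = -np.pi**2 * np.sin(np.pi * x_f)

x_c, u_c, f_c = x_f[::2], u_f[::2], f[::2]   # injection onto the coarse grid
tau = laplace_residual(u_c, f_c, x_c[1] - x_c[0])
print("max |tau| on the coarse grid:", np.abs(tau).max())   # decays as O(h_c^2)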

Relevance:

100.00%

Publisher:

Abstract:

Self-consciousness implies not only self or group recognition, but also real knowledge of one's own identity. Self-consciousness is only possible if an individual is intelligent enough to formulate an abstract self-representation. Moreover, it necessarily entails the capability of referencing and using this self-representation in connection with other cognitive features, such as inference and the anticipation of the consequences of both one's own and other individuals' acts. In this paper, a cognitive architecture for self-consciousness is proposed. This cognitive architecture includes several modules: abstraction, self-representation, representation of other individuals, decision and action modules. It includes a learning process for the self-representation, by direct learning (based on self-experience) and observational learning (based on the observation of other individuals). For the model implementation, a new approach is taken using Modular Artificial Neural Networks (MANN). For model testing, a virtual environment has been implemented. This virtual environment can be described as a holonic system or holarchy, meaning that it is composed of autonomous entities that behave both as a whole and as part of a greater whole. The system is composed of a certain number of interacting holons. These holons are equipped with cognitive features, such as sensory perception, and a simplified model of personality and self-representation. We explain the holons' cognitive architecture that enables dynamic self-representation. We analyse the effect of holon interaction, focusing on the evolution of the holon's abstract self-representation. Finally, the results are explained and analysed and conclusions are drawn.
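A purely illustrative sketch of a modular layout in the spirit of the MANN approach is given below. The module names, sizes and random weights are assumptions, and no training is shown; it only illustrates how independent modules can feed a decision module.

# Independent small networks (modules) whose outputs are combined by a decision module.
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    """One random dense layer used as a stand-in for a trained module."""
    W, b = rng.normal(size=(n_out, n_in)), rng.normal(size=n_out)
    return lambda x: np.tanh(W @ x + b)

perception  = dense(8, 4)    # sensory perception module
self_repr   = dense(4, 3)    # abstract self-representation module
others_repr = dense(4, 3)    # representation of other individuals
decision    = dense(6, 2)    # decision module fed by both representations

observation = rng.normal(size=8)
features = perception(observation)
action = decision(np.concatenate([self_repr(features), others_repr(features)]))
print(action)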

Relevance:

100.00%

Publisher:

Abstract:

This paper describes a new category of CAD applications devoted to the definition and parameterization of hull forms, called programmed design. Programmed design relies on two prerequisites. The first is a product model with a variety of types large enough to cope with the modeling of any type of ship. The second is a design language dedicated to creating the product model. The main purpose of the language is to publish the modeling algorithms of the application in the designer's knowledge domain, so that the designer can create parametric model scripts. Programmed design is an evolution of parametric design, but it is not just parametric design: it is a tool for creating parametric design tools. It provides a methodology to extract design knowledge by abstracting a design experience so that it can be stored and reused. Programmed design is concerned with the organizational and architectural aspects of CAD applications, not with the development of modeling algorithms; it is built on top of, and relies on, existing algorithms provided by a comprehensive product model. Programmed design can be useful for developing new applications, for supporting the evolution of existing applications, or even for integrating different types of application into a single one. A three-level software architecture is proposed to make the implementation of programmed design easier. These levels are the conceptual level, based on the design language; the mathematical level, based on the geometric formulation of the product model; and the visual level, based on the polyhedral representation of the model as required by the graphics card. Finally, some scenarios for the use of programmed design are discussed, for instance the development of specialized parametric hull form generators for a ship type or a family of ships, or the creation of palettes of hull form components to be used as parametric design patterns. Two new reverse engineering processes that can considerably improve this kind of application have also been identified: the creation of the mathematical level from the visual level, and the creation of the conceptual level from the mathematical level.
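A hypothetical sketch of what a "parametric model script" might look like is given below. The API, the section representation and the taper rule are invented for illustration only; the paper's actual design language and product model are not reproduced here.

# Toy parametric generator: changing the parameters regenerates the model,
# which is the point of storing design experience as a reusable script.
from dataclasses import dataclass

@dataclass
class HullSection:
    station: float      # longitudinal position (m)
    beam: float         # section breadth (m)
    draft: float        # section depth (m)

def make_sections(length, beam, draft, n=5):
    """Sections whose breadth tapers towards the ends (fuller midship, finer ends)."""
    sections = []
    for i in range(n):
        x = length * i / (n - 1)
        taper = 1.0 - abs(2.0 * x / length - 1.0) ** 2
        sections.append(HullSection(station=x, beam=beam * taper, draft=draft))
    return sections

for s in make_sections(length=90.0, beam=16.0, draft=7.5):
    print(f"x = {s.station:5.1f} m, beam = {s.beam:5.2f} m, draft = {s.draft:4.1f} m")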

Relevance:

100.00%

Publisher:

Abstract:

The Common European Framework of Reference for Languages (CEFR) "describes in a comprehensive way what language learners have to learn to do in order to use a language for communication and what knowledge and skills they have to develop so as to be able to act effectively" (Council of Europe, 2001: 1). This paper reports on the findings of two studies whose purpose was to assess written production competence descriptors intended for inclusion in the Academic and Professional English Language Portfolio (KELP) for students of engineering and architecture. The main objective of these studies was to establish whether the language competence descriptors were a valid tool for their language programmes in terms of clarity, relevance and reliability, as perceived by the students and by fellow English for Academic Purposes (EAP) / English for Science and Technology (EST) instructors. The studies shed light on how to improve unsatisfactory descriptors. Results show that the final descriptor lists were on the whole well calibrated and fairly well written: the great majority of the descriptors were considered valid by both the teachers and the students involved.

Relevance:

100.00%

Publisher:

Abstract:

A new design for a photovoltaic concentrator, the most recent advance based on the Köhler concept, is presented. The system is mirror-based, with a geometry that guarantees a maximum sunlight collection area (without shadows, like those caused by secondary stages or by receivers and heat sinks in other mirror-based systems). Designed for a concentration of 1000x, this off-axis system combines both good acceptance angle and good irradiance uniformity on the solar cell. The advanced performance features (a concentration-acceptance product, CAP, of about 0.73, and affordable peak and average irradiances) are achieved through the combination of four reflective folds with four refractive surfaces, all of them free-form, performing Köhler integration. In Köhler devices, the irradiance uniformity is not achieved through additional optical stages (TIR prisms), so no complex or expensive-to-manufacture elements are required. The rim angle and geometry are such that the secondary stage and receivers are hidden below the primary mirrors, so maximum collection is assured. The entire system was designed to allow loose assembly/alignment tolerances (through its high acceptance angle) and to be manufactured using already well-developed methods for mass production, with high potential for low cost. The optical surfaces for Köhler integration, although with a quite different optical behavior, have approximately the same dimensions and can be manufactured with the same techniques as the more traditional secondary optical elements used for concentration (typically plastic injection molding or glass molding).
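The quoted figures can be checked with the standard definition of the concentration-acceptance product, CAP = sqrt(Cg) * sin(alpha). The short calculation below is generic and not part of the paper; it simply recovers the acceptance half-angle implied by CAP = 0.73 at 1000x.

# Acceptance half-angle implied by a given CAP and geometric concentration.
import math

def acceptance_angle_deg(cap, geometric_concentration):
    """Invert CAP = sqrt(Cg) * sin(alpha) for the acceptance half-angle alpha."""
    return math.degrees(math.asin(cap / math.sqrt(geometric_concentration)))

print(f"{acceptance_angle_deg(0.73, 1000):.2f} deg")   # about 1.32 deg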

Relevance:

100.00%

Publisher:

Abstract:

This project covers the study of different mobile communications simulators, which analyze the behavior of the UMTS (Universal Mobile Telecommunications System, 3G) and LTE (Long Term Evolution, 3.9G) technologies, focusing mainly on the LTE simulators, since this is the technology currently being deployed. Therefore, before analyzing the characteristics of the most important radio interface of this generation, 3.9G, a general overview is given of how mobile communications have evolved over time; the characteristics of the current mobile technology, 3.9G, are then analyzed, before focusing on a pair of simulators that demonstrate these characteristics through graphical results. Today, the use of such simulators is essential, because mobile communications advance at a very fast pace and it is therefore necessary to know the performance that the different mobile technologies in use can deliver. The simulators used in this project make it possible to analyze the behavior of several scenarios, as there are different types of simulators, at both link level and system level. A number of simulators for the third generation (UMTS) are mentioned, but the simulators studied and analyzed in most depth in this final degree project are the "Link-Level" and "System-Level" simulators developed by the Institute of Communications and Radio-Frequency Engineering at the University of Vienna. These simulators support different kinds of simulations: analyzing the behavior of the link between a base station and a single user, in the case of the link-level simulators, or analyzing the performance of an entire network, in the case of the system-level simulators. Based on the results obtained from both simulators, a series of questions, both theoretical and practical, is posed, following the lab exercise developed by Pedro García del Pino, professor at the Universidad Politécnica de Madrid (UPM), to check that the simulators analyzed have been understood. Finally, the conclusions drawn from this project are presented, together with future lines of work.
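As a generic illustration of the kind of result a link-level simulation produces, the sketch below estimates the average spectral efficiency of a single base-station/user link versus SNR. It is not the Vienna simulators' API; the Rayleigh fading model and the Shannon cap standing in for a finite modulation and coding set are assumptions for the sketch.

# Monte Carlo average of log2(1 + |h|^2 * SNR), capped like a finite MCS set.
import numpy as np

rng = np.random.default_rng(1)

def avg_spectral_efficiency(snr_db, n_drops=10_000, cap=6.0):
    """Average link spectral efficiency over Rayleigh fading at a given SNR."""
    snr = 10.0 ** (snr_db / 10.0)
    h2 = rng.exponential(scale=1.0, size=n_drops)        # Rayleigh fading power gain
    return np.minimum(np.log2(1.0 + h2 * snr), cap).mean()

for snr_db in (0, 10, 20):
    print(f"SNR {snr_db:2d} dB -> {avg_spectral_efficiency(snr_db):.2f} bit/s/Hz")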