959 results for Mixed capacitated arc routing problem
Abstract:
Low quality of wireless links leads to perpetual transmission failures in lossy wireless environments. To mitigate this problem, opportunistic routing (OR) has been proposed to improve the throughput of wireless multihop ad-hoc networks by taking advantage of the broadcast nature of wireless channels. However, OR cannot be directly applied to wireless sensor networks (WSNs) due to some intrinsic design features of WSNs. In this paper, we present a new OR solution for WSNs with suitable adaptations to their characteristics. Our protocol, called SCAD (Sensor Context-aware Adaptive Duty-cycled beaconless opportunistic routing), is a cross-layer routing approach that selects packet forwarders based on multiple kinds of sensor context information. To reach a balance between performance and energy efficiency, SCAD adapts the duty cycles of sensors according to real-time traffic loads and energy drain rates. We compare SCAD against other protocols through extensive simulations. Evaluation results show that SCAD outperforms them in highly dynamic scenarios.
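As a rough illustration of the adaptation idea described above, the Python sketch below raises a node's duty cycle under heavy traffic and lowers it when the battery drains quickly. The function name, thresholds and step size are hypothetical, not taken from SCAD.

```python
# Illustrative sketch (not the SCAD implementation): adapting a sensor's
# duty cycle from real-time traffic load and energy drain rate.
# All names, thresholds and step sizes here are hypothetical.

def adapt_duty_cycle(duty_cycle, traffic_load, drain_rate,
                     min_dc=0.01, max_dc=0.5, step=0.05):
    """Raise the duty cycle under heavy traffic, lower it when the
    battery drains quickly; clamp to [min_dc, max_dc]."""
    if traffic_load > 0.7:        # busy node: wake up more often
        duty_cycle += step
    elif drain_rate > 0.5:        # energy draining fast: sleep more
        duty_cycle -= step
    return max(min_dc, min(max_dc, duty_cycle))

dc = 0.1
for load, drain in [(0.8, 0.2), (0.3, 0.9), (0.2, 0.1)]:
    dc = adapt_duty_cycle(dc, load, drain)
    print(f"load={load:.1f} drain={drain:.1f} -> duty cycle {dc:.2f}")
```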
Abstract:
Index tracking has become one of the most common strategies in asset management. The index-tracking problem consists of constructing a portfolio that replicates the future performance of an index by including only a subset of the index constituents in the portfolio. Finding the most representative subset is challenging when the number of stocks in the index is large. We introduce a new three-stage approach that at first identifies promising subsets by employing data-mining techniques, then determines the stock weights in the subsets using mixed-binary linear programming, and finally evaluates the subsets based on cross validation. The best subset is returned as the tracking portfolio. Our approach outperforms state-of-the-art methods in terms of out-of-sample performance and running times.
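One plausible shape for the second-stage weight-determination problem is the cardinality-constrained tracking-error model below; it is illustrative only, and the paper's exact mixed-binary formulation may differ.

```latex
% Illustrative mixed-binary tracking model: z_i selects at most k stocks,
% w_i are their weights; the absolute deviations are linearized in the
% standard way with auxiliary variables.
\begin{align*}
\min_{w,\,z} \quad & \frac{1}{T}\sum_{t=1}^{T}\Bigl|\, r^{\mathrm{index}}_{t} - \sum_{i=1}^{n} w_i\, r_{i,t} \Bigr| \\
\text{s.t.} \quad & \sum_{i=1}^{n} w_i = 1, \qquad 0 \le w_i \le z_i \quad (i = 1,\dots,n), \\
& \sum_{i=1}^{n} z_i \le k, \qquad z_i \in \{0,1\}.
\end{align*}
```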
Abstract:
We prove exponential rates of convergence of hp-version discontinuous Galerkin (dG) interior penalty finite element methods for second-order elliptic problems with mixed Dirichlet-Neumann boundary conditions in axiparallel polyhedra. The dG discretizations are based on axiparallel, σ-geometric anisotropic meshes of mapped hexahedra and anisotropic polynomial degree distributions of μ-bounded variation. We consider piecewise analytic solutions which belong to a larger analytic class than those for the pure Dirichlet problem considered in [11, 12]. For such solutions, we establish the exponential convergence of a nonconforming dG interpolant given by local L^2-projections on elements away from corners and edges, and by suitable local low-order quasi-interpolants on elements at corners and edges. Due to the appearance of non-homogeneous, weighted norms in the analytic regularity class, new arguments are introduced to bound the dG consistency errors in elements abutting on Neumann edges. The non-homogeneous norms also entail some crucial modifications of the stability and quasi-optimality proofs, as well as of the analysis for the anisotropic interpolation operators. The exponential convergence bounds for the dG interpolant constructed in this paper generalize the results of [11, 12] for the pure Dirichlet case.
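For orientation, exponential convergence in this hp-dG setting is typically expressed as a bound of the following form in terms of the number of degrees of freedom N; the precise constants and exponent are those established in the paper's analysis.

```latex
% Typical shape of an exponential convergence bound for hp-dG methods in
% three-dimensional polyhedra (illustrative; see the paper for the exact statement):
\| u - u_{\mathrm{DG}} \|_{\mathrm{DG}} \;\le\; C \exp\bigl(-b\,\sqrt[5]{N}\bigr),
\qquad C,\, b > 0 \text{ independent of } N .
```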
Abstract:
Quaternary marine tephras in the Izu-Bonin Arc offer significant information about the explosive volcanic activity of the arc. Visual core descriptions, petrographic examinations, and chemical and grain-size analyses were conducted on tephras of backarc, arc, and forearc origin. The tephras are black and white and occur in simple and multiple modes, with mixed and nonmixed ashes of black and white glass shards. Their grain-size distributions fall into three categories: coarse, white pumiceous, and fine white and black well-sorted types. The frequency of occurrence of the white and black tephras differs across the tectonic settings of the arc. Chemically, the Quaternary tephras in this region belong to the low-alkali tholeiitic series, with lower K2O and TiO2 than ordinary arc volcanic materials. Several tephras from different sites along the forearc correlate with each other, with tephras at the Shikoku Basin site, and with Aogashima volcanics. These volcanic ashes resemble those in other backarc rifting areas, such as the Fiji, Okinawa (Ryukyu), and Mariana regions.
Abstract:
The influence of climate on forest stand composition, development and growth is undeniable. Many studies have tried to quantify the effect of climatic variables on forest growth and yield. These works become especially important because there is a need to predict the effects of climate change on the development of forest ecosystems. One way of facing this problem is the inclusion of climatic variables in classic empirical growth models. This work has a double objective: (i) to identify the indicators which best describe the effect of climate on Pinus halepensis growth, and (ii) to quantify that effect under several scenarios of rainfall decrease which are likely to occur in the Mediterranean area. A growth mixed model for P. halepensis including climatic variables is presented. Growth estimates are based on data from the Spanish National Forest Inventory (SNFI). The best results are obtained for the indices including rainfall, or rainfall and temperature together, with annual precipitation, precipitation effectiveness, Emberger's index and free bioclimatic intensity standing out among them. The final model includes Emberger's index, free bioclimatic intensity and interactions between competition and climate indices. The results show that a rainfall decrease of about 5% leads to a decrease in volume growth of 5.5-7.5%, depending on site quality.
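Schematically, a growth mixed model of the kind described could take the following form, with a random plot effect capturing between-plot variability; the covariates shown are placeholders, not the paper's fitted model.

```latex
% Schematic linear mixed model for the volume increment iv of tree i in plot j
% (illustrative): fixed climatic effects, a competition-by-climate interaction,
% a random plot effect u_j, and a residual term.
\ln(\mathrm{iv}_{ij}) = \beta_0 + \beta_1\,\mathrm{Emberger}_j + \beta_2\,\mathrm{FBI}_j
 + \beta_3\,\bigl(\mathrm{Comp}_{ij}\times \mathrm{Clim}_j\bigr) + u_j + \varepsilon_{ij},
\qquad u_j \sim N(0,\sigma_u^2), \quad \varepsilon_{ij} \sim N(0,\sigma^2).
```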
Abstract:
A linear method is developed for solving the nonlinear differential equations of a lumped-parameter thermal model of a spacecraft moving in a closed orbit. This method, based on perturbation theory, is compared with heuristic linearizations of the same equations. The essential feature of the linear approach is that it provides a decomposition into thermal modes, analogous to the decomposition of mechanical vibrations into normal modes. The stationary periodic solution of the linear equations can be expressed either as an explicit integral or as a Fourier series. The method is applied to a minimal thermal model of a satellite with ten isothermal parts (nodes) and compared with direct numerical integration of the nonlinear equations. The computational complexity of the method is briefly studied for general thermal models of orbiting spacecraft; it is concluded that the method is certainly useful for reduced models and conceptual design, and that it can also be more efficient than direct integration of the equations for large models. The Fourier-series computations for the ten-node satellite model show that the periodic solution is sufficiently accurate at the second perturbative order.
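The structure of the approach can be summarized as follows; the notation is generic and the exact linearization is the one developed in the paper.

```latex
% Generic lumped-parameter form (illustrative): after linearizing the radiative
% couplings about a reference state, the node temperatures T(t) satisfy
C\,\dot{T}(t) + K\,T(t) = Q(t),
% the thermal modes are the eigenvectors of C^{-1}K, and the stationary periodic
% response to an orbit-periodic load Q(t) = \sum_k Q_k e^{i k \omega t} is
T_p(t) = \sum_k \bigl(K + i k \omega\, C\bigr)^{-1} Q_k\, e^{i k \omega t}.
```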
Abstract:
Dynamic thermal management techniques require a collection of on-chip thermal sensors, which imply a significant area and power overhead. Finding the optimal number of temperature monitors and their locations on the chip surface so as to optimize accuracy is an NP-hard problem. In this work we improve the modeling of the problem by including area, power and networking constraints, along with three inaccuracy terms: spatial errors, sampling-rate errors and monitor-inherent errors. The problem is solved with a simulated annealing algorithm. We apply the algorithm to a test case employing three different types of monitors to highlight the importance of the different metrics. Finally, we present a case study of the Alpha 21364 processor under two different constraint scenarios.
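The following Python sketch shows simulated annealing on a toy version of the placement problem, using worst-case hotspot distance as a stand-in for the paper's richer cost with area, power, networking and the three inaccuracy terms; all parameters are hypothetical.

```python
# Illustrative simulated-annealing sketch for placing K thermal monitors on a
# grid so that the worst-case distance from any hotspot to a monitor is small.
import math
import random

random.seed(1)
GRID = 16                      # chip modeled as a GRID x GRID surface
K = 4                          # number of monitors to place
HOTSPOTS = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(20)]

def cost(placement):
    """Spatial-error proxy: max distance from any hotspot to its nearest monitor."""
    return max(min(math.dist(h, m) for m in placement) for h in HOTSPOTS)

def neighbor(placement):
    """Move one randomly chosen monitor to a random neighboring cell."""
    p = list(placement)
    i = random.randrange(K)
    x, y = p[i]
    p[i] = ((x + random.choice([-1, 0, 1])) % GRID,
            (y + random.choice([-1, 0, 1])) % GRID)
    return p

current = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(K)]
temp = 5.0
while temp > 0.01:
    cand = neighbor(current)
    delta = cost(cand) - cost(current)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        current = cand
    temp *= 0.995               # geometric cooling schedule

print("monitors:", current, "worst-case distance:", round(cost(current), 2))
```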
Abstract:
This work addresses the problem of modeling real dynamical systems from the study of their time series, using a standard formulation intended as a universal abstraction of dynamical systems, regardless of their deterministic, stochastic or hybrid nature. Deterministic and stochastic models are first developed separately and then merged into a hybrid model that allows the study of generic mixed systems, that is, systems exhibiting a combination of deterministic and random behavior. The model combines two components: a deterministic one, consisting of a difference equation derived from an autocorrelation study, and a stochastic one that models the error made by the first. The stochastic component is a universal generator of probability distributions, based on a compound process of random variables uniformly distributed within a time-varying interval. This universal generator is derived in the thesis from a new theory of supply and demand for a generic resource. The resulting model can be formulated conceptually as an entity with three fundamental elements: an engine generating deterministic dynamics, an internal noise source generating uncertainty, and an exposure to the environment representing the interactions of the real system with the external world. In applications, these three elements are fitted to the history of the time series of the dynamical system. Once its components have been fitted, the model behaves adaptively, taking the new values of the system's time series as inputs and computing predictions of its future behavior. Each prediction is given as an interval within which any value is equiprobable, while any value outside the interval has zero probability. In this way, the model computes the future behavior and its level of uncertainty from the current state of the system.

The model is applied in this thesis to very different systems, proving flexible enough to address fields of quite diverse nature. The exchange of telephone traffic between telephony operators, the evolution of financial markets, and the flow of information between Internet servers are studied in depth. All these systems are successfully modeled in the same language, despite being physically very different systems. The study of telephony networks shows that telephone traffic patterns exhibit a strong weekly pseudo-periodicity contaminated with a large amount of noise, especially in the case of international calls. The study of financial markets shows that their fundamental nature is random, with a relatively bounded range of behavior. Part of the thesis is devoted to explaining some of the most important empirical observations in financial markets, such as fat tails, power laws and volatility clustering. Finally, it is shown that communication between Internet servers has, as in the case of financial markets, a completely stochastic underlying component with rather docile behavior, this docility becoming more marked as the distance between servers increases.

Two aspects of the model stand out: its adaptability and its universality. The first is due to the fact that, once the general parameters have been fitted, the model is fed the observable values of the system and computes future behavior from them. Although its parameters are fixed, the variability of the observables used as inputs leads to a great richness of possible outputs. The second is due to the generic formulation of the hybrid model and to the fact that its parameters are fitted from external manifestations of the system under study rather than from its physical characteristics. These factors make the model usable in a great variety of fields. Finally, the thesis proposes other fields in which very promising preliminary results have been obtained, such as financial risk modeling, routing algorithms for telecommunication networks, and climate change.
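The following Python sketch illustrates the hybrid scheme on synthetic data: a lag-1 difference equation as the deterministic engine and a uniform interval, sized from the residuals, as the noise source. It is a minimal stand-in for the thesis model, with all modeling choices hypothetical.

```python
# Illustrative sketch (not the thesis code): a deterministic difference
# equation plus a uniform-interval noise component, producing interval
# forecasts in which every inner value is equiprobable.
import random

random.seed(0)
series = [10 + 0.5 * t + random.uniform(-1, 1) for t in range(100)]

# Deterministic component: a lag-1 difference equation x_t ~ a*x_{t-1} + b,
# fitted by least squares (a crude stand-in for the autocorrelation study).
xs, ys = series[:-1], series[1:]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# Stochastic component: the residuals set the half-width of the uniform
# prediction interval (values outside it get zero probability).
residuals = [y - (a * x + b) for x, y in zip(xs, ys)]
half_width = max(abs(r) for r in residuals)

point = a * series[-1] + b
print(f"next value in [{point - half_width:.2f}, {point + half_width:.2f}], "
      f"all inner values equiprobable")
```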
Abstract:
Sandwich panels of laminated gypsum and rock wool have shown extensive cracking pathology caused by excessive slab deflection. Currently, the most widespread use of this material is in vertical division or partition elements with no structural function, which explains why there are no studies on its fracture mechanism and the related mechanical properties. Therefore, in order to reduce the cracking problem, it is necessary to advance the simulation and prediction of the behaviour of such panels under tensile and shear loads, even though in typical applications they bear no structural responsibility.
Abstract:
This paper presents an ant colony optimization (ACO) algorithm to sequence mixed-model assembly lines while considering the inventory and replenishment of components. This is an NP-hard problem that cannot be solved to optimality by exact methods as the problem size grows. Groups of specialized ants are implemented to solve the different parts of the problem, keeping each part of the problem differentiated. Different types of pheromone structures are created to identify good car sequences and good routes for the component-replenishment vehicle. The contribution of this paper is the collaborative ACO approach to the mixed assembly line and the replenishment of components, and the joint solution of both problems.
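A toy Python sketch in the spirit of the sequencing part is given below: one colony builds car sequences, and pheromone rewards sequences that smooth component consumption. It is illustrative only, with hypothetical data, and the collaborating replenishment colony is omitted.

```python
# Toy ACO sketch for car sequencing (illustrative, not the authors' algorithm).
import random

random.seed(2)
MODELS = {"A": 2, "B": 1, "C": 3}     # parts consumed per car model (toy data)
DEMAND = {"A": 3, "B": 3, "C": 2}     # cars of each model to sequence
ANTS, ITERATIONS, RHO = 10, 50, 0.1   # colony size, iterations, evaporation

pheromone = {(pos, m): 1.0 for pos in range(sum(DEMAND.values())) for m in MODELS}

def build_sequence():
    """One ant builds a sequence, choosing models by pheromone weight."""
    left, seq = dict(DEMAND), []
    for pos in range(sum(DEMAND.values())):
        cands = [m for m in left if left[m] > 0]
        m = random.choices(cands, [pheromone[(pos, c)] for c in cands])[0]
        left[m] -= 1
        seq.append(m)
    return seq

def cost(seq):
    """Variation in per-position part usage: lower means smoother consumption."""
    usage = [MODELS[m] for m in seq]
    avg = sum(usage) / len(usage)
    return sum((u - avg) ** 2 for u in usage)

best, best_cost = None, float("inf")
for _ in range(ITERATIONS):
    ants = [build_sequence() for _ in range(ANTS)]
    for k in pheromone:
        pheromone[k] *= 1 - RHO                   # evaporation
    for seq in ants:
        c = cost(seq)
        if c < best_cost:
            best, best_cost = seq, c
        for pos, m in enumerate(seq):
            pheromone[(pos, m)] += 1.0 / (1 + c)  # reinforce good sequences

print("best sequence:", best, "cost:", round(best_cost, 2))
```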
Abstract:
Over the last years, the need for techniques to predict the vibro-acoustic response of space structures has gained importance. Current numerical techniques can reliably predict the vibro-acoustic behaviour of systems with low or high modal densities. However, these two ranges do not always overlap, which makes it necessary to develop specific methods for the intermediate range, known as the mid-frequency range; the lack of specific methods for computing the vibro-acoustic response in this range is the focus of this dissertation. For the structures studied in this work, the low and high modal density ranges correspond, in general, to the low and high frequency ranges, respectively. The numerical methods for these ranges are well established: deterministic techniques such as the Finite Element Method (FEM) are used in the low frequency range, whereas statistical techniques such as Statistical Energy Analysis (SEA) are considered more appropriate in the high frequency range. In the mid-frequency range, neither of these numerical methods can be used with sufficient confidence and, as a consequence, in the absence of more specific proposals, hybrid methods have been developed that combine low and high frequency methods so that each compensates for the deficiencies of the other in this middle range.

This dissertation proposes two different solutions to the mid-frequency problem. The first, the Subsystem based High Frequency Limit (SHFL) procedure, is a multi-hybrid procedure in which each sub-structure of the full system is modeled with the appropriate technique, depending on the frequency of study. For this purpose, the concept of the high frequency limit of a sub-structure is introduced: the limit above which the sub-structure has enough modal density to be modeled by SEA. If the analysis frequency is lower than the high frequency limit of the sub-structure, it is modeled with FEM; if it is higher, the sub-structure is modeled by SEA. The procedure leads to a number of hybrid models covering the mid-frequency range, which can thus be defined precisely as the range between the lowest and the highest of the sub-structure high frequency limits. The results obtained with this method show an improvement in the continuity of the vibro-acoustic response function, with a smooth transition between the low and high frequency ranges.

The second proposed method is the Hybrid Substructuring method based on Component Mode Synthesis (HS-CMS). It classifies the system modal basis into global modes (affecting the whole system or several of its parts) and local modes (affecting a single sub-structure), using a Component Mode Synthesis, in particular a Craig-Bampton transformation, to express the system modal basis in the modal bases associated with each sub-structure. Each sub-structure's modes are then classified as global or local, the first associated with long-wavelength motion and the second with short-wavelength motion, with the high frequency limit of each sub-structure serving as the frequency frontier between the two sets. From this classification, the global equations of motion are derived, governed by the global modes, in which the influence of the local modes is introduced through corrections to the dynamic stiffness matrix and the force vector. The local equations are solved through SEA; this SEA model is itself hybrid, since it includes the additional input power contributed by the global modes. The method has been tested for computing the response of structures subjected to both structural and acoustic loads.

Both methods were first tested on simple structures to establish their basis and application hypotheses. They were then applied to space structures, such as satellites and antenna reflectors, with good results, as concluded from the comparison of simulations with experimental data measured in both structural and acoustic tests. This work opens a wide field of research toward accurate and efficient methodologies for reproducing the vibro-acoustic behaviour of complex systems in the mid-frequency range.
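A minimal Python sketch of the SHFL selection rule follows, with hypothetical sub-structures and high frequency limits.

```python
# Illustrative sketch of the SHFL rule (hypothetical numbers): each
# sub-structure has a high frequency limit; at a given analysis frequency,
# sub-structures below their limit are modeled with FEM, the rest with SEA.

SUBSTRUCTURES = {            # name -> high frequency limit in Hz (toy values)
    "panel": 400.0,
    "reflector": 900.0,
    "strut": 2500.0,
}

def modelling_plan(freq_hz):
    return {name: ("SEA" if freq_hz >= limit else "FEM")
            for name, limit in SUBSTRUCTURES.items()}

# The mid-frequency range spans the lowest to the highest limit: here any
# frequency between 400 and 2500 Hz yields a hybrid FEM/SEA model.
for f in (200.0, 1000.0, 3000.0):
    print(f, "Hz ->", modelling_plan(f))
```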
Abstract:
Background: This study examined the daily surgical scheduling problem in a teaching hospital. The problem involves multiple operating rooms and different types of surgeons in a typical surgical day with deterministic operation durations (pre-incision, incision, and post-incision times). Teaching hospitals play a key role in the health-care system; however, existing models assume that the duration of surgery is independent of the surgeon's skills, and this issue has not been properly addressed in other studies. We analyze the case of a Spanish public hospital, in which continuous pressure and budget reductions demand a more efficient use of resources. Methods: To obtain an optimal solution for this problem, we developed a mixed-integer programming model and a user-friendly interface that facilitate the scheduling of planned operations for the following surgical day. We also implemented a simulation model to assist in the evaluation of different dispatching policies for surgeries and surgeons. The typical aspects we took into account were the type of surgeon, potential overtime, surgeon idle time, and the use of operating rooms. Results: It is necessary to consider the expertise of a given surgeon when formulating a schedule: such skill can decrease the probability of delays that could affect subsequent surgeries or cause the cancellation of the final surgery. We obtained optimal solutions for a set of instances constructed from surgical time data collected at a Spanish public hospital. Conclusions: We developed a computer-aided framework with a user-friendly interface for use by a surgical manager that presents a 3-D simulation of the problem, and we obtained an efficient formulation for this complex problem. However, the spread of this kind of operational research in Spanish public hospitals will take a long time, since there is a lack of awareness of the beneficial techniques and possibilities that operational research can offer to the health-care system.
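A minimal mixed-integer sketch of the scheduling idea is shown below: surgeries are assigned to operating rooms to minimize total overtime beyond each room's regular session. It is illustrative, not the paper's model; the data are toy values and the PuLP package is assumed to be available.

```python
# Illustrative MIP sketch of daily OR scheduling (not the paper's model).
# Durations already include pre-incision, incision, and post-incision times.
import pulp

DURATIONS = {"s1": 120, "s2": 90, "s3": 150, "s4": 60}   # minutes (toy data)
ROOMS = ["or1", "or2"]
SESSION = 240                                            # regular minutes/room

prob = pulp.LpProblem("daily_or_schedule", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (DURATIONS, ROOMS), cat="Binary")
ot = pulp.LpVariable.dicts("overtime", ROOMS, lowBound=0)

prob += pulp.lpSum(ot[r] for r in ROOMS)                 # minimize total overtime
for s in DURATIONS:                                      # each surgery scheduled once
    prob += pulp.lpSum(x[s][r] for r in ROOMS) == 1
for r in ROOMS:                                          # room load vs session length
    prob += pulp.lpSum(DURATIONS[s] * x[s][r] for s in DURATIONS) <= SESSION + ot[r]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for r in ROOMS:
    load = [s for s in DURATIONS if x[s][r].value() == 1]
    print(r, load, "overtime:", ot[r].value())
```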
Abstract:
Embedded systems are increasingly common and complex, so finding reliable, efficient and inexpensive software development processes aimed specifically at this class of systems is more necessary than ever. Unlike the situation until recently, advances in microprocessor technology now allow the development of platforms powerful enough to run several software systems on a single machine. Moreover, some embedded systems have safety requirements: many lives and/or large economic investments depend on their correct operation. Such software systems are designed and implemented according to very strict and demanding software development standards, and in some cases certification of the software may also be required. For these cases, mixed-criticality systems can be a very valuable alternative. In this class of systems, applications with different criticality levels run on the same computer. However, it is often necessary to certify the entire system at the criticality level of the most critical application, which makes costs soar. Virtualization has emerged as a very interesting technology for containing those costs: it allows a set of virtual machines, or partitions, to run applications with very high levels of both temporal and spatial isolation, which in turn allows each partition to be certified independently.

The development of MCPS (Mixed-Criticality Partitioned Systems) requires additional roles and activities that traditional development processes do not cover. The system integrator has to define the system partitions, and application developers have to take into account the characteristics of the partition in which their application will run. Traditional software process models therefore have to be adapted. The V-model, which has been particularly relevant in embedded systems development, has been adapted to scenarios such as the parallel development of applications or the addition of a new partition to an existing system.

The objective of this PhD thesis is to improve the available technology for MCPS development by providing a framework tailored to the development of this type of system and by defining a flexible and efficient algorithm for automatically generating system partitionings. The framework integrates all the activities required for developing MCPS, including the new roles and activities mentioned above. Its design is based on MDE (Model-Driven Engineering), which promotes the use of models as fundamental elements of the development process. The framework provides the means for modeling and partitioning the system, validating the results, and generating the artifacts needed to compile, build and deploy it. Extensibility and integration with external validation tools were key design factors: support for additional non-functional requirements and for new final artifacts, such as documentation or different programming languages, can be added to the framework.

A key part of the framework is the partitioning algorithm. It has been designed to be independent of the types of application requirements and to enable the system integrator to tailor the partitioning to the specific requirements of a system. This independence is achieved by defining partitioning constraints, which the algorithm guarantees will hold in the resulting partitioned system. The constraints have sufficient expressive capacity for a small set of them to state the most common non-functional requirements, and they can be defined manually by the system integrator or generated automatically by a tool from the functional and non-functional requirements of an application. The partitioning algorithm takes the system models and the partitioning constraints as inputs and generates a deployment model defining the partitions needed for the system; each partition in turn defines the applications allocated to it and the resources it needs to run correctly. The partitioning problem, including applications and constraints, is modeled mathematically as a colored graph, in which a proper vertex coloring represents a correct system partitioning. The algorithm can also provide partitionings alternative to the one initially proposed, if required.

The framework, including the partitioning algorithm, has been successfully used in two industrial use cases: the UPMSat-2 satellite and a demonstrator of the control system of a wind-power turbine. The partitioning algorithm has also been validated by running numerous synthetic scenarios, including very complex ones with more than 500 applications.
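To make the coloring formulation concrete, the Python sketch below models separation constraints as graph edges and derives a partitioning from a greedy proper vertex coloring. It is a hypothetical toy, not the thesis algorithm, and the applications and constraints are invented.

```python
# Illustrative sketch (not the thesis algorithm): partitioning constraints as
# a graph whose vertices are applications and whose edges mean "must not share
# a partition"; a proper vertex coloring then yields a valid partitioning,
# with one partition per color.
APPS = ["nav", "telemetry", "payload", "logging", "fdir"]
SEPARATE = {("nav", "logging"), ("nav", "payload"),         # e.g. different
            ("telemetry", "payload"), ("fdir", "logging")}  # criticality levels

def conflicts(a, b):
    return (a, b) in SEPARATE or (b, a) in SEPARATE

def greedy_partitioning(apps):
    """Greedy proper coloring: each app gets the lowest color unused by its neighbors."""
    color = {}
    for app in apps:
        used = {color[other] for other in color if conflicts(app, other)}
        color[app] = next(c for c in range(len(apps)) if c not in used)
    return color

partitions = {}
for app, c in greedy_partitioning(APPS).items():
    partitions.setdefault(f"partition{c}", []).append(app)
print(partitions)
```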