80 results for modelling and simulating


Relevance: 80.00%

Abstract:

This paper presents an open-source simulation tool being developed within a European research project. The tool, whose final version will be freely available through a website, supports the modelling and design of different types of grid-connected PV systems, such as large grid-connected plants and building-integrated installations. It is based on previous software developed by the IES-UPM, whose models and energy-loss scenarios have been validated during the commissioning of PV projects carried out in Spain, Portugal, France and Italy, with an aggregated capacity of nearly 300 MW. This link between design and commissioning is one of the key points of the tool presented here, and it is not usually addressed by current commercial software. The tool provides, among other simulation results, the energy yield, an analysis and breakdown of energy losses, and estimates of financial returns adapted to the legal and financial frameworks of each European country. In addition, educational resources will be developed and integrated into the tool, devoted not only to learning how to use the software but also to training users in best practices for PV system design. The tool will also incorporate the recommendations of several PV community experts, who have been invited to identify current needs in the field of PV system simulation, for example the possibility of using meteorological forecasts as input data, or modelling the integration of large energy storage systems such as vanadium redox or lithium-ion batteries. Finally, during the verification and testing stages of development, the software will also be open to suggestions from the different actors of the PV community, such as promoters, installers and consultants.
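To illustrate the kind of energy-yield and loss-breakdown calculation such a tool performs, here is a minimal Python sketch; the loss fractions, nominal power and irradiation figures are hypothetical placeholders, not values from the IES-UPM models.

# Minimal sketch of a grid-connected PV energy-yield estimate.
# All figures are illustrative placeholders, not values from the tool.

def annual_energy_yield(p_nominal_kw, poa_irradiation_kwh_m2, losses):
    """Estimate annual AC energy (kWh) from nominal DC power, plane-of-array
    irradiation and a dictionary of fractional losses (soiling, temperature,
    inverter, wiring, ...)."""
    g_stc = 1.0            # kW/m2, irradiance at Standard Test Conditions
    performance_ratio = 1.0
    for name, fraction in losses.items():
        performance_ratio *= (1.0 - fraction)
    return p_nominal_kw * poa_irradiation_kwh_m2 / g_stc * performance_ratio

losses = {"soiling": 0.02, "temperature": 0.06, "inverter": 0.03, "wiring": 0.02}
yield_kwh = annual_energy_yield(p_nominal_kw=1000.0,
                                poa_irradiation_kwh_m2=1800.0,
                                losses=losses)
print(f"Estimated annual yield: {yield_kwh:.0f} kWh")
print("Loss breakdown:", {k: f"{v:.0%}" for k, v in losses.items()})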

Relevance: 80.00%

Abstract:

A comprehensive assessment of nitrogen (N) flows at the landscape scale is fundamental to understanding spatial interactions in the N cascade and to informing the development of locally optimised N management strategies. To explore these interactions, complete N budgets were estimated for two contrasting hydrological catchments (dominated by agricultural grassland vs. semi-natural peat-dominated moorland) forming part of an intensively studied landscape in southern Scotland. Local-scale atmospheric dispersion modelling and detailed farm and field inventories provided high-resolution estimates of input fluxes. Direct agricultural inputs (i.e. grazing excreta, N2 fixation, organic and synthetic fertiliser) accounted for most of the catchment N inputs, representing 82% in the grassland and 62% in the moorland catchment, while atmospheric deposition made a significant contribution, particularly in the moorland catchment, where it supplied 38% of the N inputs. The estimated catchment N budgets highlighted areas of key uncertainty, particularly N2 exchange and stream N export. The resulting N balances suggest that the study catchments have a limited capacity to store N within soils, vegetation and groundwater. The "catchment N retention", i.e. the amount of N which is either stored within the catchment or lost through atmospheric emissions, was estimated to be 13% of the net anthropogenic input in the moorland and 61% in the grassland catchment. These values contrast with regional-scale estimates for Europe, which range from 50% to 90% of the net anthropogenic input, with an average of 82% (Billen et al., 2011). This study emphasises the need for detailed budget analyses to identify the N status of European landscapes.
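As a worked illustration of the "catchment N retention" definition used above, the following sketch computes retention as the share of the net anthropogenic input that does not leave via the stream; it is a simplified balance with hypothetical figures, assuming stream export is the only other quantified loss pathway.

# Illustrative calculation of "catchment N retention": the N either stored in
# the catchment or re-emitted to the atmosphere, expressed as a fraction of
# the net anthropogenic input. All figures are hypothetical (kg N/ha/yr).

net_anthropogenic_input = 120.0   # fertiliser + fixation + excreta + net deposition
stream_export = 47.0              # N leaving the catchment in stream water

retained = net_anthropogenic_input - stream_export   # stored or emitted to air
retention_fraction = retained / net_anthropogenic_input
print(f"Catchment N retention: {retention_fraction:.0%}")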

Relevance: 80.00%

Abstract:

We aim to understand the multislip behaviour of metals subjected to irreversible deformations at small scales. By focusing on the simple shear of a constrained single-crystal strip, we show that discrete Dislocation Dynamics (DD) simulations predict a strong latent-hardening size effect, with smaller being stronger in the range [1.5 µm, 6 µm] for the strip height. We attempt to represent the DD pseudo-experimental results by developing a flow theory of Strain Gradient Crystal Plasticity (SGCP) involving both energetic and dissipative higher-order terms and, as a main novelty, a strain gradient extension of conventional latent hardening. To assess the capability of the proposed SGCP theory, we implement it in a Finite Element (FE) code and calibrate its material parameters on the basis of the DD results. The SGCP FE code is developed specifically for the boundary value problem under study, so that a fully implicit (Backward Euler) consistent algorithm can be implemented. Special emphasis is placed on discussing the role of the material length scales involved in the SGCP model, from both the mechanical and the numerical points of view.
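For readers unfamiliar with the fully implicit time integration mentioned above, the sketch below shows a generic Backward Euler step solved with Newton's method for a simple scalar rate equation; it illustrates the scheme only and is not the SGCP constitutive update.

import math

def backward_euler_step(x_n, dt, f, dfdx, tol=1e-10, max_iter=50):
    """One fully implicit (Backward Euler) step for dx/dt = f(x):
    solve r(x) = x - x_n - dt*f(x) = 0 with Newton's method."""
    x = x_n  # initial guess
    for _ in range(max_iter):
        r = x - x_n - dt * f(x)
        if abs(r) < tol:
            break
        x -= r / (1.0 - dt * dfdx(x))   # Newton update with consistent Jacobian
    return x

# Example: linear relaxation dx/dt = -k*x (hypothetical rate equation).
k = 2.0
f = lambda x: -k * x
dfdx = lambda x: -k
x = 1.0
for _ in range(100):
    x = backward_euler_step(x, dt=0.01, f=f, dfdx=dfdx)
print(f"x after t=1.0: {x:.4f} (exact: {math.exp(-k * 1.0):.4f})")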

Relevance: 80.00%

Abstract:

This paper presents a new method to extract knowledge from existing data sets, that is, to extract symbolic rules from the weights of an Artificial Neural Network. The method has been applied to a neural network with a special architecture named Enhanced Neural Network (ENN). This architecture improves on the results obtained with a multilayer perceptron (MLP). The relationship among the knowledge stored in the weights, the performance of the network, and the newly implemented algorithm for acquiring rules from the weights is explained. The method itself provides a model to follow for knowledge acquisition with the ENN.
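The ENN-specific algorithm is not reproduced here; as a generic, heavily simplified illustration of turning network weights into symbolic rules, the sketch below converts the largest-magnitude weights of a single sigmoid unit into an IF-THEN rule (all weights and feature names are hypothetical).

import numpy as np

def extract_rule(weights, bias, feature_names, top_k=3):
    """Turn the most influential inputs of a sigmoid unit into a symbolic rule.
    A positive weight supports firing the unit; a negative weight inhibits it."""
    order = np.argsort(-np.abs(weights))[:top_k]
    conditions = []
    for i in order:
        relation = "is high" if weights[i] > 0 else "is low"
        conditions.append(f"{feature_names[i]} {relation} (w={weights[i]:+.2f})")
    return "IF " + " AND ".join(conditions) + f" THEN unit fires (bias={bias:+.2f})"

# Hypothetical trained weights for one hidden unit.
w = np.array([1.8, -0.2, 2.4, -1.5])
names = ["petal_length", "sepal_width", "petal_width", "sepal_length"]
print(extract_rule(w, bias=-0.7, feature_names=names))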

Relevance: 80.00%

Abstract:

This thesis deals with the modelling, analysis and optimization of plane steel building frames with regard to the ultimate and serviceability limit states. The objective is to present an organized sequential technique of discrete optimization for achieving the minimum cost of plane steel frames, taking into account the EC-3 specifications and including the effects of semi-rigid joints and non-prismatic elements in the design process. A further aim is to estimate their influence on the final design. The ultimate goal is to draw practical conclusions that are useful and easily applicable in steel-structure projects. An enormous amount of technical and scientific literature on the structural response of steel frames is available, so an intensive effort has been made to compile the current state of knowledge, with emphasis on present research lines and needs. Information has been gathered on modern calculation and design methods, on the factors that influence the structural response, and on modelling and optimization techniques, guided by the indications that some current design codes offer on the subject. A modelling procedure based on the finite element method has been implemented in the MatLab environment. Several key aspects have been included, such as second-order behaviour, the safety assessment against instability, and the search for the minimum cost of the structure with respect to the limit states according to the EC-3 specifications. The flexibility of the joints has also been modelled, and its influence on the structural response and on the final weight and cost has been analysed. Several application examples have been carried out, and the validity of the model has been checked against results for structures already analysed in well-known technical references. Conclusions have been drawn about the modelling and analysis process and about the effect of joint flexibility on the structural response, with the purpose of providing useful guidelines for the design stage. One of the main contributions of this work, from the optimization point of view, is the incorporation of a formulation for non-prismatic elements with semi-rigid connections at their ends. An elastic stiffness matrix has been derived for such elements, and its validity for nonlinear analysis has been verified by comparing the results with those obtained using another analytically derived matrix available in the literature and with the commercial software SAP2000. Another contribution is the development of a method for the cost optimization of plane steel building frames that takes into account aspects such as geometric imperfections, the possibility of incorporating non-prismatic elements, and the characterization of semi-rigid connections, evaluating the influence of their flexibility on the structural response. Parametric studies have been performed to assess the sensitivity and stability of the solutions obtained, as well as the range of validity of the conclusions drawn.
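As an illustration of the kind of sequential discrete search such a cost-optimization procedure performs, here is a Python sketch; the section catalogue, costs and the utilization check standing in for the EC-3 verifications are entirely hypothetical.

# Sketch of a sequential discrete cost optimization over a catalogue of
# steel sections. The utilization check stands in for the EC-3 verifications
# (resistance, stability, serviceability) and is entirely hypothetical.

CATALOGUE = [                      # (section name, cost per metre, arbitrary units)
    ("IPE 200", 22.4), ("IPE 240", 30.7), ("IPE 300", 42.2),
    ("IPE 360", 57.1), ("IPE 400", 66.3),
]

def utilization(section, member_length):
    """Placeholder for the EC-3 checks; returns a demand/capacity ratio."""
    name, cost = section
    depth = int(name.split()[1])           # crude proxy for section capacity
    return 600.0 * member_length / depth ** 1.5

def cheapest_valid_section(member_length):
    """Walk the catalogue from cheapest to most expensive and keep the first
    section whose utilization does not exceed 1.0."""
    for section in sorted(CATALOGUE, key=lambda s: s[1]):
        if utilization(section, member_length) <= 1.0:
            return section
    raise ValueError("no section in the catalogue satisfies the checks")

print(cheapest_valid_section(member_length=6.0))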

Relevance: 80.00%

Abstract:

This PhD thesis deals with the acoustic characterization of natural ecosystems and the assessment of the impact of anthropogenic noise on its potential receivers, including non-human receivers and the associated ecological effects. It also analyses the management implications at different scales and carries out an economic valuation. The study provides solutions for characterizing soundscapes in a way that is compatible with different working scales and levels of technical effort, and in contexts of limited resources, so that soundscapes can be treated like any other environmental variable in nature conservation and management. Tools and methodologies from disciplines such as environmental acoustics, bioacoustics and landscape ecology have been adapted to serve the specific goals of assessing and managing soundscapes and environmental noise over large areas. A procedure has been established for systematic field measurement surveys, and common computer noise-modelling methods have been adapted to analyse soundscapes that are dynamic in time and space, from single locations up to the landscape scale. With this information it is possible to produce environmental cartography, for instance maps delimiting the zone of influence of different noise sources on the quality of wildlife habitats. The use of the equivalent continuous sound pressure level (Leq) is recommended because it is convenient for both measurement and modelling and because it can be adapted to any spatial and temporal dimension required, for example depending on the landscape, activities or species selected as the subject of analysis. It was found that the voices and conversations of visitors in a resting and contemplation area (Laguna Grande de Peñalara) are the noise source most frequently identified by the visitors themselves (51%), raising the equivalent sound pressure level by about 4.5 dBA above the natural ambient level (Lnat). It was also found that roads with low traffic volumes (AADT < 1000) can cause physiological stress in wildlife and affect the quality of their habitats. The 30 dBA isophone of the Leq (24 h) index divides the roe deer of the study area into two groups with different levels of physiological stress, higher for the animals located closer to the road with the larger traffic volume and therefore exposed to higher noise levels. It was also possible to delimit an exclusion zone for cinereous vulture nesting around the roads, coincident with the Leq (24 h) 40 dBA isophone, which affects 11% of the species' potential habitat. In addition, a novel economic valuation of noise pollution in protected natural areas was carried out by analysing the sound experience of visitors to the former Peñalara Natural Park; it showed that visitors would be willing to pay an entrance fee of approximately 1 euro if such a payment led to an improvement in the conservation status of the site. In conclusion, protected natural areas can suffer a significant environmental impact from noise sources located inside them, but also from distant sources that lie outside the jurisdiction of the park managers. Sound events such as aircraft overflights can raise the Lnat reference level in the quiet areas of the park by approximately 8 dBA. Active management of the acoustic environment is recommended, and research on the ecological effects of environmental noise should be extended to other places and animal species.
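As a reminder of how the recommended Leq index is obtained from sound-pressure-level samples (the standard energy-average definition, not code from the thesis), here is a short Python sketch with hypothetical levels.

import numpy as np

def leq(spl_samples_db):
    """Equivalent continuous sound pressure level: energy-average the samples
    on a linear scale, then return to decibels:
    Leq = 10*log10(mean(10**(L_i/10)))."""
    linear = 10.0 ** (np.asarray(spl_samples_db) / 10.0)
    return 10.0 * np.log10(linear.mean())

# Hypothetical one-minute record: a quiet natural background with a short
# noisy event (e.g. voices or an aircraft overflight).
background = np.full(50, 32.0)            # 50 s at 32 dBA
event = np.full(10, 55.0)                 # 10 s at 55 dBA
l_eq = leq(np.concatenate([background, event]))
l_nat = leq(background)
print(f"Leq = {l_eq:.1f} dBA, increase over Lnat = {l_eq - l_nat:.1f} dBA")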

Relevance: 80.00%

Abstract:

Maximizing energy autonomy is a constant challenge when deploying mobile robots in ionizing radiation or other hazardous environments. A reliable robot system is essential for the successful execution of missions and to avoid manual recovery of the robots in environments that are harmful to human beings. For deploying robot missions at short notice, the ability to know beforehand the energy required to perform the task is essential. This paper presents an on-line method for predicting energy requirements based on pre-determined power models for a mobile robot. A small mobile robot, the Khepera III, is used for the experimental study, and the results are promising, with high prediction accuracy. The application of the energy prediction models to energy optimization and simulation is also discussed, along with examples of significant energy savings.
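A minimal sketch of predicting mission energy from a pre-determined power model follows; the model form and coefficients are hypothetical and are not the Khepera III models identified in the paper.

def drive_power(v, w):
    """Hypothetical power model (watts) as a function of linear speed v (m/s)
    and angular speed w (rad/s): an idle term plus motion-dependent terms."""
    P_IDLE, K_LIN, K_ANG = 1.2, 6.5, 0.8
    return P_IDLE + K_LIN * abs(v) + K_ANG * abs(w)

def mission_energy(segments):
    """Predict mission energy (joules) from a list of
    (duration_s, linear_speed, angular_speed) segments."""
    return sum(dt * drive_power(v, w) for dt, v, w in segments)

plan = [(30.0, 0.2, 0.0),    # straight run
        (5.0, 0.0, 1.5),     # turn in place
        (45.0, 0.15, 0.0)]   # slower straight run
print(f"Predicted energy: {mission_energy(plan):.1f} J")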

Relevance: 80.00%

Abstract:

Flutter is a vibratory phenomenon arising from the interaction of inertial, elastic and aerodynamic forces. It consists of an exchange of energy, observed as a change in damping, between two or more structural modes, called critical modes, whose frequencies tend to approach each other (frequency coalescence). Flight flutter testing involves high risk because of the possibility of an abrupt loss of aeroelastic stability (hard or explosive flutter) that may lead to the destruction of the aircraft. Moreover, associated phenomena such as LCO (Limit Cycle Oscillation) and coupling with the flight controls may appear during flight. For these reasons, exhaustive analyses, including GVT (Ground Vibration Testing), have to be performed before the flight tests begin, and the tests themselves must follow robust procedures. The objective of the tests is to delimit the stability boundary without reaching it, always remaining within the stable flight envelope. This requires prediction methods, the most widely used being the "Flutter Margin". To know how much aeroelastic stability the aircraft retains and how far it is from the stability boundary (through the prediction methods), the modal parameters, in particular frequency and damping, are of vital importance. The flight test therefore consists of exciting the structure at different flight conditions, measuring the response and analysing it in real time to obtain these two parameters; a great deal of effort is devoted to real-time signal analysis as a means of reducing the risk of this type of testing. Numerous modal analysis methods exist, but few are capable of analysing the signals produced by flutter tests because of their special characteristics. A novel method, based on the Singular Value Decomposition (SVD) and the QR factorization, has been developed and applied to the analysis of signals from F-18 flutter flights. The method is able to identify the frequency and damping of the critical modes. The algorithm relies on the ability of the SVD to analyse, model and predict data series with periodic features and to identify the rank of a matrix, as well as on the ability of the QR factorization to select, from a set of vectors, the best vector basis to represent the vector space they span. The analysis of simulated and real flutter flight-test signals demonstrates, under certain conditions, the effectiveness, robustness, noise immunity and potential for automation of the proposed method.
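The thesis's SVD/QR algorithm itself is not reproduced here; as an illustration of how an SVD of response data can yield modal frequency and damping, the sketch below applies a standard ERA-style identification to a synthetic damped signal (not F-18 flight data).

import numpy as np

def identify_modes(y, dt, order=2):
    """Estimate modal frequency (Hz) and damping ratio from a free-decay
    response y sampled at interval dt, via SVD of Hankel matrices (ERA)."""
    rows = len(y) // 2
    H0 = np.array([y[i:i + rows] for i in range(rows)])          # Hankel matrix
    H1 = np.array([y[i + 1:i + 1 + rows] for i in range(rows)])  # shifted Hankel
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    Ur, sr, Vr = U[:, :order], s[:order], Vt[:order, :].T
    S = np.diag(1.0 / np.sqrt(sr))
    A = S @ Ur.T @ H1 @ Vr @ S                 # reduced-order state matrix
    poles = np.log(np.linalg.eigvals(A)) / dt  # continuous-time poles
    freq = np.abs(poles.imag) / (2.0 * np.pi)
    zeta = -poles.real / np.abs(poles)
    return freq, zeta

# Synthetic damped mode: 5 Hz, 2% damping ratio.
dt = 0.01
t = np.arange(0.0, 4.0, dt)
wn = 2 * np.pi * 5.0
y = np.exp(-0.02 * wn * t) * np.sin(wn * np.sqrt(1 - 0.02 ** 2) * t)
freq, zeta = identify_modes(y, dt)
print("frequencies (Hz):", np.round(freq, 3), "damping ratios:", np.round(zeta, 4))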

Relevance: 80.00%

Abstract:

Innovation in Software-intensive Systems (SiS) is becoming relevant for several reasons: software is embedded in many sectors such as automotive, robotics, mobile phones and health care. Firms need to understand the factors that affect innovation in order to increase the probability of success in their product development, and the assessment of innovation in software products is a powerful mechanism for capturing this knowledge. Companies therefore need to assess products from an innovation perspective to reduce the gap between their developed products and the market. This is even more relevant in the case of SiSs, where real time, timeliness, complexity, interoperability, reactivity and resource sharing are critical features of a new system. Many authors have analysed product innovation assessment and some schemas have been developed, but they are not specific to SiSs; in addition, there is no consensus on the factors or the procedures for performing an assessment. It therefore makes sense to work on the definition of an innovation evaluation framework focused on Software-intensive Systems. This thesis identifies the elements needed to build such a framework for assessing software products from the innovation perspective. Two components have been identified as parts of the framework: a reference model and an adaptive, customizable tool to perform the assessment and to position product innovation. The reference model is composed of four main elements characterizing product innovation assessment: concepts, innovation models, assessment questionnaires and product assessment. The reference model provides the umbrella under which instances of product innovation assessment models can be defined; these instances can be assessed and positioned through questionnaires in the proposed tool, which automates the assessment and the positioning of innovation. The reference model has been rigorously built by applying conceptual modelling and view integration together with qualitative research methods. The tool has been used to assess products such as Skype through models instantiated from the reference model.

Relevance: 80.00%

Abstract:

Modern mobile devices are becoming increasingly powerful in functionality and entertainment as next-generation mobile computing and communication technologies advance rapidly. However, battery capacity has not experienced an equivalent increase. The user experience of modern mobile systems is therefore greatly affected by battery lifetime, an unstable factor that is hard to control. To address this problem, previous work proposed energy-centric power management (PM) schemes that provide a strong guarantee on battery lifetime by globally managing energy as a first-class resource in the system. As the processor scheduler plays a pivotal role in power management and in guaranteeing application performance, this thesis explores the optimization of the user experience of energy-limited mobile systems from the perspective of energy-centric processor scheduling. The thesis first analyses the general factors contributing to the user experience of a mobile system. It then determines the essential requirements on energy-centric processor scheduling for user experience optimization: proportional power sharing, time-constraint compliance and, when necessary, a trade-off between the power share and the time constraints. To meet these requirements, the classical fair queuing algorithm and its reference model are extended from the network and CPU bandwidth sharing domains to the energy sharing domain, and on that basis the energy-based fair queuing (EFQ) algorithm is proposed for energy-centric processor scheduling. The EFQ algorithm is designed to provide proportional power shares to tasks by scheduling them according to their energy consumption and weights. The power share of each time-sensitive task is protected against changes in the scheduling environment to guarantee stable performance, and any instantaneous power share over-allocated to one time-sensitive task can be fairly re-allocated to the other tasks. In addition, to better support real-time and multimedia scheduling, a real-time-friendly mechanism is combined with the EFQ algorithm to give time-limited scheduling preference to the most urgent time-sensitive tasks. The properties of the EFQ algorithm are evaluated through high-level modelling and simulation. The simulation results indicate that the essential requirements of energy-centric processor scheduling can be achieved. The EFQ algorithm is then implemented in the Linux kernel. To assess the properties of the Linux-based EFQ scheduler, an experimental test bench was developed based on an embedded platform, a multithreaded test-bench program and an open-source benchmark suite. Through specifically designed experiments, the thesis first verifies the properties of EFQ in power-share management and real-time scheduling, and then explores the potential benefits of EFQ scheduling for user experience optimization in energy-limited mobile systems. The experimental results on power-share management show that EFQ is more effective than the Linux CFS scheduler at managing power shares, achieving a proportional sharing of system power regardless of the device on which the energy is spent. The experimental results on real-time scheduling demonstrate that EFQ can achieve effective, flexible and robust time-constraint compliance even when the number of tasks or the energy estimation error increases. Finally, a comparative analysis of the experimental results on user experience optimization demonstrates that EFQ is more effective and flexible than traditional processor scheduling algorithms, such as the default Linux scheduler, and that it makes it possible to optimize and preserve the user experience of energy-limited mobile systems.
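A minimal sketch of the fair-queuing idea extended to energy follows, where each task's virtual time advances by the energy it consumes divided by its weight; this illustrates the principle only and is not the thesis's EFQ algorithm or its Linux implementation.

import heapq

class EnergyFairQueue:
    """Toy energy-based fair queue: each task accumulates a virtual time equal
    to consumed_energy / weight, and the runnable task with the smallest
    virtual time is scheduled next."""

    def __init__(self):
        self._heap = []   # entries are (virtual_time, task_name, weight)

    def add_task(self, name, weight, virtual_time=0.0):
        heapq.heappush(self._heap, (virtual_time, name, weight))

    def pick_and_charge(self, energy_quantum_j):
        """Pick the task with the smallest virtual time, charge it one energy
        quantum (joules) and re-queue it."""
        vtime, name, weight = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (vtime + energy_quantum_j / weight, name, weight))
        return name

scheduler = EnergyFairQueue()
scheduler.add_task("video_player", weight=3.0)   # entitled to 3x the power share
scheduler.add_task("sync_daemon", weight=1.0)
schedule = [scheduler.pick_and_charge(0.5) for _ in range(8)]
print(schedule)   # video_player appears roughly three times as often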

Relevance: 80.00%

Abstract:

A body with a shape similar to a hot wire with its sheath, but without prongs, has been placed close to the wall of a turbulent channel at Re_tau = 600. The results for the channel flow without the wire agree with previously published ones, despite the modest resolution and domain size. A simplified, two-dimensional version of the wire at the same Reynolds number has been studied to compare the dynamic response of cold and hot wires; a slightly larger perturbation is seen in the hot case, but the dynamic response is almost identical. The cold wire appears to be able to measure the instantaneous velocity through its total drag after proper calibration. Since this is a DNS, a complete description of the flow field around the wire is obtained.

Relevance: 80.00%

Abstract:

The current magnetic confinement nuclear fusion power reactor concepts going beyond ITER are based on assumptions about the availability of materials with extreme mechanical, heat, and neutron load capacity. In Europe, the development of such structural and armour materials together with the necessary production, machining, and fabrication technologies is pursued within the EFDA long-term fusion materials programme. This paper reviews the progress of work within the programme in the area of tungsten and tungsten alloys. Results, conclusions, and future projections are summarized for each of the programme's main subtopics, which are: (1) fabrication, (2) structural W materials, (3) W armour materials, and (4) materials science and modelling. It gives a detailed overview of the latest results on materials research, fabrication processes, joining options, high heat flux testing, plasticity studies, modelling, and validation experiments.

Relevance: 80.00%

Abstract:

To support the efficient execution of post-genomic multi-centric clinical trials in breast cancer, we propose a solution that streamlines the assessment of the eligibility of patients for available trials. Assessing the eligibility of a patient for a trial requires evaluating whether each eligibility criterion is satisfied and is often a time-consuming, manual task. The main focus in the literature has been on proposing different methods for modelling and formalizing the eligibility criteria; however, the current adoption of these approaches in clinical care is limited. Less effort has been dedicated to automatically matching the criteria to the patient data managed in clinical care. We address both aspects and propose a scalable, efficient and pragmatic patient-screening solution enabling the automatic evaluation of patients' eligibility for a relevant set of trials. This covers the flexible formalization of criteria and of other relevant trial metadata, as well as the efficient management of these representations.
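As a generic illustration of formalizing eligibility criteria as machine-evaluable predicates over patient data (the fields and criteria below are hypothetical, not the representation proposed in the paper):

# Each criterion is a named predicate over a patient record (a plain dict here).
CRITERIA = {
    "age 18 or older": lambda p: p["age"] >= 18,
    "HER2 positive": lambda p: p["her2_status"] == "positive",
    "no prior chemotherapy": lambda p: not p["prior_chemotherapy"],
}

def eligibility_report(patient, criteria):
    """Evaluate every criterion and report which ones fail."""
    results = {name: check(patient) for name, check in criteria.items()}
    eligible = all(results.values())
    failed = [name for name, ok in results.items() if not ok]
    return eligible, failed

patient = {"age": 54, "her2_status": "positive", "prior_chemotherapy": True}
eligible, failed = eligibility_report(patient, CRITERIA)
print("eligible:", eligible, "| failed criteria:", failed)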

Relevance: 80.00%

Abstract:

Predicting failures in a distributed system from previous events through logistic regression is a standard approach in the literature. This technique is not reliable, however, in two situations: the prediction of rare events, which do not appear in a large enough proportion for the algorithm to capture them, and environments with too many variables, where logistic regression tends to overfit, while manually selecting a subset of variables to build the model is error-prone. In this paper, we solve an industrial research case that presented this situation with a combination of elastic-net logistic regression, a method that allows useful variables to be selected automatically, a process of cross-validation on top of it, and the application of a rare-events prediction technique to reduce computation time. This process provides two layers of cross-validation that automatically obtain the optimal model complexity and the optimal model parameter values, while ensuring that even rare events will be correctly predicted from a small number of training instances. We tested this method against real industrial data, obtaining a total of 60 out of 80 possible models with a 90% average model accuracy.
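A minimal sketch of elastic-net logistic regression with built-in cross-validation and class weighting in scikit-learn follows; it shows generic library usage on synthetic, imbalanced data and is not the paper's pipeline.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced data standing in for event logs (illustrative only).
X, y = make_classification(n_samples=2000, n_features=60, n_informative=8,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

# The elastic-net penalty selects useful variables; the inner cross-validation
# chooses the regularization strength C and the l1/l2 mixing ratio.
model = LogisticRegressionCV(penalty="elasticnet", solver="saga",
                             l1_ratios=[0.1, 0.5, 0.9], Cs=5, cv=5,
                             class_weight="balanced",  # helps with rare events
                             max_iter=5000, n_jobs=-1)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
print("non-zero coefficients:", int(np.count_nonzero(model.coef_)))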

Relevance: 80.00%

Abstract:

Assessing video quality is a complex task. Most pixel-based metrics do not show enough correlation between objective and subjective results, whereas algorithms need to match human perception when analysing the quality of a video sequence. To analyse the perceived quality derived from specific video artifacts in a given region of interest, we present a novel methodology for generating test sequences that allows the impact of each individual distortion to be analysed. From the results obtained in subjective assessment, it is possible to create psychovisual models based on weighting pixels belonging to different regions of interest, distinguished by color, position, motion or content. Interesting results obtained in the subjective assessment demonstrate the need for new metrics adapted to the human visual system.
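As a generic illustration of weighting pixels by region of interest in an objective metric (a weighted-PSNR sketch with hypothetical weights, not the psychovisual models derived from the subjective assessment):

import numpy as np

def weighted_psnr(reference, distorted, weights, max_value=255.0):
    """PSNR where each pixel's squared error is scaled by a perceptual weight
    (e.g. higher weights for faces or moving regions of interest)."""
    err = (reference.astype(float) - distorted.astype(float)) ** 2
    wmse = np.sum(weights * err) / np.sum(weights)
    return 10.0 * np.log10(max_value ** 2 / wmse)

# Hypothetical 64x64 frames: distortion concentrated in the centre region.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
dist = ref.copy()
dist[24:40, 24:40] = np.clip(dist[24:40, 24:40] + 20, 0, 255)

weights = np.ones((64, 64))
weights[24:40, 24:40] = 4.0        # centre treated as a region of interest
print("plain PSNR   :", round(weighted_psnr(ref, dist, np.ones((64, 64))), 2))
print("weighted PSNR:", round(weighted_psnr(ref, dist, weights), 2))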