18 results for Android, Multihoming, LISP, LISPmob, Performance, Test, Development, Analysis

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

A system dedicated to the optical transmittance characterization of Fresnel lenses has been developed at NREL, in collaboration with the UPM. The system quantifies the optical efficiency of the lens by generating a performance map. The shape of the focused spot may also be analyzed to understand changes in lens performance. The primary instrument components (lasers and CCD detector) have been characterized to confirm their capability for performing optical transmittance measurements. Measurements performed on SoG (silicone-on-glass) and PMMA lenses subjected to a variety of indoor conditions (e.g., UV and damp heat) identified differences in the optical efficiency of the evaluated lenses, demonstrating the ability of the Scanning Lens Instrument (SLI) to distinguish between the aged lenses.
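As a rough illustration of how such a performance map can be assembled, the sketch below computes a local optical efficiency for every point of a raster scan and averages it into a single figure of merit. The grid, power levels and noise are invented placeholders and do not reflect the SLI's actual data format or calibration procedure.

```python
import numpy as np

# Illustrative sketch: build a lens optical-efficiency map from a raster scan.
# P_incident and P_focused are hypothetical measurements (W) on a 2D scan grid;
# the real SLI data format and calibration steps are not reproduced here.
nx, ny = 50, 50
x = np.linspace(-100, 100, nx)      # scan positions across the lens aperture (mm)
y = np.linspace(-100, 100, ny)

rng = np.random.default_rng(0)
P_incident = np.full((ny, nx), 5e-3)                                     # laser power hitting the lens (W)
P_focused = P_incident * (0.88 + 0.02 * rng.standard_normal((ny, nx)))   # power reaching the cell aperture (W)

efficiency_map = P_focused / P_incident     # local optical efficiency per scan point
lens_efficiency = efficiency_map.mean()     # single figure of merit for the lens

print(f"mean optical efficiency: {lens_efficiency:.3f}")
print(f"worst local efficiency:  {efficiency_map.min():.3f}")
```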

Relevance:

100.00%

Publisher:

Abstract:

Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting that the approach is sound.
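The sketch below is a purely schematic illustration of the mapping the framework performs: a human-generated observation captured through the HMI is wrapped as an observation-like record before being shared on the Sensor Web. The field names and URN are hypothetical and do not follow the actual W3C MMI event format or the OGC O&M/SWE schemas used in the paper.

```python
from datetime import datetime, timezone

# Purely schematic sketch: how a driver's spoken observation captured by an HMI
# component might be wrapped as an observation-like record before being published
# to a Sensor Web service. Field names are illustrative; they do not follow the
# actual OGC O&M/SWE schemas or the W3C MMI event format used in the paper.
def wrap_hmi_observation(utterance: str, phenomenon: str, lat: float, lon: float) -> dict:
    return {
        "procedure": "urn:example:sensor:human-driver",   # the "sensor" is the driver (hypothetical URN)
        "observedProperty": phenomenon,                    # e.g. "traffic_congestion"
        "featureOfInterest": {"lat": lat, "lon": lon},     # where the observation applies
        "phenomenonTime": datetime.now(timezone.utc).isoformat(),
        "result": utterance,                               # free-text, human-generated result
    }

obs = wrap_hmi_observation("heavy traffic after the next exit", "traffic_congestion", 40.45, -3.73)
print(obs)
```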

Relevance:

100.00%

Publisher:

Abstract:

Matsukawa and Habeck (2007) analyse the main instruments for risk mitigation in infrastructure financing with Multilateral Financial Institutions (MFIs). Their review coincided with the global financial crisis of 2007-08, and is highly relevant in current times considering the sovereign debt crisis, the lack of available capital and the increases in bank regulation in Western economies. The current macroeconomic environment has seen a slowdown in the level of finance for infrastructure projects, as they pose a higher credit risk given their requirements for long term investments. The rationale for this work is to look for innovative solutions that are focused on the credit risk mitigation of infrastructure and energy projects whilst optimizing the economic capital allocation for commercial banks. This objective is achieved through risk-sharing with MFIs and looking for capital relief in project finance transactions. This research answers the main question: "What is the impact of risk-sharing with MFIs on project finance transactions to increase their efficiency and viability?", and is developed from the perspective of a commercial bank, assessing the economic capital used and analysing the relevant variables: Probability of Default, Loss Given Default and Recovery Rates (Altman, 2010). An overview of project finance for the infrastructure and energy sectors in terms of the volume of transactions worldwide is outlined, along with a summary of risk-sharing financing with MFIs. The current regulatory framework underpinning risk-sharing in structured finance with MFIs is also reviewed. From here, the impact of risk-sharing and the diversification effect in infrastructure and energy projects is assessed, from the perspective of economic capital allocation for a commercial bank. CreditMetrics (J. P. Morgan, 1997) is applied to an existing, well-diversified portfolio of project finance infrastructure and energy investments, working with the main risk capital measures: economic capital, RAROC, and EVA. The conclusions of this research show that economic capital allocation on a portfolio of project finance, along with risk-sharing with MFIs, has a substantial impact on capital relief whilst increasing profitability for commercial banks. There is an outstanding diversification effect due to the portfolio, which is combined with risk mitigation and an improvement in recovery rates through Partial Credit Guarantees issued by MFIs. A stress test scenario analysis is applied to the current assumptions and credit risk model, considering a downgrade in the rating of the commercial bank (lender) and an increase of default in emerging countries, showing a direct impact on economic capital through an increase in expected loss and a decrease in profitability. Obtaining capital relief through risk-sharing makes it more viable for commercial banks to finance infrastructure and energy projects, with the beneficial effect of a direct impact of these investments on GDP growth and employment. The main contribution of this work is to promote a strategic economic capital allocation in infrastructure and energy financing through innovative risk-sharing with MFIs and economic pricing to create economic value added for banks, and to allow the financing of more infrastructure and energy projects. This work suggests several topics for further research in relation to the issues analysed.
Matsukawa y Habeck (2007) analizan los principales instrumentos de mitigación de riesgos en las Instituciones Financieras Multilaterales (IFMs) para la financiación de infraestructuras. Su presentación coincidió con el inicio de la crisis financiera en agosto de 2007, y sus consecuencias persisten en la actualidad, destacando la deuda soberana en economías desarrolladas y los problemas de capitalización de los bancos. Este entorno macroeconómico ha ralentizado la financiación de proyectos de infraestructuras. El actual trabajo de investigación tiene su motivación en la búsqueda de soluciones para la financiación de proyectos de infraestructuras y de energía, mitigando los riesgos inherentes, con el objeto de reducir el consumo de capital económico en los bancos financiadores. Este objetivo se alcanza compartiendo el riesgo de la financiación con IFMs, a través de estructuras de risk-sharing. La investigación responde a la pregunta: "¿Cuál es el impacto del risk-sharing con IFMs en la financiación de proyectos para aumentar su eficiencia y viabilidad?". El trabajo se desarrolla desde el enfoque de un banco comercial, estimando el consumo de capital económico en la financiación de proyectos y analizando las principales variables del riesgo de crédito: Probability of Default, Loss Given Default y Recovery Rates (Altman, 2010). La investigación presenta las cifras globales de Project Finance en los sectores de infraestructuras y de energía, y analiza el marco regulatorio internacional en relación al consumo de capital económico en la financiación de proyectos en los que participan IFMs. A continuación, el trabajo modeliza una cartera real, bien diversificada, de Project Finance de infraestructuras y de energía, aplicando la metodología CreditMetrics (J. P. Morgan, 1997). Su objeto es estimar el consumo de capital económico y la rentabilidad de la cartera de proyectos a través del RAROC y EVA. La modelización permite estimar el efecto diversificación y la liberación de capital económico consecuencia del risk-sharing. Los resultados muestran el enorme impacto del efecto diversificación de la cartera, así como de las garantías parciales de las IFMs que mitigan riesgos, mejoran el recovery rate de los proyectos y reducen el consumo de capital económico para el banco comercial, mientras aumentan la rentabilidad, RAROC, y crean valor económico, EVA. En escenarios económicos de inestabilidad, empeoramiento del rating de los bancos, aumentos de default en los proyectos y de correlación en las carteras, hay un impacto directo en el capital económico y en la pérdida de rentabilidad. La liberación de capital económico, como se plantea en la presente investigación, permitirá financiar más proyectos de infraestructuras y de energía, lo que repercutirá en un mayor crecimiento económico y creación de empleo. La principal contribución de este trabajo es promover la gestión activa del capital económico en la financiación de infraestructuras y de proyectos energéticos, a través de estructuras innovadoras de risk-sharing con IFMs y de creación de valor económico en los bancos comerciales, lo que mejoraría su eficiencia y capitalización. La aportación metodológica del trabajo se convierte por su originalidad en una contribución que sugiere y facilita nuevas líneas de investigación académica en las principales variables del riesgo de crédito que afectan al capital económico en la financiación de proyectos.
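To make the quantities discussed above concrete, the following simplified one-factor Monte Carlo sketch computes expected loss, economic capital, RAROC and EVA for a small hypothetical project finance portfolio, with and without an MFI partial credit guarantee modelled as a reduction in LGD. All numerical inputs are invented for illustration and are not taken from the thesis or from CreditMetrics documentation.

```python
import numpy as np
from scipy.stats import norm

# Simplified one-factor Monte Carlo sketch, in the spirit of CreditMetrics, of the
# portfolio measures discussed above: expected loss (EL), economic capital (EC),
# RAROC and EVA, with and without an MFI partial credit guarantee. All inputs
# (exposures, PDs, LGDs, correlation, guarantee coverage, spread, hurdle rate)
# are invented for illustration and are not taken from the thesis.
rng = np.random.default_rng(1)
n_sims, rho = 200_000, 0.20

ead = np.array([100.0, 80.0, 120.0, 60.0, 90.0])    # exposures per project (MEUR)
pd_ = np.array([0.02, 0.03, 0.015, 0.04, 0.025])    # annual default probabilities
lgd = np.array([0.45, 0.50, 0.40, 0.55, 0.45])      # loss given default without guarantee
spread, hurdle = 0.025, 0.10                         # net margin on EAD, cost of capital

def portfolio_metrics(lgd_vec):
    z = rng.standard_normal((n_sims, 1))                     # common systematic factor
    eps = rng.standard_normal((n_sims, len(ead)))            # idiosyncratic factors
    asset = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * eps      # one-factor asset values
    defaults = asset < norm.ppf(pd_)                         # default below the PD threshold
    losses = (defaults * ead * lgd_vec).sum(axis=1)          # portfolio loss per scenario
    el = losses.mean()
    ec = np.quantile(losses, 0.999) - el                     # unexpected loss at 99.9%
    income = spread * ead.sum()
    return el, ec, (income - el) / ec, income - el - hurdle * ec

for label, lgd_vec in [("no guarantee", lgd), ("50% MFI guarantee", lgd * 0.5)]:
    el, ec, raroc, eva = portfolio_metrics(lgd_vec)
    print(f"{label:18s} EL={el:6.2f}  EC={ec:6.2f}  RAROC={raroc:6.1%}  EVA={eva:6.2f}")
```

Even with these made-up inputs, the sketch reproduces the qualitative effect described above: lowering LGD through a partial guarantee shrinks both the expected loss and the 99.9% loss quantile, which is what frees economic capital and lifts RAROC and EVA.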

Relevance:

100.00%

Publisher:

Abstract:

La inmensa mayoría de los flujos de relevancia ingenieril permanecen sin estudiar en el marco de la teoría de estabilidad global. Esto es debido fundamentalmente a dos razones: las dificultades asociadas con el análisis de los flujos turbulentos y los inmensos recursos computacionales requeridos para obtener la solución del problema de autovalores asociado al análisis de inestabilidad de flujos tridimensionales, también conocido como problema TriGlobal. En esta tesis se aborda el problema asociado con la tridimensionalidad. Se ha desarrollado una metodología general para obtener soluciones de problemas de análisis modal de las inestabilidades lineales globales mediante el acoplamiento de métodos de evolución temporal, desarrollados en este trabajo, con códigos de mecánica de fluidos computacional de segundo orden, utilizados de forma general en la industria. Esta metodología consiste en la resolución del problema de autovalores asociado al análisis de inestabilidad mediante métodos de proyección en subespacios de Krylov, con la particularidad de que dichos subespacios son generados por medio de la integración temporal de un vector inicial usando cualquier código de mecánica de fluidos computacional. Se han elegido tres problemas desafiantes en función de la exigencia de recursos computacionales necesarios y de la complejidad física para la demostración de la presente metodología: (i) el flujo en el interior de una cavidad tridimensional impulsada por una de sus tapas, (ii) el flujo alrededor de un cilindro equipado con aletas helicoidales a lo largo de su envergadura y (iii) el flujo a través de una cavidad abierta tridimensional en ausencia de homogeneidades espaciales. Para la validación de la tecnología se ha obtenido la solución del problema TriGlobal asociado al flujo en la cavidad tridimensional, utilizando el método de evolución temporal desarrollado acoplado con los operadores numéricos de flujo incompresible del código CFD OpenFOAM (código libre). Los resultados obtenidos coinciden plenamente con la literatura. La aplicación de esta metodología al estudio de inestabilidades globales de flujos abiertos tridimensionales ha proporcionado, por primera vez, información sobre la transición tridimensional de estos flujos. Además, la metodología ha sido adaptada para resolver problemas adjuntos TriGlobales, permitiendo el control de flujo basado en modificaciones de las inestabilidades globales. Finalmente, se ha demostrado que la cantidad moderada de recursos computacionales requeridos para la solución del problema de valor propio TriGlobal usando este método numérico, junto a su versatilidad al poder acoplarse a cualquier código aerodinámico, permite la realización de análisis de inestabilidad global y control de flujos complejos de relevancia industrial.

Abstract. Most flows of engineering relevance still remain unexplored in a global instability theory context for two reasons. First, because of the difficulties associated with the analysis of turbulent flows and, second, because of the formidable computational resources required for the solution of the eigenvalue problem associated with the instability analysis of three-dimensional base flows, also known as the TriGlobal problem. In this thesis, the problem associated with the three-dimensionality is addressed by means of the development of a general approach to the solution of large-scale global linear instability analysis by coupling a time-stepping approach with second-order aerodynamic codes employed in industry.
Three flows, challenging in terms of the required computational resources and physical complexity, have been chosen to demonstrate the present methodology: (i) the flow inside a wall-bounded three-dimensional lid-driven cavity, (ii) the flow past a cylinder fitted with helical strakes and (iii) the flow over an inhomogeneous three-dimensional open cavity. Results in excellent agreement with the literature have been obtained for the three-dimensional lid-driven cavity by using this methodology coupled with the incompressible solver of the open-source toolbox OpenFOAM®, which has served as validation. Moreover, significant physical insight into the instability of three-dimensional open flows has been gained through the application of the present time-stepping methodology to the other two cases. In addition, modifications to the present approach have been proposed in order to perform adjoint instability analysis of three-dimensional base flows and flow control; validation and TriGlobal examples are presented. Finally, it has been demonstrated that the moderate amount of computational resources required for the solution of the TriGlobal eigenvalue problem using this method enables the performance of instability analysis and control of flows of industrial relevance.
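A toy sketch of the time-stepping idea is given below: the eigenvalue problem is solved with an Arnoldi iteration in which every matrix-vector product is replaced by a call to a time integrator, which is exactly the role the CFD code plays in the methodology above. Here a small random matrix and scipy's matrix exponential stand in for the linearized operator and for OpenFOAM, so the example only illustrates the structure of the algorithm, not the actual solver coupling.

```python
import numpy as np
from scipy.linalg import expm

# Toy illustration of matrix-free, time-stepping instability analysis: Arnoldi is
# run on the propagator B = exp(A*dT), where the action of B on a vector would, in
# practice, be obtained by integrating the linearized equations with the CFD code.
# A small random matrix A stands in for the linearized operator; expm replaces the
# OpenFOAM time integration.
rng = np.random.default_rng(0)
n, m, dT = 200, 30, 0.5
A = -np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
B = expm(A * dT)                                     # propagator over one time horizon

def advance(v):
    """Stand-in for one call to the time-stepping flow solver over dT."""
    return B @ v

# Arnoldi: orthonormal Krylov basis V and upper Hessenberg matrix H.
V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
v0 = rng.standard_normal(n)
V[:, 0] = v0 / np.linalg.norm(v0)
for j in range(m):
    w = advance(V[:, j])
    for i in range(j + 1):                           # modified Gram-Schmidt
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

mu = np.linalg.eigvals(H[:m, :m])                    # Ritz values of the propagator
lam = np.log(mu) / dT                                # approximate eigenvalues of A
print("leading approximate growth rates:", np.sort(lam.real)[-3:])
print("reference (direct eigensolve):   ", np.sort(np.linalg.eigvals(A).real)[-3:])
```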

Relevance:

100.00%

Publisher:

Abstract:

La empresa social es un modelo organizativo que presenta un interesante potencial para resolver problemáticas sociales. La empresa social ha despertado interés tanto en países industrializados como en economías en vías de desarrollo porque representa un modelo dentro del capitalismo que persigue objetivos sociales mediante la realización de actividades de mercado (compra y venta de productos y/o servicios principalmente). A pesar de sus raíces lejanas en el tiempo se trata de un campo de conocimiento relativamente joven, donde la literatura académica presenta escasez de estudios empíricos. El desarrollo teórico para buscar claridad conceptual ha sido el principal caballo de batalla de los últimos años, y por tanto, se ha prestado poca atención a generar evidencias sobre cómo funcionan las empresas sociales y sobre sus claves de su éxito. Se considera que la mejora en la comprensión de este modelo organizativo pasa por la construcción de herramientas para que académicos y practicantes mejoren su conocimiento sobre los mecanismos internos de las empresas sociales. En este contexto nace la presente tesis doctoral sobre empresa social, que tiene por objetivo la creación de un marco de análisis que permita el estudio de las empresas sociales desde una dimensión organizativa, es decir, que aborde los elementos clave que describen el funcionamiento de este tipo de organizaciones. Para ello, en este trabajo se aborda la construcción del modelo para el análisis organizativo de las empresas sociales a partir del análisis semántico de las 45 principales definiciones de empresa social. A partir de este análisis se identifican dos dimensiones de análisis de la empresa social: -Cuatro principios, comunes a todas las manifestaciones del fenómeno, que recogen la esencia del concepto. -Ocho elementos organizativos específicos de la empresa social que describen la forma en la que cada iniciativa se implementa en un contexto determinado. Es decir, elementos de diseño presentes en diferente medida que dan lugar a tipologías de empresa social diferentes. Estos elementos son: la proposición de valor social, la búsqueda de impacto a largo plazo, la cultura organizativa, la conexión con los beneficiarios, el liderazgo emprendedor y los mecanismos de gobernanza, el ecosistema colaborativo, la estrategia empresarial y la orientación a la autosuficiencia económica. A partir de este marco de análisis, se construyen dos herramientas de diagnóstico que permiten su aplicación al estudio de empresas sociales: una tabla de indicadores para el análisis externo (por parte de un investigador ajeno a la organización) y un cuestionario de diagnóstico para el análisis interno (a través del personal de la empresa social objeto de estudio). Las herramientas intentan dar respuesta a la necesidad de desarrollar constructos para el estudio empírico de las empresas sociales. Para analizar la utilidad del modelo y de las herramientas se llevaron a cabo tres estudios de caso: -La empresa social ACCIONA Microenergía Perú que proporciona energía eléctrica a comunidades rurales aisladas en la región peruana de Cajamarca. -La empresa social Integra-e que propone un mecanismo de inserción socio-laboral en Madrid para jóvenes en riesgo de exclusión a través de la formación en Tecnologías de la Información y la Comunicación (TIC). -Un conjunto de redes de telecentros pertenecientes a la red LAC de la fundación Telecentres.org que proporcionan acceso a servicios de información (Internet entre otros) en diferentes países de Latinoamérica. 
La aplicación de las herramientas mostró ser útil en los tres estudios de caso para obtener una relación de evidencias con las que analizar la proximidad de una organización al ideal de empresa social. El ejercicio de análisis también resultó interesante como ejercicio reflexivo para las entidades participantes. Los resultados del cuestionario fueron especialmente interesantes en los telecentros de la Fundación Telecentre.org, ya que, al ser un estudio multicaso, se pudo realizar un rico análisis estadístico sobre el funcionamiento de los telecentros y su desempeño. El estudio permitió identificar relaciones interesantes entre los ocho elementos de diseño del modelo propuesto y el desempeño de la organización. En particular, se detectó que para todos los casos estudiados:
-La dimensión económica es la componente del desempeño que mayores desafíos plantea.
-Existe una alta correlación entre el desempeño y siete de los ocho elementos organizativos del modelo.
-La cultura organizativa es un elemento clave para explicar el desempeño global de la organización y la satisfacción de los empleados.
El campo de la empresa social presenta importantes retos de futuro, como la claridad conceptual, el desarrollo de estudios empíricos y la medida de su impacto social. El conocimiento de las claves organizativas puede ayudar a diseñar empresas sociales más robustas o a que organizaciones con fines sociales que no se basan en mecanismos de mercado consideren la posibilidad de incorporar éstos en su estrategia.

ABSTRACT. Social enterprise is an organizational model with strong potential to help solve social problems. Recently, interest in the model has risen in both industrialized and developing countries because it is organized to achieve altruistic or social goals through market activities (mainly the sale of products and services). Despite its historic roots, it is a relatively young field of research, where the academic literature has little empirical data to accompany the theoretical development of social enterprise. Conceptual clarification has been the main challenge in recent years, and little attention has been given to generating evidence on how social enterprises operate and what their keys to success are. Progress in empirical study involves the construction of tools for researchers, in order to increase understanding of the internal mechanisms of social enterprises. This thesis aims to create a conceptual framework to study social enterprises from an organizational point of view, by analyzing the key elements that explain the operation and organization of this organizational model. The framework for the organizational analysis of social enterprises was built on the semantic analysis of the 45 main definitions of social enterprise. The framework is divided into two dimensions:
-Four principles which capture the essence of the social enterprise concept and are present in all manifestations of the phenomenon.
-Eight design elements which help analyze the characteristics of each particular social enterprise initiative: the social value proposition, social impact orientation, organizational culture, links to beneficiaries, entrepreneurial leadership, collaborative ecosystem, entrepreneurial strategy and orientation to economic self-sufficiency.
Two diagnostic tools were developed to apply the framework to case studies: a scoreboard of indicators (to be used by the researcher during external analysis of the organization) and a questionnaire (to be answered by the social enterprise staff). The dissertation examines three case studies:
-ACCIONA Microenergia Peru, a social enterprise that provides electricity to isolated rural communities in the Peruvian region of Cajamarca.
-Integra-e, a social enterprise located in Madrid that promotes the socio-professional integration of young people through training in ICT.
-A sample of telecenters of the LAC network that provide access to information services (such as the Internet) in Latin America.
Applying the tools proved to be useful in all three cases, because it helped to obtain evidence with which to compare the proximity of an organization to an ideal type of social enterprise. In all the cases studied, economic sustainability proved to be the biggest challenge for the organizations. The application of the questionnaire to the telecenters was especially informative because, as a multi-case study, it allowed a rich statistical analysis of the performance of the telecenters. The study identified relevant relationships between the model elements and organizational performance. A statistical analysis shows a high correlation between performance and seven of the eight organizational elements described in the model. The organizational culture seems to be an important factor in explaining the overall organizational performance and employee satisfaction. The field of social enterprise faces significant future challenges, such as conceptual clarity, the development of empirical studies and the assessment of social impact. A deep understanding of the key organizational aspects of social enterprises can help in the design of more robust organizations and bring success to social-purpose organizations.
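As an illustration of the kind of multi-case analysis reported above, the sketch below scores a set of organizations on the eight design elements and correlates each element with a performance index. The data are random placeholders, not results from the telecentre study.

```python
import numpy as np
import pandas as pd

# Illustrative sketch of the multi-case analysis described above: score each
# organization on the eight design elements (questionnaire averages, 1-5) and
# correlate those scores with a performance measure. The data are random
# placeholders, not results from the telecentre study.
rng = np.random.default_rng(2)
elements = ["value_proposition", "impact_orientation", "culture", "beneficiary_links",
            "leadership_governance", "ecosystem", "strategy", "self_sufficiency"]
n_orgs = 25
scores = pd.DataFrame(rng.uniform(1, 5, size=(n_orgs, len(elements))), columns=elements)
performance = scores.mean(axis=1) + rng.normal(0, 0.3, n_orgs)   # fabricated performance index

corr = scores.apply(lambda col: col.corr(performance))           # Pearson r per design element
print(corr.sort_values(ascending=False).round(2))
```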

Relevance:

100.00%

Publisher:

Abstract:

Este proyecto describe la metodología a seguir para conectar la plataforma Arduino a dispositivos Android y establecer una conexión que permita controlar dicha plataforma. Sobre Arduino se acoplará un módulo 3G que permitirá hacer uso de funcionalidades propias de los teléfonos móviles. El objetivo final del proyecto era el control del módulo 3G mediante comandos AT enviados desde un dispositivo Android (tableta) conectado a través de USB. Para ello, se ha desarrollado una aplicación de demostración que permite el uso de algunas de las funcionalidades de comunicación del módulo 3G. Para alcanzar el objetivo propuesto se ha investigado sobre temas tales como: internet de las cosas, las tecnologías de comunicaciones móviles, el sistema operativo Android y el desarrollo de aplicaciones móviles, la plataforma Arduino, el funcionamiento del módulo 3G y la comunicación serie entre Android y el módulo 3G. El proyecto proporciona una guía de iniciación con explicaciones de los diferentes dispositivos, tecnologías y pasos a seguir para la integración de las diferentes plataformas que se han usado en el proyecto: Arduino, el módulo de comunicaciones 3G y Android.

ABSTRACT. This project describes the methodology for connecting the Arduino platform to Android devices and establishing a connection that allows control of the platform. A 3G module is attached to the Arduino, allowing the use of mobile phone functionalities. The main objective of the project was the control of the 3G module through AT commands sent from an Android device (tablet) connected via USB. For that purpose, a demonstration application was developed that exercises some of the communication features of the 3G module. To achieve this goal, topics such as the Internet of Things, mobile communications technologies, the Android operating system and mobile application development, the Arduino platform, the operation of the 3G module, and the serial communication between Android and the 3G module have been investigated. The project provides a starter guide with explanations of the different devices and technologies, and of the steps to follow to integrate the platforms used in the project: Arduino, the 3G communications module and Android.
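The AT-command exchange that the demonstration application drives over USB can be illustrated with a few lines of pyserial on a development PC, as sketched below. The serial port name, baud rate and phone number are placeholders for whatever the 3G module actually exposes; the commands themselves (AT, AT+CMGF, AT+CMGS) are standard modem commands.

```python
import time
import serial   # pyserial

# Illustration (run on a PC with pyserial) of the AT-command exchange that the
# Android application drives over USB in the project above. The serial port name,
# baud rate and phone number are placeholders for whatever the 3G module exposes.
PORT, BAUD = "/dev/ttyUSB0", 115200

def send_at(ser, command, wait=1.0):
    """Send one AT command and return the raw response from the module."""
    ser.write((command + "\r").encode())
    time.sleep(wait)
    return ser.read(ser.in_waiting or 1).decode(errors="replace")

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    print(send_at(ser, "AT"))                        # basic attention check, expect "OK"
    print(send_at(ser, "AT+CMGF=1"))                 # select SMS text mode
    print(send_at(ser, 'AT+CMGS="+34600000000"'))    # start an SMS to a placeholder number
    ser.write(b"Test message from the demo application")
    ser.write(bytes([0x1A]))                         # Ctrl-Z terminates and sends the SMS
    time.sleep(3)
    print(ser.read(ser.in_waiting or 1).decode(errors="replace"))
```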

Relevance:

100.00%

Publisher:

Abstract:

Abstract. Receptive fields of retinal and other sensory neurons show a large variety of spatiotemporal linear and non-linear types of responses to local stimuli. In visual neurons, these responses present either asymmetric sensitive zones or center-surround organization. In most cases, the nature of the responses suggests the existence of a kind of distributed computation prior to integration by the final cell, which is evidently supported by the anatomy. We describe a new kind of discrete and continuous filter to model the computations taking place in the receptive fields of retinal cells. To show their performance in the analysis of different non-trivial neuron-like structures, we use a computer tool specifically programmed by the authors to that effect. This tool is also extended to study the effect of lesions on the overall performance of our model nets.
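The specific discrete and continuous filters proposed by the authors are not reproduced here; as a generic stand-in for that kind of receptive-field computation, the sketch below builds a standard center-surround (difference-of-Gaussians) kernel and evaluates its linear response to a centered spot and to surround-only illumination.

```python
import numpy as np

# A standard center-surround (difference-of-Gaussians) receptive-field kernel, used
# here as a generic stand-in for the distributed filters discussed above; it is not
# the specific formulation proposed by the authors.
def dog_kernel(size=21, sigma_c=1.5, sigma_s=4.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround                  # ON-center, OFF-surround organization

def respond(stimulus, kernel):
    """Linear response of one model cell: correlate the kernel with a stimulus patch."""
    return float(np.sum(stimulus * kernel))

k = dog_kernel()
spot = np.zeros_like(k); spot[8:13, 8:13] = 1.0     # small bright spot on the center
annulus = np.ones_like(k) - spot                    # light on the surround only
print("response to centered spot:", round(respond(spot, k), 3))
print("response to surround light:", round(respond(annulus, k), 3))
```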

Relevance:

70.00%

Publisher:

Abstract:

With the ever-growing adoption of smartphones and tablets, Android is becoming more popular every day. With more than one billion active users to date, Android is the leading technology in the smartphone arena. In addition, Android also runs on Android TV, Android smartwatches and cars. Therefore, in recent years, Android applications have become one of the major development sectors in the software industry. As of mid-2013, the number of published applications on Google Play had exceeded one million and the cumulative number of downloads was more than 50 billion. A 2013 survey also revealed that 71% of mobile application developers work on developing Android applications. Given this volume of Android applications, it is evident that people rely on these applications on a daily basis, from simple tasks like keeping track of the weather to rather complex tasks like managing one's bank accounts. Hence, like every other kind of code, Android code also needs to be verified in order to work properly and achieve a certain confidence level. Because of the sheer number of applications, it becomes very hard to test Android applications manually, especially when they have to be verified for various versions of the OS and also for various device configurations, such as different screen sizes and different hardware availability. Hence, there has recently been a lot of work in the computer science community on developing different testing methods for Android applications. Android attracts researchers because of its open-source nature: the whole research process is more streamlined when the code for both the application and the platform is readily available to analyze. Hence, there has been a great deal of research in testing and static analysis of Android applications, much of it focused on test input generation for Android applications. As a result, several testing tools are now available which focus on the automatic generation of test cases for Android applications. These tools differ from one another in the strategies and heuristics they use to generate test cases. But there is still very little work on the comparison of these testing tools and the strategies they use. Recently, some research work has been carried out in this regard, comparing the performance of various available tools with respect to their code coverage, fault detection, ability to work on multiple platforms and ease of use. It was done by running these tools on a total of 60 real-world Android applications. The results of this research showed that, although effective, the strategies used by the tools also face limitations and hence have room for improvement. The purpose of this thesis is to extend this research in a more specific, attribute-oriented way. Attributes refer to the tasks that can be completed using the Android platform; they can be anything ranging from a basic system call for receiving an SMS to more complex tasks like sending the user to another application from the current one. The idea is to develop a benchmark for Android testing tools based on their performance with respect to these attributes. This allows the comparison of these tools attribute by attribute. For example, if there is an application that plays some audio file, will the testing tool be able to generate a test input that triggers the playback of this audio file?
By using multiple applications, each built around a different attribute, it becomes visible which testing tool is more useful for which kind of attribute. In this thesis, 9 attributes covering the basic nature of tasks were targeted for the assessment of three testing tools. Later, this can be done for many more attributes to compare even more testing tools. The aim of this work is to show that this approach is effective and can be used on a much larger scale. One of the flagship features of this work, which also differentiates it from previous work, is that the applications used are all specially made for this research. The reason for doing that is to analyze just the specific attribute each application focuses on, in isolation, and not allow the tool to get bottlenecked by something trivial which is not the main attribute under test. This means 9 applications, each focused on one specific attribute. The main contributions of this thesis are:
• A summary of the three existing testing tools and their respective techniques for automatic test input generation for Android applications.
• A detailed study of the usage of these testing tools on the 9 applications specially designed and developed for this study.
• The analysis of the results of the study carried out, and a comparison of the performance of the selected tools.
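A minimal sketch of the attribute-oriented check described above is shown below: run an input-generation tool against one of the purpose-built benchmark applications and inspect the device log to see whether the targeted attribute was ever exercised. The stock monkey fuzzer stands in for the three tools compared in the thesis, and the package name and log tag are hypothetical placeholders for what each benchmark app would emit.

```python
import subprocess

# Sketch of an attribute-oriented check: run an input-generation tool against one of
# the purpose-built benchmark apps and see whether the targeted attribute was ever
# exercised. Here the stock "monkey" fuzzer stands in for the tools compared in the
# thesis; the package name and the log tag emitted by the app are placeholders.
PACKAGE = "com.example.attr.audio"     # hypothetical benchmark app (plays an audio file)
MARKER = "ATTR_AUDIO_PLAYED"           # hypothetical log tag written when the attribute fires

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

run(["adb", "logcat", "-c"])                                   # clear the log buffer
run(["adb", "shell", "monkey", "-p", PACKAGE, "-v", "500"])    # 500 pseudo-random input events
log = run(["adb", "logcat", "-d", "-s", MARKER])               # dump only our marker tag

exercised = MARKER in log
print(f"attribute exercised by generated inputs: {exercised}")
```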

Relevance:

60.00%

Publisher:

Abstract:

Un caloducto en bucle cerrado o Loop Heat Pipe (LHP) es un dispositivo de transferencia de calor cuyo principio de operación se basa en la evaporación/condensación de un fluido de trabajo, que es bombeado a través de un circuito cerrado gracias a fuerzas de capilaridad. Gracias a su flexibilidad, su baja masa y su mínimo (incluso nulo) consumo de potencia, su principal aplicación ha sido identificada como parte del subsistema de control térmico de vehículos espaciales. En el presente trabajo se ha desarrollado un LHP capaz de funcionar eficientemente a temperaturas de hasta 125 °C, siguiendo la actual tendencia de los equipos a bordo de satélites de incrementar su temperatura de operación. En la selección del diseño óptimo para dicho LHP, la compatibilidad entre materiales y fluido de trabajo se identificó como uno de los puntos clave. Para seleccionar la mejor combinación, se llevó a cabo una exhaustiva revisión del estado del arte, además de un estudio específico que incluía el desarrollo de un banco de ensayos de compatibilidad. Como conclusión, la combinación seleccionada como la candidata idónea para ser integrada en el LHP capaz de operar hasta 125 °C fue un evaporador de acero inoxidable, líneas de titanio y amoniaco como fluido de trabajo. En esa línea se diseñó y fabricó un prototipo para ensayos y se desarrolló un modelo de simulación con EcosimPro para evaluar sus prestaciones. Se concluyó que el diseño era adecuado para el rango de operación definido. La incompatibilidad entre el fluido de trabajo y los materiales del LHP está ligada a la generación de gases no condensables. Para un estudio más detallado de los efectos de dichos gases en el funcionamiento del LHP se analizó su comportamiento con diferentes cantidades de nitrógeno inyectadas en su cámara de compensación, simulando un gas no condensable formado en el interior del dispositivo. El estudio se basó en el análisis de las temperaturas medidas experimentalmente a distintos niveles de potencia y temperatura de sumidero o fuente fría. Adicionalmente, dichos resultados se compararon con las predicciones obtenidas por medio del modelo en EcosimPro. Las principales conclusiones obtenidas fueron dos. La primera indica que una cantidad de gas no condensable más de dos veces mayor que la cantidad generada al final de la vida de un satélite típico de telecomunicaciones (15 años) tiene efectos casi despreciables en el funcionamiento del LHP. La segunda es que el principal efecto del gas no condensable es una disminución de la conductancia térmica, especialmente a bajas potencias y temperaturas de sumidero. El efecto es más significativo cuanto mayor es la cantidad de gas añadida. Asimismo, durante la campaña de ensayos se observó un fenómeno no esperado para grandes cantidades de gas no condensable. Dicho fenómeno consiste en un comportamiento oscilatorio, detectado tanto en los ensayos como en la simulación. Este efecto es susceptible de una investigación más profunda y los resultados obtenidos pueden constituir la base para dicha tarea.

ABSTRACT. Loop Heat Pipes (LHPs) are heat transfer devices whose operating principle is based on the evaporation/condensation of a working fluid, and which use capillary pumping forces to ensure the fluid circulation. Thanks to their flexibility, low mass and minimum (even null) power consumption, their main application has been identified as part of the thermal control subsystem in spacecraft.
In the present work, an LHP able to operate efficiently up to 125 °C has been developed, in line with the current tendency of satellite on-board equipment to increase its operating temperature. In selecting the optimal LHP design for the elevated temperature application, the compatibility between the materials and the working fluid has been identified as one of the main drivers. An extensive literature review and a dedicated trade-off were performed in order to select the optimal combination of fluids and materials for the LHP. The trade-off included the development of a dedicated compatibility test stand. In conclusion, the combination of a stainless steel evaporator, titanium piping and ammonia as the working fluid was selected as the best candidate to operate up to 125 °C. An LHP prototype was designed and manufactured, and a simulation model in EcosimPro was developed to evaluate its performance. The first conclusion was that the defined LHP was suitable for the defined operational range. Incompatibility between the working fluid and the LHP materials is linked to Non Condensable Gas (NCG) generation. Therefore, the behaviour of the developed LHP with different amounts of nitrogen injected in its compensation chamber to simulate NCG generation was analyzed. The LHP performance was studied by analysis of the test results at different temperatures and power levels. The test results were also compared to simulations in EcosimPro. Two additional conclusions can be drawn: (i) the effect of an amount of NCG more than two times that expected at the end of life of a typical telecommunications satellite (15 years) is almost negligible on the LHP operation, and (ii) the main effect of the NCG is a decrease in the LHP thermal conductance, especially at low temperatures and low power levels. This decrease is more significant with the progressive addition of NCG. An unexpected phenomenon was observed in the LHP operation with large NCG amounts: an oscillatory behaviour, which was observed both in the tests and in the simulation. This effect provides the basis for further studies concerning oscillations in LHPs.
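The thermal conductance whose degradation is reported above is obtained directly from the test data as the ratio of applied power to the evaporator-to-sink temperature difference; a minimal sketch with invented numbers is given below.

```python
# Minimal sketch of how the LHP thermal conductance discussed above is derived from
# test data: G = Q / (T_evaporator - T_sink) at each power step. The temperatures
# below are invented numbers, not results from the test campaign.
power_w  = [10,   25,   50,   100,  150]     # applied heat load (W)
t_evap_c = [62.1, 64.0, 66.8, 71.5, 76.0]    # evaporator temperature (deg C), hypothetical
t_sink_c = [40.0, 40.0, 40.0, 40.0, 40.0]    # sink temperature (deg C)

for q, te, ts in zip(power_w, t_evap_c, t_sink_c):
    g = q / (te - ts)                        # thermal conductance (W/K)
    print(f"Q = {q:4d} W -> G = {g:5.2f} W/K")
```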

Relevance:

50.00%

Publisher:

Abstract:

This thesis contributes to the analysis and design of printed reflectarray antennas. The main part of the work is focused on the analysis of dual offset antennas comprising two reflectarray surfaces, one of which acts as sub-reflector and the other as main reflector. These configurations introduce additional complexity in several respects with respect to conventional dual offset reflectors; however, they present many degrees of freedom that can be used to improve the electrical performance of the antenna. The thesis is organized in four parts: the development of an analysis technique for dual-reflectarray antennas, a preliminary validation of such methodology using equivalent reflector systems as reference antennas, a more rigorous validation of the software tool by manufacturing and testing a dual-reflectarray antenna demonstrator, and the practical design of dual-reflectarray systems for some applications that show the potential of this kind of configuration to scan the beam and to generate contoured beams. In the first part, a general tool has been implemented to analyze high gain antennas which are constructed of two flat reflectarray structures. The classic reflectarray analysis based on MoM under the local periodicity assumption is used for both sub- and main reflectarrays, taking into account the incident angle on each reflectarray element. The incident field on the main reflectarray is computed taking into account the field radiated by all the elements on the sub-reflectarray. Two approaches have been developed: one which employs a simple approximation to reduce the computer run time, and another which does not, but offers, in many cases, improved accuracy. The approximation is based on computing the reflected field on each element on the main reflectarray only once for all the fields radiated by the sub-reflectarray elements, assuming that the response will be the same because the only difference is a small variation in the angle of incidence. This approximation is very accurate when the reflectarray elements on the main reflectarray show a relatively small sensitivity to the angle of incidence. An extension of the analysis technique has been implemented to study dual-reflectarray antennas comprising a main reflectarray printed on a parabolic surface, or, in general, on a curved surface. In many applications of dual-reflectarray configurations, the reflectarray elements are in the near field of the feed-horn. To consider the near field radiated by the horn, the incident field on each reflectarray element is computed using a spherical mode expansion. In this region, the angles of incidence are moderately wide, and they are considered in the analysis of the reflectarray to better calculate the actual incident field on the sub-reflectarray elements. This technique increases the accuracy of the prediction of co- and cross-polar patterns and antenna gain with respect to the case of using ideal feed models. In the second part, as a preliminary validation, the proposed analysis method has been used to design a dual-reflectarray antenna that emulates previous dual-reflector antennas in Ku- and W-bands including a reflectarray as sub-reflector. The results for the dual-reflectarray antenna compare very well with those of the parabolic reflector and reflectarray sub-reflector; radiation patterns, antenna gain and efficiency are practically the same when the main parabolic reflector is substituted by a flat reflectarray.
The results show that the gain is only reduced by a few tenths of a dB as a result of the ohmic losses in the reflectarray. The phase adjustment on two surfaces provided by the dual-reflectarray configuration can be used to improve the antenna performance in some applications requiring multiple beams, beam scanning or shaped beams. Third, a very challenging dual-reflectarray antenna demonstrator has been designed, manufactured and tested for a more rigorous validation of the analysis technique presented. In the proposed antenna configuration, the feed, the sub-reflectarray and the main reflectarray are in the near field of one another, so that the conventional far-field approximations are not suitable for the analysis of such an antenna. This geometry is used as a benchmark for the proposed analysis tool under very stringent conditions. Some aspects of the proposed analysis technique that allow improving the accuracy of the analysis are also discussed. These improvements include a novel method to reduce the inherent cross-polarization which is introduced mainly by grounded patch arrays. It has been checked that cross-polarization in offset reflectarrays can be significantly reduced by properly adjusting the patch dimensions in the reflectarray in order to produce an overall cancellation of the cross-polarization. The dimensions of the patches are adjusted not only to provide the required phase distribution to shape the beam, but also to exploit the zero crossings of the cross-polarization components. The last part of the thesis deals with direct applications of the technique described. The technique presented is directly applicable to the design of contoured-beam antennas for DBS applications, where the requirements on cross-polarization are very stringent. The beam shaping is achieved by synthesizing the phase distribution on the main reflectarray while the sub-reflectarray emulates an equivalent hyperbolic sub-reflector. Dual-reflectarray antennas also present the ability to scan the beam over small angles about boresight. Two possible architectures for a Ku-band antenna are also described, based on a dual planar reflectarray configuration that provides electronic beam scanning in a limited angular range. In the first architecture, the beam scanning is achieved by introducing a phase control in the elements of the sub-reflectarray, while the main reflectarray is passive. A second alternative is also studied, in which the beam scanning is produced using 1-bit control on the main reflectarray, while a passive sub-reflectarray is designed to provide a large focal distance within a compact configuration. The system aims to develop a solution for bi-directional satellite links for emergency communications. In both proposed architectures, the objective is to provide compact optics and simplicity to be folded and deployed.
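The sketch below is a strongly simplified, scalar version of the bookkeeping step described in the analysis: the field incident on each main-reflectarray element is the sum of spherical-wave contributions re-radiated by all sub-reflectarray elements. The element response, which in the thesis comes from MoM under the local periodicity assumption and depends on the incidence angle, is replaced here by a single constant reflection coefficient, and the geometry and frequency are arbitrary placeholders.

```python
import numpy as np

# Strongly simplified scalar sketch of one step of the dual-reflectarray analysis:
# the incident field on each main-reflectarray element is the sum of spherical-wave
# contributions from all sub-reflectarray elements. The element reflection response,
# which in the thesis comes from MoM under local periodicity and depends on the
# incidence angle, is replaced here by a single constant coefficient.
c0 = 3e8
f = 12e9                                  # illustrative Ku-band frequency (Hz)
k = 2 * np.pi * f / c0

# Hypothetical element grids (metres): small flat sub- and main reflectarrays.
sub = np.array([[x, y, 0.0] for x in np.linspace(-0.05, 0.05, 6)
                            for y in np.linspace(-0.05, 0.05, 6)])
main = np.array([[x, y, 0.30] for x in np.linspace(-0.15, 0.15, 11)
                              for y in np.linspace(-0.15, 0.15, 11)])

feed = np.array([0.0, 0.0, -0.15])
gamma = 0.9 * np.exp(1j * 0.0)            # stand-in element reflection coefficient

# Field radiated by the feed onto each sub-reflectarray element (scalar spherical wave).
r_fs = np.linalg.norm(sub - feed, axis=1)
e_sub = np.exp(-1j * k * r_fs) / r_fs

# Re-radiation from every sub element to every main element, summed per main element.
r_sm = np.linalg.norm(main[:, None, :] - sub[None, :, :], axis=2)   # (n_main, n_sub)
e_main = (gamma * e_sub[None, :] * np.exp(-1j * k * r_sm) / r_sm).sum(axis=1)

print("incident field magnitude on first 5 main-reflectarray elements:")
print(np.abs(e_main[:5]).round(4))
```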

Relevance:

50.00%

Publisher:

Abstract:

This Doctoral Thesis, entitled Contribution to the analysis, design and assessment of compact antenna test ranges at millimeter wavelengths, aims to deepen the knowledge of a particular antenna measurement system: the compact range, operating in the frequency bands of millimeter wavelengths. The thesis has been developed at the Radiation Group (GR), an antenna laboratory which belongs to the Signals, Systems and Radiocommunications department (SSR) of the Technical University of Madrid (UPM). The Radiation Group has extensive experience in antenna measurements, running at present four facilities which operate in different configurations: Gregorian compact antenna test range, spherical near field, planar near field and semi-anechoic arch system. The research work performed in line with this thesis contributes to the knowledge of the first measurement configuration at higher frequencies, beyond the microwave region where the Radiation Group offers customer-level performance. To reach this high-level purpose, a set of scientific tasks was carried out sequentially. These are succinctly described in the subsequent paragraphs. A first step dealt with the review of the state of the art. The study of the scientific literature covered the analysis of measurement practices in compact antenna test ranges together with the particularities of millimeter wavelength technologies. The joint study of both fields of knowledge converged, when these measurement facilities are of interest, on a series of technological challenges which become serious bottlenecks at different stages: analysis, design and assessment. Secondly, after the overview study, focus was set on electromagnetic analysis algorithms. These formulations allow approaching certain electromagnetic features of interest, such as field distribution phase or stray signal analysis of particular structures when they interact with electromagnetic wave sources. Properly operated, a CATR facility features electromagnetic wave collimation optics which are large in terms of wavelengths. Accordingly, the electromagnetic analysis tasks introduce a large number of mathematical unknowns which grow with frequency, following different polynomial-order laws depending on the algorithm used. In particular, the optics configuration of interest consisted of the reflection-type serrated-edge collimator. The analysis of these devices requires a flexible handling of almost arbitrary scattering geometries, this flexibility being the nucleus of the algorithm's ability to perform the subsequent design tasks. This thesis' contribution to this field of knowledge consisted of reaching a formulation which was both powerful in dealing with various analysis geometries and computationally efficient. Two algorithms were developed. While based on the same hybridization principle, they reached different orders of physical accuracy at the cost of computational efficiency. An inter-comparison of their CATR design capabilities was performed, reaching both qualitative and quantitative conclusions on their scope. In third place, interest was shifted from analysis and design tasks towards range assessment. Millimeter wavelengths imply strict mechanical tolerances and fine setup adjustment. In addition, the large number of unknowns already faced in the analysis stage appears as well in the in-chamber field probing stage.
The natural decrease of the dynamic range available from semiconductor millimeter-wave sources additionally requires larger integration times at each probing point. These peculiarities increase exponentially the difficulty of performing assessment processes in CATR facilities beyond microwaves. The bottleneck becomes so tight that it compromises the range characterization beyond a certain limit frequency, which typically lies in the lowest segment of millimeter wavelength frequencies. The value of range assessment, on the contrary, moves towards the highest segment. This thesis contributes to this technological scenario by developing quiet-zone probing techniques which achieve substantial data reduction ratios. Collaterally, they increase the robustness of the results against noise, which amounts to a virtual increase of the setup's available dynamic range. In fourth place, the issue of the environmental sensitivity of millimeter wavelengths was approached. The drift of electromagnetic experiments due to the dependence of the results on the surrounding environment is well known. This feature relegates many industrial practices of microwave frequencies to the experimental stage at millimeter wavelengths. In particular, the evolution of the atmosphere within acceptable conditioning bounds results in drift phenomena which completely mask the experimental results. The contribution of this thesis in this respect consists of electrically modeling the indoor atmosphere existing in a CATR as a function of the environmental variables which affect the range's performance. A simple model was developed, able to handle high-level phenomena, such as feed-probe phase drift, as a function of low-level magnitudes that are easy to sample: relative humidity and temperature. With this model, environmental compensation can be performed and chamber conditioning is automatically extended towards higher frequencies. Therefore, the purpose of this thesis is to go further into the knowledge of millimeter wavelengths involving compact antenna test ranges. This knowledge is presented through the sequential stages of a CATR conception, from early low-level electromagnetic analysis towards the assessment of an operative facility, stages for each of which bottleneck phenomena nowadays exist and seriously compromise antenna measurement practices at millimeter wavelengths.
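The environmental compensation idea can be illustrated with a simple regression sketch: fit the feed-probe phase drift as a linear function of temperature and relative humidity, then subtract the predicted drift. The samples below are synthetic, and the linear fit is only a stand-in for the electrical model of the chamber atmosphere developed in the thesis.

```python
import numpy as np

# Illustrative sketch of the environmental compensation idea: fit a simple linear
# model of feed-probe phase drift as a function of temperature and relative humidity,
# then subtract the predicted drift. The samples are synthetic; the thesis' electrical
# model of the chamber air is not reproduced here.
rng = np.random.default_rng(3)
n = 500
temp_c = 21 + 0.8 * rng.standard_normal(n)          # chamber temperature samples (deg C)
rh_pct = 45 + 5.0 * rng.standard_normal(n)          # relative humidity samples (%)
phase_deg = 2.0 * (temp_c - 21) + 0.4 * (rh_pct - 45) + 0.5 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), temp_c, rh_pct])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, phase_deg, rcond=None)
compensated = phase_deg - X @ coef                   # residual drift after compensation

print("fitted coefficients (offset, deg/degC, deg/%RH):", coef.round(3))
print(f"phase std before: {phase_deg.std():.2f} deg, after: {compensated.std():.2f} deg")
```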

Relevance:

50.00%

Publisher:

Abstract:

This paper describes a case study in WCET analysis of an on-board spacecraft software system. The attitude control system of UPMSat-2, an experimental micro-satellite which is scheduled to be launched in 2013, is used for an experiment on analysing the worst-case execution time of code automatically generated from a Simulink model. In order to properly test the code, a hardware-in-the-loop configuration with a simulation model of the spacecraft environment has been used as a test bench. The code has been analysed with RapiTime, with some modifications to the original instrumentation routines in order to take into account the particularities of the test configuration. Results from the experiment are described and commented on in the paper.
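The measurement side of such an experiment can be pictured with the generic sketch below, which compares the observed high-water mark of execution-time samples against the bound delivered by the analysis; it is not RapiTime, and the timings are placeholders.

```python
# Generic sketch (not RapiTime) of the measurement side of a WCET experiment: compare
# the observed high-water mark of execution-time samples, collected from instrumented
# hardware-in-the-loop runs, against the bound reported by the analysis. The numbers
# are placeholders.
samples_us = [412, 430, 398, 455, 441, 467, 420, 433]   # measured execution times (microseconds)
analyzed_wcet_us = 520                                   # bound from the WCET analysis (hypothetical)

hwm = max(samples_us)
margin = (analyzed_wcet_us - hwm) / analyzed_wcet_us
print(f"high-water mark: {hwm} us, analyzed WCET: {analyzed_wcet_us} us, margin: {margin:.1%}")
assert hwm <= analyzed_wcet_us, "observed time exceeds the analyzed bound"
```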

Relevance:

50.00%

Publisher:

Abstract:

La hipótesis de esta tesis es: "La optimización de la ventana considerando simultáneamente aspectos energéticos y aspectos relativos a la calidad ambiental interior (confort higrotérmico, lumínico y acústico) es compatible, siempre que se conozcan y consideren las sinergias existentes entre ellos desde las primeras fases de diseño". En la actualidad se desconocen las implicaciones de muchas de las decisiones tomadas en torno a la ventana; para que su eficiencia en relación a todos los aspectos mencionados pueda hacerse efectiva es necesaria una herramienta que aporte más información de la actualmente disponible en el proceso de diseño, permitiendo así la optimización integral, en función de las circunstancias específicas de cada proyecto. En la fase inicial de esta investigación se realiza un primer acercamiento al tema, a través del estado del arte de la ventana; analizando la normativa existente, los componentes, las prestaciones, los elementos experimentales y la investigación. Se observa que, en ocasiones, altos requisitos de eficiencia energética pueden suponer una disminución de las prestaciones del sistema en relación con la calidad ambiental interior, por lo que surge el interés por integrar al análisis energético aspectos relativos a la calidad ambiental interior, como son las prestaciones lumínicas y acústicas y la renovación de aire. En este punto se detecta la necesidad de realizar un estudio integral que incorpore los distintos aspectos y evaluar las sinergias que se dan entre las distintas prestaciones que cumple la ventana. Además, del análisis de las soluciones innovadoras y experimentales se observa la dificultad de determinar en qué medida dichas soluciones son eficientes, ya que son soluciones complejas, no caracterizadas y que no están incorporadas en las metodologías de cálculo o en las bases de datos de los programas de simulación. Por lo tanto, se plantea una segunda necesidad, generar una metodología experimental para llevar a cabo la caracterización y el análisis de la eficiencia de sistemas innovadores. Para abordar esta doble necesidad se plantea la optimización mediante una evaluación del elemento acristalado que integre la eficiencia energética y la calidad ambiental interior, combinando la investigación teórica y la investigación experimental. En el ámbito teórico, se realizan simulaciones, cálculos y recopilación de información de distintas tipologías de hueco, en relación con cada prestación de forma independiente (acústica, iluminación, ventilación). A pesar de haber partido con un enfoque integrador, resulta difícil esa integración detectándose una carencia de herramientas disponible. En el ámbito experimental se desarrolla una metodología para la evaluación del rendimiento y de aspectos ambientales de aplicación a elementos innovadores de difícil valoración mediante la metodología teórica. Esta evaluación consiste en el análisis comparativo experimental entre el elemento innovador y un elemento estándar; para llevar a cabo este análisis se han diseñado dos espacios iguales, que denominamos módulos de experimentación, en los que se han incorporado los dos sistemas; estos espacios se han monitorizado, obteniéndose datos de consumo, temperatura, iluminancia y humedad relativa. Se ha realizado una medición durante un periodo de nueve meses y se han analizado y comparado los resultados, obteniendo así el comportamiento real del sistema. 
Tras el análisis teórico y el experimental, y como consecuencia de esa necesidad de integrar el conocimiento existente, se propone una herramienta de evaluación integral del elemento acristalado. El desarrollo de esta herramienta se realiza en base al procedimiento de diagnóstico de calidad ambiental interior (CAI) de acuerdo con la norma UNE 171330 “Calidad ambiental en interiores”, incorporando el factor de eficiencia energética. De la primera parte del proceso, la parte teórica y el estado del arte, se obtendrán los parámetros que son determinantes y los valores de referencia de dichos parámetros. En base a los parámetros relevantes obtenidos se da forma a la herramienta, que consiste en un indicador de producto para ventanas que integra todos los factores analizados y que se desarrolla según la Norma UNE 21929 “Sostenibilidad en construcción de edificios. Indicadores de sostenibilidad”.

ABSTRACT. The hypothesis of this thesis is: "The optimization of windows considering energy and indoor environmental quality issues simultaneously (hygrothermal comfort, lighting comfort, and acoustic comfort) is compatible, provided that the synergies between these issues are known and considered from the early stages of design". The implications of many of the decisions made about the window are currently unclear. So that savings can be made, an effective tool is needed that provides more information during the design process than is currently available, thus enabling optimization of the system according to the specific circumstances of each project. The initial phase deals with the study from an energy efficiency point of view, performing a qualitative and quantitative analysis of commercial, innovative and experimental windows. It is observed that, sometimes, high energy-efficiency requirements may mean a reduction in the system's performance in relation to user comfort and health, which is why there is an interest in performing an integrated analysis of indoor environmental aspects and energy efficiency. At this point, the need for a comprehensive study incorporating the different aspects is detected, in order to evaluate the synergies that exist between the various requirements that the window meets. Moreover, from the analysis of experimental and innovative windows, a difficulty in establishing to what extent these solutions are efficient is observed; therefore, there is a need to generate a methodology for analyzing the efficiency of these systems. Hence, a second need arises: to generate an experimental methodology to characterize and analyze the efficiency of innovative systems. To address this dual need, the optimization of windows through an integrated evaluation is proposed, considering energy efficiency and indoor environmental quality and combining theoretical and experimental research. In the theoretical field, simulations and calculations are performed, and information about the different aspects of the indoor environment (acoustics, lighting, ventilation) is gathered independently. Despite having started with an integrative approach, this integration proves difficult, revealing a lack of available tools. In the experimental field, a methodology for evaluating energy efficiency and indoor environmental quality is developed, to be applied to innovative elements which are difficult to evaluate using a theoretical methodology. This evaluation is an experimental comparative analysis between an innovative element and a standard element.
To carry out this analysis, two equal spaces, called experimental cells, have been designed. These cells have been monitored, obtaining consumption, temperature, illuminance and relative humidity data. Measurements were performed over a period of nine months, and the results have been analyzed and compared, obtaining the actual behavior of the system. To advance this optimization, windows have been studied from the point of view of energy performance and of performance in relation to user comfort and health: thermal comfort, acoustic comfort, lighting comfort and air quality; proposing the development of a methodology for an integrated analysis including energy efficiency and indoor environmental quality. After the theoretical and experimental analysis, and as a result of the need to integrate existing knowledge, a comprehensive evaluation procedure for windows is proposed. This evaluation procedure is developed according to UNE 171330 "Indoor environmental quality", also incorporating energy efficiency and cost as factors to evaluate. From the first part of the research process, the determining parameters are chosen and reference values for these parameters are set. Finally, based on the parameters obtained, an indicator is proposed as a window product indicator. The indicator integrates all the factors analyzed and is developed according to ISO 21929-1:2011 "Sustainability in building construction. Sustainability indicators. Part 1: Framework for the development of indicators and a core set of indicators for buildings".
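A minimal sketch of the comparative processing of the monitored data is given below: the logged variables of the two cells are aggregated and the relative energy saving of the innovative element is derived. The readings are invented placeholders, not the nine-month dataset.

```python
import pandas as pd

# Illustrative sketch of the comparative analysis between the two monitored test
# cells (reference vs innovative glazing): aggregate the logged variables and compare
# them. The readings below are invented placeholders, not the nine-month dataset.
data = pd.DataFrame({
    "cell":       ["reference"] * 3 + ["innovative"] * 3,
    "energy_kwh": [4.2, 4.5, 4.1, 3.6, 3.8, 3.5],     # daily heating/cooling energy
    "temp_c":     [21.3, 21.0, 21.4, 21.2, 21.1, 21.3],
    "illum_lux":  [480, 455, 470, 510, 495, 505],
    "rh_pct":     [47, 49, 48, 46, 47, 47],
})

summary = data.groupby("cell").mean(numeric_only=True)
saving = 1 - summary.loc["innovative", "energy_kwh"] / summary.loc["reference", "energy_kwh"]
print(summary.round(2))
print(f"energy saving of the innovative element: {saving:.1%}")
```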

Relevance:

50.00%

Publisher:

Abstract:

The purpose of this report is to build a model that represents, as well as possible, the seismic behavior of a pile cap bridge foundation by means of a nonlinear static analysis procedure. It consists of a reproduction of a specimen already built in the laboratory. This model will carry out a pseudo-static lateral pushover test applied onto the pile cap until the failure of the structure, i.e. the formation of a plastic hinge in the piles due to the horizontal deformation, occurs. The pushover test consists of increasing the horizontal load over the pile cap until the desired horizontal displacement at the height of the pile cap is reached. The output of this model will be a skeleton curve plotting the lateral load (kN) against the displacement (m), so that the maximum movement the pile cap foundation can reach before its failure can be calculated. This failure is taken to occur when the load at that specific displacement is equal to 85% of the maximum. The pile cap foundation finite element model was based on a pile cap built for a laboratory experiment already carried out by the Master's student Deming Zhang at Tongji University. Two different pile caps were tested, differing in their height above ground level: while one rises 0.3 m, the other rises 0.8 m above the ground level. The computer model was calibrated using the experimental results. The pile cap foundation is programmed in a finite element environment called OpenSees (Open System for Earthquake Engineering Simulation [28]). This environment is free software developed by Berkeley University and specialized, as its name says, in the study of earthquakes and their effects on structures. This specialization is the main reason why it is being used for building this model, as it makes it possible to build any finite element model and perform several analyses in order to get the desired results. The development of OpenSees is sponsored by the Pacific Earthquake Engineering Research Center through the National Science Foundation engineering and education centers program. OpenSees is programmed using the Tcl language, which is a language similar to C++.
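The 85% failure criterion described above can be applied to the pushover output with a few lines of post-processing, as sketched below on a synthetic skeleton curve (the curve is a placeholder, not OpenSees output).

```python
import numpy as np

# Sketch of the failure criterion described above: on the pushover (skeleton) curve,
# the ultimate displacement is taken where the post-peak lateral load drops to 85% of
# the maximum. The curve below is a synthetic placeholder, not OpenSees output.
disp_m = np.linspace(0.0, 0.20, 201)
load_kn = 900 * disp_m / (0.02 + disp_m) * np.exp(-3.0 * disp_m)   # rising then softening curve

i_peak = int(np.argmax(load_kn))
p_max = load_kn[i_peak]
post_peak = np.where(load_kn[i_peak:] <= 0.85 * p_max)[0]          # first drop below 85% of peak
i_fail = i_peak + post_peak[0]

print(f"peak load: {p_max:.0f} kN at {disp_m[i_peak]*1000:.0f} mm")
print(f"ultimate displacement (85% criterion): {disp_m[i_fail]*1000:.0f} mm")
```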

Relevance:

50.00%

Publisher:

Abstract:

NASA's tether experiment ProSEDS will be placed in orbit on board a Delta-II rocket in early 2003. ProSEDS will test bare-tether electron collection, deorbiting of the rocket's second stage, and the dynamic stability of the system. ProSEDS performance will vary both because ambient conditions change along the orbit and because tether-circuit parameters follow a step-by-step sequence in the current operating cycle. In this work we discuss how measurements of tether current and bias, plasma density, and deorbiting rate can be used to check the OML law for current collection. We review circuit bulk elements; characteristic lengths and energies that determine collection (tether radius, electron thermal gyroradius and Debye length, particle temperatures, tether bias, ion ram energy); and lengths determining current and bias profiles along the tether (extent of the magnetic self-field, a length gauging ohmic versus collection impedances, tether length). The analysis serves the purpose of estimating ProSEDS behavior in orbit and fostering our ability to extrapolate ProSEDS flight data to different tether and environmental conditions.
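For reference, the OML collection law that the flight data are meant to check can be written, for a round bare tether at a bias well above the electron thermal energy, as dI/dL = 2 R e N_e (2 e ΔV / m_e)^(1/2); the sketch below evaluates it for illustrative ionospheric values (the radius, density and bias are not ProSEDS parameters).

```python
import numpy as np

# The OML (orbital-motion-limited) electron current that the flight data are meant
# to check, per unit tether length for a round bare wire at high positive bias:
#   dI/dL = 2 * R * e * N_e * sqrt(2 * e * dV / m_e)
# The plasma density, bias and tether radius below are illustrative values only.
e  = 1.602e-19      # elementary charge (C)
me = 9.109e-31      # electron mass (kg)

R   = 0.6e-3        # tether radius (m), illustrative
N_e = 3e11          # ambient electron density (m^-3), illustrative ionospheric value
dV  = 100.0         # local tether bias with respect to the plasma (V), illustrative

di_dl = 2 * R * e * N_e * np.sqrt(2 * e * dV / me)   # OML current per unit length (A/m)
print(f"OML electron current per unit length: {di_dl*1000:.2f} mA/m")
```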