943 results for Power take-off optimization
Abstract:
The aim of this project was to carry out an investigation into suitable alternatives to gasoline for use in modern automobiles. The fuel would provide the western world with a means of extending the natural gasoline resources and the third world with a way of cutting down their dependence on the oil-producing countries for their energy supply. Alcohols, namely methanol and ethanol, provide this solution. They can be used as gasoline extenders or as fuels in their own right.

In order to fulfil the aims of the project, a literature study was carried out to investigate methods and costs of producing these fuels. An experimental programme was then set up in which the performance of the alcohols was studied on a conventional engine. The engine used for this purpose was the Fiat 127 930cc four-cylinder engine, chosen because of its popularity in the European countries. The Weber fixed-jet carburettor, since it was designed to be used with gasoline, was adapted so that the alcohol fuels and the blends could be used in the most efficient way. This was mainly to take account of the lower heat content of the alcohols. The adaptation of the carburettor took the form of enlarging the main metering jet. Allowances for the alcohols' lower specific gravity were made during fuel metering.

Owing to the low front-end volatility of methanol and ethanol, it was expected that 'start-up' problems would occur. An experimental programme was set up to determine the temperature range for a minimum required percentage 'take-off' that would ease start-up, since it was determined that a 'take-off' of about 5% v/v liquid in the vapour phase would be sufficient for starting. Additives such as iso-pentane and n-pentane were used to improve the front-end volatility. This proved to be successful.

The lower heat content of the alcohol fuels also meant that a greater charge of fuel would be required. This was seen to pose further problems with fuel distribution from the carburettor to the individual cylinders on a multicylinder engine. Since it was not possible to modify the existing manifold on the Fiat 127 engine, experimental tests on manifold geometry were carried out using the Ricardo E6 single-cylinder variable-compression engine. Results from these tests showed that the length, shape and cross-sectional area of the manifold play an important part in the distribution of the fuel entering the cylinder, i.e. vapour phase, vapour/small liquid droplet/liquid film phase, vapour/large liquid droplet/liquid film phase, etc.

The solvent properties of the alcohols and their greater electrical conductivity suggested that the materials used on the engine would be prone to chemical attack. In order to determine the type and rate of chemical attack, an experimental programme was set up whereby carburettor and other components were immersed in the alcohols and in blends of alcohol with gasoline. The test fuels were aerated and in some instances kept at temperatures ranging from 50°C to 90°C. Results from these tests suggest that not all materials used in the conventional engine are equally suitable for use with alcohols and alcohol/gasoline blends. Aluminium, for instance, was severely attacked by methanol, causing pitting and pin-holing in the surface.

In general this whole experimental programme gave valuable information on the acceptability of substitute fuels. While the long-term effects of alcohol use merit further study, it is clear that methanol and ethanol will be increasingly used in place of gasoline.
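As a rough illustration of the jet-enlargement reasoning above, the sketch below estimates the metering changes from typical published heating values and densities; the property figures and the simple orifice model are textbook assumptions, not values from the thesis.

```python
# Hedged illustration: estimated main-jet scaling when replacing gasoline with
# methanol or ethanol, to compensate for the alcohols' lower heat content.
# Property values are typical textbook figures, not data from the thesis.
import math

LHV = {"gasoline": 43.5e6, "methanol": 19.9e6, "ethanol": 26.8e6}  # J/kg
RHO = {"gasoline": 740.0, "methanol": 792.0, "ethanol": 789.0}     # kg/m^3

def jet_scaling(fuel):
    # Equal energy per intake charge -> mass flow scales with the LHV ratio.
    mass_ratio = LHV["gasoline"] / LHV[fuel]
    # For a fixed metering head an orifice passes mdot ~ Cd*A*sqrt(2*rho*dp),
    # so the required area grows as mass_ratio / sqrt(rho_fuel/rho_gasoline).
    area_ratio = mass_ratio / math.sqrt(RHO[fuel] / RHO["gasoline"])
    dia_ratio = math.sqrt(area_ratio)
    return mass_ratio, area_ratio, dia_ratio

for fuel in ("methanol", "ethanol"):
    m, a, d = jet_scaling(fuel)
    print(f"{fuel}: mass flow x{m:.2f}, jet area x{a:.2f}, jet diameter x{d:.2f}")
```

Under these assumptions methanol needs roughly twice the fuel mass flow of gasoline, which is consistent with the substantial jet enlargement the abstract describes.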
Abstract:
A new instrument and method are described that allow the hydraulic conductivities of highly permeable porous materials, such as gravels in constructed wetlands, to be determined in the field. The instrument consists of a Mariotte siphon and a submersible permeameter cell with manometer take-off tubes, recreating in-situ the constant-head permeameter test typically used with excavated samples. It allows permeability to be measured at different depths and positions over the wetland. Repeatability obtained at fixed positions was good (normalised standard deviation of 1–4%), and results obtained for highly homogeneous silica sand compared well when the sand was retested in a lab permeameter (0.32 mm/s and 0.31 mm/s respectively). Practical results have a ±30% associated degree of uncertainty because of the mixed effect of natural variation in gravel core profiles and interstitial clogging disruption during insertion of the tube into the gravel. This error is small, however, compared to the orders-of-magnitude spatial variations detected. The technique was used to survey the hydraulic conductivity profile of two constructed wetlands in the UK, aged 1 and 15 years respectively. Measured values were high (up to 900 mm/s) and varied by three orders of magnitude, reflecting the immaturity of the wetland. Detailed profiling of the younger system suggested the existence of preferential flow paths at a depth of 200 mm, corresponding to the transition between the coarser and finer gravel layers (6–12 mm and 3–6 mm respectively), and transverse drift towards the outlet.
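For context, the constant-head test the cell recreates is an application of Darcy's law. The sketch below shows the calculation with illustrative numbers; the variable names and example values are not from the paper.

```python
# Minimal sketch of the constant-head relation behind the in-situ permeameter
# (Darcy's law); names and example values are illustrative, not the authors'.
def hydraulic_conductivity(flow_rate, cell_area, head_drop, spacing):
    """K = Q * L / (A * dh): flow in m^3/s, area m^2, head drop m, take-off spacing m."""
    return flow_rate * spacing / (cell_area * head_drop)

# Example: 0.5 L/s through a 100 cm^2 cell, 50 mm head loss over 100 mm take-off spacing
K = hydraulic_conductivity(0.5e-3, 100e-4, 0.05, 0.1)
print(f"K = {K * 1000:.0f} mm/s")  # -> 100 mm/s, within the reported range
```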
Abstract:
The extraordinary growth of the Irish economy since the mid-1990s - the 'Celtic Tiger' - has attracted a great deal of interest, commentary and research. Indeed, many countries look to Ireland as an economic development role model, and it has been suggested that Ireland might provide key lessons for other EU members as they seek to achieve the objectives set out in the Lisbon Agenda. Much of the discussion of Ireland's growth has focused on its possible triggers: the long term consequences of the late 1980s fiscal stabilisation; EU structural funds; education; wage moderation; and devaluation of the Irish punt. The industrial policy perspective has highlighted the importance of inflows of foreign direct investment, but a notable absence from the discourse on the 'Celtic Tiger' has been any mention of the role of new business venture creation and entrepreneurship. In this paper we use unpublished Irish VAT data for the years 1988 to 2004 to provide the first detailed look at national trends in business birth and death rates in Ireland over the 'take-off' period. We also use sub-national VAT data to shed light on spatial trends in new venture creation. Our overall conclusions are that new business formation made no detectable contribution to the acceleration of Ireland's growth in the late 1990s, although we do find evidence of spatial convergence in per capita business stocks.
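To make the birth- and death-rate measures concrete, here is a minimal sketch with invented counts standing in for the unpublished VAT data; it is not the authors' analysis code.

```python
# Illustrative sketch: business birth/death rates from annual VAT registration
# counts, expressed as ratios to the active business stock. All numbers below
# are hypothetical placeholders, not the Irish VAT figures.
registrations = {1995: 14200, 1996: 15100}
deregistrations = {1995: 9800, 1996: 10300}
stock = {1995: 160000, 1996: 165000}

for year in registrations:
    birth = registrations[year] / stock[year]
    death = deregistrations[year] / stock[year]
    print(f"{year}: birth rate {birth:.1%}, death rate {death:.1%}, net {birth - death:+.1%}")
```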
Abstract:
Ground Delay Programs (GDPs) are sometimes cancelled before their planned end time, meaning that aircraft are delayed when it is no longer needed. Recovering this delay usually leads to extra fuel consumption, since the aircraft will typically depart after having absorbed their assigned delay on the ground and will therefore need to cruise at more fuel-consuming speeds. Past research has proposed a speed reduction strategy aimed at splitting the GDP-assigned delay between ground and airborne delay while using the same fuel as in nominal conditions. Being airborne earlier, an aircraft can speed up to nominal cruise speed and recover part of the GDP delay without incurring extra fuel consumption if the GDP is cancelled earlier than planned. In this paper, all GDP initiatives that occurred at San Francisco International Airport during 2006 are studied and characterised into three different clusters by a K-means algorithm. The centroids of these three clusters have been used to simulate three different GDPs at the airport, using a realistic set of inbound traffic and the Future Air Traffic Management Concepts Evaluation Tool (FACET). The amount of delay that can be recovered using this cruise speed reduction technique, as a function of the GDP cancellation time, has been computed and compared with the delay recovered under the current concept of operations. Simulations were conducted in a calm-wind situation and without considering a radius of exemption. Results indicate that, in the event the GDP cancels early, aircraft that depart early and fly at the slower speed can recover additional delay compared to current operations, where all delay is absorbed prior to take-off. The amount of extra delay recovered varies, and is more significant, in relative terms, for GDPs where demand exceeds the airport capacity by a relatively small amount.
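The recovery mechanism can be illustrated with a back-of-the-envelope calculation; the speeds and distance below are hypothetical, and the sketch ignores winds and fuel constraints, standing in for the FACET simulations.

```python
# Back-of-the-envelope sketch of the cruise speed reduction idea: an aircraft
# that departed early and cruises slowly can re-accelerate to nominal speed
# if the GDP cancels. Speeds and distance are illustrative, not FACET outputs.
def airborne_delay_recoverable(dist_remaining_nm, v_slow_kt, v_nom_kt):
    """Minutes of delay recovered by resuming nominal speed for the remaining cruise."""
    return (dist_remaining_nm / v_slow_kt - dist_remaining_nm / v_nom_kt) * 60

# e.g. 1500 nm left when the GDP cancels, having slowed from 450 kt to 400 kt
print(f"{airborne_delay_recoverable(1500, 400, 450):.0f} min recoverable")  # -> 25 min
```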
Abstract:
Set in 2008 Puerto Rico, this novel aims to explore the relationship between constructed masks of personal identity, the increasingly interconnected nature of community, and their confluence in the worlds of politics, media, social activism, and business through a narrative examination of the ways in which three primary characters affect the lives of those around them. Jaime, a meditative young man with a penchant for planes, comes home to find the power shut off and his drug-addict mother gone. His best friend, Yarique, a disaffected stoner with a false sense of machismo, becomes an overnight sensation after an escalating series of violent run-ins with his abuelo’s neighbor. Ravolo Soto, a reclusive pitorro distiller, drinks to keep The Other in check, but takes off into the jungles of Lares, hiding out in his father’s mountain shack, after a violent encounter with the police leaves one officer dead.
Abstract:
In aquaculture, shrimp production depends on environmental and chemical parameters of the water. Usually, the measurement and compilation of data on these parameters is carried out manually. This work proposes and evaluates a sensor network whose nodes are interconnected wirelessly to collect data automatically. The network design exploits a mesh topology, which increases the reliability of data transmission. Additionally, the hardware modules used are configured to reduce energy consumption. Tests were carried out in real environments (tanks and ponds) with several nodes placed on floating platforms to capture, transmit and accumulate water temperature data. The results obtained are encouraging and demonstrate the potential for exploiting low-cost electronic components in smart aquaculture applications.
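As an illustration of the kind of duty-cycling budget behind such a low-power configuration, the sketch below uses generic figures for low-cost radio modules; none of the numbers come from the paper.

```python
# Hedged sketch of a duty-cycling energy budget for a floating sensor node.
# All currents, timings and battery capacity are assumed generic values.
I_SLEEP_mA, I_ACTIVE_mA = 0.01, 45.0   # assumed sleep vs. sample-and-transmit currents
T_ACTIVE_s, PERIOD_s = 2.0, 600.0      # wake for 2 s every 10 minutes
BATTERY_mAh = 2400.0

duty = T_ACTIVE_s / PERIOD_s
i_avg = I_ACTIVE_mA * duty + I_SLEEP_mA * (1 - duty)
print(f"average draw {i_avg:.3f} mA -> ~{BATTERY_mAh / i_avg / 24:.0f} days per charge")
```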
Abstract:
Electric vehicle (EV) batteries tend to suffer accelerated degradation due to high peak power and harsh charging/discharging cycles during acceleration and deceleration, particularly in urban driving conditions. An oversized energy storage system (ESS) can meet the high power demands; however, it suffers from increased size, volume and cost. In order to reduce the overall ESS size and extend battery cycle life, a battery-ultracapacitor (UC) hybrid energy storage system (HESS) has been considered as an alternative solution. In this work, we investigate the optimized configuration, design, and energy management of a battery-UC HESS. One of the major challenges in a HESS is to design an energy management controller for real-time implementation that can yield good power split performance. We present the methodologies and solutions to this problem in a battery-UC HESS with a DC-DC converter interfacing the UC and the battery. In particular, a multi-objective optimization problem is formulated to optimize the power split in order to prolong the battery lifetime and to reduce the HESS power losses. This optimization problem is numerically solved for standard drive cycle datasets using Dynamic Programming (DP). Trained on the DP optimal results, an effective real-time implementation of the optimal power split is realized using a Neural Network (NN). This proposed online energy management controller is applied to a midsize EV model with a 360V/34kWh battery pack and a 270V/203Wh UC pack. The controller effectively splits the load demand with high power efficiency and also reduces the battery peak current. More importantly, a 38V-385Wh battery and 16V-2.06Wh UC HESS hardware prototype and a real-time experiment platform have been developed. The real-time experiment results have validated the feasibility and effectiveness of the real-time controller design for the battery-UC HESS. A battery State-of-Health (SoH) estimation model is developed as a performance metric to evaluate the battery cycle life extension. It is estimated that the proposed online energy management controller can extend the battery cycle life by over 60%.
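A schematic sketch of a DP power-split formulation of this kind is given below; the load profile, state grid, loss model and objective weights are placeholders chosen for illustration, not the paper's models.

```python
# Schematic sketch of a DP power split between battery and ultracapacitor.
# Grids, weights and the quadratic loss/stress model are assumed placeholders.
P_DEMAND = [20.0, 55.0, -30.0, 40.0]      # kW load profile (braking < 0)
UC_LEVELS = [0.0, 0.25, 0.5, 0.75, 1.0]   # discretised UC state of charge
W_LOSS, W_BATT = 1.0, 0.5                 # multi-objective weights

def stage_cost(p_batt, p_uc):
    # Penalise converter/UC losses (~quadratic) and battery stress (peaks age cells).
    return W_LOSS * 0.01 * p_uc**2 + W_BATT * 0.02 * p_batt**2

def dp_split(demand, uc_cap_kwh=0.2, dt_h=0.001):
    best = {s: (0.0, []) for s in UC_LEVELS}          # cost-to-come per UC state
    for p in demand:
        nxt = {}
        for s, (c, path) in best.items():
            for p_uc in (-40, -20, 0, 20, 40):        # candidate UC powers, kW
                s2 = s - p_uc * dt_h / uc_cap_kwh
                s2 = min(UC_LEVELS, key=lambda l: abs(l - s2))  # snap to coarse grid
                cand = c + stage_cost(p - p_uc, p_uc)
                if s2 not in nxt or cand < nxt[s2][0]:
                    nxt[s2] = (cand, path + [(p - p_uc, p_uc)])
        best = nxt
    return min(best.values(), key=lambda t: t[0])[1]

for p_batt, p_uc in dp_split(P_DEMAND):
    print(f"battery {p_batt:+6.1f} kW, UC {p_uc:+6.1f} kW")
```

In the paper's scheme a neural network is then trained on such DP trajectories so the split can be computed in real time; that training step is not shown here.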
Abstract:
The topic of this thesis is the design and implementation of mathematical models and control system algorithms for rotary-wing unmanned aerial vehicles to be used in cooperative scenarios. The use of rotorcraft has many attractive advantages, since these vehicles have the capability to take off and land vertically, to hover, and to move backward and laterally. Rotary-wing aircraft missions require precise control characteristics due to their unstable and heavily coupled dynamics. Flight testing is the most accurate way to evaluate flying qualities and to test control systems; however, it may be very expensive and/or not feasible at the early design and prototyping stage. A good compromise is a preliminary assessment performed by means of simulations followed by a reduced flight-testing campaign. Consequently, an analytical framework represents an important stage for simulations and control algorithm design. In this work, mathematical models for various helicopter configurations are implemented. Different flight control techniques for helicopters are presented with their theoretical background and tested via simulations and experimental flight tests on a small-scale unmanned helicopter. The same platform is also used in a cooperative scenario with a rover. Control strategies, algorithms and their implementation to perform missions are presented for two main scenarios. One of the main contributions of this thesis is a control system consisting of a classical PID baseline controller augmented with an L1 adaptive contribution. In addition, a complete analytical framework and a study of the dynamics and stability of a synch-rotor are provided. Finally, cooperative control strategies are implemented for the two main scenarios, which involve a small-scale unmanned helicopter and a rover.
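As a minimal illustration of the PID baseline loop mentioned above (the L1 adaptive augmentation is omitted), consider the following sketch; the gains and the plant response are invented for demonstration.

```python
# Minimal sketch of a PID baseline attitude loop of the kind described above.
# Gains, loop rate and the crude plant model are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i, self.prev = 0.0, 0.0

    def step(self, error, dt):
        self.i += error * dt                 # integral of the tracking error
        d = (error - self.prev) / dt         # finite-difference derivative
        self.prev = error
        return self.kp * error + self.ki * self.i + self.kd * d

roll = PID(kp=4.0, ki=0.5, kd=0.8)
dt, angle = 0.01, 0.3                        # 100 Hz loop, 0.3 rad initial roll error
for _ in range(5):
    cmd = roll.step(0.0 - angle, dt)
    angle += 0.1 * cmd * dt                  # stand-in first-order plant response
    print(f"roll {angle:+.3f} rad, cmd {cmd:+.2f}")
```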
Abstract:
The scope of this study is to design an automatic control system and create an automatic x-wire calibrator for a facility named Plane Air Tunnel, whose exit creates a planar jet flow. Control of the inverter's power state as well as automatic speed adjustment has been achieved; thus, the wind tunnel can be run at any desired speed and the x-wire can automatically be calibrated at that speed. To achieve this, VI programming in the LabVIEW environment was used to acquire the pressure and temperature and to calculate the velocity from the acquired data by means of a pitot-static tube. Communication with the inverter, to issue power on/off commands and speed control, was also implemented in the LabVIEW VI coding environment. The connection of the computer to the inverter was achieved with the proper cabling using DAQmx Analog/Digital (A/D) input/output (I/O). Moreover, the pressure profile along the streamwise direction of the plane air tunnel was studied. Pressure tappings and a multichannel pressure scanner were used to acquire the pressure values at different locations. From these measurements, the aerodynamic efficiency of the contraction ratio was observed, and the pressure behaviour was related to the velocity at the exit section. Speed control was accomplished by implementing a closed-loop PI controller in the LabVIEW environment, both with and without a pitot-static tube, exploiting the pressure behaviour information. The responses of the two controllers were analysed and discussed, with suggestions given. In addition, hot-wire experiments were performed to calibrate the x-wire automatically and to investigate the velocity profile of a turbulent planar jet. To analyse the results, the physics of turbulent planar jet flow was studied: the fundamental terms, the methods used in the derivation of the equations, the velocity profile, the shear stress behaviour, and the effect of vorticity were reviewed.
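The two building blocks described, velocity from the pitot-static dynamic pressure and a closed-loop PI speed command, can be sketched as follows; the original runs as a LabVIEW VI, so this Python version with assumed gains is purely illustrative.

```python
# Sketch of the pitot-static velocity calculation and a PI speed loop.
# Gains, setpoint and the stand-in tunnel response are assumptions.
import math

RHO_AIR = 1.204  # kg/m^3 at ~20 degC

def pitot_velocity(dp_pa):
    """v = sqrt(2*dp/rho) from the measured dynamic pressure."""
    return math.sqrt(2.0 * max(dp_pa, 0.0) / RHO_AIR)

kp, ki, integ = 0.8, 2.0, 0.0
setpoint, v, dt = 15.0, 0.0, 0.05       # 15 m/s target, 20 Hz control loop
for _ in range(6):
    err = setpoint - v
    integ += err * dt
    inverter_cmd = kp * err + ki * integ  # frequency command sent to the inverter
    v += 0.4 * (inverter_cmd - v) * dt    # stand-in first-order tunnel response
    print(f"v = {v:5.2f} m/s, cmd = {inverter_cmd:5.2f} Hz")
```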
Abstract:
The increasing integration of wind energy in power systems can lead to over-generation, especially during off-peak periods. This paper presents a dedicated methodology to identify and quantify the occurrence of this over-generation and to evaluate some of the solutions that can be adopted to mitigate the problem. The methodology is applied to the Portuguese power system, in which wind energy is expected to represent more than 25% of the installed capacity in the near future. The results show that the pumped-hydro units will not provide enough energy storage capacity and, therefore, wind curtailments are expected to occur in the Portuguese system. Additional energy storage devices could be installed to offset the wind energy curtailments. However, the investment analysis performed shows that they are not economically viable, due to the high capital costs presently involved.
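The over-generation check at the core of such a methodology reduces to a simple hourly balance; the sketch below uses invented figures, not the Portuguese system data.

```python
# Sketch of an hourly over-generation/curtailment check. All megawatt figures
# are invented for illustration; the paper works with real system data.
def curtailment(wind_mw, must_run_mw, demand_mw, pump_capacity_mw):
    """Wind power that cannot be absorbed once pumped-hydro storage is saturated."""
    surplus = wind_mw + must_run_mw - demand_mw
    return max(0.0, surplus - pump_capacity_mw)

# Off-peak hour: 4200 MW wind + 2500 MW must-run vs 5500 MW demand, 1100 MW pumping
print(f"curtailed: {curtailment(4200, 2500, 5500, 1100):.0f} MW")  # -> 100 MW
```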
Abstract:
10th Conference on Telecommunications (Conftele 2015), Aveiro, Portugal.
Abstract:
Wireless sensor networks (WSNs) are generally used to monitor hazardous events in inaccessible areas. On one hand, it is preferable to adopt the minimum transmission power in order to extend the WSN's lifetime as much as possible; on the other hand, it is crucial to guarantee that the transmitted data is correctly received by the other nodes. Trading off power optimization against reliability assurance has therefore become one of the most important concerns when dealing with modern systems based on WSNs. In this context, we present a transmission power self-optimization (TPSO) technique for WSNs. The TPSO technique consists of an algorithm able to guarantee connectivity as well as an equally high quality of service (QoS), concentrating on the WSN's efficiency (Ef) while optimizing the transmission power necessary for data communication. The main idea behind the proposed approach is thus to trade off WSN Ef against energy consumption in an environment with inherent noise. Experimental results with different types of noise and electromagnetic interference (EMI) are explored in order to demonstrate the effectiveness of the TPSO technique.
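In the spirit of TPSO (the paper's exact algorithm is not reproduced here), a self-optimization loop might adjust transmit power from the measured delivery ratio, as in this hedged sketch with assumed thresholds and step sizes.

```python
# Hedged sketch of a transmission power self-optimisation loop in the spirit
# of TPSO; thresholds, steps and power limits are assumptions, not the paper's.
def tpso_step(tx_power_dbm, delivery_ratio, target_ef=0.95,
              step_db=1.0, p_min=-10.0, p_max=4.0):
    """Raise power when efficiency (delivered/sent) drops; lower it when there is slack."""
    if delivery_ratio < target_ef:
        tx_power_dbm += step_db          # noise/EMI eating packets: spend more power
    elif delivery_ratio > target_ef + 0.03:
        tx_power_dbm -= step_db          # comfortable margin: save energy
    return min(max(tx_power_dbm, p_min), p_max)

power = 0.0
for ef in (0.90, 0.93, 0.97, 0.99, 0.99):   # measured Ef over successive windows
    power = tpso_step(power, ef)
    print(f"Ef {ef:.2f} -> tx power {power:+.0f} dBm")
```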
Abstract:
Nowadays, computing platforms consist of a very large number of components that must be supplied with different voltage levels and power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes performance and meets electrical specifications plus cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution, so the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers, ranging from discrete components (to build converters) to complete power conversion modules built with different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built, and the designer has to select a limited number of converters in order to simplify the analysis. In this thesis, to overcome these difficulties, a new design methodology for power supply systems is proposed. This methodology integrates evolutionary computation techniques to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps: one for the automatic generation of architectures and the other for the optimized selection of components. This thesis details the implementation of these two steps. The usefulness of the methodology is corroborated by contrasting the results on real problems and on experiments designed to test the limits of the algorithms.
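To illustrate the evolutionary selection step, here is a toy sketch; the converter catalogue, fitness weights and genetic operators are invented, and the thesis couples this with automatic architecture generation, which is not shown.

```python
# Toy sketch of evolutionary converter selection. The catalogue entries,
# fitness weights and operators are invented placeholders.
import random

CATALOGUE = [                       # (name, efficiency, cost_eur, area_cm2)
    ("buck_A", 0.91, 1.2, 0.8), ("buck_B", 0.94, 2.0, 1.1),
    ("module_C", 0.96, 4.5, 2.3), ("ldo_D", 0.80, 0.4, 0.3),
]

def fitness(solution):              # one converter index per load rail
    eff = sum(CATALOGUE[i][1] for i in solution) / len(solution)
    cost = sum(CATALOGUE[i][2] for i in solution)
    return eff - 0.05 * cost        # trade efficiency against cost

pop = [[random.randrange(len(CATALOGUE)) for _ in range(3)] for _ in range(20)]
for _ in range(30):                 # evolve: keep the fittest, mutate copies
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [[random.choice([g, random.randrange(len(CATALOGUE))])
                       for g in p] for p in pop[:10]]
print("best selection:", [CATALOGUE[i][0] for i in max(pop, key=fitness)])
```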
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and we present new methodologies for each of them.

The techniques based on extensions of intervals have allowed accurate models of signal and quantization noise propagation to be obtained in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the good accuracy of our approach, which in some case studies with non-linear operators shows only a 0.04% deviation with respect to the simulation-based reference values.

A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently and then combines the results at the end. In this way, the number of noise sources in the system at a given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that reduce the execution time by approaching the problem from two different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn imply a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small/medium-sized problems.

Finally, this thesis introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is provided for public access. The aim is to offer common ground for developers and researchers to easily prototype and verify new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through an example, the way new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
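As a pointer to what the accelerated search looks like, below is a compact sketch of a classical greedy word-length descent of the kind the thesis speeds up; the error model is a simple stand-in for the Monte-Carlo noise estimation discussed above.

```python
# Compact sketch of a greedy word-length search. The quantization error model
# is an assumed stand-in (each signal contributes ~2^-wl of noise power).
def quant_error(wordlengths):
    return sum(2.0 ** -wl for wl in wordlengths)

def greedy_wordlengths(n_signals, max_error, wl_min=4, wl_max=24):
    wl = [wl_max] * n_signals                 # start wide, then trim greedily
    improved = True
    while improved:
        improved = False
        for i in range(n_signals):            # try shrinking each signal by 1 bit
            if wl[i] > wl_min:
                wl[i] -= 1
                if quant_error(wl) <= max_error:
                    improved = True           # cheaper and still accurate: keep it
                else:
                    wl[i] += 1                # too noisy: undo
    return wl

print(greedy_wordlengths(n_signals=6, max_error=1e-3))
```

Each accepted step here requires one error evaluation; when that evaluation is a Monte-Carlo simulation, reducing its cost per step is exactly where the interpolative and incremental methods pay off.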