Abstract:
The aim of this thesis is to study the instability mechanisms that occur in swept wings as the angle of attack increases. To this end, a simplified model of the non-orthogonal swept leading-edge boundary layer has been used, together with different numerical techniques, in order to solve the linear stability problem that describes the behavior of perturbations superposed upon this base flow. Two different approaches, matrix-free and matrix-forming methods, have been validated using direct numerical simulations with spectral resolution. In this way, flow instability in the non-orthogonal swept attachment-line boundary layer is addressed in a linear analysis framework via the solution of the pertinent global (BiGlobal) PDE-based eigenvalue problem. Subsequently, a simple extension of the extended Görtler-Hämmerlin ODE-based polynomial model proposed by Theofilis, Fedorov, Obrist & Dallmann (2003) for orthogonal flow, which includes previous models as particular cases and recovers global instability analysis results, is presented for non-orthogonal flow. Direct numerical simulations have been used to verify the stability results and unravel the limits of validity of the basic flow model analyzed. The effect of the angle of attack, AoA, on the critical conditions of the non-orthogonal problem has been documented; an increase of the angle of attack, from AoA = 0 (orthogonal flow) up to values close to π/2 which make the assumptions under which the basic flow is derived questionable, is found to systematically destabilize the flow. The critical conditions of non-orthogonal flows at 0 ≤ AoA ≤ π/2 are shown to be recoverable from those of orthogonal flow via a simple analytical transformation involving AoA. These results can help to understand the destabilization mechanisms that occur at the attachment line of wings at finite angles of attack. Studies taking into account variations of the pressure field in the basic flow, as well as the extension to compressible flows, remain open issues.
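Both solution strategies mentioned above ultimately target an eigenvalue problem. As a minimal sketch of the matrix-forming route, assuming the BiGlobal operators have already been discretized (the matrices A and B below are random placeholders, not the actual operators, which depend on the base flow, the Reynolds number, and AoA), the stability question reduces to a generalized eigenvalue problem solved with standard dense linear algebra:

```python
# Minimal sketch of the matrix-forming approach: the discretized BiGlobal
# stability operators yield a generalized eigenvalue problem
# A q = omega B q; a positive imaginary part of omega signals instability.
import numpy as np
from scipy.linalg import eig

n = 200                                  # size of the discretized problem
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))          # placeholder stability operator
B = np.eye(n)                            # placeholder mass matrix

omega, q = eig(A, B)                     # eigenvalues and eigenvectors
leading = omega[np.argmax(omega.imag)]
print(f"leading growth rate Im(omega) = {leading.imag:.4f}")
```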
Abstract:
The term "Logic Programming" refers to a variety of computer languages and execution models which are based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears as a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.
Abstract:
Abstract interpretation has been widely used for the analysis of object-oriented languages and, more precisely, Java source and bytecode. However, while most of the existing work deals with the problem of finding expressive abstract domains that accurately track the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation based) fixpoint algorithms rely on relatively inefficient techniques to solve inter-procedural call graphs, or are specific and tied to particular analyses. We argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for the analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. The algorithm is parametric, in the sense that it is independent of the abstract domain used, and different domains can be applied as "plug-ins". It is also incremental, in the sense that, if desired, analysis data can be saved so that only a reduced amount of reanalysis is needed after a small program change, which can be instrumental for large programs. The algorithm is also multivariant and flow-sensitive. Finally, another interesting characteristic of the algorithm is that it is based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are provided and discussed with an example.
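For illustration, a generic worklist fixpoint engine that is parametric in the abstract domain might look like the following sketch. The flow-graph encoding, the function names, and the toy sign domain are invented for the example; this is not the paper's actual algorithm, only the standard technique it builds upon.

```python
# The domain is passed in as operations (join, leq) plus per-node transfer
# functions, so different analyses plug in as modules.

def fixpoint(nodes, preds, transfer, join, leq, bottom):
    """Worklist iteration to a fixpoint over a flow graph.

    nodes:    iterable of program points
    preds:    dict node -> list of predecessor nodes
    transfer: dict node -> function from abstract value to abstract value
    """
    value = {n: bottom for n in nodes}       # input value at each node
    worklist = list(nodes)
    while worklist:
        n = worklist.pop()
        inval = bottom
        for p in preds[n]:                   # join transferred pred outputs
            inval = join(inval, transfer[p](value[p]))
        if not leq(inval, value[n]):         # value grew: propagate
            value[n] = join(value[n], inval)
            worklist.extend(s for s in nodes if n in preds[s])
    return value

# Example plug-in domain: sign analysis encoded as sets over {'-','0','+'}.
nodes = [0, 1, 2]
preds = {0: [], 1: [0, 2], 2: [1]}
transfer = {0: lambda v: {"+"},              # entry: x := 1
            1: lambda v: v,                  # loop head
            2: lambda v: v | {"+"}}          # x := x + 1
result = fixpoint(nodes, preds, transfer,
                  join=lambda a, b: a | b,
                  leq=lambda a, b: a <= b,
                  bottom=frozenset())
print(result)
```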
Abstract:
The paper summarizes the results obtained by applying various implementations of the direct Boundary Element Method (BEM) to the solution of the Laplace equation governing the potential flow problem during everyday service manoeuvres of high-speed trains. In particular, the results of train passing events at three different speed combinations are presented. Some recommendations are given in order to reduce calculation times, which, as demonstrated, can be kept within reasonable limits even on today's office PCs. The method is thus shown to be a very valuable tool for the design engineer.
Abstract:
The Boundary Element Method (BEM) is a discretisation technique for solving partial differential equations which offers, for certain problems, important advantages over domain techniques. Despite the high CPU-time reduction that can be achieved, some 3D problems remain untreatable today because of the extremely large number of degrees of freedom (dof) involved in the boundary description. Model reduction seems to be an appealing choice for both accurate and efficient numerical simulations. However, in the BEM a reduction in the number of degrees of freedom does not imply a significant reduction in CPU time, because in this technique the dominant part of the computing time is spent on the construction of the discrete system of equations. Hence, a reduction in the number of weighting functions as well seems to be a key point in rendering boundary element simulations efficient.
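A minimal numerical sketch of that last point follows, assuming a dense BEM system A x = b and precomputed reduced bases (all random placeholders here). Projecting with a reduced weighting basis W as well as a reduced trial basis V yields a small k-by-k system; note that the real saving comes from assembling only the k weighted equations directly, since the explicit projection shown below would still require the full matrix.

```python
import numpy as np

n, k = 2000, 20                       # full dofs vs reduced basis size
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))       # placeholder dense BEM matrix
b = rng.standard_normal(n)
V = np.linalg.qr(rng.standard_normal((n, k)))[0]  # trial (solution) basis
W = V                                 # Galerkin choice: same weighting basis

A_r = W.T @ A @ V                     # k x k reduced system
x_r = np.linalg.solve(A_r, W.T @ b)
x = V @ x_r                           # reconstructed full-size solution
print(A_r.shape, x.shape)
```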
Abstract:
Transport is responsible for 41% of CO2 emissions in Spain, and around 65% of that figure is due to road traffic. Tolled motorways are currently managed according to economic criteria: minimizing operational costs and maximizing revenues from tolls. Within this framework, this paper develops a new methodology for managing motorways based on a target of maximum energy efficiency. It includes technological and demand-driven policies, which are applied to two case studies. Various conclusions emerge from this study. One is that the use of intelligent payment systems is recommended; another is that the most sustainable policy would involve defining the most efficient strategy for each motorway section, including the maximum use of its capacity, the toll level which attracts the most vehicles, and the optimum speed limit for each type of vehicle.
Abstract:
We present a fast, highly sensitive, and efficient potentiometric glucose biosensor based on functionalized InN quantum dots (QDs). The InN QDs are grown by molecular beam epitaxy and are bio-chemically functionalized through physical adsorption of glucose oxidase (GOD). The GOD enzyme-coated InN QD biosensor exhibits an excellent linear glucose-concentration-dependent electrochemical response against an Ag/AgCl reference electrode over a wide logarithmic glucose concentration range (1 × 10−5 M to 1 × 10−2 M) with a high sensitivity of 80 mV/decade. It exhibits a fast response time of less than 2 s with good stability and reusability, and shows negligible response to common interferents such as ascorbic acid and uric acid. The fabricated biosensor has full potential to be an attractive candidate for blood sugar concentration detection in clinical diagnoses.
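The reported response can be summarized by a calibration line that is linear in log10 of the concentration. A small worked example follows, using the 80 mV/decade slope from the abstract; the offset E0 is a hypothetical placeholder, since a real device would be calibrated against known standards.

```python
import math

SLOPE_MV_PER_DECADE = 80.0
E0_MV = 600.0                      # hypothetical potential at C = 1 M

def potential_mV(conc_M):
    """Electrode potential vs Ag/AgCl for a given glucose concentration."""
    return E0_MV + SLOPE_MV_PER_DECADE * math.log10(conc_M)

def concentration_M(e_mV):
    """Invert the calibration line to estimate concentration."""
    return 10 ** ((e_mV - E0_MV) / SLOPE_MV_PER_DECADE)

for c in (1e-5, 1e-4, 1e-3, 1e-2):
    print(f"{c:.0e} M -> {potential_mV(c):.0f} mV")
print(f"{concentration_M(potential_mV(5e-3)):.4f} M")   # round trip
```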
Abstract:
The photoluminescence efficiency of GaAsSb-capped InAs/GaAs type-II quantum dots (QDs) can be greatly enhanced by rapid thermal annealing while preserving radiative lifetimes ∼20 times longer than in standard GaAs-capped InAs/GaAs QDs. Despite the reduced electron-hole wavefunction overlap, the type-II samples are more efficient than their type-I counterparts in terms of luminescence, showing great potential for device applications. Strain-driven In-Ga intermixing during annealing is found to modify the QD shape and composition, while As-Sb exchange is inhibited, allowing the type-II structure to be preserved. Sb is only redistributed within the capping layer, giving rise to a more homogeneous composition.
Abstract:
We discuss several methods, based on coordinate transformations, for the evaluation of singular and quasi-singular integrals in the direct Boundary Element Method. An intrinsic error of some of these methods is detected. Two new transformations are suggested which improve on those currently available.
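As a simple illustration of the coordinate-transformation idea (not one of the new transformations proposed in the paper), the substitution x = t^2 regularizes a weakly singular integrand so that plain Gauss-Legendre quadrature becomes accurate:

```python
# I = int_0^1 f(x)/sqrt(x) dx  becomes  I = int_0^1 2 f(t^2) dt,
# a regular integrand that standard quadrature handles well.
import numpy as np

def gauss01(g, n=8):
    """n-point Gauss-Legendre quadrature of g on [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (x + 1.0)                # map [-1, 1] -> [0, 1]
    return 0.5 * np.sum(w * g(x))

f = np.cos                             # sample smooth density
naive = gauss01(lambda x: f(x) / np.sqrt(x))       # singular integrand
transformed = gauss01(lambda t: 2.0 * f(t**2))     # regular after x = t^2
print(naive, transformed)              # transformed is far more accurate
```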
Abstract:
Carbon management has gradually gained attention within the overall environmental management and corporate social responsibility agendas. The Clean Development Mechanism, from the Kyoto Protocol, was envisioned as connecting the carbon market with sustainable development objectives in developing countries. Previous research has shown that this potential is rarely achieved. The paper explores how incorporating the human side into carbon management reinforces its contribution to generating human development in local communities and to improving the company's image. A case study of a Brazilian company is presented, with the results of applying an analytical model that incorporates the human side and human development. The selected project is an "efficient stoves" programme; "efficient stoves" are recognised in Brazil as social technologies. Results suggest that the fact that social technologies value the human side of a technology plays a key role when analysing the co-benefits of project implementation.
Abstract:
In previous works we demonstrated the benefits of micro-nano patterned materials used as bio-photonic sensing cells (BICELLs): micro-nano photonic structures with bioreceptors immobilized on their surface, capable of recognizing molecular binding by optical transduction. Gestrinone/anti-gestrinone and BSA/anti-BSA pairs were tested under different optical configurations to experimentally validate the biosensing capability of these bio-sensitive photonic architectures, and three-dimensional Finite-Difference Time-Domain (FDTD) models were employed to simulate their optical response. For this article, we have developed an effective analytical simulation methodology capable of handling complex biophotonic sensing architectures. This simulation method has been tested and compared against previous experimental results and FDTD models. Moreover, the methodology can be used to efficiently design and optimize any structure acting as a BICELL. In particular, six different BICELL types have been optimized for this article. To carry out this optimization we have considered three figures of merit: optical sensitivity, Q-factor, and signal amplitude. The final objective of this paper is not only to validate a suitable and efficient optical simulation methodology but also to demonstrate its capability for analyzing the performance of a given number of BICELLs for label-free biosensing.
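The three figures of merit can be made concrete with a synthetic resonance: from a reflectance dip one extracts the Q-factor as the resonance wavelength over the FWHM, the resonance shift upon binding as a proxy for sensitivity, and the dip depth as the signal amplitude. The Lorentzian spectrum below is synthetic and purely illustrative, not data from the paper.

```python
import numpy as np

wl = np.linspace(1540.0, 1560.0, 4001)          # wavelength grid, nm

def dip(center, fwhm=0.8, depth=0.6):
    """Synthetic Lorentzian resonance dip in reflectance."""
    return 1.0 - depth / (1.0 + ((wl - center) / (fwhm / 2)) ** 2)

r0 = dip(1550.0)                                # bare sensor
r1 = dip(1550.5)                                # after biolayer binding

res0, res1 = wl[np.argmin(r0)], wl[np.argmin(r1)]
half = (r0.min() + 1.0) / 2.0                   # half-depth level
fwhm = np.ptp(wl[r0 <= half])                   # width of the dip

q_factor = res0 / fwhm                          # Q = lambda_res / FWHM
shift = res1 - res0                             # resonance shift, nm
amplitude = 1.0 - r0.min()                      # signal amplitude

print(f"Q = {q_factor:.0f}, shift = {shift:.2f} nm, amp = {amplitude:.2f}")
```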
Abstract:
Renewable energy hybrid systems and mini-grids for the electrification of rural areas are known to be reliable and more cost-efficient than grid extension or diesel-only systems. However, some uncertainty remains in certain areas; for example: which is the most efficient way of coupling hybrid systems, AC, DC, or AC-DC? Using Matlab/Simulink, a mini-grid connecting a school, a small hospital, and an ecotourism hostel has been modelled. This same mini-grid has been coupled in the different possible ways and the system's efficiency has been studied. In addition, while keeping the overall consumption constant, the generation sources and the consumption profile have been modified and the effect on the efficiency under each configuration has been analysed. Finally, different weather profiles have been introduced and, again, the effect on the efficiency of each system has been observed.
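The coupling comparison can be sketched as a product of converter efficiencies along each power path: AC-, DC- and AC-DC-coupled layouts differ in how many conversion stages sit between generation, battery, and load. The stage lists and efficiency values below are hypothetical placeholders, not results from the study.

```python
ETA = {"mppt": 0.97, "inverter": 0.95, "rectifier": 0.95, "battery": 0.90}

def path_efficiency(stages):
    """Delivered energy fraction along a chain of conversion stages."""
    eff = 1.0
    for s in stages:
        eff *= ETA[s]
    return eff

# PV -> battery -> AC load under two couplings (illustrative stage lists):
dc_coupled = path_efficiency(["mppt", "battery", "inverter"])
ac_coupled = path_efficiency(["mppt", "inverter", "rectifier",
                              "battery", "inverter"])
print(f"DC-coupled: {dc_coupled:.1%}, AC-coupled: {ac_coupled:.1%}")
```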
Abstract:
In SSL general illumination, there is a clear trend toward high-flux packages with higher efficiency and higher CRI, addressed through the use of multiple color chips and phosphors. However, such light sources require that the optics provide color mixing, both in the near field and in the far field. This design problem is especially challenging for collimated luminaires, in which diffusers (which dramatically reduce the brightness) cannot be applied without enlarging the exit aperture too much. In this work we present the first injection-molded prototypes of a novel primary shell-shaped optic with microlenses on both sides that provide Köhler integration. The shell is designed so that, when placed on top of an inhomogeneous multichip Lambertian LED, it creates a highly homogeneous virtual source (i.e., spatially and angularly mixed), also Lambertian, located in the same position with only a small increase in size (about 10-20%, so the average brightness is similar to that of the source). This shell-mixer device is very versatile and now permits the use of a lens or a reflector as secondary optics to collimate the light as desired, without color-separation effects. Experimental measurements have shown an optical efficiency of the shell of 95% and highly homogeneous angular intensity distributions of the collimated beams, in good agreement with ray-tracing simulations.
Abstract:
LEDs are replacing fluorescent and incandescent bulbs as illumination sources due to their low power consumption and long lifetime. Visible Light Communications (VLC) makes use of the LEDs' short switching times to transmit information. Although LED switching speeds are in the Mbps range, higher speeds (hundreds of Mbps) can be reached by using high bandwidth-efficiency modulation techniques. However, the use of these techniques requires a more complex driver, which drastically increases its power consumption. In this work, an energy-efficiency analysis of the different VLC modulation techniques and drivers is presented. In addition, the design of new VLC driver schemes is described.
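A natural metric for such an analysis is the driver energy per transmitted bit, E_b = P_driver / R. The scheme names are real modulation families, but the power and rate figures below are hypothetical placeholders, chosen only to show how a fast but power-hungry driver can lose to a simpler, slower scheme on energy per bit.

```python
schemes = {
    # name: (driver power in watts, data rate in bits/s) -- hypothetical
    "OOK":      (0.5, 10e6),
    "DCO-OFDM": (8.0, 100e6),
}
for name, (power_W, rate_bps) in schemes.items():
    print(f"{name}: {power_W / rate_bps * 1e9:.0f} nJ/bit")
```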
Abstract:
Modules are an important part of the CPV system. In pursuing our objective of a 35%-efficiency module, we need to look for a significant improvement in the state of the art of CPV modules, since no commercial module is capable of achieving that efficiency. Achieving it will require high-efficiency cells, progress in the lens optics implemented in these modules, and careful integration into the module. A basic design for a 35% CPV module is presented, with practical and rapid industrial application in mind. The output is 385 W, while the weight is only 18 kg. In spite of its high concentration ratio of 1,000×, its acceptance angle is as high as 1.1 degrees.
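The quoted optical figures can be cross-checked with the standard concentration-acceptance product, CAP = sqrt(Cg) * sin(alpha), which for a given optic is bounded by the refractive index of the medium at the receiver. The check below uses only the numbers from the abstract.

```python
import math

Cg = 1000.0                 # geometric concentration, 1,000x
alpha = math.radians(1.1)   # acceptance angle

cap = math.sqrt(Cg) * math.sin(alpha)
print(f"CAP = {cap:.2f}")   # ~0.61, well within physically feasible values
```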