993 results for ink reduction software


Relevance:

30.00%

Publisher:

Abstract:

Objectives: Based on a maxillary premolar restored with a laminate veneer and using 3-D finite element analysis (FEA) and micro-CT (μCT) data, the aim of this study was to evaluate the influence of different types of buccal cusp reduction on the stress distribution in the porcelain laminate veneer and in the resin luting cement layer. Methods: Two 3-D FEA models (M) of a maxillary premolar were built from μCT data. The buccal cusp reduction followed two configurations: Mt, buccal cusp completely covered by the porcelain laminate veneer; and Mp, buccal cusp partially covered by the porcelain laminate veneer. The load (150 N at 45°) was applied on the top of the buccal cusp. Finite element software (Ansys Workbench 10.0) was used to obtain the maximum shear stress (τmax) and the maximum principal stress (σmax). Results: Mp showed reduced stress (σmax) in the porcelain laminate veneer (from -2.3 to 24.5 MPa) in comparison with Mt (from -5.3 to 27.4 MPa). The difference between the peak and lowest σmax values in Mp (-6.8 to 26.7 MPa) and Mt (-5.3 to 27.4 MPa) was similar for the resin luting cement layer. The structures did not exceed the ultimate tensile strength or the shear bond strength. Conclusions: Cusp reduction did not cause a significant increase in σmax and τmax. Mt showed a better stress distribution (τmax) than Mp. © 2011 Published by Elsevier Ireland on behalf of Japan Prosthodontic Society.
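For readers less familiar with the two stress measures reported above, the following is a minimal sketch (not taken from the study; the tensor values are hypothetical) of how the maximum principal stress and the maximum shear stress can be obtained from a nodal 3-D stress tensor: the principal stresses are the eigenvalues of the tensor, and the maximum shear stress is half the spread between the largest and smallest of them.

```python
import numpy as np

# Hypothetical symmetric stress tensor at one node, in MPa (illustration only).
sigma = np.array([
    [24.5,  3.0,  1.5],
    [ 3.0, -2.3,  0.8],
    [ 1.5,  0.8,  5.0],
])

principal = np.sort(np.linalg.eigvalsh(sigma))   # principal stresses, ascending
sigma_max = principal[-1]                         # maximum principal stress
tau_max = (principal[-1] - principal[0]) / 2.0    # maximum shear stress

print(f"sigma_max = {sigma_max:.2f} MPa, tau_max = {tau_max:.2f} MPa")
```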

Relevance:

30.00%

Publisher:

Abstract:

Transactional memory (TM) is a new synchronization mechanism devised to simplify parallel programming, thereby helping programmers to unleash the power of current multicore processors. Although software implementations of TM (STM) have been extensively analyzed in terms of runtime performance, little attention has been paid to an equally important constraint faced by nearly all computer systems: energy consumption. In this work we conduct a comprehensive study of energy and runtime trade-offs in software transactional memory systems. We characterize the behavior of three state-of-the-art lock-based STM algorithms, along with three different conflict resolution schemes. As a result of this characterization, we propose a DVFS-based technique that can be integrated into the resolution policies so as to improve the energy-delay product (EDP). Experimental results show that our DVFS-enhanced policies are indeed beneficial for applications with high contention levels. Improvements of up to 59% in EDP can be observed in this scenario, with an average EDP reduction of 16% across the STAMP workloads. © 2012 IEEE.
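As a side note, here is a minimal sketch (with hypothetical numbers, not the STAMP measurements) of the energy-delay product metric that the proposed DVFS-based technique optimizes: EDP = energy × execution time, so lower is better.

```python
# Hypothetical baseline and DVFS-enhanced runs, purely for illustration.
def edp(energy_joules: float, runtime_seconds: float) -> float:
    """Energy-delay product: penalizes both energy use and slowdown."""
    return energy_joules * runtime_seconds

baseline = edp(energy_joules=120.0, runtime_seconds=2.0)   # hypothetical baseline run
dvfs     = edp(energy_joules=95.0,  runtime_seconds=2.1)   # hypothetical DVFS-enhanced run

improvement = 100.0 * (baseline - dvfs) / baseline
print(f"EDP reduction: {improvement:.1f}%")  # ~16.9% with these made-up numbers
```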

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: to evaluate the difference in perception between orthodontists and laypersons regarding the reduction of dentogingival exposure in the smile. METHODS: in total, 60 evaluators of both sexes (30 laypersons and 30 orthodontists) assessed photographs of the spontaneous smile of two individuals, one male and one female. Starting from the original images, smile height was modified using image-manipulation software. The examiners assigned scores from 0 to 10 according to the level of attractiveness. The reproducibility of the method was examined with the Wilcoxon test, while the Friedman and Wilcoxon tests (P < 0.05) were used to assess intra- and inter-examiner differences, respectively. RESULTS: the results showed no difference between the groups of evaluators regarding aesthetics when the height of both smiles was modified. However, the male individual's smile was less well accepted than the female smile. A slight reduction in dentogingival exposure in the smile (2 mm) was not perceived by either laypersons or orthodontists (P > 0.05). CONCLUSION: the female individual's smile received higher scores than the male's; however, samples with a larger number of individuals in each group are needed to confirm whether this observation is related to the sex of the individual examined.
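As an illustration of the statistical procedure described above (the data are simulated, not the study's scores), the sketch below applies SciPy's Friedman and Wilcoxon tests to hypothetical 0-10 ratings for three smile-height conditions.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
# Hypothetical scores (0-10) from 30 raters for three smile-height conditions.
original  = rng.integers(5, 10, size=30)
minus_2mm = rng.integers(5, 10, size=30)
minus_4mm = rng.integers(3, 9,  size=30)

stat, p = friedmanchisquare(original, minus_2mm, minus_4mm)
print(f"Friedman test across the three conditions: p = {p:.3f}")

stat, p = wilcoxon(original, minus_2mm, zero_method="zsplit")  # paired comparison
print(f"Wilcoxon (original vs -2 mm): p = {p:.3f}")
```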

Relevance:

30.00%

Publisher:

Abstract:

Introduction: The aim of this study was to assess the influence of curing time and power on the degree of conversion and surface microhardness of 3 orthodontic composites. Methods: One hundred eighty discs, 6 mm in diameter, were divided into 3 groups of 60 samples according to the composite used (Transbond XT [3M Unitek, Monrovia, Calif], Opal Bond MV [Ultradent, South Jordan, Utah], or Transbond Plus Color Change [3M Unitek]), and each group was further divided into 3 subgroups (n = 20). Five samples were used to measure conversion, and 15 were used to measure microhardness. A light-emitting diode curing unit with broad multiwavelength emission was used for curing at 3 power levels (530, 760, and 1520 mW) and 3 times (8.5, 6, and 3 seconds), always totaling 4.56 joules. Five specimens from each subgroup were ground and mixed with potassium bromide to produce 8-mm tablets to be compared with 5 others made similarly with the respective noncured composite. These were placed into a spectrometer, and software was used for analysis. A microhardness tester was used to take Knoop hardness (KHN) measurements in 15 discs of each subgroup. The data were analyzed with 2 analysis of variance tests at 2 levels. Results: Differences were found in the degree of conversion of the composites cured at different times and powers (P < 0.01). The composites showed similar degrees of conversion when light cured for 8.5 seconds (80.7%) and 6 seconds (79.0%), but not for 3 seconds (75.0%). The degrees of conversion of the composites were different, with group 3 (87.2%) higher than group 2 (83.5%), which was higher than group 1 (64.0%). Differences in microhardness were also found (P < 0.01), with lower microhardness at 8.5 seconds (35.2 KHN) but no difference between 6 seconds (41.6 KHN) and 3 seconds (42.8 KHN). Group 3 had the highest surface microhardness (35.9 KHN) compared with group 2 (33.7 KHN) and group 1 (30.0 KHN). Conclusions: Curing time can be reduced to 6 seconds by increasing the power, with a slight decrease in the degree of conversion at 3 seconds; the reduction in curing time has a positive effect on the surface microhardness.
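As a quick arithmetic check (using only the power and time settings reported above), the sketch below shows how the three power/time pairs deliver an approximately constant radiant energy, which is what allows curing time to be traded against power.

```python
# Radiant energy = power * time; the three settings target roughly the same dose (~4.5 J).
settings = [(530, 8.5), (760, 6.0), (1520, 3.0)]  # (power in mW, time in s)

for power_mw, time_s in settings:
    energy_j = power_mw / 1000.0 * time_s
    print(f"{power_mw} mW x {time_s} s = {energy_j:.2f} J")
# 530 mW x 8.5 s = 4.51 J; 760 mW x 6.0 s = 4.56 J; 1520 mW x 3.0 s = 4.56 J
```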

Relevance:

30.00%

Publisher:

Abstract:

Distributed Software Development (DSD) is a development strategy that addresses globalization-driven demands for increased productivity and cost reduction. However, temporal distance, geographical dispersion, and socio-cultural differences introduce additional challenges and, in particular, new requirements related to the communication, coordination, and control of projects. Among these new demands is the need for a software process that provides adequate support to distributed software development. This paper presents an integrated software development and testing approach that takes the peculiarities of distributed teams into account. The purpose of the approach is to support DSD by providing better project visibility, improving communication between the development and test teams, and reducing ambiguity and the difficulty of understanding artifacts and activities. This integrated approach was conceived on four pillars: (i) identifying the DSD peculiarities related to development and test processes; (ii) defining the elements needed to compose an integrated development and test approach that supports distributed teams; (iii) describing and specifying the workflows, artifacts, and roles of the approach; and (iv) representing the approach appropriately to enable its effective communication and understanding.

Relevance:

30.00%

Publisher:

Abstract:

Dimensionality reduction is employed in visual data analysis as a way of obtaining reduced spaces for high-dimensional data or of mapping data directly into 2D or 3D spaces. Although techniques have evolved to improve data segregation in reduced or visual spaces, they have limited capabilities for adjusting the results according to the user's knowledge. In this paper, we propose a novel approach for handling both dimensionality reduction and visualization of high-dimensional data that takes the user's input into account. It employs Partial Least Squares (PLS), a statistical tool for retrieving latent spaces that focus on the discriminability of the data. The method uses a training set to build a highly precise model that can then be applied very effectively to a much larger data set. The reduced data set can be displayed using various existing visualization techniques. The training data are essential for encoding the user's knowledge into the loop; however, this work also devises a strategy for computing PLS reduced spaces when no training data are available. The approach produces increasingly precise visual mappings as the user feeds back his or her knowledge, and it is capable of working with small and unbalanced training sets.
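A minimal sketch of the core idea (using scikit-learn's PLSRegression rather than the authors' implementation; the data are synthetic): a small labelled training set defines a discriminative 2-D latent space, and the much larger unlabelled set is then projected into it for visualization.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 20))            # hypothetical labelled training samples
y_train = rng.integers(0, 2, size=50)          # class labels steer the latent space

pls = PLSRegression(n_components=2)
pls.fit(X_train, y_train)                      # learn a discriminative 2-D projection

X_large = rng.normal(size=(5000, 20))          # much larger unlabelled data set
coords_2d = pls.transform(X_large)             # 2-D coordinates ready for plotting
print(coords_2d.shape)                         # (5000, 2)
```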

Relevance:

30.00%

Publisher:

Abstract:

This study deals with the reduction of stiffness in precast concrete structural elements of multi-storey buildings for the analysis of global stability. After a review of the technical literature, this paper presents the stiffness reduction values indicated in different codes, standards, and recommendations and compares them to the values found in the present study. The structural model analyzed in this study was built with finite elements using ANSYS® software. Physical non-linearity (PNL) was considered through the M × N × 1/r diagrams, and geometric non-linearity (GNL) was handled with the Newton-Raphson method. Using a typical precast concrete structure with multiple floors and semi-rigid beam-to-column connections, expressions for a stiffness reduction coefficient are presented. The main conclusions of the study are as follows: the reduction coefficients obtained from the M × N × 1/r diagrams differ from those of standards that use a simplified treatment of PNL; the stiffness reduction coefficient for columns in the arrangements analyzed was approximately 0.5 to 0.6; and the stiffness reduction coefficient found for concrete beams subjected to creep, with creep coefficients ranging from 0 to 3, varied from 0.45 to 0.2 for positive bending moments and from 0.3 to 0.2 for negative bending moments.
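As an illustration of the Newton-Raphson scheme mentioned for the geometric non-linearity (a one-degree-of-freedom sketch with a hypothetical softening response, not the paper's finite element model): the displacement is iterated until the internal force balances the applied load, using the tangent stiffness at each step.

```python
def internal_force(u: float) -> float:
    return 100.0 * u - 15.0 * u**2          # hypothetical softening response R(u)

def tangent_stiffness(u: float) -> float:
    return 100.0 - 30.0 * u                 # derivative dR/du

def newton_raphson(load: float, u0: float = 0.0, tol: float = 1e-8) -> float:
    """Find u such that internal_force(u) == load."""
    u = u0
    for _ in range(50):
        residual = load - internal_force(u)
        if abs(residual) < tol:
            break
        u += residual / tangent_stiffness(u)  # Newton update with the tangent stiffness
    return u

print(newton_raphson(load=120.0))           # converged displacement (about 1.57 here)
```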

Relevance:

30.00%

Publisher:

Abstract:

The development of a multibody model of a motorbike engine cranktrain is presented in this work, with an emphasis on flexible component model reduction. A modelling methodology based upon the adoption of non-ideal joints at interface locations, and on the inclusion of component flexibility, is developed: both are necessary if one wants to capture the dynamic effects that arise in lightweight, high-speed applications. With regard to the first topic, both a ball bearing model and a journal bearing model are implemented in order to properly capture the dynamic effects of the main connections in the system: angular contact ball bearings are modelled according to a five-DOF nonlinear scheme to capture the behaviour of the crankshaft main bearings, while an impedance-based hydrodynamic bearing model provides enhanced operation prediction at the conrod big-end locations. Concerning the second matter, flexible models of the crankshaft and the connecting rod are produced. The well-established Craig-Bampton reduction technique is adopted as a general framework to obtain reduced model representations suitable for the subsequent multibody analyses. A particular component mode selection procedure is implemented, based on the concept of Effective Interface Mass, allowing an assessment of the accuracy of the reduced models prior to the nonlinear simulation phase. In addition, a procedure to alleviate the effects of modal truncation, based on the Modal Truncation Augmentation approach, is developed. In order to assess the performance of the proposed modal reduction schemes, numerical tests are performed on the crankshaft and conrod models in both the frequency and modal domains. A multibody model of the cranktrain is eventually assembled and simulated using commercial software. Numerical results are presented, demonstrating the effectiveness of the implemented flexible model reduction techniques. The advantages over the conventional frequency-based truncation approach are discussed.
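The following is a compact sketch of the Craig-Bampton reduction described above (a generic dense-matrix implementation for illustration, not the thesis code): interior degrees of freedom are condensed through static constraint modes plus a truncated set of fixed-interface normal modes.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Return reduced stiffness/mass matrices and the transformation T."""
    all_dofs = np.arange(K.shape[0])
    interior = np.setdiff1d(all_dofs, boundary)

    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]

    # Static constraint modes: interior response to unit boundary displacements.
    psi = -np.linalg.solve(Kii, Kib)

    # Fixed-interface normal modes: keep the lowest n_modes eigenvectors.
    _, phi = eigh(Kii, Mii)
    phi = phi[:, :n_modes]

    # Transformation [u_b; u_i] = T @ [u_b; q_modal], DOFs ordered [boundary, interior].
    nb, ni = len(boundary), len(interior)
    T = np.zeros((nb + ni, nb + n_modes))
    T[:nb, :nb] = np.eye(nb)
    T[nb:, :nb] = psi
    T[nb:, nb:] = phi

    order = np.r_[boundary, interior]
    K_ord = K[np.ix_(order, order)]
    M_ord = M[np.ix_(order, order)]
    return T.T @ K_ord @ T, T.T @ M_ord @ T, T

# Tiny demo on a 4-DOF spring chain, keeping 1 fixed-interface mode.
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
M = np.eye(4)
Kr, Mr, T = craig_bampton(K, M, boundary=np.array([0]), n_modes=1)
print(Kr.shape)   # (2, 2): 1 boundary DOF + 1 modal coordinate
```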

Relevance:

30.00%

Publisher:

Abstract:

Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and the application of practical software tools and efficient algorithms from the field of computer science, aiming at the goal of enabling atmospheric scientists to analyse and to gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining at runtime data from different sources, a variety of different data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, from which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of a sequence of images for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and handle complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
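As a simplified illustration of the data-reduction effect of segmentation (plain connected-component labelling with SciPy on a synthetic field, not the thesis algorithm): thresholding a 3-D field, labelling the connected regions and keeping only per-feature summaries is far more compact than the raw grid.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
wind_speed = rng.normal(loc=20.0, scale=10.0, size=(40, 60, 80))  # hypothetical 3-D field

mask = wind_speed > 40.0                        # candidate high-wind cells
labels, n_features = ndimage.label(mask)        # 3-D connected-component labelling

# Per-feature summaries (size and centre of mass) instead of the full grid.
idx = range(1, n_features + 1)
sizes = ndimage.sum(mask, labels, index=idx)
centres = ndimage.center_of_mass(mask, labels, index=idx)
print(n_features, sizes[:3], centres[:3])
```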

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE Computed tomography (CT) accounts for more than half of the total radiation exposure from medical procedures, which makes dose reduction in CT an effective means of reducing radiation exposure. We analysed the dose reduction that can be achieved with a new CT scanner [Somatom Edge (E)] that incorporates new developments in hardware (detector) and software (iterative reconstruction). METHODS We compared weighted volume CT dose index (CTDIvol) and dose-length product (DLP) values of 25 consecutive patients each, studied with non-enhanced standard brain CT on the new scanner and on two previous models: a 64-row multi-detector CT (MDCT) scanner (S64) and a 16-row MDCT scanner (S16). We analysed signal-to-noise and contrast-to-noise ratios in images from the three scanners, and three neuroradiologists performed a quality rating to determine whether the dose reduction techniques still yield sufficient diagnostic quality. RESULTS The CTDIvol of scanner E was 41.5 and 36.4% lower than the values of scanners S16 and S64, respectively; the DLP values were 40 and 38.3% lower. All differences were statistically significant (p < 0.0001). Signal-to-noise and contrast-to-noise ratios were best for S64; these differences also reached statistical significance. Image analysis, however, showed "non-inferiority" of scanner E regarding image quality. CONCLUSIONS This first experience with the new scanner shows that the new dose reduction techniques allow for up to 40% dose reduction while still maintaining image quality at a diagnostically usable level.
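For clarity, the sketch below shows the two quantities compared above, the percentage dose reduction between scanners and the contrast-to-noise ratio; all numbers are hypothetical placeholders, not the study's measurements.

```python
def percent_reduction(reference: float, new: float) -> float:
    """How much lower the new value is, as a percentage of the reference."""
    return 100.0 * (reference - new) / reference

ctdi_vol = {"S16": 60.0, "S64": 55.0, "E": 35.0}   # hypothetical CTDIvol values in mGy
for ref in ("S16", "S64"):
    print(f"E vs {ref}: {percent_reduction(ctdi_vol[ref], ctdi_vol['E']):.1f}% lower")

def cnr(signal_a: float, signal_b: float, noise_sd: float) -> float:
    """Contrast-to-noise ratio between two tissues, in noise standard deviations."""
    return abs(signal_a - signal_b) / noise_sd

print(cnr(signal_a=38.0, signal_b=30.0, noise_sd=3.5))  # hypothetical HU values
```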

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE To evaluate the role of an ultra-low-dose dual-source CT coronary angiography (CTCA) scan with high pitch for delimiting the range of the subsequent standard CTCA scan. METHODS 30 patients with an indication for CTCA were prospectively examined using a two-scan dual-source CTCA protocol (2.0 × 64.0 × 0.6 mm; pitch, 3.4; rotation time, 280 ms; 100 kV): Scan 1 was acquired with one-fifth of the tube current suggested by the automatic exposure control software [CareDose 4D™ (Siemens Healthcare, Erlangen, Germany), using 100 kV and 370 mAs as reference], with the scan length extending from the tracheal bifurcation to the diaphragmatic border. Scan 2 was acquired with the standard tube current and a reduced scan length based on Scan 1. Nine central coronary artery segments were analysed qualitatively on both scans. RESULTS Scan 2 (105.1 ± 10.1 mm) was significantly shorter than Scan 1 (127.0 ± 8.7 mm). Image quality scores were significantly better for Scan 2. However, in 5 of 6 (83%) patients with stenotic coronary artery disease, a stenosis was already detected in Scan 1, and in 13 of 24 (54%) patients with non-stenotic coronary arteries, stenosis was already excluded by Scan 1. Using Scan 2 as the reference, the positive and negative predictive values of Scan 1 were 83% (5 of 6 patients) and 100% (13 of 13 patients), respectively. CONCLUSION An ultra-low-dose CTCA planning scan enables a reliable scan-length reduction of the following standard CTCA scan and allows for a correct diagnosis in a substantial proportion of patients. ADVANCES IN KNOWLEDGE Further dose reductions are possible owing to a change in the individual patient's imaging strategy, as a prior ultra-low-dose CTCA scan may already rule out the presence of a stenosis or may lead to direct referral to an invasive catheter procedure.
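Using only the patient counts reported above, the predictive values of Scan 1 can be reproduced as follows (a small worked check, with Scan 2 as the reference standard):

```python
def ppv(true_pos: int, false_pos: int) -> float:
    """Positive predictive value: confirmed positives among all positive calls."""
    return true_pos / (true_pos + false_pos)

def npv(true_neg: int, false_neg: int) -> float:
    """Negative predictive value: confirmed negatives among all negative calls."""
    return true_neg / (true_neg + false_neg)

# Counts as reported in the abstract: PPV based on 5 of 6, NPV based on 13 of 13.
print(f"PPV = {ppv(true_pos=5, false_pos=1):.0%}")    # 83%
print(f"NPV = {npv(true_neg=13, false_neg=0):.0%}")   # 100%
```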

Relevance:

30.00%

Publisher:

Abstract:

Nowadays there are great expectations regarding the introduction of new tools and methods for software product development, which will allow, in the near future, an engineering approach to the software production process. The new methodologies now emerging imply an integral approach to the problem, covering all stages of the production scheme. However, the degree of automation achieved in the system construction process is very low and is concentrated in the last phases of the software life cycle, so the cost reduction obtained is barely significant and, more importantly, the quality of the resulting software products is not guaranteed. This thesis defines a structured software development methodology that can be automated, that is, a CASE methodology. The methodology presented conforms to the CASE development cycle model, which consists of the analysis, design and testing phases, and its field of application is information systems. First, the basic principles on which the CASE methodology rests are established. Then, since the methodology starts by fixing the objectives of the company that requires an information system, techniques are used for gathering and validating information, which at the same time provide an easy communication language between end users and developers. These same techniques also describe all the system requirements completely, consistently and unambiguously. Likewise, a set of techniques and algorithms is presented so that, starting from the system requirements specification, both the logical design of the Process Model and that of the Data Model are automated, with both models validated against the previous requirements specification. Finally, formal procedures are defined that indicate the set of activities to be carried out in the construction process and how to carry them out, thus achieving integrity across the different stages of the development process.

Relevance:

30.00%

Publisher:

Abstract:

Concentrating solar power ("energía termosolar de concentración" in Spanish) is a technology based on capturing the thermal power of solar radiation in a way that makes it possible to reach temperatures able to drive a conventional (or advanced) thermodynamic cycle to generate electricity; the future of this technology mainly depends on the ability to concentrate solar radiation cheaply and efficiently. The present thesis is focused on highlighting and helping to solve some of the important issues related to this problem. The need to reduce the cost of concentrating direct solar radiation, without jeopardizing the thermodynamic objective of heating a fluid up to the required temperature, is of prime importance. Linear Fresnel collectors have been identified in the scientific literature as a technology with high potential to achieve this cost reduction. This technology has been selected for a number of reasons, particularly the degrees of freedom of this type of concentrating configuration and its currently immature state. In order to respond to this challenge, a very detailed study of the optical properties of linear Fresnel collectors has been carried out, combining analytic and numerical methods. First, the effect of the design variables on the ratio of energy impinging onto the reflecting surface has been studied using analytically developed equations, together with models that predict the location and direct normal irradiance of the sun at any moment. Similarly, errors due to off-axis aberration, to the aperture of the reflected energy beam, and to shading and blocking effects have been obtained analytically. This has allowed the comparison of different mirror shapes (flat, cylindrical or parabolic), as well as a preliminary optimization of the location and width of mirrors and receiver with no need for time-consuming numerical models. Second, in order to verify the validity of the analytic results, but also because the study of the reflection process is not precise enough when using analytic equations, a Monte Carlo ray-tracing model has been developed. The code is designed specifically for linear Fresnel collectors, which reduces the computing time by several orders of magnitude compared to more general commercial software; this justifies the development of a new code instead of purchasing a licence for an existing program. The model has first been used to compare radiation flux intensities and efficiencies of linear Fresnel collectors, both multitube-receiver and secondary-reflector receiver technologies, with those of parabolic trough collectors. Finally, the results obtained in the analytic study together with the numerical model have been used to optimize the solar field for different orientations (North-South and East-West), different locations (Almería and Aswan), different tilts of the field towards the Tropic (from 0 deg to 32 deg), and different minimum flux intensity requirements at the centre of the receiver (10 kW/m2 and 25 kW/m2). This work has led to several important findings that should be considered in the design of Fresnel solar fields. First, flat mirrors should not be used, as cylindrical and parabolic mirrors lead to higher flux intensities and efficiencies. Second, it has been concluded that, in locations relatively far from the Tropics such as Almería, East-West layouts are more efficient, while in locations closer to the Tropics such as Aswan, a North-South orientation leads to a higher annual efficiency. It should be noted that East-West oriented solar fields require approximately half the number of mirrors of North-South oriented fields, can be tilted towards the Equator in order to increase the efficiency, and attain similar values of flux intensity at the receiver every day at midday; on the other hand, in North-South layouts the flux intensity is more even over each single day. Finally, it has been shown that the use of analytically pre-optimized designs, with variable mirror width and variable spacing between mirrors across the field, can improve the energy generated per square meter of reflecting surface by up to 6%. The annual optical efficiency of parabolic troughs was found to be 23% higher than that of Fresnel fields in Almería, but only around 9% higher in Aswan. This implies that, in order to attain the same levelized cost of electricity as parabolic troughs, the required reduction of installation costs per square meter of mirror is in the range of 10-25%, and that linear Fresnel collectors are more suitable for low-latitude areas. As a consequence of the studies carried out in this thesis, an innovative storage system has been patented. This system takes into account the variation of the flux intensity at the receiver along the day, especially for East-West oriented solar fields; the invention would allow the impinging radiation to be exploited for a longer time, appreciably increasing the optical and thermal efficiencies.
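As a rough geometric illustration of the kind of relation the analytic and ray-tracing models above rely on (not taken from the thesis; the receiver height, mirror offsets and sign convention are arbitrary assumptions): each flat Fresnel mirror strip must be tilted so that the incoming ray is reflected onto the receiver, with the mirror normal bisecting the sun and receiver directions.

```python
import math

def mirror_tilt_deg(x: float, receiver_height: float, solar_elevation_deg: float) -> float:
    """Tilt (from horizontal) of a flat strip at offset x so that a ray arriving
    at the given solar elevation is reflected onto a receiver at (0, receiver_height)."""
    alpha = math.radians(solar_elevation_deg)   # direction towards the sun (assumed in the +x half-plane)
    beta = math.atan2(receiver_height, -x)      # direction from the mirror towards the receiver
    return math.degrees((alpha + beta) / 2.0) - 90.0  # normal bisects the two directions

receiver_h = 8.0                                 # hypothetical receiver height in metres
for x in (-6.0, -3.0, 0.0, 3.0, 6.0):            # hypothetical mirror offsets across the field in metres
    tilt = mirror_tilt_deg(x, receiver_h, solar_elevation_deg=60.0)
    print(f"x = {x:+.1f} m -> tilt = {tilt:+.1f} deg")  # sign indicates which way the strip leans
```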

Relevance:

30.00%

Publisher:

Abstract:

The technologization that society has experienced in recent decades brought a profusion of computerized machines and their operating systems. In this period, the industry developed sophisticated and expensive proprietary application software for the full use of these machines, which placed a large part of the market in the hands of a few multinational companies, among them Microsoft and others. However, the libertarian spirit of members of the scientific and hacker communities fostered the development of free and open-source software, which can be used as a broader social good and, above all, can evolve in the best collaborative spirit. The present work studies the two models of software production and compares them in order to make evident the qualities of each, their costs, returns, and possibilities of adoption. It projects the possibility that degree programmes in the field of communication could migrate to the free-software model, given the full qualities of this system, the radical cost reduction, and the finding that large segments of audiovisual production are adopting it. To this end, it compares the experiences of applying both systems in two communication courses, in their Radio and Television specializations. (AU)