937 results for Advanced mathematical thinking
Abstract:
This action research study of my 8th grade classroom investigated the use of mathematical communication, through oral homework presentations and written journal entries, and its impact on conceptual understanding of mathematics. The impact of this change in expectations on students’ attitudes towards mathematics was also investigated. Challenging my students to communicate mathematics both orally and in writing deepened their understanding of the mathematics. Levels of understanding deepened when a variety of instructional methods were presented and discussed, so that students could grasp the ideas that best suited their learning styles. Understanding also increased through probing questions that caused students to reflect on their learning and reevaluate their reasoning; this transpired when students were expected to write more than one draft of their math journal entries. By becoming aware of their understanding through communicating orally and in writing, students realized that true understanding did not come from mere homework completion, but from evaluating and assessing their own and others’ ideas and reasoning. I discovered that when students were challenged to communicate their reasoning both orally and in writing, they enjoyed math more and found it more fun. As a result of this research, I will continue to require students to communicate their thinking and reasoning both orally and in writing.
Abstract:
In this action research study of my sixth grade mathematics classroom, I investigated the communication of mathematics through both written and oral expression. Giving my students the opportunity to communicate mathematics in writing and orally helped deepen their understanding of mathematics. The students’ levels of comprehension increased when they were presented with a variety of instructional methods. Through discussion and reflection, the students were able to find the methods that worked best for them and their ways of learning. Students’ understanding grew from probing questions that made them reflect on and re-evaluate their solutions. This learning took place when students were made aware of different solutions or approaches through the class discussions that were held. I discovered that when students are challenged to express their thinking both in writing and orally, they find that they can communicate their thinking in a new way. Some of my students were initially comfortable expressing their thoughts in only one of the two ways, but by the time the project was completed they all said that they enjoyed both, and some had changed their originally preferred way of doing mathematics. As a result of this research, I will continue to require students to communicate their thinking and reasoning both in writing and orally.
Abstract:
Abstract Background The criteria for organ sharing have developed into a system that prioritizes liver transplantation (LT) for patients with hepatocellular carcinoma (HCC) who have the highest risk of wait-list mortality. In some countries this model allows only patients within the Milan Criteria (MC, defined by the presence of a single nodule up to 5 cm, or up to three nodules none larger than 3 cm, with no evidence of extrahepatic spread or macrovascular invasion) to be evaluated for liver transplantation. This policy implies that some patients with HCC slightly more advanced than allowed by the current strict selection criteria will be excluded, even though LT for these patients might be associated with acceptable long-term outcomes. Methods We propose a mathematical approach to study the consequences of relaxing the MC for patients with HCC who do not comply with the current rules for inclusion on the transplantation candidate list. We consider overall 5-year survival rates compatible with those reported in the literature. We calculate the strategy that would minimize the total mortality of the affected population, that is, the total number of people in both groups of HCC patients who die within 5 years of the implementation of the strategy, either from post-transplantation death or from death due to the underlying HCC. We illustrate the analysis with a simulation of a theoretical population of 1,500 HCC patients with exponentially distributed tumor sizes. The parameter λ obtained from the literature was 0.3. As the total number of patients in these real samples was 327, this implied an average size of 3.3 cm and a 95% confidence interval of [2.9; 3.7]. The total number of available livers to be grafted was assumed to be 500. Results With 1,500 patients on the waiting list and 500 grafts available, we simulated the total number of deaths among both transplanted and non-transplanted HCC patients after 5 years as a function of the tumor size of transplanted patients. The total number of deaths decreases monotonically with tumor size, reaching a minimum at 7 cm and increasing thereafter. At a tumor size of 10 cm, the total mortality equals that at the 5 cm threshold of the Milan criteria. Conclusion We concluded that it is possible to include patients with tumor sizes up to 10 cm without increasing the total mortality of this population.
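The following is a minimal sketch of the kind of simulation the abstract describes. The population size (1,500), graft count (500) and exponential size distribution (λ = 0.3) are taken from the abstract; the 5-year survival curves and the smallest-tumor-first allocation rule are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(42)

N_PATIENTS, N_GRAFTS, LAMBDA = 1500, 500, 0.3
sizes = rng.exponential(1 / LAMBDA, N_PATIENTS)  # tumor sizes in cm, mean ~3.3

# Hypothetical 5-year survival curves (the paper uses literature-compatible
# values that the abstract does not reproduce): post-transplant survival
# decays with tumor size; untransplanted HCC survival is low.
def surv_lt(d):
    """P(alive at 5y | transplanted, tumor size d) -- assumption."""
    return np.clip(0.75 - 0.03 * d, 0.05, None)

SURV_NO_LT = 0.15  # P(alive at 5y | not transplanted) -- assumption

def total_deaths(threshold):
    """Expected 5-year deaths if patients up to `threshold` cm are eligible."""
    eligible = np.sort(sizes[sizes <= threshold])
    grafted = eligible[:N_GRAFTS]  # smallest tumors grafted first (assumption)
    deaths_lt = np.sum(1 - surv_lt(grafted))
    deaths_no = (N_PATIENTS - len(grafted)) * (1 - SURV_NO_LT)
    return deaths_lt + deaths_no

for t in [5, 7, 10]:  # Milan threshold, reported optimum, reported upper bound
    print(f"threshold {t:>2} cm: expected deaths = {total_deaths(t):.0f}")
```

Scanning `total_deaths` over a grid of thresholds reproduces the shape of the analysis: a mortality curve over the eligibility cutoff whose minimum identifies the least-harmful expansion of the criteria.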
Abstract:
[EN] Rigorous Mathematical Analysis in the Cauchy style was not accepted in a straightforward manner by the European mathematical community of the central years of the 19th century. On average, only around forty years after the 1821 Cours d'Analyse did Cauchy's treatment become a standard in the more mathematically advanced countries, as a paradigm that remained in use until the arithmetisation of Analysis by Weierstrass replaced it before the end of the century. In this paper the authors show how rigorous Mathematical Analysis à la Cauchy was adopted in Spain quite late (around 1880) and how, in some forty more years, the Weierstrassian formulation became the usual presentation in Spanish texts.
Abstract:
[EN] Background: DNA-damage assays, quantifying the initial number of DNA double-strand breaks induced by radiation, have been proposed as a predictive test for radiation-induced toxicity. Determination of radiation-induced apoptosis in peripheral blood lymphocytes by flow cytometry analysis has also been proposed as an approach for predicting normal tissue responses following radiotherapy. The aim of the present study was to explore the association between initial DNA damage, estimated by the number of double-strand breaks induced by a given radiation dose, and the observed radiation-induced apoptosis rates. Methods: Peripheral blood lymphocytes were taken from 26 consecutive patients with locally advanced breast carcinoma. Radiosensitivity of lymphocytes was quantified as the initial number of DNA double-strand breaks induced per Gy and per DNA unit (200 Mbp). Radiation-induced apoptosis at 1, 2 and 8 Gy was measured by flow cytometry using annexin V/propidium iodide. Results: Radiation-induced apoptosis increased with radiation dose, and the data fitted a semi-logarithmic mathematical model. A positive correlation was found among the radiation-induced apoptosis values at the different radiation doses: 1, 2 and 8 Gy (p < 0.0001 in all cases). The mean DSB/Gy/DNA unit obtained was 1.70 ± 0.83 (range 0.63-4.08; median 1.46). A statistically significant inverse correlation was found between initial DNA damage and radiation-induced apoptosis at 1 Gy (p = 0.034), with a similar trend at 2 Gy (p = 0.057) and 8 Gy (p = 0.067) after 24 hours of incubation. Conclusions: An inverse association was observed for the first time between these variables, both considered predictive factors for radiation toxicity.
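A semi-logarithmic model of the kind mentioned here treats apoptosis as linear in the logarithm of dose. The sketch below fits such a model to hypothetical per-patient data (the apoptosis fractions are made up for illustration; only the dose points 1, 2 and 8 Gy come from the abstract).

```python
import numpy as np

dose = np.array([1.0, 2.0, 8.0])          # Gy (from the study design)
apoptosis = np.array([0.12, 0.18, 0.31])  # annexin V+ fraction -- made-up values

# Semi-logarithmic fit: apoptosis ~ a + b * ln(dose)
b, a = np.polyfit(np.log(dose), apoptosis, 1)
print(f"apoptosis ≈ {a:.3f} + {b:.3f}·ln(dose)")
```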
Abstract:
Over the last years of my research, I focused my studies on different physiological problems. Together with my supervisors, I developed and improved different mathematical models in order to create valid tools useful for a better understanding of important clinical issues. The aim of all this work is to develop tools for learning and understanding cardiac and cerebrovascular physiology as well as pathology, generating research questions and developing clinical decision support systems useful for intensive care unit patients. I. ICP-model Designed for Medical Education We developed a comprehensive cerebral blood flow and intracranial pressure model to simulate and study the complex interactions in cerebrovascular dynamics caused by multiple simultaneous alterations, including normal and abnormal functional states of auto-regulation of the brain. Individual published equations (derived from prior animal and human studies) were implemented into a comprehensive simulation program. The normal physiological modelling included intracranial pressure, cerebral blood flow, blood pressure, and carbon dioxide (CO2) partial pressure. We also added external and pathological perturbations, such as head-up position and intracranial haemorrhage. The model performed in a clinically realistic way given inputs from published data on traumatized patients and from cases encountered by clinicians. The pulsatile nature of the output graphics was easy for clinicians to interpret. The manoeuvres simulated include changes of basic physiological inputs (e.g. blood pressure, central venous pressure, CO2 tension, head-up position, and respiratory effects on vascular pressures) as well as pathological inputs (e.g. acute intracranial bleeding, and obstruction of cerebrospinal outflow). Based on the results, we believe the model would be useful to teach complex relationships of brain haemodynamics and to study clinical research questions such as the optimal head-up position, the effects of intracranial haemorrhage on cerebral haemodynamics, and the best CO2 concentration to reach the optimal compromise between intracranial pressure and perfusion. We believe this model would be useful for both beginners and advanced learners. It could be used by practicing clinicians to model individual patients (entering the effects of needed clinical manipulations, and then running the model to test for optimal combinations of therapeutic manoeuvres). II. A Heterogeneous Cerebrovascular Mathematical Model Cerebrovascular pathologies are extremely complex, due to the multitude of factors acting simultaneously on cerebral haemodynamics. In this work, the mathematical model of cerebral haemodynamics and intracranial pressure dynamics, described in point I, is extended to account for heterogeneity in cerebral blood flow. The model includes the Circle of Willis, six regional districts independently regulated by autoregulation and CO2 reactivity, distal cortical anastomoses, venous circulation, the cerebrospinal fluid circulation, and the intracranial pressure-volume relationship. Results agree with data in the literature and highlight the existence of a monotonic relationship between transient hyperemic response and the autoregulation gain. During unilateral internal carotid artery stenosis, local blood flow regulation is progressively lost in the ipsilateral territory with the presence of a steal phenomenon, while the anterior communicating artery plays the major role in redistributing the available blood flow.
Conversely, distal collateral circulation plays a major role during unilateral occlusion of the middle cerebral artery. In conclusion, the model is able to reproduce several different pathological conditions characterized by heterogeneity in cerebrovascular haemodynamics and can not only explain generalized results in terms of the physiological mechanisms involved but, by individualizing parameters, may also represent a valuable tool to help with difficult clinical decisions. III. Effect of Cushing Response on Systemic Arterial Pressure. During cerebral hypoxic conditions, the sympathetic system causes an increase in arterial pressure (Cushing response), creating a link between the cerebral and the systemic circulation. This work investigates the complex relationships among cerebrovascular dynamics, intracranial pressure, the Cushing response, and short-term systemic regulation during plateau waves, by means of an original mathematical model. The model incorporates the pulsating heart, the pulmonary circulation and the systemic circulation, with an accurate description of the cerebral circulation and the intracranial pressure dynamics (the same model as in the first paragraph). Various regulatory mechanisms are included: cerebral autoregulation, local blood flow control by oxygen (O2) and/or CO2 changes, and sympathetic and vagal regulation of cardiovascular parameters by several reflex mechanisms (chemoreceptors, lung-stretch receptors, baroreceptors). The Cushing response has been described by assuming a dramatic increase in sympathetic activity to vessels during a fall in brain O2 delivery. With this assumption, the model is able to simulate the cardiovascular effects experimentally observed when intracranial pressure is artificially elevated and maintained at a constant level (arterial pressure increase and bradycardia). According to the model, these effects arise from the interaction between the Cushing response and the baroreflex response (secondary to the arterial pressure increase). Then, patients with severe head injury were simulated by reducing intracranial compliance and cerebrospinal fluid reabsorption. With these changes, oscillations with plateau waves developed. In these conditions, model results indicate that the Cushing response may have both positive effects, reducing the duration of the plateau phase via an increase in cerebral perfusion pressure, and negative effects, increasing the intracranial pressure plateau level, with a risk of greater compression of the cerebral vessels. This model may be of value in helping clinicians find the balance between the clinical benefits of the Cushing response and its shortcomings. IV. Comprehensive Cardiopulmonary Simulation Model for the Analysis of Hypercapnic Respiratory Failure We developed a new comprehensive cardiopulmonary model that takes into account the mutual interactions between the cardiovascular and respiratory systems along with their short-term regulatory mechanisms. The model includes the heart, the systemic and pulmonary circulations, lung mechanics, gas exchange and transport equations, and cardio-ventilatory control. Results show good agreement with published patient data for normoxic and hyperoxic hypercapnia simulations. In particular, simulations predict a moderate increase in mean systemic arterial pressure and heart rate, with almost no change in cardiac output, paralleled by a relevant increase in minute ventilation, tidal volume and respiratory rate.
The model can represent a valid tool for clinical practice and medical research, providing an alternative to purely experience-based clinical decisions. In conclusion, models are capable not only of summarizing current knowledge, but also of identifying missing knowledge. In the former case they can serve as training aids for teaching the operation of complex systems, especially if the model can be used to demonstrate the outcome of experiments. In the latter case they generate experiments to be performed to gather the missing data.
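One building block named above, the intracranial pressure-volume relationship, is classically monoexponential (Marmarou). The sketch below illustrates that single ingredient; the parameter values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Classic monoexponential intracranial pressure-volume relationship:
# ICP rises exponentially with added intracranial volume.
P0 = 10.0  # baseline ICP in mmHg -- assumption
E = 0.4    # brain elastance coefficient in 1/mL -- assumption

def icp(delta_v):
    """ICP (mmHg) after adding delta_v mL of intracranial volume."""
    return P0 * np.exp(E * delta_v)

for dv in [0.0, 2.0, 5.0]:
    print(f"+{dv:.0f} mL -> ICP = {icp(dv):.1f} mmHg")
```

The steepening of this curve is what makes a small haemorrhage tolerable at normal volumes but catastrophic once compliance is exhausted, the regime the plateau-wave simulations above explore.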
Abstract:
Many developing countries are facing a crisis in water management due to population growth, water scarcity, water contamination and the effects of the world economic crisis. Water distribution systems in developing countries face many challenges for efficient repair and rehabilitation, since information about the water network is very limited, which makes rehabilitation assessment planning very difficult. In developed countries, sufficient information and advanced technology make rehabilitation assessment easy. Developing countries have many difficulties in assessing their water networks, leading to system failure, deterioration of mains and poor water quality in the network due to pipe corrosion and deterioration. This limited information brings into focus the urgent need to develop economical rehabilitation assessment for water distribution systems, adapted to water utilities. The Gaza Strip is the first case study; it suffers from a severe shortage in water supply, environmental problems and contamination of groundwater resources. This research focuses on improving the water supply network to reduce water losses, based on a limited database, using ArcGIS and commercial water network software (WaterCAD). A new approach for the rehabilitation of water pipes is presented for the Gaza city case study. An integrated rehabilitation assessment model has been developed for water pipe rehabilitation, comprising three components: a hydraulic assessment model, a physical assessment model and a structural assessment model. A WaterCAD model integrated with ArcGIS was developed to produce the hydraulic assessment model for the water network. The model was designed around a pipe condition assessment with 100 score points as the maximum for pipe condition. The results of this model indicate that 40% of the water pipelines score fewer than 50 points and about 10% of the total pipe length scores fewer than 30 points. Using this model, rehabilitation plans for each region of Gaza city can be drawn up based on the available budget and the condition of the pipes. The second case study is Kuala Lumpur, representing semi-developed countries; it has been used to develop an approach to improve a water network under critical conditions using advanced statistical and GIS techniques. Kuala Lumpur (KL) has water losses of about 40% and a high failure rate, which constitutes a severe problem; this case can represent conditions in South Asian countries. Kuala Lumpur has faced big challenges in reducing water losses in its network during the last 5 years. One of these challenges is the high deterioration of asbestos cement (AC) pipes: more than 6,500 km of AC pipes need to be replaced, which requires a huge budget. Asbestos cement is subject to deterioration due to various chemical processes that either leach out the cement material or penetrate the concrete to form products that weaken the cement matrix. This case presents a geo-statistical approach for modelling pipe failures in a water distribution network. The database of the Syabas Company (the Kuala Lumpur water company) was used in developing the model. The statistical models have been calibrated, verified and used to predict failures for both networks and individual pipes. The mathematical formulation developed for failure frequency in Kuala Lumpur was based on different pipeline characteristics, reflecting several factors such as pipe diameter, length, pressure and failure history.
Generalized linear models have been applied to predict pipe failures at both District Meter Zone (DMZ) and individual pipe levels. Based on the Kuala Lumpur case study, several outputs and implications have been achieved. Correlations between spatial and temporal intervals of pipe failures have also been computed using ArcGIS software. A Water Pipe Assessment Model (WPAM) has been developed using the analysis of historical pipe failures in Kuala Lumpur, which prioritizes pipe rehabilitation candidates based on a ranking system. The Frankfurt water network in Germany is the third main case study; it provides an overview of the survival analysis and neural network methods used for water networks. Rehabilitation strategies for water pipes have been developed for the Frankfurt water network in cooperation with Mainova (the Frankfurt water company). This thesis also presents a methodology for the technical condition assessment of plastic pipes based on simple analysis. The thesis aims to contribute to improving the prediction of pipe failures in water networks using Geographic Information Systems (GIS) and Decision Support Systems (DSS). The output from the technical condition assessment model can be used to estimate future budget needs for rehabilitation and to identify high-priority pipes for replacement based on poor condition.
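A generalized linear model for failure counts of the kind described is typically a Poisson regression on pipe attributes. The sketch below shows the shape of such a fit on synthetic records; the column names and coefficients are hypothetical stand-ins for the Syabas data, which is not public.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic pipe records standing in for the utility database -- assumption.
rng = np.random.default_rng(0)
n = 200
pipes = pd.DataFrame({
    "diameter_mm": rng.choice([100, 150, 200, 300], n),
    "length_m": rng.uniform(50, 500, n),
    "pressure_bar": rng.uniform(1, 6, n),
})
# Synthetic failure counts so the example runs end to end.
rate = np.exp(-4 + 0.004 * pipes.length_m + 0.3 * pipes.pressure_bar
              - 0.002 * pipes.diameter_mm)
pipes["failures"] = rng.poisson(rate)

# Poisson GLM: log(expected failures) linear in pipe characteristics.
X = sm.add_constant(pipes[["diameter_mm", "length_m", "pressure_bar"]])
model = sm.GLM(pipes["failures"], X, family=sm.families.Poisson()).fit()
print(model.summary())
```

Ranking pipes by their predicted failure rate is then a natural basis for the prioritized rehabilitation lists the thesis describes.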
Abstract:
Wind energy has been one of the fastest growing sectors of the nation’s renewable energy portfolio for the past decade, and the same tendency is projected for the upcoming years given the aggressive governmental policies for the reduction of fossil fuel dependency. The so-called Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this broad acceptance, the size of wind turbines has increased exponentially over time. However, safety and economic concerns have emerged as a result of the new design tendencies for massive-scale wind turbine structures presenting high slenderness ratios and complex shapes, typically located in remote areas (e.g. offshore wind farms). In this regard, safe operation requires not only first-hand information regarding actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects and turbulence, among others, call for a more sophisticated mathematical framework that can properly handle all these sources of indetermination. The modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. To this end, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, data-driven numerical schemes, capable of predicting actual dynamic states (eigen-realizations) of traditional time-invariant dynamic systems. Consequently, a modified data-driven SI metric is proposed, based on the so-called Subspace Realization Theory, adapted here to stochastic, non-stationary and time-varying systems, as is the case of a HAWT’s complex aerodynamics. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components of which the wind turbine is made. In the long run, both the aerodynamic framework (theoretical model) and system identification (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating, also known as the Adaptive Simulated Annealing (ASA) process. This iterative engine is based on a set of function minimizations computed by a metric called the Modal Assurance Criterion (MAC). In summary, the Thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped-gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
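The MAC mentioned above is a standard, easily computed quantity: the normalized squared projection of one mode-shape vector onto another. A minimal implementation (toy vectors for illustration):

```python
import numpy as np

def mac(phi_i, phi_j):
    """Modal Assurance Criterion between two mode-shape vectors:
    MAC = |phi_i^H phi_j|^2 / ((phi_i^H phi_i)(phi_j^H phi_j)).
    Values near 1 indicate consistent mode shapes; near 0, unrelated ones."""
    num = abs(np.vdot(phi_i, phi_j)) ** 2
    return num / (np.vdot(phi_i, phi_i).real * np.vdot(phi_j, phi_j).real)

# Toy check: identical shapes give MAC = 1, orthogonal shapes give MAC = 0.
phi_a = np.array([1.0, 0.5, -0.2])
print(mac(phi_a, phi_a))                      # -> 1.0
print(mac(phi_a, np.array([0.5, -1.0, 0.0])))  # -> 0.0
```

In the model-updating loop described above, identified mode shapes (experimental) are compared against SFE-predicted ones, and the ASA search adjusts model parameters to drive the MAC values toward unity.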
Abstract:
This study assessed the effectiveness of an online mathematical problem solving course designed using a social constructivist approach for pre-service teachers. Thirty-seven pre-service teachers at the Batu Lintang Teacher Institute, Sarawak, Malaysia were randomly selected to participate in the study. The participants were required to complete the course online without the typical face-to-face classes, and to solve authentic mathematical problems in small groups of 4-5 participants based on Polya’s Problem Solving Model via asynchronous online discussions. Quantitative and qualitative methods such as questionnaires and interviews were used to evaluate the effects of the online learning course. Findings showed that a majority of the participants were satisfied with their learning experiences in the course. There were no significant changes in the participants’ attitudes toward mathematics, while the participants’ problem-solving skills for the “understand the problem” and “devise a plan” steps of Polya’s Model were significantly enhanced, though no improvement was apparent for “carry out the plan” and “review”. The results also showed significant improvements in the participants’ critical thinking skills. Furthermore, participants with higher initial computer skills showed higher performance in mathematical problem solving than those with lower computer skills. However, there were no significant differences in the participants’ achievements in the course based on gender. Overall, the online social constructivist mathematical problem solving course benefited the participants and ought to be given the attention it deserves as an alternative to traditional classes. Nonetheless, careful consideration needs to be given to the design and implementation of online courses to minimize the problems participants might encounter while taking them.
Abstract:
In spite of the movement to turn political science into a real science, various mathematical methods that are now staples of physics, biology, and even economics remain thoroughly uncommon in political science, especially in the study of civil war. This study seeks to apply such methods - specifically, ordinary differential equations (ODEs) - to model civil war based on what one might dub the capabilities school of thought, which roughly states that civil wars end only when one side’s ability to make war falls far enough to make peace truly attractive. I construct several different ODE-based models and test them all to see which best predicts the instantaneous capabilities of both sides of the Sri Lankan civil war in the period from 1990 to 1994, given parameters and initial conditions. The model the tests declare most accurate gives very accurate predictions of state military capabilities and reasonable short-term predictions of cumulative deaths. Analysis of the model reveals the scale of the importance of rebel finances to the sustainability of insurgency; most notably, the number of troops required to put down the Tamil Tigers falls by nearly a full order of magnitude when Tiger foreign funding is stopped. The study thus demonstrates that accurate foresight may come from relatively simple dynamical models, and implies the great potential of advanced and currently unconventional non-statistical mathematical methods in political science.
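The abstract does not reproduce the model equations, so the sketch below is a hedged illustration of the general shape such a capabilities model can take: a Lanchester-style attrition system in which a funding term sustains the rebels. The equations, parameters and initial conditions are all assumptions for illustration, not the study's fitted model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical capability-attrition system in the spirit described above:
#   dG/dt = -a*R        state capability eroded by rebel capability
#   dR/dt = -b*G + f    rebel capability eroded by the state,
#                       replenished by foreign funding f
a, b, f = 0.05, 0.08, 2.0  # illustrative parameters -- assumption

def rhs(t, y):
    G, R = y
    return [-a * R, -b * G + f]

sol = solve_ivp(rhs, (0, 48), [100.0, 60.0], t_eval=np.linspace(0, 48, 5))
for t, (G, R) in zip(sol.t, sol.y.T):
    print(f"month {t:4.0f}: state {G:6.1f}, rebels {R:6.1f}")
```

Setting f = 0 in a system like this and re-integrating is the kind of counterfactual the study runs when it asks what cutting Tiger foreign funding does to the troop levels needed to end the insurgency.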
Abstract:
Concentrating Photovoltaics (CPV) is one of the most promising ways of reducing the cost of energy collected from the sun. This is possible thanks to both very high-efficiency solar cells and a large decrease in the size of cells, which are made of costly semiconductor materials. Both issues are closely linked, since high efficiency values are only possible with expensive cell materials and technologies, implying a compulsory area reduction if cost-effectiveness is desired. The reduction in cell size requires that light coming from the sun be redirected (i.e. concentrated) towards the cell position. This is achieved by placing an optical concentrator system on top of the cell. These CPV concentrators consist of different optical elements manufactured from cheap materials in order to maintain low production costs. The optimal framework for the design of concentrators is nonimaging optics. Nonimaging optics was first developed in the 1960s and has been largely developed ever since. The aim of nonimaging devices is the efficient transfer of light power between the source and the receiver (sun and cell respectively, in the case of CPV), disregarding image formation. Nonimaging systems are usually simple, comprise fewer surfaces than imaging systems and are more tolerant to manufacturing errors. This renders nonimaging optics a fundamental tool, not only in the design of photovoltaic concentrators, but also in the design of other applications such as illumination, projection and wireless optical communications. Nonimaging optical concentrators are well suited for CPV applications because the goal is not the reproduction of an exact image of the sun (as imaging optics would provide), but simply the collection of its energy on the solar cell. Concentrators for CPV may present very different architectures and optical elements, resulting in a vast variety of possible designs. The first optical element that sunlight goes through is called the Primary Optical Element (POE) and is the most determinant element when defining the shape and properties of the whole concentrator. The POE can be either refractive (a lens) or reflective (a mirror). This thesis focuses on CPV systems based on Fresnel lenses as POE, which are thin and inexpensive refractive lenses able to concentrate sunlight.
Chapter 1 presents a short introduction to geometrical and nonimaging optics, explaining their fundamentals and basic concepts. Then, Köhler integration is presented in detail, explaining its principles, valid for both applications: CPV and illumination. An introduction to CPV fundamental concepts is also included in this chapter, analyzing the properties of multijunction solar cells and of the optical concentrators employed in CPV systems. The chapter closes with a description of the existing technologies employed for the manufacture of the optical elements composing the concentrator. Chapter 2 is mainly devoted to the design and development of the three advanced Fresnel Köhler optical concentrators presented in this thesis work: Fresnel-Köhler (FK), Dome-shaped Fresnel-Köhler (DFK) and Cavity Fresnel-Köhler (CFK). They all perform Köhler integration and comprise a Fresnel lens as their Primary Optical Element. Each of these CPV concentrators presents its own characteristics, properties and design procedure, and their performance covers all the key requirements for a concentrator: high concentration factor, large tolerances, high optical efficiency, uniform irradiance on the cell surface and low production cost. The FK and DFK concentrators present a 4-fold configuration in order to perform the Köhler integration. This means that POE and SOE are each divided into four symmetric sectors, each POE sector working in a pair with its corresponding SOE sector. The main difference between the two concentrators is that the POE of the FK is a flat Fresnel lens, while a dome-shaped (curved) Fresnel lens performs as the DFK’s POE. The CFK concentrator includes an integrated external confinement cavity, an optical element able to recover rays reflected by the cell surface so that they are re-absorbed by the cell. It increases the light absorption, entailing an increase in the efficiency of the module. Additionally, an alternative design method for faceted elements is also explained, especially suitable for dome-shaped lenses such as the POE of the DFK. Chapter 3 focuses on the characterization and experimental measurements of the optical concentrators presented in Chapter 2, describing their procedures. These procedures are in general applicable to any Fresnel-based concentrator and include three main types of experimental measurements: electrical efficiency, acceptance angle and irradiance uniformity at the solar cell plane. The results shown along this chapter validate, through outdoor measurements under real sun operation, the advanced characteristics presented by the Köhler concentrators, which are demonstrated in Chapter 2 through raytrace simulation: high optical efficiency, large acceptance angle, insensitivity to manufacturing tolerances and very good irradiance uniformity on the cell surface. Each concentrator (FK, DFK and CFK) is designed and optimized with realistic performance characteristics in mind. Their performance is modeled exhaustively using ray tracing combined with cell modeling, taking into account the major relevant factors. Tolerance is a critical issue in the manufacturing process if cost-effective mass-production systems are to be obtained. Concentrators with tight tolerances result in significant efficiency drops at array level caused by current mismatch among different modules (mainly due to manufacturing alignment errors).
In this sense, Section 3.5 presents two mathematical methods that estimate these mismatch losses for a given array just by analyzing its full-array I-V curve, making individual mono-module measurements unnecessary. Chapter 3 also describes the indoor characterization of the optical elements composing the concentrators, i.e. the Fresnel lenses acting as POEs and the free-form SOEs. The aim of this characterization is to assess the proper surface profiles and optical transmissions of the different elements analyzed, so that the expected module performance can be attained. This thesis is closed by Chapter 4, in which Köhler integration is presented as a good approach to obtain uniform distributions in Solid State Lighting applications (i.e. illumination with LEDs), being particularly effective when dealing with color mixing requirements. The chapter shows this through the particular example of a DFK concentrator, which was used for CPV applications in the previous chapters. An alternative known method for color mixing purposes (anomalous deflections) has also been used to design a thin aplanatic TIR lens. This lens fulfills conservation of étendue, thus ensuring no light blocking and no light dilution at the same time. Both approaches present clear advantages over the classical techniques employed in lighting to obtain uniform illumination distributions: diffusers and kaleidoscopic lightpipe mixing.
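For reference, the key figures of merit named above (concentration factor and acceptance angle) are tied together by the standard étendue-conservation bound of nonimaging optics; this is a general result, not a figure taken from the thesis:

```latex
\mathrm{CAP} \;=\; \sqrt{C_g}\,\sin\alpha \;\le\; n
```

where $C_g$ is the geometric concentration, $\alpha$ the acceptance half-angle, and $n$ the refractive index of the medium surrounding the cell. A high concentration-acceptance product (CAP) close to this limit is what allows a concentrator to combine a high concentration factor with large manufacturing tolerances.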
Abstract:
The present thesis is focused on the development of a thorough mathematical modelling and computational solution framework aimed at the numerical simulation of journal and sliding bearing systems operating under a wide range of lubrication regimes (mixed, elastohydrodynamic and full film lubrication) and working conditions (static, quasi-static and transient). The fluid flow effects have been considered in terms of the Isothermal Generalized Equation of the Mechanics of Viscous Thin Films (Reynolds equation), along with the mass-conserving p-θ Elrod-Adams cavitation model that ensures the so-called JFO complementary boundary conditions for fluid film rupture. The variation of the lubricant rheological properties due to the viscosity-pressure (Barus and Roelands equations), viscous shear-thinning (Eyring and Carreau-Yasuda equations) and density-pressure (Dowson-Higginson equation) relationships has also been taken into account in the overall modelling. Generic models have been derived for the aforementioned bearing components in order to enable their application in general multibody dynamic systems (MDS), including the effects of angular misalignments, superficial geometric defects (form/waviness deviations, EHL deformations, etc.) and axial motion. The bearing flexibility (conformal EHL) has been incorporated by means of FEM model reduction (or condensation) techniques. The macroscopic influence of the mixed-lubrication phenomena has been included in the modelling via the stochastic Patir and Cheng average flow model and the Greenwood-Williamson/Greenwood-Tripp formulations for rough contacts. Furthermore, a deterministic mixed-lubrication model with inter-asperity cavitation has also been proposed for full-scale simulations at the microscopic (roughness) level. Building on this extensive mathematical modelling background, three significant contributions have been accomplished. Firstly, a general numerical solution for the Reynolds lubrication equation with the mass-conserving p-θ cavitation model has been developed based on the hybrid-type Element-Based Finite Volume Method (EbFVM). This new solution scheme allows lubrication problems with complex geometries to be discretized by unstructured grids. The numerical method was validated against several example cases from the literature, and further used in numerical experiments to explore its flexibility in coping with irregular meshes for reducing the number of nodes required in the solution of textured sliding bearings. Secondly, novel robust partitioned techniques, namely the Fixed Point Gauss-Seidel Method (PGMF), the Point Gauss-Seidel Method with Aitken Acceleration (PGMA) and the Interface Quasi-Newton Method with Inverse Jacobian from Least-Squares approximation (IQN-ILS), commonly adopted for solving fluid-structure interaction problems, have been introduced in the context of tribological simulations, particularly for the coupled calculation of dynamic conformal EHL contacts. The performance of these partitioned methods was evaluated in simulations of dynamically loaded connecting-rod big-end bearings of both heavy-duty and high-speed engines. Finally, the proposed deterministic mixed-lubrication modelling was applied to investigate the influence of cylinder liner wear after a 100 h dynamometer engine test on the hydrodynamic pressure generation and friction of Twin-Land Oil Control Rings.
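As context for the modelling terms above, a common statement of the mass-conserving p-θ Elrod-Adams form of the Reynolds equation is given below, shown in its incompressible, isoviscous simplification for brevity (the thesis uses the generalized equation with the rheological laws listed):

```latex
\frac{\partial}{\partial x}\!\left(\frac{h^{3}}{12\mu}\frac{\partial p}{\partial x}\right)
+\frac{\partial}{\partial z}\!\left(\frac{h^{3}}{12\mu}\frac{\partial p}{\partial z}\right)
=\frac{U}{2}\frac{\partial (\theta h)}{\partial x}
+\frac{\partial (\theta h)}{\partial t},
\qquad
p \ge 0,\quad 0 \le \theta \le 1,\quad p\,(1-\theta)=0 .
```

Here $h$ is the film thickness, $\mu$ the viscosity, $U$ the sliding speed and $\theta$ the film fraction. The complementarity conditions on $p$ and $\theta$ are what the abstract refers to as the JFO boundary conditions for film rupture and reformation.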
Abstract:
A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization, in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that the modeler decrees to be realistic. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion and better enforcement of regularization constraints than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration.
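One common way to write the constrained-minimization formulation described above is the following; the notation is generic, not taken from the paper. With parameters $\mathbf{k}$, preferred (modeler-decreed) values $\mathbf{k}_0$, regularization operator $\mathbf{L}$, observations $\mathbf{h}$ and model operator $\mathbf{X}$:

```latex
\min_{\mathbf{k}} \;\; \Phi_r(\mathbf{k}) = \lVert \mathbf{L}(\mathbf{k}-\mathbf{k}_0)\rVert^{2}
\quad\text{subject to}\quad
\Phi_m(\mathbf{k}) = \lVert \mathbf{h} - \mathbf{X}\mathbf{k}\rVert^{2} \;\le\; \Phi_m^{\ell}
```

That is, among all parameter sets that fit the calibration data to the target level $\Phi_m^{\ell}$, the scheme picks the one departing least from the modeler's preferred parameter condition. The innovation described in the abstract amounts to letting the relative weights inside $\Phi_r$ be estimated during the inversion rather than fixed in advance.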