963 results for indicators’ improvement method
Abstract:
Background: Cognitive–behavioural therapy is efficacious in the treatment of major depressive disorder but response rates are still far from satisfactory. Aims: To better understand brain responses to individualised emotional stimuli and their association with outcome, to enhance treatment. Method: Functional magnetic resonance imaging data were collected prior to individual psychotherapy. Differences in brain activity during passive viewing of individualised self-critical material in 23 unmedicated out-patients with depression and 28 healthy controls were assessed. The associations between brain activity, cognitive and emotional change, and outcome were analysed in 21 patients. Results: Patients showed enhanced activity in the amygdala and ventral striatum compared with the control group. Non-response to therapy was associated with enhanced activity in the right amygdala compared with those who responded, and activity in this region was negatively associated with outcome. Emotional but not cognitive changes mediated this association. Conclusions: Amygdala hyperactivity may lessen symptom improvement in psychotherapy for depression through attenuating emotional skill acquisition.
Abstract:
OBJECTIVE To assess the reliability of the cervical vertebrae maturation (CVM) method. BACKGROUND Skeletal maturity estimation can influence the manner and timing of orthodontic treatment. The CVM method evaluates skeletal growth on the basis of changes in the morphology of the cervical vertebrae C2, C3 and C4 during growth. These vertebrae are visible on a lateral cephalogram, so the method does not require an additional radiograph. METHODS In this web-based study, 10 orthodontists with long clinical practice (3 routinely using the method - "Routine user - RU" - and 7 with less experience in the CVM method - "Non-Routine user - nonRU") rated cervical vertebral maturation twice with the CVM method on 50 cropped scans of lateral cephalograms of children of circumpubertal age (boys: 11.5 to 15.5 years; girls: 10 to 14 years). Kappa statistics (with lower limits of 95% confidence intervals (CI)) and the proportion of complete agreement on staging were used to evaluate intra- and inter-assessor agreement. RESULTS The mean weighted kappa was 0.44 for intra-assessor agreement (range: 0.30-0.64; range of lower limits of 95% CI: 0.12-0.48) and 0.28 for inter-assessor agreement (range: -0.01-0.58; range of lower limits of 95% CI: -0.14-0.42). The mean proportion of identical scores assigned by the same assessor was 55.2% (range: 44-74%) and by different pairs of assessors was 42% (range: 16-68%). CONCLUSIONS The reliability of the CVM method is questionable, and if orthodontic treatment is to be timed relative to maximum growth, the use of additional biologic indicators should be considered (Tab. 4, Fig. 1, Ref. 24).
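The weighted kappa reported above can be sketched in a few lines of pure Python; the ratings below are illustrative, not the study's data, and linear disagreement weights are assumed.

```python
# Linearly weighted Cohen's kappa for ordinal stage ratings, as used to
# quantify intra- and inter-assessor agreement in CVM staging.
# Minimal sketch with illustrative ratings, not the study's data.

def weighted_kappa(a, b, k):
    """a, b: equal-length lists of integer stages in 1..k; linear weights."""
    n = len(a)
    # observed joint distribution of (rater A stage, rater B stage)
    obs = [[0.0] * k for _ in range(k)]
    for x, y in zip(a, b):
        obs[x - 1][y - 1] += 1.0 / n
    pa = [sum(obs[i][j] for j in range(k)) for i in range(k)]  # A marginals
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # B marginals
    w = lambda i, j: abs(i - j) / (k - 1)  # linear disagreement weight
    d_obs = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp

print(weighted_kappa([1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6], 6))  # 1.0: perfect agreement
print(weighted_kappa([1, 1, 2, 2], [1, 2, 1, 2], 2))              # 0.0: chance-level agreement
```

A mean weighted kappa of 0.44, as found for intra-assessor agreement, sits roughly midway between these two extremes.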
Abstract:
In the Practice Change Model, physicians act as key stakeholders: people who have both an investment in the practice and the capacity to influence how the practice performs. This leadership role is critical to the development and change of the practice, and leadership roles and effectiveness are an important factor in quality improvement in primary care practices. The study involved a comparative case study analysis to identify leadership roles and the relationship between leadership roles and the number and type of quality improvement strategies adopted during a Practice Change Model-based intervention study. The research used secondary data from four primary care practices with various leadership styles. The practices are located in the San Antonio region and serve a large Hispanic population. The data were collected by two ABC Project Facilitators from each practice during a 12-month period and included Key Informant Interviews (all staff members), the Multi-method Assessment Process (MAP), and practice facilitation field notes. These data were used to evaluate leadership styles, management within the practice, and the intervention tools that were implemented.
The chief steps were (1) to analyze whether leader-member relations contribute to the type of quality improvement strategy or strategies selected, (2) to investigate whether leader-position power contributes to the number and type of strategies selected, and (3) to explore whether task structure varies across the four primary care practices. The research found that involving more members of the clinic staff in decision-making, building bridges between organizational staff and clinical staff, and task structure are all directly associated with the number and type of quality improvement strategies implemented in primary care practice. Although this research investigated the leadership styles of only four practices, it offers future guidance on how to establish priorities and implement the quality improvement strategies that will have the greatest impact on patient care.
Abstract:
Since its introduction into the United States in the 1980s, crack cocaine has been a harsh epidemic that has taken its toll on countless people. This highly addictive, cheap and readily available drug of abuse has permeated many demographic sectors, mostly low-income, less-educated, and urban communities. The epidemic of crack cocaine use in inner-city areas across the United States has been described as an expression of economic marginality and "social suffering" coupled with the local and international forces of drug market economies (Agar 2003). As a derivative of cocaine, crack utilizes the psychoactive component of the drug but delivers it in a much stronger, quicker, and more addictive fashion. This, coupled with its ready availability and cheap price, has allowed users not only to become addicted very quickly, but also to be subject to stringent and sometimes unequal or inconsistent punishments for possession and distribution of crack cocaine. There are many public health and social ramifications of crack cocaine abuse, and these epidemics appear to target low-income and minority groups. Public health issues relating to physical, mental, and economic strain are addressed, as well as the direct and indirect effects of the punishments that result from the disparity in penalties for cocaine and crack cocaine possession and distribution. Three new policies have recently been introduced in the United States Congress that actively address the disparity in sentencing for drug and criminal activities: (1) the Powder-Crack Cocaine Penalty Equalization Act of 2009 (HR 18, 111th Cong. 2009), (2) the Drug Sentencing Reform and Cocaine Kingpin Trafficking Act of 2009 (HR 265, 111th Cong. 2009), and (3) the Justice Integrity Act of 2009 (111th Cong. 2009).
Although these bills have only been introduced, if passed they have the potential not only to eliminate the crack-cocaine disparity but also to enact laws that help those affected by this epidemic. The final and overarching goal of this paper is to analyze and ultimately choose the ideal policy that would not only eliminate the cocaine and crack disparity regardless of current or future state statutes, but would also provide the best method of rehabilitation, prevention, and justice.
Abstract:
The Centers for Disease Control estimates that foodborne diseases cause approximately 76 million illnesses, 325,000 hospitalizations, and 5,000 deaths in the United States each year. The American public is becoming more health conscious, and there has been an increase in the dietary intake of fresh fruits and vegetables. Affluence and demand for convenience have allowed consumers to opt for pre-processed, packaged fresh fruits and vegetables. These pre-processed foods are considered Ready-to-Eat: they have many of the advantages of fresh produce without the inconvenience of processing at home. After a decline in food-related illnesses between 1996 and 2004, due to improvements in meat and poultry safety, tainted produce has tilted the numbers back, with the result that none of the Healthy People 2010 targets for food-related illness reduction has been reached. Irradiation has been shown to be effective in eliminating many foodborne pathogens, and its application as a food safety treatment has been widely endorsed by many of the major associations involved with food safety and public health. Despite these endorsements, there has been very little use of this technology to date for reducing the disease burden associated with the consumption of these products. A review of the literature available since the passage of the 1996 Food Quality Protection Act was conducted on the barriers to implementing irradiation as a food safety process for fresh fruits and vegetables. The impediments to widespread adoption of irradiation food processing as a food safety measure involve a complex array of legislative, regulatory, industry, and consumer issues. The FDA's approval process limits the expansion of the list of foods approved for irradiation as a food safety process, and there is a lack of processing capacity to meet the needs of a geographically dispersed industry.
Abstract:
Background and Objective. Ever since the human development index was published in 1990 by the United Nations Development Programme (UNDP), researchers have been searching for and comparatively studying more effective methods of measuring human development. Published in 1999, Lai's "Temporal analysis of human development indicators: principal component approach" provided a valuable statistical approach to human development analysis. The study presented in this thesis is an extension of Lai's 1999 research. Methods. I used the weighted principal component method on the human development indicators to measure and analyze the progress of human development in about 180 countries around the world from 1999 to 2010. The association between the main principal component obtained from the study and the human development index reported by the UNDP was estimated by Spearman's rank correlation coefficient. The main principal component was then further applied to quantify the temporal changes in the human development of selected countries by the proposed Z-test. Results. The weighted means of all three human development indicators (health, knowledge, and standard of living) increased from 1999 to 2010. The weighted standard deviation of GDP per capita also increased across years, indicating rising inequality in standard of living among countries. The ranking of low-development countries by the main principal component (MPC) is very similar to that by the human development index (HDI). Considerable discrepancy between the MPC and HDI rankings was found among high-development countries, with high-GDP-per-capita countries shifted to higher ranks. The Spearman's rank correlation coefficients between the main principal component and the human development index were all around 0.99. All of the above results were very close to the outcomes in Lai's 1999 report.
The Z-test for the temporal analysis of main principal components from 1999 to 2010 was statistically significant for Qatar, but not for the other selected countries, such as Brazil, Russia, India, China, and the U.S.A. Conclusion. To synthesize the multi-dimensional measurement of human development into a single index, the weighted principal component method provides a good model for comprehensive ranking and measurement. The weighted main principal component index is more objective because it uses national populations as weights, more effective when the analysis spans time and space, and more flexible when the set of countries reporting to the system changes from year to year. In conclusion, the index generated by the weighted main principal component has some advantages over the human development index of the UNDP reports.
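The weighted principal component construction and the Spearman comparison can be sketched as follows. Everything here is synthetic and illustrative (country count, indicator values, population weights); it shows the mechanics of the approach, not the thesis's implementation or the UNDP data.

```python
# Sketch: population-weighted principal component over development
# indicators, with Spearman rank correlation against a reference ordering.
import numpy as np

rng = np.random.default_rng(0)
n = 30                                 # hypothetical number of countries
latent = np.sort(rng.normal(size=n))   # hypothetical "true" development level
# three indicators that all increase monotonically with the latent level
X = np.column_stack([latent, 2 * latent + 1, np.exp(latent)])
w = rng.uniform(1, 100, size=n)        # hypothetical population weights

# weighted standardization and weighted covariance of the indicators
wn = w / w.sum()
mu = wn @ X
Xc = (X - mu) / np.sqrt(wn @ (X - mu) ** 2)
cov = (Xc * wn[:, None]).T @ Xc
vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
pc1 = vecs[:, -1]                      # loading vector of the main component
if pc1[0] < 0:                         # fix the sign convention
    pc1 = -pc1
mpc = Xc @ pc1                         # main principal component scores

def spearman(x, y):
    """Spearman rank correlation (no-ties case) via rank vectors."""
    rx, ry = x.argsort().argsort(), y.argsort().argsort()
    return np.corrcoef(rx, ry)[0, 1]

print(round(spearman(mpc, latent), 3))  # close to 1: MPC ranks like the latent index
```

The thesis's finding of rank correlations around 0.99 between the MPC and the HDI corresponds to exactly this kind of comparison, with real indicator data in place of the synthetic columns.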
Abstract:
The objectives of this dissertation were to evaluate the health outcomes, quality improvement measures, and long-term cost-effectiveness and impact on diabetes-related microvascular and macrovascular complications of a community health worker-led, culturally tailored diabetes education and management intervention provided to uninsured Mexican Americans in an urban faith-based clinic. A prospective, randomized controlled repeated-measures design was employed to compare the intervention effects between (1) an intervention group (n=90) that participated in the Community Diabetes Education (CoDE) program along with usual medical care, and (2) a wait-listed comparison group (n=90) that received only usual medical care. Changes in hemoglobin A1c (HbA1c) and secondary outcomes (lipid status, blood pressure and body mass index) were assessed using linear mixed models and an intention-to-treat approach. The CoDE group experienced a greater reduction in HbA1c (-1.6%, p<.001) than the control group (-.9%, p<.001) over the 12-month study period. After adjusting for the group-by-time interaction, antidiabetic medication use at baseline, changes made to the antidiabetic regimen over the study period, duration of diabetes and baseline HbA1c, a statistically significant intervention effect on HbA1c (-.7%, p=.02) was observed for CoDE participants. Process and outcome quality measures were evaluated using multiple mixed-effects logistic regression models. Assessment of the quality indicators revealed that the CoDE intervention group was significantly more likely to have received a dilated retinal examination than the control group, and 53% achieved an HbA1c below 7% compared with 38% of control group subjects. Long-term cost-effectiveness and impact on diabetes-related health outcomes were estimated through simulation modeling using the rigorously validated Archimedes Model.
Over a 20-year time horizon, CoDE participants were forecast to have less proliferative diabetic retinopathy, fewer foot ulcers, and fewer foot amputations than control group subjects who received usual medical care. An incremental cost-effectiveness ratio of $355 per quality-adjusted life-year gained was estimated for CoDE intervention participants over the same time period. The results from the three areas of program evaluation (impact on short-term health outcomes, quantification of improvement in the quality of diabetes care, and projection of long-term cost-effectiveness and impact on diabetes-related health outcomes) provide evidence that a community health worker can be a valuable resource for reducing diabetes disparities among uninsured Mexican Americans. This evidence supports the formal integration of community health workers as members of the diabetes care team.
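An incremental cost-effectiveness ratio is simply incremental cost divided by incremental quality-adjusted life-years. A minimal sketch of the arithmetic, with hypothetical per-patient totals (not the study's inputs) chosen only so the result lands at the same $/QALY scale as reported above:

```python
# ICER = (cost of intervention - cost of usual care)
#        / (QALYs with intervention - QALYs with usual care)
# All figures below are hypothetical, for illustration only.

def icer(cost_new, cost_usual, qaly_new, qaly_usual):
    return (cost_new - cost_usual) / (qaly_new - qaly_usual)

# e.g. the intervention costs $710 more and yields 2 extra QALYs per patient
print(icer(cost_new=5710.0, cost_usual=5000.0, qaly_new=14.0, qaly_usual=12.0))  # 355.0
```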
Abstract:
Irrigation water management in the Genil-Cabra irrigation district, located in the province of Córdoba in southern Spain, was studied using three irrigation indicators: the Relative Irrigation Supply (RIS), the Relative Water Supply (RWS) and the Relative Rainfall Supply (RRS). These three indicators were calculated both globally and with the data grouped by crop type, irrigation method, soil texture and plot size. All the information on agronomic and hydraulic variables was included in a Geographic Information System (GIS) to facilitate its handling. The results show that irrigation is deficitary, since the value of the RIS indicator is relatively low. Nevertheless, since the RWS indicator reaches higher values, the evaporative demand can be satisfied over the crop growth cycle. The RRS indicator fluctuates less and, together with the RWS, makes it possible to determine the fraction of evapotranspiration covered by rainfall. The mean values of the calculated indicators are very useful for understanding irrigator behaviour and the general trend, although the sample used is still insufficient to characterize a large irrigation area as a whole.
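Assuming the usual ratio definitions of these supply indicators (the study's exact effective-rainfall conventions may differ), they can be computed as:

```python
# Irrigation supply indicators, assuming common ratio definitions:
#   RWS = (irrigation + rainfall) / crop water demand (ETc)
#   RIS = irrigation / (ETc - effective rainfall)   i.e. / irrigation demand
#   RRS = rainfall / ETc
# The seasonal totals below (in mm) are hypothetical.

def rws(irrigation, rainfall, etc):
    return (irrigation + rainfall) / etc

def ris(irrigation, etc, effective_rainfall):
    return irrigation / (etc - effective_rainfall)

def rrs(rainfall, etc):
    return rainfall / etc

print(rws(300.0, 200.0, 600.0))   # ~0.83: overall demand nearly met
print(ris(300.0, 600.0, 150.0))   # ~0.67: deficit irrigation (RIS < 1)
print(rrs(200.0, 600.0))          # ~0.33: a third of ETc comes from rain
```

A low RIS with a higher RWS, as in the printed example, is exactly the pattern the abstract reports: deficit irrigation whose shortfall is partly covered by rainfall.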
Abstract:
The past climate evolution of southwestern Africa is poorly understood, and interpretations of past hydrological changes are sometimes contradictory. Here we present a record of leaf-wax δD and δ13C taken from a marine sediment core at 23°S off the coast of Namibia to reconstruct the hydrology and C3 versus C4 vegetation of southwestern Africa over the last 140 000 years (140 ka). We find lower leaf-wax δD and higher δ13C (more C4 grasses), which we interpret to indicate wetter Southern Hemisphere (SH) summer conditions and increased seasonality, during SH insolation maxima relative to minima and during the last glacial period relative to the Holocene and the last interglacial period. Nonetheless, the dominance of C4 grasses throughout the record indicates that the wet season remained brief and that this region has remained semi-arid. Our data suggest that past precipitation increases were derived from the tropics rather than from the winter westerlies. Comparison with a record from the Congo Basin indicates that hydroclimate in southwestern Africa has evolved in antiphase with that of central Africa over the last 140 ka.
Abstract:
A method to reduce the noise power in the far-field pattern without modifying the desired signal is proposed; an important signal-to-noise ratio improvement may therefore be achieved. The method applies when the antenna measurement is performed in a planar near-field range, where the recorded data are assumed to be corrupted by white Gaussian, space-stationary noise due to the receiver's additive noise. When the measured field is back-propagated from the scan plane to the antenna under test (AUT) plane, the noise remains white Gaussian and space-stationary, whereas the desired field is theoretically concentrated within the antenna aperture. Thanks to this fact, a spatial filter may be applied, cancelling the field located outside the AUT dimensions, which consists only of noise. Next, a planar near-field to far-field transformation is carried out, achieving a great improvement compared with the pattern obtained directly from the measurement. To verify the effectiveness of the method, two examples are presented using both simulated and measured near-field data.
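The back-propagation and spatial-filtering steps can be sketched in one dimension via the plane-wave spectrum. The geometry, sampling and aperture size below are illustrative assumptions, not a real planar range, and the input here is pure noise to show how much noise power the aperture mask removes:

```python
# Sketch: back-propagate a 1-D measured near-field line to the AUT plane,
# then zero the field outside the assumed aperture (pure noise there).
import numpy as np

lam = 1.0                        # wavelength (normalized units)
k = 2 * np.pi / lam
n = 256
dx = lam / 4                     # sample spacing on the scan line
x = (np.arange(n) - n // 2) * dx
d = 5 * lam                      # distance from AUT plane to scan plane

kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
prop = kx**2 <= k**2             # keep only propagating plane waves
kz = np.sqrt(np.maximum(k**2 - kx**2, 0.0))

def back_propagate(e_scan):
    """Undo the e^{-j kz d} propagation phase; drop evanescent modes."""
    spec = np.fft.fft(e_scan)
    spec = np.where(prop, spec * np.exp(1j * kz * d), 0.0)
    return np.fft.ifft(spec)

rng = np.random.default_rng(1)
noise = rng.normal(size=n) + 1j * rng.normal(size=n)  # white receiver noise
e_aut = back_propagate(noise)                         # still noise-like
aperture = np.abs(x) <= 4 * lam                       # assumed AUT extent
e_filtered = np.where(aperture, e_aut, 0.0)           # spatial filtering

# most of the noise power lies outside the aperture and is removed
print(np.sum(np.abs(e_filtered)**2) < np.sum(np.abs(noise)**2))  # True
```

In the actual method the measured field is signal plus noise; since the signal is concentrated inside the aperture, the same mask removes noise while leaving the signal essentially intact before the far-field transformation.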
Abstract:
Fish communities are a key element in fluvial ecosystems. Their position at the top of the food chain and their sensitivity to a whole range of impacts make them a clear objective for ecosystem conservation and a sound indicator of biological integrity. The EU Water Framework Directive includes fish community composition, abundance and structure as relevant elements for the evaluation of biological condition. Several approaches have been proposed for evaluating the condition of fish communities, from the bio-indicator concept to Index of Biotic Integrity (IBI) proposals. However, the complexity of fish communities and their ecological responses makes this evaluation difficult, and both oversimplified and extreme analytical procedures must be avoided. In this work we present a new proposal for defining reference conditions for fish communities, discussing them from an ecological viewpoint. The method is a synthetic approach called the Synthetic Open Methodological Framework (SOMF), which has been applied to the rivers of Navarra. As a result, we recommend integrating all the available information from spatial, modelling, historical and expert sources, which provides the best approach to fish reference conditions, keeps the highest level of information and meets the legal requirements of the WFD.
Abstract:
A new method for detecting microcalcifications in regions of interest (ROIs) extracted from digitized mammograms is proposed. The top-hat transform is a technique based on mathematical morphology operations and is used here to perform contrast enhancement of the microcalcifications. To improve microcalcification detection, a novel image sub-segmentation approach based on the possibilistic fuzzy c-means algorithm is used. From the original ROIs, window-based features such as the mean and standard deviation were extracted; these features were used as the input vector of a classifier. The classifier is based on an artificial neural network that identifies patterns belonging to microcalcifications and to healthy tissue. Our results show that the proposed method is a good alternative for automatically detecting microcalcifications, since this stage is an important part of early breast cancer detection.
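The white top-hat transform (the image minus its morphological opening) can be sketched in pure NumPy on a toy ROI; the 3x3 flat structuring element and pixel values below are illustrative, not the paper's configuration:

```python
# White top-hat transform with a flat 3x3 structuring element: bright
# details smaller than the element, such as microcalcification-like spots,
# are enhanced while the background is suppressed. Toy example, not the
# paper's pipeline.
import numpy as np

def local_extremum(img, fn):
    """Min (erosion) or max (dilation) over each 3x3 neighbourhood."""
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = [pad[1 + di : 1 + di + h, 1 + dj : 1 + dj + w]
             for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    return fn(np.stack(stack), axis=0)

def white_tophat(img):
    # opening = dilation of the erosion; top-hat = image - opening
    opening = local_extremum(local_extremum(img, np.min), np.max)
    return img - opening

# toy ROI: flat background with one bright 1-pixel "microcalcification"
roi = np.full((9, 9), 10.0)
roi[4, 4] = 200.0
th = white_tophat(roi)
print(th[4, 4])   # 190.0 : the small bright spot is extracted
print(th.sum())   # 190.0 : everything else is suppressed to zero
```

Structures wider than the structuring element survive the opening and are removed by the subtraction, which is why the transform boosts the contrast of small bright spots specifically.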
Abstract:
In these times of crisis, it is imperative to make the consumption of public resources as rational as possible. Urban public transport is a sector that receives large investments and whose services are heavily subsidized. Increasing the technical efficiency of the sector, defined as the ratio of service output to resource consumption, can help achieve better management of public funds. A first step towards such an improvement is the development of a methodology for evaluating the technical efficiency of public transport companies. Different methods exist for the technical evaluation of a set of companies within a sector. One of the most widely used is the frontier method, which includes Data Envelopment Analysis (DEA). This method establishes a relative technical efficiency frontier for a specific group of companies, based on a limited number of variables. The variables must quantify, on the one hand, the services provided by the different companies (outputs) and, on the other hand, the resources consumed in producing those services (inputs). The objective of this thesis is to analyze, using the DEA method, the technical efficiency of urban bus services in Spain. For this purpose, the most suitable number of variables for the models used to obtain the efficiency frontiers is studied. In developing the methodology, indicators of the urban bus services of the main cities of the Spanish metropolitan areas are used, for the period 2004-2009.
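As a minimal illustration of the DEA idea: in the degenerate single-input, single-output case, the constant-returns efficiency score reduces to each company's productivity ratio normalized by the best ratio in the sample, so the best performer defines the frontier with a score of 1. The bus-company figures below are hypothetical:

```python
# Degenerate DEA (one input, one output, constant returns to scale):
# efficiency_i = (output_i / input_i) / max_j (output_j / input_j).
# Hypothetical data, not the thesis's indicators.

def dea_single(outputs, inputs):
    ratios = [o / i for o, i in zip(outputs, inputs)]
    best = max(ratios)                 # the frontier productivity ratio
    return [r / best for r in ratios]

veh_km = [120.0, 200.0, 90.0]   # output: service production (vehicle-km)
cost = [100.0, 125.0, 100.0]    # input: resources consumed (cost units)
print(dea_single(veh_km, cost))  # [0.75, 1.0, 0.5625]
```

With several inputs and outputs, each score instead comes from a linear program that chooses the weights most favourable to the company being evaluated, which is why the choice and number of variables studied in the thesis matter so much.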
Abstract:
One important task in the design of an antenna is to carry out an analysis to find out the characteristics of the antenna that best fulfills the specifications fixed by the application. After that, a prototype is manufactured and the next stage in design process is to check if the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters like directivity, gain, impedance, beamwidth, efficiency, polarization, etc. must be also evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. Due to this fact, most of the measurements are performed in anechoic chambers, which are closed areas, normally shielded, covered by electromagnetic absorbing material, that simulate free space propagation conditions, due to the absorption of the radiation absorbing material. Moreover, these facilities can be employed independently of the weather conditions and allow measurements free from interferences. Despite all the advantages of the anechoic chambers, the results obtained both from far-field measurements and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms to improve the quality of the results obtained in antenna measurements by using post-processing techniques and without requiring additional measurements. First, a deep revision work of the state of the art has been made in order to give a general vision of the possibilities to characterize or to reduce the effects of errors in antenna measurements. Later, new methods to reduce the unwanted effects of four of the most commons errors in antenna measurements are described and theoretical and numerically validated. The basis of all them is the same, to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. 
The four errors analyzed are noise, reflections, truncation errors and leakage and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering and iterative algorithms to extrapolate functions. Therefore, the main idea of all the methods is to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. The noise is the most widely studied error in this Thesis, proposing a total of three alternatives to filter out an important noise contribution before obtaining the far-field pattern. The first one is based on a modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near-field where it is possible to apply a spatial filtering. The last one is to back-propagate the measured field to a surface with the same geometry than the measurement surface but closer to the AUT and then to apply also a spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive noise statistical analyses in order to deduce the signal-to-noise ratio improvement achieved in each case. The method to suppress reflections in antenna measurements is also based on a source reconstruction technique and the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to be able to identify and later suppress the virtual sources related to the reflective waves. The truncation error presents in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. 
The method to reduce this error is based on an iterative algorithm to extrapolate the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm as well as other critical aspects of the method are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method tries to estimate the leakage bias constant added by the receiver’s quadrature detector to every near-field data and then suppress its effect on the far-field pattern. The second method can be divided into two parts; the first one to find the position of the faulty component that radiates or receives unwanted radiation, making easier its identification within the measurement environment and its later substitution; and the second part of this method is able to computationally remove the leakage effect without requiring the substitution of the faulty component. Resumen Una tarea importante en el diseño de una antena es llevar a cabo un análisis para averiguar las características de la antena que mejor cumple las especificaciones fijadas por la aplicación. Después de esto, se fabrica un prototipo de la antena y el siguiente paso en el proceso de diseño es comprobar si el patrón de radiación difiere del diseñado. Además del patrón de radiación, otros parámetros de radiación como la directividad, la ganancia, impedancia, ancho de haz, eficiencia, polarización, etc. deben ser también evaluados. Para lograr este propósito, se necesitan técnicas de medida de antenas muy precisas con el fin de saber exactamente el comportamiento electromagnético real de la antena bajo prueba. Debido a esto, la mayoría de las medidas se realizan en cámaras anecoicas, que son áreas cerradas, normalmente revestidas, cubiertas con material absorbente electromagnético. 
Además, estas instalaciones se pueden emplear independientemente de las condiciones climatológicas y permiten realizar medidas libres de interferencias. A pesar de todas las ventajas de las cámaras anecoicas, los resultados obtenidos tanto en medidas en campo lejano como en medidas en campo próximo están inevitablemente afectados por errores. Así, el principal objetivo de esta Tesis es proponer algoritmos para mejorar la calidad de los resultados obtenidos en medida de antenas mediante el uso de técnicas de post-procesado. Primeramente, se ha realizado un profundo trabajo de revisión del estado del arte con el fin de dar una visión general de las posibilidades para caracterizar o reducir los efectos de errores en medida de antenas. Después, se han descrito y validado tanto teórica como numéricamente nuevos métodos para reducir el efecto indeseado de cuatro de los errores más comunes en medida de antenas. La base de todos ellos es la misma, realizar una transformación de la superficie de medida a otro dominio donde hay suficiente información para eliminar fácilmente la contribución de los errores. Los cuatro errores analizados son ruido, reflexiones, errores de truncamiento y leakage y las herramientas usadas para suprimirlos son principalmente técnicas de reconstrucción de fuentes, filtrado espacial y modal y algoritmos iterativos para extrapolar funciones. Por lo tanto, la principal idea de todos los métodos es modificar las transformaciones clásicas de campo cercano a campo lejano incluyendo pasos adicionales con los que los errores pueden ser enormemente suprimidos. Además, los métodos propuestos no son computacionalmente complejos y dado que se aplican en post-procesado, no se necesitan medidas adicionales. El ruido es el error más ampliamente estudiado en esta Tesis, proponiéndose un total de tres alternativas para filtrar una importante contribución de ruido antes de obtener el patrón de campo lejano. La primera está basada en un filtrado modal. 
La segunda alternativa usa una técnica de reconstrucción de fuentes para obtener el campo sobre el plano de la antena donde es posible aplicar un filtrado espacial. La última es propagar el campo medido a una superficie con la misma geometría que la superficie de medida pero más próxima a la antena y luego aplicar también un filtrado espacial. Todas las alternativas han sido analizadas en los sistemas de campo próximos más comunes, incluyendo detallados análisis estadísticos del ruido con el fin de deducir la mejora de la relación señal a ruido lograda en cada caso. El método para suprimir reflexiones en medida de antenas está también basado en una técnica de reconstrucción de fuentes y la principal idea es reconstruir el campo sobre una superficie mayor que la apertura de la antena con el fin de ser capaces de identificar y después suprimir fuentes virtuales relacionadas con las ondas reflejadas. El error de truncamiento que aparece en los resultados obtenidos a partir de medidas en un plano, cilindro o en la porción de una esfera es el tercer error analizado en esta Tesis. El método para reducir este error está basado en un algoritmo iterativo para extrapolar la región fiable del patrón de campo lejano a partir de información de la distribución del campo sobre el plano de la antena. Además, se ha estudiado el punto apropiado de terminación de este algoritmo iterativo así como otros aspectos críticos del método. La última parte de este trabajo está dedicado a la detección y supresión de dos de las fuentes de leakage más comunes en medida de antenas. El primer método intenta realizar una estimación de la constante de fuga del leakage añadido por el detector en cuadratura del receptor a todos los datos en campo próximo y después suprimir su efecto en el patrón de campo lejano. 
The second method can be divided into two parts: the first one finds the position of faulty elements that radiate or receive unwanted radiation, making their identification within the measurement environment and their subsequent replacement easier. The second part of the method is able to computationally remove the effect of the leakage without replacing the faulty element.
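The first of the two leakage methods above estimates a complex constant added to every near-field sample. The thesis' actual estimator is not described in this abstract; as a toy illustration of the idea that such a bias can be estimated and subtracted in post-processing (here, naively, from scan-edge samples where the antenna field is assumed to have decayed to near zero):

```python
import numpy as np

def estimate_leakage_constant(E, border=3):
    """Toy estimate of an additive complex leakage constant, assuming the
    true near-field is negligible at the edges of the scan area.
    (Hypothetical estimator for illustration, not the thesis' algorithm.)"""
    edge = np.concatenate([E[:border].ravel(), E[-border:].ravel(),
                           E[:, :border].ravel(), E[:, -border:].ravel()])
    return edge.mean()

# E_clean = E - estimate_leakage_constant(E)  # one subtraction fixes all samples
```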
Resumo:
Classical imaging optics has been developed over centuries in many areas, from its paraxial imaging theory to practical design methods such as multi-parametric optimization techniques. Although these imaging design methods can provide elegant solutions to many traditional optical problems, there are more and more new design problems, such as solar concentrators, illumination systems and ultra-compact cameras, that require maximum energy-transfer efficiency or an ultra-compact optical structure. These problems do not have simple solutions from classical imaging design methods, because not only paraxial rays but also non-paraxial rays must be considered in the design process. Non-imaging optics is a newly developed optical discipline which does not aim to form images but to maximize energy-transfer efficiency. One important concept developed in non-imaging optics is the "edge-ray principle", which states that the energy flow contained in a bundle of rays will be transferred to the target if all its edge rays are transferred to the target. Based on that concept, many CPC solar concentrators have been developed with efficiency close to the thermodynamic limit. When more than one bundle of edge rays needs to be considered in the design, one way to obtain solutions is the SMS method. SMS stands for Simultaneous Multiple Surface, meaning that several optical surfaces are constructed simultaneously. The SMS method was developed as a design method in non-imaging optics during the 90s, and can be considered an extension of the Cartesian-oval calculation. In the traditional Cartesian-oval calculation, one optical surface is built to transform an input wave-front into an output wave-front. The SMS method, however, is dedicated to transformation problems involving more than one wave-front.
In the beginning, only the transformation of 2 input wave-fronts into 2 output wave-fronts was considered in the SMS design process, for rotational or free-form optical systems. Usually "SMS 2D" denotes the SMS procedure developed for rotational optical systems, and "SMS 3D" the procedure for free-form optical systems. Although the SMS method was originally employed in non-imaging optical system designs, it has been found during this thesis that, with the improved capability to design more surfaces and control more input and output wave-fronts, the SMS method can also be applied to imaging system designs, where it possesses great advantages over traditional design methods. One of the main goals of this thesis is to further develop the existing SMS-2D method to design with more surfaces and to improve the stability of the SMS-2D and SMS-3D algorithms, so that a further optimization process can be combined with the SMS algorithms. The benefits of the SMS-plus-optimization strategy over the traditional optimization strategy are explained in detail for both rotational and free-form imaging optical system designs. Another main goal is to develop novel design concepts and methods suitable for challenging non-imaging applications, e.g. solar concentrators and solar trackers. This thesis comprises 9 chapters grouped into two parts: the first part (chapters 2-5) contains research work in the imaging field, and the second part (chapters 6-8) contains work in the non-imaging field. In the first chapter, an introduction to basic imaging and non-imaging design concepts and theories is given. Chapter 2 presents a basic SMS-2D imaging design procedure using meridian rays. In this chapter, we set the imaging design problem from the SMS point of view and try to solve it numerically. The stability of this SMS-2D design procedure is also discussed.
The design concepts and procedures developed in this chapter lay the path for further improvement. Chapter 3 presents two improved SMS three-surface design procedures using meridian rays (SMS-3M) and skew rays (SMS-1M2S), respectively. The major improvement concerns the selection of the central segments, which makes the whole SMS procedure more stable than the one described in Chapter 2. Since these two algorithms represent two types of phase-space sampling, their image-forming capabilities are compared in a simple objective design. Chapter 4 deals with an ultra-compact SWIR camera design based on the SMS-3M method. The difficulty in this wide-band camera design is how to maintain high image quality while reducing the overall system length. This interesting camera design provides a playground for comparing the classical and SMS design methods: designs and optical performance from both are shown, and a tolerance study is given at the end of the chapter. Chapter 5 develops a two-stage SMS-3D-based optimization strategy for a two-freeform-mirror imaging system. In the first optimization phase, the SMS-3D method is integrated into the optimization process to construct the two mirrors in an accurate way, drastically reducing the unknowns to only a few system-configuration parameters. In the second optimization phase, the previously optimized mirrors are parameterized into Qbfs-type polynomials and set up in Code V. The Code V optimization results demonstrate the effectiveness of this design strategy for this two-mirror system. Chapter 6 shows an etendue-squeezing condenser optic, which was prepared for the 2010 IODC illumination contest. This interesting design employs many non-imaging techniques, such as the SMS method, etendue-squeezing tessellation and groove surface design, and has a theoretical efficiency limit as high as 91.9%.
Chapter 7 presents a freeform mirror-type solar concentrator providing uniform irradiance on the solar cell. The traditional parabolic mirror concentrator has several drawbacks: hot-spot irradiance at the centre of the cell, insufficient use of the active cell area due to its rotational irradiance pattern, and a small acceptance angle. To overcome these limitations, a novel irradiance-homogenization concept is developed, which leads to a free-form mirror design. Simulation results show that the free-form mirror reflector produces a rectangular irradiance pattern with uniform irradiance distribution and a large acceptance angle, confirming the viability of the design concept. Chapter 8 presents a novel beam-steering array optics design strategy. The goal of the design is to track parallel rays arriving at angles up to ±45° by only moving optical arrays laterally, and to convert them into small-angle parallel output rays. The design concept is developed as an extended SMS method. Potential applications of this beam-steering device are skylights providing steerable natural illumination, building-integrated CPV systems, and steerable LED illumination. Conclusions and future lines of work are given in Chapter 9.
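Relatedly, the "thermodynamic limit" for concentrators referred to above comes from etendue conservation. As a small illustrative helper (the standard textbook bound, not a result of the thesis), the geometric concentration of a 3-D concentrator with acceptance half-angle theta_acc and exit medium of index n_out cannot exceed (n_out / sin(theta_acc))^2:

```python
import math

def max_concentration_3d(theta_acc_deg, n_out=1.0):
    """Etendue-conservation (thermodynamic) upper bound on the geometric
    concentration of a 3-D concentrator with acceptance half-angle
    theta_acc_deg (degrees) and exit medium of refractive index n_out."""
    return (n_out / math.sin(math.radians(theta_acc_deg)))**2
```

CPC designs based on the edge-ray principle, as mentioned in the abstract, approach this bound; a larger acceptance angle necessarily lowers the achievable concentration.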