48 results for Optics in computing
in CentAUR: Central Archive, University of Reading - UK
Abstract:
Modal filtering is based on the capability of single-mode waveguides to transmit only one complex amplitude function, eliminating virtually any perturbation of the interfering wavefronts and thus making very high rejection ratios possible in a nulling interferometer. In the present paper we focus on the progress of Integrated Optics in the thermal infrared (6-20 µm) range, one of the two candidate technologies for the fabrication of Modal Filters, together with fiber optics. At the conclusion of the European Space Agency's (ESA) "Integrated Optics for Darwin" activity, etched layers of chalcogenide material deposited on chalcogenide glass substrates were selected among four candidates as the technology with the best potential to simultaneously meet the filtering efficiency, absolute and spectral transmission, and beam coupling requirements. ESA's new "Integrated Optics" activity started in mid-2007 with the purpose of improving the technology until compliant prototypes can be manufactured and validated, expected by the end of 2009. The present paper introduces the project, along with the component requirements and functions. The selected materials and preliminary designs, as well as the experimental validation logic and test benches, are presented. More details are provided on the progress of the main technology: vacuum deposition in co-evaporation mode and subsequent etching of chalcogenide layers. In addition, preliminary investigations of an alternative technology, based on burying a chalcogenide optical fiber core in a chalcogenide substrate, are presented. Specific developments of anti-reflective solutions designed to mitigate Fresnel losses at the input and output surfaces of the components are also introduced.
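The Fresnel losses mentioned above are substantial for high-index chalcogenide glasses. As a rough illustration (the refractive index value below is an assumed, illustrative figure, not one quoted in the paper), the normal-incidence power reflectance at an interface between indices n1 and n2 is R = ((n1 - n2)/(n1 + n2))^2:

```python
def fresnel_reflectance(n1, n2):
    """Normal-incidence Fresnel power reflectance at an n1/n2 interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Assumed index for a chalcogenide glass in the thermal infrared
# (illustrative value only; the paper does not quote one).
n_glass = 2.6
r = fresnel_reflectance(1.0, n_glass)   # per-surface loss, roughly 20%
t_two_surfaces = (1 - r) ** 2           # transmission through input and output faces
```

With an index of 2.6, each uncoated surface reflects about a fifth of the incident power, which is why anti-reflective treatment of both component faces matters for the absolute transmission requirement.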
Abstract:
The search for Earth-like exoplanets, orbiting in the habitable zone of stars other than our Sun and showing biological activity, is one of the most exciting and challenging quests of the present time. Nulling interferometry from space, in the thermal infrared, appears to be a promising technique for directly observing extra-solar planets. It has been studied for about 10 years by ESA and NASA in the framework of the Darwin and TPF-I missions respectively. Nevertheless, nulling interferometry in the thermal infrared remains a technological challenge at several levels. Among them, the development of the "modal filter" function is mandatory for filtering the wavefronts, consistent with the objective of rejecting the central star flux by a factor of about 10^5. Modal filtering takes advantage of the capability of single-mode waveguides to transmit a single amplitude function, eliminating virtually any perturbation of the interfering wavefronts and thus making very high rejection ratios possible. The modal filter may be based on single-mode Integrated Optics (IO), Fiber Optics, or both. In this paper, we focus on IO, and more specifically on the progress of the on-going "Integrated Optics" activity of the European Space Agency.
Abstract:
The extensive use of cloud computing in educational institutes around the world brings unique challenges for universities. Some of these challenges stem from clear differences between European and Middle Eastern universities, which in turn reflect natural variation between people. Cloud computing has created a new way to deal with software services and hardware infrastructure. Some benefits are gained immediately, for instance allowing students to share their information easily and to discover new experiences of the education system. However, it also introduces challenges, such as security and the configuration of resources in shared environments, which educational institutes cannot escape. Further differences arise between universities that use cloud computing as an educational tool and those that use it as a form of social connection. This paper discusses some benefits and limitations of using cloud computing, and the major differences in its use at universities in Europe and the Middle East, from the perspectives of social factors, security and economics, and personal responsibility.
Abstract:
A synthesis method is outlined for the design of broadband anti-reflection coatings for use in spaceborne infrared optics. The Golden Section optimisation routine is used to search, over designated non-absorptive dielectric thin-film combinations, for the coating design that fulfils the spectral requirements using the fewest layers and different materials. Three examples are given of coatings designed by this method: (i) a 1µm to 12µm anti-reflection coating on Zinc Sulphide using Zinc Sulphide and Yttrium Fluoride thin-film materials; (ii) a 2µm to 14µm anti-reflection coating on Germanium using Germanium and Ytterbium Fluoride thin-film materials; (iii) a 6µm to 17µm anti-reflection coating on Germanium using Lead Telluride, Zinc Selenide and Barium Fluoride. The measured spectral performance of the manufactured 6µm to 17µm coating on Germanium is given. This is the anti-reflection coating for the germanium optics in the NASA Cassini Orbiter CIRS instrument.
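The Golden Section routine named above is a standard one-dimensional bracketing optimiser. As a minimal sketch (independent of the coating problem and of whatever merit function the authors actually used), a golden-section minimiser for a unimodal function on an interval can be written as:

```python
import math

def golden_section_minimize(f, a, b, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [a, b].

    Each iteration shrinks the bracket by the factor 1/phi ~ 0.618,
    reusing one interior evaluation point per step.
    """
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c               # minimum lies in [a, d]; reuse c as new d
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d               # minimum lies in [c, b]; reuse d as new c
            d = a + inv_phi * (b - a)
    return (a + b) / 2

x_min = golden_section_minimize(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

In a coating-design context the objective would be a merit function measuring deviation from the target spectral reflectance over the band; that objective is not reproduced here.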
Abstract:
We consider reshaping an obstacle virtually by using transformation optics in acoustic and electromagnetic scattering. Among the general virtual reshaping results, virtual minification and virtual magnification in particular are studied. Stability estimates are derived for the scattering amplitude in terms of the diameter of a small obstacle, which imply that the limiting case of minification corresponds to perfect cloaking, i.e., the obstacle is invisible to detection.
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have sufficient scientific workforce to develop and maintain the software and data analysis infrastructure.
Such facilities will enable investigation of what resolution, both horizontal and vertical, is necessary in atmospheric and ocean models for more confident predictions at the regional and local level. Current limitations in computing power have placed severe constraints on such an investigation, which is now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions, which are based on our best knowledge of science and the most advanced technology.
Abstract:
Monitoring nutritional intake is an important aspect of the care of older people, particularly for those at risk of malnutrition. Current practice for monitoring food intake relies on hand written food charts that have several inadequacies. We describe the design and validation of a tool for computer-assisted visual assessment of patient food and nutrient intake. To estimate food consumption, the application compares the pixels the user rubbed out against predefined graphical masks. Weight of food consumed is calculated as a percentage of pixels rubbed out against pixels in the mask. Results suggest that the application may be a useful tool for the conservative assessment of nutritional intake in hospitals.
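The pixel-based estimate described above can be sketched in a few lines. The set-of-coordinates representation and the 200 g portion weight below are illustrative assumptions, not the application's actual data model:

```python
def fraction_consumed(rubbed_out, mask):
    """Estimate the fraction of a food item consumed as the share of
    mask pixels the user has rubbed out.

    Both arguments are sets of (x, y) pixel coordinates; `mask` is the
    predefined graphical mask for the food item.
    """
    if not mask:
        return 0.0
    return len(rubbed_out & mask) / len(mask)

# Toy example: a 10x10 mask of which the user rubbed out the lower half.
mask = {(x, y) for x in range(10) for y in range(10)}
rubbed = {(x, y) for x in range(10) for y in range(5)}

portion_weight_g = 200.0  # assumed serving weight for the item
weight_eaten_g = portion_weight_g * fraction_consumed(rubbed, mask)
```

Intersecting with the mask before counting means stray rub-out strokes outside the item's outline do not inflate the estimate.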
Abstract:
The IEEE 754 standard for floating-point arithmetic is widely used in computing. It is based on real arithmetic and is made total by adding both a positive and a negative infinity, a negative zero, and many Not-a-Number (NaN) states. The IEEE infinities are said to have the behaviour of limits. Transreal arithmetic is total. It also has a positive and a negative infinity but no negative zero, and it has a single, unordered number, nullity. We elucidate the transreal tangent and extend real limits to transreal limits. Arguing from this firm foundation, we maintain that there are three category errors in the IEEE 754 standard. Firstly, the claim that IEEE infinities are limits of real arithmetic confuses limiting processes with arithmetic. Secondly, a defence of IEEE negative zero confuses the limit of a function with the value of a function. Thirdly, the definition of IEEE NaNs confuses undefined with unordered. Furthermore, we prove that the tangent function, with the infinities given by geometrical construction, has a period of an entire rotation, not half a rotation as is commonly understood. This illustrates a category error, confusing the limit with the value of a function, in an important area of applied mathematics: trigonometry. We briefly consider the wider implications of this category error. Another paper proposes transreal arithmetic as a basis for floating-point arithmetic; here we take the profound step of proposing transreal arithmetic as a replacement for real arithmetic to remove the possibility of certain category errors in mathematics. Thus we propose both theoretical and practical advantages of transmathematics. In particular, we argue that implementing transreal analysis in trans-floating-point arithmetic would extend the coverage, accuracy and reliability of almost all computer programs that exploit real analysis: essentially all programs in science and engineering and many in finance, medicine and other socially beneficial applications.
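The IEEE 754 behaviours at issue, the unordered NaN and the signed zero, can be observed directly in any IEEE-conforming language. A small Python illustration (this demonstrates the standard's semantics only; it takes no position on the paper's argument):

```python
import math

# NaN is "unordered": every comparison involving it is false,
# including equality with itself.
nan = float("nan")
nan_self_equal = (nan == nan)            # False
nan_ordered = (nan < 1.0) or (nan > 1.0)  # False

# IEEE 754 keeps a signed zero: -0.0 compares equal to +0.0,
# yet the sign bit is preserved and observable.
zeros_equal = (-0.0 == 0.0)              # True
sign_of_neg_zero = math.copysign(1.0, -0.0)  # -1.0
```

The combination of `-0.0 == 0.0` being true while `copysign` still recovers the negative sign is exactly the two-sided-limit behaviour the paper discusses under its second category error.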
Abstract:
In multi-tasking systems when it is not possible to guarantee completion of all activities by specified times, the scheduling problem is not straightforward. Examples of this situation in real-time programming include the occurrence of alarm conditions and the buffering of output to peripherals in on-line facilities. The latter case is studied here with the hope of indicating one solution to the general problem.
Abstract:
In this paper we consider the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data, a problem which models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve good approximation at high frequencies with a relatively low number of degrees of freedom, we propose a novel Galerkin boundary element method, using a graded mesh with smaller elements adjacent to discontinuities in impedance and a special set of basis functions so that, on each element, the approximation space contains polynomials (of degree $\nu$) multiplied by traces of plane waves on the boundary. We prove stability and convergence and show that the error in computing the total acoustic field is ${\cal O}(N^{-(\nu+1)} \log^{1/2} N)$, where the number of degrees of freedom is proportional to $N \log N$. This error estimate is independent of the wavenumber, and thus the number of degrees of freedom required to achieve a prescribed level of accuracy does not increase as the wavenumber tends to infinity.
Abstract:
In this paper we show stability and convergence for a novel Galerkin boundary element method approach to the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data. This problem models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve a good approximation with a relatively low number of degrees of freedom we employ a graded mesh with smaller elements adjacent to discontinuities in impedance, and a special set of basis functions for the Galerkin method so that, on each element, the approximation space consists of polynomials (of degree $\nu$) multiplied by traces of plane waves on the boundary. In the case where the impedance is constant outside an interval $[a,b]$, which only requires the discretization of $[a,b]$, we show theoretically and experimentally that the $L_2$ error in computing the acoustic field on $[a,b]$ is ${\cal O}(\log^{\nu+3/2}|k(b-a)| M^{-(\nu+1)})$, where $M$ is the number of degrees of freedom and $k$ is the wavenumber. This indicates that the proposed method is especially commendable for large intervals or a high wavenumber. In a final section we sketch how the same methodology extends to more general scattering problems.
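The error estimate above predicts algebraic decay in the number of degrees of freedom $M$ with only polylogarithmic growth in the wavenumber. A small sketch evaluating the shape of the bound (the constant `C` is a hypothetical placeholder; the paper's estimate holds asymptotically, up to such a constant):

```python
import math

def galerkin_error_bound(M, k, a, b, nu, C=1.0):
    """Evaluate C * log^(nu + 3/2)|k(b - a)| * M^(-(nu + 1)),
    the shape of the paper's L2 error estimate on [a, b].
    C is a hypothetical constant, not one from the paper."""
    return C * math.log(abs(k * (b - a))) ** (nu + 1.5) * M ** (-(nu + 1))

# With nu = 1, doubling M divides the bound by 2^(nu+1) = 4, while
# raising the wavenumber k only grows it polylogarithmically.
b_coarse = galerkin_error_bound(100, 1e2, 0.0, 1.0, 1)
b_fine = galerkin_error_bound(200, 1e2, 0.0, 1.0, 1)
b_high_k = galerkin_error_bound(100, 1e4, 0.0, 1.0, 1)
```

The mild wavenumber dependence is what makes the method attractive for large intervals or high frequencies, as the abstract notes.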
Abstract:
Studies in the literature have proposed techniques to facilitate pointing in graphical user interfaces through the use of proxy targets. Proxy targets effectively bring the target to the cursor, thereby reducing the distance that the cursor must travel. This paper describes a study which aims to provide an initial understanding of how older adults respond to proxy targets, and compares older with younger users. We found that users in both age groups adjusted to the proxy targets without difficulty, and there was no indication in the cursor trajectories that users were confused about which target, i.e. the original versus the proxy, was to be selected. In terms of movement times, preliminary results show that for younger users proxies did not provide any benefit over direct selection, while for older users selection times increased with proxy targets. A full analysis of the movement times, error rates, throughput and subjective feedback is currently underway.