912 results for Optics in computing
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change appear mainly as changes in the statistics of regional weather variations, the scientific and computational requirements for reliable prediction are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenge of developing the capability to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computing capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have a scientific workforce sufficient to develop and maintain the software and data-analysis infrastructure. Such facilities will make it possible to investigate what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level; current limits on computing power have severely constrained this badly needed investigation. These facilities will also provide the world's scientists with computational laboratories for fundamental research on weather–climate interactions using 1-km-resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure, including hardware, software, and data-analysis support, and the scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it, and it will ultimately enable the climate community to provide society with climate predictions based on our best scientific knowledge and the most advanced technology.
Abstract:
Monitoring nutritional intake is an important aspect of the care of older people, particularly for those at risk of malnutrition. Current practice for monitoring food intake relies on handwritten food charts, which have several inadequacies. We describe the design and validation of a tool for computer-assisted visual assessment of patient food and nutrient intake. To estimate food consumption, the application compares the pixels the user has rubbed out against predefined graphical masks: the weight of food consumed is calculated from the percentage of mask pixels that have been rubbed out. Results suggest that the application may be a useful tool for the conservative assessment of nutritional intake in hospitals.
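The estimation step is simple to state precisely. A minimal sketch of the pixel-ratio calculation, assuming binary image masks stored as NumPy arrays (the function and parameter names here are illustrative, not taken from the paper):

    import numpy as np

    def estimate_consumed_weight(rubbed_out, food_mask, full_portion_weight_g):
        """Estimate the weight of food consumed from rubbed-out pixels.

        rubbed_out, food_mask: boolean arrays of the same shape; food_mask
        marks the pixels of one food item in the plate graphic, rubbed_out
        marks the pixels the user erased. full_portion_weight_g is the
        known weight of the full portion represented by the mask.
        """
        mask_pixels = np.count_nonzero(food_mask)
        if mask_pixels == 0:
            return 0.0
        # Count only erased pixels that fall inside this item's mask.
        consumed_pixels = np.count_nonzero(rubbed_out & food_mask)
        return (consumed_pixels / mask_pixels) * full_portion_weight_g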
Abstract:
The IEEE 754 standard for floating-point arithmetic is widely used in computing. It is based on real arithmetic and is made total by adding both a positive and a negative infinity, a negative zero, and many Not-a-Number (NaN) states. The IEEE infinities are said to have the behaviour of limits. Transreal arithmetic is total. It also has a positive and a negative infinity but no negative zero, and it has a single, unordered number, nullity. We elucidate the transreal tangent and extend real limits to transreal limits. Arguing from this firm foundation, we maintain that there are three category errors in the IEEE 754 standard. Firstly, the claim that IEEE infinities are limits of real arithmetic confuses limiting processes with arithmetic. Secondly, a defence of IEEE negative zero confuses the limit of a function with the value of a function. Thirdly, the definition of IEEE NaNs confuses undefined with unordered. Furthermore, we prove that the tangent function, with the infinities given by geometrical construction, has a period of an entire rotation, not half a rotation as is commonly understood. This illustrates a category error, confusing the limit with the value of a function, in an important area of applied mathematics: trigonometry. We briefly consider the wider implications of this category error. Another paper proposes transreal arithmetic as a basis for floating-point arithmetic; here we take the profound step of proposing transreal arithmetic as a replacement for real arithmetic to remove the possibility of certain category errors in mathematics. Thus we propose both theoretical and practical advantages of transmathematics. In particular, we argue that implementing transreal analysis in trans-floating-point arithmetic would extend the coverage, accuracy and reliability of almost all computer programs that exploit real analysis: essentially all programs in science and engineering and many in finance, medicine and other socially beneficial applications.
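The IEEE 754 behaviours at issue here (unordered NaNs, a signed zero, infinities that act as overflow limits) are easy to observe directly; a brief illustration in Python, whose floats are IEEE 754 binary64 on all common platforms:

    import math

    nan = float('nan')

    # NaN is "unordered": every ordered comparison involving NaN is false,
    # and NaN is not even equal to itself.
    print(nan < 1.0, nan > 1.0, nan == nan)   # False False False
    print(nan != nan)                         # True

    # IEEE 754 keeps a signed zero: -0.0 compares equal to 0.0,
    # but its sign remains observable.
    print(-0.0 == 0.0)                        # True
    print(math.copysign(1.0, -0.0))           # -1.0

    # The infinities absorb overflow, which is why they are described
    # as having the behaviour of limits.
    print(math.inf > 1.0e308, -math.inf < -1.0e308)   # True True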
Abstract:
The purpose of my tour to Czechoslovakia was to participate in the Third International Conference on Applied Optics in Solar Energy, which was held in Prague, October 2-6, 1989, and then to visit some scientific institutes and solar collector plants as a guest of the Czechoslovakian Academy of Science. This was made possible by an exchange-researcher grant from the Royal Swedish Academy of Engineering Sciences.
Abstract:
Lucas (1987) has shown a surprising result in business-cycle research: the welfare costs of business cycles are very small. Our paper makes several original contributions. First, in computing welfare costs, we propose a novel setup that separates the effects of uncertainty stemming from business-cycle fluctuations and from economic-growth variation. Second, we extend the sample from which the moments of consumption are computed: the literature has chosen primarily to work with post-WWII data, but for this period actual consumption is already the result of counter-cyclical policies and is potentially smoother than it would otherwise have been in their absence. So we also employ pre-WWII data. Third, we take an econometric approach and compute explicitly the asymptotic standard deviation of welfare costs using the Delta Method. Estimates of welfare costs show major differences between the pre-WWII and post-WWII eras; they can differ by up to a factor of 15 for reasonable parameter values, β = 0.985 and φ = 5. For example, in the pre-WWII period (1901-1941), welfare cost estimates are 0.31% of consumption if we consider only permanent shocks and 0.61% of consumption if we consider only transitory shocks. In comparison, the post-WWII era is much quieter: welfare costs of economic growth are 0.11% and welfare costs of business cycles are 0.037%, the latter being very close to the estimate in Lucas (0.040%). Estimates of marginal welfare costs are roughly twice the size of the total welfare costs. For the pre-WWII era, marginal welfare costs of economic-growth and business-cycle fluctuations are respectively 0.63% and 1.17% of per-capita consumption. The same figures for the post-WWII era are, respectively, 0.21% and 0.07% of per-capita consumption.
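For orientation, Lucas's compensation-parameter calculation runs as follows (a standard sketch with assumed notation; the paper's own setup, which separates permanent and transitory shocks, differs in detail). The welfare cost λ is the fraction of consumption a household with CRRA utility would require, in every state and date, to be indifferent between the fluctuating consumption path and its smooth trend:

    \[
      E\left[\sum_{t=0}^{\infty} \beta^{t}\, u\big((1+\lambda)\,c_{t}\big)\right]
      = \sum_{t=0}^{\infty} \beta^{t}\, u(\bar{c}_{t}),
      \qquad
      u(c) = \frac{c^{1-\phi}}{1-\phi}.
    \]

With lognormal deviations of c_t around trend with variance σ², this yields the familiar approximation

    \[
      \lambda \;\approx\; \tfrac{1}{2}\,\phi\,\sigma^{2},
    \]

which makes clear why the estimates above scale with both the risk-aversion parameter φ and the consumption volatility of the sample period used.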
Abstract:
Using variational and numerical solutions of the mean-field Gross-Pitaevskii equation, we show that a bright soliton can be stabilized in a trapless three-dimensional attractive Bose-Einstein condensate (BEC) by rapid periodic temporal modulation of the scattering length alone, using a Feshbach resonance. This scheme also stabilizes a rotating vortex soliton in two dimensions. Apart from possible experimental application in BEC, the present study suggests that the spatiotemporal solitons of nonlinear optics in three dimensions can also be stabilized in a layered Kerr medium with sign-changing nonlinearity along the propagation direction.
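In compact form, the mechanism studied is a Gross-Pitaevskii equation whose nonlinearity oscillates in time; a standard dimensionless form (notation assumed here, not quoted from the paper) is

    \[
      i\,\partial_{t}\,\psi(\mathbf{r},t)
      = \left[-\tfrac{1}{2}\nabla^{2} + g(t)\,|\psi|^{2}\right]\psi,
      \qquad
      g(t) = g_{0} + g_{1}\sin(\Omega t),
    \]

with no trapping potential. Since g(t) is proportional to the s-wave scattering length a(t), a Feshbach resonance can drive the oscillation, and rapid modulation (large Ω) about an attractive mean (g₀ < 0) can arrest the collapse that would otherwise destroy the soliton: the direct analogue of a layered Kerr medium whose nonlinearity changes sign along the propagation direction.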
Abstract:
Four-fermion operators have been used in the past to link the quark-exchange processes in the interaction of hadrons with the effective meson-exchange amplitudes. In this paper, we apply a similar idea, the Fierz rearrangement, to self-energy and electromagnetic processes, focusing on the electromagnetic form factors of the nucleon and the electron. We explain the motivation for using four-fermion operators and discuss the advantages of this method in computing electromagnetic processes.
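As a concrete, textbook instance of the rearrangement involved (not the paper's specific operators): for anticommuting fields, the V-A current-current product is Fierz-invariant,

    \[
      \big(\bar{\psi}_{1}\gamma^{\mu}P_{L}\psi_{2}\big)
      \big(\bar{\psi}_{3}\gamma_{\mu}P_{L}\psi_{4}\big)
      =
      \big(\bar{\psi}_{1}\gamma^{\mu}P_{L}\psi_{4}\big)
      \big(\bar{\psi}_{3}\gamma_{\mu}P_{L}\psi_{2}\big),
      \qquad
      P_{L}=\tfrac{1}{2}\,(1-\gamma_{5}),
    \]

so a product of bilinears pairing the fields one way can be re-expressed in terms of bilinears pairing them the other way, with known channel-by-channel coefficients; this is what allows quark-exchange orderings to be traded for effective meson-exchange orderings.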
Abstract:
This paper deals with topology optimization in plane linear-elastic problems, considering the influence of self-weight on the internal forces in structural elements. For this purpose, a numerical technique called SESO (Smooth ESO) is used, which progressively reduces the stiffness contribution of inefficient, low-stress elements until they no longer influence the structure. SESO is applied within the finite element method using a high-order triangular finite element. This paper extends the SESO technique to include self-weight: in computing the volume and specific weight, the program automatically generates an equivalent concentrated force at each node of the element. The evaluation concludes with the definition of a strut-and-tie model arising from the regions of stress concentration. Examples are presented in which optimal topologies are obtained.
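The self-weight step the abstract describes, converting element weight into equivalent nodal forces, is easiest to see for a linear (constant-strain) triangle, where the element weight ρgAt is shared equally by the three nodes; the paper's high-order element would instead distribute it through the element's shape functions. A minimal sketch (function and parameter names are illustrative):

    import numpy as np

    def self_weight_nodal_forces(coords, thickness, rho, g=9.81):
        """Equivalent nodal forces for the self-weight of a linear triangle.

        coords: (3, 2) array of node coordinates in metres; thickness in
        metres; rho in kg/m^3. Returns a (3, 2) array of nodal forces in
        newtons, with gravity acting in the -y direction.
        """
        (x1, y1), (x2, y2), (x3, y3) = coords
        area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
        total_weight = rho * g * area * thickness
        forces = np.zeros((3, 2))
        forces[:, 1] = -total_weight / 3.0   # equal share at each node
        return forces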
Abstract:
Open web steel joists are designed in the United States following the governing specification published by the Steel Joist Institute. For compression members in joists, this specification employs an effective length factor, or K-factor, in confirming their adequacy. In most cases, these K-factors have been conservatively assumed equal to 1.0 for compression web members, even though intuition and limited experimental work indicate that smaller values could be justified. Given that smaller K-factors could result in more economical designs without a loss of safety, the research presented in this thesis aims to suggest procedures for obtaining more rational values. Three different methods for computing in-plane and out-of-plane K-factors are investigated: (1) a hand-calculation method based on the use of alignment charts, (2) computational critical-load (eigenvalue) analyses using uniformly distributed loads, and (3) computational analyses using a compressive strain approach. The latter method is novel and allows the individual buckling load of a specific member within a system, such as a joist, to be computed. Four different joist configurations are investigated: an 18K3, a 28K10, and two variations of a 32LH06. Based on these methods and the very limited number of joists studied, it appears promising that in-plane and out-of-plane K-factors of 0.75 and 0.85, respectively, could be used in computing the flexural buckling strength of web members in routine steel joist design. Recommendations for future work, including a systematic investigation of a wider range of joist configurations and connection restraints, are provided.
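The leverage of the K-factor comes straight from the elastic flexural buckling load of a compression member of length L (the standard Euler form, not specific to the thesis):

    \[
      P_{cr} = \frac{\pi^{2} E I}{(K L)^{2}},
    \]

so reducing K from 1.0 to 0.75 raises the computed elastic buckling load by a factor of (1.0/0.75)² ≈ 1.78, which is the source of the potential design economy noted above.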
Abstract:
Turbulence affects traditional free-space optical communication by causing speckle to appear in the received beam profile. This occurs due to changes in the refractive index of the atmosphere caused by fluctuations in temperature and pressure, which make the medium inhomogeneous. The Gaussian-Schell model of partial coherence has been suggested as a means of mitigating these atmospheric inhomogeneities on the transmission side. This dissertation analyzed the Gaussian-Schell model of partial coherence by verifying the model in the far-field, investigating the number of independent phase control screens necessary to approach the ideal Gaussian-Schell model, and showing experimentally that the Gaussian-Schell model of partial coherence is achievable in the far-field using a liquid crystal spatial light modulator. A method for optimizing the statistical properties of the Gaussian-Schell model was developed to maximize the coherence of the field while ensuring that it does not exhibit the same statistics as a fully coherent source. Finally, a technique was developed to estimate the minimum spatial resolution a spatial light modulator needs in order to propagate the Gaussian-Schell model effectively through a range of atmospheric turbulence strengths. This work showed that, regardless of turbulence strength or receiver aperture, transmitting the Gaussian-Schell model of partial coherence instead of a fully coherent source yields a reduction in the intensity fluctuations of the received field. By measuring the variance of the intensity fluctuations and the received mean, it is shown through the scintillation index that using the Gaussian-Schell model of partial coherence is a simple and straightforward alternative to traditional adaptive optics for mitigating atmospheric turbulence in free-space optical communications.
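The figure of merit referred to here is the scintillation index, the normalized variance of the received intensity (standard definitions, with the Gaussian-Schell degree of coherence written in a common form; the dissertation's exact parameterization may differ):

    \[
      \sigma_{I}^{2}
      = \frac{\langle I^{2} \rangle - \langle I \rangle^{2}}{\langle I \rangle^{2}},
      \qquad
      |\mu(\mathbf{r}_{1},\mathbf{r}_{2})|
      = \exp\!\left(-\frac{|\mathbf{r}_{1}-\mathbf{r}_{2}|^{2}}{2\delta^{2}}\right),
    \]

where δ is the transverse coherence width of the Gaussian-Schell source. Reducing δ below the fully coherent limit is what produces the lower σ_I² at the receiver reported above.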