867 results for Measurement-based quantum computing
Abstract:
The high complexity of cloud parameterizations now held in models puts more pressure on observational studies to provide useful means to evaluate them. One approach to the problem put forth in the modelling community is to evaluate under what atmospheric conditions the parameterizations fail to simulate the cloud properties and under what conditions they do a good job. The ambition of this paper is to characterize the variability of the statistical properties of tropical ice clouds in different tropical "regimes" recently identified in the literature, to aid the development of better process-oriented parameterizations in models. For this purpose, the statistical properties of non-precipitating tropical ice clouds over Darwin, Australia, are characterized using ground-based radar-lidar observations from the Atmospheric Radiation Measurement (ARM) Program. The ice cloud properties analysed are the frequency of ice cloud occurrence, the morphological properties (cloud top height and thickness), and the microphysical and radiative properties (ice water content, visible extinction, effective radius, and total concentration). The variability of these tropical ice cloud properties is then studied as a function of the large-scale cloud regimes derived from the International Satellite Cloud Climatology Project (ISCCP), the amplitude and phase of the Madden-Julian Oscillation (MJO), and the large-scale atmospheric regime as derived from a long-term record of radiosonde observations over Darwin. The vertical variability of ice cloud occurrence and microphysical properties is the largest in all regimes (typically 1.5 orders of magnitude for ice water content and extinction, a factor of 3 in effective radius, and three orders of magnitude in concentration). 98% of ice clouds in our dataset are characterized by either a small cloud fraction (smaller than 0.3) or a very large cloud fraction (larger than 0.9). In the ice part of the troposphere, three distinct layers characterized by different statistically dominant microphysical processes are identified. The variability of the ice cloud properties as a function of the large-scale atmospheric regime, cloud regime, and MJO phase is large, producing mean differences of up to a factor of 8 in the frequency of ice cloud occurrence between large-scale atmospheric regimes and mean differences of typically a factor of 2 in all microphysical properties. Finally, the diurnal cycle of the frequency of occurrence of ice clouds is also very different between regimes and MJO phases, with diurnal amplitudes of the vertically-integrated frequency of ice cloud occurrence ranging from as low as 0.2 (weak diurnal amplitude) to values in excess of 2.0 (very large diurnal amplitude). Modellers should now use these results to check whether their model cloud parameterizations are capable of translating a given atmospheric forcing into the correct statistical ice cloud properties.
Abstract:
A new PID tuning and control approach is introduced for Hammerstein systems based on input/output data. A B-spline neural network is used to model the nonlinear static function in the Hammerstein system. The control signal is composed of a PID controller together with a correction term. In order to update the control signal, multistep-ahead predictions of the Hammerstein system based on the B-spline neural network and the associated Jacobian matrices are calculated using the De Boor algorithm, including both the functional and derivative recursions. A numerical example is used to demonstrate the efficacy of the proposed approach.
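As a rough illustration of the functional De Boor recursion mentioned above, the sketch below evaluates a B-spline at a point given its knot vector and coefficients; the function name, knot vector, and coefficients are generic placeholders and are not taken from the paper (which additionally uses the derivative recursion to form the Jacobians).

```python
import numpy as np

def de_boor(x, knots, coeffs, degree):
    """Evaluate a B-spline at x with De Boor's functional recursion.

    knots  : non-decreasing knot vector (len(coeffs) + degree + 1 entries)
    coeffs : control coefficients (the weights of a B-spline network model)
    """
    p = degree
    # locate the knot span containing x
    k = np.searchsorted(knots, x, side="right") - 1
    k = min(max(k, p), len(coeffs) - 1)
    d = [coeffs[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            denom = knots[j + 1 + k - r] - knots[j + k - p]
            alpha = (x - knots[j + k - p]) / denom
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Toy quadratic spline on a clamped knot vector (illustrative values only).
knots = [0, 0, 0, 1, 2, 3, 3, 3]
coeffs = [0.0, 1.0, 0.5, 1.5, 0.0]
print(de_boor(1.5, knots, coeffs, degree=2))
```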
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have a sufficient scientific workforce to develop and maintain the software and data analysis infrastructure. Such facilities will make it possible to investigate what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level. Current limitations in computing power have severely constrained such an investigation, which is now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions that are based on our best knowledge of science and the most advanced technology.
Abstract:
The ground-based Atmospheric Radiation Measurement Program (ARM) and NASA Aerosol Robotic Network (AERONET) routinely monitor clouds using zenith radiances at visible and near-infrared wavelengths. Using the transmittance calculated from such measurements, we have developed a new retrieval method for cloud effective droplet size and conducted extensive tests for non-precipitating liquid water clouds. The underlying principle is to combine a liquid-water-absorbing wavelength (i.e., 1640 nm) with a non-water-absorbing wavelength for acquiring information on cloud droplet size and optical depth. For simulated stratocumulus clouds with liquid water path less than 300 g m⁻² and horizontal resolution of 201 m, the retrieval method underestimates the mean effective radius by 0.8 μm, with a root-mean-squared error of 1.7 μm and a relative deviation of 13%. For actual observations with a liquid water path less than 450 g m⁻² at the ARM Oklahoma site during 2007–2008, our 1.5-min-averaged retrievals are generally larger by around 1 μm than those from combined ground-based cloud radar and microwave radiometer at a 5-min temporal resolution. We also compared our retrievals to those from combined shortwave flux and microwave observations for relatively homogeneous clouds, showing that the bias between these two retrieval sets is negligible, but the error of 2.6 μm and the relative deviation of 22% are larger than those found in our simulation case. Finally, the transmittance-based cloud effective droplet radii agree to better than 11% with satellite observations and have a negative bias of 1 μm. Overall, the retrieval method provides reasonable cloud effective radius estimates, which can enhance the cloud products of both ARM and AERONET.
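As a loose sketch of the two-wavelength principle described above (not the authors' retrieval code), the following inverts a pair of observed zenith transmittances against a pre-computed (optical depth, effective radius) lookup table; the tables here are simple analytic placeholders standing in for radiative-transfer calculations.

```python
import numpy as np

# Grids of cloud optical depth (tau) and effective radius (r_e, micrometres).
taus = np.linspace(2.0, 60.0, 59)
radii = np.linspace(4.0, 25.0, 43)
TAU, RE = np.meshgrid(taus, radii, indexing="ij")

# Placeholder transmittance tables; a real application would fill these
# with radiative-transfer model output for the two wavelengths.
rt_vis = np.exp(-0.05 * TAU)                      # non-absorbing band
rt_1640 = np.exp(-0.05 * TAU) * (1 - 0.01 * RE)   # water-absorbing 1640 nm band

def retrieve(t_vis_obs, t_1640_obs):
    """Return the (tau, r_e) grid point whose transmittances best match the observations."""
    cost = (rt_vis - t_vis_obs) ** 2 + (rt_1640 - t_1640_obs) ** 2
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return taus[i], radii[j]

print(retrieve(0.30, 0.25))
```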
Abstract:
It has long been supposed that preference judgments between sets of to-be-considered possibilities are made by first winnowing the options down to the most promising-looking alternatives, which form smaller "consideration sets" (Howard, 1963; Wright & Barbour, 1977). In preference choices with >2 options, it is standard to assume that a "consideration set", based on some simple criterion, is established to reduce the options available. Inferential judgments, in contrast, have more frequently been investigated in situations in which only two possibilities need to be considered (e.g., which of these two cities is the larger?). Proponents of the "fast and frugal" approach to decision-making suggest that such judgments are also made on the basis of limited, simple criteria. For example, if only one of two cities is recognized and the task is to judge which city has the larger population, the recognition heuristic states that the recognized city should be selected. A multinomial processing tree model is outlined which provides the basis for estimating the extent to which recognition is used as a criterion in establishing a consideration set for inferential judgments between three possible options.
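A minimal sketch of the recognition heuristic as stated above, for a two-alternative city-size task; the helper function and example cities are illustrative only, and the multinomial processing tree estimation itself is not reproduced here.

```python
def recognition_heuristic(option_a, option_b, recognised):
    """Apply the recognition heuristic to a two-alternative inference task.

    `recognised` is the set of options the decision maker recognises.
    If exactly one option is recognised, choose it; otherwise the heuristic
    does not discriminate and another strategy (knowledge or guessing) must decide.
    """
    a_known, b_known = option_a in recognised, option_b in recognised
    if a_known and not b_known:
        return option_a
    if b_known and not a_known:
        return option_b
    return None  # heuristic is silent

# Example: only "Munich" is recognised, so it is judged the larger city.
print(recognition_heuristic("Munich", "Bielefeld", {"Munich"}))
```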
Abstract:
The effect of increasing salinity and freezing stress, singly and in combination, on a range of chlorophyll fluorescence parameters in foliar tissue of six Crataegus genotypes was examined. In general, increased stress reduced fluorescence values and the absorption, trapping and electron transport energy fluxes per leaf reaction center and cross section, with decreased sigmoidicity of OJIP curves as a measure of the plastoquinone pool, reflecting decreased energy fluxes. Based on the percentage reduction in a performance index from control to stress-treated values, plants were ranked in order of tolerant > intermediate > sensitive. Use of this PIp ranking criterion made it possible to distinguish marked differences in foliar salt/freezing hardiness between the Crataegus species used. Interpretation of the photochemical data showed that salinity and freezing affect both the acceptor and donor side of Photosystem II, while OJIP observations provided information about structural and functional changes in the leaf photosynthetic apparatus of the test species. It is concluded that chlorophyll fluorescence offers a rapid screening technique for assessing the foliar salinity and freezing tolerance of woody perennials.
Abstract:
Mesospheric temperature inversions are well established observed phenomena, yet their properties remain the subject of ongoing research. Comparisons between Rayleigh-scatter lidar temperature measurements obtained by the University of Western Ontario's Purple Crow Lidar (42.9°N, 81.4°W) and the Canadian Middle Atmosphere Model are used to quantify the statistics of inversions. In both model and measurements, inversions occur most frequently in the winter and exhibit an average amplitude of ∼10 K. The model exhibits virtually no inversions in the summer, while the measurements show a strongly reduced frequency of occurrence with an amplitude about half that in the winter. A simple theory of mesospheric inversions based on wave saturation is developed, with no adjustable parameters. It predicts that the environmental lapse rate must be less than half the adiabatic lapse rate for an inversion to form, and it predicts the ratio of the inversion amplitude and thickness as a function of environmental lapse rate. Comparison of this prediction to the actual amplitude/thickness ratio using the lidar measurements shows good agreement between theory and measurements.
Abstract:
Purpose: This paper aims to design an evaluation method that enables an organization to assess its current IT landscape and provide a readiness assessment prior to Software as a Service (SaaS) adoption. Design/methodology/approach: The research employs a mix of quantitative and qualitative approaches for conducting an IT application assessment. Quantitative data such as end users' feedback on the IT applications contribute to the technical impact on efficiency and productivity. Qualitative data such as business domain, business services and IT application cost drivers are used to determine the business value of the IT applications in an organization. Findings: The assessment of IT applications leads to decisions on the suitability of each IT application for migration to a cloud environment. Research limitations/implications: The evaluation of how a particular IT application impacts a business service is based on logical interpretation. A data mining method is suggested in order to derive patterns of IT application capabilities. Practical implications: This method has been applied in a local council in the UK. It helped the council decide the future status of its IT applications for cost-saving purposes.
Abstract:
Current methods and techniques used in designing organisational performance measurement systems do not consider the multiple aspects of business processes or the semantics of data generated during the lifecycle of a product. In this paper, we propose an organisational performance measurement systems design model that is based on the semantics of an organisation, its business processes and product lifecycles. Organisational performance measurement is examined from both the academic and practice disciplines. The multi-discipline approach is used as a research tool to explore the weaknesses of current models that are used to design organisational performance measurement systems. This helped identify the gaps in research and practice concerning the issues and challenges in designing information systems for measuring the performance of an organisation. The knowledge sources investigated include on-going and completed research project reports; scientific and management literature; and practitioners' magazines.
Abstract:
Time-resolved kinetic studies of silylene, SiH2, generated by laser flash photolysis of 1-silacyclopent-3-ene and phenylsilane, have been carried out to obtain rate constants for its bimolecular reactions with methanol, ethanol, 1-propanol, 1-butanol and 2-methyl-1-butanol. The reactions were studied in the gas phase over the pressure range 1–100 Torr in SF6 bath gas, at room temperature. In the study with methanol several buffer gases were used. All five reactions showed pressure dependences characteristic of third-body-assisted association reactions. The rate constant pressure dependences were modelled using RRKM theory, based on Eo values of the association complexes obtained by ab initio calculation (G3 level). Transition state models were adjusted to fit experimental fall-off curves and extrapolated to obtain k∞ values in the range 1.9 to 4.5 × 10⁻¹⁰ cm³ molecule⁻¹ s⁻¹. These numbers, corresponding to the true bimolecular rate constants, indicate efficiencies of between 16 and 67% of the collision rates for these reactions. In the reaction of SiH2 + MeOH there is a small kinetic component to the rate which is second order in MeOH (at low total pressures). This suggests an additional catalysed reaction pathway, which is supported by the ab initio calculations. These calculations have been used to define specific MeOH-for-H2O substitution effects on this catalytic pathway. Where possible our experimental and theoretical results are compared with those of previous studies.
Abstract:
We present an efficient graph-based algorithm for quantifying the similarity of household-level energy use profiles, using a notion of similarity that allows for small time-shifts when comparing profiles. Experimental results on a real smart meter data set demonstrate that in cases of practical interest our technique is far faster than the existing method for computing the same similarity measure. Having a fast algorithm for measuring profile similarity improves the efficiency of tasks such as clustering of customers and cross-validation of forecasting methods using historical data. Furthermore, we apply a generalisation of our algorithm to produce substantially better household-level energy use forecasts from historical smart meter data.
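The paper's graph-based algorithm is not spelled out in this abstract; purely as a generic illustration of profile similarity that tolerates small time-shifts, the sketch below computes a band-constrained dynamic-time-warping distance between two load profiles (an assumption chosen for illustration, not the authors' method).

```python
import numpy as np

def banded_dtw(x, y, band=2):
    """Dynamic-time-warping distance between two equal-length load profiles,
    with warping restricted to +/- `band` time steps (Sakoe-Chiba band)."""
    n = len(x)
    inf = float("inf")
    d = np.full((n + 1, n + 1), inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - band), min(n, i + band) + 1):
            cost = abs(x[i - 1] - y[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, n]

# Two toy daily profiles that differ mainly by a one-step shift.
a = np.array([0, 0, 1, 5, 2, 1, 0, 0], dtype=float)
b = np.array([0, 1, 5, 2, 1, 0, 0, 0], dtype=float)
print(banded_dtw(a, b, band=1))  # small distance: profiles match after a shift
```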
Abstract:
We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations. The modified algorithm runs more than 50 times faster on the CELL’s Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times as compared to the original code on the main CPU. Because the radiation code takes more than 60% of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
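The actual implementation described above is C code with SIMD packing on the Cell and other processors; purely as a schematic of the column-wise task-queue scheduling, the following Python sketch farms individual air columns out to a thread pool (the per-column function is a placeholder, not the FAMOUS radiation scheme).

```python
from concurrent.futures import ThreadPoolExecutor

def radiation_column(column):
    """Placeholder for the per-column radiation computation."""
    return sum(column)  # stand-in for the real heating-rate calculation

def compute_radiation(columns, workers=4):
    # A pool of worker threads pulls individual air columns from a shared
    # queue, mirroring the scheduling strategy described in the abstract.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(radiation_column, columns))

grid = [[1.0, 2.0, 3.0]] * 8  # toy 8-column "domain"
print(compute_radiation(grid))
```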
Abstract:
In their contribution to PNAS, Penner et al. (1) used a climate model to estimate the radiative forcing by the aerosol first indirect effect (cloud albedo effect) in two different ways: first, by deriving a statistical relationship between the logarithm of cloud droplet number concentration, ln Nc, and the logarithm of aerosol optical depth, ln AOD (or the logarithm of the aerosol index, ln AI) for present-day and preindustrial aerosol fields, a method that was applied earlier to satellite data (2), and, second, by computing the radiative flux perturbation between two simulations with and without anthropogenic aerosol sources. They find a radiative forcing that is a factor of 3 lower in the former approach than in the latter [as Penner et al. (1) correctly noted, only their “inline” results are useful for the comparison]. This study is a very interesting contribution, but we believe it deserves several clarifications.
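As a toy illustration of the first, statistical approach described above, the sketch below fits the slope of ln Nc against ln AI by least squares and uses it to translate an aerosol change into a droplet-number change; all numbers are synthetic placeholders, not values from Penner et al.

```python
import numpy as np

# Synthetic (made-up) samples of aerosol index and droplet number concentration.
ln_ai = np.log(np.array([0.05, 0.08, 0.12, 0.20, 0.35, 0.50]))
ln_nc = np.log(np.array([40.0, 55.0, 70.0, 95.0, 130.0, 160.0]))

# Least-squares fit of ln Nc as a linear function of ln AI.
slope, intercept = np.polyfit(ln_ai, ln_nc, 1)

# Implied relative change in Nc for a doubling of the aerosol index.
nc_ratio = np.exp(slope * np.log(2.0))
print(f"d lnNc / d lnAI = {slope:.2f}; Nc ratio for doubled AI = {nc_ratio:.2f}")
```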
Abstract:
Using a simple and accessible Teflon AF-2400 based tube-in-tube reactor, a series of pyrroles was synthesised in flow using the Paal–Knorr reaction of 1,4-diketones with gaseous ammonia. An inline flow titration technique allowed measurement of the ammonia concentration and its relationship to residence time and temperature.
Abstract:
Beginning with an extensive literature review, this new research summary report analyses the process of rental depreciation, its effect on the performance of rents, and the impacts of capital expenditure on depreciation. It is based on an analysis of more than 700 commercial and industrial properties as well as newer property types, and it is the most comprehensive national study of rental depreciation since the 1986 CALUS report. The report will be of interest to valuers, fund managers, institutional investors and their advisors, and researchers.