867 results for Measurement-based quantum computing
Abstract:
The concentrations of sulfate, black carbon (BC) and other aerosols in the Arctic are characterized by high values in late winter and spring (so-called Arctic Haze) and low values in summer. Models have long struggled to capture this seasonality and especially the high concentrations associated with Arctic Haze. In this study, we evaluate sulfate and BC concentrations from eleven different models driven with the same emission inventory against a comprehensive pan-Arctic measurement data set over a time period of 2 years (2008–2009). The set of models consisted of one Lagrangian particle dispersion model, four chemistry transport models (CTMs), one atmospheric chemistry-weather forecast model and five chemistry climate models (CCMs), of which two were nudged to meteorological analyses and three were running freely. The measurement data set consisted of surface measurements of equivalent BC (eBC) from five stations (Alert, Barrow, Pallas, Tiksi and Zeppelin), elemental carbon (EC) from Station Nord and Alert, and aircraft measurements of refractory BC (rBC) from six different campaigns. We find that the models generally captured the measured eBC or rBC and sulfate concentrations quite well, and better than in previous model evaluations. However, the aerosol seasonality at the surface is still too weak in most models. Concentrations of eBC and sulfate averaged over three surface sites are underestimated in winter/spring in all but one model (model means for January–March underestimated by 59 and 37 % for BC and sulfate, respectively), whereas concentrations in summer are overestimated in the model mean (by 88 and 44 % for July–September), but with overestimates as well as underestimates present in individual models. The most pronounced eBC underestimates, not included in the above multi-site average, are found for the station Tiksi in Siberia, where the measured annual mean eBC concentration is 3 times higher than the average annual mean for all other stations. This suggests an underestimate of BC sources in Russia in the emission inventory used. Based on the campaign data, biomass burning was identified as another cause of the modeling problems. For sulfate, very large differences were found in the model ensemble, with an apparent anti-correlation between modeled surface concentrations and total atmospheric columns. There is a strong correlation between observed sulfate and eBC concentrations, with consistent sulfate/eBC slopes found for all Arctic stations, indicating that the sources contributing to sulfate and BC are similar throughout the Arctic and that the aerosols are internally mixed and undergo similar removal. However, only three models reproduced this finding, whereas sulfate and BC are weakly correlated in the other models. Overall, no class of models (e.g., CTMs, CCMs) performed better than the others, and the differences are independent of model resolution.
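As an illustration of the seasonal bias statistics quoted above, the sketch below computes the multi-model mean bias against observations for the January–March and July–September windows. The data layout (monthly, multi-site-averaged concentrations with one column per model and an "obs" column) and the column names are assumptions for the example, not the study's actual data format.

```python
# Minimal sketch: multi-model mean seasonal bias (%) against observations.
# Assumes a DataFrame of monthly, multi-site-averaged surface concentrations
# indexed by month number, with one column per model and an "obs" column;
# these names are illustrative, not the study's actual data structure.
import numpy as np
import pandas as pd

def seasonal_bias_percent(df: pd.DataFrame, months: list[int]) -> float:
    """Percent bias of the multi-model mean relative to observations,
    averaged over the given months (e.g. [1, 2, 3] for January-March)."""
    season = df.loc[df.index.isin(months)]
    model_cols = [c for c in season.columns if c != "obs"]
    model_mean = season[model_cols].mean(axis=1).mean()   # model-ensemble mean
    obs_mean = season["obs"].mean()                        # observed mean
    return 100.0 * (model_mean - obs_mean) / obs_mean

# Example with made-up numbers: a negative value means an underestimate.
df = pd.DataFrame(
    {"modelA": np.random.rand(12), "modelB": np.random.rand(12),
     "obs": np.random.rand(12) + 0.5},
    index=range(1, 13))
print(seasonal_bias_percent(df, [1, 2, 3]))   # winter/spring window
print(seasonal_bias_percent(df, [7, 8, 9]))   # summer window
```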
Abstract:
Collocations between two satellite sensors are occasions where both sensors observe the same place at roughly the same time. We study collocations between the Microwave Humidity Sounder (MHS) on-board NOAA-18 and the Cloud Profiling Radar (CPR) on-board CloudSat. First, a simple method is presented to obtain those collocations, and this method is compared with a more complicated approach found in the literature. We present the statistical properties of the collocations, with particular attention to the effects of the differences in footprint size. For 2007, we find approximately two and a half million MHS measurements with CPR pixels close to their centrepoints. Most of those collocations contain at least ten CloudSat pixels and image relatively homogeneous scenes. In the second part, we present three possible applications for the collocations. Firstly, we use the collocations to validate an operational Ice Water Path (IWP) product from MHS measurements, produced by the National Environmental Satellite, Data, and Information Service (NESDIS) in the Microwave Surface and Precipitation Products System (MSPPS). IWP values from the CloudSat CPR are found to be significantly larger than those from the MSPPS. Secondly, we compare the relation between IWP and MHS channel 5 (190.311 GHz) brightness temperature for two datasets: the collocated dataset, and an artificial dataset. We find a larger variability in the collocated dataset. Finally, we use the collocations to train an Artificial Neural Network and describe how we can use it to develop a new MHS-based IWP product. We also study the effect of adding measurements from the High Resolution Infrared Radiation Sounder (HIRS), channels 8 (11.11 μm) and 11 (8.33 μm). This shows a small improvement in the retrieval quality. The collocations described in the article are available for public use.
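A minimal sketch of the kind of "simple method" referred to above: for each MHS footprint centre point, collect the CPR pixels that fall within a distance and time threshold. The array names and the thresholds below are illustrative assumptions; the paper's actual criteria and (more efficient) search strategy may differ.

```python
# Minimal sketch of a brute-force collocation search: for each MHS footprint
# centre point, collect CPR pixels within a distance and time threshold.
# Thresholds and array names are illustrative; the paper's actual criteria
# and search strategy may differ.
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def collocate(mhs_lat, mhs_lon, mhs_time, cpr_lat, cpr_lon, cpr_time,
              max_dist_km=7.5, max_dt_s=900.0):
    """Return, for each MHS pixel, the indices of CPR pixels falling within
    max_dist_km of its centre point and within max_dt_s in time."""
    matches = []
    for lat, lon, t in zip(mhs_lat, mhs_lon, mhs_time):
        close_in_time = np.abs(cpr_time - t) <= max_dt_s
        dist = haversine_km(lat, lon, cpr_lat[close_in_time], cpr_lon[close_in_time])
        matches.append(np.flatnonzero(close_in_time)[dist <= max_dist_km])
    return matches
```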
Abstract:
We present cross-validation of remote sensing measurements of methane profiles in the Canadian high Arctic. Accurate and precise measurements of methane are essential to understand quantitatively its role in the climate system and in global change. Here, we show a cross-validation between three datasets: two from spaceborne instruments and one from a ground-based instrument. All are Fourier Transform Spectrometers (FTSs). We consider the Canadian SCISAT Atmospheric Chemistry Experiment (ACE)-FTS, a solar occultation infrared spectrometer operating since 2004, and the thermal infrared band of the Japanese Greenhouse Gases Observing Satellite (GOSAT) Thermal And Near infrared Sensor for carbon Observation (TANSO)-FTS, a nadir/off-nadir scanning FTS instrument operating at solar and terrestrial infrared wavelengths since 2009. The ground-based instrument is a Bruker 125HR Fourier Transform Infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environment Atmospheric Research Laboratory (PEARL) Ridge Lab at Eureka, Nunavut (80° N, 86° W) since 2006. For each pair of instruments, measurements are collocated within 500 km and 24 h. An additional criterion based on potential vorticity values was found not to significantly affect differences between measurements. Profiles are regridded to a common vertical grid for each comparison set. To account for differing vertical resolutions, ACE-FTS measurements are smoothed to the resolution of either PEARL-FTS or TANSO-FTS, and PEARL-FTS measurements are smoothed to the TANSO-FTS resolution. Differences for each pair are examined in terms of profiles and partial columns. During the period considered, the number of collocations for each pair is large enough to obtain a good sample size (from several hundred to tens of thousands depending on pair and configuration). Considering full profiles, the degrees of freedom for signal (DOFS) are between 0.2 and 0.7 for TANSO-FTS and between 1.5 and 3 for PEARL-FTS, while ACE-FTS has considerably more information (roughly 1 degree of freedom per altitude level). We take partial columns between roughly 5 and 30 km for the ACE-FTS–PEARL-FTS comparison, and between 5 and 10 km for the other pairs. The DOFS for the partial columns are between 1.2 and 2 for PEARL-FTS collocated with ACE-FTS, between 0.1 and 0.5 for PEARL-FTS collocated with TANSO-FTS or for TANSO-FTS collocated with either other instrument, while ACE-FTS has much higher information content. For all pairs, the partial column differences are within ± 3 × 10^22 molecules cm−2. Expressed as median ± median absolute deviation (given in both absolute and relative terms), these differences are 0.11 ± 9.60 × 10^20 molecules cm−2 (0.012 ± 1.018 %) for TANSO-FTS–PEARL-FTS, −2.6 ± 2.6 × 10^21 molecules cm−2 (−1.6 ± 1.6 %) for ACE-FTS–PEARL-FTS, and 7.4 ± 6.0 × 10^20 molecules cm−2 (0.78 ± 0.64 %) for TANSO-FTS–ACE-FTS. The differences for ACE-FTS–PEARL-FTS and TANSO-FTS–PEARL-FTS partial columns decrease significantly as a function of PEARL partial columns, whereas the range of partial column values for TANSO-FTS–ACE-FTS collocations is too small to draw any conclusion on its dependence on ACE-FTS partial columns.
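The smoothing step mentioned above is usually done with the standard averaging-kernel comparison formula, x_s = x_a + A (x_h − x_a), and the reported statistics are medians with median absolute deviations. The sketch below shows both in generic form; the paper's actual grids, a priori profiles and partial-column operators are not reproduced here.

```python
# Sketch of the standard averaging-kernel smoothing used when comparing a
# high-resolution profile with a lower-resolution retrieval, x_s = x_a + A (x_h - x_a),
# plus the median +/- median-absolute-deviation statistic quoted for the
# partial-column differences. Generic illustration only; the actual grids,
# a priori profiles and column operators of the paper are not reproduced.
import numpy as np

def smooth_profile(x_high, x_apriori, avg_kernel):
    """Smooth a high-resolution profile x_high (already interpolated onto the
    low-resolution retrieval grid) with that retrieval's a priori profile and
    averaging-kernel matrix."""
    return x_apriori + avg_kernel @ (x_high - x_apriori)

def median_and_mad(differences):
    """Median and median absolute deviation of a set of partial-column differences."""
    med = np.median(differences)
    mad = np.median(np.abs(differences - med))
    return med, mad
```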
An LDA and probability-based classifier for the diagnosis of Alzheimer's Disease from structural MRI
Abstract:
In this paper, a custom classification algorithm based on linear discriminant analysis and probability-based weights is implemented and applied to hippocampus measurements from structural magnetic resonance images of healthy subjects and Alzheimer's Disease sufferers, with the aim of diagnosing them as accurately as possible. The classifier works by classifying each measurement of a hippocampal volume as healthy-control-sized or Alzheimer's Disease-sized; these new features are then weighted and used to classify the subject as a healthy control or as suffering from Alzheimer's Disease. The preliminary results reach an accuracy of 85.8%, which is similar to that of state-of-the-art methods such as a Naive Bayes classifier and a Support Vector Machine. An advantage of the method proposed in this paper over the aforementioned state-of-the-art classifiers is the descriptive ability of the classifications it produces. The descriptive model can be of great help to a doctor in the diagnosis of Alzheimer's Disease, or even in furthering the understanding of how Alzheimer's Disease affects the hippocampus.
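A minimal sketch of the two-stage idea described above: each hippocampal measurement is first labelled AD-sized or control-sized by a one-dimensional linear discriminant, and the binary labels are then combined in a weighted vote. Using each discriminant's training accuracy as its weight is an illustrative assumption, not necessarily the paper's probability-based weighting.

```python
# Sketch of a two-stage classifier: (1) label each hippocampal measurement as
# AD-sized or control-sized with a per-measurement linear discriminant,
# (2) combine the binary labels in a weighted vote. The per-measurement
# accuracy used as weight is an illustrative assumption.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_measurement_rules(X, y):
    """Fit one 1-D LDA per hippocampal measurement (column of X); y is 0 = control, 1 = AD."""
    rules, weights = [], []
    for j in range(X.shape[1]):
        lda = LinearDiscriminantAnalysis().fit(X[:, [j]], y)
        rules.append(lda)
        weights.append(lda.score(X[:, [j]], y))  # per-measurement accuracy as weight
    return rules, np.array(weights)

def predict_subject(x, rules, weights):
    """Weighted vote over the per-measurement AD/control labels for one subject."""
    votes = np.array([rule.predict(x[[j]].reshape(1, -1))[0]
                      for j, rule in enumerate(rules)])
    score = np.sum(weights * (2 * votes - 1))    # > 0 leans AD, < 0 leans control
    return int(score > 0)
```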
Abstract:
Ruminant husbandry is a major source of anthropogenic greenhouse gases (GHG). Filling knowledge gaps and providing expert recommendations are important for defining future research priorities, improving methodologies and establishing science-based GHG mitigation solutions for government and non-governmental organisations, advisory/extension networks, and the ruminant livestock sector. The objective of this review is to summarize the published literature to provide a detailed assessment of the methodologies currently in use for measuring enteric methane (CH4) emission from individual animals under specific conditions, and to give recommendations regarding their application. The methods described include respiration chambers and enclosures, the sulphur hexafluoride (SF6) tracer technique, and techniques based on short-term measurements of gas concentrations in samples of exhaled air. The latter include automated head chambers (e.g. the GreenFeed system), the use of carbon dioxide (CO2) as a marker, and (handheld) laser CH4 detection. Each technique is compared and assessed on its capabilities and limitations, followed by methodology recommendations. It is concluded that there is no ‘one size fits all’ method for measuring CH4 emission by individual animals. Ultimately, the decision as to which method to use should be based on the experimental objectives and the resources available. However, the need for high-throughput methodology, e.g. for screening large numbers of animals for genomic studies, does not justify the use of methods that are inaccurate. All CH4 measurement techniques are subject to experimental variation and random errors. Many sources of variation must be considered when measuring CH4 concentration in exhaled air samples without a quantitative, or at least regular, collection rate, or without the use of a marker to indicate (or adjust for) the proportion of exhaled CH4 sampled. Consideration of the number and timing of measurements relative to diurnal patterns of CH4 emission and respiratory exchange is important, as is consideration of feeding patterns and the associated patterns of rumen fermentation rate and other aspects of animal behaviour. Regardless of the method chosen, appropriate calibrations and recovery tests are required for both method establishment and routine operation. Successful and correct use of methods requires careful attention to detail, rigour, and routine self-assessment of the quality of the data they provide.
Abstract:
Measurements of down-welling microwave radiation from raining clouds performed with the Advanced Microwave Radiometer for Rain Identification (ADMIRARI) at 10.7, 21.0 and 36.5 GHz during the Global Precipitation Measurement Ground Validation "Cloud processes of the main precipitation systems in Brazil: A contribution to cloud resolving modeling and to the Global Precipitation Measurement" (CHUVA) campaign held in Brazil in March 2010 represent a unique test bed for understanding three-dimensional (3D) effects in microwave radiative transfer processes. While the necessity of accounting for geometric effects is obvious given the slant observation geometry (ADMIRARI was pointing at a fixed 30° elevation angle), the polarization signal (i.e., the difference between the vertical and horizontal brightness temperatures) shows ubiquitous positive values at both 21.0 and 36.5 GHz, coinciding with high brightness temperatures. This is a genuine and unique microwave signature of radiation side leakage, which cannot be explained in a 1D radiative transfer framework but requires the inclusion of three-dimensional scattering effects. We demonstrate these effects and interdependencies by analyzing two campaign case studies and by exploiting a sophisticated 3D radiative transfer model suited for dichroic media such as precipitating clouds.
Abstract:
The evolution of commodity computing led to the possibility of efficient usage of interconnected machines to solve computationally intensive tasks, which were previously solvable only by using expensive supercomputers. This, however, required new methods for process scheduling and distribution, considering network latency, communication cost, heterogeneous environments and distributed computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential to perform effective scheduling. In this paper, we overview the evolution of scheduling approaches, focusing on distributed environments. We also evaluate the current approaches for process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction, considering chaotic properties of such behavior and the automatic detection of critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The obtained results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online predictions due to its low computational cost and good precision. (C) 2009 Elsevier B.V. All rights reserved.
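Because the abstract treats application behavior as a chaotic time series, one common way to forecast such a series is nearest-neighbor (analog) prediction in a delay-embedded phase space. The sketch below is a generic stand-in for that idea, not the prediction model proposed in the paper; the embedding parameters and the load-history example are arbitrary.

```python
# Generic sketch of online behavior prediction for a chaotic time series using
# delay embedding plus nearest-neighbor (analog) forecasting. This is a
# stand-in illustration, not the model proposed in the paper; the embedding
# dimension and delay are arbitrary choices.
import numpy as np

def predict_next(series, dim=3, delay=1):
    """Predict the next value of `series` (e.g. a process's CPU or I/O load)
    by finding the most similar past state in a delay-embedded phase space
    and returning the value that followed it."""
    series = np.asarray(series, dtype=float)
    span = (dim - 1) * delay
    # Delay vectors for every past state whose successor is known.
    states = np.array([series[i - span:i + 1:delay]
                       for i in range(span, len(series) - 1)])
    current = series[len(series) - 1 - span::delay]      # the present state
    nearest = np.argmin(np.linalg.norm(states - current, axis=1))
    return series[nearest + span + 1]                    # value that followed the analog

# A scheduler could, for instance, place a new process on the node whose
# predicted load is lowest (illustrative usage only).
load_history = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.rand(200)
print(predict_next(load_history))
```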
Abstract:
This paper presents an automatic method to detect and classify weathered aggregates by assessing changes in colors and textures. The method allows the extraction of aggregate features from images and their automatic classification based on surface characteristics. The concept of entropy is used to extract features from digital images. An analysis of the use of this concept is presented, and two classification approaches, based on neural network architectures, are proposed. The classification performance of the proposed approaches is compared to the results obtained by other algorithms commonly considered for classification purposes. The obtained results confirm that the presented method strongly supports the detection of weathered aggregates.
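Where the abstract mentions entropy-based features, the usual starting point is the Shannon entropy of an image's intensity histogram, which can then feed a neural-network classifier. The sketch below is a generic illustration and may differ from the exact entropy measure and feature set used in the paper.

```python
# Generic sketch of an entropy-based texture feature: Shannon entropy of a
# grayscale image's intensity histogram. The exact formulation used in the
# paper may differ; this only illustrates turning an aggregate image into a
# scalar feature for a classifier.
import numpy as np

def shannon_entropy(image, bins=256):
    """Shannon entropy (in bits) of the intensity distribution of a 2-D image."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]                      # ignore empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

# Example on synthetic 8-bit patches: a noisier (more weathered-looking)
# texture generally yields a higher entropy value.
smooth_patch = np.full((64, 64), 128, dtype=np.uint8)
rough_patch = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(shannon_entropy(smooth_patch), shannon_entropy(rough_patch))
```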
Abstract:
ZnO nanocrystals are studied using theoretical calculations based on density functional theory. The two main effects related to the reduced size of the nanocrystals are investigated: quantum confinement and a large surface-to-volume ratio. The effects of quantum confinement are studied by saturating the surface dangling bonds of the nanocrystals with hypothetical H atoms. To understand the effects of the surfaces of the nanocrystals, all saturation is removed and the system is relaxed to its minimum-energy configuration. Several different surface motifs are reported, which should be observable experimentally. Spin-polarized calculations are performed on the unsaturated nanocrystals, leading to different magnetic moments. We propose that this magnetic moment may be responsible for the intrinsic magnetism observed in ZnO nanostructures.
Abstract:
Carbon nanotubes rank amongst the potential candidates for a new family of nanoscopic devices, in particular for sensing applications. While defects in carbon nanotubes act as binding sites for foreign species, our current level of control over the fabrication process does not allow one to choose specifically where these binding sites will actually be positioned. In this work we present a theoretical framework for accurately calculating the electronic and transport properties of long disordered carbon nanotubes containing a large number of binding sites randomly distributed along a sample. This method combines the accuracy and functionality of ab initio density functional theory, used to determine the electronic structure, with a recursive Green's functions method. We apply this methodology to the problem of nitrogen-rich carbon nanotubes, first considering different types of defects and then demonstrating how our simulations can help in the field of sensor design by allowing one to compute the transport properties of realistic nanotube devices containing a large number of randomly distributed binding sites.
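The recursive Green's function idea can be seen in miniature on a single-orbital tight-binding chain with random on-site disorder attached to two semi-infinite leads; the sketch below computes the Landauer transmission that way. It is a toy scalar version under simple assumptions (one orbital per slice, identical hoppings), not the ab initio DFT-based machinery of the paper.

```python
# Toy recursive Green's function (RGF) calculation of the Landauer transmission
# through a 1-D single-orbital tight-binding chain with random on-site disorder,
# attached to two semi-infinite ideal leads. Scalar sketch of the recursion
# only; not the DFT-based implementation of the paper.
import numpy as np

def transmission(energy, onsite, t=1.0):
    """Transmission T(E) through a chain (>= 2 sites) with on-site energies
    `onsite`, nearest-neighbour hopping t, and identical 1-D leads."""
    # Retarded surface Green's function of a semi-infinite 1-D lead (|E| < 2t).
    g_surf = (energy - 1j * np.sqrt(4 * t**2 - energy**2)) / (2 * t**2)
    sigma = t**2 * g_surf                    # lead self-energy (same left and right)
    gamma = -2.0 * sigma.imag                # broadening Gamma = i(Sigma - Sigma^dagger)

    # Forward sweep: attach sites one by one to the left-connected block.
    g_nn = 1.0 / (energy - onsite[0] - sigma)     # first site + left lead
    g_1n = g_nn                                   # propagator from site 1 to current site
    for eps in onsite[1:-1]:
        g_nn = 1.0 / (energy - eps - t**2 * g_nn)
        g_1n = g_1n * t * g_nn
    # Last site also feels the right lead's self-energy.
    g_NN = 1.0 / (energy - onsite[-1] - t**2 * g_nn - sigma)
    g_1N = g_1n * t * g_NN

    return gamma * abs(g_1N) ** 2 * gamma    # Fisher-Lee: T = Gamma_L |G_1N|^2 Gamma_R

rng = np.random.default_rng(0)
clean = np.zeros(200)                        # ordered chain: T is ~1 inside the band
dirty = rng.uniform(-0.5, 0.5, size=200)     # random binding-site-like disorder
print(transmission(0.3, clean), transmission(0.3, dirty))
```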
Abstract:
The use of the spin of the electron as the ultimate logic bit, in what has been dubbed spintronics, can lead to a novel way of thinking about information flow. At the same time, single-layer graphene has been the subject of intense research due to its potential application in nanoscale electronics. While defects can significantly alter the electronic properties of nanoscopic systems, the lack of control can lead to seemingly deleterious effects arising from the random arrangement of such impurities. Here we demonstrate, using ab initio density functional theory and non-equilibrium Green's function calculations, that it is possible to obtain perfect spin selectivity in doped graphene nanoribbons to produce a perfect spin filter. We show that initially unpolarized electrons entering the system give rise to 100% polarization of the current due to random disorder. This effect is explained in terms of different localization lengths for each spin channel, which leads to a new, disorder-driven mechanism for the spin-filtering effect.
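The localization-length argument can be made concrete with a simple fit: in the localized regime the transmission of a disordered conductor typically decays as ⟨ln T⟩ ≈ -2L/ξ, so ξ for each spin channel can be extracted from the slope of ln T versus length. The convention and the synthetic data below are illustrative assumptions, not the paper's calculation.

```python
# Sketch: estimating a spin-resolved localization length from the decay of the
# transmission with system length, using the common convention <ln T> ~ -2L/xi.
# The convention choice and the synthetic data are illustrative assumptions only.
import numpy as np

def localization_length(lengths, transmissions):
    """Fit ln T versus L with a straight line and return xi = -2 / slope."""
    slope, _ = np.polyfit(lengths, np.log(transmissions), 1)
    return -2.0 / slope

# Synthetic example: two spin channels with different decay rates, as in a
# disorder-driven spin filter (the channel with the longer xi dominates the current).
L = np.linspace(10, 200, 20)
T_up = np.exp(-2 * L / 150.0) * (1 + 0.05 * np.random.randn(20))
T_down = np.exp(-2 * L / 20.0) * (1 + 0.05 * np.random.randn(20))
print(localization_length(L, T_up), localization_length(L, T_down))
```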
Abstract:
Photoluminescence measurements at different temperatures have been performed to investigate the effects of confinement on the electron-phonon interaction in GaAs/AlGaAs quantum wells (QWs). A series of samples with well widths in the range from 150 Å up to 750 Å was analyzed. Using a fitting procedure based on the Pässler p-model to describe the temperature dependence of the exciton recombination energy, we determined a fit parameter which is related to the strength of the electron-phonon interaction. On the basis of the behavior of this fit parameter as a function of the well width of the samples investigated, we verified that effects of confinement on the exciton recombination energy are still present in QWs with well widths as large as 450 Å. Our findings also show that the electron-phonon interaction is three times stronger in bulk GaAs than in Al(0.18)Ga(0.82)As/GaAs QWs.
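For reference, the Pässler p-model commonly used for such fits has the form E(T) = E(0) − (αΘ_p/2)[(1 + (2T/Θ_p)^p)^(1/p) − 1], with α acting as the electron-phonon-related fit parameter. The sketch below fits this standard form with scipy; the initial guesses and the synthetic data are assumptions for illustration, not the paper's measurements.

```python
# Sketch: fitting the temperature dependence of the exciton recombination
# energy with the Paessler p-model,
#   E(T) = E0 - (alpha*theta/2) * ((1 + (2T/theta)**p)**(1/p) - 1),
# where alpha relates to the electron-phonon coupling strength.
# Initial guesses and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def passler_p(T, E0, alpha, theta, p):
    return E0 - 0.5 * alpha * theta * ((1.0 + (2.0 * T / theta) ** p) ** (1.0 / p) - 1.0)

# Synthetic "measured" peak energies (eV) versus temperature (K).
T_data = np.linspace(10, 300, 30)
E_data = passler_p(T_data, 1.515, 4.8e-4, 230.0, 2.5) + 1e-4 * np.random.randn(30)

popt, _ = curve_fit(passler_p, T_data, E_data, p0=[1.5, 5e-4, 200.0, 2.0])
E0_fit, alpha_fit, theta_fit, p_fit = popt
print(f"alpha = {alpha_fit:.2e} eV/K (electron-phonon coupling-related parameter)")
```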
Abstract:
We present theoretical photoluminescence (PL) spectra of undoped and p-doped Al(x)In(1-x-y)Ga(y)N/Al(X)In(1-X-Y)Ga(Y)N double quantum wells (DQWs). The calculations were performed within the k.p method by solving a full eight-band Kane Hamiltonian together with the Poisson equation in a plane-wave representation, including exchange-correlation effects within the local density approximation. Strain effects due to the lattice mismatch are also taken into account. We show the calculated PL spectra, analyzing the blue and red shifts in energy as one varies the spike and well widths, as well as the acceptor doping concentration. We found a transition between a regime of isolated quantum wells and one of interacting DQWs. Since there are few studies of the optical properties of quantum wells based on nitride quaternary alloys, the results reported here will provide guidelines for the interpretation of forthcoming experiments. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
In this paper we extend the results presented in (de Ponte, Mizrahi and Moussa 2007 Phys. Rev. A 76 032101) to treat quantitatively the effects of reservoirs at finite temperature in a bosonic dissipative network: a chain of coupled harmonic oscillators of arbitrary topology, i.e., regardless of the way the oscillators are coupled together, the strengths of their couplings and their natural frequencies. Starting with the case where distinct reservoirs are considered, each one coupled to a corresponding oscillator, we also analyze the case where a common reservoir is assigned to the whole network. Master equations are derived for both situations and for both regimes of weak and strong coupling strengths between the network oscillators. Solutions of these master equations are presented through the normal-ordered characteristic function. These solutions are shown to become significantly involved when temperature effects are considered, making the analysis of collective decoherence and dispersion in dissipative bosonic networks difficult. To circumvent these difficulties, we turn to the Wigner distribution function, which enables us to present a technique to estimate the decoherence time of network states. Our technique proceeds by computing separately the effects of dispersion and the attenuation of the interference terms of the Wigner function. A detailed analysis of the dispersion mechanism is also presented through the evolution of the Wigner function. The interesting collective dispersion effects are discussed and applied to the analysis of decoherence of a class of network states. Finally, the entropy and the entanglement of a pure bipartite system are discussed.
Abstract:
This paper presents the use of a multiprocessor architecture for improving the performance of tomographic image reconstruction. Image reconstruction in computed tomography (CT) is an intensive task for single-processor systems. We investigate the suitability of filtered image reconstruction based on DSPs organized for parallel processing and compare it with an implementation based on the Message Passing Interface (MPI) library. The experimental results show that the speedups observed for both platforms increased with image resolution. In addition, the execution time to communication time ratios (Rt/Rc) as a function of the sample size showed a narrower variation for the DSP platform than for the MPI platform, which indicates its better performance for parallel image reconstruction.
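To make the parallel-reconstruction idea concrete, the sketch below splits the projection angles of a filtered backprojection across MPI ranks with mpi4py and sums the partial images. It is a generic illustration of angle-domain parallelism, not the paper's DSP or MPI implementation, and the sinogram layout (rows = angles, columns = detector bins) is an assumption.

```python
# Generic sketch of angle-parallel filtered backprojection with mpi4py:
# each rank ramp-filters and backprojects its own subset of projection angles,
# and the partial images are summed on rank 0. Not the paper's code; the
# sinogram layout (rows = angles, columns = detector bins) is assumed.
# Run with e.g.: mpirun -n 4 python fbp_mpi.py
import numpy as np
from mpi4py import MPI

def ramp_filter(projection):
    """Apply a simple ramp filter to one projection via the FFT."""
    freqs = np.fft.fftfreq(projection.size)
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

def backproject(sinogram, angles_rad, n_pixels):
    """Backproject the filtered projections onto an n_pixels x n_pixels grid."""
    coords = np.linspace(-1.0, 1.0, n_pixels)
    x, y = np.meshgrid(coords, coords)
    detector = np.linspace(-1.0, 1.0, sinogram.shape[1])
    image = np.zeros((n_pixels, n_pixels))
    for proj, theta in zip(sinogram, angles_rad):
        t = x * np.cos(theta) + y * np.sin(theta)      # detector coordinate of each pixel
        image += np.interp(t, detector, ramp_filter(proj))
    return image

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_angles, n_det, n_pix = 180, 128, 128
angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
sinogram = np.ones((n_angles, n_det))                  # placeholder sinogram

my_slice = slice(rank, n_angles, size)                 # round-robin split of angles
partial = backproject(sinogram[my_slice], angles[my_slice], n_pix)

total = np.zeros_like(partial) if rank == 0 else None
comm.Reduce(partial, total, op=MPI.SUM, root=0)        # sum partial images on rank 0
if rank == 0:
    print("reconstructed image, mean value:", total.mean())
```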