936 results for Data Acquisition Methods.
Abstract:
The World Weather Research Programme (WWRP) and the World Climate Research Programme (WCRP) have identified collaborations and scientific priorities to accelerate advances in analysis and prediction at subseasonal-to-seasonal time scales, which include i) advancing knowledge of mesoscale–planetary-scale interactions and their prediction; ii) developing high-resolution global–regional climate simulations, with advanced representation of physical processes, to improve the predictive skill of subseasonal and seasonal variability of high-impact events, such as seasonal droughts and floods, blocking, and tropical and extratropical cyclones; iii) contributing to the improvement of data assimilation methods for monitoring and prediction used in coupled ocean–atmosphere–land and Earth system models; and iv) developing and transferring diagnostic and prognostic information tailored to socioeconomic decision making. The document puts forward specific underpinning research, linkages, and requirements necessary to achieve the goals of the proposed collaboration.
Abstract:
This chapter presents techniques used for the generation of 3D digital elevation models (DEMs) from remotely sensed data. Three methods are explored and discussed—optical stereoscopic imagery, Interferometric Synthetic Aperture Radar (InSAR), and Light Detection and Ranging (LIDAR). For each approach, the state of the art presented in the literature is reviewed. Techniques involved in DEM generation are presented together with an accuracy evaluation. Results of DEMs reconstructed from remotely sensed data are illustrated. While the process of DEM generation from satellite stereoscopic imagery represents a good example of the passive, multi-view imaging technology discussed in Chap. 2 of this book, InSAR and LIDAR use different principles to acquire 3D information. With regard to InSAR and LIDAR, detailed discussions are conducted in order to convey the fundamentals of both technologies.
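As a companion to the abstract above, here is a minimal sketch of one geometric relation underlying InSAR DEM generation: converting an unwrapped interferometric phase map to relative heights via the height of ambiguity. The repeat-pass formula is a standard textbook relation, and all geometry values and the synthetic phase field are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Illustrative acquisition geometry (not from the chapter).
wavelength = 0.056            # radar wavelength in metres (C-band, ~5.6 cm)
slant_range = 850e3           # sensor-to-target slant range in metres
incidence = np.deg2rad(34.0)  # local incidence angle
b_perp = 150.0                # perpendicular baseline in metres

# Height of ambiguity for a repeat-pass interferometer (factor 2 from the
# two-way path): the height change that produces one full 2*pi fringe.
h_amb = wavelength * slant_range * np.sin(incidence) / (2.0 * b_perp)

# Synthetic ramp standing in for a real, flattened and unwrapped interferogram.
unwrapped_phase = np.linspace(0.0, 6.0 * np.pi, 512).reshape(1, -1).repeat(512, axis=0)

# Relative terrain height: one fringe (2*pi) corresponds to h_amb metres.
relative_height = h_amb * unwrapped_phase / (2.0 * np.pi)

print(f"height of ambiguity: {h_amb:.1f} m")
print(f"height range in synthetic scene: {relative_height.min():.1f} to {relative_height.max():.1f} m")
```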
Abstract:
Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in iterative parallel data mining algorithms. In particular, the analysis focuses on one of the most influential and popular data mining methods, the k-means algorithm for cluster analysis. The straightforward parallel formulation of the k-means algorithm requires a global reduction operation at each iteration step, which hinders its scalability. This work studies a different parallel formulation of the algorithm where the requirement of global communication can be relaxed while still providing the exact solution of the centralised k-means algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error which allows a further reduction of the communication costs.
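Below is a minimal sketch of the straightforward data-parallel k-means formulation that the abstract takes as its starting point, with the per-iteration global reduction emulated by summing per-partition statistics; the partitioning, data and names are illustrative, not the authors' implementation. The comment in the loop marks the point where a spatially compact (e.g. k-d-tree-induced) distribution would allow the exchange to be restricted to a subset of partitions.

```python
import numpy as np

def kmeans_partial_stats(block, centroids):
    """Per-partition statistics for one k-means iteration: for each centroid,
    the sum of the points assigned to it and their count."""
    d2 = ((block[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    k, dim = centroids.shape
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    for j in range(k):
        members = block[labels == j]
        sums[j] = members.sum(axis=0)
        counts[j] = len(members)
    return sums, counts

rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 2))
blocks = np.array_split(data, 8)          # stand-ins for 8 distributed partitions
centroids = data[rng.choice(len(data), 4, replace=False)]

for _ in range(20):
    stats = [kmeans_partial_stats(b, centroids) for b in blocks]
    # The global reduction the abstract refers to: summing per-partition sums
    # and counts. With spatially compact partitions, each centroid's statistics
    # come from few partitions, so this exchange can be restricted accordingly
    # instead of being an all-to-all reduction.
    total_sums = sum(s for s, _ in stats)
    total_counts = sum(c for _, c in stats)
    centroids = total_sums / np.maximum(total_counts, 1)[:, None]

print(centroids)
```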
Abstract:
With advances in technology, terahertz imaging and spectroscopy are beginning to move out of the laboratory and find applications in areas as diverse as security screening, medicine, art conservation and field archaeology. Nevertheless, there is still a need to improve upon the performance of existing terahertz systems to achieve greater compactness and robustness, enhanced spatial resolution, shorter data acquisition times and operation at greater standoff distances. This chapter will review recent technological developments in this direction that make use of nanostructures in the generation, detection and manipulation of terahertz radiation. The chapter will also explain how terahertz spectroscopy can be used as a tool to characterize the ultrafast carrier dynamics of nanomaterials.
Abstract:
Residential electricity demand in most European countries accounts for a major proportion of overall electricity consumption. The timing of residential electricity demand has significant impacts on carbon emissions and system costs. This paper reviews the data and methods used in time use studies in the context of residential electricity demand modelling. It highlights key issues which are likely to become more topical for research on the timing of electricity demand following the roll-out of smart meters.
Abstract:
Current methods for initialising coupled atmosphere-ocean forecasts often rely on the use of separate atmosphere and ocean analyses, the combination of which can leave the coupled system imbalanced at the beginning of the forecast, potentially accelerating the development of errors. Using a series of experiments with the European Centre for Medium-range Weather Forecasts coupled system, the magnitude and extent of these so-called initialisation shocks are quantified, and their impact on forecast skill is measured. It is found that forecasts initialised by separate ocean and atmospheric analyses do exhibit initialisation shocks in lower atmospheric temperature, when compared to forecasts initialised using a coupled data assimilation method. These shocks result in as much as a doubling of root-mean-square error on the first day of the forecast in some regions, and in increases that are sustained for the duration of the 10-day forecasts performed here. However, the impacts of this choice of initialisation on forecast skill, assessed using independent datasets, were found to be negligible, at least over the limited period studied. Larger initialisation shocks are found to follow a change in either the atmospheric or ocean model component between the analysis and forecast phases: changes in the ocean component can lead to sea surface temperature shocks of more than 0.5 K in some equatorial regions during the first day of the forecast. Implications for the development of coupled forecast systems, particularly with respect to coupled data assimilation methods, are discussed.
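As a rough illustration of how an initialisation shock of the kind described above can be quantified, the sketch below computes lead-time-dependent root-mean-square errors for two hypothetical sets of forecasts against a verifying reference and reports their day-1 ratio; the fields are synthetic stand-ins, not output from the ECMWF coupled system.

```python
import numpy as np

def rmse(forecast, reference):
    """Root-mean-square error over all grid points, one value per lead time."""
    return np.sqrt(((forecast - reference) ** 2).mean(axis=(1, 2)))

# Synthetic stand-ins: (lead_time, lat, lon) fields for forecasts started from
# separate analyses, forecasts started from a coupled analysis, and an
# independent verifying reference.
rng = np.random.default_rng(1)
reference = rng.normal(size=(10, 30, 60))
fc_uncoupled = reference + rng.normal(scale=np.linspace(1.0, 2.0, 10)[:, None, None],
                                      size=reference.shape)
fc_coupled = reference + rng.normal(scale=np.linspace(0.5, 1.8, 10)[:, None, None],
                                    size=reference.shape)

shock_ratio = rmse(fc_uncoupled, reference) / rmse(fc_coupled, reference)
print("day-1 RMSE ratio (uncoupled/coupled):", round(float(shock_ratio[0]), 2))
```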
Abstract:
Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3D-Var and 4D-Var, and various Kalman filter approaches). Numerical examples considering a high-gain observer confirm the theory.
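For readers unfamiliar with the class of methods the abstract singles out, here is a minimal sketch of data assimilation with a linear error feedback, implemented as simple nudging of the Lorenz-63 system towards noisy observations of one component; the toy model, gain, step size and noise level are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def nudge(obs_x_series, x0, dt=0.01, gain=5.0):
    """Assimilation by linear error feedback: the model state is pulled towards
    the observed x-component with a fixed gain (a simple nudging scheme)."""
    state = np.array(x0, dtype=float)
    trajectory = []
    for obs in obs_x_series:
        innovation = obs - state[0]           # observation-minus-model error
        state = state + dt * lorenz63(state)  # forward model step (Euler)
        state[0] += dt * gain * innovation    # linear feedback on observed component
        trajectory.append(state.copy())
    return np.array(trajectory)

# Generate a "truth" run and noisy observations of its x-component.
dt, n = 0.01, 4000
truth = np.empty((n, 3))
truth[0] = [1.0, 1.0, 1.0]
for i in range(1, n):
    truth[i] = truth[i - 1] + dt * lorenz63(truth[i - 1])
obs_x = truth[:, 0] + np.random.default_rng(2).normal(scale=0.5, size=n)

analysis = nudge(obs_x, x0=[-5.0, -5.0, 20.0])
print("final-state absolute error:", np.abs(analysis[-1] - truth[-1]))
```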
Abstract:
Accurate knowledge of the location and magnitude of ocean heat content (OHC) variability and change is essential for understanding the processes that govern decadal variations in surface temperature, quantifying changes in the planetary energy budget, and developing constraints on the transient climate response to external forcings. We present an overview of the temporal and spatial characteristics of OHC variability and change as represented by an ensemble of dynamical and statistical ocean reanalyses (ORAs). Spatial maps of the 0–300 m layer show large regions of the Pacific and Indian Oceans where the interannual variability of the ensemble mean exceeds ensemble spread, indicating that OHC variations are well-constrained by the available observations over the period 1993–2009. At deeper levels, the ORAs are less well-constrained by observations, with the largest differences across the ensemble mostly associated with areas of high eddy kinetic energy, such as the Southern Ocean and boundary current regions. Spatial patterns of OHC change for the period 1997–2009 show good agreement in the upper 300 m and are characterized by a strong dipole pattern in the Pacific Ocean. There is less agreement in the patterns of change at deeper levels, potentially linked to differences in the representation of ocean dynamics, such as water mass formation processes. However, the Atlantic and Southern Oceans are regions in which many ORAs show widespread warming below 700 m over the period 1997–2009. Annual time series of global and hemispheric OHC change for 0–700 m show the largest spread for the data-sparse Southern Hemisphere, and a number of ORAs seem to be subject to large initialization ‘shock’ over the first few years. In agreement with previous studies, a number of ORAs exhibit enhanced ocean heat uptake below 300 m and 700 m during the mid-1990s or early 2000s. The ORA ensemble mean (±1 standard deviation) of rolling 5-year trends in full-depth OHC shows a relatively steady heat uptake of approximately 0.9 ± 0.8 W m⁻² (expressed relative to Earth’s surface area) between 1995 and 2002, which reduces to about 0.2 ± 0.6 W m⁻² between 2004 and 2006, in qualitative agreement with recent analysis of Earth’s energy imbalance. There is a marked reduction in the ensemble spread of OHC trends below 300 m as the Argo profiling float observations become available in the early 2000s. In general, we suggest that ORAs should be treated with caution when employed to understand past ocean warming trends—especially when considering the deeper ocean where there is little in the way of observational constraints. The current work emphasizes the need to better observe the deep ocean, both for providing observational constraints for future ocean state estimation efforts and also to develop improved models and data assimilation methods.
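The rolling-trend diagnostic mentioned above can be illustrated with a short sketch: least-squares 5-year trends computed for each member of a synthetic ensemble of annual global OHC series, summarised as ensemble mean ± one standard deviation and converted to W m⁻² relative to Earth's surface area. The series, ensemble size and seed are placeholders, not the ORA data.

```python
import numpy as np

def rolling_trends(series, years, window=5):
    """Least-squares linear trend in each rolling window of `window` years."""
    trends = []
    for i in range(len(series) - window + 1):
        slope, _ = np.polyfit(years[i:i + window], series[i:i + window], deg=1)
        trends.append(slope)
    return np.array(trends)

years = np.arange(1993, 2010)
rng = np.random.default_rng(3)
# Synthetic ensemble of annual global OHC anomalies (in joules) from 8 "reanalyses".
ensemble = [1e22 * (0.05 * (years - years[0]) + rng.normal(scale=0.05, size=years.size))
            for _ in range(8)]

all_trends = np.array([rolling_trends(m, years) for m in ensemble])   # J per year
# Convert heat-uptake trends to W m^-2 relative to Earth's surface area.
earth_area = 5.1e14            # m^2
seconds_per_year = 3.15576e7
uptake = all_trends / (earth_area * seconds_per_year)

mean, spread = uptake.mean(axis=0), uptake.std(axis=0)
centre_years = years[2:-2]     # centre year of each 5-year window
for y, m, s in zip(centre_years, mean, spread):
    print(f"{y}: {m:.2f} ± {s:.2f} W m^-2")
```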
Abstract:
TIGGE was a major component of the THORPEX (The Observing System Research and Predictability Experiment) research program, whose aim is to accelerate improvements in forecasting high-impact weather. By providing ensemble prediction data from leading operational forecast centers, TIGGE has enhanced collaboration between the research and operational meteorological communities and enabled research studies on a wide range of topics. The paper covers the objective evaluation of the TIGGE data. For a range of forecast parameters, it is shown to be beneficial to combine ensembles from several data providers in a Multi-model Grand Ensemble. Alternative methods to correct systematic errors, including the use of reforecast data, are also discussed. TIGGE data have been used for a range of research studies on predictability and dynamical processes. Tropical cyclones are the most destructive weather systems in the world, and are a focus of multi-model ensemble research. Their extra-tropical transition also has a major impact on the skill of mid-latitude forecasts. We also review how TIGGE has added to our understanding of the dynamics of extra-tropical cyclones and storm tracks. Although TIGGE is a research project, it has proved invaluable for the development of products for future operational forecasting. Examples include the forecasting of tropical cyclone tracks, heavy rainfall, strong winds, and flood prediction through coupling hydrological models to ensembles. Finally, the paper considers the legacy of TIGGE. We discuss the priorities and key issues in predictability and ensemble forecasting, including the new opportunities of convective-scale ensembles, links with ensemble data assimilation methods, and extension of the range of useful forecast skill.
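The following is a minimal sketch of the two ideas highlighted above, pooling members from several centres into a multi-model grand ensemble and removing each model's mean bias estimated from reforecasts; the array shapes, centre names and values are illustrative and do not reflect the TIGGE archive format.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins: per-centre ensemble forecasts of one field at one lead
# time, shape (members, lat, lon), plus each centre's reforecasts and the
# verifying analyses used to estimate a mean bias.
centres = {
    "centre_a": rng.normal(loc=1.0, size=(20, 10, 10)),
    "centre_b": rng.normal(loc=-0.5, size=(50, 10, 10)),
    "centre_c": rng.normal(loc=0.2, size=(30, 10, 10)),
}
reforecasts = {name: rng.normal(loc=bias, size=(100, 10, 10))
               for name, bias in [("centre_a", 1.0), ("centre_b", -0.5), ("centre_c", 0.2)]}
reforecast_truth = rng.normal(size=(100, 10, 10))

grand_ensemble = []
for name, members in centres.items():
    # Systematic-error correction: subtract the model's mean reforecast bias.
    bias = (reforecasts[name] - reforecast_truth).mean(axis=0)
    grand_ensemble.append(members - bias)

grand_ensemble = np.concatenate(grand_ensemble, axis=0)   # pool all members
print("grand ensemble size:", grand_ensemble.shape[0])
print("domain-average grand ensemble mean:", round(float(grand_ensemble.mean()), 3))
```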
Abstract:
Matrix-assisted laser desorption/ionisation (MALDI) coupled with time-of-flight (TOF) mass spectrometry (MS) is a powerful tool for the analysis of biological samples, and nanoflow high-performance liquid chromatography (nanoHPLC) is a useful separation technique for the analysis of complex proteomics samples. The off-line combination of MALDI and nanoHPLC has been extensively investigated and straightforward techniques have been developed, focussing particularly on automated MALDI sample preparation that yields sensitive and reproducible spectra. Normally, conventional solid MALDI matrices such as α-cyano-4-hydroxycinnamic acid (CHCA) are used for sample preparation. However, they have limited usefulness in quantitative measurements and automated data acquisition because of the formation of heterogeneous crystals, resulting in highly variable ion yields and desorption/ionization characteristics. Glycerol-based liquid support matrices (LSM) have been proposed as an alternative to the traditional solid matrices as they provide increased shot-to-shot reproducibility, leading to prolonged and stable ion signals and therefore better results. This chapter focuses on the integration of liquid LSM MALDI matrices into the LC-MALDI MS/MS approach for identifying complex and large proteomes. The interface between LC and MALDI consists of a robotic spotter, which fractionates the eluent from the LC column into nanoliter volumes and simultaneously co-spots the liquid matrix with the eluent fractions onto a MALDI target plate via sheath flow. The efficiency of this method is demonstrated through the analysis of trypsin digests of both bovine serum albumin (BSA) and Lactobacillus plantarum WCFS1 proteins.
Abstract:
The dynamical processes that lead to open cluster disruption cause the cluster's mass to decrease. To investigate such processes from the observational point of view, it is important to identify open cluster remnants (OCRs), which are intrinsically poorly populated. Due to their nature, distinguishing them from field star fluctuations is still an unresolved issue. In this work, we developed a statistical diagnostic tool to distinguish poorly populated star concentrations from background field fluctuations. We use 2MASS photometry to explore one of the conditions required for a stellar group to be a physical group: to produce distinct sequences in a colour-magnitude diagram (CMD). We use automated tools to (i) derive the limiting radius; (ii) decontaminate the field and assign membership probabilities; (iii) fit isochrones; and (iv) compare object and field CMDs, considering the isochrone solution, in order to verify the similarity. If the object cannot be statistically considered as a field fluctuation, we derive its probable age, distance modulus, reddening and uncertainties in a self-consistent way. As a test, we apply the tool to open clusters and comparison fields. Finally, we study the OCR candidates DoDz 6, NGC 272, ESO 435 SC48 and ESO 325 SC15. The tool is optimized to treat these low-statistics objects and to separate the best OCR candidates for studies on kinematics and chemical composition. The study of the possible OCRs will certainly provide a deep understanding of OCR properties and constraints for theoretical models, including insights into the evolution of open clusters and dissolution rates.
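One common way to implement the field-decontamination step described above is a grid-based comparison of object and offset-field star counts in colour-magnitude cells; the sketch below follows that general idea with synthetic 2MASS-like photometry and is not necessarily the authors' exact algorithm, so the binning, area ratio and values are assumptions.

```python
import numpy as np

def cmd_decontaminate(obj_colour, obj_mag, field_colour, field_mag,
                      area_ratio, bins=(10, 10)):
    """Grid-based CMD decontamination: compare star counts per colour-magnitude
    cell in the object region with offset-field counts scaled by the area
    ratio, and turn the excess into a per-cell membership probability."""
    c_edges = np.linspace(min(obj_colour.min(), field_colour.min()),
                          max(obj_colour.max(), field_colour.max()), bins[0] + 1)
    m_edges = np.linspace(min(obj_mag.min(), field_mag.min()),
                          max(obj_mag.max(), field_mag.max()), bins[1] + 1)
    n_obj, _, _ = np.histogram2d(obj_colour, obj_mag, bins=(c_edges, m_edges))
    n_fld, _, _ = np.histogram2d(field_colour, field_mag, bins=(c_edges, m_edges))
    excess = n_obj - area_ratio * n_fld
    prob_cell = np.clip(excess / np.maximum(n_obj, 1), 0.0, 1.0)
    # Assign each object star the probability of its CMD cell.
    ci = np.clip(np.digitize(obj_colour, c_edges) - 1, 0, bins[0] - 1)
    mi = np.clip(np.digitize(obj_mag, m_edges) - 1, 0, bins[1] - 1)
    return prob_cell[ci, mi]

# Illustrative use with synthetic colour (e.g. J-Ks) and magnitude (e.g. J) data.
rng = np.random.default_rng(5)
field_c, field_m = rng.uniform(0, 1.5, 2000), rng.uniform(8, 16, 2000)
clus_c = np.concatenate([rng.normal(0.6, 0.05, 80), rng.uniform(0, 1.5, 120)])
clus_m = np.concatenate([rng.uniform(10, 15, 80), rng.uniform(8, 16, 120)])
probs = cmd_decontaminate(clus_c, clus_m, field_c, field_m, area_ratio=0.1)
print("mean membership probability:", round(float(probs.mean()), 2))
```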
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depends on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may influence the follow-up of the components in time, which consequently might contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, exhaustively varying the quantities associated with the method. Our results have shown that even in the most challenging tests, the cross-entropy method was able to find the correct parameters within a 1 per cent level. Even for a non-precessing jet, our optimization method successfully pointed out the lack of precession.
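For readers unfamiliar with the optimisation technique, here is a generic sketch of the cross-entropy method for continuous problems: sample candidate parameter vectors from a Gaussian, keep an elite fraction with the lowest cost, and refit the Gaussian to the elite set. It is applied to a toy sinusoidal fit standing in for the precession model; none of the settings, bounds or data are taken from the paper.

```python
import numpy as np

def cross_entropy_minimise(cost, lower, upper, n_samples=200, n_elite=20,
                           n_iter=60, smoothing=0.7, seed=0):
    """Generic cross-entropy method for continuous optimisation."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    mean = 0.5 * (lower + upper)
    std = 0.5 * (upper - lower)
    for _ in range(n_iter):
        samples = rng.normal(mean, std, size=(n_samples, mean.size))
        samples = np.clip(samples, lower, upper)
        elite = samples[np.argsort([cost(s) for s in samples])[:n_elite]]
        # Smoothed update of the sampling distribution from the elite set.
        mean = smoothing * elite.mean(axis=0) + (1 - smoothing) * mean
        std = smoothing * elite.std(axis=0) + (1 - smoothing) * std
    return mean

# Toy stand-in for the precession fit: recover the parameters of a sinusoidal
# position-angle curve from noisy "jet component" data.
rng = np.random.default_rng(6)
t = np.linspace(0.0, 10.0, 50)
true_params = np.array([30.0, 0.8, 1.2])          # amplitude, frequency, phase
data = true_params[0] * np.sin(true_params[1] * t + true_params[2])
data += rng.normal(scale=1.0, size=t.size)

def cost(p):
    model = p[0] * np.sin(p[1] * t + p[2])
    return ((model - data) ** 2).sum()

best = cross_entropy_minimise(cost, lower=[1.0, 0.1, 0.0], upper=[100.0, 3.0, 2 * np.pi])
print("recovered parameters:", best.round(2))
```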
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared difference between the model and observed images. The model image is constructed by summing N(s) elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
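The sketch below illustrates the two ingredients named in the abstract: a model image built as a sum of elliptical Gaussian components and a squared-difference performance function. The exact parameterisation (position, peak intensity, major-axis width, eccentricity, orientation) is an assumption, and the optimisation step itself is omitted; this is not the authors' code.

```python
import numpy as np

def elliptical_gaussian(x, y, x0, y0, peak, major, eccentricity, theta):
    """One elliptical Gaussian component: peak position (x0, y0), peak
    intensity, major-axis width, eccentricity and major-axis orientation."""
    minor = major * np.sqrt(1.0 - eccentricity ** 2)
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return peak * np.exp(-0.5 * ((xr / major) ** 2 + (yr / minor) ** 2))

def model_image(params, shape):
    """Sum of N components; `params` is a list of per-component tuples."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return sum(elliptical_gaussian(x, y, *p) for p in params)

def performance(params, observed):
    """Squared-difference performance function between model and observed image."""
    return ((model_image(params, observed.shape) - observed) ** 2).sum()

# Synthetic three-component "jet" and the cost of a slightly wrong trial model.
truth = [(32, 40, 1.0, 4.0, 0.5, 0.3), (48, 44, 0.6, 3.0, 0.7, 0.5),
         (60, 50, 0.3, 2.5, 0.6, 0.8)]
observed = model_image(truth, (96, 96))
observed += np.random.default_rng(7).normal(scale=0.01, size=observed.shape)

trial = [(33, 39, 0.9, 4.0, 0.5, 0.3), (48, 44, 0.6, 3.0, 0.7, 0.5),
         (60, 50, 0.3, 2.5, 0.6, 0.8)]
print("performance of trial model:", round(float(performance(trial, observed)), 2))
```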
Abstract:
This paper presents an experimental characterization of the behavior of an analog version of Chua's circuit. The electronic circuit signals are captured using a data acquisition board (DAQ) and processed in the LabVIEW environment. The following aspects of the time series analysis are examined: time waveforms, phase portraits, frequency spectra, Poincaré sections, and the bifurcation diagram. The circuit behavior is experimentally mapped as the parameters are varied, identifying equilibrium points, periodic and chaotic attractors, and bifurcations. These analysis techniques are performed in real time and can be applied to characterize, with precision, several nonlinear systems.
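The kind of phase-portrait and Poincaré-section analysis described above can also be reproduced numerically; the sketch below integrates the dimensionless Chua equations with standard textbook parameter values (not the paper's measured circuit values) and extracts phase-portrait data and a simple Poincaré section.

```python
import numpy as np

def chua_rhs(state, alpha=15.6, beta=28.0, m0=-1.143, m1=-0.714):
    """Dimensionless Chua equations with the piecewise-linear diode nonlinearity."""
    x, y, z = state
    fx = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))
    return np.array([alpha * (y - x - fx), x - y + z, -beta * y])

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = chua_rhs(state)
    k2 = chua_rhs(state + 0.5 * dt * k1)
    k3 = chua_rhs(state + 0.5 * dt * k2)
    k4 = chua_rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n = 0.005, 60_000
traj = np.empty((n, 3))
traj[0] = [0.1, 0.0, 0.0]
for i in range(1, n):
    traj[i] = rk4_step(traj[i - 1], dt)

# Phase-portrait data (x vs z shows the double-scroll attractor) and a simple
# Poincaré section at y = 0 with positive crossing direction.
x, y, z = traj[10_000:].T                      # drop the transient
crossings = np.where((y[:-1] < 0) & (y[1:] >= 0))[0]
print("points on the Poincaré section:", crossings.size)
print("x range on the attractor:", round(float(x.min()), 2), "to", round(float(x.max()), 2))
```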
Abstract:
The analysis of histological sections has long been a valuable tool in pathological studies. The interpretation of tissue conditions, however, relies directly on visual evaluation of tissue slides, which may be difficult to interpret because of poor contrast or poor color differentiation. The Chromatic Contrast Visualization System (CCV) combines an optical microscope with electronically controlled light-emitting diodes (LEDs) in order to generate adjustable intensities of RGB channels for sample illumination. While most image enhancement techniques rely on software post-processing of an image acquired under standard illumination conditions, CCV produces real-time variations in the color composition of the light source itself. The possibility of covering the entire RGB chromatic range, combined with the optical properties of the different tissues, allows for a substantial enhancement in image details. Traditional image acquisition methods do not exploit these visual enhancements, which results in poorer visual distinction among tissue structures. Photodynamic therapy (PDT) procedures are of increasing interest in the treatment of several forms of cancer. This study uses histological slides of rat liver samples in which necrosis was induced by exposure to PDT. Results show that the visualization of tissue structures could be improved by changing the colors and intensities of the microscope light source. PDT-necrosed tissue samples are better differentiated when illuminated with different color wavelengths, leading to an improved differentiation of cells in the necrosis area. Due to the potential benefits it can bring to interpretation and diagnosis, further research in this field could make CCV an attractive technique for medical applications.