955 results for signal-to-noise-ratio (SNR)


Relevance:

100.00%

Publisher:

Abstract:

Sixteen monthly air–sea heat flux products from global ocean/coupled reanalyses are compared over 1993–2009 as part of the Ocean Reanalysis Intercomparison Project (ORA-IP). Objectives include assessing the global heat closure, the consistency of temporal variability, comparison with other flux products, and documenting errors against in situ flux measurements at a number of OceanSITES moorings. The ensemble of 16 ORA-IP flux estimates has a global positive bias over 1993–2009 of 4.2 ± 1.1 W m−2. Residual heat gain (i.e., surface flux + assimilation increments) is reduced to a small positive imbalance (typically +1–2 W m−2). This compensation between surface fluxes and assimilation increments is concentrated in the upper 100 m. Implied steady meridional heat transports also improve when assimilation sources are included, except near the equator. The ensemble spread in surface heat fluxes is dominated by the turbulent fluxes (>40 W m−2 over the western boundary currents). The mean seasonal cycle is highly consistent, with variability between products mostly <10 W m−2. The interannual variability has a consistent signal-to-noise ratio (~2) throughout the equatorial Pacific, reflecting ENSO variability. Comparisons at tropical buoy sites (10°S–15°N) over 2007–2009 showed too little ocean heat gain (i.e., flux into the ocean) in ORA-IP (up to 1/3 smaller than buoy measurements), primarily due to latent heat flux errors in ORA-IP. Comparisons with the Stratus buoy (20°S, 85°W) over a longer period, 2001–2009, also show that the ORA-IP ensemble has a 16 W m−2 smaller net heat gain, nearly all of which is due to too much latent cooling caused by differences in the surface winds imposed in ORA-IP.
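
As a rough illustration of the ensemble signal-to-noise diagnostic quoted above, the sketch below takes the interannual variability of the ensemble mean as the signal and the mean inter-product spread as the noise. This is one common convention; the paper's exact definition, array shapes and values here are assumptions, not ORA-IP data.

```python
import numpy as np

def ensemble_snr(flux):
    """flux: array (n_products, n_years) of yearly-mean heat flux anomalies
    at one location. Signal = interannual std of the ensemble mean;
    noise = time-mean ensemble spread."""
    signal = flux.mean(axis=0).std(ddof=1)
    noise = flux.std(axis=0, ddof=1).mean()
    return signal / noise

# Synthetic example: 16 products sharing an ENSO-like signal plus product noise.
rng = np.random.default_rng(0)
common = 28 * np.sin(np.linspace(0, 4 * np.pi, 17))    # shared interannual signal
products = common + rng.normal(0, 10, size=(16, 17))   # 16 products, 17 years
print(f"SNR ~ {ensemble_snr(products):.1f}")           # ~2 by construction
```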

Relevance:

100.00%

Publisher:

Abstract:

A set of four eddy-permitting global ocean reanalyses produced in the framework of the MyOcean project has been compared over the altimetry period 1993–2011. The main differences among the reanalyses used here come from the data assimilation scheme implemented to control the ocean state by inserting reprocessed observations of sea surface temperature (SST), in situ temperature and salinity profiles, sea level anomaly and sea-ice concentration. A first objective of this work is to assess the interannual variability and trends for a series of parameters usually considered in the community as essential ocean variables: SST, sea surface salinity, temperature and salinity averaged over meaningful layers of the water column, sea level, transports across pre-defined sections, and sea ice parameters. The eddy-permitting nature of the global reanalyses also allows estimation of eddy kinetic energy. The results show that, in general, there is good consistency between the different reanalyses. An intercomparison against experiments without data assimilation was carried out during the MyOcean project, and we conclude that data assimilation is crucial for correctly simulating some quantities, such as regional trends of sea level and the eddy kinetic energy. A second objective is to show that the ensemble mean of reanalyses can be evaluated as a single system regarding its reliability in reproducing climate signals, where both variability and uncertainties are assessed through the ensemble spread and signal-to-noise ratio. The main advantage of having access to several reanalyses differing in the way data assimilation is performed is that it becomes possible to assess part of the total uncertainty. Given that we use very similar ocean models and atmospheric forcing, we conclude that the spread of the ensemble of reanalyses mainly reflects our ability to gauge uncertainty in the assimilation methods. This uncertainty varies considerably from one ocean parameter to another, especially in global indices. However, despite several caveats in the design of the multi-system ensemble, the main conclusion of this study is that the eddy-permitting multi-system ensemble approach has become mature, and our results provide a first step towards a systematic comparison of eddy-permitting global ocean reanalyses aimed at providing robust conclusions on the recent evolution of the oceanic state.
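
Since the abstract leans on the eddy-kinetic-energy diagnostic, here is a minimal sketch of its standard definition, EKE = 0.5 (u'^2 + v'^2), with primes denoting deviations from the local time mean. Array names and shapes are illustrative, not from the MyOcean systems.

```python
import numpy as np

def eddy_kinetic_energy(u, v):
    """u, v: arrays (n_time, ny, nx) of velocity components in m/s.
    Returns the time-mean EKE field in m^2/s^2."""
    u_prime = u - u.mean(axis=0, keepdims=True)   # deviations from the time mean
    v_prime = v - v.mean(axis=0, keepdims=True)
    return 0.5 * (u_prime**2 + v_prime**2).mean(axis=0)
```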

Relevance:

100.00%

Publisher:

Abstract:

Many institutions worldwide have developed ocean reanalysis systems (ORAs) utilizing a variety of ocean models and assimilation techniques. However, the quality of the salinity reanalyses arising from the various ORAs has not yet been comprehensively assessed. In this study, we assess the upper-ocean salinity content (depth-averaged over 0–700 m) from 14 ORAs and 3 objective ocean analysis systems (OOAs) as part of the Ocean Reanalyses Intercomparison Project. Our results show that the best agreement between salinity estimates from different ORAs is obtained in the tropical Pacific, likely due to the relatively abundant atmospheric and oceanic observations in this region. The largest disagreement in the salinity reanalyses is in the Southern Ocean along the Antarctic Circumpolar Current, a consequence of the sparseness of both atmospheric and oceanic observations there. The West Pacific warm pool is the largest region where the signal-to-noise ratio of reanalysed salinity anomalies is >1. The current salinity reanalyses in the tropical Pacific may therefore be more reliable than those in the Southern Ocean and along the western boundary currents. Moreover, we find that assimilating salinity in ocean regions with relatively strong fronts is still a common problem in most ORAs. The impact of Argo data on the salinity reanalyses is visible, especially within the upper 500 m, where the interannual variability is large. An increasing trend in global-averaged salinity anomalies is found only within the top 0–300 m layer, with quite large diversity among the ORAs. Below 300 m, the global-averaged salinity anomalies from most ORAs switch from a slight upward trend before 2002 to a downward trend afterwards. This rapid switch in the trend is most likely an artefact of the dramatic change in the observing system brought about by the implementation of Argo.

Relevance:

100.00%

Publisher:

Abstract:

Uncertainty in ocean analysis methods and deficiencies in the observing system are major obstacles to the reliable reconstruction of the past ocean climate. The variety of existing ocean reanalyses is exploited in a multi-reanalysis ensemble to improve the ocean state estimation and to gauge uncertainty levels. The ensemble-based analysis of the signal-to-noise ratio allows the identification of ocean characteristics for which the estimation is robust (such as tropical mixed layer depth and upper-ocean heat content) and those where large uncertainty exists (deep ocean, Southern Ocean, sea-ice thickness, salinity), providing guidance for future enhancement of the observing and data assimilation systems.

Relevance:

100.00%

Publisher:

Abstract:

We estimate the conditions for detectability of two planets in a 2/1 mean-motion resonance from radial velocity data, as a function of their masses, the number of observations and the signal-to-noise ratio. Even for a data set of the order of 100 observations and standard deviations of the order of a few meters per second, we find that Jovian-size resonant planets are difficult to detect if the masses of the planets differ by a factor larger than ~4. This is consistent with the present population of real exosystems in the 2/1 commensurability, most of which have resonant pairs with similar minimum masses, and could indicate that many other resonant systems exist but are currently beyond the detectability limit. Furthermore, we analyze the error distribution in the masses and orbital elements of orbital fits from synthetic data sets for resonant planets in the 2/1 commensurability. For various mass ratios and numbers of data points, we find that the eccentricity of the outer planet is systematically overestimated, although the inner planet's eccentricity suffers a much smaller effect. If the initial conditions correspond to small-amplitude oscillations around stable apsidal corotation resonances, the amplitudes estimated from the orbital fits are biased toward larger values, in accordance with results found in real resonant extrasolar systems.
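
For a sense of the scales involved, a minimal sketch follows, using the standard radial-velocity semi-amplitude formula and a crude amplitude signal-to-noise figure for a sinusoid sampled N times. The planet parameters and the SNR figure are illustrative only; the paper's actual criterion is based on orbital fits, not this simple test.

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg
DAY = 86400.0      # s

def semi_amplitude(m_planet_mjup, period_days, m_star_msun=1.0, e=0.0):
    """Standard RV semi-amplitude K in m/s (m_planet taken as m sin i)."""
    mp = m_planet_mjup * M_JUP
    ms = m_star_msun * M_SUN
    P = period_days * DAY
    return (2 * np.pi * G / P)**(1/3) * mp / ((ms + mp)**(2/3) * np.sqrt(1 - e**2))

# Two planets near the 2/1 commensurability with a 4:1 mass ratio:
sigma, n_obs = 3.0, 100                       # noise (m/s) and number of points
for k in (semi_amplitude(0.5, 300.0),         # ~15 m/s
          semi_amplitude(2.0, 600.0)):        # ~48 m/s
    amp_snr = k * np.sqrt(n_obs / 2) / sigma  # amplitude SNR of a sinusoid
    print(f"K = {k:5.1f} m/s, amplitude SNR ~ {amp_snr:.0f}")
```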

Relevance:

100.00%

Publisher:

Abstract:

We obtained high signal-to-noise ratio long-slit spectra of the galaxy M32 with the Gemini Multi-Object Spectrograph at the Gemini-North telescope. We analysed the integrated spectra by means of full spectral fitting in order to extract the mixture of stellar populations that best represents its composite nature. Three different galactic radii were analysed, from the nuclear region out to 2 arcmin from the centre. This allows us to compare, for the first time, the results of integrated-light spectroscopy with those of resolved colour-magnitude diagrams from the literature. As a main result, we propose that an ancient and an intermediate-age population co-exist in M32, and that the balance between these two populations changes between the nucleus and outside one effective radius (1 r_eff), in the sense that the contribution from the intermediate-age population is larger in the nuclear region. We also retrieve a weaker signal of a young population at all radii, whose origin is unclear and may be contamination from horizontal-branch stars, such as those identified by Brown et al. in the nuclear region. We compare our metallicity distribution function for a region 1 to 2 arcmin from the centre with the one obtained from photometric data by Grillmair et al. Both distributions are broad, but our spectroscopically derived distribution has a significant component with [Z/Z⊙] ≤ -1, which is not found by Grillmair et al.
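
A toy version of the full-spectral-fitting step described above: the observed spectrum is modelled as a non-negative combination of simple-stellar-population (SSP) templates. The template grid and variable names are hypothetical, and real analyses also fit kinematics and dust; this only sketches the population-mixture idea.

```python
import numpy as np
from scipy.optimize import nnls

def fit_populations(observed, templates):
    """observed: flux array (n_pix,); templates: (n_ssp, n_pix) SSP spectra
    on the same wavelength grid. Returns fractional light contributions."""
    weights, _ = nnls(templates.T, observed)   # least squares with weights >= 0
    return weights / weights.sum()

# e.g., with rows of `templates` for old, intermediate-age and young SSPs:
# fractions = fit_populations(obs_flux, templates)
```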

Relevance:

100.00%

Publisher:

Abstract:

In this work, the results of a spectroscopic study of the southern-field narrow-line Be star HD 171054 are presented. High-dispersion, high signal-to-noise ratio spectra allowed the estimation of the fundamental photospheric parameters, such as the projected rotational velocity, effective temperature and surface gravity, from non-LTE stellar atmosphere models. From these parameters and the microturbulence, the abundances of He, C, N, O, Mg, Al and Si are estimated for this object. The results show that C is depleted whereas N is overabundant compared with the Sun and with OB stars in the solar vicinity. Oxygen and helium are close to the solar values. Magnesium is down by 0.43 dex, and aluminium and silicon are overabundant.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we show that the steady-state free precession sequence can be used to acquire ¹³C high-resolution nuclear magnetic resonance spectra for qualitative analysis. The analysis of a brucine sample using this sequence, with a 60° flip angle and a time interval between pulses of 300 ms (acquisition time, 299.7 ms; recycle delay, 300 ms), resulted in a spectrum with a twofold enhancement in signal-to-noise ratio compared to the standard ¹³C sequence. The gain was even better with a much shorter time interval between pulses (100 ms): a more than fivefold enhancement in signal-to-noise ratio, equivalent to a more than 20-fold reduction in total data recording time. However, this short interval between pulses produces a spectrum with severe phase and truncation anomalies. We demonstrate that these anomalies can be minimized by applying an appropriate apodization function and plotting the spectrum in magnitude mode.
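
The post-processing step mentioned at the end, apodization followed by a magnitude-mode spectrum, can be sketched as below. The exponential window choice and the parameter names are illustrative assumptions, not the paper's exact processing.

```python
import numpy as np

def magnitude_spectrum(fid, dwell, line_broadening=5.0):
    """fid: complex time-domain signal; dwell: seconds per point;
    line_broadening: exponential apodization width in Hz."""
    t = np.arange(fid.size) * dwell
    apodized = fid * np.exp(-np.pi * line_broadening * t)  # smooth the truncated tail
    spectrum = np.fft.fftshift(np.fft.fft(apodized))
    freqs = np.fft.fftshift(np.fft.fftfreq(fid.size, d=dwell))
    return freqs, np.abs(spectrum)  # magnitude mode discards phase anomalies
```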

Relevance:

100.00%

Publisher:

Abstract:

The rapid development of data transfer through the internet has made it easier to send data to its destination quickly and accurately. There are many transmission media for delivering data, such as e-mail; at the same time, valuable information can easily be modified and misused through hacking. To transfer data securely, without modification, there are approaches such as cryptography and steganography. This paper deals with image steganography together with the related security issues, and gives a general overview of cryptography, steganography and digital watermarking approaches. The problem of copyright violation of multimedia data has grown with the enormous growth of computer networks, which provide fast and error-free transmission of unauthorized, possibly manipulated, copies of multimedia information. To be effective for copyright protection, a digital watermark must be robust: difficult to remove from the object in which it is embedded, despite a variety of possible attacks. To send the message safely and securely, we use invisible watermarking to embed the message with the LSB (Least Significant Bit) steganographic technique. The standard LSB technique embeds the message in every pixel; our contribution in the proposed watermarking is to embed the message only along the image edges. Even if an attacker knows that the system uses the LSB technique, the correct message cannot be recovered. To make the system more robust and secure, we add a cryptographic algorithm, the Vigenère square, so the message is transmitted as ciphertext, an added advantage of the proposed system. The standard Vigenère square algorithm works with either lower-case or upper-case letters only; the proposed cryptographic algorithm extends the Vigenère square with numbers, so the key can combine characters and digits. By applying these modifications to the existing algorithm and combining the cryptographic and steganographic methods, we develop a secure and strong watermarking method. The performance of this watermarking scheme has been analysed by evaluating the robustness of the algorithm with PSNR (Peak Signal-to-Noise Ratio) and MSE (Mean Square Error) against image quality for large amounts of data. The proposed encryption achieves a high PSNR of 89 dB with a small MSE of 0.0017, indicating that the proposed watermarking system is secure and robust for hiding information in any digital system, since it combines the properties of both steganography and cryptography.
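
A minimal sketch of the Vigenère variant described above, with the square extended from letters to an alphanumeric alphabet so keys may mix characters and numbers. The exact alphabet ordering used in the paper is an assumption here.

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"  # letters extended with digits

def vigenere(text, key, decrypt=False):
    """Vigenere cipher over a 36-symbol alphanumeric alphabet."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        p = ALPHABET.index(ch)                    # plaintext symbol index
        k = ALPHABET.index(key[i % len(key)])     # repeating key symbol index
        out.append(ALPHABET[(p + sign * k) % len(ALPHABET)])
    return "".join(out)

cipher = vigenere("MEET2021", "KEY9")             # key mixes letters and a digit
assert vigenere(cipher, "KEY9", decrypt=True) == "MEET2021"
```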

Relevance:

100.00%

Publisher:

Abstract:

Polymer matrix composites offer advantages for many applications due to their combination of properties, which includes low density, high specific strength and modulus of elasticity, and corrosion resistance. However, the application of non-destructive techniques using magnetic sensors to evaluate these materials is not possible, since the materials are non-magnetizable. Ferrites are materials with excellent magnetic properties, chemical stability and corrosion resistance. Owing to these properties, they are promising for the development of polymer composites with magnetic properties. In this work, glass fiber/epoxy circular plates were produced with 10 wt% of cobalt or barium ferrite particles. The cobalt ferrite was synthesized by the Pechini method. The commercial barium ferrite was subjected to a milling process to study the effect of particle size on the magnetic properties of the material. The ferrites were characterized by X-ray diffraction (XRD), field emission gun scanning electron microscopy (FEG-SEM) and vibrating sample magnetometry (VSM). Circular notches of 1, 5 and 10 mm diameter were introduced in the composite plates using a drill bit for non-destructive evaluation by the magnetic flux leakage (MFL) technique. The results indicated that the magnetic signals measured in plates with unmilled barium ferrite and with cobalt ferrite correlated well with the presence of notches. Milling for 12 h and 20 h did not improve the identification of the smallest notches (1 mm). However, the smaller particle size produced smoother magnetic curves, with fewer discontinuities and an improved signal-to-noise ratio. In summary, the results suggest that the proposed approach has great potential for the detection of damage in polymer composite structures.

Relevance:

100.00%

Publisher:

Abstract:

In the present work, we report the use of bacterial colonies to optimize the macroarray technique. The devised system is significantly cheaper than other methods available to detect large-scale differential gene expression. Recombinant Escherichia coli clones containing plasmid-encoded copies of 4,608 individual expressed sequence tags (ESTs) were robotically spotted onto nylon membranes, which were incubated for 6 and 12 h to allow the bacteria to grow and, consequently, amplify the cloned ESTs. The membranes were then hybridized with a probe specific to the beta-lactamase gene of the recombinant plasmid and subsequently phosphorimaged to quantify the microbial cells. Variance analysis demonstrated that the spot hybridization signal intensity was similar for 3,954 ESTs (85.8%) after 6 h of bacterial growth. Membranes spotted with bacterial colonies grown for 12 h had 4,017 ESTs (87.2%) with comparable signal intensity, but the signal-to-noise ratio was fivefold higher. Taken together, the results of this study indicate that it is possible to investigate large-scale gene expression using macroarrays based on bacterial colonies grown on membranes for 6 h.

Relevance:

100.00%

Publisher:

Abstract:

In this work, the implementation of the SOM (Self-Organizing Map) algorithm, or Kohonen neural network, in the form of hierarchical structures is presented and applied to image compression. The main objective of this approach is to develop a hierarchical SOM algorithm with a static structure, and another with a dynamic structure, to generate codebooks for image Vector Quantization (VQ), reducing processing time and obtaining a good image compression rate with minimal quality degradation relative to the original image. The two self-organizing neural networks developed here are denominated HSOM, for the static case, and DHSOM, for the dynamic case. In the former, the hierarchical structure is defined in advance; in the latter, the structure grows automatically according to heuristic rules that explore the training data without the use of external parameters. For this network, the heuristic rules determine the growth dynamics, the branch-pruning criteria, and the flexibility and size of the child maps. The LBG (Linde-Buzo-Gray) algorithm, or K-means, one of the most widely used algorithms for codebook design in Vector Quantization, was used together with Kohonen's algorithm in its basic (non-hierarchical) form as a reference against which to compare the performance of the algorithms proposed here. A performance analysis between the two hierarchical structures is also carried out in this work. The efficiency of the proposed processing is verified by the reduction in computational complexity compared to the traditional algorithms, as well as through quantitative analysis of the reconstructed images in terms of the peak signal-to-noise ratio (PSNR) and mean squared error (MSE).
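
For reference, the two quality metrics used above follow directly from their standard definitions for 8-bit images (MSE, then PSNR in dB with a peak value of 255):

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two images of the same shape."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(original, reconstructed)
    return np.inf if m == 0 else 10.0 * np.log10(max_value ** 2 / m)
```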

Relevance:

100.00%

Publisher:

Abstract:

One of the main goals of the CoRoT Natal Team is the determination of rotation periods for thousands of stars, a fundamental parameter for the study of stellar evolutionary histories. In order to estimate the rotation period of stars and to understand the associated uncertainties, resulting for example from discontinuities in the curves and/or a low signal-to-noise ratio, we have compared three different methods for light-curve treatment. These methods were applied to many light curves with different characteristics. First, a visual analysis was undertaken for each light curve, giving a general perspective on the different phenomena reflected in the curves. The results obtained by this method regarding the rotation period of the star, the presence of spots, or the nature of the star (binary system or other) were then compared with those obtained by two quantitative methods: the CLEANest method, based on the DCDFT (Date-Compensated Discrete Fourier Transform), and the wavelet method, based on the wavelet transform. Our results show that all three methods have similar levels of accuracy and can complement each other. Nevertheless, the wavelet method gives more information about the star through the wavelet map, which shows how the frequencies in the signal vary over time. Finally, we discuss the limitations of these methods, their efficiency in providing information about the star, and the development of tools to integrate the different methods into a single analysis.
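
DCDFT/CLEANest implementations are not part of standard libraries, so as a readily available stand-in for the Fourier-based step, here is a Lomb-Scargle periodogram sketch for unevenly sampled light curves; the function and variable names are illustrative.

```python
import numpy as np
from scipy.signal import lombscargle

def rotation_period(time, flux, periods):
    """time, flux: 1-D arrays (uneven sampling allowed); periods: trial
    periods in the same time unit. Returns the best-fitting period."""
    flux = flux - flux.mean()            # lombscargle expects zero-mean input
    omega = 2.0 * np.pi / periods        # angular frequencies to test
    power = lombscargle(time, flux, omega)
    return periods[np.argmax(power)]

# e.g.: best_p = rotation_period(t_days, flux, np.linspace(0.5, 50.0, 2000))
```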

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)