948 results for Frequency Modulated Signals, Parameter Estimation, Signal-to-Noise-Ratio, Simulations
Abstract:
There is a tremendous desire to attribute causes to weather and climate events that is often challenging from a physical standpoint. Headlines attributing an event solely to either human-induced climate change or natural variability can be misleading when both are invariably in play. The conventional attribution framework struggles with dynamically driven extremes because of the small signal-to-noise ratios and often uncertain nature of the forced changes. Here, we suggest that a different framing is desirable, which asks why such extremes unfold the way they do. Specifically, we suggest that it is more useful to regard the extreme circulation regime or weather event as being largely unaffected by climate change, and question whether known changes in the climate system's thermodynamic state affected the impact of the particular event. Some examples briefly illustrated include 'snowmaggedon' in February 2010, superstorm Sandy in October 2012 and supertyphoon Haiyan in November 2013, and, in more detail, the Boulder floods of September 2013, all of which were influenced by high sea surface temperatures that had a discernible human component.
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the very traditional Astronomical Image Processing System task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
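For readers who want to reproduce the basic machinery, the following minimal sketch (not the authors' code) builds a model image as a sum of elliptical Gaussian components and evaluates the squared-difference performance function that a cross-entropy optimizer would minimize; the component parameterization used here (peak position, peak intensity, semi-major axis, eccentricity, orientation angle) is an assumption for illustration.

```python
# Illustrative sketch (not the authors' code): build a model image as a sum of
# elliptical Gaussian components and evaluate the squared-difference performance
# function that a cross-entropy optimizer would minimize. Parameter layout is assumed.
import numpy as np

def elliptical_gaussian(x, y, x0, y0, peak, a, ecc, theta):
    """Elliptical Gaussian with semi-major axis a, eccentricity ecc,
    and major-axis orientation theta (radians)."""
    b = a * np.sqrt(1.0 - ecc**2)           # semi-minor axis
    ct, st = np.cos(theta), np.sin(theta)
    xr = (x - x0) * ct + (y - y0) * st      # rotate into the source frame
    yr = -(x - x0) * st + (y - y0) * ct
    return peak * np.exp(-0.5 * ((xr / a) ** 2 + (yr / b) ** 2))

def model_image(params, shape):
    """Sum N components; params is an (N, 6) array of (x0, y0, peak, a, ecc, theta)."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    img = np.zeros(shape)
    for x0, y0, peak, a, ecc, theta in params:
        img += elliptical_gaussian(x, y, x0, y0, peak, a, ecc, theta)
    return img

def performance(params, observed):
    """Squared-difference merit function to be minimized."""
    return np.sum((model_image(params, observed.shape) - observed) ** 2)

# Usage: a noisy two-component "observed" map and a candidate parameter set.
rng = np.random.default_rng(0)
true = np.array([[20.0, 25.0, 1.0, 4.0, 0.5, 0.3],
                 [40.0, 35.0, 0.6, 3.0, 0.7, 1.1]])
obs = model_image(true, (64, 64)) + 0.01 * rng.standard_normal((64, 64))
print(performance(true, obs))
```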
Abstract:
In this paper we present a novel approach for multispectral image contextual classification by combining iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach to combine two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, to regularize the solution in the presence of noisy data. Hence, the classification problem is stated according to a Maximum a Posteriori (MAP) framework. In order to approximate the MAP solution we apply several combinatorial optimization methods using multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time in comparison to Simulated Annealing, which is often unfeasible in many real image processing applications. Markov Random Field model parameters are estimated by the Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustments in the choice of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's Kappa coefficient), allowing a robust and accurate statistical analysis. The obtained results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology. (C) 2010 Elsevier B.V. All rights reserved.
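As an illustration of the MAP labeling step, here is a minimal sketch of one Iterated Conditional Modes (ICM) pass combining a class-conditional Gaussian likelihood with a Potts prior; a simple pixel-wise Gaussian stands in for the paper's GMRF observation model, and the class means, variances, and the regularization parameter beta are hypothetical inputs.

```python
# Minimal sketch (not the paper's implementation): one pass of Iterated Conditional
# Modes (ICM) for MAP labeling with a Gaussian class-conditional likelihood and a
# Potts prior. Class means/variances and beta are hypothetical inputs.
import numpy as np

def icm_pass(labels, image, means, variances, beta):
    """Update each pixel label to minimize the local posterior energy:
    0.5*log(var) + (x - mu)^2/(2*var) + beta * (# of disagreeing 4-neighbors)."""
    h, w = labels.shape
    new = labels.copy()
    for i in range(h):
        for j in range(w):
            best_k, best_e = new[i, j], np.inf
            for k in range(len(means)):
                e = 0.5 * np.log(variances[k]) + (image[i, j] - means[k]) ** 2 / (2 * variances[k])
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-neighborhood
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w and new[ni, nj] != k:
                        e += beta                                    # Potts penalty
                if e < best_e:
                    best_k, best_e = k, e
            new[i, j] = best_k
    return new

# Usage: two-class noisy image, initial labels from a non-contextual threshold.
rng = np.random.default_rng(1)
truth = np.zeros((32, 32), int); truth[:, 16:] = 1
img = np.where(truth == 1, 1.0, 0.0) + 0.5 * rng.standard_normal(truth.shape)
init = (img > 0.5).astype(int)
labels = icm_pass(init, img, means=[0.0, 1.0], variances=[0.25, 0.25], beta=1.5)
```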
Abstract:
Likelihood ratio tests can be substantially size distorted in small- and moderate-sized samples. In this paper, we apply Skovgaard's [Skovgaard, I.M., 2001. Likelihood asymptotics. Scandinavian Journal of Statistics 28, 3-32] adjusted likelihood ratio statistic to exponential family nonlinear models. We show that the adjustment term has a simple compact form that can be easily implemented from standard statistical software. The adjusted statistic is approximately distributed as chi-squared to a high degree of accuracy. It is applicable in wide generality since it allows both the parameter of interest and the nuisance parameter to be vector-valued. Unlike the modified profile likelihood ratio statistic obtained from Cox and Reid [Cox, D.R., Reid, N., 1987. Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society B 49, 1-39], the adjusted statistic proposed here does not require an orthogonal parameterization. Numerical comparison of likelihood-based tests of varying dispersion favors the test we propose and a Bartlett-corrected version of the modified profile likelihood ratio test recently obtained by Cysneiros and Ferrari [Cysneiros, A.H.M.A., Ferrari, S.L.P., 2006. An improved likelihood ratio test for varying dispersion in exponential family nonlinear models. Statistics and Probability Letters 76 (3), 255-265]. (C) 2008 Elsevier B.V. All rights reserved.
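The following hedged sketch shows how an adjusted likelihood ratio statistic of the Skovgaard type is used in practice: the unadjusted statistic is modified by an adjustment quantity (here a placeholder argument xi, whose compact form the paper derives) and then referred to the chi-squared distribution.

```python
# Hedged sketch: comparing the unadjusted likelihood ratio statistic with a
# Skovgaard-type adjusted version. The adjustment quantity `xi` is assumed to be
# already computed from the model (the paper derives its compact form); here it is
# just a placeholder input.
import numpy as np
from scipy.stats import chi2

def lr_test(loglik_full, loglik_restricted, df, xi=None):
    """Return (statistic, p-value). If xi is given, apply a Skovgaard-style
    w* = w * (1 - log(xi)/w)**2 adjustment before referring to chi-squared."""
    w = 2.0 * (loglik_full - loglik_restricted)
    if xi is not None and w > 0:
        w = w * (1.0 - np.log(xi) / w) ** 2
    return w, chi2.sf(w, df)

# Usage with illustrative numbers: maximized log-likelihoods under the full and
# restricted models, one restriction, and a hypothetical adjustment quantity.
print(lr_test(-120.3, -123.1, df=1, xi=1.08))
```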
Abstract:
This paper describes 96- and 384-microzone plates fabricated in paper as alternatives to conventional multi-well plates fabricated in molded polymers. Paper-based plates are functionally related to plastic well plates, but they offer new capabilities. For example, paper-microzone plates are thin (~180 µm), require small volumes of sample (5 µL per zone), and can be manufactured from inexpensive materials ($0.05 per plate). The paper-based plates are fabricated by patterning sheets of paper, using photolithography, into hydrophilic zones surrounded by hydrophobic polymeric barriers. This photolithography used an inexpensive photoresist formulation that allows rapid (~15 min) prototyping of paper-based plates. These plates are compatible with conventional microplate readers for quantitative absorbance and fluorescence measurements. The limit of detection per zone loaded for fluorescence was 125 fmol for fluorescein isothiocyanate-labeled bovine serum albumin, and this level corresponds to 0.02 times the quantity of analyte per well used to achieve comparable signal-to-noise in a 96-well plastic plate (using a solution of 25 nM labeled protein). The limits of detection for absorbance on paper were approximately 50 pmol per zone for both Coomassie Brilliant Blue and Amaranth dyes; these values were 0.4 times that required for the plastic plate. Demonstration of quantitative colorimetric correlations, using a scanner or camera to image the zones and to measure the intensity of color, makes it possible to conduct assays without a microplate reader.
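A quick arithmetic aside on the fluorescence figures quoted above: loading a 5 µL zone with a 25 nM labeled-protein solution corresponds to 125 fmol per zone, consistent with the stated limit of detection (the pairing of these two numbers is our reading of the abstract, not an additional result).

```python
# Quick arithmetic check of the fluorescence figures quoted above:
# a 5 microliter zone loaded with a 25 nM labeled-protein solution.
volume_L = 5e-6            # 5 uL per paper zone
conc_mol_per_L = 25e-9     # 25 nM FITC-labeled BSA
amount_mol = volume_L * conc_mol_per_L
print(f"{amount_mol * 1e15:.0f} fmol per zone")   # -> 125 fmol, matching the stated LOD
```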
Abstract:
The possibility of compressing analyte bands at the beginning of CE runs has many advantages. Analytes at low concentration can be analyzed with high signal-to-noise ratios by using the so-called sample stacking methods. Moreover, sample injections with very narrow initial band widths (small initial standard deviations) are sometimes useful, especially if high resolutions among the bands are required in the shortest run time. In the present work, a method of sample stacking is proposed and demonstrated. It is based on BGEs whose pH is highly temperature-sensitive (high dpH/dT) and on analytes with low dpKa/dT. High thermal sensitivity means that the working pKa of the BGE has a high dpKa/dT in modulus. For instance, Tris and ethanolamine have dpH/dT = -0.028/°C and -0.029/°C, respectively, whereas carboxylic acids have low dpKa/dT values, i.e. in the -0.002/°C to +0.002/°C range. The action of cooling and heating sections along the capillary during the runs also affects the local viscosity, conductivity, and electric field strength. The effect of these variables on electrophoretic velocity and band compression is theoretically calculated using a simple model. Finally, this stacking method was demonstrated for amino acids derivatized with naphthalene-2,3-dicarboxaldehyde and fluorescamine, using a temperature difference of 70 °C between two neighboring sections and Tris as separation buffer. In this case, the BGE has a high pH thermal coefficient whereas the carboxylic groups of the analytes have low pKa thermal coefficients. The application of these dynamic thermal gradients increased the peak heights of aspartic acid and glutamic acid derivatized with naphthalene-2,3-dicarboxaldehyde and of serine derivatized with fluorescamine by a factor of two (and decreased the standard deviations of the peaks by a factor of two). The effect of thermal compression of bands was not observed when runs were performed using phosphate buffer at pH 7 (negative control). Phosphate has a low dpH/dT in this pH range, similar to the dpKa/dT of the analytes. It is shown that |dpKa/dT - dpH/dT| >> 0 is one determining factor for obtaining significant stacking from dynamic thermal junctions.
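The following sketch illustrates the compression mechanism in the simplest possible terms, assuming a hypothetical analyte pKa near the working pH: the BGE pH shifts strongly with temperature (high dpH/dT) while the analyte pKa barely moves, so the fraction of ionized analyte, and hence its electrophoretic mobility, differs between the heated and cooled sections. All numerical values are illustrative, not the paper's model.

```python
# Illustrative sketch of the compression mechanism described above. The BGE pH
# shifts strongly with temperature (high dpH/dT) while the analyte pKa is nearly
# constant, so the analyte's ionized fraction (and mobility) differs between the
# heated and cooled sections. Numbers are illustrative, not the paper's model.
def ph_at(ph_25, dph_dt, temp_c):
    return ph_25 + dph_dt * (temp_c - 25.0)

def ionized_fraction(ph, pka):
    """Henderson-Hasselbalch fraction of a monoprotic acid present as the anion."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

pka_analyte = 6.5                      # hypothetical analyte pKa near the working pH
for temp in (15.0, 85.0):              # cooled vs. heated section (deltaT = 70 C)
    ph = ph_at(8.1, -0.028, temp)      # Tris-like BGE with dpH/dT = -0.028 per degC
    frac = ionized_fraction(ph, pka_analyte)
    print(f"T = {temp:4.1f} C  pH = {ph:5.2f}  ionized fraction = {frac:.3f}")
```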
Abstract:
Oscillations present in control loops can cause damage in the petrochemical industry. Canceling, or even preventing, such oscillations would save large amounts of money. Studies have identified nonlinearities in industrial process actuators as one of the causes of these oscillations. The objective of this study is to develop a methodology for removing the harmful effects of such nonlinearities. A parameter estimation method is proposed for Hammerstein models whose nonlinearity is represented by a dead zone or backlash. The estimated parameters are used to construct inverse compensation models. A simulated level-control system, in which the valve that controls the inflow has a nonlinearity, was used as a test platform. Results and describing-function analysis show an improvement in the system response.
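A minimal sketch of the compensation idea follows: a static dead-zone nonlinearity and the inverse model built from its estimated parameters, placed before the actuator so that the cascade behaves approximately linearly. Parameter values are illustrative and the linear dynamics of the Hammerstein model are omitted.

```python
# Minimal sketch of the compensation idea: a dead-zone nonlinearity and the inverse
# model built from its estimated parameters, so that dead_zone(dead_zone_inverse(u))
# approximately equals u. Parameter names and values are illustrative.
def dead_zone(u, left=-0.5, right=0.5, slope=1.0):
    """Static dead-zone: zero output inside [left, right], linear outside."""
    if u > right:
        return slope * (u - right)
    if u < left:
        return slope * (u - left)
    return 0.0

def dead_zone_inverse(v, left=-0.5, right=0.5, slope=1.0):
    """Pre-compensator built from the estimated dead-zone parameters."""
    if v > 0.0:
        return v / slope + right
    if v < 0.0:
        return v / slope + left
    return 0.0

# Usage: the compensator is placed before the actuator, so the cascade is ~linear.
for u in (-1.0, -0.2, 0.0, 0.3, 1.2):
    print(u, dead_zone(dead_zone_inverse(u)))
```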
Abstract:
Hydraulic fracturing is an operation in which pressurised fluid is injected into the geological formation surrounding the producing well to create new permeable paths for hydrocarbons. The injection of such fluids into the reservoir induces seismic events. The reservoir stimulation can be measured by locating these induced microseismic events. However, microseismic monitoring is an expensive operation because the acquisition and data interpretation systems used in this monitoring rely on high signal-to-noise ratios (SNR). In general, the sensors are deployed in a monitoring well near the treated well, which makes microseismic monitoring quite expensive. In this dissertation we propose the application of a new method for recording and locating microseismic events called nanoseismic monitoring (Joswig, 2006). In this new method, a continuous recording is performed and the interpreter can separate events from noise using sonograms. This new method also allows the location of seismic sources even when the P- and S-phase onsets are not clear, as in situations of 0 dB SNR. The clear technical advantage of this new method is also economically advantageous, since the sensors can potentially be installed on the surface rather than in an observation well. In this dissertation, field tests with controlled sources were carried out. In the first test, small firework explosives at slant distances of 28 m were detected, yielding magnitudes between -2.4 ≤ ML ≤ -1.6. In a second test, we monitored perforation shots in a producing oil field; one perforation shot was located at a slant distance of 861 m with magnitude 2.4 ML. Data from the tests allow us to say that the method has the potential to be used in the oil industry to monitor hydraulic fracturing.
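As a rough illustration (not Joswig's implementation), the sketch below computes a noise-adapted, log-scaled spectrogram in the spirit of a sonogram display: each frequency band is normalized by its own median noise level, which is what lets weak transients stand out at low SNR.

```python
# Hedged sketch, not Joswig's implementation: a noise-adapted log spectrogram
# ("sonogram"-style display) in which each frequency band is normalized by its own
# median noise level, so that weak events stand out at low SNR.
import numpy as np
from scipy.signal import spectrogram

def sonogram_like(trace, fs, nperseg=256):
    f, t, Sxx = spectrogram(trace, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    noise_floor = np.median(Sxx, axis=1, keepdims=True)     # per-band noise estimate
    return f, t, np.log10(Sxx / noise_floor + 1e-12)        # prewhitened, log-scaled

# Usage: synthetic noise with a weak transient burst.
rng = np.random.default_rng(2)
fs = 500.0
x = rng.standard_normal(10_000)
x[5000:5200] += 0.8 * np.sin(2 * np.pi * 40 * np.arange(200) / fs)  # weak event
f, t, S = sonogram_like(x, fs)
```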
Abstract:
We perform a detailed theoretical study, including decays and jet fragmentation, of all the important modes of single top quark production and all the basic background processes at the upgraded Fermilab Tevatron and CERN LHC colliders. Special attention is paid to the complete tree-level calculation of the QCD fake background, which was not considered in previous studies. An analysis of the various kinematical distributions for the signal and backgrounds allows us to work out a set of cuts for efficient background suppression and extraction of the signal. It is shown that the signal-to-background ratio after optimized cuts could reach about 0.4 at the Tevatron and 1 at the LHC. The signal rate remaining after cuts at the LHC for the lepton+jets signature is expected to be about 6.1 pb and will be enough to study single top quark physics even during LHC operation at low luminosity. ©1999 The American Physical Society.
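Purely as an illustration of the cut-and-count bookkeeping (the actual variables and cut values are worked out in the paper), the sketch below applies a set of hypothetical kinematic cuts to toy signal and background samples and reports the resulting signal-to-background ratio.

```python
# Purely illustrative cut-and-count sketch (the actual cuts and variables are worked
# out in the paper): apply kinematic cuts to toy signal and background samples and
# report the resulting signal-to-background ratio. Variable names, cut values, and
# cross sections here are hypothetical.
import numpy as np

def pass_cuts(events):
    """events: dict of arrays, e.g. lepton pT, missing ET, b-tagged jet multiplicity."""
    return (events["lep_pt"] > 20.0) & (events["met"] > 20.0) & (events["n_btag"] >= 1)

def s_over_b(signal, background, sigma_s, sigma_b):
    eff_s = np.mean(pass_cuts(signal))        # signal efficiency after cuts
    eff_b = np.mean(pass_cuts(background))    # background efficiency after cuts
    return (sigma_s * eff_s) / (sigma_b * eff_b)

# Usage with toy samples.
rng = np.random.default_rng(3)
def toy(n, pt_scale, met_scale, btag_p):
    return {"lep_pt": rng.exponential(pt_scale, n),
            "met": rng.exponential(met_scale, n),
            "n_btag": rng.binomial(2, btag_p, n)}

print(s_over_b(toy(10_000, 45, 40, 0.5), toy(10_000, 25, 15, 0.1),
               sigma_s=10.0, sigma_b=300.0))
```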
Abstract:
Background. From shotgun libraries used for the genomic sequencing of the phytopathogenic bacterium Xanthomonas axonopodis pv. citri (XAC), clones that were representative of the largest possible number of coding sequences (CDSs) were selected to create a DNA microarray platform on glass slides (XACarray). The creation of the XACarray allowed for the establishment of a tool that is capable of providing data for the analysis of global genome expression in this organism. Findings. The inserts from the selected clones were amplified by PCR with the universal oligonucleotide primers M13R and M13F. The obtained products were purified and fixed in duplicate on glass slides specific for use in DNA microarrays. The number of spots on the microarray totaled 6,144 and included 768 positive controls and 624 negative controls per slide. Validation of the platform was performed through hybridization of total DNA probes from XAC labeled with different fluorophores, Cy3 and Cy5. In this validation assay, 86% of all PCR products fixed on the glass slides were confirmed to present a hybridization signal greater than twice the standard deviation of the global median signal-to-noise ratio. Conclusions. Our validation of the XACarray platform using DNA-DNA hybridization revealed that it can be used to evaluate the expression of 2,365 individual CDSs from all major functional categories, which corresponds to 52.7% of the annotated CDSs of the XAC genome. As a proof of concept, we used this platform in a previous work to verify the absence of genomic regions that could not be detected by sequencing in related strains of Xanthomonas. © 2010 Moreira et al; licensee BioMed Central Ltd.
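For context, the sketch below applies a common spot-validation rule of thumb, requiring the spot signal to exceed the local background by more than twice the background standard deviation; the paper's exact criterion is only paraphrased in the abstract, so this is a generic illustration rather than the XACarray pipeline.

```python
# Generic spot-validation sketch: flag a spot as detected when its foreground signal
# exceeds the local background by more than twice the background standard deviation.
# This is a common microarray rule of thumb; the paper's exact criterion is only
# paraphrased in the abstract above.
import numpy as np

def validated(spot_signal, bg_mean, bg_std):
    """Boolean mask of spots passing the 2-sigma-over-background test."""
    spot_signal, bg_mean, bg_std = map(np.asarray, (spot_signal, bg_mean, bg_std))
    return spot_signal > bg_mean + 2.0 * bg_std

# Usage: fraction of validated spots on a slide with 6,144 spots (as above).
rng = np.random.default_rng(4)
signal = rng.gamma(2.0, 250.0, 6144)          # toy foreground intensities
bg = rng.normal(200.0, 30.0, 6144)            # toy local background estimates
print(np.mean(validated(signal, bg, 30.0)))
```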
Abstract:
The aim of this study was to determine the variation of temperature after shearing in sheep under dry and hot environmental conditions and to compare the temperature changes with variations in cardiac and respiratory frequencies, ruminal movements and hydration status. Twenty Suffolk unshorn ewes were studied. Physical examination was performed in all animals three times a day, at 7:00 AM, 1:00 PM and 7:00 PM, during 42 days (22 days before shearing and 20 days after shearing). The skin temperature was measured by infrared thermometer over several surfaces of the body. Data were submitted to analysis of variance for comparisons between groups (shorn versus unshorn) at each time, and significant differences were evaluated at the level of P<0.05 by the Tukey test. The respiratory frequency was statistically significant at all times. When air humidity was high, the respiratory frequencies were low. The thermal stress was clear in the sheep of this study, reflecting marked changes in cardiac and respiratory frequencies and rectal temperature. The respiratory frequency was the most reliable parameter for establishing a picture of thermal stress in the unshorn sheep, with values on average three times higher than those reported in the literature. The heart rate tracks the thermal variation of the environment and is also an indicator of heat stress. This variation shows that the Suffolk breed is well adapted to hot climates. The correlation of body surface temperatures with environmental temperature and air humidity was negative, as explained by the effect of wool insulation, i.e. even with an increase in environmental temperature and humidity, the body temperature tends to maintain a compensating balance. In the shorn animals, the correlation of skin temperature with environmental temperature and air humidity showed that the skin temperature increases when the environmental temperature increases. The increase in environmental temperature does not affect the body temperature of unshorn animals due to the insulating effect of the wool. However, when the environmental temperature rises, the presence of the wool starts to affect thermal comfort, as the heat absorption is larger than the capacity for heat loss. In this study, the best thermal stress indicators were the respiratory frequency and the rectal and skin temperatures. The temperatures of the skin measured at the perineum, axillae and inner thigh were considered the most reliable.
Abstract:
The Kalman-Bucy method is here analyzed and applied to the solution of a specific filtering problem to increase the message signal-to-noise ratio. The method is a time-domain treatment of a geophysical process classified as stochastic and non-stationary. The derivation of the estimator is based on the relationship between the Kalman-Bucy and Wiener approaches for linear systems. In the present work we emphasize the criterion used, the model with a priori information, the algorithm, and the quality of the results. The examples are for the ideal well-log response, and the results indicate that this method can be used on a variety of geophysical data treatments, and its study clearly offers proper insight into the modeling and processing of geophysical problems.
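The following minimal scalar Kalman filter sketch conveys the same idea on a synthetic well-log: a random-walk state model observed in additive noise, filtered recursively in the time domain. The process and measurement variances are illustrative tuning parameters, not values from the paper.

```python
# Minimal scalar Kalman filter sketch in the spirit described above: a random-walk
# state model for the well-log response observed in additive noise. Process and
# measurement variances (q, r) are illustrative tuning parameters, not the paper's.
import numpy as np

def kalman_denoise(z, q=1e-4, r=1e-1):
    """z: noisy measurements; returns the filtered estimate of the underlying signal."""
    x_hat = np.zeros_like(z, dtype=float)
    x, p = z[0], 1.0                       # initial state estimate and variance
    for k, zk in enumerate(z):
        p = p + q                          # predict (random-walk state)
        kgain = p / (p + r)                # Kalman gain
        x = x + kgain * (zk - x)           # update with the innovation
        p = (1.0 - kgain) * p
        x_hat[k] = x
    return x_hat

# Usage: a blocky "ideal well-log" corrupted by noise.
rng = np.random.default_rng(5)
truth = np.repeat([1.0, 3.0, 2.0, 4.0], 250)
noisy = truth + 0.5 * rng.standard_normal(truth.size)
smoothed = kalman_denoise(noisy)
```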
Abstract:
The Kirchhoff-type migration method is presented in the literature as one of the most important tools in seismic processing, serving as a basis for solving other imaging problems, owing to its lower computational cost compared with methods based on the numerical solution of the wave equation. In the three-dimensional (3D) case, however, even Kirchhoff migration becomes expensive with respect to the computational, and even numerical, requirements for its effective application. Therefore, in the present work, aiming to produce results with a higher signal-to-noise ratio at a lower computational cost, a simplification of the medium called 2.5D was used, based on the theoretical foundations of Gaussian beam propagation. Thus, starting from the Gaussian beam integral operator developed by Ferreira and Cruz (2009), a new integral operator for the superposition of paraxial fields (Gaussian beams) was derived and inserted into the kernel of the conventional true-amplitude Kirchhoff migration integral operator for the 2.5D situation, thereby defining a new Kirchhoff-type migration operator for the true-amplitude prestack class in 2.5D (KGB, Kirchhoff-Gaussian-Beam). Subsequently, this operator was particularized for the common-offset (CO) and common-angle (CA) measurement configurations. It should also be noted that in this thesis a kind of flexibilization of the Gaussian beam superposition integral operator was devised with respect to its application in more than one domain, namely common offset and common shot. In this thesis, applications are made to synthetic data generated from an anticline model.
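For orientation, the sketch below implements only the conventional constant-velocity, zero-offset Kirchhoff (diffraction-stack) summation in 2D, i.e. the kind of kernel into which the KGB operator inserts a Gaussian-beam superposition; it is not the thesis's true-amplitude 2.5D operator.

```python
# Hedged sketch: a plain constant-velocity, zero-offset Kirchhoff (diffraction-stack)
# migration kernel in 2D. This is only the conventional summation that the KGB
# operator above modifies by inserting a Gaussian-beam superposition in its kernel;
# it is not the thesis's true-amplitude 2.5D operator.
import numpy as np

def kirchhoff_migrate(data, dt, dx, dz, v, nz):
    """data[nt, nx]: zero-offset section; returns the migrated image[nz, nx]."""
    nt, nx = data.shape
    image = np.zeros((nz, nx))
    xs = np.arange(nx) * dx
    for iz in range(1, nz):
        z = iz * dz
        for ix in range(nx):                       # image point (xs[ix], z)
            # two-way diffraction traveltime to every surface position
            t = 2.0 * np.sqrt(z**2 + (xs - xs[ix])**2) / v
            it = np.rint(t / dt).astype(int)
            ok = it < nt
            image[iz, ix] = np.sum(data[it[ok], np.nonzero(ok)[0]])
    return image

# Usage: a toy zero-offset section containing one diffraction hyperbola.
nt, nx, dt, dx, v = 500, 101, 0.004, 10.0, 2000.0
data = np.zeros((nt, nx))
x0, z0 = 50 * dx, 400.0
t_diff = 2.0 * np.sqrt(z0**2 + (np.arange(nx) * dx - x0) ** 2) / v
for ix, tt in enumerate(t_diff):
    it = int(round(tt / dt))
    if it < nt:
        data[it, ix] = 1.0
img = kirchhoff_migrate(data, dt, dx, dz=5.0, v=v, nz=120)
```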
Abstract:
The present study aimed to compare elderly and young female voices at habitual and high intensity. The effect of increased intensity on acoustic and perceptual parameters was assessed. Sound pressure level, fundamental frequency, jitter, shimmer, and harmonics-to-noise ratio were obtained for habitual- and high-intensity voice in a group of 30 elderly women and 30 young women. Perceptual assessment was also performed. Both groups demonstrated an increase in sound pressure level and fundamental frequency from habitual voice to high-intensity voice. No differences were found between groups in any acoustic variable in samples recorded at the habitual intensity level. No significant differences between groups were found at the habitual intensity level for pitch, hoarseness, roughness, and breathiness. Asthenia and instability obtained significantly higher values in elderly than in young participants, whereas the elderly demonstrated lower values for perceived tension and loudness than the young subjects. Acoustic and perceptual measures do not demonstrate evident differences between elderly and young speakers at the habitual intensity level. The parameters analyzed may lack the sensitivity necessary to detect differences in subjects with normal voices. Phonation at high intensity highlights differences between groups, especially in perceptual parameters. Therefore, high intensity should be included when comparing elderly and young voices.
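For reference, the sketch below computes the two perturbation measures named above (local jitter and local shimmer) from a sequence of glottal cycle periods and peak amplitudes; extracting those cycles from the recordings is assumed to have been done elsewhere.

```python
# Illustrative sketch of the perturbation measures named above, computed from a
# sequence of glottal cycle periods and peak amplitudes (extraction of those cycles
# from the audio is assumed to have been done elsewhere).
import numpy as np

def jitter_percent(periods):
    """Mean absolute difference between consecutive periods, relative to the mean period."""
    periods = np.asarray(periods, float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_percent(amplitudes):
    """The same local perturbation measure applied to cycle peak amplitudes."""
    amplitudes = np.asarray(amplitudes, float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Usage with a slightly perturbed 200 Hz voice (periods in seconds).
rng = np.random.default_rng(6)
T = 0.005 * (1.0 + 0.003 * rng.standard_normal(100))
A = 1.0 * (1.0 + 0.02 * rng.standard_normal(100))
print(jitter_percent(T), shimmer_percent(A))
```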
Abstract:
The objective of the present study was to optimize a radiographic technique for hand examinations using a computed radiography (CR) system and to demonstrate the potential for dose reductions compared with the clinically established technique. An exposure index was generated from the optimized technique to guide operators when imaging hands. Homogeneous and anthropomorphic phantoms that simulated a patient's hand were imaged using a CR system at various tube voltages and current settings (40-55 kVp, 1.25-2.8 mAs), including those used in the clinical routine (50 kVp, 2.0 mAs), to obtain an optimized chart. The homogeneous phantom was used to assess objective parameters associated with image quality, including the signal difference-to-noise ratio (SdNR), which is used to define a figure of merit (FOM) in the optimization process. The anthropomorphic phantom was used to subjectively evaluate image quality using Visual Grading Analysis (VGA) performed by three experienced radiologists. The technique that had the best VGA score and highest FOM was considered the gold standard (GS) in the present study. The image quality, dose, and exposure index currently used in the clinical routine for hand examinations at our institution were compared with those of the GS technique. The effective dose reduction was 67.0%. Good image quality was obtained with both techniques, although the exposure indices were 1.60 and 2.39 for the GS and clinical routine techniques, respectively.
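As a sketch of the optimization bookkeeping, the code below computes the SdNR between two regions of interest and a commonly adopted figure of merit, FOM = SdNR^2 / dose, then picks the technique with the largest FOM; the paper's exact FOM definition is not given in the abstract, and all numbers are illustrative.

```python
# Sketch of the optimization bookkeeping described above, using a commonly adopted
# figure of merit, FOM = SdNR^2 / dose (the paper's exact definition is not given in
# the abstract). ROI statistics and relative dose values below are illustrative.
import numpy as np

def sdnr(signal_roi, background_roi):
    """Signal-difference-to-noise ratio between two regions of interest."""
    return abs(np.mean(signal_roi) - np.mean(background_roi)) / np.std(background_roi)

def figure_of_merit(signal_roi, background_roi, dose):
    return sdnr(signal_roi, background_roi) ** 2 / dose

# Usage: pick the technique (kVp, mAs) with the largest FOM among candidates.
rng = np.random.default_rng(7)
candidates = {
    (50, 2.0): (rng.normal(120, 5, 1000), rng.normal(100, 5, 1000), 1.00),  # clinical
    (45, 1.6): (rng.normal(118, 6, 1000), rng.normal(100, 6, 1000), 0.33),  # candidate
}
best = max(candidates, key=lambda k: figure_of_merit(*candidates[k]))
print("best technique (kVp, mAs):", best)
```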