976 results for Spectral curve shape
Abstract:
We report the observation of strongly temperature (T)-dependent spectral lines in electronic Raman-scattering spectra of graphite in a high magnetic field up to 45 T applied along the c axis. The magnetic field quantizes the in-plane motion, while the out-of-plane motion remains free, effectively reducing the system dimension from 3 to 1. Optically created electron-hole pairs interact with, or shake up, the one-dimensional Fermi sea in the lowest Landau subbands. Based on the Tomonaga-Luttinger liquid theory, we show that interaction effects modify the spectral line shape from (ω-Δ)^(-1/2) to (ω-Δ)^(2α-1/2) at T = 0. At finite T, we predict a thermal broadening factor that increases linearly with T. Our model reproduces the observed T-dependent line shape, determining the electron-electron interaction parameter α to be ∼0.05 at 40 T. © 2014 American Physical Society.
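The line-shape change reported above can be written as a display equation (a sketch using the abstract's own symbols; the finite-T prefactor is not given there):

```latex
% Non-interacting 1D Fermi sea: square-root divergence at the band edge
I_0(\omega) \;\propto\; (\omega-\Delta)^{-1/2}, \qquad \omega > \Delta
% Shake-up of the Tomonaga-Luttinger liquid softens the divergence at T = 0
I(\omega) \;\propto\; (\omega-\Delta)^{\,2\alpha-1/2}
```

At finite T the band edge is additionally smeared by a thermal width that grows linearly, Γ ∝ k_B T, which is what makes the observed line shape strongly temperature dependent.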
Abstract:
Plasmon resonance spectra of supported Ag nanoparticles are studied by depositing the particles on different substrates. The dielectric properties of the substrates are found to have significant effects on the spectral line shape, though not on the resonance frequency. Outside the plasmon resonance band, the spectral shape is governed mainly by the dielectric function of the substrate, particularly its imaginary part. The plasmon resonance band itself, on the other hand, may be severely distorted if the substrate is strongly absorbing.
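A rough quasistatic sketch of why the substrate's dielectric function enters (this particular model is an assumption for illustration, not taken from the abstract): a small sphere of dielectric function ε(ω) embedded in an effective medium ε_m has polarizability

```latex
\alpha(\omega) \;=\; 4\pi a^{3}\,
  \frac{\varepsilon(\omega)-\varepsilon_m}{\varepsilon(\omega)+2\varepsilon_m},
\qquad
\varepsilon_m \;\approx\; \frac{1+\varepsilon_s(\omega)}{2}
```

where the supported particle is often crudely modeled by averaging vacuum with the substrate's ε_s (an image-dipole-type approximation). Extinction scales with Im α, so a strongly absorbing substrate (large Im ε_s) both shifts and reshapes the resonance.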
Abstract:
Because of the complexity and particularity of reservoir correlation, and especially because the results depend heavily on expert experience, calculation methods based on simple mathematical models are rarely effective in oilfield practice. This paper proposes a method for reservoir correlation using log curves that combines artificial intelligence with signal processing. Following the principle of "control by classification, correlation by depositional cycle", a correlation system was built that first identifies the "standard layer" by an improved grey-relation method, then interprets faults on the basis of the identified standard layer, and finally identifies the layers within the reservoir. An effective "consistent reservoir character" method was adopted to resolve the difficulty of fault interpretation. On the basis of sedimentary theory and quantitative analysis of the log curve shapes of different types of microfacies, a series of microfacies models was built using eight optimized parameters, five of which are used to describe microfacies from the log curve; the distribution range of each parameter was given for every microfacies. Because classical mathematics applies only where the governing principles are clear, and is ill-suited to describing geological character, fuzzy comprehensive evaluation was adopted to determine microfacies from log curves; the agreement rate is 85 percent. A Windows-based software package was programmed that integrates data processing, automatic reservoir-layer correlation, determination of microfacies from log curves, characterization of sandstone connectivity, and the plotting of geological maps. In application, the system has shown high precision and has become a useful tool in geological studies.
Abstract:
Automatic velocity picking not only improves the efficiency of seismic data processing but also quickly provides an initial velocity for prestack depth migration. In this thesis we use the Viterbi algorithm for automatic picking, but the velocities picked are often unreasonable. Detailed study and analysis suggest that, while the Viterbi algorithm can perform automatic picking quickly and effectively, the data supplied to it may not have continuous derivatives over its curved surface, i.e., the velocity-spectrum surface is not smooth. The picked velocities may therefore include spurious velocity information. To solve this problem, we develop a new filtering method based on a nonlinear coordinate transformation and a filtering function, which we call the Gravity Center Preserved Pulse Compressed Filter (GCPPCF). Its main idea is as follows: divide a curve, such as a pulse, into several subsections; calculate the gravity center (coordinate displacement) of each subsection; and then assign the subsection's value (density) to the gravity center. When the gravity center departs from the center of its subsection, the value assigned to it is smaller than the actual one; only when the gravity center coincides exactly with the subsection center does the assigned value equal the actual one. As a result, the curve in the new coordinates is narrower than the original. This is a nonlinear coordinate transformation, because the gravity center changes with the shape of the subsection. Furthermore, the operation is a filter, because the value moved from the subsection center to the gravity center is the weighted mean of the subsection function.
In addition, the filter behaves as an adaptive, time-varying filter, since the weighting coefficients used for the weighted mean also change with the shape of the subsection. In this thesis, the Viterbi algorithm is applied to automatic picking of stacking velocities: it accumulates the maxima of the velocity spectrum ("energy groups") in a forward pass and recovers the optimal solution by backward recursion, making it a convenient tool for automatic velocity picking. The GCPPCF can be used not only to preserve peak positions while compressing the velocity spectrum, but also, as an adaptive time-varying filter, to smooth a target curve or surface. We apply it to smooth the observed data sequence and so obtain favorable source data from which the final exact solution is achieved. Without the adaptive time-varying filter for optimization we cannot obtain clean source data or valid velocity information; and without the Viterbi algorithm's efficient search we cannot pick velocities automatically. The combination of the two algorithms therefore yields an effective method of automatic picking. We apply the method to velocity analysis of the extrapolated wavefield; the results show that imaging of deep layers with the extrapolated wavefield is markedly improved. The GCPPCF has performed well in application: it can be used not only to optimize and smooth velocity spectra but also to process other types of signal. The automatic velocity-picking method developed in this thesis has given favorable results on a simple model, a complicated model (the Marmousi model), and real data, demonstrating both its feasibility and its practicality.
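The gravity-center idea described above can be sketched in a few lines (this is my own minimal illustration, not the thesis code; the window length and the use of absolute values as weights are assumptions):

```python
import numpy as np

def gcppcf(signal, win=5):
    """Minimal sketch of a gravity-center pulse-compressing filter:
    split the signal into subsections, locate each subsection's center
    of gravity, and move the subsection's weighted-mean value there.
    A symmetric pulse keeps its peak position while the energy is
    concentrated onto fewer samples, narrowing the curve."""
    out = np.zeros(len(signal))
    for start in range(0, len(signal), win):
        seg = np.asarray(signal[start:start + win], dtype=float)
        w = np.abs(seg)                      # weights ("density")
        if w.sum() == 0.0:
            continue                         # empty subsection: nothing to move
        # center of gravity (fractional index within the subsection)
        cog = np.sum(np.arange(len(seg)) * w) / w.sum()
        # weighted mean of the subsection assigned to the gravity center
        out[start + int(round(cog))] = np.sum(seg * w) / w.sum()
    return out
```

For a symmetric pulse such as [0, 0, 1, 2, 1, 0, …] with win=5, all of the pulse's energy collapses onto the peak sample, preserving the peak position while compressing the pulse width.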
Abstract:
We assemble a sample of 24 hydrogen-poor superluminous supernovae (SLSNe). Parameterizing the light-curve shape through rise and decline time-scales shows that the two are highly correlated. Magnetar-powered models can reproduce the correlation, with the diversity in rise and decline rates driven by the diffusion time-scale. Circumstellar interaction models can exhibit a similar rise-decline relation, but only for a narrow range of densities, which may be problematic for these models. We find that SLSNe are approximately 3.5 mag brighter and have light curves three times broader than SNe Ibc, but that the intrinsic shapes are similar. There are a number of SLSNe with particularly broad light curves, possibly indicating two progenitor channels, but statistical tests do not cleanly separate two populations. The general spectral evolution is also presented. Velocities measured from Fe II are similar for SLSNe and SNe Ibc, suggesting that diffusion time differences are dominated by mass or opacity. Flat velocity evolution in most SLSNe suggests a dense shell of ejecta. If opacities in SLSNe are similar to other SNe Ibc, the average ejected mass is higher by a factor of 2-3. Assuming κ = 0.1 cm² g⁻¹, we estimate a mean (median) SLSN ejecta mass of 10 M⊙ (6 M⊙), with a range of 3-30 M⊙. Doubling the assumed opacity brings the masses closer to normal SNe Ibc, but with a high-mass tail. The most probable mechanism for generating SLSNe seems to be the core collapse of a very massive hydrogen-poor star, forming a millisecond magnetar.
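Mass estimates of this kind follow from the photon diffusion timescale. A back-of-envelope sketch, assuming the commonly used Arnett scaling t_d = sqrt(2κM/(βcv)) with β ≈ 13.8 (the formula, constant, and example numbers here are my assumptions, not taken from the paper):

```python
import math

M_SUN = 1.989e33   # g
C = 2.998e10       # speed of light, cm/s
BETA = 13.8        # Arnett's dimensionless constant (assumed)

def ejecta_mass_msun(t_d_days, v_cm_s, kappa=0.1):
    """Invert t_d = sqrt(2*kappa*M / (BETA*c*v)) for the ejecta mass
    (in solar masses), given the diffusion time in days, the ejecta
    velocity in cm/s, and the opacity kappa in cm^2/g."""
    t_d = t_d_days * 86400.0
    return t_d ** 2 * BETA * C * v_cm_s / (2.0 * kappa) / M_SUN

# A 30-day diffusion time at 10,000 km/s with kappa = 0.1 cm^2/g
# gives roughly 7 Msun, consistent with the quoted 6-10 Msun range;
# doubling kappa halves the inferred mass, as stated in the abstract.
```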
Abstract:
Many marine organisms have pelagic larval stages that settle into benthic habitats occupied by older individuals; however, a mechanistic understanding of intercohort interactions remains elusive for most species. Patterns of spatial covariation in the densities of juvenile and adult age classes of a small temperate reef fish, the common triplefin (Forsterygion lapillum), were evaluated during the recruitment season (Feb-Mar, 2011) in Wellington, New Zealand (41°17′S, 174°46′E). The relationship between juvenile and adult density among sites was best approximated by a dome-shaped curve, with a negative correlation between densities of juveniles and adults at higher adult densities. The curve shape was temporally variable, but was unaffected by settlement habitat type (algal species). A laboratory experiment using a "multiple-predator effects" design tested the hypothesis that increased settler mortality in the presence of adults (via enhanced predation risk or cannibalism) contributed to the observed negative relationship between juveniles and adults. Settler mortality did not differ between controls and treatments that contained either one (p = 0.08) or two (p = 0.09) adults. However, post hoc analyses revealed a significant positive correlation between the mean length of juveniles used in experimental trials and survival of juveniles in these treatments, suggesting that smaller juveniles may be vulnerable to cannibalism. There was no evidence for risk enhancement or predator interference when adults were present alongside a heterospecific predator (F. varium). These results highlight the complex nature of intercohort relationships in shaping recruitment patterns and add to the growing body of literature recognizing the importance of age-class interactions.
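The abstract does not give the functional form of the dome-shaped juvenile-adult relation; as a purely illustrative assumption, a Ricker-type curve has exactly this shape (hypothetical parameters):

```python
import numpy as np

def ricker(adults, a, b):
    """Hypothetical dome-shaped juvenile-vs-adult relation: juvenile
    density rises at low adult density, peaks at adults = 1/b, and
    declines at higher adult densities (the negative correlation
    reported at high density)."""
    return a * adults * np.exp(-b * adults)

adult_density = np.linspace(0.0, 10.0, 1001)
juvenile_density = ricker(adult_density, a=2.0, b=0.5)   # peak at density 2
```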
Abstract:
Work carried out under joint supervision (cotutelle) with Université Paris-Diderot and the Commissariat à l'Energie Atomique, under the direction of John Harnad and Bertrand Eynard.
Abstract:
Humans communicate through several types of channels: words, voice, body gestures, emotions, and so on. For this reason, a computer must perceive these various communication channels in order to interact intelligently with humans, for example by making use of microphones and webcams. In this thesis, we are interested in determining human emotions from images or videos of faces, so as to then use this information in different application domains. The dissertation begins with a brief introduction to machine learning, focusing on the models and algorithms we used, such as multilayer perceptrons, convolutional neural networks, and autoencoders. It then presents the results of applying these models to several facial-expression and emotion datasets. We concentrate on the study of different types of autoencoders (denoising autoencoders, contractive autoencoders, etc.) in order to reveal some of their limitations, such as the possibility of co-adaptation between filters or of obtaining an overly smooth spectral curve, and we study new ideas to address these problems. We also propose a new approach to overcome a limitation of autoencoders traditionally trained in a purely unsupervised way, that is, without using any knowledge of the task we ultimately want to solve (such as predicting class labels), by developing a new semi-supervised learning criterion that exploits a small number of labeled examples in combination with a large quantity of unlabeled data, in order to learn a representation suited to the classification task and obtain better classification performance.
Finally, we describe the general operation of our emotion-detection system and propose new ideas that could lead to future work.
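The denoising autoencoder discussed in the abstract can be sketched very compactly (this is my own minimal illustration with assumed architecture and hyperparameters, not the thesis implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    """Minimal tied-weight denoising autoencoder: corrupt the input,
    encode/decode it, and backpropagate the squared error against the
    *clean* input. Illustrative sketch only."""

    def __init__(self, n_visible, n_hidden, lr=0.5):
        self.W = rng.normal(0.0, 0.1, (n_visible, n_hidden))
        self.b_h = np.zeros(n_hidden)
        self.b_v = np.zeros(n_visible)
        self.lr = lr

    def reconstruct(self, x):
        h = sigmoid(x @ self.W + self.b_h)
        return sigmoid(h @ self.W.T + self.b_v)

    def train_step(self, x, corruption=0.2):
        # corrupt the input by zeroing a random fraction of entries
        x_tilde = x * (rng.random(x.shape) > corruption)
        h = sigmoid(x_tilde @ self.W + self.b_h)   # encode
        z = sigmoid(h @ self.W.T + self.b_v)       # decode
        # gradients of 0.5*(z - x)^2 through both uses of the tied W
        dz = (z - x) * z * (1.0 - z)
        dh = (dz @ self.W) * h * (1.0 - h)
        self.W -= self.lr * (x_tilde.T @ dh + dz.T @ h)
        self.b_h -= self.lr * dh.sum(axis=0)
        self.b_v -= self.lr * dz.sum(axis=0)
        return 0.5 * np.mean((z - x) ** 2)
```

Training on a toy batch (e.g. the 8 one-hot patterns of `np.eye(8)`) steadily drives the reconstruction error down, which is the behavior the semi-supervised criterion in the thesis builds upon.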
Abstract:
Accurate estimates of how soil water stress affects plant transpiration are crucial for reliable land surface model (LSM) predictions. Current LSMs generally use a water stress factor, β, dependent on soil moisture content, θ, that ranges linearly between β = 1 for unstressed vegetation and β = 0 when wilting point is reached. This paper explores the feasibility of replacing the current approach with equations that use soil water potential as their independent variable, or with a set of equations that involve hydraulic and chemical signaling, thereby ensuring feedbacks between the entire soil-root-xylem-leaf system. A comparison with the original linear θ-based water stress parameterization, and with its improved curvilinear version, was conducted. Model suitability was assessed by the ability to simulate the correct (as derived from experimental data) curve shape of relative transpiration versus fraction of transpirable soil water. We used model sensitivity analyses under progressive soil drying conditions, employing two commonly used approaches to calculate water retention and hydraulic conductivity curves. Furthermore, for each of these hydraulic parameterizations we used two different parameter sets for three soil texture types, giving a total of 12 soil hydraulic permutations. Results showed that the resulting transpiration reduction functions (TRFs) varied considerably among the models. The fact that soil hydraulic conductivity played a major role in the model that involved hydraulic and chemical signaling led to unrealistic values of β, and hence of the TRF, for many soil hydraulic parameter sets. However, this model is much better equipped to simulate the behavior of different plant species. Based on these findings, we recommend implementing this approach in LSMs only if great care is taken with the choice of soil hydraulic parameters.
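The linear β parameterization described above is simple enough to state directly; a minimal sketch (variable names are mine; many LSMs express it between a wilting-point and a critical moisture content):

```python
def beta_linear(theta, theta_wilt, theta_crit):
    """Soil-water stress factor: 0 at/below the wilting point,
    ramping linearly to 1 at/above the critical moisture content."""
    frac = (theta - theta_wilt) / (theta_crit - theta_wilt)
    return min(1.0, max(0.0, frac))
```

Transpiration is then scaled as E = β · E_unstressed; the alternatives examined in the paper replace the θ argument with soil water potential, or replace β altogether with a hydraulic/chemical signaling model.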
Abstract:
Grossular is one of the six members of the silicate garnet group. Two samples, GI and GII, have been investigated with respect to their thermally stimulated luminescence (TL); EPR and optical absorption measurements were also carried out to find out whether or not the same point defects are responsible for all three properties. Although X-ray diffraction analysis has shown that both GI and GII have practically the same crystal structure as a standard grossular crystal, they behaved differently in many respects. The TL glow curve shape, the TL response to radiation dose, the effect of annealing at high temperatures before irradiation, and the dependence of the UV bleaching parameters on peak temperature all differ between GI and GII. The EPR signals around g = 2.0, as well as those at g = 4.3 and 6.0, are much more intense in GI than in GII. Very high temperature (> 800 degrees C) annealing causes a large increase in the bulk background absorption in GI, but only a very small one in GII. In the cases of EPR and optical absorption, the difference in behavior can be attributed to Fe3+ ions; in the TL case, however, it cannot, and the cause has not yet been found. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
The purpose of this study was to determine the influence of hearing protection devices (HPDs) on the understanding of speech in young adults with normal hearing, both in silence and in the presence of ambient noise. The experimental research was carried out with the following variables: five different conditions of HPD use (without protectors, with two types of earplug, and with two types of earmuff); one type of noise (pink noise); four test levels (60, 70, 80 and 90 dB[A]); six signal/noise ratios (without noise, +5, +10, 0, -5 and -10 dB); and five repetitions of each case, totalling 600 tests with 10 monosyllables each. The variable measured was the percentage of correctly heard (monosyllabic) words in the test. The results revealed that, at the lowest levels (60 and 70 dB), the protectors reduced the intelligibility of speech (compared with the tests without protectors) while, at ambient noise levels of 80 and 90 dB and unfavourable signal/noise ratios (0, -5 and -10 dB), the HPDs improved intelligibility. A comparison of the effectiveness of earplugs versus earmuffs showed that the former offer greater efficiency with respect to speech recognition, providing a 30% improvement over situations in which no protection is used. As might be expected, this study confirmed that the protectors' influence on speech intelligibility is related directly to the spectral curve of the protector's attenuation. (C) 2003 Elsevier B.V. Ltd. All rights reserved.
Abstract:
Aim. Duplex scanning has been used in the evaluation of the aorta and proximal arteries of the lower extremities, but has limitations in evaluating the arteries of the leg. The utilization of ultrasonographic contrast (USC) may be helpful in improving the quality of the image in these arteries. The objective of the present study was to verify whether USC increases the diagnostic accuracy of patency of the leg arteries and whether it diminishes the time needed to perform duplex scanning.
Methods. Twenty patients with critical ischemia (20 lower extremities) were examined by standard duplex scanning, duplex scanning with contrast, and digital subtraction arteriography (DSA). The 3 arteries of the leg were divided into 3 segments, for a total of 9 segments per limb. Each segment was evaluated for patency in order to compare the 3 diagnostic methods. Standard duplex scanning and duplex scanning with contrast were compared in terms of the quality of the color-coded Doppler signal and of the spectral curve, and of the time taken to perform the exams.
Results. Duplex scanning with contrast was similar to arteriography for the diagnosis of patency (p>0.3) and even superior in some of the segments. Standard duplex scanning was inferior to arteriography and to duplex scanning with contrast (p<0.001). There were improvements of 70% in the intensity of the color-coded Doppler signal and 76% in the spectral curve after the utilization of contrast. The time necessary to perform the examinations was 23.7 minutes for standard duplex scanning and 16.9 minutes for duplex scanning with contrast (p<0.001).
Conclusion. The use of ultrasonographic contrast increased the accuracy of the diagnosis of patency of leg arteries and diminished the time necessary for the execution of duplex scanning.
Abstract:
The use of mathematical functions to describe animal growth has a long history. They make it possible to summarize information at a few strategic points of weight development and to describe the evolution of weight as a function of the animal's age. It is also possible to compare the growth rates of different individuals at equivalent physiological stages. The growth-curve models most used in poultry science are those derived from the Richards function, because their parameters allow biological interpretation and can therefore support selection for a particular growth-curve shape in birds. Segmented polynomials can also be used to describe changes in the trend of the animal growth curve. However, there are important sources of variation for the curve parameters, such as species, rearing system, sex, and their interactions. The adequacy of the models can be verified by the coefficient of determination (R²), the residual mean square, the mean prediction error, the ease of convergence, and the possibility of biological interpretation of the parameters. Studies involving the modeling and description of the growth curve and its components are widely discussed in the literature; however, selection programs aiming at genetic progress for curve shape are not mentioned. Evaluating the parameters of growth-curve models is all the more relevant because the largest genetic gains for weight are related to selection for weights at ages close to the inflection point. Selection for precocity can be aided by model parameters associated with variables that describe this genetic characteristic of the animals. These parameters are related to important productive and reproductive traits and have different magnitudes according to the species, sex, and model used in the evaluation.
Another methodology is random regression models, which allow gradual changes in the covariances between ages over time and predict variances and covariances at points along the studied trajectory. Random regression models have the advantage of separating the variation of the phenotypic growth curve into its different additive genetic and individual permanent environmental effects, by determining random regression coefficients for these effects. Moreover, there is no need to use age-adjustment factors. The objectives of this review were to survey the main frequentist mathematical models used in the study of poultry growth curves, with particular emphasis on those employed to estimate genetic and phenotypic parameters.
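The biological interpretability of the Richards family can be illustrated concretely (this particular parameterization is one common form, and the parameter values are hypothetical, not taken from the studies reviewed):

```python
import math

def richards(t, A, b, k, m):
    """Richards growth curve W(t) = A*(1 - b*exp(-k*t))**m:
    A = asymptotic (mature) weight, k = maturity rate, b is set by
    the weight at t = 0, and m controls the curve shape (m = 1
    reduces to the Brody function)."""
    return A * (1.0 - b * math.exp(-k * t)) ** m

def inflection_point(A, b, k, m):
    """Age and weight at the inflection (W'' = 0), where the growth
    rate peaks: solving b*exp(-k*t_i) = 1/m gives t_i, and
    W_i = A*((m - 1)/m)**m."""
    t_i = math.log(b * m) / k
    W_i = A * ((m - 1.0) / m) ** m
    return t_i, W_i

# hypothetical broiler-like values: A = 2500 g, b = 0.9, k = 0.05/day, m = 3
```

Selection for weights near t_i is where the review notes the largest genetic gains, which is why these parameters are worth estimating directly.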
Abstract:
Graduate Program in Genetics and Animal Breeding - FCAV
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)