990 results for 1 sigma error


Relevance:

80.00%

Publisher:

Abstract:

Spectral changes of Na$_2$ in liquid helium were studied using the sequential Monte Carlo-quantum mechanics method. Configurations composed of Na$_2$ surrounded by explicit helium atoms, sampled from the Monte Carlo simulation, were submitted to time-dependent density-functional theory calculations of the electronic absorption spectrum using different functionals. Attention is given to both line shift and line broadening. The Perdew, Burke, and Ernzerhof functional (PBE1PBE, also known as PBE0), at the PBE1PBE/6-311++G(2d,2p) level, gives a spectral shift, relative to the gas phase, of 500 cm$^{-1}$ for the allowed $X\,{}^1\Sigma_g^+ \rightarrow B\,{}^1\Pi_u$ transition, in very good agreement with the experimental value (700 cm$^{-1}$). For comparison, cluster calculations were also performed, and the first $X\,{}^1\Sigma_g^+ \rightarrow A\,{}^1\Sigma_u^+$ transition was also considered.
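
For orientation, a minimal sketch (not the authors' code) of how a sequential Monte Carlo-quantum mechanics solvent shift is assembled: excitation energies computed for individual solvated snapshots are averaged and compared with the gas-phase value. All numbers below are hypothetical placeholders.

```python
# Hedged sketch: average solvent shift from per-snapshot TD-DFT excitation energies.
# All values are hypothetical placeholders, not results from the paper.

EV_TO_CM1 = 8065.54  # 1 eV expressed in cm^-1

gas_phase_ev = 2.50                        # gas-phase excitation energy (eV), placeholder
snapshot_ev = [2.54, 2.57, 2.55, 2.58]     # TD-DFT energies for sampled MC snapshots (eV), placeholders

mean_ev = sum(snapshot_ev) / len(snapshot_ev)
shift_cm1 = (mean_ev - gas_phase_ev) * EV_TO_CM1   # positive = blue shift relative to gas phase

print(f"average solvent shift: {shift_cm1:.0f} cm^-1")
```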

Relevance:

80.00%

Publisher:

Abstract:

Recent detections of high-redshift absorption by both atomic hydrogen and molecular gas in the radio spectra of quasars have provided a powerful tool for measuring possible temporal and spatial variations of physical 'constants' in the Universe. We compare the frequency of high-redshift hydrogen 21-cm absorption with that of associated molecular absorption in two quasars to place new (1 sigma) upper limits on any variation in $y = g_p \alpha^2$ (where $\alpha$ is the fine-structure constant and $g_p$ is the proton g-factor) of $|\Delta y/y| < 5 \times 10^{-6}$ at redshifts $z = 0.25$ and $z = 0.68$. These quasars are separated by a comoving distance of 3000 Mpc (for $H_0 = 75$ km s$^{-1}$ Mpc$^{-1}$ and $q_0 = 0$). We also derive limits on the time rates of change, $|\dot{g}_p/g_p| < 1 \times 10^{-15}$ yr$^{-1}$ and $|\dot{\alpha}/\alpha| < 5 \times 10^{-16}$ yr$^{-1}$, between the present epoch and $z = 0.68$. These limits are more than an order of magnitude smaller than previous results derived from high-redshift measurements.
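
Since $y = g_p \alpha^2$, logarithmic differentiation (a standard step, not quoted from the paper) shows how a bound on the variation of $y$ over the look-back time $\Delta t$ to $z = 0.68$ translates into the individual rate limits, with each single-parameter limit typically obtained by assuming the other quantity is held fixed:

$$\frac{\dot{y}}{y} = \frac{\dot{g}_p}{g_p} + 2\,\frac{\dot{\alpha}}{\alpha}, \qquad \left|\frac{\dot{y}}{y}\right| \lesssim \frac{|\Delta y / y|}{\Delta t}.$$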

Relevance:

80.00%

Publisher:

Abstract:

PURPOSE: To analyze the effects of variations in femtosecond laser energy level on corneal stromal cell death and inflammatory cell influx following flap creation in a rabbit model. METHODS: Eighteen rabbits were stratified into three groups according to the level of energy applied for flap creation (six animals per group). Three energy levels were chosen for both the lamellar and side cuts: 2.7 µJ (high energy), 1.6 µJ (intermediate energy), and 0.5 µJ (low energy), using a 60-kHz model II femtosecond laser (IntraLase). The opposite eye of each rabbit served as a control. At the 24-hour time point after surgery, all rabbits were euthanized and the corneoscleral rims were analyzed for the levels of cell death and inflammatory cell influx with the terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL) assay and immunocytochemistry for the monocyte marker CD11b, respectively. RESULTS: The high energy group (31.9 ± 7.1 [standard error of the mean (SEM) 2.9]) had significantly more TUNEL-positive cells in the central flap than the intermediate (22.2 ± 1.9 [SEM 0.8], P = .004), low (17.9 ± 4.0 [SEM 1.6], P ≤ .001), and control (0.06 ± 0.02 [SEM 0.009], P ≤ .001) groups. The intermediate and low energy groups also had significantly more TUNEL-positive cells than the control group (P ≤ .001). The difference between the intermediate and low energy levels was not significant (P = .56). The mean number of CD11b-positive cells per 400× field at the flap edge was 26.1 ± 29.3 (SEM 11.9), 5.8 ± 4.1 (SEM 1.6), 1.6 ± 4.1 (SEM 1.6), and 0.005 ± 0.01 (SEM 0.005) for the high energy, intermediate energy, low energy, and control groups, respectively. Only the intermediate energy group showed statistically more inflammatory cells than control eyes (P = .015), most likely due to variability between eyes. CONCLUSIONS: Higher energy levels trigger greater cell death when the femtosecond laser is used to create corneal flaps. Greater corneal inflammatory cell infiltration is observed with higher femtosecond laser energy levels. [J Refract Surg. 2009;25:869-874.] doi:10.3928/1081597X-20090917-08

Relevance:

80.00%

Publisher:

Abstract:

The purpose of the present study was to evaluate the intra- and interday reliability of surface electromyographic amplitude values of the scapular girdle and upper limb muscles during three isometric closed kinetic chain exercises, performed with the distal extremity of the upper limbs fixed either on a stable base of support or on a Swiss ball (relatively unstable). Twenty healthy adults performed the push-up, bench-press and wall-press exercises at different effort levels (80% and 100% of maximal load). Subjects performed three maximal voluntary contractions (MVC) in the muscle-testing position of each muscle to obtain a reference value for root mean square (RMS) normalization. Individuals were instructed to perform, in random order, three isometric contraction series, in which each exercise lasted 6 s with a 2-min rest period between series and exercises. Intra- and interday reliabilities were calculated with the intraclass correlation coefficient (ICC(2,1)) and the standard error of measurement (SEM). Results indicated excellent intraday reliability of the electromyographic amplitude values (ICC ≥ 0.75). The interday reliability of the normalized RMS values ranged from good to excellent (ICC 0.52-0.98). Finally, the results suggest that the normalized electromyographic amplitude values of the analyzed muscles are more reliable during exercises on a stable surface. However, the load levels used during the exercises do not seem to influence variability, possibly because the loads were quite similar. (C) 2007 Elsevier Ltd. All rights reserved.
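
For reference, a minimal sketch (not the authors' code) of how ICC(2,1) and the SEM are typically computed from a subjects-by-sessions matrix of normalized RMS values; the toy data below are made up.

```python
import numpy as np

def icc_2_1(x):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1)."""
    x = np.asarray(x, dtype=float)   # rows = subjects, columns = sessions/days
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical normalized RMS values (% MVC) for 5 subjects on 2 days.
rms = np.array([[42.0, 45.0],
                [55.0, 53.0],
                [38.0, 41.0],
                [60.0, 62.0],
                [47.0, 44.0]])

icc = icc_2_1(rms)
sem = rms.std(ddof=1) * np.sqrt(1.0 - icc)   # SEM = SD * sqrt(1 - ICC)
print(f"ICC(2,1) = {icc:.2f}, SEM = {sem:.2f} %MVC")
```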

Relevance:

80.00%

Publisher:

Abstract:

In this work we investigate the population dynamics of cooperative hunting, extending the McCann and Yodzis model for a three-species food chain with a predator, a prey, and a resource species. The new model considers that a given fraction $\sigma$ of the predators cooperates in hunting the prey, while the remaining fraction $1 - \sigma$ hunts without cooperation. We use the theory of symbolic dynamics to study the topological entropy and the parameter-space ordering of the kneading sequences associated with one-dimensional maps that reproduce significant aspects of the dynamics of the species under several degrees of cooperative hunting. Our model also allows us to investigate the so-called deterministic extinction via chaotic crisis and transient chaos in the framework of cooperative hunting. The symbolic sequences allow us to identify a critical boundary in the parameter spaces $(K, C_0)$ and $(K, \sigma)$ which separates two scenarios: (i) coexistence of all species and (ii) extinction of the predator via chaotic crisis. We show that the crisis value of the carrying capacity $K_c$ decreases with increasing $\sigma$, indicating that predator populations with a high degree of cooperative hunting are more sensitive to chaotic crises. We also show that the control method of Dhamala and Lai [Phys. Rev. E 59, 1646 (1999)] can sustain the chaotic behavior after the crisis for systems with cooperative hunting. We finally analyze and quantify the inner structure of the target regions obtained with this control method for a wider range of parameter values beyond the crisis, showing a power-law dependence of the extinction transients on such critical parameters.
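
The baseline McCann-Yodzis resource-consumer-predator equations (the model the abstract extends) can be integrated directly; a hedged sketch is below. The parameter values are the commonly used nondimensionalized ones with an illustrative carrying capacity K, and the cooperating fraction $\sigma$ is only indicated as a comment because the paper's specific cooperation term is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Baseline McCann-Yodzis three-species food chain (resource R, consumer C, predator P).
# Commonly used nondimensional parameters; K and C0 are the quantities varied in the abstract.
xc, yc, xp, yp = 0.4, 2.009, 0.08, 2.876
R0, C0, K = 0.16129, 0.5, 0.99
# sigma (fraction of cooperating predators) would modify the P-C predation term;
# that modification is specific to the paper and is not implemented here.

def food_chain(t, u):
    R, C, P = u
    dR = R * (1.0 - R / K) - xc * yc * C * R / (R + R0)
    dC = xc * C * (yc * R / (R + R0) - 1.0) - xp * yp * P * C / (C + C0)
    dP = xp * P * (yp * C / (C + C0) - 1.0)
    return [dR, dC, dP]

sol = solve_ivp(food_chain, (0.0, 5000.0), [0.7, 0.2, 0.8], max_step=1.0)
print("final state (R, C, P):", sol.y[:, -1])
```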

Relevance:

80.00%

Publisher:

Abstract:

We produce five flavour models for the lepton sector. All five models fit perfectly well - at the 1 sigma level - the existing data on the neutrino mass-squared differences and on the lepton mixing angles. The models are based on the type I seesaw mechanism, on a $Z_2$ symmetry for each lepton flavour, and either on a (spontaneously broken) symmetry under the interchange of two lepton flavours or on a (spontaneously broken) CP symmetry incorporating that interchange - or on both symmetries simultaneously. Each model makes definite predictions both for the scale of the neutrino masses and for the phase $\delta$ in lepton mixing; the fifth model also predicts a correlation between the lepton mixing angles $\theta_{12}$ and $\theta_{23}$.

Relevance:

80.00%

Publisher:

Abstract:

We analyse the possibility that, in two Higgs doublet models, one or more of the Higgs couplings to fermions or to gauge bosons change sign relative to the corresponding Higgs Standard Model couplings. Possible sign changes in the coupling of a neutral scalar to charged scalars are also discussed. These wrong signs can have important physical consequences, manifesting themselves in Higgs production via gluon fusion or in Higgs decay into two gluons or two photons. We consider all possible wrong-sign scenarios, as well as the symmetric limit, in all possible Yukawa implementations of the two Higgs doublet model, for two possibilities: the observed Higgs boson is either the lightest or the heaviest CP-even scalar. We also analyse thoroughly the impact of the currently available LHC data on such scenarios. With all 8 TeV data analysed, all wrong-sign scenarios are allowed in all Yukawa types, even at the 1 sigma level. However, we show that B-physics constraints are crucial in excluding the possibility of wrong-sign scenarios when $\tan\beta$ is below 1. We also discuss the prospects for probing the wrong-sign scenarios at the next LHC run. Finally, we present a scenario in which the alignment limit could be excluded due to non-decoupling in the case where the heavy CP-even Higgs is the one discovered at the LHC.

Relevance:

80.00%

Publisher:

Abstract:

With the widespread everyday use of technology, localization systems have become increasingly popular, owing to the wide variety of features they provide and the applications they serve. However, most positioning systems do not work properly in indoor environments, which hinders the development of localization applications for these settings. Accelerometers are widely used in inertial localization systems because of the information they provide about the accelerations experienced by a body. In this work, based on the analysis of the acceleration signal from an accelerometer, we propose a step-detection technique that, in future applications, can serve as a resource for computing a user's position inside a building. Accordingly, the goal of this work is to contribute to the analysis and identification of the acceleration signal obtained at the foot, in order to determine the duration of a step and the number of steps taken. To achieve this goal, a set of 12 acceleration recordings (for normal walking, fast walking and running), collected by a mobile system from an accelerometer, was analyzed in Matlab. From this exploratory study it became possible to present a step-counting algorithm based on peak detection and on the use of median and low-pass Butterworth filters, which showed good results. To validate the information obtained at this stage, a set of experimental tests was then carried out on 33 newly collected walking and running recordings. The number of steps taken, the mean step and stride times, and the error percentage were the variables under study. An error of 1% was obtained for the full set of recordings of 20, 100, 500 and 1000 steps with the proposed step-counting method. Despite the difficulties observed in analyzing the acceleration signals for running, the proposed algorithm performed well, producing values close to those expected. The results obtained show that the objective of the study was successfully achieved. Further research is nevertheless suggested in order to extend these results in other directions.
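
A minimal sketch (in Python rather than the authors' Matlab, and not their exact algorithm) of the pipeline the abstract describes: median filtering, a low-pass Butterworth filter, and peak detection to count steps. The sampling rate, cutoff frequency and thresholds below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import medfilt, butter, filtfilt, find_peaks

def count_steps(accel_magnitude, fs=100.0, cutoff_hz=3.0, min_step_s=0.3):
    """Count steps in an acceleration-magnitude signal (m/s^2) sampled at fs Hz."""
    # 1) Median filter to suppress spikes.
    smooth = medfilt(accel_magnitude, kernel_size=5)
    # 2) Low-pass Butterworth filter to keep the gait frequency band.
    b, a = butter(N=4, Wn=cutoff_hz / (fs / 2.0), btype="low")
    low = filtfilt(b, a, smooth)
    # 3) Peak detection: peaks above the mean, separated by at least min_step_s.
    peaks, _ = find_peaks(low, height=low.mean(), distance=int(min_step_s * fs))
    step_times = np.diff(peaks) / fs          # durations between consecutive steps (s)
    return len(peaks), step_times

# Synthetic walking-like signal: roughly 2 steps per second for 10 s, plus noise.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
signal = 9.81 + 2.0 * np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.randn(t.size)
n_steps, durations = count_steps(signal, fs=fs)
print(f"steps detected: {n_steps}, mean step duration: {durations.mean():.2f} s")
```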

Relevance:

80.00%

Publisher:

Abstract:

Major and trace element compositions, stable H and O isotope compositions, and Fe³⁺ contents of amphibole megacrysts from Pliocene-Pleistocene alkaline basalts have been investigated to obtain information on the origin of mantle fluids beneath the Carpathian-Pannonian region. The megacrysts are regarded as igneous cumulates formed in the mantle and brought to the surface by the basaltic magma. The studied amphiboles have oxygen isotope compositions of 5.4 ± 0.2‰ (1 sigma), supporting their primary mantle origin. Even within the small δ¹⁸O variation observed, correlations with major and trace elements are detected. The negative δ¹⁸O-MgO and the positive δ¹⁸O-(La/Sm)_N correlations are interpreted to have resulted from varying degrees of partial melting. The halogen (F, Cl) contents are very low (< 0.1 wt.%); however, a firm negative (F+Cl)-MgO correlation (R² = 0.84) can be related to the Mg-Cl avoidance in the amphibole structure. The relationships between the water contents, H isotope compositions and Fe³⁺ contents of the amphibole megacrysts reveal degassing. Selected undegassed amphibole megacrysts show a wide δD range from -80 to -20 parts per thousand. The low δD values are characteristic of the normal mantle, whereas the high δD values may indicate the influence of fluids released from subducted oceanic crust. The chemical and isotopic evidence collectively suggests that the formation of the amphibole megacrysts is related to fluid metasomatism, whereas direct melt addition is insignificant.

Relevance:

80.00%

Publisher:

Abstract:

Swain corrects the chi-square overidentification test (i.e., the likelihood ratio test of fit) for structural equation models, with or without latent variables. The chi-square statistic is asymptotically correct; however, it does not behave as expected in small samples and/or when the model is complex (cf. Herzog, Boomsma, & Reinecke, 2007). Thus, particularly in situations where the ratio of sample size (n) to the number of parameters estimated (p) is relatively small (i.e., the p to n ratio is large), the chi-square test will tend to over-reject correctly specified models. To obtain a closer approximation to the distribution of the chi-square statistic, Swain (1975) developed a correction: a scaling factor, which converges to 1 asymptotically, is multiplied with the chi-square statistic. The correction better approximates the chi-square distribution, resulting in more appropriate Type I error (rejection) rates (see Herzog & Boomsma, 2009; Herzog et al., 2007).

Relevance:

80.00%

Publisher:

Abstract:

Whole-body counting is a technique of choice for assessing the intake of gamma-emitting radionuclides. An appropriate calibration is necessary, which is done either by experimental measurement or by Monte Carlo (MC) calculation. The aim of this work was to validate a MC model for calibrating whole-body counters (WBCs) by comparing the results of computations with measurements performed on an anthropomorphic phantom, and to investigate the effect of a change in the phantom's position on the WBC counting sensitivity. The GEANT MC code was used for the calculations, and an IGOR phantom loaded with several types of radionuclides was used for the experimental measurements. The results show reasonable agreement between the measurements and the MC computation. A 1-cm error in phantom positioning changes the activity estimate by >2%. Considering that a 5-cm deviation in the positioning of the phantom may occur in a realistic counting scenario, this implies that the uncertainty of the activity measured by a WBC is ∼10-20%.

Relevance:

80.00%

Publisher:

Abstract:

This study uses several measures derived from the error matrix to compare two thematic maps generated with the same sample set. The reference map was generated with all the sample elements, and the map taken as the model was generated without the two points detected as influential by local influence diagnostics. The data analyzed refer to wheat productivity in an agricultural area of 13.55 ha, considering a sampling grid of 50 x 50 m comprising 50 georeferenced sample elements. The comparison measures derived from the error matrix indicated that, despite some similarity, the maps are different. The difference between the production estimated from the reference map and the actual production was 350 kilograms; the same difference calculated with the model map was 50 kilograms. This indicates that the study of influential points is of fundamental importance for obtaining a more reliable estimate, and that measures obtained from the error matrix are a good option for comparing thematic maps.
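
A hedged sketch of the kind of error-matrix (confusion-matrix) measures used to compare two co-registered thematic maps - overall agreement and Cohen's kappa; the paper may use additional measures, and the class labels and grids below are illustrative.

```python
import numpy as np

def error_matrix_measures(map_a, map_b, classes):
    """Error matrix, overall agreement and Cohen's kappa for two thematic maps."""
    a = np.asarray(map_a).ravel()
    b = np.asarray(map_b).ravel()
    k = len(classes)
    m = np.zeros((k, k))
    for i, ci in enumerate(classes):
        for j, cj in enumerate(classes):
            m[i, j] = np.sum((a == ci) & (b == cj))
    n = m.sum()
    p_o = np.trace(m) / n                                   # overall agreement
    p_e = np.sum(m.sum(axis=0) * m.sum(axis=1)) / n ** 2    # chance agreement
    kappa = (p_o - p_e) / (1.0 - p_e)
    return m, p_o, kappa

# Illustrative productivity classes (low/medium/high = 0/1/2) on a small grid.
ref   = np.array([[0, 1, 1], [2, 2, 1], [0, 0, 2]])
model = np.array([[0, 1, 2], [2, 2, 1], [0, 1, 2]])
m, acc, kappa = error_matrix_measures(ref, model, classes=[0, 1, 2])
print(m, f"overall agreement = {acc:.2f}, kappa = {kappa:.2f}", sep="\n")
```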

Relevance:

80.00%

Publisher:

Abstract:

A fully numerical two-dimensional solution of the Schrödinger equation is presented for the linear polyatomic molecule H$_3^{2+}$ using the finite element method (FEM). The Coulomb singularities at the nuclei are rectified by using both a condensed element distribution around the singularities and special elements. The accuracy of the results for the $1\sigma$ and $2\sigma$ orbitals is of the order of $10^{-7}$ au.
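
As a much-reduced illustration of the technique (a 1D harmonic oscillator with linear elements, not the paper's 2D H$_3^{2+}$ calculation), the finite element method turns the Schrödinger equation into a generalized eigenvalue problem built from stiffness and mass matrices:

```python
import numpy as np
from scipy.linalg import eigh

# 1D FEM toy: -1/2 psi'' + V psi = E psi with linear (hat) elements and
# V(x) = x^2/2 on [-5, 5], Dirichlet boundaries.  Exact eigenvalues: 0.5, 1.5, 2.5, ...
n_el = 400
x = np.linspace(-5.0, 5.0, n_el + 1)
h = x[1] - x[0]

n = n_el + 1
H = np.zeros((n, n))                     # Hamiltonian (kinetic + potential) matrix
M = np.zeros((n, n))                     # mass (overlap) matrix
k_loc = 0.5 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])   # 1/2 * grad-grad element term
m_loc = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])     # element overlap term
for e in range(n_el):
    v_mid = 0.5 * (0.5 * (x[e] + x[e + 1])) ** 2          # midpoint value of V(x) = x^2/2
    sl = slice(e, e + 2)
    H[sl, sl] += k_loc + v_mid * m_loc
    M[sl, sl] += m_loc

# Dirichlet boundary conditions: drop the first and last node, then solve H c = E M c.
E = eigh(H[1:-1, 1:-1], M[1:-1, 1:-1], eigvals_only=True)
print("lowest FEM eigenvalues:", E[:3])   # close to 0.5, 1.5, 2.5
```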

Relevance:

80.00%

Publisher:

Abstract:

In the present research, we investigated the effects of existential threat on veracity judgments. According to several meta-analyses, people tend to judge potentially deceptive messages from other people as true rather than false (the so-called truth bias). This judgmental bias has been shown to depend on how people weight the error of judging a true message as a lie (error 1) against the error of judging a lie as a true message (error 2). The weighting of these errors has further been shown to be affected by situational variables. Given that research on terror management theory has found evidence that mortality salience (MS) increases sensitivity to compliance with cultural norms, especially when they are the focus of attention, we assumed that when the honesty norm is activated, MS affects judgmental error weighting and, consequently, judgmental biases. Specifically, activating the norm of honesty should decrease the weight of error 1 (judging a true message as a lie) and increase the weight of error 2 (judging a lie as a true message) when mortality is salient. In a first study, we found initial evidence for this assumption. Furthermore, the change in error weighting should reduce the truth bias, automatically resulting in better detection accuracy for actual lies and worse accuracy for actual true statements. In two further studies, we manipulated MS and honesty-norm activation before participants judged several videos containing actual truths or lies. The results supported our prediction. Moreover, in Study 3, the truth bias was increased after MS when group solidarity was previously emphasized.

Relevance:

80.00%

Publisher:

Abstract:

The aim was to carry out a specific linguistic characterization of dyslexia in the Spanish language, focusing on reading errors. The sample comprised 148 pupils, 73 boys and 75 girls; 76 attended state schools and 72 non-state schools. This is a study of dyslexia understood as the main reading problem in Spanish. The work begins by presenting definitions of dyslexia according to various authors. The causes of dyslexia are then presented, followed by the types of reading errors. Next, the TALE (reading and writing analysis test) is described, with emphasis on the reading subtest and, within it, the word-reading task. Another section is devoted to describing the sample of poor readers in the fourth year of EGB. Tables are then presented with the data on the errors detected in reading, together with comments on the results obtained. 1) The most frequent error is hesitation, followed by repetition. 2) The least frequent errors are rotation and inversion. 3) There are no notable differences between boys and girls. 4) In the word 'rastapi' all types of errors are present, with repetition errors predominating. 5) For rotation errors, boys and girls make the same mistakes. For omission and inversion errors, boys exceed girls; for the other types of errors, girls exceed boys. 6) The highest error percentages correspond to the pseudoword (logatome) items.