939 results for measurement error models
Abstract:
The study of the large-sample distribution of the canonical correlations and variates in cointegrated models is extended from the first-order autoregressive model to autoregression of any (finite) order. The cointegrated process considered here is nonstationary in some directions and stationary in others, but the first difference (the "error-correction form") is stationary. The asymptotic distribution of the canonical correlations between the first differences and the predictor variables, as well as of the corresponding canonical variables, is obtained under the assumption that the process is Gaussian. The method of analysis is similar to that used for the first-order process.
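A minimal sketch of the objects the abstract studies, under assumed toy data: in the Johansen procedure (here via statsmodels), the eigenvalues are exactly the squared canonical correlations between the first differences and the lagged levels, after partialling out lagged first differences.

```python
# Sketch: canonical correlations in a cointegrated VAR (assumed toy data).
# The Johansen eigenvalues are the squared canonical correlations between
# the first differences and the lagged levels -- the quantities whose
# large-sample distribution the paper extends to higher-order AR models.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(0)
T = 1000
trend = np.cumsum(rng.normal(size=T))      # common stochastic trend
spread = np.zeros(T)
for t in range(1, T):                      # stationary AR(1) spread
    spread[t] = 0.5 * spread[t - 1] + rng.normal()
y = np.column_stack([trend + spread, trend - spread])  # one cointegrating relation

res = coint_johansen(y, det_order=0, k_ar_diff=1)      # VAR(2) in levels
print("squared canonical correlations:", res.eig)
print("trace statistics:", res.lr1)
```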
Abstract:
In Monte Carlo simulations of both lattice field theories and models of statistical mechanics, identities satisfied by exact mean values, such as Schwinger-Dyson equations, Guerra relations, Callen identities, etc., provide well-known and sensitive tests of thermalization bias as well as checks of pseudo-random-number generators. We point out that they can be further exploited as control variates to reduce statistical errors. The strategy is general, very simple, and almost costless in CPU time. The method is demonstrated in the two-dimensional Ising model at criticality, where the CPU gain factor lies between 2 and 4.
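The variance-reduction strategy can be illustrated outside the lattice setting. A minimal sketch, assuming a toy target instead of an Ising observable: any statistic g with an exactly known mean (here E[X] = 0 under a standard normal, standing in for a Schwinger-Dyson-type identity) serves as a control variate, with the optimal coefficient estimated from the same sample.

```python
# Sketch: exact identities as control variates (toy stand-in for the
# paper's lattice setting). g(x) = x has exactly known mean 0 under
# N(0,1), playing the role of an exact identity; it reduces the variance
# of the naive estimator of E[f(X)] at negligible extra cost.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)

f = np.exp(x)    # target observable, true mean e^{1/2}
g = x            # control variate with exactly known mean 0

c = np.cov(f, g)[0, 1] / np.var(g)   # optimal coefficient Cov(f,g)/Var(g)
f_cv = f - c * (g - 0.0)             # subtract the centred control variate

n = len(f)
print("naive estimate:  ", f.mean(), "+/-", f.std(ddof=1) / n ** 0.5)
print("control variate: ", f_cv.mean(), "+/-", f_cv.std(ddof=1) / n ** 0.5)
```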
Abstract:
Although it may sound reasonable that American education continues to be effective at sending high school students to college, a 2009 study by the Council of the Great City Schools states that "slightly more than half of entering ninth grade students arrive performing below grade level in reading and math, while one in five entering ninth grade students is more than two years behind grade level...[and] 25% received support in the form of remedial literacy instruction or interventions" (Council of the Great City Schools, 2009). Students are distracted by technology (Lei & Zhao, 2005), family (Xu & Corno, 2003), medical illnesses (Nielson, 2009), learning disabilities and, perhaps most detrimental to academic success, a lack of interest in school (Ruch, 1963). In a Johns Hopkins research study, Building a Graduation Nation - Colorado (Balfanz, 2008), warning signs were apparent years before students dropped out of high school, and the ninth grade was often identified as a critical point indicating success or failure to graduate. The Johns Hopkins research illustrates the problem: students who become disengaged from school have a much greater chance of dropping out of high school and not graduating. The first purpose of this study was to compare different measurement models of the Student School Engagement (SSE) instrument using factor analysis to verify model fit. The second purpose was to determine the extent to which the SSE measures student school engagement by investigating convergent validity (via Appleton, Christenson, Kim, and Reschly's instrument and Fredricks, Blumenfeld, Friedel, and Paris's instrument), discriminant validity (via Huebner's Student Life Satisfaction Survey, SLSS), and criterion-related validity (via the sub-latent variables of Aspirations, Belonging, and Productivity and student outcome measures such as achievement, attendance, and discipline). Convergent validity was established between the SSE and Appleton, Christenson, Kim, and Reschly's model and Fredricks, Blumenfeld, Friedel, and Paris's (2005) Student Engagement Instrument (SEI). The SSE's correlations with the SLSS were weak and not statistically significant, thus establishing discriminant validity. Criterion-related validity was established through structural equation modeling: the SSE was found to be a significant predictor of student outcome measures when both risk scores and CSAP scores were used. The third purpose of this study was to assess the factorial invariance of the SSE instrument across gender to ensure the instrument measures the intended construct across different groups. Configural, weak (metric), and strict invariance were established for the SSE, with a non-significant change in chi-square indicating that all parameters, including the error variances, were invariant across gender groups. Engagement is not a clearly defined psychological construct; it requires more research in order to fully comprehend its complexity. Hopefully, with parental and teacher involvement and a sense of community, student engagement can be nurtured to result in a meaningful attachment to school and academic success.
Abstract:
Introduction: The prevalence of chronic diseases, especially among the older population, creates a need for longitudinal models of care. Individuals are increasingly being made responsible for managing their own health through monitoring devices such as glucometers and blood pressure monitors, a new reality that culminates in decision-making within the home. Objectives: To identify the decisions older adults make when monitoring chronic conditions at home; to identify whether sex, education, and income influence decision-making; to identify older adults' perceptions of home-care actions; and to identify the difficulties and strategies involved in handling monitoring devices. Materials and methods: A quantitative, exploratory, cross-sectional study. Sample: 150 subjects aged 60 years or older, without cognitive impairment or depression, who use a glucometer and/or a blood pressure monitor at home. Instruments for participant selection: (1) Mini-Mental State Examination; (2) Geriatric Depression Scale; and (3) Lawton and Brody Instrumental Activities of Daily Living Scale. Data collection: carried out in the city of Ribeirão Preto, SP, between September 2014 and October 2015. Instruments: (1) a socioeconomic questionnaire; (2) a questionnaire on decision-making in home health monitoring; and (3) a classification of the use of electronic health-care devices. Data analysis: Descriptive statistics, with absolute and percentage counts, were used to identify the relationship between decision-making and sex, education, and income. Results: 150 older adults participated, 117 women and 33 men, with a mean age of 72 years. Of these, 113 are hypertensive and 62 are diabetic. Regarding immediate decision-making, most users of both the blood pressure monitor (n=128) and the glucometer (n=62) report seeking medical help first, followed by taking the prescribed medication and by alternative treatment options. In the medium term, seeking professional help predominates in both groups. A small difference in decision-making by sex was noted. Regarding education, older adults with more years of schooling tend to seek health services more often than those with less schooling. Income showed no influence among glucometer users, whereas among blood pressure monitor users those with higher incomes tend to seek health services more often. Most participants view home health monitoring positively, mainly for the convenience of not leaving home, the rapid results, and the possibility of continuous disease control. The main difficulties in handling the glucometer relate to using the lancet and test strip, followed by checking stored results; difficulties with the blood pressure monitor relate to reading the result after each measurement and to correct body positioning during monitoring. In both groups, the strategies used are asking others for help and trial and error. Conclusion: Older adults have proved receptive to home health monitoring. In general, they initially decide on actions within the home to control symptoms, which reinforces the need to invest in quality information and health education so that home management can become part of comprehensive care in the treatment of chronic conditions.
Abstract:
Tide gauge (TG) data along the northern Mediterranean and Black Sea coasts are compared to the sea-surface height (SSH) anomaly obtained from ocean altimetry (TOPEX/Poseidon and ERS-1/2) for a period of nine years (1993–2001). The TG measures SSH relative to the ground, whereas altimetry does so with respect to a geocentric reference frame; their difference is therefore, in principle, the vertical ground motion at the TG site, although this estimate is subject to several error sources discussed in the paper. In this study we estimate such vertical ground motion, for each TG site, from the slope of the (non-seasonal) difference time series between the TG record and the altimetry measurement at the point closest to the TG. Where possible, these estimates are further compared with those derived from nearby continuous Global Positioning System (GPS) data series. These results on vertical ground motion along the Mediterranean and Black Sea coasts provide useful source data for studying, contrasting, and constraining tectonic models of the region. For example, on the eastern coast of the Adriatic Sea and the western coast of Greece, a general subsidence is observed that may be related to the Adriatic lithosphere subducting beneath the Eurasian plate along the Dinarides fault.
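A minimal sketch of the core estimate, assuming hypothetical co-located, already de-seasoned monthly TG and altimetry series: the vertical ground motion is taken as the least-squares slope of their difference.

```python
# Sketch: vertical ground motion as the trend of the TG-minus-altimetry
# difference series (hypothetical, de-seasoned monthly data). A rising
# ground-relative TG record with flat geocentric SSH implies subsidence.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(108) / 12.0                      # 9 years, monthly, in years
true_vlm = -2.0                                # mm/yr (assumed subsidence)
common = np.cumsum(rng.normal(0, 3, t.size))   # shared oceanographic signal (mm)

tg = common - true_vlm * t + rng.normal(0, 5, t.size)  # relative to the ground
alt = common + rng.normal(0, 5, t.size)                # geocentric frame

diff = tg - alt                                # = -ground motion + noise
slope, intercept = np.polyfit(t, diff, 1)
print(f"estimated vertical ground motion: {-slope:.2f} mm/yr")
```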
Abstract:
Three sets of laboratory column experiments concerning the hydrogeochemistry of seawater intrusion have been modelled using two codes: ACUAINTRUSION (Chemical Engineering Department, University of Alicante) and PHREEQC (U.S.G.S.). These reactive models use the hydrodynamic parameters determined with the ACUAINTRUSION TRANSPORT software and fit the chloride breakthrough curves perfectly. The ACUAINTRUSION code was improved, and its instabilities were studied with respect to the discretisation. Relative square errors were obtained for different combinations of spatial and temporal steps: a global error for the full experimental data set and a partial error for each element. Good simulations of the three experiments were obtained with ACUAINTRUSION using slight variations of the selectivity coefficients determined for both sediments in batch experiments with fresh water. The cation-exchange parameters in ACUAINTRUSION follow the Gapon convention, with modified exponents for the Ca/Mg exchange. PHREEQC simulations performed using the Gaines-Thomas convention were unsatisfactory when the exchange coefficients were taken from the PHREEQC database (or its range), and those determined with fresh water and natural sediment allowed only an approximation to be obtained. For the treated sediment, adjusted exchange coefficients were determined to improve the simulation; these differ greatly from the PHREEQC database and batch-experiment values, but they are of an order similar to others determined under dynamic conditions. The two codes simulated different cation concentrations, a disparity attributable to the defined selectivity coefficients, which affect the gypsum equilibrium; consequently, each code calculates different sulphate concentrations, with ACUAINTRUSION predicting the smaller mismatch. In general, the ACUAINTRUSION and PHREEQC simulations produced similar results, making predictions consistent with the experimental data. The simulated results are nevertheless not identical to the experimental data: sulphate (total S) is overpredicted by both models, most likely owing to factors such as gypsum kinetics, possible variation of the exchange coefficients with salinity, and the neglect of other processes.
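A minimal sketch of the error measures described, under assumed names and normalisation: a partial relative square error per element and a global error over all experimental data, for one spatial/temporal discretisation.

```python
# Sketch: partial (per-element) and global relative square errors between
# simulated and measured breakthrough curves, as used to study the effect
# of discretisation. Arrays and the normalisation are assumptions.
import numpy as np

def relative_square_error(simulated, measured):
    """Sum of squared residuals normalised by the sum of squared data."""
    simulated, measured = np.asarray(simulated), np.asarray(measured)
    return np.sum((simulated - measured) ** 2) / np.sum(measured ** 2)

# One simulated/measured curve per element (placeholder values).
curves = {
    "Cl": (np.array([1.0, 5.0, 9.8]), np.array([1.1, 5.2, 9.5])),
    "Na": (np.array([2.0, 4.0, 6.0]), np.array([2.2, 3.7, 6.1])),
    "Ca": (np.array([0.5, 1.5, 2.5]), np.array([0.4, 1.6, 2.4])),
}

partial = {el: relative_square_error(s, m) for el, (s, m) in curves.items()}
global_err = relative_square_error(
    np.concatenate([s for s, _ in curves.values()]),
    np.concatenate([m for _, m in curves.values()]),
)
print("partial errors:", partial, " global error:", global_err)
```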
Abstract:
Purpose. To evaluate theoretically, in normal eyes, the influence on IOL power (PIOL) calculation of the use of a keratometric index (nk), and to analyze and preliminarily validate the use of an adjusted keratometric index (nkadj) in IOL power calculation (PIOLadj). Methods. A model of variable keratometric index (nkadj) for corneal power calculation (Pc) was used for IOL power calculation (PIOLadj). Theoretical differences (ΔPIOL) between the new proposed formula (PIOLadj) and that obtained through Gaussian optics (PIOLGauss) were determined using the Gullstrand and Le Grand eye models. The proposed new formula for IOL power calculation (PIOLadj) was prevalidated clinically in 81 eyes of 81 candidates for corneal refractive surgery and compared with the Haigis, HofferQ, Holladay, and SRK/T formulas. Results. A theoretical PIOL underestimation greater than 0.5 diopters was present in most cases when nk = 1.3375 was used. If nkadj was used for Pc calculation, a maximal calculated error in ΔPIOL of ±0.5 diopters at the corneal vertex was observed in most cases, independently of the eye model, r1c, and the desired postoperative refraction. The use of nkadj in IOL power calculation (PIOLadj) could be valid with effective lens position optimization that does not depend on the corneal power. Conclusions. The use of a single nk value for Pc calculation can lead to significant errors in PIOL calculation that may explain some IOL power overestimations with conventional formulas. These inaccuracies can be minimized by using the new PIOLadj based on the nkadj algorithm.
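The underlying discrepancy can be reproduced with textbook optics. A minimal sketch, assuming the standard single-index keratometric formula and the Gaussian thick-lens expression with Gullstrand-type indices and radii (illustrative values, not the paper's data):

```python
# Sketch: single-keratometric-index corneal power vs. Gaussian thick-lens
# power. Indices, radii and thickness are illustrative textbook values.
n_air, n_cornea, n_aqueous = 1.000, 1.376, 1.336
nk = 1.3375                       # classical keratometric index

def keratometric_power(r1c_mm, nk=nk):
    """P = (nk - 1) / r1c, anterior radius converted to metres."""
    return (nk - 1.0) / (r1c_mm * 1e-3)

def gaussian_power(r1c_mm, r2c_mm, thickness_mm=0.55):
    """Thick-lens corneal power from both surfaces (Gaussian optics)."""
    p1 = (n_cornea - n_air) / (r1c_mm * 1e-3)      # anterior surface
    p2 = (n_aqueous - n_cornea) / (r2c_mm * 1e-3)  # posterior surface
    d = thickness_mm * 1e-3
    return p1 + p2 - (d / n_cornea) * p1 * p2

r1c, r2c = 7.70, 6.80             # Gullstrand-like radii (mm)
print(f"keratometric: {keratometric_power(r1c):.2f} D")   # ~43.8 D
print(f"Gaussian:     {gaussian_power(r1c, r2c):.2f} D")  # ~43.1 D
```

The single-index value overstates total corneal power here, which is the direction of IOL underestimation the abstract reports.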
Abstract:
Purpose: To calculate theoretically the errors in the estimation of corneal power when using the keratometric index (nk) in eyes that underwent laser refractive surgery for the correction of myopia and to define and validate clinically an algorithm for minimizing such errors. Methods: Differences between corneal power estimation by using the classical nk and by using the Gaussian equation in eyes that underwent laser myopic refractive surgery were simulated and evaluated theoretically. Additionally, an adjusted keratometric index (nkadj) model dependent on r1c was developed for minimizing these differences. The model was validated clinically by retrospectively using the data from 32 myopic eyes [range, −1.00 to −6.00 diopters (D)] that had undergone laser in situ keratomileusis using a solid-state laser platform. The agreement between Gaussian (PGaussc) and adjusted keratometric (Pkadj) corneal powers in such eyes was evaluated. Results: It was found that overestimations of corneal power up to 3.5 D were possible for nk = 1.3375 according to our simulations. The nk value to avoid the keratometric error ranged between 1.2984 and 1.3297. The following nkadj models were obtained: nkadj = −0.0064286 r1c + 1.37688 (Gullstrand eye model) and nkadj = −0.0063804 r1c + 1.37806 (Le Grand eye model). The mean difference between Pkadj and PGaussc was 0.00 D, with limits of agreement of −0.45 and +0.46 D. This difference correlated significantly with the posterior corneal radius (r = −0.94, P < 0.01). Conclusions: The use of a single nk for estimating the corneal power in eyes that underwent a laser myopic refractive surgery can lead to significant errors. These errors can be minimized by using a variable nk dependent on r1c.
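A minimal sketch applying the reported adjusted-index models; the keratometric power convention P = (nk − 1)/r1c (radius in metres) is assumed, as it is the standard one, and the radii below are illustrative:

```python
# Sketch: adjusted keratometric corneal power from the fitted nkadj models
# reported in the abstract (Gullstrand and Le Grand variants). The power
# convention P = (nk - 1)/r1c is the standard keratometric formula and is
# assumed here; r1c is the anterior corneal radius in mm.
NKADJ_MODELS = {
    "gullstrand": lambda r1c_mm: -0.0064286 * r1c_mm + 1.37688,
    "le_grand":   lambda r1c_mm: -0.0063804 * r1c_mm + 1.37806,
}

def adjusted_corneal_power(r1c_mm, model="gullstrand"):
    nkadj = NKADJ_MODELS[model](r1c_mm)
    return (nkadj - 1.0) / (r1c_mm * 1e-3)   # dioptres

for r1c in (7.2, 7.7, 8.2):                  # illustrative post-LASIK radii
    classical = (1.3375 - 1.0) / (r1c * 1e-3)
    print(f"r1c={r1c} mm: adjusted {adjusted_corneal_power(r1c):.2f} D, "
          f"classical nk=1.3375 {classical:.2f} D")
```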
Abstract:
During Legs 127 and 128, we found a systematic error in the index property measurements, in that the wet bulk density, grain density, and porosity did not satisfy well-established interrelationships. We found that an almost constant difference exists between the weight of water lost during drying and the volume of water lost. This discrepancy is independent of the volume or water content of the sample. The two water losses should be equal because the density of water is close to 1.0 g/cm³. The pycnometer wet-volume measurement has been identified as the source of the systematic error: the wet volume is on average 0.2 cm³ too low. In the rare cases when the water content is negligible, there is no offset. The wet-volume error results from the partial vapor pressure of water in the pycnometer cell. Newly corrected tables of the index properties measured during Legs 127 and 128 are included. The corrected index properties are internally consistent, and the data are in better agreement with theoretical models that relate the index properties to other physical properties, such as thermal conductivity and acoustic velocity. In the future, a standard volume sampler should be used, or the wet volume should be calculated from the dry volume and the water loss by weight.
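The recommended correction can be written down directly. A minimal sketch, assuming measured wet weight, dry weight, and dry volume, implementing the abstract's prescription that the wet volume be recomputed from the dry volume plus the weight of water lost (salt corrections and other refinements are omitted):

```python
# Sketch: recomputing index properties from the dry volume and the water
# loss by weight, as the abstract recommends, instead of trusting the
# biased pycnometer wet volume. Sample numbers are illustrative only.
RHO_WATER = 1.0   # g/cm^3, to the precision relevant here

def corrected_index_properties(w_wet, w_dry, v_dry):
    """Weights in g, dry volume in cm^3; returns (wet bulk density, porosity)."""
    v_water = (w_wet - w_dry) / RHO_WATER   # volume of water lost on drying
    v_wet = v_dry + v_water                 # corrected wet volume
    wet_bulk_density = w_wet / v_wet        # g/cm^3
    porosity = v_water / v_wet              # fractional
    return wet_bulk_density, porosity

rho, phi = corrected_index_properties(w_wet=25.0, w_dry=18.0, v_dry=8.0)
print(f"wet bulk density = {rho:.2f} g/cm^3, porosity = {phi:.2f}")
```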
Abstract:
Planktonic foraminiferal faunas of the southeast Pacific indicate that sea surface temperatures (SST) have varied by as much as 8–10°C in the Peru Current, and by ~5–7°C along the equator, over the past 150,000 years. Changes in SST at times such as the Last Glacial Maximum reflect incursion of the high-latitude species Globorotalia inflata and Neogloboquadrina pachyderma into the eastern boundary current and as far north as the equator. A simple heat budget model of the equatorial Pacific shows that observed changes in Peru Current advection can account for about half of the total variability in equatorial SSTs. The remaining changes in equatorial SST, which are likely related to local changes in upwelling or pycnocline depth, precede changes in polar climates as recorded by δ18O. This partitioning of processes in eastern equatorial Pacific SST reveals that net ice-age cooling here reflects first a rapid response of equatorial upwelling to insolation, followed by a later response to changes in the eastern boundary current associated with high-latitude climate (which closely resembles variations in atmospheric CO2 as recorded in the Vostok ice core). Although the precise mechanisms responsible for the equatorial upwelling component of climate change remain uncertain, one likely candidate that may operate independently of the ice sheets is insolation-driven change in El Niño/Southern Oscillation (ENSO) frequency. Early responses of equatorial SST detected both here and elsewhere highlight the sensitivity of tropical systems to small changes in seasonal insolation. The scale of the tropical changes we have observed is substantially greater than model predictions, suggesting a need for further quantitative assessment of the processes associated with long-term climate change.
Abstract:
There has been an abundance of literature on the modelling of hydrocyclones over the past 30 years. However, in the comminution area at least, the more popular commercially available packages (e.g. JKSimMet, Limn, MODSIM) use the models developed by Nageswararao and Plitt in the 1970s, either as published at that time, or with minor modification. With the benefit of 30 years of hindsight, this paper discusses the assumptions and approximations used in developing these models. Differences in model structure and the choice of dependent and independent variables are also considered. Redundancies are highlighted and an assessment made of the general applicability of each of the models, their limitations and the sources of error in their model predictions. This paper provides the latest version of the Nageswararao model based on the above analysis, in a form that can readily be implemented in any suitable programming language, or within a spreadsheet. The Plitt model is also presented in similar form.
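As an illustration of the model family discussed (not the full Nageswararao or Plitt flowsheet models, which also predict d50c, sharpness, and flow split from geometry and operating conditions), a minimal sketch of Plitt's corrected partition curve with water bypass handled in the conventional way; d50c, m, and Rf below are placeholder parameters:

```python
# Sketch: Plitt's corrected (reduced) efficiency curve for a hydrocyclone,
# combined with fines bypass in the conventional way. Parameter values are
# placeholders, not fitted.
import math

def corrected_recovery(d_um, d50c_um, m):
    """Plitt corrected efficiency: fraction of size d reporting to underflow."""
    return 1.0 - math.exp(-0.693 * (d_um / d50c_um) ** m)

def actual_recovery(d_um, d50c_um, m, rf):
    """Add bypass: fraction rf of water (and entrained fines) to underflow."""
    return rf + (1.0 - rf) * corrected_recovery(d_um, d50c_um, m)

d50c, m, rf = 45.0, 2.5, 0.15       # microns, sharpness, water recovery
for d in (10, 25, 45, 90, 180):
    print(f"{d:4d} um -> {actual_recovery(d, d50c, m, rf):.3f}")
```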
Abstract:
The use of presence/absence data in wildlife management and biological surveys is widespread, and there is growing interest in quantifying the sources of error associated with these data. Using simulated data, we show that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models. We then introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, that permits estimation of the rate of false-negative errors and correction of estimates of the probability of occurrence for false-negative errors by using repeated visits to the same site. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. With three repeated visits the method eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve the precision of estimates to levels comparable to those achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are ≤50%, greater efficiency is gained by adding more sites, whereas when error rates are >50% it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
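A minimal sketch of the ZIB likelihood with repeated visits, assuming constant occupancy ψ and detection probability p (in the full habitat model, covariates would enter ψ through a logistic link):

```python
# Sketch: zero-inflated binomial (ZIB) likelihood for detection histories.
# Site i is occupied with probability psi; if occupied, each of J visits
# detects the species with probability p. All-zero histories can arise from
# either true absence or J missed detections, which is what corrects the
# false-negative bias. Constant psi and p; no covariates.
import numpy as np
from scipy.optimize import minimize
from scipy.special import comb, expit

def neg_log_lik(theta, y, J):
    psi, p = expit(theta)                       # unconstrained -> (0, 1)
    binom = comb(J, y) * p ** y * (1 - p) ** (J - y)
    lik = psi * binom + (1 - psi) * (y == 0)    # zero inflation for y = 0
    return -np.sum(np.log(lik))

rng = np.random.default_rng(3)
J, n_sites, psi_true, p_true = 3, 200, 0.6, 0.4
occupied = rng.random(n_sites) < psi_true
y = np.where(occupied, rng.binomial(J, p_true, n_sites), 0)

fit = minimize(neg_log_lik, x0=np.zeros(2), args=(y, J))
print("psi_hat, p_hat =", expit(fit.x))
```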
Abstract:
We describe a scheme for quantum error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn, Phys. Rev. A 65, 042301 (2001)] is that it requires little side processing while remaining robust to measurement inefficiency, and it is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.
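For scale, a minimal Monte Carlo sketch of the standard three-qubit bit-flip code with idealised projective syndrome measurement, i.e. the conventional baseline that the weak-measurement/feedback scheme is an alternative to; this is explicitly not the authors' protocol:

```python
# Sketch: baseline three-qubit bit-flip code with idealised projective
# syndrome extraction and majority-vote correction -- the standard protocol
# the paper's weak-measurement/feedback scheme replaces, NOT the scheme
# itself. A logical error occurs when >= 2 of the 3 qubits flip, so
# p_logical = 3 p^2 (1 - p) + p^3 < p for small p.
import numpy as np

rng = np.random.default_rng(4)
p = 0.05                                  # per-qubit bit-flip probability
trials = 200_000

flips = rng.random((trials, 3)) < p       # independent X errors
logical_error = flips.sum(axis=1) >= 2    # majority vote fails

print("physical error rate:", p)
print("logical error rate: ", logical_error.mean())
print("analytic:           ", 3 * p**2 * (1 - p) + p**3)
```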
Abstract:
Pulse oximetry is commonly used to measure arterial blood oxygen saturation (SaO2). However, its other serial output, the photoplethysmography (PPG) signal, is not as well studied. Raw PPG signals can be used to estimate cardiovascular measures such as pulse transit time (PTT) and possibly heart rate (HR). These timing-related measurements depend heavily on minimal variability in the phase delay of the PPG signals. Masimo SET® Rad-9™ and Novametrix Oxypleth oximeters were investigated for their PPG phase characteristics in nine healthy adults. To facilitate comparison, PPG signals were acquired from fingers on the same hand in a random fashion. Results showed that the mean PTT variation acquired from the Masimo oximeter (37.89 ms) was much greater than that from the Novametrix (5.66 ms). Documented evidence suggests that a 1 ms variation in PTT is equivalent to a 1 mmHg change in blood pressure. Moreover, the PTT trend derived from the Masimo oximeter could be mistaken for obstructive sleep apnoeas based on the known criteria. HR comparison was evaluated against estimates obtained from an electrocardiogram (ECG): the Novametrix differed from the ECG by 0.71 +/- 0.58% (p < 0.05), while the Masimo differed by 4.51 +/- 3.66% (p > 0.05). Modern oximeters can be attractive for their improved SaO2 measurement. However, using raw PPG signals obtained directly from these oximeters for timing-related measurements warrants further investigation.
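A minimal sketch of the timing measurement at issue, assuming hypothetical, simultaneously sampled ECG and PPG arrays: PTT per beat as the delay from each ECG R-peak to the next PPG foot. Any phase delay added by the oximeter's internal filtering shifts every PTT value, which is why the phase behaviour compared in the paper matters.

```python
# Sketch: beat-by-beat pulse transit time (PTT) from ECG R-peaks to the
# subsequent PPG foot (onset). Signals are hypothetical arrays sampled at
# fs; oximeter-internal filtering delay adds directly to these estimates.
import numpy as np
from scipy.signal import find_peaks

def pulse_transit_times(ecg, ppg, fs):
    r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs))  # R-peaks >= 0.4 s apart
    feet, _ = find_peaks(-ppg, distance=int(0.4 * fs))    # PPG feet = local minima
    ptts = []
    for r in r_peaks:
        later = feet[feet > r]
        if later.size:                    # first foot after this R-peak
            ptts.append((later[0] - r) / fs)
    return np.array(ptts)                 # seconds

# Usage (placeholder signals): ptt = pulse_transit_times(ecg, ppg, fs=500)
# print(ptt.mean() * 1e3, "ms mean PTT;", ptt.std() * 1e3, "ms variability")
```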
Abstract:
The adsorption of two dissociating and two non-dissociating aromatic compounds from dilute aqueous solutions onto an untreated, commercially available activated carbon (B.D.H.) was investigated systematically. All adsorption experiments were carried out in pH-controlled aqueous solutions. The experimental isotherms were fitted to four different models (Langmuir homogeneous model, Langmuir binary model, Langmuir–Freundlich single model, and Langmuir–Freundlich double model). The variation of the model parameters with solution pH was studied and used to gain further insight into the adsorption process. The relationship between the model parameters, the solution pH, and the pKa was used to predict the adsorption capacity for the molecular and ionic forms of the solutes in other solutions. A relationship was sought to predict the effect of pH on the adsorption systems and to estimate the maximum adsorption capacity of the carbon at any pH at which the solute is reasonably well ionized. N2 and CO2 adsorption were used to characterize the carbon, and X-ray photoelectron spectroscopy (XPS) was used for surface elemental analysis of the activated carbon.
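A minimal sketch of one of the isotherm fits described, assuming hypothetical equilibrium data: the single-solute Langmuir model fitted by nonlinear least squares. In the study, each controlled-pH isotherm would be fitted separately and the parameters examined as functions of pH and pKa.

```python
# Sketch: fitting a single-solute Langmuir isotherm q = qm*K*C/(1 + K*C)
# to equilibrium data by nonlinear least squares. Data are hypothetical
# placeholders, not the paper's measurements.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qm, k):
    """q: amount adsorbed at equilibrium concentration c."""
    return qm * k * c / (1.0 + k * c)

c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # mmol/L (placeholder)
q_eq = np.array([0.8, 1.4, 2.1, 3.0, 3.5, 3.8])     # mmol/g (placeholder)

(qm, k), pcov = curve_fit(langmuir, c_eq, q_eq, p0=[4.0, 0.5])
perr = np.sqrt(np.diag(pcov))
print(f"qm = {qm:.2f} +/- {perr[0]:.2f} mmol/g, "
      f"K = {k:.3f} +/- {perr[1]:.3f} L/mmol")
```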