855 results for SYSTEMATIC-ERROR CORRECTION
Abstract:
The impact of systematic model errors on a coupled simulation of the Asian Summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the GCM. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the GCM, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Niño. In part this is related to changes in the characteristics of El Niño, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability.
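The flux-adjustment idea described above can be sketched schematically. The relaxation coefficient `gamma` and all numerical values below are invented for illustration; they are not taken from the GCM study.

```python
# Schematic of an ocean-surface heat-flux adjustment: a correction term,
# diagnosed from the model's SST bias against an observed climatology, is
# added to the surface heat flux so that the coupled model's mean state is
# pulled back toward observations. gamma and all numbers are illustrative.

def flux_adjusted(model_flux, model_sst_clim, obs_sst_clim, gamma=40.0):
    """gamma: W m^-2 K^-1, strength of the diagnosed flux correction."""
    correction = -gamma * (model_sst_clim - obs_sst_clim)  # opposes the bias
    return model_flux + correction

# a warm bias of +1.5 K yields a cooling correction of -60 W m^-2
adj = flux_adjusted(model_flux=10.0, model_sst_clim=301.5, obs_sst_clim=300.0)
```

The correction is fixed once diagnosed, so interannual variability (the quantity of interest) is not damped the way a continuously relaxed SST would be.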
Abstract:
We show that retrievals of sea surface temperature from satellite infrared imagery are prone to two forms of systematic error: prior error (familiar from the theory of atmospheric sounding) and error arising from nonlinearity. These errors have different, complex geographical variations, related to the differing geographical distributions of the main geophysical variables that determine clear-sky brightness temperatures over the oceans. We show that such errors arise as an intrinsic consequence of the form of the retrieval (rather than as a consequence of sub-optimally specified retrieval coefficients, as is often assumed) and that the pattern of observed errors can be simulated in detail using radiative-transfer modelling. The prior error has the linear form familiar from atmospheric sounding. A quadratic equation for nonlinearity error is derived, and it is verified that the nonlinearity error exhibits predominantly quadratic behaviour in this case.
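The two error forms can be illustrated with a toy split-window retrieval. The coefficients `a0..a2`, `b1`, `c2` and the choice of variables are hypothetical stand-ins, not the paper's fitted values:

```python
# Toy illustration of the two systematic-error forms in an SST retrieval:
# a linear "prior" term (as in atmospheric-sounding theory) plus a
# predominantly quadratic nonlinearity term. All coefficients are invented.

def retrieved_sst(t11, t12, a0=-1.0, a1=3.0, a2=-2.0):
    """Hypothetical split-window retrieval: SST = a0 + a1*T11 + a2*T12."""
    return a0 + a1 * t11 + a2 * t12

def retrieval_error(true_sst, prior_sst, dt, b1=0.3, c2=0.05):
    """Error model: linear in the prior departure, quadratic in a
    geophysical variable dt (e.g. a water-vapour proxy)."""
    prior_error = b1 * (prior_sst - true_sst)   # linear prior error
    nonlinearity_error = c2 * dt ** 2           # quadratic nonlinearity error
    return prior_error + nonlinearity_error

sst = retrieved_sst(290.0, 289.0)
err = retrieval_error(true_sst=290.0, prior_sst=292.0, dt=3.0)
```

Because the two terms depend on different geophysical fields, their geographical patterns differ, which is the point the abstract makes.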
Abstract:
This paper examines the lead–lag relationship between the FTSE 100 index and index futures price employing a number of time series models. Using 10-min observations from June 1996–1997, it is found that lagged changes in the futures price can help to predict changes in the spot price. The best forecasting model is of the error correction type, allowing for the theoretical difference between spot and futures prices according to the cost of carry relationship. This predictive ability is in turn utilised to derive a trading strategy which is tested under real-world conditions to search for systematic profitable trading opportunities. It is revealed that although the model forecasts produced significantly higher returns than a passive benchmark, the model was unable to outperform the benchmark after allowing for transaction costs.
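An error-correction model of the kind the abstract selects can be sketched as follows. The data are synthetic and the specification (one futures lag, one basis lag) is a minimal stand-in for the models actually estimated:

```python
import numpy as np

# Minimal error-correction model (ECM) sketch: spot-price changes are
# regressed on lagged futures changes and the lagged spot-futures basis,
# which stands in for the cost-of-carry deviation. Data are synthetic.

rng = np.random.default_rng(0)
n = 500
futures = np.cumsum(rng.normal(size=n))          # random-walk futures price
spot = futures + rng.normal(scale=0.5, size=n)   # spot cointegrated with futures

d_spot = np.diff(spot)[1:]                 # dependent variable: spot changes
d_fut_lag = np.diff(futures)[:-1]          # lagged futures changes
basis_lag = (spot - futures)[1:-1]         # lagged error-correction term

X = np.column_stack([np.ones_like(d_spot), d_fut_lag, basis_lag])
alpha, beta, gamma = np.linalg.lstsq(X, d_spot, rcond=None)[0]
# gamma should be negative: the spot price corrects back toward the
# futures-implied fair value, which is the source of the predictability
```

The abstract's punchline survives even in this sketch: statistical predictability does not guarantee profits once transaction costs are charged against each trade.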
Abstract:
Low-power medium access control (MAC) protocols used for communication between energy-constrained wireless embedded devices do not cope well with situations where transmission channels are highly erroneous. Existing MAC protocols discard corrupted messages, which leads to costly retransmissions. To improve transmission performance, it is possible to include an error correction scheme and transmit/receive diversity. Redundant information can be added to transmitted packets in order to recover data from corrupted packets, and transmit/receive diversity via multiple antennas can be used to improve the error resiliency of transmissions. Both schemes may be used in conjunction to further improve performance. In this study, the authors show how an error correction scheme and transmit/receive diversity can be integrated into low-power MAC protocols. Furthermore, the authors investigate the achievable performance gains of both methods. This is important because both methods have associated costs (processing requirements; additional antennas and power), and for a given communication situation it must be decided which methods should be employed. The authors' results show that, in many practical situations, error control coding outperforms transmission diversity; however, if very high reliability is required, it is useful to employ both schemes together.
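A Hamming(7,4) code is perhaps the simplest concrete instance of the error-correction idea: it lets a receiver repair one flipped bit per codeword instead of discarding the frame. The study's actual coding scheme may differ; this is only a minimal illustration.

```python
# Single-error-correcting Hamming(7,4) code: a minimal example of the kind
# of FEC that lets a MAC layer recover a corrupted frame without a
# retransmission. Parity bits sit at positions 1, 2 and 4 (1-indexed).

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p4, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c):
    """Locate a single corrupted bit via the syndrome, flip it, return data."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s4          # 0 means no error detected
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]     # recovered data bits

cw = hamming74_encode([1, 0, 1, 1])
cw[4] ^= 1                              # channel flips one bit
assert hamming74_correct(cw) == [1, 0, 1, 1]
```

The trade-off the abstract weighs is visible here: 3 parity bits per 4 data bits cost airtime and processing, which must be balanced against the retransmissions they avoid.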
The effect of teacher correction and student revision on university A-level student written accuracy
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
PURPOSE: To evaluate the causes of low vision and blindness in individuals who had undergone cataract extraction, in a population sample from cities in the central-western region of the state of São Paulo. METHODS: Cross-sectional, observational study conducted in five cities of the central-western region of the state of São Paulo, using a household sample based on data from the latest Demographic Census (IBGE, 1995), with systematic selection of households. For the present study, a subsample of individuals who had undergone cataract extraction was considered, from whom identification data and a complete ophthalmologic examination were obtained. Data were evaluated by descriptive statistics, frequency-of-occurrence analysis and proportion of agreement, with a 95% confidence interval. RESULTS: Of the sampled individuals, 2.37% had undergone cataract extraction. Of the 201 operated eyes, 26.9% had visual acuity compatible with blindness or visual impairment. With the best optical correction, visual acuity remained <0.3 in 19.0%. Refraction improved visual acuity in 27.9% of the operated individuals. The causes of low vision were uncorrected refractive errors, posterior capsule opacity (19.4%), bullous keratopathy (8.3%), chorioretinal scarring (8.3%), aphakia (8.3%), age-related macular degeneration (5.5%), leukoma (5.5%), glaucoma (5.5%), optic disc atrophy (5.5%), retinal detachment (2.8%), retinal pigment epithelium atrophy (2.8%) and high myopia (2.8%). CONCLUSION: Although cataract is an avoidable cause of blindness, even after surgical correction a substantial percentage of individuals remain with low vision, generally as a result of factors related to neglected postoperative follow-up.
Abstract:
The aim of the present study was to develop a low-cost snorkel (SNQ) for measuring cardiorespiratory parameters in swimming. For this purpose, the mask of the VO2000 gas analyzer (MASC) was adapted to a hand-crafted SNQ with a dead space of 250 ml. Eight participants underwent two incremental tests (IT) on a cycle ergometer using the MASC and the SNQ. The ITs were performed to voluntary exhaustion and consisted of 3-min stages with an initial load and increments of 35 W. In both conditions, gas samples were collected at 10-s intervals to determine oxygen uptake (VO2), carbon dioxide output (VCO2) and ventilation (VE), and to measure heart rate (HR). The cardiorespiratory parameters (VO2, VE, VCO2 and HR) measured with the SNQ and the MASC were compared using Student's t-test for dependent samples, while Pearson's correlation test and Bland-Altman graphical analysis were used to verify the associations and agreement between parameters. In all cases, the significance level was set at P < 0.05. The adequacy of the correction equations for the SNQ values was verified by the systematic errors (bias), random errors (precision) and accuracy (ac). No significant differences were observed between the VO2, VCO2 and HR values obtained with the MASC and the SNQ. The VE values measured with the SNQ were significantly higher than those obtained with the MASC. Nevertheless, all parameters showed high agreement and correlation coefficients (0.88 to 0.97). In addition, low values of bias (VO2 = 0.11 L/min; VE = 4.11 L/min; VCO2 = 0.54 L/min; HR = 8.87 bpm), precision (VO2 = 0.24 L/min; VE = 11.02 L/min; VCO2 = 0.18 L/min; HR = 7.42 bpm) and ac (VO2 = 0.27 L/min; VE = 11.76 L/min; VCO2 = 0.56 L/min; HR = 11.56 bpm) were verified. It can therefore be concluded that the SNQ developed in this study allows valid measurement of cardiorespiratory parameters in swimming.
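The three agreement statistics reported here are easy to compute from paired device readings. The sample values below are made up for illustration, and the definitions (bias = mean difference, precision = SD of differences, accuracy = root of bias² + precision²) are the conventional ones consistent with the figures quoted above:

```python
import math

# Bland-Altman-style agreement statistics between two devices: bias
# (systematic error, mean difference), precision (random error, SD of the
# differences) and accuracy (sqrt(bias^2 + precision^2)). Sample readings
# below are hypothetical, not the study's raw data.

def agreement_stats(device_a, device_b):
    d = [a - b for a, b in zip(device_a, device_b)]
    n = len(d)
    bias = sum(d) / n                                        # systematic error
    precision = math.sqrt(sum((x - bias) ** 2 for x in d) / (n - 1))  # random
    accuracy = math.sqrt(bias ** 2 + precision ** 2)
    return bias, precision, accuracy

snq  = [2.10, 2.45, 2.80, 3.10]   # hypothetical VO2 readings (L/min)
masc = [2.00, 2.40, 2.70, 3.05]
bias, precision, accuracy = agreement_stats(snq, masc)
```

As a sanity check on the definitions, the VO2 figures in the abstract satisfy the same relation: sqrt(0.11² + 0.24²) ≈ 0.27.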
Abstract:
Systematic errors can have a significant effect on the GPS observables. In medium and long baselines, the major systematic error sources are ionospheric and tropospheric refraction and GPS satellite orbit errors, whereas in short baselines multipath is more relevant. These errors degrade the accuracy of the positioning accomplished by GPS, which is a critical problem for high-precision GPS positioning applications. Recently, a method has been suggested to mitigate these errors: the semiparametric model and the penalised least squares technique. It uses a natural cubic spline to model the errors as a function which varies smoothly in time. The systematic-error functions, ambiguities and station coordinates are estimated simultaneously. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least-squares method.
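The core of penalised least squares can be shown in a stripped-down form. A discrete second-difference penalty stands in here for the natural-cubic-spline roughness penalty of the abstract, and the signal, noise level and smoothing parameter are all invented:

```python
import numpy as np

# Stripped-down penalised least squares: a slowly varying systematic-error
# function f(t) is estimated by penalising its roughness. A second-difference
# penalty stands in for the natural-cubic-spline penalty; lam controls the
# trade-off between fidelity and smoothness. All data here are synthetic.

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)
systematic = 0.5 * np.sin(2 * np.pi * t)        # smooth "error" signal
y = systematic + rng.normal(scale=0.1, size=t.size)  # noisy observations

n = t.size
D = np.diff(np.eye(n), n=2, axis=0)             # second-difference operator
lam = 50.0
# f_hat = argmin ||y - f||^2 + lam * ||D f||^2  has the closed form below
f_hat = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```

In the full semiparametric model, the spline term is estimated jointly with the ambiguities and coordinates rather than on its own, but the penalty mechanism is the same.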
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Corresponding to $C_{0}[n,n-r]$, a binary cyclic code generated by a primitive irreducible polynomial $p(X)\in \mathbb{F}_{2}[X]$ of degree $r=2b$, where $b\in \mathbb{Z}^{+}$, we can construct a binary cyclic code $C[(n+1)^{3^{k}}-1,(n+1)^{3^{k}}-1-3^{k}r]$, generated by the primitive irreducible generalized polynomial $p(X^{\frac{1}{3^{k}}})\in \mathbb{F}_{2}[X;\frac{1}{3^{k}}\mathbb{Z}_{0}]$ of degree $3^{k}r$, where $k\in \mathbb{Z}^{+}$. This new code $C$ improves the code rate and has a higher error-correction capability than $C_{0}$. The purpose of this study is to establish a decoding procedure for $C_{0}$ by using $C$, in such a way that one obtains an improved code rate and error-correcting capability for $C_{0}$.
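The claimed rate improvement can be checked numerically from the stated parameters alone. The choice $b = 1$, $k = 1$ below (so $r = 2$ and $n = 2^{r}-1 = 3$, the length of the base cyclic code) is purely for illustration:

```python
# Check of the rate improvement stated in the abstract: the base code
# C0[n, n-r] with n = 2^r - 1 versus the constructed code
# C[(n+1)^(3^k) - 1, (n+1)^(3^k) - 1 - 3^k * r]. b=1, k=1 are illustrative.

def code_rates(b, k):
    r = 2 * b
    n = 2 ** r - 1                      # length of the base cyclic code C0
    rate_c0 = (n - r) / n
    N = (n + 1) ** (3 ** k) - 1         # length of the constructed code C
    K = N - (3 ** k) * r                # its dimension
    rate_c = K / N
    return rate_c0, rate_c

rate_c0, rate_c = code_rates(b=1, k=1)
assert rate_c > rate_c0                 # the construction improves the rate
```

For these parameters, $C_0[3,1]$ has rate $1/3$ while $C[63,57]$ has rate $57/63 \approx 0.905$, consistent with the abstract's claim.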
Abstract:
The goal of the NA48 experiment at CERN is to measure the direct-CP-violation parameter Re(epsilon'/epsilon) with a precision of 2x10^-4. The experimentally accessible quantity is the double ratio R, formed from the decays of the K_L and K_S into two neutral and two charged pions, respectively. To a good approximation, R = 1 - 6 Re(epsilon'/epsilon).
NA48 weights the K_L events in order to reduce the sensitivity to the detector acceptance. As a cross-check of the standard analysis, an analysis without event weighting was carried out, and its result is presented in this thesis. By dispensing with event weighting, the statistical part of the total error can be reduced considerably. Since the limiting channel is the decay of the long-lived kaon into two neutral pions, using the full number of K_L decays is a worthwhile goal. In the course of this work, however, it turned out that the systematic error of the acceptance correction cancels this gain again.
The result of this work for the 1998 and 1999 data without event weighting is
Re(epsilon'/epsilon) = (17.91 +- 4.41 (syst.) +- 1.36 (stat.)) x 10^-4.
This clearly confirms the existence of direct CP violation. The result is compatible with the published NA48 result, so the test of the existing NA48 analysis strategy has been carried out successfully.
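The relation between the double ratio and the CP-violation parameter, and the inversion used to extract the result, can be written out explicitly (this is the standard first-order NA48 relation the abstract quotes):

```latex
R = \frac{\Gamma(K_L \to \pi^0\pi^0)\,/\,\Gamma(K_S \to \pi^0\pi^0)}
         {\Gamma(K_L \to \pi^+\pi^-)\,/\,\Gamma(K_S \to \pi^+\pi^-)}
  \approx 1 - 6\,\mathrm{Re}(\varepsilon'/\varepsilon)
\quad\Longrightarrow\quad
\mathrm{Re}(\varepsilon'/\varepsilon) \approx \frac{1 - R}{6}
```

A nonzero Re(epsilon'/epsilon), i.e. a double ratio significantly different from 1, is precisely what signals direct CP violation.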
Abstract:
In recent years, Reverse Engineering systems have attracted considerable interest for a wide range of applications. Many research activities therefore focus on the accuracy and precision of the acquired data and on improvements to the post-processing phase. In this context, this PhD thesis defines two novel methods for data post-processing and for data fusion between physical and geometrical information. In particular, a technique has been defined for characterising the error in the 3D point coordinates acquired by an optical triangulation laser scanner, with the aim of identifying adequate correction arrays to apply under different acquisition parameters and operating conditions. The systematic error in the acquired data is thus compensated, increasing the accuracy. Moreover, the definition of a 3D thermogram is examined: the object's geometrical information and its thermal properties, obtained from a thermographic inspection, are combined so as to associate a temperature value with each recognisable point. The data acquired by the optical triangulation laser scanner are also used to normalise the temperature values and make the thermal data independent of the thermal camera's point of view.
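The correction-array mechanism can be sketched in a few lines. The grid layout (distance and incidence-angle bins) and the error values below are invented for illustration; the thesis's actual parameterisation of acquisition conditions may differ:

```python
import numpy as np

# Sketch of a per-condition correction array: the scanner's systematic
# error, characterised over a grid of operating conditions during
# calibration, is stored as an array and subtracted from newly acquired
# coordinates. Bins and error values here are hypothetical.

# correction[i, j]: systematic depth error (mm) measured at distance bin i
# and incidence-angle bin j during calibration
correction = np.array([[0.12, 0.20],
                       [0.05, 0.15]])

def compensate(z_measured, dist_bin, angle_bin):
    """Subtract the calibrated systematic error for the acquisition setup."""
    return z_measured - correction[dist_bin, angle_bin]

z = compensate(z_measured=512.20, dist_bin=0, angle_bin=1)
```

Indexing the array by the acquisition parameters is what lets one calibration campaign serve many different operating conditions.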
Abstract:
The space environment has always been one of the most challenging for communications, at both the physical and the network layer. Concerning the latter, the most common challenges are the lack of continuous network connectivity, very long delays and relatively frequent losses. Because of these problems, the usual TCP/IP suite protocols are hardly applicable. Moreover, reliability is fundamental in space scenarios: it is usually not tolerable to lose important information, or to receive it with a very large delay, because of a challenging transmission channel. In terrestrial protocols such as TCP, reliability is obtained by means of an ARQ (Automatic Retransmission reQuest) method, which, however, performs poorly when the transmission channel has long delays. At the physical layer, Forward Error Correction (FEC) codes, based on the insertion of redundant information, are an alternative way to assure reliability. On binary channels, when single bits are flipped by channel noise, the redundancy bits can be exploited to recover the original information. On binary erasure channels, where bits are not flipped but lost, redundancy can still be used to recover the original information; FEC codes designed for this purpose are usually called Erasure Codes (ECs). It is worth noting that ECs, primarily studied for binary channels, can also be used at upper layers, i.e. applied to packets instead of bits, offering a very interesting alternative to the usual ARQ methods, especially in the presence of long delays. A protocol created to add reliability to DTN networks is the Licklider Transmission Protocol (LTP), designed to achieve better performance on long-delay links. The aim of this thesis is the application of ECs to LTP.
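The packet-level erasure-coding idea can be demonstrated with the simplest possible instance: a single XOR parity packet per group, which lets the receiver rebuild any one lost packet without waiting a round-trip for a retransmission. Practical ECs (e.g. Reed-Solomon or LDPC-based) tolerate multiple losses per group; this sketch is not the specific code the thesis applies to LTP.

```python
# Minimal packet-level erasure code: one XOR parity packet per group of
# equal-length data packets allows recovery of a single lost packet with
# no retransmission. Illustrative only; real ECs handle multiple losses.

def xor_parity(packets):
    """Build a parity packet over equal-length data packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """received: the group with exactly one entry replaced by None (lost)."""
    missing = bytearray(parity)
    for p in received:
        if p is not None:
            for i, b in enumerate(p):
                missing[i] ^= b
    return bytes(missing)

group = [b"pkt1", b"pkt2", b"pkt3"]
par = xor_parity(group)
assert recover([b"pkt1", None, b"pkt3"], par) == b"pkt2"
```

On a link with minutes of one-way delay, avoiding even one retransmission round-trip in this way is exactly the advantage over ARQ that the abstract describes.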