862 results for Error correction model
Abstract:
In this paper, the gamma-gamma probability distribution is used to model turbulent channels. The bit error rate (BER) performance of free-space optical (FSO) communication systems employing on-off keying (OOK) or subcarrier binary phase-shift keying (BPSK) modulation is derived. A tip-tilt adaptive optics system is also incorporated into an FSO system using the above modulation formats. Tip-tilt compensation can alleviate the effects of atmospheric turbulence and thereby improve the BER performance. The improvement differs with turbulence strength and modulation format. In addition, the BER performance of systems employing subcarrier BPSK modulation is much better than that of comparable systems employing OOK modulation, with or without tip-tilt compensation.
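As a rough illustration of the kind of calculation described (not the paper's own derivation), the sketch below averages a common conditional BER expression for subcarrier BPSK over gamma-gamma fading by Monte Carlo, exploiting the fact that a gamma-gamma variate is the product of two independent unit-mean gamma variates. The values of alpha, beta, and the mean SNR are purely illustrative.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)

# Illustrative turbulence parameters and average electrical SNR (not from the paper).
alpha, beta = 4.0, 2.0
mean_snr_db = 15.0
snr = 10 ** (mean_snr_db / 10)

# Gamma-gamma irradiance as the product of two independent unit-mean gamma variates.
n = 1_000_000
irradiance = rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)

def q_func(x):
    """Gaussian Q-function."""
    return 0.5 * erfc(x / np.sqrt(2))

# Conditional BER of subcarrier BPSK for a given irradiance, averaged over the fading.
ber_bpsk = q_func(np.sqrt(2 * snr) * irradiance).mean()
print(f"average BER (subcarrier BPSK, gamma-gamma fading) ~ {ber_bpsk:.3e}")
```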
Abstract:
Within the framework of second-order Rayleigh-Schrodinger perturbation theory, the polaronic correction to the first excited state energy of an electron in an quantum dot with anisotropic parabolic confinements is presented. Compared with isotropic confinements, anisotropic confinements will make the degeneracy of the excited states to be totally or partly lifted. On the basis of a three-dimensional Frohlich's Hamiltonian with anisotropic confinements, the first excited state properties in two-dimensional quantum dots as well as quantum wells and wires can also be easily obtained by taking special limits. Calculations show that the first excited polaronic effect can be considerable in small quantum dots.
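For reference, the generic second-order Rayleigh-Schrödinger energy correction on which such a calculation is built has the standard textbook form (the paper's own expressions for the anisotropic dot are more specific):

```latex
E_n^{(2)} = \sum_{m \neq n}
  \frac{\bigl|\langle m^{(0)} \,|\, H_{\mathrm{e\text{-}ph}} \,|\, n^{(0)} \rangle\bigr|^{2}}
       {E_n^{(0)} - E_m^{(0)}}
```

Here H_e-ph denotes the Fröhlich electron-phonon coupling and the unperturbed states are those of the anisotropic parabolic confinement; for levels that remain degenerate, the coupling must first be diagonalized within the degenerate subspace.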
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
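The core difficulty described above, that small noise and baseline errors in an accelerogram grow into large drifts after double integration, can be seen with a very small synthetic test of the same flavor (hypothetical signal and noise levels, not the study's closed-form expression or Volume II routine):

```python
import numpy as np

# Synthetic accelerogram with a known closed form: a damped sinusoid.
dt = 0.01
t = np.arange(0, 40, dt)
f0, zeta = 1.5, 0.05
w = 2 * np.pi * f0
acc_true = np.exp(-zeta * w * t) * np.sin(w * t)

# "Recorded" version: add white digitization noise plus a tiny baseline offset.
rng = np.random.default_rng(1)
acc_rec = acc_true + 0.002 * rng.standard_normal(t.size) + 0.001

def cumtrapz(y, dx):
    """Cumulative trapezoidal integration starting from zero."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * dx)
    return out

vel = cumtrapz(acc_rec, dt)
disp = cumtrapz(vel, dt)

# The 0.001 baseline offset alone contributes about 0.001 * T^2 / 2 = 0.8 units
# of spurious displacement at T = 40 s, dwarfing the true motion.
print(f"final displacement drift: {disp[-1]:.3f}")
```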
Abstract:
I. It was not possible to produce anti-tetracycline antibody in laboratory animals by any of the methods tried. Tetracycline protein conjugates were prepared and characterized. It was shown that previous reports of the detection of anti-tetracycline antibody by in vitro methods were in error. Tetracycline precipitates non-specifically with serum proteins. The anaphylactic reaction reported was the result of misinterpretation, since the observations were inconsistent with the known mechanism of anaphylaxis and the supposed antibody would not sensitize guinea pig skin. The hemagglutination reaction was not reproducible and was extremely sensitive to minute amounts of microbial contamination. Both free tetracyclines and the conjugates were found to be poor antigens.
II. Anti-aspiryl antibodies were produced in rabbits using 3 protein carriers. The method of inhibition of precipitation was used to determine the specificity of the antibody produced. ε-Aminocaproate was found to be the most effective inhibitor of the haptens tested, indicating that the combining hapten of the protein is ε-aspiryl-lysyl. Free aspirin and salicylates were poor inhibitors and did not combine with the antibody to a significant extent. The ortho group was found to participate in the binding to antibody. The average binding constants were measured.
Normal rabbit serum was acetylated by aspirin under in vitro conditions, which are similar to physiological conditions. The extent of acetylation was determined by immunochemical tests. The acetylated serum proteins were shown to be potent antigens in rabbits. It was also shown that aspiryl proteins were partially acetylated. The relation of these results to human aspirin intolerance is discussed.
III. Aspirin did not induce contact sensitivity in guinea pigs when they were immunized by techniques that induce sensitivity with other reactive compounds. The acetylation mechanism is not relevant to this type of hypersensitivity, since sensitivity is not produced by potent acetylating agents like acetyl chloride and acetic anhydride. Aspiryl chloride, a totally artificial system, is a good sensitizer. Its specificity was examined.
IV. Protein conjugates were prepared with p-aminosalicylic acid and various carriers using azo, carbodiimide and mixed anhydride coupling. These antigens were injected into rabbits and guinea pigs and no anti-hapten IgG or IgM response was obtained. Delayed hypersensitivity was produced in guinea pigs by immunization with the conjugates, and its specificity was determined. Guinea pigs were not sensitized by either injection or topical application of p-aminosalicylic acid or p-aminosalicylate.
Abstract:
A study of human eye movements was made in order to elucidate the nature of the control mechanism in the binocular oculomotor system.
We first examined spontaneous eye movements during monocular and binocular fixation in order to determine the corrective roles of flicks and drifts. It was found that both types of motion correct fixational errors, although flicks are somewhat more active in this respect. Vergence error is a stimulus for correction by drifts but not by flicks, while binocular vertical discrepancy of the visual axes does not trigger corrective movements.
Second, we investigated the non-linearities of the oculomotor system by examining the eye movement responses to point targets moving in two dimensions in a subjectively unpredictable manner. Such motions consisted of band-limited Gaussian random motion and also of the sum of several non-integrally related sinusoids. We found that there is no direct relationship between the phase and the gain of the oculomotor system. Delay of eye movements relative to target motion is determined by the necessity of generating a minimum afferent (input) signal at the retina in order to trigger corrective eye movements. The amplitude of the response is a function of the biological constraints of the efferent (output) portion of the system: for target motions of narrow bandwidth, the system responds preferentially to the highest frequency; for large-bandwidth motions, the system distributes the available energy equally over all frequencies.
Third, the power spectra of spontaneous eye movements were compared with the spectra of tracking eye movements for Gaussian random target motions of varying bandwidths. It was found that there is essentially no difference among the various curves. The oculomotor system tracks a target, not by increasing the mean rate of impulses along the motoneurons of the extra-ocular muscles, but rather by coordinating those spontaneous impulses which propagate along the motoneurons during stationary fixation. Thus, the system operates at full output at all times.
Fourth, we examined the relative magnitude and phase of motions of the left and the right visual axes during monocular and binocular viewing. We found that the two visual axes move vertically in perfect synchronization at all frequencies for any viewing condition. This is not true for horizontal motions: the amount of vergence noise is highest for stationary fixation and diminishes for tracking tasks as the bandwidth of the target motion increases. Furthermore, movements of the occluded eye are larger than those of the seeing eye in monocular viewing. This effect is more pronounced for horizontal motions, for stationary fixation, and for lower frequencies.
Finally, we have related our findings to previously known facts about the pertinent nerve pathways in order to postulate a model for the neurological binocular control of the visual axes.
Abstract:
This research describes three studies on the use of chemometric methods for the classification and characterization of edible vegetable oils and their quality parameters, using Fourier-transform mid-infrared molecular absorption spectrometry and near-infrared spectrometry, and for monitoring the quality and oxidative stability of yogurt using molecular fluorescence spectrometry. The first and second studies address the classification and characterization of quality parameters of edible vegetable oils using Fourier-transform mid-infrared (FT-MIR) and near-infrared (NIR) spectrometry. The Kennard-Stone algorithm was used to select the validation set after principal component analysis (PCA). Discrimination among canola, sunflower, corn, and soybean oils was investigated using SVM-DA, SIMCA, and PLS-DA. Prediction of the quality parameters refractive index and relative density of the oils was investigated using the multivariate calibration methods partial least squares (PLS), iPLS, and SVM on the FT-MIR and NIR data. Several preprocessing methods were applied: first derivative, multiplicative signal correction (MSC), mean centering, orthogonal signal correction (OSC), and standard normal variate (SNV), with the root mean square error of cross-validation (RMSECV) and of prediction (RMSEP) used as evaluation criteria. The methodology developed for determining refractive index and relative density and for classifying the vegetable oils is fast and straightforward. The third study evaluates the oxidative stability and quality of yogurt stored at 4 °C, either exposed to direct light or kept in the dark, using parallel factor analysis (PARAFAC) of the luminescence exhibited by three fluorophores present in yogurt, at least one of which is strongly related to the storage conditions. The fluorescence signals were identified from the emission and excitation spectra of the pure fluorescent substances, suggested to be vitamin A, tryptophan, and riboflavin. Regression models based on the PARAFAC scores for riboflavin were developed using the scores obtained on the first day as the dependent variable and the scores obtained during storage as the independent variable. The decay of the analytical curve over the course of the experiment was evident. Therefore, the riboflavin content can be considered a good indicator of yogurt stability. It can thus be concluded that fluorescence spectroscopy combined with chemometric methods is a fast way to monitor the oxidative stability and quality of yogurt.
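A minimal sketch of the calibration workflow named above (SNV preprocessing followed by PLS regression and an RMSEP check), using synthetic stand-in spectra rather than the FT-MIR/NIR data of the study; the array shapes, number of latent variables, and property values are illustrative only:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row) individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Synthetic stand-in for spectra (n_samples x n_wavelengths) and a quality
# parameter such as refractive index; real measurements would replace these.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 400))
y = 1.47 + 0.002 * X[:, 50] + 0.001 * X[:, 200] + rng.normal(0, 1e-4, 120)

X_train, X_test, y_train, y_test = train_test_split(snv(X), y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8)   # number of latent variables chosen for illustration
pls.fit(X_train, y_train)
y_hat = pls.predict(X_test).ravel()

rmsep = np.sqrt(np.mean((y_test - y_hat) ** 2))
print(f"RMSEP = {rmsep:.5f}")
```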
Abstract:
Only the first-order Doppler frequency shift is considered in current dual-frequency laser interferometers; however, the second-order Doppler frequency shift should be considered when the measurement corner cube (MCC) moves at high or variable velocity, because it can cause considerable error. The influence of the second-order Doppler frequency shift on interferometer error is studied in this paper, and a model of the second-order Doppler error is put forward. Moreover, the model has been simulated for both high-velocity and variable-velocity motion. The simulated results show that the second-order Doppler error is proportional to the velocity of the MCC when it moves with uniform motion and the measured displacement is fixed. When the MCC moves with variable velocity, the second-order Doppler error depends not only on velocity but also on acceleration. When the muzzle velocity is zero, the second-order Doppler error caused by an acceleration of 0.6g can reach 2.5 nm in 0.4 s, which is not negligible in nanometric measurement. Moreover, when the muzzle velocity is nonzero, accelerated motion may result in a greater error and decelerated motion in a smaller error.
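A rough order-of-magnitude check of the figure quoted above, assuming (as a plausibility sketch, not the paper's model) that the second-order Doppler displacement error accumulates as the time integral of v(t)^2 / c:

```python
# Order-of-magnitude check for the 0.6g, 0.4 s case with zero muzzle velocity.
c = 2.998e8          # speed of light, m/s
g = 9.81
a = 0.6 * g          # acceleration quoted in the abstract, m/s^2
T = 0.4              # duration quoted in the abstract, s

# Assumed accumulation law: delta_x = integral of (a*t)^2 / c dt from 0 to T.
delta_x = a**2 * T**3 / (3 * c)
print(f"second-order Doppler displacement error ~ {delta_x * 1e9:.2f} nm")
# prints roughly 2.5 nm, consistent with the magnitude quoted in the abstract
```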
Abstract:
The time series of abundance indices for many groundfish populations, as determined from trawl surveys, are often imprecise and short, causing stock assessment estimates of abundance to be imprecise. To improve precision, prior probability distributions (priors) have been developed for parameters in stock assessment models by using meta-analysis, expert judgment on catchability, and empirically based modeling. This article presents a synthetic approach for formulating priors for rockfish trawl survey catchability (qgross). A multivariate prior for qgross for different surveys is formulated by using 1) a correction factor for bias in estimating fish density between trawlable and untrawlable areas, 2) expert judgment on trawl net catchability, 3) observations from trawl survey experiments, and 4) data on the fraction of population biomass in each of the areas surveyed. The method is illustrated by using bocaccio (Sebastes paucispinis) in British Columbia. Results indicate that expert judgment can be updated markedly by observing the catch-rate ratio from different trawl gears in the same areas. The marginal priors for qgross are consistent with empirical estimates obtained by fitting a stock assessment model to the survey data under a noninformative prior for qgross. Despite high prior uncertainty (prior coefficients of variation ≥0.8) and high prior correlation among the qgross values, the prior for qgross still enhances the precision of key stock assessment quantities.
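One simple way to realize the multiplicative structure described above is to combine the components of catchability on the log scale and propagate them by Monte Carlo. The distributions and parameter values below are placeholders, not the elicited quantities from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Placeholder components of gross survey catchability:
# q_gross ~ (trawl-net catchability) x (trawlable/untrawlable density correction)
#           x (fraction of population biomass in the surveyed area)
q_net    = rng.lognormal(mean=np.log(0.7), sigma=0.4, size=n)   # expert judgment
dens_cor = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n)   # bias correction factor
frac_bio = rng.beta(8, 2, size=n)                               # fraction of biomass surveyed

q_gross = q_net * dens_cor * frac_bio
print(f"prior median {np.median(q_gross):.2f}, CV {q_gross.std() / q_gross.mean():.2f}")
```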
Abstract:
Biodiesel has been widely used as a renewable energy source, helping to reduce the demand for mineral diesel. Several properties must therefore be monitored in order to produce and distribute biodiesel with the required quality. In this work, physical properties of biodiesel such as density, refractive index, and cold filter plugging point were measured and associated with near-infrared (NIR) and mid-infrared (Mid-IR) spectrometry using chemometric tools. Partial least squares regression (PLS), interval partial least squares regression (iPLS), and support vector machine regression (SVM) with variable selection by genetic algorithm (GA) were used to model the properties mentioned. The biodiesel samples were synthesized from different sources, such as canola, sunflower, corn, and soybean. Additional biodiesel samples were acquired from a supplier in southern Brazil. First, baseline-correction preprocessing was used to normalize the NIR spectral data, followed by other preprocessing methods such as mean centering, first derivative, and standard normal variate. The best result for prediction of the cold filter plugging point was obtained using the Mid-IR spectra and the GA-SVM regression method, with a high coefficient of determination of prediction, R2pred = 0.96, and a low root mean square error of prediction, RMSEP (°C) = 0.6. For the density prediction model, the best result was obtained using the Mid-IR spectra and PLS regression, with R2pred = 0.98 and RMSEP (g/cm3) = 0.0002. For the refractive index prediction model, the best result was also obtained using the Mid-IR spectra and PLS regression, with an excellent R2pred = 0.98 and RMSEP = 0.0001. For these data sets, PLS and SVM demonstrated their robustness, proving to be useful tools for predicting the biodiesel properties studied.
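The figures of merit quoted above (R2pred and RMSEP) are computed from the predicted and observed values of the external prediction set. A small helper, with hypothetical cold-filter-plugging-point values for illustration:

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2_pred(y_true, y_pred):
    """Coefficient of determination for the prediction (test) set."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical cold filter plugging point values in degrees C:
y_obs  = [-4.0, -2.5, 0.0, 1.5, 3.0]
y_pred = [-3.6, -2.9, 0.4, 1.1, 3.5]
print(f"RMSEP = {rmsep(y_obs, y_pred):.2f}, R2pred = {r2_pred(y_obs, y_pred):.3f}")
```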
Abstract:
Body length measurement is an important part of growth, condition, and mortality analyses of larval and juvenile fish. If the measurements are not accurate (i.e., do not reflect real fish length), results of subsequent analyses may be affected considerably (McGurk, 1985; Fey, 1999; Porter et al., 2001). The primary cause of error in fish length measurement is shrinkage related to collection and preservation (Theilacker, 1980; Hay, 1981; Butler, 1992; Fey, 1999). The magnitude of shrinkage depends on many factors, namely the duration and speed of the collection tow, abundance of other planktonic organisms in the sample (Theilacker, 1980; Hay, 1981; Jennings, 1991), the type and strength of the preservative (Hay, 1982), and the species of fish (Jennings, 1991; Fey, 1999). Further, fish size affects shrinkage (Fowler and Smith, 1983; Fey, 1999, 2001), indicating that live length should be modeled as a function of preserved length (Pepin et al., 1998; Fey, 1999).
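The closing point, that live length should be modeled as a function of preserved length, usually comes down to fitting a simple correction model to paired measurements. A minimal sketch with hypothetical paired lengths (not data from the cited studies):

```python
import numpy as np

# Hypothetical paired measurements (mm): the same fish measured live and after
# preservation. A linear model of live length on preserved length is one common
# way to correct measurements for shrinkage.
preserved = np.array([5.1, 6.0, 7.2, 8.4, 9.0, 10.3, 11.8])
live      = np.array([5.6, 6.6, 7.9, 9.1, 9.8, 11.1, 12.7])

slope, intercept = np.polyfit(preserved, live, 1)
print(f"live length ~ {slope:.3f} * preserved length + {intercept:.3f} mm")

# Applying the correction to a new preserved-length measurement:
print(f"corrected length for 8.0 mm preserved: {slope * 8.0 + intercept:.2f} mm")
```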
Abstract:
In previous papers (S. Adhikari and J. Woodhouse 2001 Journal of Sound and Vibration 243, 43-61; 63-88; S. Adhikari and J. Woodhouse 2002 Journal of Sound and Vibration 251, 477-490) methods were proposed to obtain the coefficient matrix for a viscous damping model or a non-viscous damping model with an exponential relaxation function, from measured complex natural frequencies and modes. In all these works, it has been assumed that exact complex natural frequencies and complex modes are known. In reality, this will not be the case. The purpose of this paper is to analyze the sensitivity of the identified damping matrices to measurement errors. By using numerical and analytical studies it is shown that the proposed methods can indeed be expected to give useful results from moderately noisy data provided a correct damping model is selected for fitting. Indications are also given of what level of noise in the measured modal properties is needed to mask the true physical behaviour.
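To give a feel for the sensitivity question, the sketch below propagates noise on measured complex natural frequencies into identified modal damping ratios for two lightly damped modes. It is a much-simplified stand-in, not the Adhikari-Woodhouse damping-matrix identification itself, and the modal parameters and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" complex eigenvalues of two underdamped modes:
# lambda = -zeta*wn + i*wn*sqrt(1 - zeta^2)
wn   = np.array([2 * np.pi * 5.0, 2 * np.pi * 12.0])   # natural frequencies, rad/s
zeta = np.array([0.01, 0.02])                           # damping ratios
lam  = -zeta * wn + 1j * wn * np.sqrt(1 - zeta**2)

noise_level = 0.01   # 1 % multiplicative noise on the "measured" eigenvalues
trials = 5000
zeta_hat = np.empty((trials, 2))
for k in range(trials):
    noise = noise_level * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
    lam_meas = lam * (1 + noise)
    # For an underdamped mode, zeta = -Re(lambda) / |lambda|.
    zeta_hat[k] = -lam_meas.real / np.abs(lam_meas)

print("identified damping ratios: mean", zeta_hat.mean(axis=0), "std", zeta_hat.std(axis=0))
```

Even 1 % noise on the eigenvalues produces scatter in the identified damping that is comparable to the lightest damping ratio itself, which is the kind of masking effect the paper quantifies for full damping matrices.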
Abstract:
The success of endodontic treatment depends on the careful execution of all of its phases, ending with a three-dimensional filling that reaches the entire root canal system. The filling materials, or the substances they release, therefore come into contact with the periradicular tissues, which may influence the inflammatory response and the repair process. Low-level laser therapy (LLLT) has been studied for its anti-inflammatory action, which favors repair. The aim of this study was to investigate the production of the cytokines IL-1β, IL-6, and IL-8 by human gingival fibroblasts (FMM1 cell line) in response to extracts of the endodontic sealers AH Plus, MTA Fillapex, and EndoSequence BC Sealer, as well as the efficacy of LLLT in this model. For this purpose, extracts of these sealers, freshly mixed and 24 h after setting, were prepared in fresh DMEM culture medium according to ISO 10993-12. Initially, the cytotoxicity of the sealers was evaluated by the MTT assay after the cells had been exposed to serial dilutions of the extracts (1:1 to 1:16). For the analysis of cytokine production, 10^6 cells per well were cultured in 24-well plates and exposed to the sealer extracts at a 1:4 dilution. Non-irradiated and irradiated groups were established. In the irradiated group, the cell cultures received two irradiations with an InGaAlP laser (660 nm, 30 mW, 5 J/cm2, beam area of 0.028 cm2) at a 12 h interval. The non-irradiated group was kept under the same environmental conditions as the irradiated group. The culture supernatants were collected, centrifuged, aliquoted, and stored frozen for later analysis by ELISA. All data (means ± standard error) were analyzed statistically by one-way ANOVA followed by Tukey's test and by two-way ANOVA with Bonferroni correction (p < 0.05). The cytotoxicity of AH Plus and EndoSequence BC Sealer proved to be time/concentration-dependent, whereas that of MTA Fillapex was concentration-dependent. The endodontic sealers induced the production of IL-1β, IL-6, and IL-8 by the fibroblasts, with no significant difference from the controls (p > 0.05). Only E. coli LPS induced IL-8 secretion with statistical significance (p < 0.05). LLLT was not able to significantly modulate the production of the cytokines studied.
Abstract:
We report a Monte Carlo representation of the long-term inter-annual variability of monthly snowfall on a detailed (1 km) grid of points throughout the southwest. An extension of the local climate model of the southwestern United States (Stamm and Craig 1992) provides spatially based estimates of mean and variance of monthly temperature and precipitation. The mean is the expected value from a canonical regression using independent variables that represent controls on climate in this area, including orography. Variance is computed as the standard error of the prediction and provides site-specific measures of (1) natural sources of variation and (2) errors due to limitations of the data and poor distribution of climate stations. Simulation of monthly temperature and precipitation over a sequence of years is achieved by drawing from a bivariate normal distribution. The conditional expectation of precipitation, given temperature in each month, is the basis of a numerical integration of the normal probability distribution of log precipitation below a threshold temperature (3°C) to determine snowfall as a percent of total precipitation. Snowfall predictions are tested at stations for which long-term records are available. At Donner Memorial State Park (elevation 1811 meters) a 34-year simulation - matching the length of instrumental record - is within 15 percent of observed for mean annual snowfall. We also compute resulting snowpack using a variation of the model of Martinec et al. (1983). This allows additional tests by examining spatial patterns of predicted snowfall and snowpack and their hydrologic implications.
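One way to realize the sampling step described above, drawing monthly temperature and log precipitation from a bivariate normal and attributing precipitation in cold draws to snowfall, is sketched below with invented monthly parameters (the study's parameters come from its regression model, not from these values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented monthly parameters for one grid point (deg C and mm):
mu_T, sd_T   = 1.5, 3.0            # mean and std of monthly temperature
mu_lp, sd_lp = np.log(80), 0.5     # mean and std of log monthly precipitation
rho = -0.4                         # temperature / log-precipitation correlation

cov = np.array([[sd_T**2,            rho * sd_T * sd_lp],
                [rho * sd_T * sd_lp, sd_lp**2]])
T, logP = rng.multivariate_normal([mu_T, mu_lp], cov, size=200_000).T
P = np.exp(logP)

# Precipitation falling in draws colder than the 3 deg C threshold is counted as snow.
threshold = 3.0
snow_fraction = P[T < threshold].sum() / P.sum()
print(f"snowfall as a fraction of total precipitation ~ {snow_fraction:.2f}")
```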
Abstract:
We present a method to integrate environmental time series into stock assessment models and to test the significance of correlations between population processes and the environmental time series. Parameters that relate the environmental time series to population processes are included in the stock assessment model, and likelihood ratio tests are used to determine if the parameters improve the fit to the data significantly. Two approaches are considered to integrate the environmental relationship. In the environmental model, the population dynamics process (e.g. recruitment) is proportional to the environmental variable, whereas in the environmental model with process error it is proportional to the environmental variable, but the model allows an additional temporal variation (process error) constrained by a log-normal distribution. The methods are tested by using simulation analysis and compared to the traditional method of correlating model estimates with environmental variables outside the estimation procedure. In the traditional method, the estimates of recruitment were provided by a model that allowed the recruitment only to have a temporal variation constrained by a log-normal distribution. We illustrate the methods by applying them to test the statistical significance of the correlation between sea-surface temperature (SST) and recruitment to the snapper (Pagrus auratus) stock in the Hauraki Gulf–Bay of Plenty, New Zealand. Simulation analyses indicated that the integrated approach with additional process error is superior to the traditional method of correlating model estimates with environmental variables outside the estimation procedure. The results suggest that, for the snapper stock, recruitment is positively correlated with SST at the time of spawning.
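The likelihood-ratio comparison at the heart of the method can be illustrated outside a full stock assessment model with a toy recruitment series. The sketch below fits a constant-recruitment null model and an SST-dependent model to simulated log recruitment and tests the extra parameter with a chi-square likelihood ratio test; the data and the simple linear form are placeholders, not the paper's population dynamics model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy stand-in: log recruitment with lognormal process error, possibly related to SST.
n = 30
sst = rng.normal(18.0, 1.0, n)
log_rec = 2.0 + 0.3 * (sst - 18.0) + rng.normal(0, 0.4, n)

def neg_log_like(resid, sigma):
    return -np.sum(stats.norm.logpdf(resid, scale=sigma))

# Null model: constant mean log recruitment with process error.
mu0, sig0 = log_rec.mean(), log_rec.std(ddof=0)
nll0 = neg_log_like(log_rec - mu0, sig0)

# Environmental model: log recruitment depends linearly on SST.
slope, intercept = np.polyfit(sst, log_rec, 1)
resid = log_rec - (intercept + slope * sst)
nll1 = neg_log_like(resid, resid.std(ddof=0))

lrt = 2 * (nll0 - nll1)              # one extra parameter in the SST model
p_value = stats.chi2.sf(lrt, df=1)
print(f"LRT statistic {lrt:.2f}, p = {p_value:.4f}")
```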
Abstract:
We have formulated a model for analyzing the measurement error in marine survey abundance estimates by using data from parallel surveys (trawl haul or acoustic measurement). The measurement error is defined as the component of the variability that cannot be explained by covariates such as temperature, depth, bottom type, etc. The method presented is general, but we concentrate on bottom trawl catches of cod (Gadus morhua). Catches of cod from 10 parallel trawling experiments in the Barents Sea with a total of 130 paired hauls were used to estimate the measurement error in trawl hauls. Based on the experimental data, the measurement error is fairly constant in size on the logarithmic scale and is independent of location, time, and fish density. Compared with the total variability of the winter and autumn surveys in the Barents Sea, the measurement error is small (approximately 2–5%, on the log scale, in terms of variance of catch per towed distance). Thus, the cod catch rate is a fairly precise measure of fish density at a given site at a given time.
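For the paired-haul design described above, a basic version of the estimator follows from the identity that, for two independent measurements of the same density, the variance of the difference of log catches is twice the measurement-error variance. A sketch on simulated pairs (the haul counts and error level are illustrative, not the Barents Sea data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated paired hauls: two vessels trawling the same sites at the same time.
n_pairs = 130
true_log_density = rng.normal(3.0, 1.0, n_pairs)      # site-to-site variation
sigma_meas = 0.2                                        # measurement error (log scale)
log_c1 = true_log_density + rng.normal(0, sigma_meas, n_pairs)
log_c2 = true_log_density + rng.normal(0, sigma_meas, n_pairs)

# For independent errors, Var(log c1 - log c2) = 2 * sigma_meas^2.
sigma_hat = np.sqrt(np.var(log_c1 - log_c2, ddof=1) / 2)
print(f"estimated measurement error (log-scale SD) ~ {sigma_hat:.3f}")
```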