943 results for "Error de subsunción"


Relevance: 10.00%

Publisher:

Abstract:

Medium density fiberboard (MDF) is an engineered wood product formed by breaking down selected lignin-cellulosic residual materials into fibers, combining them with wax and a resin binder, and then forming panels under high temperature and pressure. Because the raw material in the industrial process is ever-changing, the panel industry requires methods for monitoring the composition of its products. The aim of this study was to estimate the ratio of sugarcane (SC) bagasse to Eucalyptus wood in MDF panels using near infrared (NIR) spectroscopy. Principal component analysis (PCA) and partial least squares (PLS) regressions were performed. MDF panels with different bagasse contents were easily distinguished from one another by PCA of their NIR spectra, with clearly different response patterns. The PLS-R models for the SC content of these MDF samples presented a strong coefficient of determination (0.96) between the NIR-predicted and lab-determined values and a low standard error of prediction (approximately 1.5%) in the cross-validations. Resins (adhesives), cellulose, and lignin were shown to play a key role in these PLS-R calibrations. The PLS-DA model correctly classified 94% of the MDF samples in cross-validation and 98% of the panels in an independent test set. These NIR-based models can be used to quickly estimate the sugarcane bagasse vs. Eucalyptus wood content ratio in unknown MDF samples and to verify the quality of these engineered wood products in an online process.


Some factors complicate comparisons between linkage maps from different studies. This problem can be resolved if measures of precision, such as confidence intervals and frequency distributions, are associated with markers. We examined the precision of the distances and ordering of microsatellite markers in the consensus linkage maps of chromosomes 1, 3 and 4 from two F2 reciprocal Brazilian chicken populations, using bootstrap sampling. Single and consensus maps were constructed. The consensus map was compared with the International Consensus Linkage Map and with the whole-genome sequence. Some loci showed segregation distortion and missing data, but this did not affect the analyses negatively. Several inversions and position shifts were detected, based on 95% confidence intervals and frequency distributions of loci. Some discrepancies in distances between loci and in ordering were due to chance, whereas others could be attributed to other effects, including reciprocal crosses, sampling error of the founder animals from the two populations, F2 population structure, the number of and distance between microsatellite markers, the number of informative meioses, loci segregation patterns, and sex. In the Brazilian consensus GGA1, locus LEI1038 was positioned closer to the true genome sequence than in the International Consensus Map, whereas for GGA3 and GGA4 no such differences were found. Extending these analyses to the remaining chromosomes should facilitate comparisons and the integration of the several available genetic maps, allowing meta-analyses for map construction and quantitative trait loci (QTL) mapping. Such information would increase the precision of the estimates of QTL positions and their effects.
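The bootstrap idea underlying these precision estimates can be sketched as follows: resample the observed meioses with replacement and read a confidence interval off the resampled statistics. The recombination data, sample size, and resample count below are invented for illustration, not taken from the chicken populations:

```python
# Bootstrap 95% confidence interval for a recombination fraction between
# two markers. Meiosis outcomes are simulated, not real genotyping data.
import numpy as np

rng = np.random.default_rng(42)
n_meioses = 400
true_r = 0.12                                    # assumed recombination fraction
recomb = rng.random(n_meioses) < true_r          # True = recombinant meiosis

# Resample the meioses 2000 times and collect the resampled estimates
boot = np.array([
    rng.choice(recomb, size=n_meioses, replace=True).mean()
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])        # 95% bootstrap CI
r_hat = recomb.mean()                            # point estimate
```

The same resampling logic extends to marker ordering: counting how often each order appears among bootstrap maps yields the frequency distributions the abstract mentions.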


Hardy-Weinberg equilibrium (HWE) is an important genetic property that populations should exhibit whenever they are not subject to adverse conditions such as a complete lack of panmixia, excessive mutation, excessive selection pressure, etc. HWE has been evaluated for decades; both frequentist and Bayesian methods are in use today. While historically the HWE formula was developed to examine the transmission of alleles in a population from one generation to the next, HWE concepts are now also used in human disease studies to detect genotyping error and disease susceptibility (association) (Ryckman and Williams, 2008). Most analyses focus on answering the question of whether a population is in HWE; they do not try to quantify how far from equilibrium the population is. In this paper, we propose a simple disequilibrium coefficient for a locus with two alleles. Based on the posterior density of this disequilibrium coefficient, we show how one can conduct a Bayesian analysis to verify how far from HWE a population is. Other coefficients have been introduced in the literature; the advantage of the one introduced here is that, just like standard correlation coefficients, its range is bounded and it is symmetric around zero (equilibrium) with respect to positive and negative values. To test the hypothesis of equilibrium, we use a simple Bayesian significance test, the Full Bayesian Significance Test (FBST); see Pereira, Stern and Wechsler (2008) for a complete review. The proposed disequilibrium coefficient provides an easy and efficient way to carry out the analysis, especially with Bayesian statistics. An R routine (R Development Core Team, 2009) implementing the calculations is provided for the readers.
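The paper defines its own bounded, symmetric coefficient, which is not reproduced here. As a hedged stand-in, the sketch below computes a common normalized two-allele disequilibrium measure, D/(pq) (the within-locus allelic correlation), which is likewise zero under HWE; the genotype counts are hypothetical and deliberately chosen to sit exactly at equilibrium:

```python
# Illustrative two-allele disequilibrium computation. NOTE: this is the
# classical D/(p*q) measure, not necessarily the coefficient proposed in
# the paper. Genotype counts are hypothetical and satisfy HWE exactly.
n_AA, n_Aa, n_aa = 360, 480, 160          # hypothetical genotype counts
n = n_AA + n_Aa + n_aa
p = (2 * n_AA + n_Aa) / (2 * n)           # frequency of allele A
q = 1 - p

D = n_AA / n - p ** 2                     # deviation from the HWE expectation p^2
f = D / (p * q)                           # normalized coefficient; 0 under HWE
```

A Bayesian treatment would place a prior on the genotype frequencies and examine the posterior of such a coefficient, as the abstract outlines.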


Background Data: Photodynamic therapy (PDT) involves the photoinduction of cytotoxicity using a photosensitizer agent, a light source of the proper wavelength, and the presence of molecular oxygen. A model for tissue response to PDT based on the photodynamic threshold dose (Dth) has been widely used: cells exposed to doses below Dth survive, while at doses above Dth necrosis takes place. Objective: This study evaluated light Dth values using two different methods of determination, one based on the depth of necrosis and the other on the width of superficial necrosis. Materials and Methods: Using normal rat liver, we investigated the depth and width of necrosis induced by PDT when a laser with a Gaussian intensity profile is used. Different light doses, photosensitizers (Photogem, Photofrin, Photosan, Foscan, Photodithazine, and Radachlorin), and concentrations were employed. Each experiment was performed on five animals, and averages and standard deviations were calculated. Results: A simple model of the depth and width of necrosis allows the threshold dose to be determined from both depth and surface measurements. Comparison shows that both measurements provide the same value within experimental error. Conclusion: This work demonstrates that, knowing the extent of the superficial necrotic area of a target tissue irradiated by a Gaussian light beam, it is possible to estimate the threshold dose. This technique may find application where Dth must be determined without cutting the tissue.
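A minimal sketch of the two estimation routes, under the simplifying assumptions of a Gaussian surface fluence profile and exponential light attenuation with depth (a plausible reading of the model, not necessarily the paper's exact formulation; all parameter values are illustrative):

```python
# Two routes to the threshold dose Dth for a Gaussian beam, assuming
# phi(r, 0) = phi0 * exp(-2 r^2 / w^2) at the surface and
# phi(0, z) = phi0 * exp(-z / delta) along the beam axis.
# All numbers are hypothetical, not the paper's measurements.
import math

phi0 = 150.0    # central surface fluence (J/cm^2), hypothetical
w = 0.2         # Gaussian beam radius (cm), hypothetical
delta = 0.25    # effective penetration depth (cm), hypothetical
Dth = 30.0      # "true" threshold dose used to generate the observables

# Observables each method would measure on the tissue:
depth = delta * math.log(phi0 / Dth)                       # necrosis depth
width = 2 * w * math.sqrt(0.5 * math.log(phi0 / Dth))      # necrosis diameter

# Method 1: invert the depth relation
Dth_from_depth = phi0 * math.exp(-depth / delta)
# Method 2: invert the surface (width) relation
Dth_from_width = phi0 * math.exp(-2 * (width / 2) ** 2 / w ** 2)
```

Under these assumptions both inversions recover the same Dth, which is the consistency the Results section reports experimentally.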


Background: High-level piano performance requires complex integration of perceptual, motor, cognitive and emotive skills. Observations in psychology and neuroscience studies have suggested reciprocal inhibitory modulation of cognition by emotion and of emotion by cognition. However, it is still unclear how cognitive states may influence pianistic performance. The aim of the present study was to verify the influence of cognitive and affective attention on piano performance. Methods and Findings: Nine pianists were instructed to play the same piece of music, first focusing only on cognitive aspects of musical structure (cognitive performances), and second, paying attention solely to affective aspects (affective performances). Audio files from the performances were examined using a computational model that retrieves nine specific musical features (descriptors): loudness, articulation, brightness, harmonic complexity, event detection, key clarity, mode detection, pulse clarity and repetition. In addition, the number of errors the volunteers made in the recording sessions was counted. Comments from the pianists about their thoughts during the performances were also evaluated. Analysis of the audio files through the musical descriptors indicated that the affective performances had more agogics, legatos and piano phrasing, and a lower perceived event density, compared with the cognitive ones. Error analysis demonstrated that volunteers misplayed more left-hand notes in the cognitive performances than in the affective ones. Volunteers also played more wrong notes in affective than in cognitive performances. These results correspond to the volunteers' comments that in the affective performances the cognitive aspects of piano execution were inhibited, whereas in the cognitive performances the expressiveness was inhibited.
Conclusions: The present results therefore indicate that attention to the emotional aspects of performance enhances expressiveness but constrains cognitive and motor skills in piano execution. In contrast, attention to the cognitive aspects may constrain the expressivity and automatism of piano performance.


This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships, derived for different rain-type systems, between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature, as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both the PR and S-Pol ground-truth datasets and a mean error of 0.244 mm h^-1 (PR) and -0.157 mm h^-1 (S-Pol). For rain-volume estimates using the PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain-area comparisons, the results showed that the proposed formulation is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. Among the other algorithms, GSCAT presented a low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed a rainfall distribution similar to those of the PR and S-Pol, but bimodal.
Last, the five algorithms were evaluated during the TRMM Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind-flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but underestimated rainfall rates above 5 mm h^-1 during the westerly period. NESDIS(1) overestimated in both wind regimes but gave the best westerly representation. NESDIS(2), GSCAT, and GPROF underestimated in both regimes, with GPROF closest to the observations during the easterly flow.
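The validation statistics quoted above (mean error, normalized bias, correlation) are standard and can be computed as below; the rain-rate arrays are small synthetic stand-ins, not TRMM or S-Pol data:

```python
# Standard retrieval-validation statistics: mean error, normalized bias
# (relative difference of totals), and linear correlation. Arrays are
# synthetic stand-ins for retrieved vs. radar-observed rain rates (mm/h).
import numpy as np

reference = np.array([1.0, 2.5, 0.5, 4.0, 3.0, 0.8, 6.0, 2.0])   # e.g. TRMM-PR
estimate  = np.array([1.2, 2.2, 0.7, 4.5, 2.6, 1.0, 6.8, 2.3])   # algorithm

mean_error = np.mean(estimate - reference)                        # mm/h
norm_bias = (estimate.sum() - reference.sum()) / reference.sum()  # fractional
corr = np.corrcoef(reference, estimate)[0, 1]                     # Pearson r
```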


We present K-band spectra of newly born OB stars in the obscured Galactic giant H II region W51A, together with images in the J, H, and Ks bands at approximately 0.8" angular resolution. Four objects have been spectroscopically classified as O-type stars. The mean spectroscopic parallax of the four stars gives a distance of 2.0 +/- 0.3 kpc (error in the mean), significantly smaller than the radio recombination line kinematic value of 5.5 kpc or the values derived from maser proper-motion observations (6-8 kpc). The number of Lyman continuum photons from the contribution of all massive stars (NLyc ≈ 1.5 x 10^50 s^-1) is in good agreement with that inferred from radio recombination lines (NLyc = 1.3 x 10^50 s^-1) after accounting for the smaller distance derived here. We present an analysis of archival high angular resolution images (NAOS-CONICA at the VLT and T-ReCS at Gemini) of the compact region W51 IRS 2. The Ks-band images resolve the infrared source IRS 2, indicating that it is a very young compact H II region. Source IRS 2E was resolved into a compact cluster of three objects (within 660 AU of projected distance), although one of them is just bright extended emission. W51d1 and W51d2 were each identified with compact clusters of three objects (possibly four in the case of W51d1). Although IRS 2E is the brightest source in the K band and at 12.6 μm, it is not clearly associated with a radio continuum source. Our spectrum of IRS 2E shows, as in previous work, strong emission in Brγ and He I, as well as three forbidden emission lines of [Fe III] and emission lines of molecular hydrogen (H2), marking it as a massive young stellar object.
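The spectroscopic-parallax distance works through the distance modulus, m - M = 5 log10(d / 10 pc) + A. The sketch below uses illustrative magnitudes and extinction (not the paper's photometry), chosen so the result lands near the 2 kpc quoted above:

```python
# Spectroscopic parallax via the distance modulus. All input values are
# hypothetical, chosen only to illustrate the arithmetic.
import math

m = 9.0      # apparent magnitude (hypothetical)
M = -4.0     # absolute magnitude for the assigned O-type class (hypothetical)
A = 1.5      # extinction toward the star, in magnitudes (hypothetical)

# m - M = 5*log10(d / 10 pc) + A  =>  solve for d in parsecs
d_pc = 10 ** ((m - M - A + 5) / 5)
d_kpc = d_pc / 1000
```

Averaging such per-star distances, as done for the four O stars, then gives the cluster distance with an error in the mean.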


We present a new set of oscillator strengths for 142 Fe II lines in the wavelength range 4000-8000 Å. Our gf-values are both accurate and precise, because each multiplet was globally normalized using laboratory data (accuracy), while the relative gf-values of individual lines within a given multiplet were obtained from theoretical calculations (precision). Our line list was tested with the Sun and with high-resolution (R ≈ 10^5), high-S/N (≈700-900) Keck+HIRES spectra of the metal-poor stars HD 148816 and HD 140283, for which line-to-line scatter (σ) in the iron abundances from Fe II lines as low as 0.03, 0.04, and 0.05 dex is found, respectively. For these three stars the standard error in the mean iron abundance from Fe II lines is negligible (σ_mean ≤ 0.01 dex). The mean solar iron abundance obtained using our gf-values and different model atmospheres is A(Fe) = 7.45 (σ = 0.02).
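The negligible standard error of the mean follows from the line-to-line scatter via σ_mean = σ / sqrt(N). A minimal check, assuming for illustration that all 142 lines of the list contribute (the abstract does not state the per-star line count):

```python
# Standard error of the mean abundance from line-to-line scatter.
# N = 142 is the full line-list size, assumed usable here for illustration.
import math

sigma = 0.03      # line-to-line scatter in the Fe abundance (dex)
n_lines = 142     # number of Fe II lines (assumption: all usable)
sigma_mean = sigma / math.sqrt(n_lines)   # standard error of the mean (dex)
```

Even with far fewer usable lines per star, σ_mean stays comfortably below the quoted 0.01 dex bound.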


The aim of this study was to establish a digital elevation model and its horizontal resolution for interpolating the annual air temperature of Alagoas State by means of multiple linear regression models. A multiple linear regression model was fitted to series (11 to 34 years) of annual air temperatures obtained from 28 weather stations in the states of Alagoas, Bahia, Pernambuco and Sergipe, in the Northeast of Brazil, as a function of latitude, longitude and altitude. The elevation models SRTM and GTOPO30 were used in the analysis, with original resolutions of 90 and 900 m, respectively. The SRTM was resampled to horizontal resolutions of 125, 250, 500, 750 and 900 m. To spatialize the annual mean air temperature for the state of Alagoas, a multiple linear regression model was used for each elevation model and spatial resolution on a latitude-longitude grid. In Alagoas, estimates based on SRTM data yielded a lower standard error of estimate (0.57 °C) and a higher coefficient of determination (r² = 0.62) than those obtained from GTOPO30 (0.93 °C and 0.20). Across the SRTM resolutions, no significant differences were observed in either the standard error (from 0.55 °C at 750 m to 0.58 °C at 250 m) or the coefficient of determination (from 0.60 at 500 m to 0.65 at 750 m). The spatialization of annual air temperature in Alagoas via multiple regression models applied to SRTM data showed higher concordance than that obtained with GTOPO30, independent of the spatial resolution.
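The regression T = b0 + b1·lat + b2·lon + b3·alt can be fitted by ordinary least squares as sketched below; the station coordinates and temperatures are synthetic, generated from an assumed linear law, and do not reproduce the study's 28 stations:

```python
# Multiple linear regression of annual temperature on latitude, longitude
# and altitude, fitted by least squares. Station data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 28                                  # same station count as the study
lat = rng.uniform(-11.0, -8.5, n)       # degrees (roughly the Alagoas region)
lon = rng.uniform(-38.5, -35.0, n)      # degrees
alt = rng.uniform(0, 800, n)            # meters

# Synthetic "observed" temperatures from an assumed linear law plus noise
T = 32.0 + 0.8 * lat + 0.1 * lon - 0.006 * alt + rng.normal(0, 0.2, n)

# Design matrix with an intercept column; solve by least squares
X = np.column_stack([np.ones(n), lat, lon, alt])
coef, *_ = np.linalg.lstsq(X, T, rcond=None)
T_hat = X @ coef
rmse = np.sqrt(np.mean((T - T_hat) ** 2))   # analogous to the SEE above
```

Applying `coef` to every grid cell of a DEM (lat, lon, elevation) then spatializes the temperature field, which is the step the study performs with SRTM and GTOPO30.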


Males (drones) of stingless bees tend to congregate near the entrances of conspecific nests, where they wait for virgin queens initiating their nuptial flight. We observed that the Neotropical solitary wasp Trachypus boharti (Hymenoptera, Crabronidae) specifically preys on males of the stingless bee Scaptotrigona postica (Hymenoptera, Apidae); these wasps captured up to 50 males per day near the entrance of a single hive. Over 90% of the wasp attacks were unsuccessful; such erroneous attacks often involved conspecific wasps and worker bees. After capturing non-male prey, wasps almost immediately released these individuals unharmed and continued hunting. A simple behavioral experiment showed that at short distances wasps were neither specifically attracted to S. postica males nor repelled by workers of the same species. Likely, short-range prey detection near the bees' nest is achieved mainly by vision, whereas close-range prey recognition is based principally on chemical and/or mechanical cues. We argue that the dependence on the wasp's visual perception during attack, together with the crowded and dynamic hunting conditions, caused wasps to make many failed preying attempts. Two wasp-density-related factors, wasp-prey distance and wasp-wasp encounters, may account for the fact that the highest rates of male capture and of unsuccessful wasp-bee encounters occurred at intermediate wasp numbers.


This paper studies semistability of the recursive Kalman filter in the context of linear time-varying (LTV), possibly nondetectable systems with incorrect noise information. Semistability is a key property, as it ensures that the actual estimation error does not diverge exponentially. We explore structural properties of the filter to obtain a necessary and sufficient condition for the filter to be semistable. The condition involves neither limiting gains nor the solution of Riccati equations, as these can be difficult to obtain numerically and may not exist. We also compare semistability with the notions of stability and of stability with respect to the initial error covariance, and we show that semistability, in a sense, makes no distinction between persistent and nonpersistent incorrect noise models, as opposed to stability. In the linear time-invariant scenario we obtain algebraic, easy-to-test conditions for semistability and stability, which complement results available in the context of detectable systems. Illustrative examples are included.
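For context, the recursion whose long-run behavior is at issue is the Riccati iteration that propagates the filter's error covariance. The scalar sketch below only illustrates that recursion on a well-behaved toy system with assumed (possibly incorrect) noise variances; it is not the paper's semistability test:

```python
# Scalar Kalman filter covariance recursion (Riccati iteration) for a
# stable LTI toy system. Parameters are illustrative; q and r play the
# role of the "assumed noise information", which may be incorrect.
a, c = 0.9, 1.0        # state transition and observation coefficients
q, r = 0.1, 0.5        # assumed process and measurement noise variances

P = 10.0               # initial error covariance
for _ in range(200):
    P_pred = a * P * a + q                   # time update
    K = P_pred * c / (c * P_pred * c + r)    # Kalman gain
    P = (1 - K * c) * P_pred                 # measurement update

# After many iterations P sits at a fixed point of the recursion
P_next = (a * P * a + q) * r / (a * P * a + q + r)
residual = abs(P_next - P)
```

In the nondetectable or mis-specified cases the paper addresses, such a recursion need not converge, which is precisely why a condition ruling out exponential divergence (semistability) is valuable.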


This paper studies a nonlinear, discrete-time matrix system arising in the stability analysis of Kalman filters. These systems present an internal coupling between the state components that gives rise to complex dynamic behavior. The problem of partial stability, which requires that a specific component of the state of the system converge exponentially, is studied and solved. The convergent state component is strongly linked with the behavior of Kalman filters, since it can be used to provide bounds for the error covariance matrix under uncertainties in the noise measurements. We exploit the special features of the system, mainly its connections with linear systems, to obtain an algebraic test for partial stability. Finally, motivated by applications in which polynomial divergence of the estimates is acceptable, we study and solve a partial semistability problem.


Incoherent eta photoproduction in nuclei is evaluated at forward angles within 4 to 9 GeV using a multiple-scattering Monte Carlo cascade calculation with full eta-nucleus final-state interactions. The Primakoff, nuclear coherent and nuclear incoherent components of the cross sections fit previous Cornell measurements for Be and Cu remarkably well, suggesting a destructive interference between the Coulomb and nuclear coherent amplitudes for Cu. The inelastic background of the data is consistently attributed to the nuclear incoherent part, which is clearly not isotropic as previously assumed in Cornell's analysis. The respective Primakoff cross sections from Be and Cu give Γ(η→γγ) = 0.476(62) keV, where the quoted error is only statistical. This result is consistent with the Particle Data Group average of 0.510(26) keV and in sharp contrast (~50%) with the value of 0.324(46) keV obtained at Cornell.


We report on the event structure and double helicity asymmetry (A_LL) of jet production in longitudinally polarized p + p collisions at √s = 200 GeV. Photons and charged particles were measured by the PHENIX experiment at midrapidity |η| < 0.35, with the requirement of a high-momentum (> 2 GeV/c) photon in the event. Event-structure observables, such as the multiplicity, p_T density and thrust in the PHENIX acceptance, were measured and compared with results from the PYTHIA event generator and the GEANT detector simulation. The shape of jets and the underlying event were well reproduced at this collision energy. For the measurement of jet A_LL, photons and charged particles were clustered with a seed-cone algorithm to obtain the cluster p_T sum (p_T^reco). The effect of detector response and of the underlying event on p_T^reco was evaluated with the simulation. The production rate of reconstructed jets is satisfactorily reproduced by the next-to-leading-order perturbative-QCD jet production cross section. For 4 < p_T^reco < 12 GeV/c with an average beam polarization of <P> = 49%, we measured A_LL = -0.0014 +/- 0.0037(stat) in the lowest p_T^reco bin (4-5 GeV/c) and -0.0181 +/- 0.0282(stat) in the highest p_T^reco bin (10-12 GeV/c), with a beam-polarization scale error of 9.4% and a p_T scale error of 10%. Jets in the measured p_T^reco range arise primarily from hard-scattered gluons with momentum fraction 0.02 < x < 0.3, according to PYTHIA. The measured A_LL is compared with predictions that assume various ΔG(x) distributions based on the Gluck-Reya-Stratmann-Vogelsang parameterization. The present result imposes the limit -1.1 < ∫ dx ΔG(x, μ² = 1 GeV²) < 0.4 over 0.02 < x < 0.3 at the 95% confidence level, or ∫ dx ΔG(x, μ² = 1 GeV²) < 0.5 at the 99% confidence level.


We propose and analyze two different Bayesian online algorithms for learning in discrete hidden Markov models and compare their performance with that of the well-known Baldi-Chauvin algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves for these algorithms in simplified situations and compare their performances.
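For discrete distributions, the Kullback-Leibler divergence used here as the generalization measure is a one-line computation; the two distributions below are illustrative stand-ins (e.g. true vs. learned emission probabilities of an HMM state):

```python
# KL divergence D(p || q) = sum_i p_i * log(p_i / q_i) for discrete
# distributions. p and q are illustrative, not from the paper.
import math

p = [0.5, 0.3, 0.2]     # "true" distribution
q = [0.4, 0.4, 0.2]     # model's current estimate

kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

Plotting this divergence against the number of observed sequences, for each algorithm, yields the learning curves the abstract refers to. Note D(p || q) is asymmetric and requires q_i > 0 wherever p_i > 0.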