943 results for Error


Relevance: 10.00%

Abstract:

Medication incidents cause harm to patients and additional costs to the health system. The variety of terms used to report them leads to divergent research results and confuses reporters. The objective of this study was to review the terms used to describe these incidents, comparing them with the available official concepts and definitions. The PubMed, MEDLINE, IPA and LILACS databases were searched for studies published between January 1990 and December 2005, and 33 publications were selected. The supranational terminology recommended for describing medication incidents was found to be insufficient, although there is consensus on the use of each expression according to the type of incident. The term Adverse Drug Reaction is used mostly when no intentionality is involved. The expression Adverse Drug Event was used more often to describe incidents during hospitalization, and Drug-Related Problem was used more in studies evaluating pharmaceutical care (use of, or lack of, the medication). Even so, the dividing line between these three categories is neither clear nor simple. Future studies of the relationships among these categories, together with multidisciplinary research on human error, may support the proposal of new definitions.

Relevance: 10.00%

Abstract:

This study aimed to describe and compare the ventilation behavior during an incremental test using three mathematical models, and to compare the features of the ventilation curve fitted by the best model between aerobically trained (TR) and untrained (UT) men. Thirty-five subjects underwent a treadmill test with 1 km.h(-1) increases every minute until exhaustion. Twenty-second ventilation averages were plotted against time and fitted by a bi-segmental regression model (2SRM), a three-segmental regression model (3SRM), and a growth exponential model (GEM). The residual sum of squares (RSS) and mean square error (MSE) were calculated for each model. Correlations were calculated between peak VO2 (VO2PEAK), peak speed (SpeedPEAK), the ventilatory threshold identified by the best model (VT2SRM), and the first derivative calculated for workloads below (moderate intensity) and above (heavy intensity) VT2SRM. The RSS and MSE for GEM were significantly higher (p < 0.01) than for 2SRM and 3SRM in the pooled data and in UT, but no significant difference was observed among the mathematical models in TR. In the pooled data, the first derivative at moderate intensities showed significant negative correlations with VT2SRM (r = -0.58; p < 0.01) and SpeedPEAK (r = -0.46; p < 0.05), while the first derivative at heavy intensities showed a significant negative correlation with VT2SRM (r = -0.43; p < 0.05). In the UT group the first derivative at moderate intensities showed significant negative correlations with VT2SRM (r = -0.65; p < 0.05) and SpeedPEAK (r = -0.61; p < 0.05), while in the TR group the first derivative at heavy intensities showed significant negative correlations with VT2SRM (r = -0.73; p < 0.01), SpeedPEAK (r = -0.73; p < 0.01) and VO2PEAK (r = -0.61; p < 0.05). The ventilation behavior during an incremental treadmill test tends to show only one threshold. UT subjects showed a slower ventilation increase at moderate intensities, while TR subjects showed a slower ventilation increase at heavy intensities.
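As an illustration of the segmented-model idea described above, the sketch below fits a bi-segmental regression (2SRM) to synthetic 20-second ventilation averages by grid-searching the breakpoint that minimizes the residual sum of squares; the data, function names, and the independent fitting of the two segments are assumptions for this example, not the authors' implementation.

```python
# Hypothetical 2SRM sketch: two straight lines joined at a candidate
# breakpoint, with the ventilatory threshold taken as the breakpoint
# that minimizes the residual sum of squares (RSS). The two segments are
# fitted independently here; a continuity constraint could be added.
import numpy as np

def two_segment_rss(t, ve, breakpoint):
    """Fit a line on each side of `breakpoint` and return the total RSS."""
    left, right = t <= breakpoint, t > breakpoint
    rss = 0.0
    for mask in (left, right):
        if mask.sum() < 2:
            return np.inf
        coeffs = np.polyfit(t[mask], ve[mask], 1)
        rss += np.sum((ve[mask] - np.polyval(coeffs, t[mask])) ** 2)
    return rss

def fit_2srm(t, ve):
    """Grid-search the breakpoint (candidate ventilatory threshold)."""
    candidates = t[2:-2]  # keep at least 2 points on each side
    rss = np.array([two_segment_rss(t, ve, b) for b in candidates])
    return candidates[np.argmin(rss)], rss.min()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0, 12, 1 / 3)                           # minutes (20-s bins)
    ve = np.where(t < 7, 20 + 3 * t, 41 + 12 * (t - 7))   # L/min, synthetic
    ve = ve + rng.normal(0, 1.5, t.size)
    vt, rss = fit_2srm(t, ve)
    print(f"estimated threshold at {vt:.2f} min, RSS = {rss:.1f}")
```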

Relevance: 10.00%

Abstract:

In this work, the effects of indenter tip roundness on the load-depth indentation curves were analyzed using finite element modeling. The tip roundness level was studied based on the ratio between tip radius and maximum penetration depth (R/h(max)), which varied from 0.02 to 1. The proportional curvature constant (C), the exponent of depth during loading (alpha), the initial unloading slope (S), the correction factor (beta), the level of piling-up or sinking-in (h(c)/h(max)), and the ratio h(max)/h(f) are shown to be strongly influenced by the ratio R/h(max). The hardness (H) was found to be independent of R/h(max) in the range studied. The Oliver and Pharr method was successful in following the variation of h(c)/h(max) with the ratio R/h(max) through the variation of S with R/h(max). However, this work confirmed the differences between the hardness values calculated using the Oliver-Pharr method and those obtained directly from the finite element calculations; these differences derive from the error in area calculation that occurs for certain combinations of indented material properties. The ratio of plastic work to total work (W(p)/W(t)) was found to be independent of the ratio R/h(max), which demonstrates that methods for the calculation of mechanical properties based on indentation energy are potentially not susceptible to errors caused by tip roundness.
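For context, the sketch below shows the basic Oliver-Pharr relations the abstract refers to (contact depth from the initial unloading slope S, and hardness from the contact area); the epsilon value of 0.75 and the ideal Berkovich area function A = 24.5*h_c^2 are standard textbook assumptions, and the numbers in the example are illustrative rather than taken from this study.

```python
# Minimal sketch of the Oliver-Pharr quantities mentioned in the abstract
# (contact depth h_c, contact area A, hardness H). epsilon = 0.75 and the
# ideal Berkovich area function are assumed defaults, not study values.
def oliver_pharr_hardness(P_max, h_max, S, epsilon=0.75):
    """Return (h_c/h_max, hardness) from peak load, peak depth and slope S."""
    h_c = h_max - epsilon * P_max / S      # contact depth
    area = 24.5 * h_c ** 2                 # ideal Berkovich area function
    return h_c / h_max, P_max / area

# Example with illustrative numbers (P in mN, h in um, S in mN/um):
print(oliver_pharr_hardness(P_max=10.0, h_max=0.50, S=60.0))
```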

Relevance: 10.00%

Abstract:

In this work, the effects of conical indentation variables on the load-depth indentation curves were analyzed using finite element modeling and dimensional analysis. A 2^6 factorial design was used with the aim of quantifying the effects of the mechanical properties of the indented material and of the indenter geometry. The analysis was based on the input variables Y/E, R/h(max), n, theta, E, and h(max). The dimensional variables E and h(max) were used such that each value of the dimensionless ratio Y/E was obtained with two different values of E, and each value of the dimensionless ratio R/h(max) was obtained with two different h(max) values. A set of dimensionless functions was defined to analyze the effect of the input variables: Pi(1) = P/(E h^2), Pi(2) = h(c)/h, Pi(3) = H/Y, Pi(4) = S/(E h(max)), Pi(6) = h(max)/h(f), and Pi(7) = W(P)/W(T). These six functions were found to depend only on the dimensionless variables studied (Y/E, R/h(max), n, theta). Another dimensionless function, Pi(5) = beta, was not well defined for most of the dimensionless variables, and the only variable with a significant effect on beta was theta. However, beta showed a strong dependence on the fraction of the data selected to fit the unloading curve, which means that beta is especially susceptible to the error in the calculation of the initial unloading slope.
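For readability, the dimensionless groups listed above can be written out as follows (the notation is reconstructed from the plain-text abstract, so the exact subscripts are assumptions):

```latex
\Pi_1 = \frac{P}{E h^{2}}, \quad
\Pi_2 = \frac{h_c}{h}, \quad
\Pi_3 = \frac{H}{Y}, \quad
\Pi_4 = \frac{S}{E h_{\max}}, \quad
\Pi_5 = \beta, \quad
\Pi_6 = \frac{h_{\max}}{h_f}, \quad
\Pi_7 = \frac{W_P}{W_T}
```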

Relevance: 10.00%

Abstract:

Diagnostic methods have been an important tool in regression analysis to detect anomalies, such as departures from the error assumptions and the presence of outliers and influential observations in the fitted models. Assuming censored data, we considered classical and Bayesian analyses, the latter assuming non-informative priors for the parameters of a model with a cure fraction. The Bayesian approach used Markov chain Monte Carlo methods with Metropolis-Hastings steps to obtain the posterior summaries of interest. Several influence measures, such as local influence, total local influence of an individual, local influence on predictions, and generalized leverage, were derived, analyzed, and discussed for survival data with a cure fraction and covariates. The relevance of the approach was illustrated with a real data set, where it is shown that, by removing the most influential observations, the decision about which model best fits the data is changed.
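The sketch below illustrates the random-walk Metropolis-Hastings step mentioned above in its generic form; the target log-posterior, proposal scale, and toy example are assumptions and do not reproduce the paper's cure-fraction survival model.

```python
# Generic random-walk Metropolis-Hastings sketch (illustrative only; the
# cure-fraction survival likelihood of the paper is not reproduced here).
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=5000, scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    samples, current_lp = [], log_post(theta)
    for _ in range(n_iter):
        proposal = theta + rng.normal(0.0, scale, size=theta.shape)
        lp = log_post(proposal)
        if np.log(rng.uniform()) < lp - current_lp:   # accept/reject step
            theta, current_lp = proposal, lp
        samples.append(theta.copy())
    return np.array(samples)

# Toy usage: sample from a standard normal "posterior".
draws = metropolis_hastings(lambda t: -0.5 * np.sum(t ** 2), theta0=[0.0])
print(draws.mean(), draws.std())
```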

Relevance: 10.00%

Abstract:

Medium density fiberboard (MDF) is an engineered wood product formed by breaking down selected lignin-cellulosic residual material into fibers, combining them with wax and a resin binder, and then forming panels by applying high temperature and pressure. Because the raw material in the industrial process is ever-changing, the panel industry requires methods for monitoring the composition of its products. The aim of this study was to estimate the ratio of sugarcane (SC) bagasse to Eucalyptus wood in MDF panels using near infrared (NIR) spectroscopy. Principal component analysis (PCA) and partial least squares (PLS) regressions were performed. MDF panels having different bagasse contents were easily distinguished from each other by the PCA of their NIR spectra, with clearly different patterns of response. The PLS-R models for the SC content of these MDF samples presented a strong coefficient of determination (0.96) between the NIR-predicted and lab-determined values and a low standard error of prediction (approximately 1.5%) in the cross-validations. A key role of the resins (adhesives), cellulose, and lignin in such PLS-R calibrations was shown. The PLS-DA model correctly classified 94% of the MDF samples in cross-validation and 98% of the panels in an independent test set. These NIR-based models can be useful to quickly estimate the sugarcane bagasse vs. Eucalyptus wood content ratio in unknown MDF samples and to verify the quality of these engineered wood products in an online process.
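A minimal sketch of the kind of PLS calibration described above, assuming scikit-learn's PLSRegression and synthetic spectra; the number of latent components, variable names, and data are illustrative assumptions, not the study's calibration.

```python
# Hypothetical PLS regression of NIR spectra against sugarcane-bagasse
# content; the synthetic spectra and component count are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 60, 200
bagasse = rng.uniform(0, 100, n_samples)             # % bagasse in the panel
spectra = (np.outer(bagasse, rng.normal(0, 0.01, n_wavelengths))
           + rng.normal(0, 0.02, (n_samples, n_wavelengths)))  # fake NIR data

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, spectra, bagasse, cv=10).ravel()

r2 = np.corrcoef(pred, bagasse)[0, 1] ** 2           # squared correlation
sep = np.std(pred - bagasse, ddof=1)                 # standard error of prediction
print(f"R^2 = {r2:.2f}, SEP = {sep:.1f}%")
```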

Relevance: 10.00%

Abstract:

Some factors complicate comparisons between linkage maps from different studies. This problem can be resolved if measures of precision, such as confidence intervals and frequency distributions, are associated with the markers. We examined the precision of distances and ordering of microsatellite markers in the consensus linkage maps of chromosomes 1, 3 and 4 from two reciprocal F2 Brazilian chicken populations, using bootstrap sampling. Single and consensus maps were constructed. The consensus map was compared with the International Consensus Linkage Map and with the whole genome sequence. Some loci showed segregation distortion and missing data, but this did not affect the analyses negatively. Several inversions and position shifts were detected, based on 95% confidence intervals and frequency distributions of loci. Some discrepancies in distances between loci and in ordering were due to chance, whereas others could be attributed to other effects, including the reciprocal crosses, sampling error of the founder animals from the two populations, the F2 population structure, the number of and distance between microsatellite markers, the number of informative meioses, loci segregation patterns, and sex. In the Brazilian consensus GGA1, locus LEI1038 was in a position closer to the true genome sequence than in the International Consensus Map, whereas for GGA3 and GGA4 no such differences were found. Extending these analyses to the remaining chromosomes should facilitate comparisons and the integration of several available genetic maps, allowing meta-analyses for map construction and quantitative trait loci (QTL) mapping. The precision of the estimates of QTL positions and their effects would be increased with such information.
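As a generic illustration of the bootstrap precision measures mentioned above, the sketch below attaches a 95% confidence interval to a simple statistic computed from resampled data; the toy distances and the choice of statistic are assumptions, not the study's genotype-based map estimates.

```python
# Illustrative bootstrap confidence interval for a generic statistic;
# the simulated marker distances are placeholders, not the study's data.
import numpy as np

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    boot = np.array([statistic(data[rng.integers(0, n, n)])
                     for _ in range(n_boot)])
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Toy usage: CI on the mean of simulated marker distances (cM).
distances = np.random.default_rng(2).normal(25.0, 4.0, 120)
print(bootstrap_ci(distances, np.mean))
```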

Relevance: 10.00%

Abstract:

Hardy-Weinberg Equilibrium (HWE) is an important genetic property that populations should have whenever they are not subject to adverse situations such as complete lack of panmixia, excess of mutations, excess of selection pressure, etc. HWE has been evaluated for decades; both frequentist and Bayesian methods are in use today. While historically the HWE formula was developed to examine the transmission of alleles in a population from one generation to the next, the use of HWE concepts has expanded in human disease studies to the detection of genotyping error and disease susceptibility (association); see Ryckman and Williams (2008). Most analyses focus on trying to answer the question of whether a population is in HWE; they do not try to quantify how far from equilibrium the population is. In this paper, we propose the use of a simple disequilibrium coefficient for a locus with two alleles. Based on the posterior density of this disequilibrium coefficient, we show how one can conduct a Bayesian analysis to verify how far from HWE a population is. Other coefficients have been introduced in the literature; the advantage of the one introduced in this paper is that, just like standard correlation coefficients, its range is bounded and it is symmetric around zero (equilibrium) when comparing positive and negative values. To test the hypothesis of equilibrium, we use a simple Bayesian significance test, the Full Bayesian Significance Test (FBST); see Pereira, Stern and Wechsler (2008) for a complete review. The proposed disequilibrium coefficient provides an easy and efficient way to carry out the analysis, especially if one uses Bayesian statistics. An R routine (R Development Core Team, 2009) that implements the calculations is provided for readers.
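For reference, a standard two-allele parameterization of departure from HWE is shown below; this is the textbook form with a single disequilibrium parameter D, and it is not necessarily identical to the bounded, symmetric coefficient proposed in the paper.

```latex
% Two-allele genotype frequencies with allele frequencies p and q = 1 - p;
% HWE corresponds to D = 0 (standard parameterization, assumed form).
P_{AA} = p^{2} + D, \qquad
P_{Aa} = 2pq - 2D, \qquad
P_{aa} = q^{2} + D
```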

Relevance: 10.00%

Abstract:

Background Data: Photodynamic therapy (PDT) involves the photoinduction of cytotoxicity using a photosensitizer agent, a light source of the proper wavelength, and the presence of molecular oxygen. A model for tissue response to PDT based on the photodynamic threshold dose (Dth) has been widely used. In this model, cells exposed to doses below Dth survive, while at doses above Dth necrosis takes place. Objective: This study evaluated the light Dth values using two different methods of determination: one based on the depth of necrosis and the other on the width of superficial necrosis. Materials and Methods: Using normal rat liver, we investigated the depth and width of necrosis induced by PDT when a laser with a Gaussian intensity profile is used. Different light doses, photosensitizers (Photogem, Photofrin, Photosan, Foscan, Photodithazine, and Radachlorin), and concentrations were employed. Each experiment was performed on five animals, and averages and standard deviations were calculated. Results: A simple model for the depth and width of necrosis allows the threshold dose to be determined from both depth and surface measurements. Comparison shows that both measurements provide the same value within experimental error. Conclusion: This work demonstrates that, by knowing the extent of the superficial necrotic area of a target tissue irradiated by a Gaussian light beam, it is possible to estimate the threshold dose. This technique may find application where Dth must be determined without cutting the tissue.
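The relations below give a commonly used form of the threshold-dose model for the two measurements discussed above, assuming exponential light attenuation with penetration depth delta and a Gaussian surface profile of 1/e^2 radius w; the specific expressions used by the authors are not given in the abstract.

```latex
% Depth of necrosis from in-depth attenuation, and width of superficial
% necrosis from the Gaussian surface profile (assumed standard forms):
\phi(z) = \phi_0\, e^{-z/\delta}
  \;\Rightarrow\;
  z_{\mathrm{nec}} = \delta \ln\!\frac{\phi_0}{D_{\mathrm{th}}},
\qquad
\phi(r) = \phi_0\, e^{-2r^{2}/w^{2}}
  \;\Rightarrow\;
  r_{\mathrm{nec}} = \frac{w}{\sqrt{2}}\sqrt{\ln\frac{\phi_0}{D_{\mathrm{th}}}}
```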

Relevance: 10.00%

Abstract:

Background: High-level piano performance requires complex integration of perceptual, motor, cognitive and emotive skills. Observations in psychology and neuroscience studies have suggested a reciprocal inhibitory modulation of cognition by emotion and of emotion by cognition. However, it is still unclear how cognitive states may influence pianistic performance. The aim of the present study is to verify the influence of cognitive and affective attention on piano performances. Methods and Findings: Nine pianists were instructed to play the same piece of music, first focusing only on the cognitive aspects of the musical structure (cognitive performances), and second, paying attention solely to the affective aspects (affective performances). Audio files from the pianistic performances were examined using a computational model that retrieves nine specific musical features (descriptors): loudness, articulation, brightness, harmonic complexity, event detection, key clarity, mode detection, pulse clarity and repetition. In addition, the number of errors the volunteers made in the recording sessions was counted. Comments from the pianists about their thoughts during the performances were also evaluated. The analysis of the audio files through the musical descriptors indicated that the affective performances have more agogics, legatos and piano phrasing, and less perceived event density, when compared to the cognitive ones. Error analysis demonstrated that volunteers misplayed more left-hand notes in the cognitive performances than in the affective ones. Volunteers also played more wrong notes in affective than in cognitive performances. These results correspond to the volunteers' comments that in the affective performances the cognitive aspects of piano execution are inhibited, whereas in the cognitive performances the expressiveness is inhibited. Conclusions: The present results therefore indicate that attention to the emotional aspects of performance enhances expressiveness but constrains cognitive and motor skills in piano execution. In contrast, attention to the cognitive aspects may constrain the expressivity and automatism of piano performances.

Relevance: 10.00%

Abstract:

This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships, derived for different raining-type systems, between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both the PR and S-Pol ground truth datasets and a mean error of 0.244 mm h(-1) (PR) and -0.157 mm h(-1) (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain area comparisons, the results showed that the proposed formulation is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. Regarding the performance of the other algorithms, GSCAT presented a low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed a rainfall distribution similar to those of the PR and S-Pol but with a bimodal distribution. Last, the five algorithms were evaluated during the TRMM Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but it underestimated for the westerly period for rainfall rates above 5 mm h(-1). NESDIS(1) overestimated for both wind regimes but presented the best westerly representation. NESDIS(2), GSCAT, and GPROF underestimated in both regimes, but GPROF was closer to the observations during the easterly flow.
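The sketch below computes the kinds of validation statistics quoted above (mean error, normalized bias, and correlation coefficient) for an estimated versus a reference rain field; the normalized-bias definition and the toy arrays are assumptions standing in for the TMI estimates and the PR/S-Pol references.

```python
# Sketch of generic rainfall validation metrics; the normalized-bias
# definition used here is an assumption, and the arrays are toy data.
import numpy as np

def validation_metrics(estimate, reference):
    estimate, reference = np.asarray(estimate), np.asarray(reference)
    mean_error = np.mean(estimate - reference)                       # mm/h
    norm_bias = (estimate.sum() - reference.sum()) / reference.sum()
    corr = np.corrcoef(estimate, reference)[0, 1]
    return mean_error, norm_bias, corr

est = [1.2, 0.0, 3.4, 5.1, 0.8]   # "retrieved" rain rates, mm/h
ref = [1.0, 0.2, 3.0, 5.5, 0.6]   # "ground truth" rain rates, mm/h
print(validation_metrics(est, ref))
```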

Relevance: 10.00%

Abstract:

We present K-band spectra of newly born OB stars in the obscured Galactic giant H II region W51A, together with approximately 0.8'' angular resolution images in the J, H, and K(S) bands. Four objects have been spectroscopically classified as O-type stars. The mean spectroscopic parallax of the four stars gives a distance of 2.0 +/- 0.3 kpc (error in the mean), significantly smaller than the radio recombination line kinematic value of 5.5 kpc or the values derived from maser proper motion observations (6-8 kpc). The number of Lyman continuum photons from the contribution of all massive stars (NLyc approximately 1.5 x 10(50) s(-1)) is in good agreement with that inferred from radio recombination lines (NLyc = 1.3 x 10(50) s(-1)) after accounting for the smaller distance derived here. We present an analysis of archival high angular resolution images (NAOS-CONICA at the VLT and T-ReCS at Gemini) of the compact region W51 IRS 2. The K(S)-band images resolve the infrared source IRS 2, indicating that it is a very young compact H II region. The source IRS 2E was resolved into a compact cluster of three objects (within 660 AU of projected distance), although one of them is just bright extended emission. W51d1 and W51d2 were each identified with compact clusters of three objects (maybe four in the case of W51d1). Although IRS 2E is the brightest source in the K band and at 12.6 mu m, it is not clearly associated with a radio continuum source. Our spectrum of IRS 2E shows, as in previous work, strong emission in Br gamma and He I, as well as three forbidden emission lines of Fe III and emission lines of molecular hydrogen (H(2)), marking it as a massive young stellar object.
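For context, the distance-modulus relation underlying a spectroscopic parallax is shown below in its standard form; the apparent magnitude m_K, absolute magnitude M_K, and extinction A_K for the W51A stars are not given in the abstract, so the symbols are generic.

```latex
% Generic spectroscopic-parallax relation (standard distance modulus
% with extinction; values for the W51A stars are not quoted here):
m_K - M_K = 5\log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right) + A_K
\quad\Longrightarrow\quad
d = 10^{\,(m_K - M_K - A_K + 5)/5}\ \mathrm{pc}
```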

Relevance: 10.00%

Abstract:

We present a new set of oscillator strengths for 142 Fe II lines in the wavelength range 4000-8000 angstrom. Our gf-values are both accurate and precise, because each multiplet was globally normalized using laboratory data (accuracy), while the relative gf-values of individual lines within a given multiplet were obtained from theoretical calculations (precision). Our line list was tested with the Sun and with high-resolution (R approximately 10(5)), high-S/N (approximately 700-900) Keck+HIRES spectra of the metal-poor stars HD 148816 and HD 140283, for which line-to-line scatter (sigma) in the iron abundances from Fe II lines as low as 0.03, 0.04, and 0.05 dex is found, respectively. For these three stars the standard error in the mean iron abundance from Fe II lines is negligible (sigma(mean) <= 0.01 dex). The mean solar iron abundance obtained using our gf-values and different model atmospheres is A(Fe) = 7.45 (sigma = 0.02).
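The standard error of the mean quoted above follows from the usual sigma/sqrt(N) relation; the worked number below assumes roughly 25 or more measured Fe II lines per star (the exact count per star is not stated in the abstract) and uses the largest quoted scatter.

```latex
% Standard error of the mean abundance from N Fe II lines with
% line-to-line scatter sigma (assumed N for illustration only):
\sigma_{\mathrm{mean}} = \frac{\sigma}{\sqrt{N}},
\qquad
\frac{0.05\ \mathrm{dex}}{\sqrt{25}} = 0.01\ \mathrm{dex}
```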

Relevance: 10.00%

Abstract:

The aim of this study was to identify a digital elevation model and horizontal resolution suitable for interpolating the annual air temperature of Alagoas State by means of multiple linear regression models. A multiple linear regression model was adjusted to series (11 to 34 years) of annual air temperatures obtained from 28 weather stations in the states of Alagoas, Bahia, Pernambuco and Sergipe, in the Northeast of Brazil, as a function of latitude, longitude and altitude. The elevation models SRTM and GTOPO30 were used in the analysis, with original resolutions of 90 and 900 m, respectively. The SRTM model was resampled to horizontal resolutions of 125, 250, 500, 750 and 900 m. To spatialize the annual mean air temperature for the state of Alagoas, a multiple linear regression model was used for each elevation model and spatial resolution on a grid of latitude and longitude. In Alagoas, estimates based on SRTM data resulted in a lower standard error of estimate (0.57 degrees C vs. 0.93 degrees C) and less dispersion (r(2) = 0.62 vs. 0.20) than those obtained from GTOPO30. Across the SRTM resolutions, no significant differences were observed in the standard error (from 0.55 degrees C at 750 m to 0.58 degrees C at 250 m) or dispersion (from 0.60 at 500 m to 0.65 at 750 m) estimates. The spatialization of annual air temperature in Alagoas via multiple regression models applied to SRTM data thus showed higher agreement than that obtained with GTOPO30, independent of the spatial resolution.
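A minimal sketch of the multiple linear regression step described above, fitting annual mean temperature against latitude, longitude, and altitude by ordinary least squares; the synthetic station data and coefficients are assumptions, not the values fitted for Alagoas.

```python
# Hypothetical regression T = b0 + b1*lat + b2*lon + b3*alt, fitted by
# ordinary least squares; station data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_stations = 28
lat = rng.uniform(-10.5, -8.8, n_stations)     # degrees
lon = rng.uniform(-38.3, -35.2, n_stations)    # degrees
alt = rng.uniform(0, 800, n_stations)          # m (DEM altitude)
temp = 32.0 + 0.8 * lat - 0.1 * lon - 0.006 * alt + rng.normal(0, 0.3, n_stations)

X = np.column_stack([np.ones(n_stations), lat, lon, alt])
coeffs, *_ = np.linalg.lstsq(X, temp, rcond=None)
residuals = temp - X @ coeffs
see = np.sqrt(np.sum(residuals ** 2) / (n_stations - X.shape[1]))  # standard error of estimate
print("coefficients:", np.round(coeffs, 4), "SEE (deg C):", round(see, 2))
```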

Relevance: 10.00%

Abstract:

Males (drones) of stingless bees tend to congregate near the entrances of conspecific nests, where they wait for virgin queens initiating their nuptial flight. We observed that the Neotropical solitary wasp Trachypus boharti (Hymenoptera, Crabronidae) specifically preys on males of the stingless bee Scaptotrigona postica (Hymenoptera, Apidae); these wasps captured up to 50 males per day near the entrance of a single hive. Over 90% of the wasp attacks were unsuccessful; such erroneous attacks often involved conspecific wasps and worker bees. After capturing non-male prey, wasps almost immediately released these individuals unharmed and continued hunting. A simple behavioral experiment showed that at short distances wasps were not specifically attracted to S. postica males, nor were they repelled by workers of the same species. Short-range prey detection near the bees' nest is likely achieved mainly by vision, whereas close-range prey recognition is based principally on chemical and/or mechanical cues. We argue that the dependence on the wasp's visual perception during attack, together with the crowded and dynamic hunting conditions, caused the wasps to make many predation attempts that failed. Two wasp-density-related factors, wasp-prey distance and wasp-wasp encounters, may account for the fact that the highest rates of male capture and of unsuccessful wasp-bee encounters occurred at intermediate wasp numbers.