979 results for test sequence


Relevance:

30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

The widespread falsification and/or adulteration of commercially available pharmaceutical preparations calls for reliable methods of drug identification, preferably selective and rapid sorting color tests that can be carried out with minimal equipment away from laboratory facilities. The present work deals with a convenient adaptation and refinement of a spot test devised by Feigl (1966) for urotropine, based on the hydrolytic cleavage of that substance in the presence of sulfuric acid, releasing formaldehyde, which is identified by its color reaction with chromotropic acid. A simple emergency kit was developed for the quick, efficient, inexpensive, and easy performance of urotropine tests by semiskilled personnel, whether in a drugstore laboratory (or office) or in a mobile screening operation. It is shown that when the reagents are added in the recommended sequence, a self-heating system is generated, substantially increasing both the reaction rates and the test sensitivity. The identification limit found was 25 µg of urotropine, for both solid and liquid samples. The possible interference of 84 substances/materials was investigated. Interference was noted only for methylene blue, acriflavine, Ponceau Red, Bordeaux Red (these dyes are often included in urotropine dosage forms), pyramidone, dipyrone, quinine, and tetracycline. A simple procedure for removing most of the interferences is described. Data for 8 commercial dosage forms and the results obtained from their analysis are presented.
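For reference, the chemistry underlying the test follows the standard acid hydrolysis stoichiometry of hexamethylenetetramine (urotropine); the abstract does not spell out the reaction, so this is a sketch of the commonly cited overall equation (the released ammonia is captured by the sulfuric acid as ammonium sulfate):

```latex
% Acid-promoted hydrolysis of urotropine; the released formaldehyde is
% what the chromotropic acid color reaction detects.
\[
(\mathrm{CH_2})_6\mathrm{N_4} + 6\,\mathrm{H_2O}
  \xrightarrow{\;\mathrm{H_2SO_4}\;}
  6\,\mathrm{HCHO} + 4\,\mathrm{NH_3}
\]
```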

Relevance:

30.00%

Publisher:

Abstract:

The aggressive display of Betta splendens is particularly prominent and vital to its adaptation to the environment. Methylmercury is an organic form of mercury (Hg) with particularly pronounced neurobehavioral effects. The present experiments aim to test the effect of acute and chronic methylmercury poisoning on the display in Bettas. The animals were poisoned by the trophic route in both experiments (16 µg/kg for acute poisoning; 16 µg/kg/day for chronic poisoning) and tested in agonistic pairs. The total frequency of the display was recorded, and the topography of the agonistic response was analyzed. Methylmercury seems to have a dose- and detoxification-dependent effect on these responses, with a more pronounced effect on motivity in acute poisoning and on emotionality in chronic poisoning. It is possible that this effect is mediated by alterations in the monoamine oxidase systems.

Relevance:

30.00%

Publisher:

Abstract:

This quasi-experimental study describes the effects of a yoga sequence, following hemodynamic and biochemical parameters in patients with hypertension. Thirty-three volunteers participated in the study (control = 16, yoga = 17) for four months. Blood pressure, heart rate, and respiratory rate were measured monthly, while the biochemical profile was taken at the beginning and end of the program. To analyze the data, Student's t test and repeated-measures analyses were performed. The yoga group showed a significant reduction in systolic blood pressure, heart rate, and respiratory rate (p < 0.05). As for the biochemical profile, the yoga group showed correlation coefficients between initial values and final responses greater than those of the control group for fasting glucose, total cholesterol, LDL-cholesterol, and triglycerides. Practice of the elaborated sequence promoted significant cardiovascular and metabolic benefits. The yoga exercises performed in the proposed sequence constitute a complementary, non-pharmacological means of controlling blood pressure in patients with hypertension. © 2012 Elsevier Ltd.
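To illustrate the kind of comparison the analysis describes, here is a minimal sketch (not the study's code) of a paired Student's t test on pre/post systolic pressures; the arrays are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of a paired Student's t test, as used in the study above.
# The blood pressure values are hypothetical placeholders.
import numpy as np
from scipy import stats

# Hypothetical systolic pressures (mmHg) at program start and end.
pre = np.array([148, 152, 145, 160, 155, 150, 147, 158])
post = np.array([140, 147, 139, 151, 149, 143, 142, 150])

# Paired t test: did systolic pressure drop significantly over the program?
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```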

Relevance:

30.00%

Publisher:

Abstract:

Purpose. We quantified the main sequence of spontaneous blinks in normal subjects and in Graves' disease patients with upper eyelid retraction using one nonlinear and two linear models, and examined the variability of the main sequence estimated with standard linear regression over 10-minute periods. Methods. A total of 20 normal subjects and 12 patients had their spontaneous blinking measured with the magnetic search coil technique while watching a video for one hour. The main sequence was estimated with a power-law function and with standard and through-the-origin linear regressions. Repeated-measures ANOVA was used to test the stability of the main sequence across 10-minute bins measured with standard linear regression. Results. In 95% of the sample the correlation coefficients of the main sequence ranged from 0.60 to 0.94. Homoscedasticity of the peak velocity was not verified in 20% of the subjects and 25% of the patients. The power-law function provided the best main sequence fit for subjects and patients. The main sequence of 10-minute bins measured with standard linear regression did not differ from the one-hour value. For the entire period of observation and the slope obtained by standard linear regression, the main sequence of the patients was significantly reduced compared to the normal subjects. Conclusions. Standard linear regression is a valid and stable approximation for estimating the main sequence of spontaneous blinking. However, the basic assumptions of the linear regression model should be examined on an individual basis. The maximum velocity of large blinks is slower in Graves' disease patients than in normal subjects. © 2013 The Association for Research in Vision and Ophthalmology, Inc.
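To make the model comparison concrete, here is a minimal sketch (not the authors' code) of fitting a blink main sequence, i.e., peak velocity as a function of amplitude, with the two model families named above; the data arrays are hypothetical placeholders.

```python
# Minimal sketch: fit a blink "main sequence" with a power law and with
# standard linear regression, as in the abstract. Data are placeholders.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

amplitude = np.array([10.0, 15.0, 22.0, 30.0, 38.0, 45.0])          # deg
peak_velocity = np.array([95.0, 130.0, 175.0, 220.0, 260.0, 290.0])  # deg/s

# Nonlinear model: v = a * A**b
def power_law(A, a, b):
    return a * A**b

(a, b), _ = curve_fit(power_law, amplitude, peak_velocity, p0=(10.0, 1.0))

# Standard linear regression: v = slope * A + intercept
lin = linregress(amplitude, peak_velocity)

print(f"power law: v = {a:.1f} * A^{b:.2f}")
print(f"linear:    v = {lin.slope:.1f} * A + {lin.intercept:.1f}, r = {lin.rvalue:.2f}")
```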

Relevance:

30.00%

Publisher:

Abstract:

Context. Lithium abundances in open clusters are a very effective probe of mixing processes, and their study can help us understand the large depletion of lithium that occurs in the Sun. Owing to its age and metallicity, the open cluster M 67 is especially interesting in this respect. Many studies of lithium abundances in M 67 have been performed, but a homogeneous global analysis of lithium in stars from subsolar masses up to the most massive members has yet to be accomplished for a large sample based on high-quality spectra. Aims. We test our non-standard models, which were calibrated on the Sun, against observational data. Methods. We collected literature data to analyze, for the first time in a homogeneous way, the non-local thermal equilibrium lithium abundances of all observed single stars in M 67 more massive than ~0.9 M_⊙. Our grid of evolutionary models was computed assuming non-standard mixing at metallicity [Fe/H] = 0.01, using the Toulouse-Geneva evolution code. Our analysis starts from the entrance onto the zero-age main sequence. Results. Lithium in M 67 is a tight function of mass for stars more massive than the Sun, apart from a few outliers. A plateau in lithium abundances is observed for turn-off stars. Both less massive (M ≤ 1.10 M_⊙) and more massive (M ≥ 1.28 M_⊙) stars are more depleted than those in the plateau. There is significant scatter in lithium abundances for any given mass M ≤ 1.1 M_⊙. Conclusions. Our models qualitatively reproduce most of the features described above, although the predicted depletion of lithium is 0.45 dex smaller than observed for masses in the plateau region, i.e., between 1.1 and 1.28 solar masses. More work is clearly needed to accurately reproduce the observations. Despite hints that chromospheric activity and rotation play a role in lithium depletion, no firm conclusion can be drawn with the presently available data.
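For context (not stated in the abstract, but the standard convention in this literature), lithium abundances are quoted on the logarithmic "dex" scale:

```latex
% Standard logarithmic abundance scale for lithium; on this scale the
% 0.45 dex discrepancy quoted above corresponds to a factor of about 2.8
% in number density.
\[
A(\mathrm{Li}) = \log_{10}\!\left(\frac{N_{\mathrm{Li}}}{N_{\mathrm{H}}}\right) + 12
\]
```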

Relevance:

30.00%

Publisher:

Abstract:

The asymptotic expansion of the distribution of the gradient test statistic is derived for a composite hypothesis under a sequence of Pitman alternative hypotheses converging to the null hypothesis at rate n^{-1/2}, n being the sample size. Comparisons of the local powers of the gradient, likelihood ratio, Wald, and score tests reveal no uniform superiority property. The power performance of all four criteria in the one-parameter exponential family is examined.
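For reference, the gradient statistic (introduced by Terrell, 2002, and not defined in the abstract) can be sketched as follows, where U is the score function and θ̃, θ̂ are the restricted and unrestricted maximum likelihood estimates:

```latex
% Gradient statistic: the inner product of the score at the restricted
% estimate with the difference between the two estimates. Like the
% likelihood ratio, Wald, and score statistics, it is asymptotically
% chi-squared under the null hypothesis.
\[
S_T = U(\tilde{\theta})^{\top}\,(\hat{\theta} - \tilde{\theta})
\]
```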

Relevance:

30.00%

Publisher:

Abstract:

Background. A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. The most common way to decide whether a given probability is sufficient is Bayesian binary classification, in which the probability under the model characterizing the sequence family of interest is compared to that under an alternative probability model. A null model can be used as the alternative model. This is the scoring technique used by sequence analysis tools such as HMMER, SAM, and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution, and the target sequence distribution. This paper presents a study evaluating the impact of the choice of null model on the final result of classification. In particular, we are interested in minimizing the number of false predictions in a classification, a crucial issue for reducing the costs of biological validation. Results. In all tests using random sequences, the target null model presented the lowest number of false positives. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also be valid for protein sequences. To broaden the applicability of the results, the study was performed using randomly generated sequences. Previous studies were performed on amino acid sequences, using only one probabilistic model (HMM) and a specific benchmark, and therefore lack more general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results. Conclusions. Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the uniform model exhibits a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies. In these cases the target model is more dependable for biological validation because of its higher specificity.
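To make the scoring scheme concrete, here is a minimal sketch (not code from the paper) of log-odds classification against two of the null models discussed above; the sequence, the family-model log-likelihood, and the function name are hypothetical placeholders.

```python
# Minimal sketch of null-model scoring: a sequence is classified by the
# log-odds of the family model versus a null model. The "target" null model
# uses the residue frequencies of the target sequence itself.
import math
from collections import Counter

def log_odds(seq, model_loglik, null="uniform"):
    """Log-odds score: log P(seq | family model) - log P(seq | null model)."""
    if null == "uniform":
        # Uniform null: each of the 4 nucleotides has probability 1/4.
        null_loglik = len(seq) * math.log(0.25)
    elif null == "target":
        # Target null: position-independent frequencies estimated from seq.
        counts = Counter(seq)
        null_loglik = sum(n * math.log(n / len(seq)) for n in counts.values())
    else:
        raise ValueError(null)
    return model_loglik - null_loglik

# Hypothetical GC-rich candidate and placeholder family-model log-likelihood.
seq = "GCGCGGCCGCGGGCCG"
model_loglik = -18.0  # placeholder: log P(seq | family model)
print(log_odds(seq, model_loglik, null="uniform"))
print(log_odds(seq, model_loglik, null="target"))
```

With the GC-rich example, the target null assigns the sequence a much higher probability than the uniform null, so the log-odds score drops; this is precisely the mechanism by which the target model suppresses false positives for compositionally biased candidates.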

Relevance:

30.00%

Publisher:

Abstract:

Background. Identification of nontuberculous mycobacteria (NTM) based on phenotypic tests is time-consuming, labor-intensive, and expensive, and often provides erroneous or inconclusive results. In the molecular method referred to as PRA-hsp65, a fragment of the hsp65 gene is amplified by PCR and then analyzed by restriction digest; this rapid approach offers the promise of accurate, cost-effective species identification. The aim of this study was to determine whether species identification of NTM using PRA-hsp65 is sufficiently reliable to serve as the routine methodology in a reference laboratory. Results. A total of 434 NTM isolates were obtained from 5019 cultures submitted to the Instituto Adolfo Lutz, São Paulo, Brazil, between January 2000 and January 2001. Species identification was performed for all isolates using conventional phenotypic methods and PRA-hsp65. For isolates for which these methods gave discordant results, definitive species identification was obtained by sequencing a 441 bp fragment of hsp65. Phenotypic evaluation and PRA-hsp65 were concordant for 321 (74%) isolates; these assignments were presumed to be correct. For the remaining 113 discordant isolates, definitive identification was based on sequencing the 441 bp fragment of hsp65. PRA-hsp65 identified 30 isolates with hsp65 alleles representing 13 previously unreported PRA-hsp65 patterns. Overall, species identification by PRA-hsp65 was significantly more accurate than by phenotypic methods (392 (90.3%) vs. 338 (77.9%), respectively; p < 0.0001, Fisher's test). Among the 333 isolates representing the most common pathogenic species, PRA-hsp65 provided an incorrect result for only 1.2%. Conclusion. PRA-hsp65 is a rapid and highly reliable method and deserves consideration by any clinical microbiology laboratory charged with performing species identification of NTM.
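As a quick illustration of the accuracy comparison, here is a minimal sketch (not the study's code) of a Fisher's exact test on the 2x2 table implied by the counts in the abstract (correct vs. incorrect identifications out of 434 isolates per method).

```python
# Minimal sketch of the reported accuracy comparison via Fisher's exact test.
from scipy.stats import fisher_exact

table = [[392, 434 - 392],   # PRA-hsp65: correct, incorrect
         [338, 434 - 338]]   # phenotypic methods: correct, incorrect
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
```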

Relevance:

30.00%

Publisher:

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise determination of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. If a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is expected to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test; high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test.

This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimation of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we used cross-correlation of digital waveforms to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station (at the global scale) and on the similarity between the waveforms of the same event at two different sensors of the tripartite array (at the local scale).

After preliminary simulation-based tests of the reliability of our location techniques, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, without cross-correlation. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was considerably reduced by the application of our technique. This is what we expected, since the methodology was applied to a sequence of events whose hypocenters can be assumed to lie close together, belonging to the same seismic structure. Our results highlight the main advantage of this methodology: the systematic errors affecting the arrival times are removed, or at least reduced. The introduction of cross-correlation did not bring evident improvements: the two sets of locations (without and with the cross-correlation technique) are very similar to each other. This suggests that cross-correlation did not substantially improve the precision of the manual picks; the picks reported by the IDC are probably good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the limited benefit of cross-correlation, it should be noted that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller), and the signals are strongly attenuated by the large distance between the stations and the hypocentral area.

At the local scale, in addition to cross-correlation, we performed signal interpolation to improve the time resolution. The algorithm thus developed was applied to data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed to the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly under poor SNR conditions). Another notable feature of our procedure is that it does not require long data-processing times, so the user can check the results immediately. During a field survey, this feature makes a quasi-real-time check possible, allowing immediate optimization of the array geometry if the early results suggest it.
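To illustrate the core cross-correlation step described above, here is a minimal sketch (not the thesis code) of estimating the relative delay between two similar waveforms; the synthetic wavelet, sampling rate, and function name are hypothetical placeholders.

```python
# Minimal sketch: estimate the relative delay between two similar waveforms
# by locating the peak of their cross-correlation, as used for phase picking.
import numpy as np

def delay_by_cross_correlation(w1, w2, dt):
    """Return the lag (s) of w2 relative to w1 that maximizes the correlation."""
    cc = np.correlate(w2 - w2.mean(), w1 - w1.mean(), mode="full")
    lag_samples = np.argmax(cc) - (len(w1) - 1)  # zero lag sits at len(w1)-1
    return lag_samples * dt

dt = 0.01                                   # 100 Hz sampling
t = np.arange(0, 2, dt)
pulse = np.exp(-((t - 0.5) / 0.05) ** 2)    # synthetic wavelet
shifted = np.roll(pulse, 23)                # same wavelet, delayed by 0.23 s

print(delay_by_cross_correlation(pulse, shifted, dt))  # ~0.23
```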

Relevance:

30.00%

Publisher:

Abstract:

Over the last decade, the end-state comfort effect (e.g., Rosenbaum et al., 2006) has received a considerable amount of attention. However, some of the underlying mechanisms remain to be investigated, among them how sequential planning affects end-state comfort and how this effect develops over learning. In a two-step sequencing task, for example, postural comfort can be planned with respect to the intermediate position (next state) or to the actual end position (final state). It might be hypothesized that, in initial acquisition, the next state's comfort is crucial for action planning but that, in the course of learning, the final state's comfort is taken more and more into account. To test this hypothesis, a variant of Rosenbaum's vertical stick transportation task was used. Participants (N = 16, right-handed) received extensive practice on a two-step transportation task (10,000 trials over 12 sessions). From the initial position on the middle stair of a staircase in front of the participant, the stick had to be transported either 20 cm upwards and then 40 cm downwards, or 20 cm downwards and then 40 cm upwards (N = 8 per subgroup). Participants were instructed to produce fluid movements without changing their grasp. In the pretest and posttest, participants were tested on both two-step sequencing tasks as well as on 20 cm single-step upwards and downwards movements (10 trials per condition). For the test trials, grasp height was calculated kinematographically. In the pretest, large end/next/final-state comfort effects were found for the single-step transportation tasks, and large next-state comfort effects for the sequenced tasks. However, no change in grasp height from pretest to posttest was revealed. The results show that, in vertical stick transportation sequences, the final state is not taken into account when planning grasp height. Instead, action planning seems to be based solely on aspects of the next action goal to be reached.

Relevance:

30.00%

Publisher:

Abstract:

Typically, statistical learning is investigated by testing the acquisition of specific items or the formation of general rules. As implicit sequence learning also involves the extraction of regularities from the environment, it can likewise be considered an instance of statistical learning. In the present study, a Serial Reaction Time Task was used to test whether continuous versus interleaved repetition of a sequence affects implicit learning despite equal exposure to the sequences. The results revealed a sequence learning advantage for the continuous repetition condition compared to the interleaved condition. This suggests that additional sequence information was extracted through repetition, even though exposure to the sequences was identical to that in the interleaved condition. The results are discussed in terms of similarities and potential differences between typical statistical learning paradigms and sequence learning.