968 results for "Random Number of Ancestors"
Abstract:
The design of randomized controlled trials entails decisions that have economic as well as statistical implications. In particular, the choice between an individual and a cluster randomization design may affect the cost of achieving the desired level of power, other things being equal. Furthermore, if cluster randomization is chosen, the researcher must decide how to balance the number of clusters, or sites, against the size of each site. This article investigates these interrelated statistical and economic issues. Its principal purpose is to elucidate the statistical and economic trade-offs, to assist researchers in designing randomized controlled trials that have desired economic, as well as statistical, properties. (C) 2003 Elsevier Inc. All rights reserved.
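The cluster-vs-individual randomization trade-off discussed above can be made concrete with the standard design-effect formula (a textbook relation, not taken from this article): with m individuals per cluster and intracluster correlation rho, a cluster design needs roughly 1 + (m - 1) * rho times the individually randomized sample size to reach the same power. A minimal sketch:

```python
# Illustrative sketch of the standard design effect for cluster randomization.
# m = individuals per cluster, rho = intracluster correlation (ICC).

def design_effect(m, rho):
    """DEFF = 1 + (m - 1) * rho."""
    return 1.0 + (m - 1) * rho

def required_n(n_individual, m, rho):
    """Total sample size under cluster randomization for equal power."""
    return n_individual * design_effect(m, rho)

# Same total n spread over many small vs few large clusters (made-up numbers):
n_ind = 400  # sample size needed under individual randomization
print(required_n(n_ind, m=10, rho=0.05))
print(required_n(n_ind, m=50, rho=0.05))
```

Holding rho fixed, larger clusters inflate the required total sample size, which is exactly the economic trade-off the article analyzes.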
Abstract:
The ability to generate enormous random libraries of DNA probes via split-and-mix synthesis on solid supports is an important biotechnological application of colloids that has not been fully utilized to date. To discriminate between colloid-based DNA probes, each colloidal particle must be 'encoded' so it is distinguishable from all other particles. To this end, we have used novel particle synthesis strategies to produce large numbers of optically encoded particles suitable for DNA library synthesis. Multifluorescent particles with unique and reproducible optical signatures (i.e., fluorescence and light-scattering attributes) suitable for high-throughput flow cytometry have been produced. In the spectroscopic study presented here, we investigated the optical characteristics of multifluorescent particles that were synthesized by coating silica 'core' particles with up to six different fluorescent dye shells alternated with non-fluorescent silica 'spacer' shells. It was observed that the diameter of the particles increased by up to 20% as a result of the addition of twelve concentric shells, and that there was a significant reduction in fluorescence emission intensities from inner shells as increasing numbers of shells were deposited.
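As a back-of-envelope illustration of why split-and-mix synthesis yields 'enormous' random libraries (the numbers below are assumptions, not from the study): each coupling cycle multiplies the number of possible sequences by the number of monomers, so library diversity grows exponentially with the number of cycles.

```python
# Hypothetical illustration: diversity of a split-and-mix library.
# Each of n_cycles cycles appends one of n_monomers building blocks per bead,
# so up to n_monomers ** n_cycles distinct sequences can be produced.

def library_diversity(n_monomers, n_cycles):
    return n_monomers ** n_cycles

# A 20-mer DNA probe library (4 bases per coupling cycle):
print(library_diversity(4, 20))
```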
Abstract:
Besides its importance in coffee tree nutrition, there is almost no information relating zinc nutrition to bean quality. This work evaluated the effect of zinc on coffee yield and bean quality. The experiment was conducted with Coffea arabica L. in the "Zona da Mata" region, Minas Gerais, Brazil. Twelve plots with 4 competitive plants each were established at random and subjected to two treatments: zinc supplementation (eight plots) and control without zinc supplementation (four plots). Yield, number of defective beans, beans attacked by berry borers, bean size, cup quality, bean zinc concentration, potassium leaching, electrical conductivity, color index, total titratable acidity, pH, chlorogenic acid contents and ferric-reducing antioxidant activity of beans were evaluated. Zinc positively affected the quality of coffee beans, which presented a lower percentage of medium and small beans, lower berry borer incidence, lower potassium leaching and electrical conductivity, higher contents of zinc and chlorogenic acids, and higher antioxidant activity in comparison with control beans.
Abstract:
Survival analysis is applied when the time until the occurrence of an event is of interest. Such data are routinely collected in plant disease research, although applications of the method are uncommon. The objective of this study was to describe the main techniques of survival analysis using two studies on post-harvest diseases of peaches, considering the two harvests jointly and a random effect shared by fruits of the same tree. The nonparametric Kaplan-Meier method, the log-rank test and the semi-parametric Cox proportional hazards model were used to estimate the effect of cultivar and of the number of days after full bloom on survival until the appearance of brown rot symptoms, and on the instantaneous risk of expressing them, in two consecutive harvests. The joint analysis with a baseline effect varying between harvests, together with the confirmation of the tree as a grouping factor with random effect, was appropriate to interpret the disease under evaluation, and these techniques can be important tools to replace or complement conventional analysis, respecting the nature of the variable and the phenomenon.
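To make the nonparametric method named above concrete, here is a minimal pure-Python Kaplan-Meier estimator on hypothetical right-censored data (the study itself used standard statistical software; the function and data below are illustrative only):

```python
# Minimal Kaplan-Meier estimator (illustrative sketch).
# Each observation is (time, event): event=1 for symptom expression,
# event=0 for right-censoring (fruit left the study symptom-free).

def kaplan_meier(observations):
    """Return [(time, survival)] at each distinct event time."""
    event_times = sorted({t for t, e in observations if e == 1})
    s, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for ti, _ in observations if ti >= t)
        events = sum(1 for ti, e in observations if ti == t and e == 1)
        s *= 1.0 - events / at_risk   # multiply conditional survival terms
        curve.append((t, s))
    return curve

# Hypothetical days-to-brown-rot data for fruits of one tree:
data = [(2, 1), (3, 1), (3, 0), (5, 1), (7, 0)]
print(kaplan_meier(data))
```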
Abstract:
INTRODUCTION: Cheese should be produced from ingredients of good quality and processed under hygienic conditions. Further, cheese should be transported, stored and sold in an appropriate manner in order to avoid, among other things, the incorporation of extraneous materials (filth) of biological or other origin, in contravention of the relevant food legislation. The aim of the study was to evaluate the hygienic conditions of "prato", "mussarela", and "mineiro" cheeses sold at street food markets in the city of São Paulo, Brazil. MATERIALS AND METHOD: Forty-seven samples of each of the three types of cheese were collected during the period from March 1993 to February 1994. The Latin square was used as the statistical model for sampling and random selection of the street markets from which to collect the cheese samples. The samples were analysed for extraneous matter both externally (the samples were washed and the washings filtered) and internally (enzymatic digestion of the sample with pancreatin, followed by filtering). RESULTS AND CONCLUSION: Of the 141 samples analysed, 75.9% exhibited at least one sort of extraneous matter. For the "prato" and "mussarela" cheeses, the high number of contaminated samples was due mainly to extraneous matter present inside the cheese, whereas for the "mineiro" cheese, besides the internal filth, 100% of the samples had external filth.
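The Latin square sampling model mentioned above can be sketched as follows; the construction (a cyclic square with shuffled rows and columns) is an assumed textbook recipe, not necessarily the authors' exact procedure:

```python
# Sketch of a randomized n x n Latin square: every treatment appears exactly
# once in each row (e.g. street market) and each column (e.g. collection
# period). Construction is assumed: cyclic base square + random permutations.
import random

def random_latin_square(n, seed=None):
    rng = random.Random(seed)
    base = [[(i + j) % n for j in range(n)] for i in range(n)]  # cyclic square
    rng.shuffle(base)                                  # permute rows
    cols = list(range(n))
    rng.shuffle(cols)                                  # permute columns
    return [[row[c] for c in cols] for row in base]

for row in random_latin_square(3, seed=42):
    print(row)
```

Row and column permutations preserve the Latin property, so the result remains a valid design while the assignment is randomized.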
Abstract:
OBJECTIVE: Pharmaceutical assistance is essential in health care and a right of citizens according to Brazilian law and drug policies. The purpose of this study was to evaluate aspects of pharmaceutical assistance in public primary health care. METHODS: A cross-sectional study using WHO drug indicators was carried out in Brasília in 2001. From a random sample of 15 out of 62 centers, thirty exiting patients per center were interviewed. RESULTS: Only 18.7% of the patients fully understood the prescription, 56.3% could read it, 61.2% of the prescribed drugs were actually dispensed, and the mean duration of pharmaceutical dispensing was 53.2 seconds. Each visit lasted on average 9.4 minutes. Of the prescribed and non-dispensed drugs, 85.3% and 60.6%, respectively, were on the local essential drug list (EDL). On average 83.2% of 40 essential drugs were in stock, and only two centers had a pharmacist in charge of the pharmacy. The mean number of drugs per prescription was 2.3; 85.3% of prescribed drugs were on the EDL, 73.2% were prescribed using the generic denomination, 26.4% of prescriptions included antibiotics and 7.5% included injectables. The most prescribed groups were: cardiovascular drugs (26.8%), anti-infective drugs (13.1%), analgesics (8.9%), anti-asthmatic drugs (5.8%), anti-diabetic drugs (5.3%), psychoactive drugs (3.7%), and combination drugs (2.7%). CONCLUSIONS: Essential drugs were only moderately available almost 30 years after the first Brazilian EDL was formulated. While physicians' use of essential drugs and generic names was fairly high, efficiency was impaired by the poor quality of pharmaceutical care, resulting in very low patient understanding and insufficient guarantee of supply, particularly for chronic diseases.
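The prescribing figures reported above (mean drugs per prescription, % on the EDL, % generic, % antibiotics) correspond to the standard WHO/INRUD prescribing indicators. A hedged sketch of how such indicators are computed from prescription records (field names and data below are hypothetical):

```python
# Illustrative computation of WHO/INRUD-style prescribing indicators.
# 'prescriptions' is a list of records, each with a list of drug dicts;
# the schema is an assumption for this sketch.

def prescribing_indicators(prescriptions, edl):
    drugs = [d for p in prescriptions for d in p["drugs"]]
    n = len(drugs)
    return {
        "drugs_per_prescription": n / len(prescriptions),
        "pct_on_edl": 100.0 * sum(d["name"] in edl for d in drugs) / n,
        "pct_generic": 100.0 * sum(d["generic"] for d in drugs) / n,
        "pct_antibiotic": 100.0 * sum(d["antibiotic"] for d in drugs) / n,
    }

edl = {"amoxicillin", "captopril", "metformin"}  # toy essential drug list
rx = [
    {"drugs": [{"name": "amoxicillin", "generic": True, "antibiotic": True},
               {"name": "captopril", "generic": True, "antibiotic": False}]},
    {"drugs": [{"name": "Tylenol", "generic": False, "antibiotic": False}]},
]
print(prescribing_indicators(rx, edl))
```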
Abstract:
Myocardial perfusion gated single photon emission computed tomography (Gated-SPECT) imaging is used for the combined evaluation of myocardial perfusion and left ventricular (LV) function. However, standard protocols for Gated-SPECT studies require long acquisition times for each study, so it is important to reduce the total duration of image acquisition as much as possible. This reduction, though, decreases the count statistics per projection and raises doubts about the validity of the functional parameters determined by Gated-SPECT. Since it is difficult, for ethical, logistical and economic reasons, to carry out this analysis in real patients, simulated studies may be required. Objective: To evaluate the influence of the total number of counts acquired from the myocardium on the calculation of myocardial functional parameters (LVEF, left ventricular ejection fraction; EDV, end-diastolic volume; ESV, end-systolic volume) using routine software procedures.
Abstract:
Master's dissertation, Business Management (MBA), 16 July 2013, Universidade dos Açores.
Abstract:
OBJECTIVE: A cross-sectional population-based study was conducted to assess, in active smokers, the relationship of the number of cigarettes smoked and other characteristics to salivary cotinine concentrations. METHODS: A random sample of active smokers aged 15 years or older was selected using a stepwise cluster sampling strategy in the year 2000 in Rio de Janeiro, Brazil. The study included 401 subjects. Salivary cotinine concentration was determined using gas chromatography with nitrogen-phosphorus detection. A standard questionnaire was used to collect demographic and smoking behavior data. The relation between the number of cigarettes smoked in the last 24 h and cotinine level was examined by means of robust locally weighted regression, a nonparametric fitting technique. RESULTS: Significantly (p<0.05) higher adjusted mean cotinine levels were found in subjects smoking their first cigarette within five minutes after waking up, and in those smoking 1-20 cigarettes in the last 24 h who reported inhaling more than half the time. Among those smoking 1-20 cigarettes, the slope was significantly higher for subjects waiting more than five minutes before smoking their first cigarette after waking up, and for those smoking "light" cigarettes, when compared with their counterparts. These heterogeneities became negligible and non-significant when subjects with cotinine >40 ng/mL per cigarette were excluded. CONCLUSIONS: A positive association was found between self-reported smoking within five minutes of waking, inhaling more than half the time, and higher cotinine levels; these can be markers of dependence and higher nicotine intake. Salivary cotinine proved to be a useful biomarker of recent smoking and can be used in epidemiological studies and smoking cessation programs.
Abstract:
OBJECTIVE: To estimate the spatial intensity of urban violence events using wavelet-based methods and emergency room data. METHODS: Information on victims attended at the emergency room of a public hospital in the city of São Paulo, Southeastern Brazil, from January 1, 2002 to January 11, 2003 was obtained from hospital records. The spatial distribution of 3,540 events was recorded, and a uniform random procedure was used to allocate records with incomplete addresses. Point processes and the wavelet analysis technique were used to estimate the spatial intensity, defined as the expected number of events per unit area. RESULTS: Of all georeferenced points, 59% were accidents and 40% were assaults. The spatial distribution of the events is non-homogeneous, with high concentration in two districts and along three large avenues in the southern area of the city of São Paulo. CONCLUSIONS: Hospital records combined with methodological tools to estimate the intensity of events are useful for studying urban violence. The wavelet analysis is useful in the computation of the expected number of events and their respective confidence bands for any sub-region and, consequently, in the specification of risk estimates that could be used in decision-making processes for public policies.
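As a simplified stand-in for the wavelet-based estimator described above, spatial intensity (expected events per unit area) can be approximated by binning georeferenced points into a grid; the sketch below is illustrative only, not the study's method:

```python
# Grid-based estimate of spatial intensity: count events falling in each
# cell and divide by cell area, giving events per unit area per cell.

def intensity_grid(points, xmin, xmax, ymin, ymax, nx, ny):
    dx, dy = (xmax - xmin) / nx, (ymax - ymin) / ny
    grid = [[0.0] * nx for _ in range(ny)]
    for x, y in points:
        i = min(int((x - xmin) / dx), nx - 1)   # clamp points on the edge
        j = min(int((y - ymin) / dy), ny - 1)
        grid[j][i] += 1.0 / (dx * dy)           # events per unit area
    return grid

# Three made-up georeferenced events in the unit square, on a 2 x 2 grid:
events = [(0.1, 0.1), (0.2, 0.3), (0.9, 0.8)]
print(intensity_grid(events, 0, 1, 0, 1, 2, 2))
```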
Abstract:
Myocardial perfusion gated single photon emission computed tomography (Gated-SPECT) imaging is used for the combined evaluation of myocardial perfusion and left ventricular (LV) function. The purpose of this study was to evaluate the influence of the total number of counts acquired from the myocardium on the calculation of myocardial functional parameters using routine software procedures. Methods: Gated-SPECT studies were simulated using the Monte Carlo GATE package and the NURBS phantom. Simulated data were reconstructed and processed using the commercial software package Quantitative Gated-SPECT. The Bland-Altman and Mann-Whitney-Wilcoxon tests were used to analyze the influence of the total number of counts on the calculation of LV functional parameters. Results: In studies simulated with 3 MBq in the myocardium, there were significant differences in the functional parameters left ventricular ejection fraction (LVEF), end-systolic volume (ESV), motility and thickness between studies acquired with 15 s/projection and 30 s/projection. Simulations with 4.2 MBq showed significant differences in LVEF, end-diastolic volume (EDV) and thickness, while in the simulations with 5.4 MBq and 8.4 MBq the differences were statistically significant for motility and thickness. Conclusion: The total number of counts per simulation does not significantly interfere with the determination of Gated-SPECT functional parameters when the average administered activity of 450 MBq (corresponding to 5.4 MBq in the myocardium) is used.
Abstract:
OBJECTIVES: To estimate the frequency of online searches on the topic of smoking and to analyze the quality of online resources available to smokers interested in giving up smoking. METHODS: Search engines were used to survey searches and online resources related to stopping smoking in Brazil in 2010. The number of searches was determined using analytical tools available in Google Ads; the number and type of sites were determined by replicating the search patterns of internet users. The sites were classified according to content (advertising, library of articles, and other). The quality of the sites was analyzed using the Smoking Treatment Scale - Content (STS-C) and the Smoking Treatment Scale - Rating (STS-R). RESULTS: A total of 642,446 searches was carried out. Around a third of the 113 sites encountered were of the 'library' type, i.e. they only contained articles, followed by sites containing clinical advertising (18.6%) and professional education (10.6%). Thirteen of the sites offered advice on quitting directed at smokers. The majority of the sites did not contain evidence-based information, were not interactive and offered no possibility of communicating with users after the first contact. Other limitations encountered were a lack of financial disclosure, no guarantee of privacy concerning information obtained, and no distinction between editorial content and advertisements. CONCLUSIONS: There is a disparity between the high demand for online support in giving up smoking and the scarcity of quality online resources for smokers. Interactive, customized online resources, based on evidence and evaluated in randomized clinical trials, need to be developed to improve the support available to Brazilian smokers.
Abstract:
It has been shown that in reality at least two general scenarios of data structuring are possible: (a) a self-similar (SS) scenario, in which the measured data form an SS structure, and (b) a quasi-periodic (QP) scenario, in which the repeated (strongly correlated) data form random sequences that are almost periodic with respect to each other. In the second case it becomes possible to describe the data's behavior and to express part of their randomness quantitatively in terms of the deterministic amplitude-frequency response belonging to the generalized Prony spectrum. This possibility allows us to re-examine the conventional concept of measurement and opens a new way to describe a wide set of different data. In particular, it concerns complex systems for which no 'best-fit' model of the measured data is available, but a description of these data in terms of a reduced number of quantitative parameters is nevertheless needed. The possibilities of the proposed approach and of the detection algorithm for QP processes were demonstrated on actual data: spectroscopic data recorded for pure water and acoustic data for a test hole. The suggested methodology allows revising the accepted classification of different incommensurable and self-affine spatial structures and finding an accurate interpretation of generalized Prony spectroscopy, which includes Fourier spectroscopy as a special case.
Abstract:
This Thesis describes the application of automatic learning methods for a) the classification of organic and metabolic reactions, and b) the mapping of Potential Energy Surfaces (PES). The classification of reactions was approached with two distinct methodologies: a representation of chemical reactions based on NMR data, and a representation of chemical reactions from the reaction equation based on the physico-chemical and topological features of chemical bonds. NMR-based classification of photochemical and enzymatic reactions. Photochemical and metabolic reactions were classified by Kohonen Self-Organizing Maps (Kohonen SOMs) and Random Forests (RFs) taking as input the difference between the 1H NMR spectra of the products and the reactants. Such a representation can be applied to the automatic analysis of changes in the 1H NMR spectrum of a mixture and to their interpretation in terms of the chemical reactions taking place. Examples of possible applications are the monitoring of reaction processes, evaluation of the stability of chemicals, or even the interpretation of metabonomic data. A Kohonen SOM trained with a data set of metabolic reactions catalysed by transferases was able to correctly classify 75% of an independent test set in terms of the EC number subclass. Random Forests improved the correct predictions to 79%. With photochemical reactions classified into 7 groups, an independent test set was classified with 86-93% accuracy. The data set of photochemical reactions was also used to simulate mixtures with two reactions occurring simultaneously. Kohonen SOMs and Feed-Forward Neural Networks (FFNNs) were trained to classify the reactions occurring in a mixture based on the 1H NMR spectra of the products and reactants. Kohonen SOMs allowed the correct assignment of 53-63% of the mixtures (in a test set). Counter-Propagation Neural Networks (CPNNs) gave similar results.
The use of supervised learning techniques improved the results: to 77% of correct assignments when an ensemble of ten FFNNs was used, and to 80% with Random Forests. This study was performed with NMR data simulated from the molecular structure by the SPINUS program. In the design of one test set, simulated data were combined with experimental data. The results support the proposal of linking databases of chemical reactions to experimental or simulated NMR data for the automatic classification of reactions and mixtures of reactions. Genome-scale classification of enzymatic reactions from their reaction equation. The MOLMAP descriptor relies on a Kohonen SOM that defines types of bonds on the basis of their physico-chemical and topological properties. The MOLMAP descriptor of a molecule represents the types of bonds available in that molecule. The MOLMAP descriptor of a reaction is defined as the difference between the MOLMAPs of the products and the reactants, and numerically encodes the pattern of bonds that are broken, changed, and made during a chemical reaction. The automatic perception of chemical similarities between metabolic reactions is required for a variety of applications, ranging from the computer validation of classification systems and genome-scale reconstruction (or comparison) of metabolic pathways to the classification of enzymatic mechanisms. Catalytic functions of proteins are generally described by EC numbers, which are simultaneously employed as identifiers of reactions, enzymes, and enzyme genes, thus linking metabolic and genomic information. Different methods should be available to automatically compare metabolic reactions and to automatically assign EC numbers to reactions not yet officially classified.
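The difference-descriptor idea behind MOLMAP can be sketched with plain bond-type counts; this is a hypothetical simplification, since real MOLMAPs are built from a Kohonen SOM over physico-chemical bond properties rather than literal bond labels:

```python
# Simplified reaction descriptor in the spirit of MOLMAP: products minus
# reactants over bond-type counts. Positive entries = bonds made,
# negative entries = bonds broken. Bond labels here are toy stand-ins.
from collections import Counter

def reaction_descriptor(reactant_bonds, product_bonds):
    diff = Counter(product_bonds)
    diff.subtract(Counter(reactant_bonds))
    return {b: c for b, c in diff.items() if c != 0}

# Ester hydrolysis, schematically: a C-O ester bond is broken
# and an O-H bond is made, while the C=O bond is unchanged.
print(reaction_descriptor(["C=O", "C-O", "O-H"], ["C=O", "O-H", "O-H"]))
```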
In this study, the genome-scale data set of enzymatic reactions available in the KEGG database was encoded by MOLMAP descriptors and submitted to Kohonen SOMs to compare the resulting map with the official EC number classification, to explore the possibility of predicting EC numbers from the reaction equation, and to assess the internal consistency of the EC classification at the class level. A general agreement with the EC classification was observed, i.e. a relationship between the similarity of MOLMAPs and the similarity of EC numbers. At the same time, MOLMAPs were able to discriminate between EC sub-subclasses. EC numbers could be assigned at the class, subclass, and sub-subclass levels with accuracies up to 92%, 80%, and 70% for independent test sets. The correspondence between the chemical similarity of metabolic reactions and their MOLMAP descriptors was applied to the identification of a number of reactions mapped into the same neuron but belonging to different EC classes, which demonstrated the ability of the MOLMAP/SOM approach to verify the internal consistency of classifications in databases of metabolic reactions. RFs were also used to assign the four levels of the EC hierarchy from the reaction equation. EC numbers were correctly assigned in 95%, 90%, 85% and 86% of the cases (for independent test sets) at the class, subclass, sub-subclass and full EC number levels, respectively. Experiments for the classification of reactions from the main reactants and products were performed with RFs: EC numbers were assigned at the class, subclass and sub-subclass levels with accuracies of 78%, 74% and 63%, respectively. In the course of the experiments with metabolic reactions, we suggested that the MOLMAP/SOM concept could be extended to the representation of other levels of metabolic information, such as metabolic pathways.
Following the MOLMAP idea, the pattern of neurons activated by the reactions of a metabolic pathway is a representation of the reactions involved in that pathway - a descriptor of the metabolic pathway. This reasoning enabled the comparison of different pathways, the automatic classification of pathways, and a classification of organisms based on their biochemical machinery. The three levels of classification (from bonds to metabolic pathways) made it possible to map and perceive chemical similarities between metabolic pathways, even for pathways of different types of metabolism and pathways that do not share similarities in terms of EC numbers. Mapping of PES by neural networks (NNs). In a first series of experiments, Ensembles of Feed-Forward NNs (EnsFFNNs) and Associative Neural Networks (ASNNs) were trained to reproduce PES represented by the Lennard-Jones (LJ) analytical potential function. The accuracy of the method was assessed by comparing the results of molecular dynamics simulations (thermal, structural, and dynamic properties) obtained from the NN-based PES and from the LJ function. The results indicated that, for LJ-type potentials, NNs can be trained to generate PES accurate enough to be used in molecular simulations. EnsFFNNs and ASNNs gave better results than single FFNNs, and a remarkable ability of the NN models to interpolate between distant curves and accurately reproduce potentials for use in molecular simulations was shown. The purpose of the first study was to systematically analyse the accuracy of different NNs. Our main motivation, however, is reflected in the next study: the mapping of multidimensional PES by NNs to simulate, by Molecular Dynamics or Monte Carlo, the adsorption and self-assembly of solvated organic molecules on noble-metal electrodes. Indeed, for such complex and heterogeneous systems the development of suitable analytical functions that fit quantum mechanical interaction energies is a non-trivial or even impossible task.
The data consisted of energy values, from Density Functional Theory (DFT) calculations, at different distances, for several molecular orientations and three electrode adsorption sites. The results indicate that NNs require a data set large enough to cover the diversity of possible interaction sites, distances, and orientations well. NNs trained with such data sets can perform as well as, or even better than, analytical functions. Therefore, they can be used in molecular simulations, particularly for the ethanol/Au(111) interface, which is the case studied in the present Thesis. Once properly trained, the networks are able to produce, as output, any required number of energy points for accurate interpolations.
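The Lennard-Jones function that the first series of NN experiments reproduced has the standard 12-6 form V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6); the parameter values below are illustrative:

```python
# Standard 12-6 Lennard-Jones potential. V(sigma) = 0, and the minimum
# V = -eps occurs at r = 2**(1/6) * sigma.

def lennard_jones(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

r_min = 2.0 ** (1.0 / 6.0)     # location of the potential minimum
print(lennard_jones(1.0))      # zero crossing at r = sigma
print(lennard_jones(r_min))    # well depth -eps at the minimum
```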
Abstract:
Swarm Intelligence (SI) is the property of a system whereby the collective behaviors of (unsophisticated) agents interacting locally with their environment cause coherent functional global patterns to emerge. Particle swarm optimization (PSO) is a form of SI and a population-based search algorithm that is initialized with a population of random solutions, called particles. These particles fly through hyperspace and have two essential reasoning capabilities: memory of their own best position and knowledge of the swarm's best position. In a PSO scheme, each particle flies through the search space with a velocity that is adjusted dynamically according to its historical behavior; the particles therefore tend to fly towards the best search area found during the search process. This work proposes a PSO-based algorithm for logic circuit synthesis. The results show the statistical characteristics of this algorithm with respect to the number of generations required to reach a solution. A comparison with two other evolutionary algorithms, namely Genetic and Memetic Algorithms, is also presented.
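The velocity-update rule described above can be sketched as a minimal PSO on a toy objective (a 2-D sphere function; the thesis applies PSO to logic circuit synthesis, which requires a problem-specific encoding and fitness):

```python
# Minimal particle swarm optimizer. Each particle's velocity is pulled
# towards its own best position (cognitive term) and the swarm's best
# position (social term), damped by an inertia weight.
import random

def pso(f, dim=2, n_particles=30, iters=200, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive and social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda p: sum(x * x for x in p)   # toy objective, minimum 0 at origin
best, best_val = pso(sphere)
print(best_val)
```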