Abstract:
A flow system based on multicommutation and the binary sampling process was developed to implement the sample zone trapping technique, in order to increase the spectrophotometric analytical range and improve sensitivity. The flow network was designed with active devices to allow the sequential determination of an analyte over a wide concentration range, employing a single pumping channel to propel sample and reagent solutions. The procedure was employed to determine orthophosphate ions in river water and wastewater samples. Profitable features were achieved, such as an analytical throughput of 60 determinations per hour and a relative standard deviation (r.s.d.) of 2% (n = 6) for a typical sample with a concentration of 2.78 mg/L. Applying the paired t-test, no significant difference at the 95% confidence level was observed between the results obtained with the proposed system and those of the usual flow injection system.
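The binary sampling idea above, assembling the sample zone from alternating valve-timed aliquots of sample and reagent, can be sketched as follows; the function names, timings, and flow rate are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch (not the paper's control software) of a multicommutated
# binary sampling sequence: the sample zone is assembled from alternating
# aliquots of sample and reagent, each defined by how long its solenoid
# valve stays open.

def binary_sampling_sequence(n_cycles, t_sample, t_reagent):
    """Return (solution, valve-open time in s) steps for one sample zone."""
    steps = []
    for _ in range(n_cycles):
        steps.append(("sample", t_sample))    # sample valve open
        steps.append(("reagent", t_reagent))  # reagent valve open
    return steps

def zone_volume(steps, flow_rate_ml_s):
    """Volume (mL) of the assembled zone at a given pumping flow rate."""
    return sum(t for _, t in steps) * flow_rate_ml_s

# Four sample/reagent pairs: 8 valve steps, 3.0 s of pumping in total.
seq = binary_sampling_sequence(n_cycles=4, t_sample=0.5, t_reagent=0.25)
```

Varying `n_cycles` or the aliquot times changes the sample-to-reagent ratio in the zone, which is the handle multicommutation gives over dispersion and sensitivity.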
Abstract:
A flow system procedure for the spectrophotometric determination of ascorbic acid in drugs, based on the decomposition of the [Fe3+(SCN-)n](3-n)+ complex by reduction, is described. The flow network consisted of a set of three-way solenoid valves controlled by a microcomputer running software written in QuickBASIC 4.5. The feasibility of the procedure was ascertained by determining ascorbic acid in drug samples with masses ranging from 0.0018 to 0.0180 g. The results agreed to within about 7% with those of the recommended method. Other profitable features, such as a relative standard deviation of 1.5% (n = 7) and a throughput of 120 determinations per hour, were also achieved.
Abstract:
A binary sampling flow analysis system equipped with a gas diffusion cell was developed for the determination of NH4+ and/or NH2Cl in wastewater and disinfection product samples, based on the Berthelot reaction of the NH2Cl that diffuses through a semi-permeable PTFE membrane. The effects of the analytical conditions related to the reaction and flow parameters were evaluated, and N-NH4+ and N-NH2Cl were determined in concentration ranges of 0.17 to 5 mg L-1 and 0.5 to 14.5 mg L-1, respectively. Limits of detection (3σ) of 50 and 140 µg L-1 were calculated for N-NH4+ and N-NH2Cl, respectively, and RSDs of 5 and 2% were obtained for 10 consecutive determinations of N-NH4+ (1 and 3 mg L-1) and N-NH2Cl (3 and 9 mg L-1), respectively, at 30 determinations h-1.
Abstract:
This paper describes the use of the open-source hardware platform Arduino for controlling solenoid valves that handle solutions in flow analysis systems. The system was assessed by the spectrophotometric determination of iron(II) in natural water. The sampling rate was estimated at 45 determinations per hour, and the coefficient of variation was lower than 3%. Per determination, 208 µg of 1,10-phenanthroline and ascorbic acid were consumed, generating 1.3 mL of waste. The Arduino proved to be a reliable, low-cost microcontroller with simple interfacing, allowing USB communication for solenoid device switching in flow systems.
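A host-side helper along these lines could frame valve-switching commands for the USB serial link; the one-line ASCII protocol shown is a hypothetical illustration, not the command set used in the paper.

```python
# Illustrative host-side helper for switching solenoid valves through an
# Arduino over USB serial. The "V<n>=<0|1>\n" ASCII protocol and the 8-valve
# limit are assumptions made for this sketch only.

def valve_command(valve, state):
    """Encode a valve-switching command as bytes for the serial link."""
    if not 0 <= valve <= 7:
        raise ValueError("valve index out of range")
    return f"V{valve}={1 if state else 0}\n".encode("ascii")

# With real hardware one would write the bytes with pyserial, e.g.:
#   import serial
#   port = serial.Serial("/dev/ttyUSB0", 9600)
#   port.write(valve_command(2, True))  # open valve 2
```

Keeping the protocol to single ASCII lines makes the Arduino-side parser trivial and the link easy to debug with any serial monitor.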
Abstract:
The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of the data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference, it is useful to ask whether inferences from a probit model are sensitive to the choice between Bayesian and sampling-theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain the choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality-restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
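As a minimal sketch of the inequality-restricted prior, a random-walk Metropolis sampler can enforce the sign constraints by rejecting any proposal outside the admissible region (a flat prior truncated by sign). The simulated data, step size, and starting values below are illustrative assumptions, not the mortgage-choice data or the estimation code of the study.

```python
# Minimal sketch: Bayesian probit with a flat prior truncated to assumed
# coefficient signs, sampled by random-walk Metropolis.
import numpy as np
from math import erf

_ncdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / 2 ** 0.5)))  # Phi(z)

def log_lik(beta, X, y):
    """Probit log-likelihood."""
    p = np.clip(_ncdf(X @ beta), 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def metropolis_probit(X, y, signs, n_iter=500, step=0.1, seed=0):
    """signs[j] in {+1, -1, 0}: required sign of beta[j] (0 = unrestricted)."""
    rng = np.random.default_rng(seed)
    signs = np.asarray(signs, dtype=float)
    beta = 0.1 * signs                      # start inside the admissible region
    ll = log_lik(beta, X, y)
    draws = []
    for _ in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.size)
        if np.all(np.sign(prop) * signs >= 0):   # truncated (flat) prior
            ll_prop = log_lik(prop, X, y)
            if np.log(rng.uniform()) < ll_prop - ll:
                beta, ll = prop, ll_prop
        draws.append(beta.copy())
    return np.array(draws)

# Toy data: x1 raises the choice probability, x2 lowers it.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = (X @ np.array([0.8, -0.8]) + rng.standard_normal(200) > 0).astype(float)
draws = metropolis_probit(X, y, signs=[1, -1])
```

Because sign-violating proposals are rejected outright, every retained draw respects the prior restrictions, which is exactly how a uniform prior truncated to a half-line behaves.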
Abstract:
We compare a Bayesian methodology utilizing the freeware package BUGS (Bayesian Inference Using Gibbs Sampling) with the traditional structural equation modelling approach based on another freeware package, Mx. Dichotomous and ordinal (three-category) twin data were simulated according to different additive genetic and common environment models for phenotypic variation. Practical issues are discussed in using Gibbs sampling, as implemented in BUGS, to fit subject-specific Bayesian generalized linear models in which the components of variation may be estimated directly. The simulation study (based on 2000 twin pairs) indicated that there is a consistent advantage in using the Bayesian method to detect the correct model under certain specifications of additive genetic and common environmental effects. For binary data, both methods had difficulty in detecting the correct model when the additive genetic effect was low (between 10 and 20%) or of moderate range (between 20 and 40%). Furthermore, neither method could adequately detect the correct model that included a modest common environmental effect (20%), even when the additive genetic effect was large (50%). Power was significantly improved with ordinal data for most scenarios, except for the case of low heritability under a true ACE model. We illustrate and compare both methods using data from 1239 twin pairs over the age of 50 years who were registered with the Australian National Health and Medical Research Council Twin Registry (ATR) and presented with symptoms associated with osteoarthritis occurring in joints of the hand.
Abstract:
We investigate the phase behaviour of 2D mixtures of bi-functional and three-functional patchy particles and 3D mixtures of bi-functional and tetra-functional patchy particles by means of Monte Carlo simulations and Wertheim theory. We start by computing the critical points of the pure systems and then we investigate how the critical parameters change upon lowering the temperature. We extend the successive umbrella sampling method to mixtures to make it possible to extract information about the phase behaviour of the system at a fixed temperature for the whole range of densities and compositions of interest. (C) 2013 AIP Publishing LLC.
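The successive umbrella sampling method mentioned above can be illustrated with a toy discrete version: each window restricts the chain to two adjacent states, and the visit-count ratio in a window estimates the probability ratio of those states; chaining the ratios reconstructs the distribution up to normalization. The Poisson-shaped target below is an illustrative stand-in, not the patchy-particle model of the paper.

```python
# Toy successive umbrella sampling over a discrete variable N: window k
# confines a Metropolis chain to {k, k+1}; the histogram ratio
# H(k+1)/H(k) estimates w(k+1)/w(k) for the (unnormalized) target w.
import numpy as np
from math import factorial

def weight(n, lam=3.0):
    """Unnormalized target weight, w(n) = lam^n / n!."""
    return lam ** n / factorial(n)

def sus_ratios(n_max, steps=20000, seed=0):
    """Estimate w(k+1)/w(k) for k = 0..n_max-1, one two-state window each."""
    rng = np.random.default_rng(seed)
    ratios = []
    for k in range(n_max):
        state, hist = k, {k: 0, k + 1: 0}
        for _ in range(steps):
            other = k + 1 if state == k else k
            # Metropolis acceptance within the window
            if rng.uniform() < min(1.0, weight(other) / weight(state)):
                state = other
            hist[state] += 1
        ratios.append(hist[k + 1] / hist[k])
    return ratios

ratios = sus_ratios(n_max=4)
# for lam = 3 the exact ratios w(k+1)/w(k) are lam/(k+1): 3, 1.5, 1, 0.75
```

In the mixture setting of the paper the windows run over particle number at fixed temperature and composition, but the ratio-chaining logic is the same.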
Abstract:
Supervised learning of large-scale hierarchical networks is currently enjoying tremendous success. Despite this momentum, unsupervised learning remains, according to many researchers, a key element of Artificial Intelligence, in which agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses several research topics related to the density estimation problem through Boltzmann machines (BMs), probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition function estimation, optimization, and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm, which automatically adjusts the temperature of the simulated Markov chains in order to maintain a high convergence speed throughout learning. When used in the context of stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate as well as faster convergence. Our results are presented for BMs, but the method is general and applicable to training any probabilistic model that relies on Markov chain sampling. While the maximum-likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast to traditional approaches that treat a given model as a black box, we propose instead to exploit the dynamics of learning by estimating the successive changes in the log-partition function incurred at each parameter update.
The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and the temperature parameter. On the topic of optimization, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption has been limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by combining a linear solver with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Its implementation unfortunately remains inefficient in wall-clock time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify so that they can model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into increased invariance at the representation level and a better classification rate when few labeled data are available. We close this thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of "pooling" in complementary vector subspaces.
Abstract:
Two direct-sampling correlator-type receivers for differential chaos shift keying (DCSK) communication systems under frequency non-selective fading channels are proposed. Both receivers operate on the same hardware platform with different architectures. In the first scheme, the sum-delay-sum (SDS) receiver, the sum of all samples in a chip period is correlated with its delayed version. The correlation value obtained in each bit period is then compared with a fixed threshold to decide the binary value of the recovered bit at the output. In the second scheme, the delay-sum-sum (DSS) receiver, the correlation of each sample with its delayed version is computed within a chip period. The sum of the correlation values in each bit period is then compared with the threshold to recover the data. The conventional DCSK transmitter, the frequency non-selective Rayleigh fading channel, and the two proposed receivers are modelled mathematically in the discrete-time domain. The bit error rate performance of the receivers is evaluated by means of both theoretical analysis and numerical simulation. The comparison shows that both proposed receivers perform well under the studied channel; performance improves as the number of paths increases, and the DSS receiver outperforms the SDS one.
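The correlate-and-threshold logic described above can be illustrated with a noiseless, single-path toy model of DCSK (one sample per chip, so the SDS/DSS distinction collapses to a plain correlator). The chip length and chaotic map are illustrative choices, not the paper's system parameters.

```python
# Noiseless toy DCSK: each bit period carries m reference chips followed by
# m data chips equal to +/- the reference; the receiver correlates the data
# half with the delayed reference half and thresholds at zero.
import numpy as np

def logistic_ref(m, x0):
    """Zero-mean chaotic reference from the logistic map x -> 4x(1-x)."""
    x = np.empty(m)
    for i in range(m):
        x0 = 4.0 * x0 * (1.0 - x0)
        x[i] = x0
    return x - x.mean()

def dcsk_modulate(bits, m, seed=0):
    rng = np.random.default_rng(seed)
    frames = []
    for b in bits:
        ref = logistic_ref(m, rng.uniform(0.1, 0.9))
        frames.append(np.concatenate([ref, ref if b else -ref]))
    return np.concatenate(frames)

def dcsk_demodulate(signal, m):
    bits = []
    for k in range(len(signal) // (2 * m)):
        frame = signal[2 * m * k : 2 * m * (k + 1)]
        corr = np.dot(frame[:m], frame[m:])  # reference-data correlation
        bits.append(1 if corr > 0 else 0)
    return bits

tx_bits = [1, 0, 1, 1, 0, 0, 1]
rx_bits = dcsk_demodulate(dcsk_modulate(tx_bits, m=64), m=64)
```

With noise or multipath added, the sign of the correlation becomes a random variable, and the SDS/DSS choice of where the summation happens starts to matter.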
Abstract:
This paper deals with the emission of gravitational radiation in the context of a previously studied metric nonsymmetric theory of gravitation. The contribution from the symmetric part of the metric coincides with the mass quadrupole moment result of general relativity. The one associated with the antisymmetric part of the metric involves the dipole moment of the fermionic charge of the system. The results are applied to binary star systems, and the decrease of the period of the elliptical motion is calculated.
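For reference, the general-relativity benchmark that the symmetric-part contribution is stated to reproduce is the standard quadrupole luminosity; the antisymmetric-sector dipole term of the theory has no counterpart in this formula and is not shown.

```latex
% Standard GR quadrupole formula for the radiated power, with Q_{ij} the
% traceless mass quadrupole moment of the source:
P = \frac{G}{5c^{5}}
    \left\langle \dddot{Q}_{ij}\, \dddot{Q}_{ij} \right\rangle ,
\qquad
Q_{ij} = \int \rho(\mathbf{x})
         \left( x_i x_j - \tfrac{1}{3}\,\delta_{ij} r^{2} \right)
         \mathrm{d}^{3}x .
```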
Abstract:
The aim of this study was to determine how abiotic factors drive the phytoplankton community in a water supply reservoir within short sampling intervals. Samples were collected at the subsurface (0.1 m) and bottom of the limnetic (8 m) and littoral (2 m) zones in both the dry and rainy seasons. The following abiotic variables were analyzed: water temperature, dissolved oxygen, electrical conductivity, total dissolved solids, turbidity, pH, total nitrogen, nitrite, nitrate, total phosphorus, total dissolved phosphorus and orthophosphate. Phytoplankton biomass was determined from biovolume values. The role abiotic variables play in the dynamics of phytoplankton species was determined by means of Canonical Correspondence Analysis. Algal biomass ranged from 1.17×10⁴ to 9.21×10⁴ µg L-1; cyanobacteria had biomass values ranging from 1.07×10⁴ to 8.21×10⁴ µg L-1. High availability of phosphorus, nitrogen limitation, alkaline pH and thermal stability all favored cyanobacteria blooms, particularly during the dry season. Temperature, pH, total phosphorus and turbidity were key factors in characterizing the phytoplankton community between sampling times and stations. Of the species studied, Cylindrospermopsis raciborskii populations were dominant in the phytoplankton in both the dry and rainy seasons. We conclude that the phytoplankton was strongly influenced by abiotic variables, particularly in relation to seasonal distribution patterns.
Abstract:
Ten young rumen-cannulated crossbred steers were randomly divided into two groups: a control group (C; n=4), fed a balanced diet for a daily weight gain of 900 g, and a pronounced energy-deprived group (PED; n=6), which received 30% less than the energy required for maintenance. After 140 days on these feeding regimes, rumen fluid and urine samples were collected for biochemical and functional tests, before feeding and at 1, 3, 6, and 9 hours after feeding. The energy-deprivation diet caused a significant reduction in the numbers of Entodinium, Eodinium, Isotricha, Dasytricha, Eremoplastron, Eudiplodinium, Metadinium, Charonina, Ostracodinium, and Epidinium protozoa. Sampling time had no effect on the total number of ciliates in the rumen fluid of either group. A higher number of protozoan forms undergoing binary division was recorded in the control group at the 6th and 9th hours after feeding (P<0.019). There was a high positive correlation between the total count of protozoa in rumen fluid and glucose fermentation, ammonia, and the urinary allantoin excretion index; a negative correlation between the total count of protozoa and methylene blue reduction; and a medium correlation between the total count of protozoa and total volatile fatty acid concentration. Determining the protozoan populations does not require complex and hard-to-execute techniques, although it is time consuming and requires practice. This examination is particularly helpful in clinical diagnosis.
Abstract:
Some factors complicate comparisons between linkage maps from different studies. This problem can be resolved if measures of precision, such as confidence intervals and frequency distributions, are associated with markers. We examined the precision of distances and ordering of microsatellite markers in the consensus linkage maps of chromosomes 1, 3 and 4 from two F2 reciprocal Brazilian chicken populations, using bootstrap sampling. Single and consensus maps were constructed. The consensus map was compared with the International Consensus Linkage Map and with the whole genome sequence. Some loci showed segregation distortion and missing data, but this did not affect the analyses negatively. Several inversions and position shifts were detected, based on 95% confidence intervals and frequency distributions of loci. Some discrepancies in distances between loci and in ordering were due to chance, whereas others could be attributed to other effects, including reciprocal crosses, sampling error of the founder animals from the two populations, F2 population structure, number of and distance between microsatellite markers, number of informative meioses, loci segregation patterns, and sex. In the Brazilian consensus GGA1, locus LEI1038 was in a position closer to the true genome sequence than in the International Consensus Map, whereas for GGA3 and GGA4 no such differences were found. Extending these analyses to the remaining chromosomes should facilitate comparisons and the integration of several available genetic maps, allowing meta-analyses for map construction and quantitative trait loci (QTL) mapping. The precision of the estimates of QTL positions and their effects would be increased with such information.
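The bootstrap logic is generic: resample the mapping population with replacement and read confidence limits from the distribution of the re-estimated statistic. The sketch below uses a stand-in statistic (a simple recombination fraction) and toy data, not the study's linkage-map estimators.

```python
# Percentile-bootstrap confidence interval for a statistic, illustrated on
# a toy recombination fraction (share of recombinant gametes).
import numpy as np

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap (1 - alpha) confidence interval for statistic(data)."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    stats = np.array([
        statistic(data[rng.integers(0, len(data), len(data))])
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Toy data: 20 recombinants out of 200 gametes (point estimate 0.10).
recombinant = np.array([0] * 180 + [1] * 20)
lo, hi = bootstrap_ci(recombinant, np.mean)
```

Repeating this per marker interval yields the kind of interval-and-frequency-distribution evidence the study uses to decide whether an ordering discrepancy is real or due to chance.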
Abstract:
Background: With nearly 1,100 species, the fish family Characidae represents more than half of the species of Characiformes, and is a key component of Neotropical freshwater ecosystems. The composition, phylogeny, and classification of Characidae is currently uncertain, despite significant efforts based on analysis of morphological and molecular data. No consensus about the monophyly of this group or its position within the order Characiformes has been reached, challenged by the fact that many key studies to date have non-overlapping taxonomic representation and focus only on subsets of this diversity. Results: In the present study we propose a new definition of the family Characidae and a hypothesis of relationships for the Characiformes based on phylogenetic analysis of DNA sequences of two mitochondrial and three nuclear genes (4,680 base pairs). The sequences were obtained from 211 samples representing 166 genera distributed among all 18 recognized families in the order Characiformes, all 14 recognized subfamilies in the Characidae, plus 56 of the genera so far considered incertae sedis in the Characidae. The phylogeny obtained is robust, with most lineages significantly supported by posterior probabilities in Bayesian analysis, and high bootstrap values from maximum likelihood and parsimony analyses. Conclusion: A monophyletic assemblage strongly supported in all our phylogenetic analyses is herein defined as the Characidae and includes the characiform species lacking a supraorbital bone and with a derived position of the emergence of the hyoid artery from the anterior ceratohyal. To recognize this and several other monophyletic groups within characiforms we propose changes in the limits of several families to facilitate future studies in the Characiformes and particularly the Characidae.
This work presents a new phylogenetic framework for a speciose and morphologically diverse group of freshwater fishes of significant ecological and evolutionary importance across the Neotropics and portions of Africa.
Abstract:
Context. Unevolved metal-poor stars constitute a fossil record of the early Galaxy and can provide invaluable information on the properties of the first generations of stars. Binary systems also provide direct information on the stellar masses of their member stars. Aims. The purpose of this investigation is a detailed abundance study of the double-lined spectroscopic binary CS 22876-032, which comprises the two most metal-poor dwarfs known. Methods. We used high-resolution, high-S/N ratio spectra from the UVES spectrograph at the ESO VLT telescope. Long-term radial-velocity measurements and broad-band photometry allowed us to determine improved orbital elements and stellar parameters for both components. We used OSMARCS 1D models and the TURBOSPECTRUM spectral synthesis code to determine the abundances of Li, O, Na, Mg, Al, Si, Ca, Sc, Ti, Cr, Mn, Fe, Co and Ni. We also used the CO5BOLD model atmosphere code to compute the 3D abundance corrections, notably for Li and O. Results. We find a metallicity of [Fe/H] ~ -3.6 for both stars, using 1D models with 3D corrections of ~ -0.1 dex from averaged 3D models. We determine the oxygen abundance from the near-UV OH bands; the 3D corrections are large, -1 and -1.5 dex for the secondary and primary respectively, and yield [O/Fe] ~ 0.8, close to the high-quality results obtained from the [OI] 630 nm line in metal-poor giants. Other [α/Fe] ratios are consistent with those measured in other dwarfs and giants with similar [Fe/H], although Ca and Si are somewhat low ([X/Fe] ≲ 0). Other element ratios follow those of other halo stars. The Li abundance of the primary star is consistent with the Spite plateau, but the secondary shows a lower abundance; 3D corrections are small. Conclusions. The Li abundance in the primary star supports the extension of the Spite plateau value at the lowest metallicities, without any decrease.
The low abundance in the secondary star could be explained by endogenic Li depletion due to its cooler temperature. If this is not the case, another, as yet unknown, mechanism may be causing increased scatter in A(Li) at the lowest metallicities.