990 results for RANDOM SEQUENTIAL ADSORPTION
Abstract:
Fusion techniques have received considerable attention for achieving performance improvement in biometrics. While a multi-sample fusion architecture reduces false rejects, it also increases false accepts. The impact on performance also depends on the nature of subsequent attempts, i.e., random or adaptive. Expressions for error rates are presented and experimentally evaluated in this work by considering the multi-sample fusion architecture for text-dependent speaker verification using HMM-based, digit-dependent speaker models. Analysis incorporating correlation modeling demonstrates that the use of adaptive samples improves overall fusion performance compared to randomly repeated samples. For a text-dependent speaker verification system using digit strings, sequential decision fusion of seven instances with three random samples is shown to reduce the overall error of the verification system by 26%, which can be further reduced by 6% with adaptive samples. This analysis, novel in its treatment of random and adaptive multiple presentations within a sequential fused decision architecture, is also applicable to other biometric modalities such as fingerprints and handwriting samples.
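The trade-off described in this abstract can be illustrated with a toy OR-rule fusion calculation, assuming statistically independent attempts; the paper's contribution is precisely to model the correlation between random and adaptive repeats, which this sketch omits:

```python
def fused_rates(frr, far, n_attempts):
    """OR-rule multi-sample fusion: accept if any of n attempts is accepted.
    Assumes independent attempts (a simplification; the paper additionally
    models correlation between repeated samples)."""
    fused_frr = frr ** n_attempts              # reject only if every attempt rejects
    fused_far = 1 - (1 - far) ** n_attempts    # accept if any impostor attempt slips through
    return fused_frr, fused_far

# illustrative single-attempt rates, not the paper's measured values
frr3, far3 = fused_rates(0.05, 0.01, 3)
```

Raising the number of attempts drives the false-reject rate down geometrically while the false-accept rate climbs, which is the tension a sequential fused decision architecture is designed to manage.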
Abstract:
A computationally efficient sequential Monte Carlo algorithm is proposed for the sequential design of experiments for the collection of block data described by mixed effects models. The difficulty in applying a sequential Monte Carlo algorithm in such settings is the need to evaluate the observed data likelihood, which is typically intractable for all but linear Gaussian models. To overcome this difficulty, we propose to estimate the likelihood unbiasedly, and to perform inference and make decisions based on an exact-approximate algorithm. Two estimators are proposed: one using quasi-Monte Carlo methods and one using the Laplace approximation with importance sampling. Both of these approaches can be computationally expensive, so we propose exploiting parallel computational architectures to ensure designs can be derived in a timely manner. We also extend our approach to allow for model uncertainty. This research is motivated by important pharmacological studies related to the treatment of critically ill patients.
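The exact-approximate idea rests on replacing the intractable observed-data likelihood with an unbiased estimate. A minimal sketch for a hypothetical random-intercept model, using plain Monte Carlo over the random effect (the paper's quasi-Monte Carlo and Laplace-with-importance-sampling estimators are more efficient refinements of this idea):

```python
import numpy as np

def likelihood_estimate(y, sigma_b, sigma_e, n_draws=20000, seed=None):
    """Unbiased Monte Carlo estimate of the observed-data likelihood
    p(y) = E_b[ prod_j N(y_j; b, sigma_e^2) ] for the toy model
    y_j = b + e_j with random intercept b ~ N(0, sigma_b^2)."""
    rng = np.random.default_rng(seed)
    b = rng.normal(0.0, sigma_b, size=n_draws)        # draws from the prior of b
    # log-density of the whole block given each draw, summed over observations
    ll = -0.5 * ((y[None, :] - b[:, None]) ** 2 / sigma_e ** 2
                 + np.log(2.0 * np.pi * sigma_e ** 2)).sum(axis=1)
    return np.exp(ll).mean()                          # average of unbiased terms

y = np.array([0.4, 0.6, 0.5])
est = likelihood_estimate(y, sigma_b=1.0, sigma_e=0.5, seed=0)
```

For this linear Gaussian toy the exact marginal is available (y is multivariate normal), which is what makes the sketch checkable; in the mixed-effects settings the paper targets, no closed form exists and the unbiased estimate is all one has.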
Abstract:
Our work is motivated by impromptu (or ``as-you-go'') deployment of wireless relay nodes along a path, a need that arises in many situations. In this paper, the path is modeled as starting at the origin (where the data sink, e.g., the control center, is located) and evolving randomly over a lattice in the positive quadrant. A person walks along the path deploying relay nodes as he goes. At each step, the path can, randomly, either continue in the same direction, take a turn, or come to an end, at which point a data source (e.g., a sensor) has to be placed that will send packets to the data sink. A decision has to be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate of the source is very low, and simple link-by-link scheduling, we consider the problem of sequential relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total-cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set and show that this set is characterized by a boundary (with respect to the position of the last placed relay) beyond which it is optimal to place the next relay. Next, based on a simpler one-step-look-ahead characterization of the optimal policy, we propose an algorithm which is proved to converge to the optimal placement set in a finite number of steps and which is faster than value iteration. We show by simulations that the distance-threshold-based heuristic usually assumed in the literature is close to optimal, provided that the threshold distance is carefully chosen. (C) 2014 Elsevier B.V. All rights reserved.
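The structure of the optimal policy (place a relay once the distance from the last one crosses a boundary) can be reproduced in a stripped-down one-dimensional version of the problem. The sketch below runs plain value iteration on a line rather than the paper's 2-D lattice path, with hypothetical cost parameters:

```python
def relay_value_iteration(p=0.3, c=1.0, d=lambda r: r * r, R=30, iters=500):
    """Toy total-cost MDP for impromptu relay placement on a line.
    State r = steps walked since the last placed relay.  Each step the
    path ends with probability p (pay the terminal hop cost d(r)) or
    continues.  Placing a relay costs c plus the hop cost d(r) and
    resets the state to 0."""
    V = [0.0] * (R + 1)
    for _ in range(iters):
        newV = [0.0] * (R + 1)
        for r in range(R + 1):
            place = c + d(r) + V[0]                  # place now, reset distance
            nxt = min(r + 1, R)
            wait = p * d(nxt) + (1 - p) * V[nxt]     # walk one more step
            newV[r] = min(place, wait)
        V = newV
    # smallest distance at which placing is (weakly) optimal: the boundary
    threshold = next(r for r in range(R + 1)
                     if c + d(r) + V[0] <= p * d(min(r + 1, R)) + (1 - p) * V[min(r + 1, R)])
    return V, threshold

V, threshold = relay_value_iteration()
```

With a convex hop cost, the set of states where placing is optimal is characterized by such a boundary, matching the abstract; the one-step-look-ahead algorithm the authors propose converges to this placement set faster than the plain value iteration used here.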
Abstract:
This paper describes the development of a sequential injection method to automate the fluorimetric determination of glyphosate, based on a first step of oxidation to glycine by hypochlorite at 48 °C, followed by reaction with the fluorogenic reagent o-phthaldialdehyde in the presence of 2-mercaptoethanol in borate buffer (pH > 9) to produce a fluorescent 1-(2′-hydroxyethylthio)-2-N-alkylisoindole. The proposed method has a linear response for glyphosate concentrations between 0.25 and 25.0 μmol L(-1), with limits of detection and quantification of 0.08 and 0.25 μmol L(-1), respectively. The sampling rate of the method is 18 samples per hour, and it consumes only a fraction of the reagents used by the chromatographic method based on the same chemistry. The method was applied to study adsorption/desorption properties in a soil and in a sediment sample. Adsorption and desorption isotherms were properly fitted by Freundlich and Langmuir equations, leading to adsorption capacities of 1384 ± 26 and 295 ± 30 mg kg(-1) for the soil and sediment samples, respectively. These values are consistent with the literature, with the larger adsorption capacity of the soil being explained by its larger content of clay minerals, while the sediment was predominantly sandy. (C) 2011 Elsevier B.V. All rights reserved.
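The Langmuir capacities quoted above come from isotherm fitting. A minimal sketch of how such a fit is commonly done, via the linearised Langmuir form on synthetic data (not the authors' actual data or software):

```python
import numpy as np

def fit_langmuir(Ce, qe):
    """Fit the Langmuir isotherm  q = qmax*KL*C / (1 + KL*C)  using its
    linearised form  C/q = C/qmax + 1/(KL*qmax)  by least squares.
    Ce: equilibrium concentrations, qe: adsorbed amounts."""
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    qmax = 1.0 / slope            # adsorption capacity
    KL = slope / intercept        # Langmuir affinity constant
    return qmax, KL

# synthetic isotherm with a capacity in the range reported for the soil sample
Ce = np.linspace(1.0, 50.0, 10)
qe = 1384.0 * 0.05 * Ce / (1.0 + 0.05 * Ce)
qmax, KL = fit_langmuir(Ce, qe)
```

On noiseless data the linearised fit recovers the generating parameters exactly; with real measurements, nonlinear least squares on the original form is often preferred.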
Abstract:
Object segmentation is one of the fundamental steps for a number of robotic applications such as manipulation, object detection, and obstacle avoidance. This paper proposes a visual method for incorporating colour and depth information from sequential multiview stereo images to segment objects of interest from complex and cluttered environments. Rather than segmenting objects using information from a single frame in the sequence, we incorporate information from neighbouring views to increase the reliability of the information and improve the overall segmentation result. Specifically, dense depth information of a scene is computed using multiple-view stereo. Depths from neighbouring views are reprojected into the reference frame to be segmented, compensating for imperfect depth computations in individual frames. The multiple depth layers are then combined with colour information from the reference frame to create a Markov random field that models the segmentation problem. Finally, graph-cut optimisation is employed to infer the pixels belonging to the object to be segmented. The segmentation accuracy is evaluated over images from an outdoor video sequence, demonstrating the viability of automatic object segmentation for mobile robots using monocular cameras as a primary sensor.
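Graph-cut on a 2-D grid needs a max-flow solver, but on a 1-D chain of pixels the same binary MAP problem is exactly solvable by dynamic programming, which makes for a compact, self-contained sketch of how unary (colour/depth) and pairwise (smoothness) terms interact. Everything here is illustrative, not the paper's pipeline:

```python
def chain_mrf_segment(unary_obj, unary_bg, smooth=1.0):
    """Exact MAP labelling for a binary MRF on a 1-D chain of pixels via
    dynamic programming (on a chain this matches what graph-cut computes
    on 2-D grids).  unary_obj[i]/unary_bg[i]: cost of labelling pixel i
    as object/background; smooth: penalty when adjacent labels differ."""
    n = len(unary_obj)
    INF = float("inf")
    cost = [[0.0, 0.0] for _ in range(n)]
    back = [[0, 0] for _ in range(n)]
    cost[0] = [unary_bg[0], unary_obj[0]]
    for i in range(1, n):
        for lab in (0, 1):
            u = unary_bg[i] if lab == 0 else unary_obj[i]
            best, arg = INF, 0
            for prev in (0, 1):
                c = cost[i - 1][prev] + (smooth if prev != lab else 0.0)
                if c < best:
                    best, arg = c, prev
            cost[i][lab] = u + best
            back[i][lab] = arg
    lab = 0 if cost[-1][0] <= cost[-1][1] else 1
    labels = [lab]
    for i in range(n - 1, 0, -1):   # backtrack the optimal labelling
        lab = back[i][lab]
        labels.append(lab)
    return labels[::-1]

unary_obj = [2, 2, 0, 2, 0, 2, 2]   # low cost where colour/depth suggest "object"
unary_bg = [0, 0, 3, 0, 3, 0, 0]    # pixel 3 contradicts its neighbours
labels = chain_mrf_segment(unary_obj, unary_bg, smooth=1.5)
```

The smoothness term overrides the single contradictory pixel (index 3), which is, at scale, the role the reprojected multi-view depth layers play in stabilising the segmentation.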
Abstract:
With the overwhelming increase in the amount of data on the web and in databases, many text mining techniques have been proposed for mining useful patterns in text documents. Extracting closed sequential patterns using the Pattern Taxonomy Model (PTM) is one of the pruning methods used to remove noisy, inconsistent, and redundant patterns. However, the PTM model treats each extracted pattern as a whole without considering its constituent terms, which can affect the quality of the extracted patterns. This paper proposes an innovative and effective method that extends the random set to accurately weight patterns based on their distribution in the documents and the distribution of their terms within patterns. The proposed approach then finds the specific closed sequential patterns (SCSP) based on the newly calculated weights. The experimental results on the Reuters Corpus Volume 1 (RCV1) data collection and TREC topics show that the proposed method significantly outperforms other state-of-the-art methods on several popular measures.
Abstract:
Studies on the dilute solution properties of methylmethacrylate-acrylonitrile random copolymers of three different compositions, 0.236, 0.5 and 0.74 mole fraction (m.f.) of acrylonitrile (AN), designated as MAa, MAb and MAc, respectively, have been made in good solvents and theta solvents. MAa has been studied in benzene (Bz) and ethylacetate (EAc); MAb in acetonitrile (MeCN), dimethyl sulphoxide (DMSO) and a binary solvent mixture of Bz and dimethylformamide (DMF) in the volume ratio 6.5:1, designated as BM1; and MAc in MeCN, DMSO and Bz + DMF in the volume ratio 1.667:1, designated as BM2. The Mark-Houwink exponent ‘a’ reveals that Bz is a theta solvent for MAa at 20°C. For MAb and MAc, BM1 and BM2, respectively, have ‘a’ values of 0.5 at all three temperatures studied (30°, 40° and 50°C). It is not clear whether these represent theta states or whether preferential adsorption plays a role, complicating the behaviour in solution. The values of A2 are very low in MeCN considering that it is a very good solvent for the copolymer, the ‘a’ values for MAb and MAc being 0.75 and 0.7, respectively.
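The exponent ‘a’ discussed above comes from the Mark-Houwink relation [η] = K·M^a, which is fitted as a straight line in log-log coordinates. A hypothetical numerical sketch on synthetic data (not the paper's measurements):

```python
import numpy as np

def mark_houwink_fit(M, eta):
    """Estimate K and the exponent a in [eta] = K * M**a by linear
    regression of log[eta] on log(M)."""
    a, logK = np.polyfit(np.log(M), np.log(eta), 1)
    return np.exp(logK), a

# synthetic data generated with a = 0.5, i.e. theta-solvent behaviour
M = np.array([1.0e4, 5.0e4, 1.0e5, 5.0e5])   # molecular weights
eta = 1.0e-3 * M ** 0.5                      # intrinsic viscosities
K, a = mark_houwink_fit(M, eta)
```

An exponent near 0.5 is the theta-condition signature the abstract relies on; values around 0.7-0.75, as found in MeCN, indicate a good solvent.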
Abstract:
Limitations in quality bedding material have resulted in the growing need to re-use litter during broiler farming in some countries, which can be of concern from a food-safety perspective. The aim of this study was to compare the Campylobacter levels in ceca and litter across three litter treatments under commercial farming conditions. The litter treatments were (a) the use of new litter after each farming cycle; (b) an Australian partial litter re-use practice; and (c) a full litter re-use practice. The study was carried out on two farms over two years (Farm 1 from 2009–2010 and Farm 2 from 2010–2011), across three sheds (35,000 to 40,000 chickens/shed) on each farm, adopting the three litter treatments across six commercial cycles. A random sampling design was adopted to test litter and ceca for Campylobacter and Escherichia coli, prior to commercial first thin-out and final pick-up. Campylobacter levels varied little across litter practices and farming cycles on each farm and were in the range of log 8.0–9.0 CFU/g in ceca and log 4.0–6.0 MPN/g in litter. Similarly, the E. coli levels in ceca were ∼log 7.0 CFU/g. At first thin-out and final pick-up, the statistical analysis for both litter and ceca showed that the three-way interaction (treatments by farms by times) was highly significant (P < 0.01), indicating that the patterns of Campylobacter emergence/presence across time vary between the farms, cycles and pick-ups. The emergence and levels of both organisms were not influenced by litter treatments across the six farming cycles on both farms. Either C. jejuni or C. coli could be the dominant species across litter and ceca, and this phenomenon could not be attributed to specific litter treatments. Irrespective of the litter treatments in place, cycle 2 on Farm 2 remained Campylobacter-free. These outcomes suggest that litter treatments did not directly influence the time of emergence or the levels of Campylobacter and E. coli during commercial farming.
Abstract:
We consider the classical problem of sequential detection of a change in a distribution (from hypothesis 0 to hypothesis 1), where the fusion centre receives vectors of periodic measurements, with the measurements being i.i.d. over time and across the vector components under each of the two hypotheses. In our problem, the sensor devices ("motes") that generate the measurements constitute an ad hoc wireless network. The motes contend using a random access protocol (such as CSMA/CA) to transmit their measurement packets to the fusion centre. The fusion centre waits for vectors of measurements to accumulate before making decisions. We formulate the optimal detection problem, taking into account the network delay experienced by the vectors of measurements, and find that, under periodic sampling, the detection delay decouples into network delay and decision delay. We obtain a lower bound on the network delay and propose a censoring scheme, in which lagging sensors drop their delayed observations in order to mitigate network delay. We show that this scheme can achieve the lower bound. This approach is explored via simulation. We also use numerical evaluation and simulation to study issues such as the optimal sampling rate for a given number of sensors, and the optimal number of sensors for a given measurement rate.
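The "decision delay" component of the problem above is classical sequential change detection; a textbook CUSUM sketch for an i.i.d. Gaussian mean shift gives the flavour (this is the generic procedure, not the paper's network-delay-aware formulation):

```python
import numpy as np

def cusum(xs, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0):
    """Page's CUSUM for a mean shift mu0 -> mu1 in i.i.d. Gaussian data.
    Returns the first index at which the statistic crosses the threshold,
    or None if no change is declared."""
    s = 0.0
    for k, x in enumerate(xs):
        # log-likelihood ratio increment contributed by observation x
        llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma ** 2
        s = max(0.0, s + llr)    # reflect at zero: forget pre-change evidence
        if s > threshold:
            return k
    return None

rng = np.random.default_rng(1)
xs = np.concatenate([rng.normal(0.0, 1.0, 100),    # pre-change samples
                     rng.normal(1.0, 1.0, 100)])   # post-change samples
alarm = cusum(xs)
```

The detection delay of such a rule is what remains after the network delay is subtracted out in the decoupling result described above.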
Abstract:
Structural Support Vector Machines (SSVMs) and Conditional Random Fields (CRFs) are popular discriminative methods used for classifying structured and complex objects like parse trees, image segments and part-of-speech tags. The datasets involved are of very high dimension, and the models designed using typical training algorithms for SSVMs and CRFs are non-sparse. This non-sparse nature of the models results in slow inference. Thus, there is a need to devise new algorithms for sparse SSVM and CRF classifier design. The use of the elastic net and the L1-regularizer has already been explored for solving primal CRF and SSVM problems, respectively, to design sparse classifiers. In this work, we focus on the dual elastic-net-regularized SSVM and CRF. By exploiting the weakly coupled structure of these convex programming problems, we propose a new sequential alternating proximal (SAP) algorithm to solve these dual problems. This algorithm works by sequentially visiting each training set example and solving a simple subproblem restricted to a small subset of variables associated with that example. Numerical experiments on various benchmark sequence labeling datasets demonstrate that the proposed algorithm scales well. Further, the classifiers designed are sparser than those designed by solving the respective primal problems and demonstrate comparable generalization performance. Thus, the proposed SAP algorithm is a useful alternative for sparse SSVM and CRF classifier design.
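The "visit one example, solve a small subproblem in closed form" pattern is easiest to see in the simpler setting of dual coordinate descent for a plain linear SVM. The sketch below is that classical method, standing in for (not reproducing) the SAP algorithm, which targets the elastic-net-regularized SSVM/CRF duals instead:

```python
import numpy as np

def dcd_linear_svm(X, y, C=1.0, epochs=50):
    """Dual coordinate descent for a linear SVM with hinge loss.
    Each visit to example i solves the one-variable dual subproblem in
    closed form and updates the primal vector w incrementally."""
    n, d = X.shape
    alpha = np.zeros(n)                  # dual variables, one per example
    w = np.zeros(d)                      # maintained primal w = sum_i alpha_i y_i x_i
    sqnorm = (X ** 2).sum(axis=1)
    for _ in range(epochs):
        for i in range(n):
            g = y[i] * (X[i] @ w) - 1.0                       # coordinate gradient
            new_a = min(max(alpha[i] - g / sqnorm[i], 0.0), C)  # clip to [0, C]
            w += (new_a - alpha[i]) * y[i] * X[i]             # incremental update
            alpha[i] = new_a
    return w, alpha

# two separable points; the max-margin solution is w = (0.5, 0)
X = np.array([[2.0, 0.0], [-2.0, 0.0]])
y = np.array([1.0, -1.0])
w, alpha = dcd_linear_svm(X, y)
```

The sparsity SAP delivers shows up here too: only the support vectors end with nonzero dual variables.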
Macroporous three-dimensional graphene oxide foams for dye adsorption and antibacterial applications
Abstract:
Several reports illustrate the wide-ranging applicability of graphene oxide (GO) in water remediation. However, few-layer graphene oxide tends to aggregate under saline conditions, thereby reducing its activity. The effects of aggregation can be minimized by having a random arrangement of GO layers in a three-dimensional architecture. The current study emphasizes the potential benefits of highly porous, ultralight graphene oxide foams in environmental applications. These foams were prepared by a facile and cost-effective lyophilization technique. The 3D architecture allowed the direct use of these foams in the removal of aqueous pollutants without any pretreatment such as ultrasonication. Due to their macroporous nature, the foams exhibited excellent adsorption abilities towards carcinogenic dyes such as rhodamine B (RB), malachite green (MG) and acriflavine (AF), with respective sorption capacities of 446, 321 and 228 mg g(-1) of foam. The foams were also investigated for antibacterial activity against E. coli bacteria in aqueous and nutrient growth media. The random arrangement of GO layers in the porous foam architecture allowed it to exhibit excellent antibacterial activity even under physiological conditions, following the classical wrapping-perturbation mechanism. These results demonstrate the broad scope of GO foam in water remediation for both dye removal and antibacterial activity.
Abstract:
We propose a distributed sequential algorithm for quick detection of spectral holes in a Cognitive Radio setup. Two or more local nodes make decisions and inform the fusion centre (FC) over a reporting Multiple Access Channel (MAC), which then makes the final decision. The local nodes use energy detection and the FC uses mean detection in the presence of fading, heavy-tailed electromagnetic interference (EMI) and outliers. The statistics of the primary signal, the channel gain and the EMI are not known. Different nonparametric sequential algorithms are compared to choose appropriate algorithms to be used at the local nodes and the FC. A modification of a recently developed random walk test is selected both for energy detection at the local nodes and for mean detection at the fusion centre. We show via simulations and analysis that the nonparametric distributed algorithm developed performs well in the presence of fading, EMI and outliers. The algorithm is iterative in nature, making the computation and storage requirements minimal.
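The flavour of a nonparametric random-walk test can be sketched as follows: each sample moves a walk up or down according to its sign relative to the nominal level, a small negative drift keeps the walk down under H0, and crossing either boundary stops the test. This is a generic illustration, not the specific modified test the authors select:

```python
def random_walk_test(xs, mu0=0.0, drift=0.1, upper=5.0, lower=-5.0):
    """Nonparametric sequential test: step +1 if the sample exceeds the
    nominal level mu0, else -1, minus a fixed drift.  Under H0 the walk
    drifts down; under a genuine mean increase it climbs.  Returns the
    decision and the stopping index, or (None, len(xs)) if no boundary
    is crossed."""
    s = 0.0
    for k, x in enumerate(xs):
        s += (1.0 if x > mu0 else -1.0) - drift
        if s >= upper:
            return "H1", k
        if s <= lower:
            return "H0", k
    return None, len(xs)

decision_hi, k_hi = random_walk_test([1.0] * 20)    # samples above the level
decision_lo, k_lo = random_walk_test([-1.0] * 20)   # samples below the level
```

Because the statistic uses only signs, no knowledge of the signal, fading or EMI distributions is needed, which is the point of choosing a nonparametric test in this setting.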