255 results for sequential coalescence
Abstract:
Vascular endothelial growth factor (VEGF) and bone morphogenetic protein 7 (BMP-7) are key regulators of angiogenesis and osteogenesis during bone regeneration. The aim of this study was to investigate the possibility of realising sequential release of the two growth factors using a novel composite scaffold. Poly(lactic-co-glycolic acid) (PLGA)-Akermanite (AK) microspheres were used to make the composite scaffold, which was loaded with BMP-7 and then embedded in a gelatin hydrogel matrix loaded with VEGF. The release profiles of the growth factors were studied and selected osteogenesis-related markers of bone marrow stromal cells (BMSCs) were analysed. The composite scaffolds exhibited a fast initial burst release of VEGF within the first 3 days and a sustained slow release of BMP-7 over the full period of 20 days. The in vitro proliferation and differentiation of BMSCs cultured in osteogenic medium were enhanced 1- to 2-fold as a result of the additional, sequential release of growth factors from the PLGA-AK/gelatin composite scaffolds.
Abstract:
In many applications, e.g., bioinformatics, web access traces, and system utilisation logs, the data is naturally in the form of sequences. There is great interest in analysing such sequential data to find its inherent characteristics and the relationships within it, and sequential association rule mining is one possible method for this analysis. Because conventional sequential association rule mining very often generates a huge number of association rules, many of which are redundant, it is desirable to eliminate these unnecessary rules. Owing to the complexity and temporally ordered nature of sequential data, current research on sequential association rule mining is limited. Although several sequential association rule prediction models using either sequence constraints or temporal constraints have been proposed, none of them considers the redundancy problem in rule mining. The main contribution of this research is a non-redundant association rule mining method based on closed frequent sequences and minimal sequential generators. We also give a definition of non-redundant sequential rules, which are sequential rules with minimal antecedents but maximal consequents. A new algorithm called CSGM (closed sequential and generator mining) for generating closed sequences and minimal sequential generators is also introduced. A further experiment compares the performance of generating non-redundant sequential rules against full sequential rules, and the performance of CSGM is evaluated against other closed sequential pattern mining and generator mining algorithms. We also use the generated non-redundant sequential rules for query expansion in order to improve recommendations for infrequently purchased products.
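To make the rule-forming step concrete, the following is a minimal sketch (not the authors' CSGM implementation) of how non-redundant sequential rules could be assembled by pairing minimal sequential generators (minimal antecedents) with the closed frequent sequences (maximal consequents) they extend; the input dictionaries, the prefix-based pairing, and the toy supports are all illustrative assumptions.

```python
def is_prefix(prefix, seq):
    """True if `prefix` is a prefix of the sequence `seq`."""
    return len(prefix) <= len(seq) and tuple(seq[:len(prefix)]) == tuple(prefix)

def non_redundant_rules(generators, closed_sequences, min_conf):
    """Pair each generator (minimal antecedent) with the closed sequences
    (maximal consequents) it is a prefix of, keeping rules above min_conf.

    `generators` and `closed_sequences` map a sequence (tuple of items) to its
    support, as produced by some closed-pattern / generator miner.
    """
    rules = []
    for g, g_sup in generators.items():
        for c, c_sup in closed_sequences.items():
            if is_prefix(g, c) and len(c) > len(g):
                conf = c_sup / g_sup            # confidence of rule g => c
                if conf >= min_conf:
                    rules.append((g, c, c_sup, conf))
    return rules

# Toy example with hypothetical supports.
gens = {("a",): 10, ("a", "b"): 6}
closed = {("a", "b", "c"): 5, ("a", "d"): 4}
print(non_redundant_rules(gens, closed, min_conf=0.5))
```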
Abstract:
Here we present a sequential Monte Carlo approach that can be used to find optimal designs. Our focus is on the design of phase III clinical trials where the derivation of sampling windows is required along with the optimal sampling schedule. The search is conducted via a particle filter which traverses a sequence of target distributions artificially constructed via an annealed utility. The algorithm derives a catalogue of highly efficient designs which not only contains the optimal design but can also be used to derive sampling windows. We demonstrate our approach by designing a hypothetical phase III clinical trial.
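As an illustration of the general idea (not the authors' algorithm), the sketch below runs a simple particle filter over a one-dimensional design space, re-weighting and moving a cloud of candidate designs through targets proportional to U(d)^gamma_t for an increasing annealing schedule; the utility function, schedule, and move kernel are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def utility(d):
    # Hypothetical smooth utility peaked at d = 2.0; in practice this would be
    # an expected utility for a phase III trial design.
    return np.exp(-(d - 2.0) ** 2)

n = 1000
designs = rng.uniform(0.0, 5.0, n)       # initial particle cloud of designs
weights = np.full(n, 1.0 / n)
gammas = np.linspace(0.0, 5.0, 21)       # annealing schedule

for g_prev, g_next in zip(gammas[:-1], gammas[1:]):
    # Re-weight towards the next annealed target U(d)^gamma.
    weights *= utility(designs) ** (g_next - g_prev)
    weights /= weights.sum()
    # Resample and jitter (random-walk move) to keep particle diversity.
    idx = rng.choice(n, size=n, p=weights)
    designs = designs[idx] + rng.normal(0.0, 0.05, n)
    weights = np.full(n, 1.0 / n)

# The final cloud concentrates on high-utility designs; a quantile band of it
# can be read off as a "sampling window" around the optimum.
print(designs.mean(), np.quantile(designs, [0.05, 0.95]))
```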
Abstract:
Object segmentation is one of the fundamental steps for a number of robotic applications such as manipulation, object detection, and obstacle avoidance. This paper proposes a visual method for incorporating colour and depth information from sequential multiview stereo images to segment objects of interest from complex and cluttered environments. Rather than segmenting objects using information from a single frame in the sequence, we incorporate information from neighbouring views to increase the reliability of the information and improve the overall segmentation result. Specifically, dense depth information of a scene is computed using multiple-view stereo. Depths from neighbouring views are reprojected into the reference frame to be segmented, compensating for imperfect depth computations in individual frames. The multiple depth layers are then combined with colour information from the reference frame to create a Markov random field that models the segmentation problem. Finally, graph-cut optimisation is employed to infer the pixels belonging to the object to be segmented. The segmentation accuracy is evaluated on images from an outdoor video sequence, demonstrating the viability of automatic object segmentation for mobile robots using monocular cameras as the primary sensor.
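The depth re-projection step described above is standard multi-view geometry; the sketch below shows one way it could look under a pinhole camera model (an illustrative assumption, not the paper's implementation): a neighbour-view depth map is back-projected to 3D, transformed into the reference camera frame, and projected again so its depths can be fused before building the Markov random field. The calibration values in the toy usage are made up.

```python
import numpy as np

def reproject_depth(depth_nbr, K, R_nbr_to_ref, t_nbr_to_ref):
    """Map a neighbour-view depth map into the reference view.

    depth_nbr: (H, W) depths in the neighbour camera.
    K: (3, 3) shared intrinsic matrix.
    R_nbr_to_ref, t_nbr_to_ref: rigid transform from neighbour to reference.
    Returns a sparse (H, W) depth map in the reference view (0 = no sample).
    """
    H, W = depth_nbr.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix                # back-project pixel rays
    pts_nbr = rays * depth_nbr.reshape(1, -1)    # 3D points in neighbour frame
    pts_ref = R_nbr_to_ref @ pts_nbr + t_nbr_to_ref.reshape(3, 1)
    proj = K @ pts_ref                           # project into reference view
    z = proj[2]
    ok = z > 1e-6
    uu = np.round(proj[0, ok] / z[ok]).astype(int)
    vv = np.round(proj[1, ok] / z[ok]).astype(int)
    inside = (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H)
    out = np.zeros((H, W))
    out[vv[inside], uu[inside]] = z[ok][inside]
    return out

# Toy usage with hypothetical calibration and a small baseline between views.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 2.0)
ref_depth = reproject_depth(depth, K, np.eye(3), np.array([0.05, 0.0, 0.0]))
```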
Abstract:
In this paper we present a sequential Monte Carlo algorithm for Bayesian sequential experimental design applied to generalised non-linear models for discrete data. The approach is computationally convenient in that the information from newly observed data can be incorporated through a simple re-weighting step. We also consider a flexible parametric model for the stimulus-response relationship together with a newly developed hybrid design utility that can produce more robust estimates of the target stimulus in the presence of substantial model and parameter uncertainty. The algorithm is applied to hypothetical clinical trial and bioassay scenarios. In the discussion, potential generalisations of the algorithm are suggested to extend its applicability to a wide variety of scenarios.
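The re-weighting step mentioned above can be illustrated with a small sketch (assumed logistic stimulus-response likelihood and hypothetical values, not the paper's model): when a new observation arrives, the parameter particles are simply re-weighted by its likelihood, and the effective sample size indicates when resampling and moving the particles becomes necessary.

```python
import numpy as np

rng = np.random.default_rng(1)

def likelihood(y, x, theta):
    # Hypothetical dose-response: P(y = 1 | x, theta) = logistic(a + b * x).
    a, b = theta[:, 0], theta[:, 1]
    p = 1.0 / (1.0 + np.exp(-(a + b * x)))
    return p if y == 1 else 1.0 - p

# Prior particles for theta = (a, b) with equal weights.
theta = rng.normal(0.0, 1.0, size=(5000, 2))
w = np.full(len(theta), 1.0 / len(theta))

# Incorporate a newly observed response y at design point x.
x_new, y_new = 0.7, 1
w *= likelihood(y_new, x_new, theta)
w /= w.sum()

# Effective sample size tells us when to resample/move the particles.
ess = 1.0 / np.sum(w ** 2)
print("ESS after update:", ess)
```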
Abstract:
Here we present a sequential Monte Carlo (SMC) algorithm that can be used for any one-at-a-time Bayesian sequential design problem in the presence of model uncertainty where discrete data are encountered. Our focus is on adaptive design for model discrimination but the methodology is applicable if one has a different design objective such as parameter estimation or prediction. An SMC algorithm is run in parallel for each model and the algorithm relies on a convenient estimator of the evidence of each model which is essentially a function of importance sampling weights. Other methods for this task such as quadrature, often used in design, suffer from the curse of dimensionality. Approximating posterior model probabilities in this way allows us to use model discrimination utility functions derived from information theory that were previously difficult to compute except for conjugate models. A major benefit of the algorithm is that it requires very little problem specific tuning. We demonstrate the methodology on three applications, including discriminating between models for decline in motor neuron numbers in patients suffering from neurological diseases such as Motor Neuron disease.
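The evidence estimator alluded to here can be sketched as follows (assumed notation, not the paper's code): each SMC re-weighting step contributes the log of a weighted mean of incremental importance weights to a running log evidence, and posterior model probabilities are then obtained by normalising evidence times prior across the rival models.

```python
import numpy as np

def update_evidence(log_evidence, normalised_w, log_incremental_w):
    """One SMC step: add the log of the weighted mean of incremental weights."""
    inc = np.sum(normalised_w * np.exp(log_incremental_w))
    return log_evidence + np.log(inc)

def posterior_model_probs(log_evidences, priors):
    """Posterior model probabilities from per-model log evidence and priors."""
    log_post = np.log(priors) + np.array(log_evidences)
    log_post -= log_post.max()          # stabilise before exponentiating
    p = np.exp(log_post)
    return p / p.sum()

# Toy single update step, then model comparison across three rival models.
le = update_evidence(0.0, np.full(4, 0.25), np.log([0.2, 0.5, 0.1, 0.3]))
print(le, posterior_model_probs([-10.2, -9.1, -12.7], priors=[1.0 / 3] * 3))
```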
Abstract:
Fusion techniques have received considerable attention for achieving lower error rates with biometrics. A fused classifier architecture based on sequential integration of multi-instance and multi-sample fusion schemes allows a controlled trade-off between false alarms and false rejects. Expressions for each type of error for the fused system have previously been derived for the case of statistically independent classifier decisions. It is shown in this paper that the performance of this architecture can be improved by modelling the correlation between classifier decisions. Correlation modelling also enables better tuning of the fusion model parameters 'N' (the number of classifiers) and 'M' (the number of attempts/samples), and facilitates the determination of error bounds for false rejects and false accepts for each specific user. The error trade-off performance of the architecture is evaluated using HMM-based speaker verification on utterances of individual digits. Results show that performance is improved for the case of favourably correlated decisions. The architecture investigated here is directly applicable to speaker verification from spoken digit strings, such as credit card numbers, in telephone or voice-over-internet-protocol based applications. It is also applicable to other biometric modalities such as fingerprints and handwriting samples.
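For reference, under the simplifying assumption of statistically independent decisions, with an 'AND' rule across the N instance classifiers followed by an 'OR' rule across the M attempts (one plausible configuration of such an architecture, shown only as an illustrative special case rather than the correlated-decision expressions of the paper), the fused error rates take the familiar form

\[
P_{FA}^{\mathrm{fused}} = 1 - \Bigl(1 - \prod_{i=1}^{N} p_{FA,i}\Bigr)^{M},
\qquad
P_{FR}^{\mathrm{fused}} = \Bigl(1 - \prod_{i=1}^{N} \bigl(1 - p_{FR,i}\bigr)\Bigr)^{M},
\]

so increasing N suppresses false accepts while increasing M suppresses false rejects, which is the trade-off being tuned.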
Abstract:
Fusion techniques have received considerable attention for achieving performance improvement with biometrics. While a multi-sample fusion architecture reduces false rejects, it also increases false accepts. This impact on performance also depends on the nature of subsequent attempts, i.e., whether they are random or adaptive. Expressions for error rates are presented and experimentally evaluated in this work by considering the multi-sample fusion architecture for text-dependent speaker verification using HMM-based digit-dependent speaker models. Analysis incorporating correlation modelling demonstrates that the use of adaptive samples improves overall fusion performance compared to randomly repeated samples. For a text-dependent speaker verification system using digit strings, sequential decision fusion of seven instances with three random samples is shown to reduce the overall error of the verification system by 26%, which can be further reduced by 6% with adaptive samples. This analysis, novel in its treatment of random and adaptive multiple presentations within a sequential fused-decision architecture, is also applicable to other biometric modalities such as fingerprints and handwriting samples.
Abstract:
Statistical dependence between classifier decisions is often shown to improve performance over statistically independent decisions. Although the solution for favourable dependence between two classifier decisions has been derived, a theoretical analysis of the general case of 'n' client and impostor decision fusion has not been presented before. This paper presents the expressions developed for favourable dependence of multi-instance and multi-sample fusion schemes that employ 'AND' and 'OR' rules. The expressions are experimentally evaluated by considering the proposed architecture for text-dependent speaker verification using HMM-based digit-dependent speaker models. The improvement in fusion performance is found to be higher when digit combinations with favourable client and impostor decisions are used for speaker verification. The total error rate of 20% for fusion of independent decisions is reduced to 2.1% for fusion of decisions that are favourable for both clients and impostors. The expressions developed here are also applicable to other biometric modalities, such as fingerprints and handwriting samples, for reliable identity verification.
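As a concrete illustration of how decision correlation enters such expressions (a two-classifier textbook identity, not the general 'n'-decision result derived in the paper): if two classifiers accept with probabilities $p_1$ and $p_2$ and their binary decisions have correlation coefficient $\rho$, then

\[
P(A_1 \cap A_2) = p_1 p_2 + \rho \sqrt{p_1(1-p_1)\,p_2(1-p_2)},
\qquad
P(A_1 \cup A_2) = p_1 + p_2 - P(A_1 \cap A_2),
\]

so negative correlation drives the 'AND'-rule acceptance rate below the independent value $p_1 p_2$ and the 'OR'-rule rate above $p_1 + p_2 - p_1 p_2$; which sign of correlation is favourable therefore depends on the rule and on whether the decisions concern clients or impostors.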
Abstract:
The quick detection of abrupt (unknown) parameter changes in an observed hidden Markov model (HMM) is important in several applications. Motivated by the recent application of relative entropy concepts to the robust sequential change detection problem (and the related model selection problem), this paper proposes a sequential unknown-change detection algorithm based on a relative entropy based HMM parameter estimator. Our proposed approach is able to overcome the lack of knowledge of post-change parameters, and is shown to achieve performance similar to that of the popular cumulative sum (CUSUM) algorithm (which requires knowledge of the post-change parameter values) when examined, on both simulated and real data, in a vision-based aircraft manoeuvre detection problem.
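For comparison, the CUSUM baseline mentioned above accumulates log-likelihood ratios between the assumed post-change and pre-change observation densities and raises an alarm when the statistic crosses a threshold; the sketch below is the standard i.i.d. Gaussian form with made-up parameters, not the paper's relative-entropy HMM estimator.

```python
import numpy as np
from scipy.stats import norm

def cusum(observations, pre, post, h):
    """pre/post are (mean, std) of the assumed pre/post-change Gaussians."""
    g = 0.0
    for k, y in enumerate(observations):
        llr = norm.logpdf(y, *post) - norm.logpdf(y, *pre)
        g = max(0.0, g + llr)        # reset at zero, accumulate evidence
        if g > h:
            return k                 # alarm index
    return None

# Synthetic data: change from N(0, 1) to N(1, 1) at sample 200.
rng = np.random.default_rng(2)
ys = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 100)])
print(cusum(ys, pre=(0, 1), post=(1, 1), h=5.0))
```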
Abstract:
Here we present a sequential Monte Carlo approach to Bayesian sequential design that incorporates model uncertainty. The methodology is demonstrated through the development and implementation of two model discrimination utilities, mutual information and total separation, but it can also be applied more generally if one has different experimental aims. A sequential Monte Carlo algorithm is run for each rival model (in parallel) and provides a convenient estimate of the marginal likelihood of each model given the data, which can be used for model comparison and in the evaluation of utility functions. A major benefit of this approach is that it requires very little problem-specific tuning and is also computationally efficient when compared to full Markov chain Monte Carlo approaches. This research is motivated by applications in drug development and chemical engineering.
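As an illustration of how a mutual-information model-discrimination utility can be evaluated for discrete data (assumed notation and a binary outcome, not the paper's implementation): given each rival model's predictive probability of the outcome at a candidate design and the current model probabilities, the utility is the mutual information between the model indicator and the future observation.

```python
import numpy as np

def mutual_information_utility(pred_p1, model_probs):
    """pred_p1[k] = p(y=1 | d, M_k); model_probs[k] = current P(M_k)."""
    pred_p1 = np.asarray(pred_p1)
    pi = np.asarray(model_probs)
    u = 0.0
    for p_y_given_m in (pred_p1, 1.0 - pred_p1):       # y = 1, then y = 0
        marginal = np.sum(pi * p_y_given_m)            # p(y | d)
        term = pi * p_y_given_m * np.log(p_y_given_m / marginal)
        u += np.nansum(term)                           # treat 0 * log 0 as 0
    return u

# Two rival models that predict very different outcomes at this design point.
print(mutual_information_utility(pred_p1=[0.9, 0.2], model_probs=[0.5, 0.5]))
```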
Abstract:
In this paper, we review the sequential slotted amplify-decode-and-forward (SADF) protocol with half-duplex single-antenna terminals and evaluate its performance in terms of pairwise error probability (PEP). We obtain the PEP upper bound of the protocol and show that its achievable diversity order is two for an arbitrary number of relay terminals. To attain this maximum achievable diversity order, we propose a simple precoder that is easy to implement with any number of relay terminals and transmission slots. Simulation results show that the proposed precoder achieves the maximum achievable diversity order and offers BER performance similar to that of some existing precoders.
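For context, the diversity order quoted here is the standard high-SNR slope of the error probability (the general definition, not a bound specific to this paper),

\[
d = -\lim_{\mathrm{SNR}\to\infty} \frac{\log P_e(\mathrm{SNR})}{\log \mathrm{SNR}},
\]

so a diversity order of two means the pairwise error probability decays roughly as $\mathrm{SNR}^{-2}$ at high SNR.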
Abstract:
In this paper, we propose a novel relay ordering and scheduling strategy for the sequential slotted amplify-and-forward (SAF) protocol and evaluate its performance in terms of the diversity-multiplexing trade-off (DMT). The relays between the source and destination are grouped into two relay clusters based on their respective locations. The proposed strategy achieves partial relay isolation and decreases the decoding complexity at the destination. We show that the DMT upper bound of sequential-SAF with the proposed strategy outperforms other amplify-and-forward protocols and is more practical than the relay isolation assumption made in the original paper [1]. Simulation results show that the sequential-SAF protocol with the proposed strategy has better outage performance than existing AF and non-cooperative protocols in the high-SNR regime.
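For reference, the diversity-multiplexing trade-off (DMT) used here is the standard formulation (a general definition rather than this paper's derived bound): a scheme whose rate scales as $R = r \log \mathrm{SNR}$ has multiplexing gain $r$ and diversity gain

\[
d(r) = -\lim_{\mathrm{SNR}\to\infty} \frac{\log P_{\mathrm{out}}(r \log \mathrm{SNR})}{\log \mathrm{SNR}},
\]

where $P_{\mathrm{out}}$ denotes the outage probability at that rate.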
Abstract:
In this paper, we propose a novel slotted hybrid cooperative protocol, named the sequential slotted amplify-decode-and-forward (SADF) protocol, and evaluate its performance in terms of the diversity-multiplexing trade-off (DMT). The relays between the source and destination are divided into two different groups, and each relay either amplifies or decodes the received signal. We first compute the optimal DMT of the proposed protocol under the assumption of perfect decoding at the DF relays. We then derive the closed-form DMT expression of the proposed sequential-SADF and obtain the proximity gain bound for achieving the optimal DMT. With this bound, we then find the distance ratio required to achieve the optimal DMT performance. Simulation results show that the proposed protocol with high proximity gain outperforms other cooperative communication protocols in the high-SNR regime.