922 results for Application specific algorithm


Relevance:

30.00%

Publisher:

Abstract:

Facilitating the visual exploration of scientific data has received increasing attention in the past decade or so. Especially in life-science-related application areas, the amount of available data has grown at a breathtaking pace. In this paper we describe an approach that allows for visual inspection of large collections of molecular compounds. In contrast to classical visualizations of such spaces, we incorporate a specific focus of analysis, for example the outcome of a biological experiment such as high-throughput screening results. The presented method uses this experimental data to select molecular fragments of the underlying molecules that have interesting properties, and uses the resulting space to generate a two-dimensional map based on a singular value decomposition algorithm and a self-organizing map. Experiments on real datasets show that the resulting visual landscape groups molecules of similar chemical properties in densely connected regions.
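A minimal sketch of the general pipeline described above, not the authors' implementation: it assumes a binary molecule-by-fragment occurrence matrix (generated at random here), reduces it with scikit-learn's TruncatedSVD, and places the reduced vectors on a small hand-rolled self-organizing map. Grid size, learning schedule and data are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 200)).astype(float)   # molecules x fragments (toy occurrence matrix)

reduced = TruncatedSVD(n_components=10, random_state=0).fit_transform(X)

# Plain self-organizing map: a grid of weight vectors pulled toward the samples.
grid_w, grid_h, dim = 20, 20, reduced.shape[1]
weights = rng.normal(size=(grid_w, grid_h, dim))
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h), indexing="ij"), axis=-1)

for t in range(2000):
    x = reduced[rng.integers(len(reduced))]
    bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)), (grid_w, grid_h))
    lr = 0.5 * np.exp(-t / 1000)                     # decaying learning rate
    sigma = 3.0 * np.exp(-t / 1000)                  # shrinking neighbourhood radius
    influence = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
    weights += lr * influence[..., None] * (x - weights)

# Each molecule is placed on the grid cell of its best-matching unit.
cells = [np.unravel_index(np.argmin(np.linalg.norm(weights - v, axis=-1)), (grid_w, grid_h))
         for v in reduced]
print(len(set(cells)), "occupied map cells for", len(reduced), "molecules")
```

With a real fragment matrix, molecules sharing fragments end up in nearby cells, which is the landscape effect the abstract describes.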

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present a distributed computing framework for problems characterized by a highly irregular search tree, for which no reliable workload prediction is available. The framework is based on a peer-to-peer computing environment and dynamic load balancing. The system allows for dynamic resource aggregation, does not depend on any specific meta-computing middleware and is suitable for large-scale, multi-domain, heterogeneous environments such as computational Grids. Dynamic load balancing policies based on global statistics are known to provide optimal load balancing performance, while randomized techniques provide high scalability. The proposed method combines both advantages by adopting distributed job-pools and a randomized polling technique. The framework has been successfully adopted in a parallel search algorithm for subgraph mining and evaluated on a molecular compounds dataset. The parallel application has shown good scalability and close-to-linear speedup in a distributed network of workstations.
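A toy, single-process illustration of the load-balancing idea (distributed job-pools plus randomized polling), not the paper's peer-to-peer Grid framework: each worker keeps a local pool and, when it runs dry, polls a randomly chosen peer and takes half of its jobs. Worker count, job count and the termination heuristic are invented for the sketch.

```python
import random
import threading
from collections import deque

NUM_WORKERS = 4
pools = [deque() for _ in range(NUM_WORKERS)]          # one local job-pool per peer
locks = [threading.Lock() for _ in range(NUM_WORKERS)]
results = []

pools[0].extend(range(100))    # all work starts on one peer, as in an irregular search tree

def worker(me: int) -> None:
    idle_polls = 0
    while idle_polls < 50:                              # crude termination heuristic for the sketch
        with locks[me]:
            job = pools[me].popleft() if pools[me] else None
        if job is None:
            victim = random.randrange(NUM_WORKERS)      # randomized polling of a peer
            stolen = []
            if victim != me:
                with locks[victim]:
                    take = len(pools[victim]) // 2      # transfer half of the victim's pool
                    stolen = [pools[victim].pop() for _ in range(take)]
            if stolen:
                with locks[me]:
                    pools[me].extend(stolen)
            idle_polls += 1
            continue
        idle_polls = 0
        results.append(job * job)                       # stand-in for expanding one search node

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results), "jobs processed by", NUM_WORKERS, "workers")
```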

Relevance:

30.00%

Publisher:

Abstract:

The development of high-throughput techniques ('chip' technology) for measurement of gene expression and gene polymorphisms (genomics), and techniques for measuring global protein expression (proteomics) and metabolite profile (metabolomics), are revolutionising life science research, including research in human nutrition. In particular, the ability to undertake large-scale genotyping and to identify gene polymorphisms that determine risk of chronic disease (candidate genes) could enable definition of an individual's risk at an early age. However, the search for candidate genes has proven to be more complex, and their identification more elusive, than previously thought. This is largely because much of the variability in risk results from interactions between the genome and environmental exposures. Whilst the former is now very well defined via the Human Genome Project, the latter (e.g. diet, toxins, physical activity) are poorly characterised, resulting in an inability to account for their confounding effects in most large-scale candidate gene studies. The polygenic nature of most chronic diseases adds further complexity, requiring very large studies to disentangle the relatively weak impacts of large numbers of potential 'risk' genes. The efficacy of diet as a preventative strategy could also be considerably increased by better information concerning gene polymorphisms that determine variability in responsiveness to specific diet and nutrient changes. Much of the limited available data are based on retrospective genotyping using stored samples from previously conducted intervention trials. Prospective studies are now needed to provide data that can be used as the basis for provision of individualised dietary advice and development of food products that optimise disease prevention. Application of the new technologies in nutrition research offers considerable potential for development of new knowledge and could greatly advance the role of diet as a disease-prevention strategy in the 21st century. Given the potential economic and social benefits offered, funding for research in this area needs greater recognition, and a stronger strategic focus, than is presently the case. Application of genomics in human health offers considerable ethical and societal as well as scientific challenges. Economic determinants of health care provision are more likely to resolve such issues than scientific developments or altruistic concerns for human health.

Relevance:

30.00%

Publisher:

Abstract:

This presentation describes a system for measuring claddings as an example of the many possible advantages to be obtained by applying a personal computer to eddy current testing. A theoretical model and a learning algorithm are integrated into an instrument. They are hosted on the PC and serve to simplify and enhance multiparameter testing. The PC gives additional assistance by simplifying set-up procedures, data logging, etc.

Relevance:

30.00%

Publisher:

Abstract:

The skill of numerical Lagrangian drifter trajectories in three numerical models is assessed by comparing these numerically obtained paths to the trajectories of drifting buoys in the real ocean. The skill assessment is performed using the two-sample Kolmogorov–Smirnov statistical test. To demonstrate the assessment procedure, it is applied to three different models of the Agulhas region. The test can be performed either using crossing positions of one-dimensional sections, in order to test model performance at specific locations, or using the full two-dimensional set of trajectories. The test yields four quantities: a binary decision on model skill, a confidence level that can be used as a measure of goodness-of-fit of the model, a test statistic that can be used to determine the sensitivity of the confidence level, and cumulative distribution functions that aid in the qualitative analysis. The ordering of models by their confidence levels is the same as the ordering based on the qualitative analysis, which suggests that the method is suited for model validation. Only one of the three models, a 1/10° two-way nested regional ocean model, might have skill in the Agulhas region. The other two models, a 1/2° global model and a 1/8° assimilative model, might have skill only on some sections in the region.
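A minimal sketch of the section-crossing variant of the assessment: compare observed and modelled crossing positions on one section with SciPy's two-sample Kolmogorov–Smirnov test. The section, sample sizes and crossing distributions are invented for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
observed_crossings = rng.normal(loc=-32.0, scale=1.5, size=200)   # buoy latitudes at the section (toy data)
modelled_crossings = rng.normal(loc=-32.4, scale=1.2, size=500)   # numerical drifter latitudes (toy data)

statistic, p_value = ks_2samp(observed_crossings, modelled_crossings)
has_skill = p_value > 0.05        # binary skill decision at a chosen significance level
print(f"D = {statistic:.3f}, p = {p_value:.3f}, skill on this section: {has_skill}")
```

The p-value plays the role of the confidence level described above, and the statistic D indicates how sensitive that level is to the two empirical cumulative distributions.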

Relevance:

30.00%

Publisher:

Abstract:

This article assesses the extent to which sampling variation affects findings about Malmquist productivity change derived using data envelopment analysis (DEA), in the first stage by calculating productivity indices and in the second stage by investigating the farm-specific change in productivity. Confidence intervals for the Malmquist indices are constructed using Simar and Wilson's (1999) bootstrapping procedure. The main contribution of this article is to account in the second stage for the information provided by the first-stage bootstrap. The DEA standard errors (SEs) of the Malmquist indices obtained by bootstrapping are employed in an innovative heteroscedastic panel regression, estimated by maximum likelihood. The application is to a sample of 250 Polish farms over the period 1996 to 2000. The confidence interval results suggest that the second half of the 1990s was characterized for Polish farms not so much by productivity regress as by stagnation. As for the determinants of farm productivity change, we find that the integration of the DEA SEs in the second-stage regression is significant in explaining a proportion of the variance in the error term. Although our heteroscedastic regression results differ from those of standard OLS in terms of significance and sign, they are consistent with theory and previous research.
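A hedged sketch of the second-stage idea only: regress farm-level productivity change on a covariate while letting the error variance depend on each farm's first-stage bootstrap SE, estimated by maximum likelihood. The data, variable names and the exponential variance specification are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 250
X = np.column_stack([np.ones(n), rng.normal(size=n)])      # intercept plus one covariate (toy data)
se = rng.uniform(0.05, 0.3, size=n)                         # bootstrap SEs of the Malmquist indices
y = X @ np.array([0.0, 0.1]) + rng.normal(scale=np.exp(-1.5 + 2.0 * se))  # simulated productivity change

def neg_loglik(params):
    beta, a, b = params[:2], params[2], params[3]
    resid = y - X @ beta
    var = np.exp(a + b * se)                                 # heteroscedastic variance driven by the SEs
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

fit = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
print("regression coefficients:", fit.x[:2], "variance parameters:", fit.x[2:])
```

A clearly positive coefficient on the SE term in the variance function is the kind of evidence the abstract points to when it says the DEA SEs explain part of the error variance.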

Relevance:

30.00%

Publisher:

Abstract:

Establishing the mechanisms by which microbes interact with their environment, including eukaryotic hosts, is a major challenge that is essential for the economic utilisation of microbes and their products. Techniques for determining global gene expression profiles of microbes, such as microarray analyses, are often hampered by methodological constraints, particularly the recovery of bacterial transcripts (RNA) from complex mixtures and the rapid degradation of RNA. A pioneering technology that avoids this problem is In Vivo Expression Technology (IVET). IVET is a 'promoter-trapping' methodology that can be used to capture nearly all bacterial promoters (genes) upregulated during a microbe-environment interaction. IVET is especially useful because there is virtually no limit to the type of environment used to select for active microbial promoters (examples to date include soil, an oomycete, and host plants or animals). Furthermore, IVET provides a powerful method to identify genes that are often overlooked during genomic annotation, and has proven to be a flexible technology that can provide even more information than gene expression profiles alone. A derivative of IVET, termed resolvase-IVET (RIVET), can be used to provide spatio-temporal information about environment-specific gene expression. More recently, niche-specific genes captured during an IVET screen have been exploited to identify the regulatory mechanisms controlling their expression. Overall, IVET and its various spin-offs have proven to be a valuable and robust set of tools for analysing microbial gene expression in complex environments and providing new targets for biotechnological development.

Relevance:

30.00%

Publisher:

Abstract:

Many different reagents and methodologies have been utilised for the modification of synthetic and biological macromolecular systems. In addition, an area of intense research at present is the construction of hybrid biosynthetic polymers, composed of biologically active species immobilised on, or complexed with, synthetic polymers. One of the most useful and widely applicable techniques available for functionalisation of macromolecular systems involves indiscriminate carbene insertion processes. The highly reactive and non-specific nature of carbenes has enabled a multitude of macromolecular structures to be functionalised without the need for specialised reagents or additives. The use of diazirines as stable carbene precursors has increased dramatically over the past twenty years, and these reagents are fast becoming the most popular photophores for photoaffinity labelling and for biological applications in which covalent modification of macromolecular structures is the basis for understanding structure-activity relationships. This review reports the synthesis and application of a diverse range of diazirines in macromolecular systems.

Relevance:

30.00%

Publisher:

Abstract:

Liquid chromatography-mass spectrometry (LC-MS) datasets can be compared or combined following chromatographic alignment. Here we describe a simple solution to the specific problem of aligning one LC-MS dataset and one LC-MS/MS dataset, acquired on separate instruments from an enzymatic digest of a protein mixture, using feature extraction and a genetic algorithm. First, the LC-MS dataset is searched within a few ppm of the calculated theoretical masses of peptides confidently identified by LC-MS/MS. A piecewise linear function is then fitted to these matched peptides using a genetic algorithm with a fitness function that is insensitive to incorrect matches but sufficiently flexible to adapt to the discrete shifts common when comparing LC datasets. We demonstrate the utility of this method by aligning ion trap LC-MS/MS data with accurate LC-MS data from an FTICR mass spectrometer and show how hybrid datasets can improve peptide and protein identification by combining the speed of the ion trap with the mass accuracy of the FTICR, similar to using a hybrid ion trap-FTICR instrument. We also show that the high resolving power of FTICR can improve precision and linear dynamic range in quantitative proteomics. The alignment software, msalign, is freely available as open source.
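An illustrative sketch of the alignment idea (not the msalign implementation): fit a piecewise linear retention-time shift between the two runs with a small genetic algorithm whose fitness simply counts identified peptides recovered within a tolerance, so a few incorrect matches cannot dominate the fit. Retention times, knot positions, tolerances and GA settings are all invented.

```python
import numpy as np

rng = np.random.default_rng(3)
t_msms = np.sort(rng.uniform(0, 100, 300))                        # retention times of peptides identified by LC-MS/MS
t_ms = t_msms + 2.0 + 0.03 * t_msms + rng.normal(0, 0.2, 300)     # the same peptides observed in the LC-MS run

knots = np.linspace(0, 100, 6)            # fixed breakpoints of the piecewise linear time shift

def fitness(offsets):
    predicted = t_msms + np.interp(t_msms, knots, offsets)
    return np.sum(np.abs(predicted - t_ms) < 0.5)                 # counts matches: insensitive to a few bad pairs

pop = rng.normal(0, 5, size=(60, len(knots)))                     # population of knot-offset vectors
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]                       # keep the fittest individuals
    pa, pb = rng.integers(0, 20, 40), rng.integers(0, 20, 40)
    mask = rng.random((40, len(knots))) < 0.5                     # uniform crossover between two parents
    children = np.where(mask, parents[pa], parents[pb]) + rng.normal(0, 0.3, size=(40, len(knots)))
    pop = np.vstack([parents, children])                          # elitism plus mutated offspring

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("peptides matched after alignment:", fitness(best), "of", len(t_msms))
```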

Relevance:

30.00%

Publisher:

Abstract:

Climate change is one of the major challenges facing economic systems at the start of the 21st century. Reducing greenhouse gas emissions will require both restructuring the energy supply system (production) and addressing the efficiency and sufficiency of the social uses of energy (consumption). The energy production system is a complicated supply network of interlinked sectors with 'knock-on' effects throughout the economy. End-use energy consumption is governed by complex sets of interdependent cultural, social, psychological and economic variables, driven by shifts in consumer preference and technological development trajectories. To date, few models have been developed for exploring alternative joint energy production-consumption systems. The aim of this work is to propose one such model. This is achieved in a methodologically coherent manner through the integration of qualitative input-output models of production with Bayesian belief network models of consumption at the point of final demand. The resulting integrated framework can be applied either (relatively) quickly and qualitatively to explore alternative energy scenarios, or as a fully developed quantitative model to derive or assess specific energy policy options. The qualitative applications are explored here.
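A minimal numeric sketch of the production side only: a Leontief input-output calculation, x = (I - A)^-1 d, linking sectoral output to final demand, which is the kind of interlinked-sector structure the abstract refers to. The three sectors and coefficients are invented; the paper couples such models qualitatively with Bayesian belief networks of consumption, which is not shown here.

```python
import numpy as np

A = np.array([[0.10, 0.30, 0.05],      # technical coefficients: inputs per unit of output
              [0.20, 0.05, 0.40],      # rows/columns: energy, industry, services (illustrative)
              [0.05, 0.10, 0.05]])
final_demand = np.array([50.0, 120.0, 200.0])

total_output = np.linalg.solve(np.eye(3) - A, final_demand)
print("sectoral output required:", total_output)

# A 'knock-on' effect: raise household energy demand and watch every sector respond.
shifted = np.linalg.solve(np.eye(3) - A, final_demand + np.array([10.0, 0.0, 0.0]))
print("change in sectoral output:", shifted - total_output)
```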

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the SIMULINK implementation of a constrained predictive control algorithm based on quadratic programming and linear state-space models, and its application to a laboratory-scale 3D crane system. The algorithm is compatible with Real-Time Windows Target and, in the case of the crane system, it can be executed with a sampling period of 0.01 s and a prediction horizon of up to 300 samples, using a linear state-space model with 3 inputs, 5 outputs and 13 states.
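A hedged sketch of the core computation: a constrained predictive controller for a linear state-space model, minimising a quadratic tracking cost over a receding horizon subject to input bounds. The toy model, the horizon of 20 samples and the use of a generic SLSQP solver (instead of a dedicated quadratic-programming routine and the SIMULINK setup described above) are simplifications for illustration.

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.01], [0.0, 1.0]])    # toy double-integrator model sampled at 0.01 s
B = np.array([[0.0], [0.01]])
C = np.array([[1.0, 0.0]])
N, rho = 20, 0.01                            # prediction horizon and input weight
u_min, u_max = -1.0, 1.0
reference = 1.0

def cost(u_seq, x0):
    x, J = x0.copy(), 0.0
    for u in u_seq:                          # quadratic in u_seq because the dynamics are linear
        x = A @ x + B.flatten() * u
        y = (C @ x)[0]
        J += (y - reference) ** 2 + rho * u ** 2
    return J

def mpc_step(x0, u_guess):
    res = minimize(cost, u_guess, args=(x0,), method="SLSQP",
                   bounds=[(u_min, u_max)] * N)                   # input constraints
    return res.x[0], res.x                   # apply only the first move (receding horizon)

x, u_warm = np.array([0.0, 0.0]), np.zeros(N)
for _ in range(50):
    u0, u_warm = mpc_step(x, u_warm)
    x = A @ x + B.flatten() * u0
print("controlled output after 50 steps:", (C @ x)[0])
```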

Relevance:

30.00%

Publisher:

Abstract:

Wireless Personal Area Networks (WPANs) offer high data rates suitable for interconnecting high-bandwidth personal consumer devices (Wireless HD streaming, Wireless USB and Bluetooth EDR). ECMA-368 is the Physical (PHY) and Media Access Control (MAC) backbone of many of these wireless devices. WPAN devices tend to operate in an ad hoc network, so it is important to successfully latch onto the network and become part of one of the available piconets. This paper presents a new algorithm for detecting the Packet/Frame Sync (PFS) signal in ECMA-368 to identify piconets and aid symbol timing. The algorithm is based on correlating the received PFS symbols with the expected locally stored symbols over the 24 or 12 PFS symbols, but selecting the likely TFC (time-frequency code) based on the highest statistical mode of the 24 or 12 best correlation results. The results are very favorable, showing an improvement margin on the order of 11.5 dB in reference sensitivity tests between the required performance using this algorithm and the performance of comparable systems.
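A simplified sketch of the detection idea: correlate each received preamble symbol against locally stored candidate sequences and choose the TFC that wins the most per-symbol correlations, i.e. the statistical mode. Random bipolar sequences stand in for the actual ECMA-368 PFS patterns, and the channel is plain additive noise, so this is only a toy analogue of the published algorithm.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
num_tfcs, num_symbols, symbol_len = 4, 24, 128
stored = rng.choice([-1.0, 1.0], size=(num_tfcs, num_symbols, symbol_len))  # locally stored PFS references (toy)

true_tfc = 2
received = stored[true_tfc] + rng.normal(0, 2.0, size=(num_symbols, symbol_len))  # noisy received preamble

votes = []
for k in range(num_symbols):
    corr = [np.dot(received[k], stored[t, k]) for t in range(num_tfcs)]
    votes.append(int(np.argmax(corr)))                   # best-matching TFC for this symbol

detected_tfc, count = Counter(votes).most_common(1)[0]   # statistical mode over the 24 symbols
print(f"detected TFC {detected_tfc} with {count}/{num_symbols} votes (true TFC: {true_tfc})")
```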

Relevance:

30.00%

Publisher:

Abstract:

Boolean-input systems are in common use in the electric industry. Power supplies include such systems, and the power converter is a representative example. For instance, in power electronics, the control variables are the switching ON and OFF of components such as thyristors or transistors. The purpose of this paper is to use a neural network (NN) to control continuous systems with Boolean inputs. The method is based on classification of the system variations associated with input configurations. The classical supervised backpropagation algorithm is used to train the networks. The training of the artificial neural network and the control of Boolean-input systems are presented. The control-system design procedure is implemented on a nonlinear system. We apply these results to control an electrical system composed of an induction machine and its power converter.
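A toy sketch of the classification idea: learn which Boolean switch configuration produced an observed variation of a continuous output, then use the trained network to pick the configuration that yields a desired variation. The one-dimensional 'plant', gains and training setup are invented, and scikit-learn's stochastic-gradient solver stands in for the classical backpropagation training described above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
configs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])        # Boolean input configurations
gains = np.array([-1.0, -0.3, 0.4, 1.1])                     # system variation caused by each config (toy)

# Training set: observed (state, variation) pairs labelled by the configuration that produced them.
labels = rng.integers(0, 4, size=2000)
states = rng.uniform(-1, 1, size=2000)
variations = gains[labels] * 0.1 + rng.normal(0, 0.02, size=2000)
X = np.column_stack([states, variations])

net = MLPClassifier(hidden_layer_sizes=(16,), solver="sgd", learning_rate_init=0.05,
                    max_iter=2000, random_state=0).fit(X, labels)

# Control use: ask the network which switch configuration yields the desired variation.
desired = np.array([[0.0, 0.1]])          # current state 0.0, want the output to rise by 0.1
print("chosen switch configuration:", configs[net.predict(desired)[0]])
```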

Relevance:

30.00%

Publisher:

Abstract:

The Synapsing Variable-Length Crossover (SVLC) algorithm provides a biologically inspired method for performing meaningful crossover between variable-length genomes. In addition to providing a rationale for variable-length crossover, it also provides a genotypic similarity metric for variable-length genomes, enabling standard niche-formation techniques to be used with them. Unlike other variable-length crossover techniques, which treat genomes as rigid, inflexible arrays and select some or all of the crossover points at random, the SVLC algorithm considers genomes to be flexible and chooses non-random crossover points based on the common parental sequence similarity. The SVLC algorithm recurrently "glues", or synapses, homologous genetic sub-sequences together. This is done in such a way that common parental sequences are automatically preserved in the offspring, with only the genetic differences being exchanged or removed, independently of the length of such differences. On a variable-length test problem the SVLC algorithm is shown to outperform current variable-length crossover techniques. The SVLC algorithm is also shown to work in a more realistic application: the evolution of a robot neural network controller.
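A rough illustration of the underlying idea only, not the SVLC algorithm itself: align two variable-length genomes on their common subsequences and exchange the differing stretches, so shared parental material is preserved regardless of how long the differences are. Python's difflib sequence matcher stands in for the synapsing step, and the example genomes are arbitrary.

```python
from difflib import SequenceMatcher
import random

def aligned_crossover(parent_a, parent_b, rng=random):
    # Matching blocks are the shared subsequences; the gaps between them are the differences.
    blocks = SequenceMatcher(a=parent_a, b=parent_b, autojunk=False).get_matching_blocks()
    child_a, child_b = [], []
    prev_a = prev_b = 0
    for m in blocks:                              # the final block is a zero-length sentinel
        gap_a, gap_b = parent_a[prev_a:m.a], parent_b[prev_b:m.b]
        if rng.random() < 0.5:                    # swap or keep each differing stretch
            gap_a, gap_b = gap_b, gap_a
        child_a += gap_a + parent_a[m.a:m.a + m.size]   # shared material is always kept
        child_b += gap_b + parent_b[m.b:m.b + m.size]
        prev_a, prev_b = m.a + m.size, m.b + m.size
    return child_a, child_b

p1 = list("AACCGGTTAA")
p2 = list("AATTTTGGAACC")
print(*map("".join, aligned_crossover(p1, p2)))
```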

Relevance:

30.00%

Publisher:

Abstract:

An analysis of various arithmetic averaging procedures for approximate Riemann solvers is made, with specific emphasis on efficiency and a jump-capturing property. The alternatives discussed are intended for future work, as well as for the more immediate problem of steady, supercritical free-surface flows. Numerical results are shown for two test problems.
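One concrete example of such an averaging procedure, offered purely as illustration and not taken from the paper: the Roe-type average for the one-dimensional shallow-water equations, evaluated at a single jump between invented left and right states.

```python
import numpy as np

g = 9.81
h_left, u_left = 2.0, 1.0          # depth (m) and velocity (m/s) on the left of the jump
h_right, u_right = 0.5, 3.0        # right state (illustrative values)

sqrt_hl, sqrt_hr = np.sqrt(h_left), np.sqrt(h_right)
u_hat = (sqrt_hl * u_left + sqrt_hr * u_right) / (sqrt_hl + sqrt_hr)   # Roe-averaged velocity
c_hat = np.sqrt(0.5 * g * (h_left + h_right))                          # averaged wave celerity

wave_speeds = (u_hat - c_hat, u_hat + c_hat)
print(f"u_hat = {u_hat:.3f} m/s, c_hat = {c_hat:.3f} m/s, wave speeds = {wave_speeds}")
```

The averaged wave speeds are what an approximate Riemann solver uses to build its numerical flux, and the choice of average is exactly what governs the efficiency and jump-capturing behaviour the abstract discusses.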