949 results for Information search – models


Relevance:

30.00%

Publisher:

Abstract:

A study is presented that is aimed at developing techniques suitable for the effective planning and efficient operation of aircraft fleets typical of the air force of a developing country. An important aspect of fleet management, the problem of allocating resources to achieve a prescribed operational effectiveness of the fleet, is considered. For analysis purposes, it is assumed that the planes operate in a single flying-base, repair-depot environment. The perennial problem of resource allocation for fleet and facility buildup that faces planners is modeled and solved as an optimal control problem. These models contain two "policy" variables representing investments in aircraft and repair facilities. The feasibility of decentralized control is explored by assuming that the two policy variables are under the control of two independent decision makers guided by different and often poorly coordinated objectives.
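
The flavor of such a two-control model is easy to sketch. Below is a toy discrete-time simulation, not the paper's formulation: the dynamics, coefficients, and budget constraint are all invented for illustration, with one policy variable buying aircraft and the other building repair capacity that limits how many aircraft are serviceable.

    import numpy as np

    def simulate(u_air, u_repair, T=20):
        """Toy fleet-buildup dynamics with two policy variables:
        per-period investment in aircraft (u_air) and in repair
        facilities (u_repair). All coefficients are illustrative."""
        planes, capacity, effectiveness = 10.0, 5.0, 0.0
        for t in range(T):
            planes += 0.8 * u_air - 0.05 * planes         # purchases minus attrition
            capacity += 0.5 * u_repair                    # repair-facility buildup
            effectiveness += min(planes, 2.0 * capacity)  # serviceable sorties
        return effectiveness

    # Centralized planner: scan splits of a shared budget across the two
    # policy variables; independent decision makers would each fix their
    # own control without this coordination.
    budget = 10.0
    best_u_air = max(np.linspace(0, budget, 21), key=lambda u: simulate(u, budget - u))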


Relevance:

30.00%

Publisher:

Abstract:

In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, such as vector autoregressions, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or gather the information contained in the series into an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on the previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset consists of disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step, I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than those obtained in the first case. This work opens up multiple research directions. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of micro data, one of the major contributions of this thesis, can be useful for imputing missing values and creating flash estimates of macroeconomic indicators (nowcasting).
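
The two-step procedure is simple to sketch. The following minimal illustration (not the thesis code) estimates factors by principal components and then fits a forecasting equation with the factors and an autoregressive term; the function name, the number of factors, and the regression form are assumptions.

    import numpy as np

    def factor_forecast(X, y, n_factors=3, h=1):
        """Step 1: extract principal-component factors from the panel X
        (T x N). Step 2: regress y_{t+h} on the factors and y_t, then
        forecast from the last observation."""
        Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize the panel
        U, s, _ = np.linalg.svd(Z, full_matrices=False)
        F = U[:, :n_factors] * s[:n_factors]            # T x r factor estimates
        R = np.column_stack([np.ones(len(y) - h), F[:-h], y[:-h]])
        beta, *_ = np.linalg.lstsq(R, y[h:], rcond=None)
        return np.concatenate([[1.0], F[-1], [y[-1]]]) @ beta

The relative mean squared forecast error is then the MSFE of these forecasts divided by the MSFE of a univariate autoregression estimated over the same sample.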

Relevance:

30.00%

Publisher:

Abstract:

Background: Signal transduction events often involve transient, yet specific, interactions between structurally conserved protein domains and polypeptide sequences in target proteins. The identification and validation of these associating domains is crucial to understanding the signal transduction pathways that modulate different cellular or developmental processes. Bioinformatics strategies that extract and integrate information from diverse sources have been shown to facilitate experimental design for understanding complex biological events. These methods, primarily based on information from high-throughput experiments, have also led to the identification of new connections, thus providing hypothetical models for cellular events. Such models, in turn, provide a framework for directing experimental efforts to validate the predicted molecular rationale for complex cellular processes. In this context, it is envisaged that the rational design of peptides for protein-peptide binding studies could substantially facilitate experimental strategies to evaluate a predicted interaction. This rational design procedure involves the integration of protein-protein interaction data, gene ontology, physico-chemical calculations, domain-domain interaction data, and information on functional sites or critical residues. Results: Here we describe an integrated approach called "PeptideMine" for the identification of peptides based on specific functional patterns present in the sequence of an interacting protein. This approach, based on sequence searches in the interacting sequence space, has been developed into a webserver, which can be used for the identification and analysis of peptides, peptide homologues, or functional patterns from the interacting sequence space of a protein. To further facilitate experimental validation, the PeptideMine webserver also provides a list of physico-chemical parameters for each peptide, to help determine the feasibility of using the peptide in in vitro biochemical or biophysical studies. Conclusions: The strategy described here involves the integration of data and tools to identify potential interacting partners for a protein and to design criteria for peptides based on desired biochemical properties. Alongside the search for interacting protein sequences using three different search programs, the server also provides the biochemical characteristics of candidate peptides, to prune peptide sequences based on the features best suited for a given experiment. The PeptideMine server is available at http://caps.ncbs.res.in/peptidemine
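
The physico-chemical screening step lends itself to a short sketch. The following illustrates the kind of parameters such a server might report (it is not the PeptideMine code): mean Kyte-Doolittle hydropathy (GRAVY) and a crude net charge near pH 7, used to flag peptides likely to be tractable in vitro.

    # Standard Kyte-Doolittle hydropathy values per residue.
    KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
          'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
          'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
          'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2}

    def gravy(peptide):
        """Mean hydropathy; strongly positive values suggest poor solubility."""
        return sum(KD[a] for a in peptide) / len(peptide)

    def net_charge(peptide):
        """Crude net charge near pH 7: K/R count as +1, D/E as -1."""
        return sum(peptide.count(a) for a in 'KR') - sum(peptide.count(a) for a in 'DE')

    # Keep hydrophilic, charged candidates for biophysical follow-up
    # (the example sequences are invented).
    candidates = ['DEDSSKGG', 'LLIVFAWL']
    keep = [p for p in candidates if gravy(p) < 0 and abs(net_charge(p)) >= 1]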

Relevance:

30.00%

Publisher:

Abstract:

Multiple UAVs are deployed to carry out a search-and-destroy mission in a bounded region. The UAVs have limited sensor range and carry limited resources, which are depleted with use. The UAVs perform a search task to detect targets. When a detected target requires particular types and quantities of resources to be completely destroyed, a team of UAVs, called a coalition, is formed to attack it. The coalition members have to modify their routes to attack the target; in the process, the search task is affected, as the search and destroy tasks are coupled. The performance of the mission is a function of the search and task allocation strategies. Therefore, for a given task allocation strategy, we need to devise search strategies that are efficient. In this paper, we propose three search strategies, namely a random search strategy, a lanes-based search strategy, and a grid-based search strategy, and analyze their performance through Monte Carlo simulations. The results show that the grid-based search strategy performs best, but with high information overhead.
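
Of the three strategies, the grid-based one is the easiest to make concrete. Below is a minimal sketch under assumed parameters (region size, sensor range; the paper's exact scheme may differ): the region is divided into cells matched to the sensor footprint, and a UAV sweeps the cell centres in a boustrophedon (lawnmower) order.

    import numpy as np

    def grid_search_route(width=100.0, height=100.0, sensor_range=5.0):
        """Waypoints covering the region with a lawnmower sweep; cell size
        is set so the sensor footprint covers each visited cell."""
        step = 2 * sensor_range
        xs = np.arange(sensor_range, width, step)
        ys = np.arange(sensor_range, height, step)
        route = []
        for i, y in enumerate(ys):
            row = xs if i % 2 == 0 else xs[::-1]   # alternate sweep direction
            route.extend((float(x), float(y)) for x in row)
        return route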

Relevance:

30.00%

Publisher:

Abstract:

We present a frontier-based algorithm for searching for multiple goals in a fully unknown environment, given only information about the regions where the goals are most likely to be located. Our algorithm chooses an "active goal" from the "active goal list" generated by running a Traveling Salesman Problem (TSP) routine on the given centroid locations of the goal regions. We use the concept of "goal switching", which not only helps in reaching more goals in a given time, but also prevents unnecessary search around goals that are not accessible (surrounded by walls). The simulation study shows that our algorithm outperforms Multi-Heuristic LRTA* (MELRTA*), a significant representative of multiple-goal search approaches in unknown environments, especially in environments with wall-like obstacles.
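
The active-goal bookkeeping can be sketched compactly. The following stand-in (the abstract does not specify which TSP routine is used, so a greedy nearest-neighbour tour is assumed) orders the goal-region centroids; the head of the resulting list is the current active goal, and goal switching amounts to re-selecting from the unreached remainder.

    import math

    def active_goal_list(start, centroids):
        """Greedy nearest-neighbour tour over goal-region centroids."""
        order, pos, remaining = [], start, list(centroids)
        while remaining:
            nxt = min(remaining, key=lambda c: math.dist(pos, c))
            order.append(nxt)
            remaining.remove(nxt)
            pos = nxt
        return order            # order[0] is the current active goal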

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the problem of automated multiagent search in an unknown environment. Autonomous agents equipped with sensors carry out a search operation in a search space, where the uncertainty, or lack of information about the environment, is known a priori as an uncertainty density distribution function. The agents are deployed in the search space to maximize single step search effectiveness. The centroidal Voronoi configuration, which achieves a locally optimal deployment, forms the basis for the proposed sequential deploy and search strategy. It is shown that with the proposed control law the agent trajectories converge in a globally asymptotic manner to the centroidal Voronoi configuration. Simulation experiments are provided to validate the strategy. Note to Practitioners-In this paper, searching an unknown region to gather information about it is modeled as a problem of using search as a means of reducing information uncertainty about the region. Moreover, multiple automated searchers or agents are used to carry out this operation optimally. This problem has many applications in search and surveillance operations using several autonomous UAVs or mobile robots. The concept of agents converging to the centroid of their Voronoi cells, weighted with the uncertainty density, is used to design a search strategy named as sequential deploy and search. Finally, the performance of the strategy is validated using simulations.
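
One deploy step is easy to approximate on sampled points. The sketch below is an illustration, not the paper's control law: with an invented gain and a Monte Carlo approximation of the cells, each agent moves toward the uncertainty-weighted centroid of its Voronoi cell, and iterating drives the team toward a centroidal Voronoi configuration.

    import numpy as np

    def deploy_step(agents, points, density, gain=0.5):
        """agents: (n,2); points: (m,2) samples of the search space;
        density: (m,) uncertainty weights at those samples."""
        # Voronoi partition: assign each sample point to its nearest agent.
        d = np.linalg.norm(points[:, None, :] - agents[None, :, :], axis=2)
        owner = d.argmin(axis=1)
        new = agents.copy()
        for i in range(len(agents)):
            w = density[owner == i]
            if w.sum() > 0:
                centroid = (points[owner == i] * w[:, None]).sum(0) / w.sum()
                new[i] += gain * (centroid - agents[i])   # move toward centroid
        return new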

Relevance:

30.00%

Publisher:

Abstract:

In the absence of interlogs, building docking models is a time-intensive task, involving the generation of a large pool of docking decoys followed by refinement and screening to identify near-native docking solutions. This limits researchers interested in building docking methods to benchmarking only a limited number of protein complexes. We have created a repository called dockYard (http://pallab.serc.iisc.ernet.in/dockYard) that allows modelers interested in protein-protein interactions to access a large volume of information on protein dimers and their interlogs, and to download decoys for building modeling methods. dockYard currently offers four categories of docking decoys: Bound (the native dimer is co-crystallized), Unbound (the individual subunits are crystallized, as well as the target dimer), Variants (match the previous two categories in at least one subunit with 100% sequence identity), and Interlogs (match the previous categories in at least one subunit with >= 90% or >= 50% sequence identity). The web service offers options for full or selective download based on search parameters. Our portal also serves as a repository for modelers who may want to share their decoy sets with the community.

Relevance:

30.00%

Publisher:

Abstract:

An attempt is made to present some challenging problems (mainly to the technically minded researcher) in the development of computational models for certain visual processes which are executed with apparently deceptive ease by the human visual system. However, in the interest of simplicity (and with a non-mathematical audience in mind), the presentation is almost completely devoid of mathematical formalism. Some of the findings in biological vision are presented in order to provoke approaches to their computational modelling. The development of ideas is not complete, and the vast literature on biological and computational vision cannot be reviewed here. A related but rather specific aspect of computational vision (namely, the detection of edges) has been discussed by Zucker, who brings out some of the difficulties experienced in the classical approaches. Space limitations preclude any detailed analysis of even the elementary aspects of information processing in biological vision. However, the main purpose of the present paper is to highlight some of the fascinating problems in the frontier area of mathematically modelling the human visual system.

Relevance:

30.00%

Publisher:

Abstract:

The soil moisture characteristic (SMC) forms an important input to mathematical models of water and solute transport in the unsaturated-soil zone. Owing to their simplicity and ease of use, texture-based regression models are commonly used to estimate the SMC from basic soil properties. In this study, the performances of six such regression models were evaluated on three soils. Moisture characteristics generated by the regression models were statistically compared with the characteristics developed independently from laboratory and in-situ retention data of the soil profiles. Results of the statistical performance evaluation, while providing useful information on the errors involved in estimating the SMC, also highlighted the importance of the nature of the data set underlying the regression models. Among the models evaluated, the one possessing an underlying data set of in-situ measurements was found to be the best estimator of the in-situ SMC for all the soils. Considerable errors arose when a textural model based on laboratory data was used to estimate the field retention characteristics of unsaturated soils.
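
The statistical comparison itself reduces to an error metric once the estimated and measured retention points are aligned on the same suctions. A minimal sketch, with placeholder model names and water contents rather than the study's values:

    import numpy as np

    def rmse(estimated, measured):
        """Root-mean-square error between regression-estimated and
        measured volumetric water contents at matched suctions."""
        e, m = np.asarray(estimated), np.asarray(measured)
        return float(np.sqrt(np.mean((e - m) ** 2)))

    # Rank candidate texture-based models against in-situ retention data.
    models = {'lab-based model': [0.31, 0.24, 0.18],
              'in-situ model':   [0.35, 0.27, 0.21]}
    observed = [0.34, 0.26, 0.20]
    ranking = sorted(models, key=lambda name: rmse(models[name], observed))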

Relevance:

30.00%

Publisher:

Abstract:

An understanding of application I/O access patterns is useful in several situations. First, gaining insight into what applications are doing with their data at a semantic level helps in designing efficient storage systems. Second, it helps create benchmarks that closely mimic realistic application behavior. Third, it enables autonomic systems, as the information obtained can be used to adapt the system in a closed loop. All these use cases require the ability to extract the application-level semantics of I/O operations. Methods such as modifying application code to associate I/O operations with semantic tags are intrusive. It is well known that network file system traces are an important source of information that can be obtained non-intrusively and analyzed either online or offline. These traces are a sequence of primitive file system operations and their parameters. Simple counting, statistical analysis, and deterministic search techniques are inadequate for discovering application-level semantics in the general case, because of the inherent variation and noise in realistic traces. In this paper, we describe a trace analysis methodology based on Profile Hidden Markov Models. We show that the methodology has powerful discriminatory capabilities that enable it to recognize applications based on the patterns in their traces, and to mark out regions in a long trace that encapsulate sets of primitive operations representing higher-level application actions. It is robust enough to work around discrepancies between training and target traces, such as differences in length and interleaving with other operations. We demonstrate the feasibility of recognizing patterns based on a small sampling of the trace, enabling faster trace analysis. Preliminary experiments show that the method is capable of learning accurate profile models on live traces in an online setting. We present a detailed evaluation of this methodology in a UNIX environment using NFS traces of selected commonly used applications, such as compilations, as well as industrial-strength benchmarks such as TPC-C and Postmark, and discuss its capabilities and limitations in the context of the use cases mentioned above.
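
To give the flavour of model-based trace recognition, here is a deliberately simplified stand-in for the paper's Profile HMMs: a Laplace-smoothed first-order transition model over integer-coded primitive operations, with one model trained per application and the highest-scoring model winning. A real Profile HMM additionally uses match/insert/delete states to absorb interleaved noise and length discrepancies.

    import numpy as np

    def train_transitions(traces, n_ops):
        """Smoothed transition probabilities between operation codes;
        traces are lists of integer-coded primitive operations."""
        T = np.ones((n_ops, n_ops))                 # Laplace smoothing
        for tr in traces:
            for a, b in zip(tr, tr[1:]):
                T[a, b] += 1
        return T / T.sum(axis=1, keepdims=True)

    def log_likelihood(trace, T):
        """Score a target trace against one application's model; the
        application whose model scores highest is the recognized one."""
        return float(sum(np.log(T[a, b]) for a, b in zip(trace, trace[1:])))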

Relevance:

30.00%

Publisher:

Abstract:

The experimental realization of quantum information processing in the field of nuclear magnetic resonance (NMR) is well established. The implementation of the conditional phase-shift gate has been a significant step, which has led to the realization of important algorithms such as Grover's search algorithm and the quantum Fourier transform. This gate has so far been implemented in NMR using the coupling evolution method. We demonstrate here the implementation of the conditional phase-shift gate using transition-selective pulses. As an application of the gate, we demonstrate Grover's search algorithm and the quantum Fourier transform by simulations and experiments using transition-selective pulses. (C) 2002 Elsevier Science (USA). All rights reserved.
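
In matrix form the role of the gate is easy to see. The following numpy sketch (a linear-algebra illustration, not the NMR pulse-sequence implementation) uses a conditional phase shift of pi as the Grover oracle marking |11> on two qubits; a single iteration then finds the marked state with certainty.

    import numpy as np

    def cphase(phi):
        """Conditional phase-shift gate: multiplies |11> by exp(i*phi)."""
        return np.diag([1, 1, 1, np.exp(1j * phi)])

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H2 = np.kron(H, H)                          # Hadamard on both qubits

    oracle = cphase(np.pi)                      # flips the sign of |11>
    e00 = np.array([1.0, 0, 0, 0])
    diffusion = H2 @ (2 * np.outer(e00, e00) - np.eye(4)) @ H2

    state = H2 @ e00                            # uniform superposition
    state = diffusion @ (oracle @ state)        # one Grover iteration
    print(np.round(np.abs(state) ** 2, 3))      # -> [0. 0. 0. 1.]: |11> found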