917 results for Charged System Search


Relevance:

30.00%

Publisher:

Abstract:

ACM Computing Classification System (1998): H.3.3, H.5.5, J.5.

Relevance:

30.00%

Publisher:

Abstract:

ACM Computing Classification System (1998): I.2.8, G.1.6.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a Variable Neighbourhood Search (VNS) approach for solving the Maximum Set Splitting Problem (MSSP). The algorithm forms a system of neighbourhoods based on changing the component of an increasing number of elements. An efficient local search procedure swaps the components of pairs of elements and yields a relatively short running time. Numerical experiments are performed on instances known from the literature: minimum hitting set and Steiner triple system instances. The computational results show that the proposed VNS reaches all optimal or best-known solutions in short running times, and the experiments indicate that the VNS compares favorably with other methods previously used for solving the MSSP. ACM Computing Classification System (1998): I.2.8.
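
To make the neighbourhood scheme concrete, the following is a minimal sketch of a VNS for set splitting under simple assumptions: shaking moves k elements to the other component, and the local search here flips single elements (the paper's procedure swaps pairs). The data layout and parameter values are illustrative, not the authors' implementation.

import random

def num_split(sets, assign):
    # Count subsets containing elements from both components.
    return sum(1 for s in sets if len({assign[e] for e in s}) == 2)

def local_search(sets, assign, elements):
    # First-improvement search: flip the component of single elements
    # (an illustrative move; the paper's procedure swaps pairs).
    improved = True
    best = num_split(sets, assign)
    while improved:
        improved = False
        for e in elements:
            assign[e] ^= 1
            val = num_split(sets, assign)
            if val > best:
                best, improved = val, True
            else:
                assign[e] ^= 1          # undo the flip
    return best

def vns_mssp(elements, sets, k_max=5, iters=200, seed=0):
    rng = random.Random(seed)
    assign = {e: rng.randint(0, 1) for e in elements}
    best_val = local_search(sets, assign, elements)
    best = dict(assign)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            cand = dict(best)
            for e in rng.sample(list(elements), k):   # shaking: move k elements
                cand[e] ^= 1
            val = local_search(sets, cand, elements)
            if val > best_val:
                best, best_val, k = cand, val, 1      # improvement: restart neighbourhoods
            else:
                k += 1
    return best, best_val

# Toy instance: elements 0..5 and a small set system.
elements = range(6)
sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
print(vns_mssp(elements, sets))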

Relevance:

30.00%

Publisher:

Abstract:

Since multimedia data, such as images and videos, are far more expressive and informative than ordinary text-based data, people find them more attractive for communication and expression. Additionally, with the rising popularity of social networking tools such as Facebook and Twitter, multimedia information retrieval can no longer be considered a solitary task. Rather, people constantly collaborate with one another while searching and retrieving information. But the very cause of the popularity of multimedia data, namely the large and varied amount of information a single data object can carry, makes their management a challenging task. Multimedia data are commonly represented as multidimensional feature vectors and carry high-level semantic information. These two characteristics make them very different from traditional alpha-numeric data. Thus, trying to manage them with frameworks and rationales designed for primitive alpha-numeric data is inefficient. An index structure is the backbone of any database management system. It has been observed that the index structures present in existing relational database management frameworks cannot handle multimedia data effectively. Thus, in this dissertation, a generalized multidimensional index structure is proposed which accommodates the atypical multidimensional representation and the semantic information carried by different multimedia data seamlessly within a single framework. Additionally, the dissertation investigates the evolving relationships among multimedia data in a collaborative environment and how such information can help to customize the design of the proposed index structure when it is used to manage multimedia data in a shared environment. Extensive experiments were conducted to demonstrate the usability and superior performance of the proposed framework over current state-of-the-art approaches.

Relevance:

30.00%

Publisher:

Abstract:

The Three-Layer distributed mediation architecture, designed by the Secure System Architecture laboratory, employed a layered framework of presence, integration, and homogenization mediators. The architecture does not have any central component that might affect system reliability, and a distributed search technique was adapted in the system to increase its reliability further. An Enhanced Chord-like algorithm (E-Chord) was designed and deployed in the integration layer. The E-Chord is a skip-list algorithm based on a Distributed Hash Table (DHT), a distributed but structured architecture: distributed in the sense that no central unit is required to maintain indexes, and structured in the sense that indexes are distributed over the nodes in a systematic manner. Each node maintains three kinds of routing information: a frequency list, a successor/predecessor list, and a finger table. No node maintains all indexes, and each node knows about some other nodes in the system. These nodes, also called composer mediators, were connected in a P2P fashion. A special composer mediator called a global mediator initiates the keyword-based matching decomposition of the request using the E-Chord. It generates an Integrated Data Structure Graph (IDSG) on the fly, creates association and dependency relations between nodes in the IDSG, and then generates a Global IDSG (GIDSG). The GIDSG is a plan that guides the global mediator in integrating the data. It is also used to stream data from the mediators in the homogenization layer, which are connected to the data sources. The connectors start sending data to the global mediator just after the GIDSG is created and just before the answer is sent to the presence mediator. Using the E-Chord and GIDSG made the mediation system more scalable than using a central global schema repository, since all the composers in the integration layer are capable of handling and routing requests. Also, when a composer fails, it only minimally affects the entire mediation system.
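
The E-Chord routing details are not given in this summary; as a point of reference, the sketch below shows a classic Chord-style finger-table lookup, the kind of DHT routing the description builds on. The 2^i finger spacing, the find_successor routine, and the toy identifier ring are assumptions drawn from standard Chord, not the E-Chord variant itself.

M = 8                      # identifier bits, so IDs live on a ring of size 2**M

def in_interval(x, a, b):
    # True if x lies on the ring strictly between a and b (exclusive).
    return (a < x < b) if a < b else (x > a or x < b)

class Node:
    def __init__(self, node_id, ring):
        self.id = node_id
        self.ring = ring                     # sorted list of live node IDs
        # Finger i points to the first node >= id + 2**i (mod 2**M).
        self.fingers = [self._succ((node_id + 2**i) % 2**M) for i in range(M)]

    def _succ(self, key):
        for n in self.ring:
            if n >= key:
                return n
        return self.ring[0]                  # wrap around the ring

    def closest_preceding(self, key):
        # Walk fingers from farthest to nearest, the standard Chord routing step.
        for f in reversed(self.fingers):
            if in_interval(f, self.id, key):
                return f
        return self.id

def find_successor(nodes, start_id, key):
    # Route a key lookup around the ring until the responsible node is found.
    cur = nodes[start_id]
    while not in_interval(key, cur.id, cur.fingers[0] + 1):
        cur = nodes[cur.closest_preceding(key)]
    return cur.fingers[0]

ring = [5, 20, 60, 120, 200, 240]
nodes = {i: Node(i, ring) for i in ring}
print(find_successor(nodes, 5, 150))   # -> 200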

Relevance:

30.00%

Publisher:

Abstract:

As the Web evolves unexpectedly fast, information grows explosively. Useful resources become more and more difficult to find because of their dynamic and unstructured characteristics. A vertical search engine is designed and implemented for a specific domain: instead of processing the giant volume of miscellaneous information distributed across the Web, it targets identifying relevant information in specific domains or topics and eventually provides users with up-to-date information, highly focused insights, and actionable knowledge representation. As mobile devices become more popular, the nature of search is changing, and acquiring information on a mobile device poses unique requirements on traditional search engines, which will potentially change every feature they used to have. To summarize, users strongly expect search engines that can satisfy their individual information needs, adapt to their current situation, and present highly personalized search results. In my research, the next-generation vertical search engine aims to utilize and enrich existing domain information to close the loop of the vertical search engine system, mutually facilitating knowledge discovery, actionable information extraction, and user interest modeling and recommendation. I investigate three problems in which domain taxonomy plays an important role: taxonomy generation using a vertical search engine, actionable information extraction based on domain taxonomy, and the use of ensemble taxonomy to capture users' interests. As the underlying theory, ultrametrics, dendrograms, and hierarchical clustering are discussed in depth. Methods for taxonomy generation based on my research on hierarchical clustering are developed. The related vertical search engine techniques are applied in the disaster management domain; in particular, three disaster information management systems are developed and presented as real use cases of my research work.
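
The summary cites ultrametrics, dendrograms, and hierarchical clustering as the theoretical basis for taxonomy generation. The sketch below only illustrates that basis: agglomerative clustering over toy term-frequency vectors with SciPy, where cutting the dendrogram at different heights yields the levels of a taxonomy. The documents, the cosine metric, and the average-linkage choice are assumptions, not the dissertation's method.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy term-frequency vectors for six documents (purely illustrative data).
docs = np.array([
    [3, 0, 1, 0],   # columns: "flood", "earthquake", "shelter", "wildfire"
    [2, 0, 2, 0],
    [0, 4, 1, 0],
    [0, 3, 0, 1],
    [0, 0, 1, 3],
    [1, 0, 0, 4],
], dtype=float)

# Average-linkage agglomerative clustering on cosine distance; the resulting
# dendrogram induces an ultrametric over the documents.
Z = linkage(docs, method="average", metric="cosine")

# Cutting the dendrogram at different heights gives coarser or finer taxonomy levels.
for height in (0.2, 0.6):
    labels = fcluster(Z, t=height, criterion="distance")
    print(f"cut at {height}: {labels}")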

Relevance:

30.00%

Publisher:

Abstract:

Humoral and cell surface molecules of the mammalian immune system, grouped into the Immunoglobulin Gene Superfamily, share protein structure and gene sequence homologies with molecules found among diverse phylogenetic groups. In histocompatibility studies, the gorgonian coral Swiftia exserta has recently demonstrated specific alloimmunity with memory (Salter-Cid and Bigger, 1991, Biological Bulletin, Vol. 181). In an attempt to shed light on the origins of this gene family and the evolution of the vertebrate immune response, genomic DNA from Swiftia exserta was isolated, purified, and analyzed by Southern blot hybridization with mouse gene probes corresponding to two molecules of the Immunoglobulin Gene Superfamily: the Thy-1 antigen and the alpha-3 domain of the MHC Class I histocompatibility marker. Hybridizations were conducted under low to non-stringent conditions to allow binding of mismatched homologs that may exist between the mouse gene probes and the Swiftia DNA. Non-specific binding (sequences less than 70% homologous) was removed in the washing steps. The results show that, with the probes selected, the method chosen, and the conditions applied, there is no evidence that sequences of 70% or greater homology to the mouse Thy-1 or MHC Class I alpha-3 genes exist in the Swiftia exserta genome.

Relevance:

30.00%

Publisher:

Abstract:

Dimensional and form inspections are key to the manufacturing and assembly of products. Product verification can involve a number of different measuring instruments operated using their dedicated software. Typically, each of these instruments with its associated software is better suited than others to the verification of a particular, pre-specified quality characteristic of the product. The number of different systems and software applications needed to perform a complete measurement of products and assemblies within a manufacturing organisation is therefore expected to be large, and it becomes even larger as advances in measurement technologies are made. The idea of a universal software application for any instrument still appears to be only a theoretical possibility, so a need for information integration is apparent. In this paper, the design of an information system to consistently manage (store, search, retrieve, secure) measurement results from various instruments and software applications is introduced. The proposed system rests on two main ideas. First, the structures and formats of measurement files are abstracted from the data, so that the complexity of, and incompatibility between, different approaches to measurement data modelling are avoided. Second, the information within a file is enriched with meta-information to facilitate its consistent storage and retrieval. To demonstrate the designed information system, a web application is implemented. © Springer-Verlag Berlin Heidelberg 2010.
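
A minimal sketch of the second idea: wrapping an instrument's measurement file with meta-information so it can be stored and retrieved consistently without touching its native format. All field names and the JSON layout are assumptions for illustration, not the schema proposed in the paper.

import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def wrap_measurement_file(path, instrument, characteristic, operator):
    # Attach descriptive meta-information to a raw measurement file,
    # leaving the instrument-specific file format itself untouched.
    raw = Path(path).read_bytes()
    record = {
        "file_name": Path(path).name,
        "sha256": hashlib.sha256(raw).hexdigest(),   # integrity / de-duplication key
        "instrument": instrument,                    # e.g. a CMM or laser scanner
        "quality_characteristic": characteristic,    # e.g. "flatness", "diameter"
        "operator": operator,
        "stored_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

# Usage (assuming a local result file exists):
# print(wrap_measurement_file("part42_flatness.csv", "CMM-01", "flatness", "j.smith"))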

Relevance:

30.00%

Publisher:

Abstract:

This dissertation explores the complex process of organizational change, applying a behavioral lens to understand change in processes, products, and search behaviors. Chapter 1 examines new practice adoption, exploring factors that predict the extent to which routines are adopted “as designed” within the organization. Using medical record data obtained from the hospital’s Electronic Health Record (EHR) system, I develop a novel measure of the “gap” between the routine “as designed” and the routine “as realized.” I link this to a survey administered to the hospital’s professional staff following the adoption of a new EHR system and find that beliefs about the expected impact of the change shape the fidelity of the adopted practice to its design. This relationship is more pronounced in care units with experienced professionals and less pronounced when the care unit includes departmental leadership. This research offers new insights into the determinants of routine change in organizations, in particular suggesting that the beliefs held by rank-and-file members of an organization are critical in new routine adoption. Chapter 2 explores changes to products, specifically examining culling behaviors in the mobile device industry. Using a panel of quarterly mobile device sales in Germany from 2004 to 2009, this chapter suggests that the organization’s response to performance feedback is conditional on the degree to which decisions are centralized. While much of the research on product exit has pointed to economic drivers or prior experience, the central finding of this chapter, that performance below aspirations decreases the rate of phase-out, suggests that firms seek local solutions when doing poorly, which is consistent with behavioral explanations of organizational action. Chapter 3 uses a novel text analysis approach to examine how the allocation of attention within organizational subunits shapes adaptation, in the form of search behaviors, at Motorola from 1974 to 1997. It develops a theory that links organizational attention to search, and the results suggest a trade-off, for both attentional specialization and attentional coupling, between search scope and depth. Specifically, specialized unit attention to a narrower set of problems increases search scope but reduces search depth; increased attentional coupling also increases search scope at the cost of depth. This novel approach and these findings help clarify extant research on the behavioral outcomes of attention allocation, which has offered mixed results.

Relevance:

30.00%

Publisher:

Abstract:

In this Rapid Communication we demonstrate the applicability of an augmented Gibbs ensemble Monte Carlo approach for the phase behavior determination of model colloidal systems with short-ranged depletion attraction and long-ranged repulsion. This technique allows for a quantitative determination of the phase boundaries and ground states in such systems. We demonstrate that gelation may occur in systems of this type as the result of arrested microphase separation, even when the equilibrium state of the system is characterized by compact microphase structures.
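
For orientation only, the sketch below shows a generic short-range-attraction/long-range-repulsion pair potential and the textbook Gibbs-ensemble particle-transfer acceptance rule. The specific potential form, parameter values, and the "augmented" aspects of the authors' approach are not taken from the paper.

import math

def salr_potential(r, eps=1.0, sigma=1.0, A=0.2, kappa=0.5):
    # Generic short-range attraction + long-range repulsion (SALR) pair potential:
    # a Lennard-Jones-like attraction plus a screened-Coulomb (Yukawa) repulsion.
    # Parameter values are illustrative only.
    lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    yukawa = A * math.exp(-kappa * r) / r
    return lj + yukawa

def transfer_acceptance(beta, dU, N_src, V_src, N_dst, V_dst):
    # Standard Gibbs-ensemble acceptance probability for moving one particle
    # from the source box to the destination box (dU is the total energy change).
    ratio = (N_src * V_dst) / ((N_dst + 1) * V_src) * math.exp(-beta * dU)
    return min(1.0, ratio)

print(salr_potential(1.2))
print(transfer_acceptance(beta=1.0, dU=-0.5, N_src=100, V_src=500.0,
                          N_dst=80, V_dst=500.0))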

Relevance:

30.00%

Publisher:

Abstract:

The text analyzes the impact of the economic crisis on several critical aspects of the National Health System: outcomes, health expenditure, remuneration policy, and privatization through Public-Private Partnership models. Some health outcomes related to social inequalities are worrying. Reducing public health spending has increased the fragility of the health system, reduced the wage income of workers in the sector, and increased heterogeneity between regions. Finally, the evidence indicates that privatization does not mean more efficiency and better governance. Deep reforms are needed to strengthen the National Health System.

Relevance:

30.00%

Publisher:

Abstract:

The blast furnace is the main ironmaking production unit in the world; it converts iron ore, together with coke and hot blast, into liquid iron (hot metal), which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, or burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which dictate the efficiency of the heat transfer and reduction processes. For proper control, the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect its state. However, due to the high temperatures and pressure, hostile atmosphere, and mechanical wear, it is very difficult to measure internal variables. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and make their decisions on the basis of heuristic rules and results from mathematical models. It is particularly difficult to understand the distribution of the burden materials because of the complex behavior of the particulate materials during charging.

The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid the decision-making process in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created for simulating the distribution of the burden material with a bell-less top charging system. The model is fast and can therefore be used by the operators to gain an understanding of the formation of layers for different charging programs. The results were verified by findings from charging experiments using a small-scale charging rig in the laboratory.

A basic gas flow model was developed which utilizes the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace. This combined formulation for gas and burden distribution made it possible to implement a search for the best combination of charging parameters to achieve a target gas temperature distribution. As this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem. It was demonstrated that the method was able to evolve optimal charging programs that fulfilled the target conditions.

Even though the burden distribution model provides information about the layer structure, it neglects some effects which influence the results, such as mixed layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was therefore used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with the results from small-scale experiments. The mixed layer was defined and its voidage was estimated; the mixed layer was found to have about 12% less voidage than layers of the individual burden components.

Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory, and was used to update the coke layer distribution after charging in the mathematical model. In designing this revision, results from DEM simulations and charging experiments for some charging programs were used. The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
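
Because the charging-parameter search described above is discontinuous and non-differentiable, a genetic algorithm was used. The sketch below shows a generic real-coded genetic algorithm of that kind; the number of parameters, the simulate_gas_temperature placeholder, and the target profile are assumptions standing in for the thesis's burden-distribution and gas-flow models.

import random

N_PARAMS = 4            # e.g. chute angles / ring positions of a charging program
TARGET = [900.0, 700.0, 550.0, 450.0]   # illustrative target radial gas temperatures

def simulate_gas_temperature(params):
    # Placeholder for the combined burden-distribution + gas-flow model:
    # it maps normalized charging parameters to a radial gas temperature profile.
    return [400.0 + 600.0 * p for p in params]

def fitness(params):
    # Negative squared deviation from the target profile (higher is better).
    temps = simulate_gas_temperature(params)
    return -sum((t - g) ** 2 for t, g in zip(temps, TARGET))

def evolve(pop_size=40, generations=60, mut_rate=0.2, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(N_PARAMS)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_PARAMS)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut_rate:            # uniform mutation of one gene
                child[rng.randrange(N_PARAMS)] = rng.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))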

Relevance:

30.00%

Publisher:

Abstract:

We build a system to support search and visualization on heterogeneous information networks. We first build our system on a specialized heterogeneous information network, DBLP, aiming to give people, especially computer science researchers, a better understanding of and user experience with academic information networks. Then we extend our system to the Web. Our results are much more intuitive and informative than the simple top-k blue links from traditional search engines, and bring more meaningful structural results with correlated entities. We also investigate the ranking algorithm and show that personalized PageRank and the proposed Hetero-personalized PageRank outperform TF-IDF ranking and a mixture of TF-IDF and authority ranking. Our work opens several directions for future research.
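
The Hetero-personalized PageRank variant is specific to this work; as a baseline illustration, the sketch below implements standard personalized PageRank by power iteration on a toy adjacency matrix. The graph, damping factor, and restart vector are assumptions, not the paper's setup.

import numpy as np

def personalized_pagerank(adj, restart, alpha=0.85, iters=100):
    # Power iteration for personalized PageRank: with probability alpha follow
    # an out-link, otherwise jump back to the preference (restart) distribution.
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                       # avoid division by zero for sink nodes
    P = adj / out                             # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = alpha * r @ P + (1 - alpha) * restart
    return r

# Toy citation-like graph of 4 entities; node 0 is the personalization target.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
restart = np.array([1.0, 0.0, 0.0, 0.0])
print(personalized_pagerank(adj, restart))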

Relevance:

30.00%

Publisher:

Abstract:

The search for patterns or motifs in data represents an area of key interest to many researchers. In this paper we present the Motif Tracking Algorithm (MTA), a novel immune-inspired pattern identification tool that is able to identify unknown motifs that repeat within time series data. The power of the algorithm is derived from its use of a small number of parameters with minimal assumptions. The algorithm searches from a completely neutral perspective that is independent of the data being analysed and the underlying motifs. In this paper the MTA is applied to the search for patterns within sequences of low-level system calls between the Linux kernel and the operating system’s user space. The MTA is able to compress the data found in large system call data sets into a limited number of motifs which summarise that data. The motifs provide a resource from which a profile of executed processes can be built. The potential of these profiles and new implications for security research are highlighted. A higher-level system call language for measuring similarity between patterns of such calls is also suggested.
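
The immune-inspired mechanics of the MTA (tracker generation, matching, proliferation) are not described in this summary. As a simple stand-in for the underlying task, the sketch below counts repeating subsequences in a symbolic system-call trace; the trace, window lengths, and threshold are illustrative assumptions rather than the authors' algorithm.

from collections import Counter

def repeated_motifs(sequence, min_len=2, max_len=4, min_count=2):
    # Return subsequences (motifs) of length min_len..max_len that occur at
    # least min_count times in a symbolic stream such as a system call trace.
    motifs = {}
    for length in range(min_len, max_len + 1):
        windows = Counter(tuple(sequence[i:i + length])
                          for i in range(len(sequence) - length + 1))
        motifs.update({m: c for m, c in windows.items() if c >= min_count})
    return motifs

# Toy trace of Linux system call names (illustrative only).
trace = ["open", "read", "read", "close", "open", "read", "read", "close", "write"]
for motif, count in sorted(repeated_motifs(trace).items(), key=lambda kv: -kv[1]):
    print(count, motif)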