943 results for Search space reduction


Relevance:

40.00%

Publisher:

Abstract:

"October 1959."

Relevance:

40.00%

Publisher:

Abstract:

This paper offers a technique for determining the shortest route for a fire engine to reach the location of a fire under a time-minimisation criterion, using evolutionary modelling. An algorithm realising the technique over both the complete and an optimised search space of candidate solutions is explored. Aspects of forming the goal function and of the method's software implementation are considered. Experimental verification is carried out, and the results of a comparative analysis against expert conclusions are discussed.
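As a rough illustration of the evolutionary approach this abstract describes, the sketch below evolves candidate routes on a toy road network, minimising total travel time. The graph, the weights, and the operators (truncation selection, suffix-regrowth mutation) are illustrative assumptions, not the paper's method.

```python
import random

# Hypothetical road network: travel times between intersections (assumed data).
GRAPH = {
    "A": {"B": 2, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}

def random_route(src, dst, rng):
    """Random simple walk from src toward dst; None on a dead end."""
    route, node = [src], src
    while node != dst:
        choices = [n for n in GRAPH[node] if n not in route]
        if not choices:
            return None
        node = rng.choice(choices)
        route.append(node)
    return route

def travel_time(route):
    return sum(GRAPH[a][b] for a, b in zip(route, route[1:]))

def evolve(src, dst, pop_size=40, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [r for r in (random_route(src, dst, rng) for _ in range(pop_size)) if r]
    for _ in range(generations):
        pop.sort(key=travel_time)
        survivors = pop[: max(2, pop_size // 4)]      # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            parent = rng.choice(survivors)
            cut = rng.randrange(len(parent))          # mutation: regrow a random suffix
            tail = random_route(parent[cut], dst, rng)
            # keep the child only if it is still a simple path
            if tail and len(set(parent[:cut] + tail)) == cut + len(tail):
                children.append(parent[:cut] + tail)
        pop = survivors + children
    return min(pop, key=travel_time)
```

On this toy graph the population quickly converges on the cheapest route A→B→C→D (total time 4).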

Relevance:

40.00%

Publisher:

Abstract:

* The work is supported by RFBR, grant 04-01-00858-a.

Relevance:

40.00%

Publisher:

Abstract:

This work contributes to the development of search engines that self-adapt their size in response to fluctuations in workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. In this paper, we focus on the problem of regrouping the metric-space search index when the number of virtual machines used to run the search engine is modified to reflect changes in workload. We propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. We tested its performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud, while calibrating the results to compensate for the performance fluctuations of the platform. Our experiments show that, when compared with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining a similar search performance.
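The incremental adjustment this abstract refers to can be pictured as moving only the overflow portion of each machine's index groups when the machine count changes, instead of rebuilding every assignment. The sketch below is an illustrative rebalancer over abstract cluster identifiers, not the paper's algorithm.

```python
def rebalance(assignment, new_p):
    """Regroup index clusters onto new_p machines, moving as few as possible.
    `assignment` is a list of per-machine cluster lists (illustrative sketch)."""
    total = sum(len(m) for m in assignment)
    base, extra = divmod(total, new_p)
    targets = [base + (1 if i < extra else 0) for i in range(new_p)]

    machines = [list(m) for m in assignment[:new_p]]
    machines += [[] for _ in range(new_p - len(machines))]

    # Clusters from removed machines go straight into the move pool.
    pool = [c for m in assignment[new_p:] for c in m]
    # Trim overfull machines down to their target size.
    for i, m in enumerate(machines):
        if len(m) > targets[i]:
            pool.extend(m[targets[i]:])
            del m[targets[i]:]
    # Refill underfull machines from the pool.
    for i, m in enumerate(machines):
        while len(m) < targets[i]:
            m.append(pool.pop())
    return machines
```

Shrinking three machines to two, for example, moves only the clusters from the decommissioned machine; the surviving machines keep their existing clusters in place.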

Relevance:

40.00%

Publisher:

Abstract:

This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computer resources to or from the engine. Our solution is to contribute an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in light of changes in the query workload, the ideal number of processors p to keep active at any given time. NGP arises once a change in the number of processors has been determined: it must then be decided which groups of search data will be distributed across the processors. ROP is the problem of how to redistribute this data onto processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, when compared with computing the index from scratch, our NGP solution speeds up the index computation 2–10 times while maintaining similar search performance.
The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size for the search engine. When combined, these algorithms give an adaptive algorithm that adjusts the search engine size under a variable workload.
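A minimal stand-in for a CNP-style controller: react only when utilisation leaves a dead band, and then jump to the smallest processor count that keeps utilisation at or below a high-water mark. The thresholds, per-processor capacity, and bounds below are invented for illustration; the thesis's actual algorithm is not reproduced here.

```python
import math

def next_size(p, load, cap=100.0, high=0.8, low=0.4, p_min=1, p_max=64):
    """Hysteresis controller: keep p unchanged while utilisation stays inside
    [low, high]; otherwise pick the smallest p with utilisation <= high."""
    util = load / (p * cap)
    if util > high or util < low:
        p = math.ceil(load / (high * cap))
    return min(p_max, max(p_min, p))
```

With 4 processors at capacity 100 each, a load of 500 queries/s (125% utilisation) grows the engine to 7 processors; a load of 250 (62.5%) sits inside the band and leaves the size alone.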

Relevance:

30.00%

Publisher:

Abstract:

Digital collections are growing exponentially in size as the information age takes a firm grip on all aspects of society. As a result Information Retrieval (IR) has become an increasingly important area of research. It promises to provide new and more effective ways for users to find information relevant to their search intentions. Document clustering is one of the many tools in the IR toolbox and is far from being perfected. It groups documents that share common features. This grouping allows a user to quickly identify relevant information. If these groups are misleading then valuable information can accidentally be ignored. Therefore, the study and analysis of the quality of document clustering is important. With more and more digital information available, the performance of these algorithms is also of interest. An algorithm with a time complexity of O(n2) can quickly become impractical when clustering a corpus containing millions of documents. Therefore, the investigation of algorithms and data structures to perform clustering in an efficient manner is vital to its success as an IR tool. Document classification is another tool frequently used in the IR field. It predicts categories of new documents based on an existing database of (document, category) pairs. Support Vector Machines (SVM) have been found to be effective when classifying text documents. As the algorithms for classification are both efficient and of high quality, the largest gains can be made from improvements to representation. Document representations are vital for both clustering and classification. Representations exploit the content and structure of documents. Dimensionality reduction can improve the effectiveness of existing representations in terms of quality and run-time performance. Research into these areas is another way to improve the efficiency and quality of clustering and classification results. Evaluating document clustering is a difficult task.
Intrinsic measures of quality such as distortion only indicate how well an algorithm minimised a similarity function in a particular vector space. Intrinsic comparisons are inherently limited by the given representation and are not comparable between different representations. Extrinsic measures of quality compare a clustering solution to a “ground truth” solution. This allows comparison between different approaches. As the “ground truth” is created by humans it can suffer from the fact that not every human interprets a topic in the same manner. Whether a document belongs to a particular topic or not can be subjective.

Relevance:

30.00%

Publisher:

Abstract:

Consider a person searching electronic health records: a search for the term ‘cracked skull’ should return documents that contain the term ‘cranium fracture’. An information retrieval system is required that matches concepts, not just keywords. Furthermore, determining the relevance of a query to a document requires inference – it is not simply a matter of matching concepts. For example, a document containing ‘dialysis machine’ should align with a query for ‘kidney disease’. Collectively we describe this problem as the ‘semantic gap’ – the difference between the raw medical data and the way a human interprets it. This paper presents an approach to semantic search of health records by combining two previous approaches: an ontological approach using the SNOMED CT medical ontology, and a distributional approach using semantic space (vector space) models. Our approach is applied to a specific problem in health informatics: the matching of electronic patient records to clinical trials.
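The ontological half of such an approach can be caricatured as mapping surface terms to shared concept identifiers before matching. The concept table below is a toy stand-in for SNOMED CT; the identifiers and the dialysis-to-renal link are invented for illustration.

```python
# Toy concept table standing in for an ontology such as SNOMED CT.
# All identifiers here are invented; a real system would also infer the
# 'dialysis machine' -> renal-disease link rather than hard-code it.
CONCEPTS = {
    "cracked skull": "C_SKULL_FRACTURE",
    "cranium fracture": "C_SKULL_FRACTURE",
    "kidney disease": "C_RENAL_DISEASE",
    "dialysis machine": "C_RENAL_DISEASE",
}

def to_concepts(text):
    """Map surface terms appearing in the text to concept identifiers."""
    lowered = text.lower()
    return {cid for term, cid in CONCEPTS.items() if term in lowered}

def matches(query, document):
    """A document matches when it shares at least one concept with the query."""
    return bool(to_concepts(query) & to_concepts(document))
```

Under this mapping, a query for ‘cracked skull’ retrieves a record mentioning ‘cranium fracture’, which keyword matching would miss.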

Relevance:

30.00%

Publisher:

Abstract:

The travel and hospitality industry is one which relies especially crucially on word of mouth, both at the level of overall destinations (Australia, Queensland, Brisbane) and at the level of travellers’ individual choices of hotels, restaurants, sights during their trips. The provision of such word-of-mouth information has been revolutionised over the past decade by the rise of community-based Websites which allow their users to share information about their past and future trips and advise one another on what to do or what to avoid during their travels. Indeed, the impact of such user-generated reviews, ratings, and recommendations sites has been such that established commercial travel advisory publishers such as Lonely Planet have experienced a pronounced downturn in sales – unless they have managed to develop their own ways of incorporating user feedback and contributions into their publications. This report examines the overall significance of ratings and recommendation sites to the travel industry, and explores the community, structural, and business models of a selection of relevant ratings and recommendations sites. We identify a range of approaches which are appropriate to the respective target markets and business aims of these organisations, and conclude that there remain significant opportunities for further operators especially if they aim to cater for communities which are not yet appropriately served by specific existing sites. Additionally, we also point to the increasing importance of connecting stand-alone ratings and recommendations sites with general social media spaces like Facebook, Twitter, and LinkedIn, and of providing mobile interfaces which enable users to provide updates and ratings directly from the locations they happen to be visiting.
In this report, we profile the following sites:

* TripAdvisor, the international market leader for travel ratings and recommendations sites, with a membership of some 11 million users;
* IgoUgo, the other leading site in this field, which aims to distinguish itself from the market leader by emphasising the quality of its content;
* Zagat, a long-established publisher of restaurant guides which has translated its crowdsourcing model from the offline to the online world;
* Lonely Planet’s Thorn Tree site, which attempts to respond to the rise of these travel communities by similarly harnessing user-generated content;
* Stayz, which attempts to enhance its accommodation search and booking services by incorporating ratings and reviews functionality;
* BigVillage, an Australian-based site attempting to cater for a particularly discerning niche of travellers;
* Dopplr, which connects travel and social networking in a bid to pursue the lucrative market of frequent and business travellers;
* Foursquare, which builds on its mobile application to generate a steady stream of ‘check-ins’ and recommendations for hospitality and other services around the world;
* Suite 101, which uses a revenue-sharing model to encourage freelance writers to contribute travel writing (amongst other genres of writing); and
* Yelp, the global leader in general user-generated product review and recommendation services.

In combination, these profiles provide an overview of current developments in the travel ratings and recommendations space (and beyond), and offer an outlook for further possibilities. While no doubt affected by the global financial downturn and the reduction in travel that it has caused, travel ratings and recommendations remain important – perhaps even more so if a reduction in disposable income has resulted in consumers becoming more critical and discerning.
The aggregated word of mouth from many tens of thousands of travellers which these sites provide certainly has a substantial influence on their users. Using these sites to research travel options has now become an activity which has spread well beyond the digirati. The same is true also for many other consumer industries, especially where there is a significant variety of different products available – and so, this report may also be read as a case study whose findings are able to be translated, mutatis mutandis, to purchasing decisions from household goods through consumer electronics to automobiles.

Relevance:

30.00%

Publisher:

Abstract:

Performance comparisons between file signatures and inverted files for text retrieval have previously shown several significant shortcomings of file signatures relative to inverted files. The inverted file approach underpins most state-of-the-art search engine algorithms, such as Language and Probabilistic models. It has been widely accepted that traditional file signatures are inferior alternatives to inverted files. This paper describes TopSig, a new approach to the construction of file signatures. Many advances in semantic hashing and dimensionality reduction have been made in recent times, but these have not so far been linked to general-purpose, signature-file-based search engines. This paper introduces a different signature file approach that builds upon and extends these recent advances. We are able to demonstrate significant improvements in the performance of signature-file-based indexing and retrieval, performance that is comparable to that of state-of-the-art inverted file based systems, including Language models and BM25. These findings suggest that file signatures offer a viable alternative to inverted files in suitable settings, and position the file signatures model in the class of Vector Space retrieval models.
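In the general spirit of such signature construction (this is not the published TopSig algorithm), a sketch: each term gets a deterministic pseudo-random ±1 vector, a document sums its term vectors, and the sum is binarised into a bit signature; documents are then compared by Hamming distance.

```python
import hashlib

def term_vector(term, bits=64):
    """Deterministic pseudo-random +/-1 vector for a term, seeded by hashing."""
    h = hashlib.sha256(term.encode()).digest()
    return [1 if (h[i // 8] >> (i % 8)) & 1 else -1 for i in range(bits)]

def signature(text, bits=64):
    """Sum the term vectors of a document, then binarise the sum."""
    acc = [0] * bits
    for term in text.lower().split():
        for i, v in enumerate(term_vector(term, bits)):
            acc[i] += v
    return [1 if a > 0 else 0 for a in acc]

def hamming(a, b):
    """Signature comparison: number of differing bit positions."""
    return sum(x != y for x, y in zip(a, b))
```

Because the construction is bag-of-words, documents containing the same terms in any order produce identical signatures, and similar documents tend to land close in Hamming space.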

Relevance:

30.00%

Publisher:

Abstract:

There is worldwide interest in reducing aircraft emissions. The difficulty of reducing emissions, including water vapour, carbon dioxide (CO2) and oxides of nitrogen (NOx), stems mainly from the fact that a commercial aircraft is usually designed for a particular optimal cruise altitude but may be requested or required to operate at different altitudes and speeds to achieve a desired or commanded flight plan, resulting in increased emissions. This is a multidisciplinary problem with multiple trade-offs, such as optimising engine efficiency and minimising fuel burn and emissions, while maintaining aircraft separation and air safety. This project presents the coupling of an advanced optimisation technique with mathematical models and algorithms for aircraft emission reduction through flight optimisation. Numerical results show that the method is able to capture a set of useful trade-offs between aircraft range and NOx, and between mission fuel consumption and NOx. In addition, alternative cruise operating conditions, including Mach number and altitude, that produce minimum NOx and CO2 (minimum mission fuel weight) are suggested.
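Capturing a set of trade-offs, as described above, amounts to extracting the non-dominated (Pareto) set of candidate designs. A minimal sketch for two minimisation objectives such as fuel and NOx; the sample points are invented for illustration.

```python
def pareto_front(points):
    """Return the non-dominated points when minimising every objective.
    A point is dominated if some other point is <= in all objectives.
    Assumes distinct objective tuples."""
    return [p for p in points
            if not any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                       for q in points)]
```

For candidate operating points (fuel, NOx) = (1,5), (2,2), (5,1), (3,3), (4,4), the points (3,3) and (4,4) are dominated by (2,2), and the remaining three form the trade-off front.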

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an analysis of the stream cipher Mixer, a bit-based cipher with structural components similar to the well-known Grain cipher and the LILI family of keystream generators. Mixer uses a 128-bit key and a 64-bit IV to initialise a 217-bit internal state. The analysis is focused on the initialisation function of Mixer and shows that there exist multiple key-IV pairs which, after initialisation, produce the same initial state, and consequently will generate the same keystream. Furthermore, if the number of iterations of the state update function performed during initialisation is increased, then the number of distinct initial states that can be obtained decreases. It is also shown that there exist some distinct initial states which produce the same keystream, resulting in a further reduction of the effective key space.
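The shrinking-state effect reported here can be reproduced with any non-injective update function: iterating it during initialisation collapses the set of reachable initial states. The toy 16-state update below is purely illustrative and is not Mixer's state update.

```python
def update(state, size=16):
    """Toy non-injective state update (illustrative; not Mixer's function).
    Doubling mod 16 discards the high bit, so every output has two preimages."""
    return (state * 2) % size

def reachable(iterations, size=16):
    """Distinct internal states still reachable after iterating the update
    over all possible starting states."""
    states = set(range(size))
    for _ in range(iterations):
        states = {update(s, size) for s in states}
    return states
```

Each extra initialisation iteration halves the reachable set here (16, 8, 4, ...), mirroring the paper's observation that more initialisation iterations yield fewer distinct initial states.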

Relevance:

30.00%

Publisher:

Abstract:

Cities have long held a fascination for people – as they grow and develop, there is a desire to know and understand the intricate interplay of elements that makes cities ‘live’. In part, this is a need for ever greater efficiency in urban centres, yet the underlying quest is for a sustainable urban form. In order to make sense of the complex entities that we recognise cities to be, they have been compared to buildings, organisms and, more recently, machines. The search for better and more elegant urban centres is hardly new: healthier and more efficient settlements were the aim of Modernism’s rational sub-division of functions, which has been translated into horizontal distribution through zoning, or vertical organisation through high-rise developments. However, both of these approaches have been found to be unsustainable, as too many resources are required to maintain this kind of urbanisation, and the social consequences of either horizontal or vertical isolation must also be considered. From being absolute consumers of resources, of energy and of technology, cities need to change, to become sustainable in order to be more resilient and more efficient in supporting culture and society as well as the economy. Our urban centres need to be re-imagined, re-conceptualised and re-defined to match our changing society. One approach is to re-examine the compartmentalised, mono-functional approach of urban Modernism and to begin to investigate cities as ecologies, where every element supports and incorporates another, fulfilling more than just one function. This manner of seeing the city suggests a framework to guide the re-mixing of urban settlements. Beginning to understand the relationships between supporting elements and the nature of the connecting ‘web’ offers an invitation to investigate the often ignored, remnant spaces of cities.
This ‘negative space’ is the residual from which space and place are carved out in the contemporary city, providing the link between elements of urban settlement. Like all successful ecosystems, cities need to evolve and change over time in order to respond effectively to different lifestyles, developments in culture and society, and environmental challenges. This paper investigates the role that negative space could play in the reorganisation of the re-mixed city. The space ‘in-between’ is analysed as an opportunity for infill development or re-development which provides the urban settlement with the variety that is a pre-requisite for ecosystem resilience. An analysis of the urban form is suggested as an empirical tool to map the opportunities already present in the urban environment, and negative space is evaluated as a key element in achieving a positive development able to distribute diverse environmental and social facilities in the city.

Relevance:

30.00%

Publisher:

Abstract:

Capacity reduction programmes, in the form of buybacks or decommissioning, have had relatively widespread application in fisheries in the US, Europe and Australia. A common criticism of such programmes is that they remove the least efficient vessels first, resulting in an increase in average efficiency of the remaining fleet, which tends to increase the effective fishing power of the remaining fleet. In this paper, the effects of a buyback programme on average technical efficiency in Australia’s Northern Prawn Fishery are examined using a multi-output production function approach with an explicit inefficiency model. As expected, the results indicate that average efficiency of the remaining vessels was generally greater than that of the removed vessels. Further, there was some evidence of an increase in average scale efficiency in the fleet as the remaining vessels were closer, on average, to the optimal scale. Key factors affecting technical efficiency included company structure and the number of vessels fishing. In regard to fleet size, our model suggests positive externalities associated with more boats fishing at any point in time (due to information sharing and reduced search costs), but also negative externalities due to crowding, with the latter effect dominating the former. Hence, the buyback resulted in a net increase in the individual efficiency of the remaining vessels due to reduced crowding, as well as raising average efficiency through removal of less efficient vessels.