483 results for Information search – models


Abstract:

Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As increasingly more material becomes available electronically, the influence of search engines on our lives will continue to grow. This raises the problem of how to determine what information each search engine contains, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, to address this problem. Search engine content analysis is a new development of the traditional information retrieval field of collection selection, which deals with general information repositories. Current research in collection selection relies on full access to the collections or on estimates of collection size, and collection descriptions are often represented as term occurrence statistics. For search engine content analysis, an automatic ontology learning method is developed that trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined to find important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general-purpose Internet search engines in use today. Instead of representing collections as sets of terms, as is common in collection selection, they are represented as sets of subjects, leading to a more robust representation of information and reducing the effects of synonymy. The ontology-based method was compared with ReDDE (Relevant Document Distribution Estimation, a resource selection method), the current state-of-the-art collection selection method, which relies on collection size estimation; the comparison used the standard R-value metric and produced encouraging results. The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo, as well as several specialist search engines such as PubMed and that of the U.S. Department of Agriculture. In conclusion, this research shows that the ontology-based method mitigates the need for collection size estimation.
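To make the subject-based collection representation concrete, the following minimal sketch (with hypothetical engine names and subject counts, not data from the study) shows how collections described as subject distributions, rather than term statistics, could be ranked against a query's subject profile:

```python
# Minimal sketch (hypothetical data and names): ranking search engines by how well
# their subject profiles match a query's subject profile, instead of by term statistics.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse subject-count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical subject profiles obtained by classifying sampled documents
# from each engine against a multilevel subject taxonomy.
engine_profiles = {
    "engine_a": Counter({"medicine": 120, "biology": 80, "chemistry": 30}),
    "engine_b": Counter({"agriculture": 150, "biology": 60, "economics": 40}),
}

def rank_engines(query_profile: Counter):
    """Rank engines by subject-level similarity to the query profile."""
    scores = {name: cosine(profile, query_profile)
              for name, profile in engine_profiles.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    print(rank_engines(Counter({"biology": 2, "medicine": 1})))
```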

Abstract:

Peer-to-peer systems are widely used on the Internet, yet most peer-to-peer information systems still lack important features such as cross-language Information Retrieval (IR) and collection selection/fusion. Cross-language IR is a state-of-the-art research area in the IR research community that has not yet been deployed in real-world IR systems. It allows a query to be issued in one language while documents are retrieved in other languages. In a typical peer-to-peer environment, users come from multiple countries and their collections are in multiple languages, so cross-language IR can help users find documents more easily. For example, many Chinese researchers search for research papers in both Chinese and English; with cross-language IR they can issue one query in Chinese and retrieve documents in both languages. The Out-Of-Vocabulary (OOV) problem is one of the key research areas in cross-language information retrieval. In recent years, web mining has been shown to be an effective approach to solving this problem; however, how to extract Multiword Lexical Units (MLUs) from web content and how to select the correct translations from the extracted candidate MLUs remain two difficult problems in web-mining-based automated translation approaches. Discovering resource descriptions and merging results obtained from remote search engines are two key issues in distributed information retrieval. In uncooperative environments, query-based sampling and normalized-score-based merging strategies are well-known approaches to these problems, but they consider only the content of the remote database and not the retrieval performance of the remote search engine. This thesis presents research on building a peer-to-peer IR system with cross-language IR and an advanced collection profiling technique for result fusion. In particular, the thesis first presents a new Chinese term measurement and a new Chinese MLU extraction process that work well on small corpora, together with an approach for selecting MLUs more accurately. It then proposes a collection profiling strategy that can discover not only the content of a collection but also the retrieval performance of the remote search engine. Based on collection profiling, a web-based query classification method and two collection fusion approaches are developed and presented. Our experiments show that the proposed strategies are effective for merging results in uncooperative peer-to-peer environments, where an uncooperative environment is defined as one in which each peer is autonomous: peers are willing to share documents but do not share collection statistics, which is typical of peer-to-peer IR settings. Finally, these approaches are combined to build a secure peer-to-peer multilingual IR system in which peers cooperate through X.509 certificates and email.
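As a concrete illustration of the baseline named above, the following minimal sketch (hypothetical peers, documents and scores) shows normalized-score merging of result lists returned by autonomous peers, with an optional per-peer weight standing in for a collection profile that also reflects retrieval performance:

```python
# Minimal sketch (hypothetical data structures): normalized-score merging of result
# lists returned by autonomous peers in an uncooperative environment.

def normalise(results):
    """Min-max normalise the raw scores of one peer's result list."""
    scores = [score for _doc, score in results]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0
    return [(doc, (score - lo) / span) for doc, score in results]

def merge(peer_results, weights=None):
    """Merge several peers' result lists into one ranking.

    peer_results: {peer_id: [(doc_id, raw_score), ...]}
    weights:      optional {peer_id: weight}, e.g. derived from a collection
                  profile that reflects the peer's estimated retrieval performance.
    """
    merged = {}
    for peer, results in peer_results.items():
        w = (weights or {}).get(peer, 1.0)
        for doc, score in normalise(results):
            merged[doc] = max(merged.get(doc, 0.0), w * score)
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    print(merge({
        "peer_cn": [("doc1", 12.0), ("doc2", 7.5)],
        "peer_en": [("doc3", 0.9), ("doc1", 0.4)],
    }, weights={"peer_cn": 1.0, "peer_en": 0.8}))
```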

Abstract:

The Smart State initiative requires both improved education and training, particularly in technical fields, and entrepreneurship to commercialise new ideas. In this study, we propose an entrepreneurial intentions model as a guide to examining the educational choices and entrepreneurial intentions of first-year university students, focusing on the effect of role models. A survey of over 1000 first-year university students revealed that the most enterprising students were choosing to study in the disciplines of information technology and business, economics and law, or were selecting dual degree programs that include business. The role models most often identified for the choice of field of study were parents, followed by teachers and peers, with females identifying more role models than males. For entrepreneurship, students' role models were parents and peers, followed by famous persons and teachers. Males and females identified similar numbers of role models, but males found starting a business more desirable and more feasible, and reported higher entrepreneurial intention. The implications of these findings for Smart State policy are discussed.

Abstract:

“SOH see significant benefit in digitising its drawings and operation and maintenance manuals. Since SOH do not currently have digital models of the Opera House structure or other components, there is an opportunity for this national case study to promote the application of Digital Facility Modelling using standardized Building Information Models (BIM).” The digital modelling element of this project examined the potential of building information models for facility management, focusing on the following areas:
• the re-usability of building information for FM purposes
• BIM as an integrated information model for facility management
• extendibility of the BIM to cope with business-specific requirements
• commercial facility management software using standardised building information models
• the ability to add (organisation-specific) intelligence to the model
• a roadmap for SOH to adopt BIM for FM

The project has established that BIM (building information modelling) is an appropriate and potentially beneficial technology for the storage of integrated building, maintenance and management data for SOH. Based on the attributes of a BIM, several advantages can be envisioned: consistency in the data, intelligence in the model, multiple representations, and a source of information for intelligent programs and intelligent queries. The IFC open building exchange standard specification provides comprehensive support for asset and facility management functions, and offers new management, collaboration and procurement relationships based on the sharing of intelligent building data. The major advantages of using an open standard are that information can be read and manipulated by any compliant software, user “lock-in” to proprietary solutions is reduced, third-party software can be the “best of breed” to suit the process and scope at hand, standardised BIM solutions consider the wider implications of information exchange beyond the scope of any particular vendor, information can be archived as ASCII files, and data quality can be enhanced because a single source of users’ information improves accuracy, correctness, currency, completeness and relevance.

SOH’s current building standards have been successfully drafted for a BIM environment and are confidently expected to be fully developed when BIM is adopted operationally by SOH. There have been remarkably few technical difficulties in converting the House’s existing conventions and standards to the new model-based environment, which demonstrates that the IFC model represents world practice for building data representation and management (see Sydney Opera House FM Exemplar Project Report Number 2005-001-C-3, Open Specification for BIM: Sydney Opera House Case Study). The availability of FM applications based on BIM is in its infancy, but focused systems are already in operation internationally and show excellent prospects for implementation at SOH. In addition to the generic benefits of standardised BIM described above, the following FM-specific advantages can be expected from this new integrated facilities management environment: faster and more effective processes, controlled whole-life costs and environmental data, better customer service, a common operational picture for current and strategic planning, visual decision-making, and a total ownership cost model.

Tests with partial BIM data provided by several of SOH’s current consultants show that the creation of a complete SOH model is realistic, but subject to resolution of compliance and detailed functional support by the participating software applications. The showcase has successfully demonstrated that IFC-based exchange is possible with several common BIM-based applications through the creation of a new partial model of the building. The data exchanged were geometrically accurate (the SOH building structure includes some of the most complex building elements) and support rich information describing the types of objects, together with their properties and relationships.

Abstract:

Facility managers have to acquire, integrate, edit and update diverse facility information, ranging from building element and fabric data to operational costs, contract types, room allocations, logistics and maintenance. With the advent of standardized Building Information Models (BIM) such as the Industry Foundation Classes (IFC), new opportunities are available for facility managers to manage their FM data. The use of IFC supports data interoperability between different software systems, including the use of operational data in facility management systems. Besides enabling the re-use of building data, the Building Information Model can serve as an information framework for storing and retrieving FM-related data. Several BIM-driven FM systems, including IFC-compliant ones, are currently available. These systems have the potential not only to manage primary data more effectively but also to offer practical tools for detailed monitoring and analysis of facility performance that can underpin innovative and more cost-effective management of complex facilities.
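As an illustration of retrieving FM-relevant element data from an IFC-based model, the sketch below assumes the open-source ifcopenshell library and a placeholder file name; it is not taken from any particular FM system discussed here:

```python
# Minimal sketch (hypothetical file name; assumes the open-source ifcopenshell
# library): pulling building element data out of an IFC model so it could feed
# an FM register or database.
import ifcopenshell

def list_elements(ifc_path: str, ifc_class: str = "IfcWall"):
    """Return (GlobalId, Name) pairs for every element of the given IFC class."""
    model = ifcopenshell.open(ifc_path)          # parse the IFC (STEP) file
    return [(e.GlobalId, e.Name) for e in model.by_type(ifc_class)]

if __name__ == "__main__":
    # "building.ifc" is a placeholder path, not part of the original report.
    for global_id, name in list_elements("building.ifc", "IfcDoor"):
        print(global_id, name)
```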

Abstract:

The field of research training (for students and supervisors) is becoming more heavily regulated by the Federal Government. At the same time, quality improvement imperatives are requiring staff across the University to have better access to information and knowledge about a wider range of activities each year. Within the Creative Industries Faculty at the Queensland University of Technology (QUT), the training provided to academic and research staff is organised differently and individually. This session will involve discussion of the dichotomies found in this differentiated approach to staff training, and begin a search for best practice through interaction and input from the audience.

Abstract:

In this computational study we investigate the role of turbulence in ideal axisymmetric vortex breakdown. A pipe geometry with a slight constriction near the inlet is used to stabilise the location of the breakdown within the computed domain. Eddy-viscosity and differential Reynolds stress models are used to model the turbulence, and changes in upstream turbulence levels and in the flow Reynolds and swirl numbers are considered. The computed solutions are monitored for indications of different breakdown flow configurations, and trends in vortex breakdown under turbulent flow conditions are identified and discussed.

Abstract:

In May 2005, a research team began to investigate whether designing and implementing a whole-of-government information licensing framework was possible. This framework was needed to administer copyright in information produced by the government and to deal properly with privately owned copyright on which government works often rely. The outcome so far is the design of the Government Information Licensing Framework (GILF) and its gradual uptake within a number of Commonwealth and State government agencies. However, licensing is part of a larger issue in managing public sector information (PSI), and it has important parallels with the management of libraries and public archives. Among other things, managing the retention and supply of PSI requires an ability to search for and locate information, an ability to give the public legal access to that information, and an ability to administer charges for supplying information where these are required by law. The aim here is to provide a summary overview of pricing principles as they relate to the supply of PSI.

Abstract:

With increasingly complex engineering assets and tight economic requirements, asset reliability becomes more crucial in Engineering Asset Management (EAM). Improving the reliability of systems has always been a major aim of EAM. Reliability assessment using degradation data has become a significant approach to evaluating the reliability and safety of critical systems, as degradation data often provide more information than failure time data for assessing reliability and predicting the remnant life of systems. In general, degradation is the reduction in the performance, reliability and life span of assets, and many failure mechanisms can be traced to an underlying degradation process. Degradation is a stochastic process and can therefore be modelled using several approaches; degradation modelling techniques have accordingly generated a great amount of research in the reliability field. While degradation models play a significant role in reliability analysis, there are few review papers on the topic. This paper presents a review of the existing literature on commonly used degradation models in reliability analysis. Current research and developments in degradation models are reviewed and summarised, and the models are synthesised and classified into groups. Additionally, the paper attempts to identify the merits, limitations and applications of each model, and it outlines potential applications of these degradation models in asset health and reliability prediction.
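As a concrete example of one commonly used model family, the sketch below (with illustrative parameters only, not drawn from the review) simulates a Wiener-process degradation model and estimates the mean time at which degradation first crosses a failure threshold:

```python
# Minimal sketch (illustrative parameters only): simulating a Wiener-process
# degradation model and estimating time-to-failure by Monte Carlo.
import random

def simulate_path(drift=0.5, volatility=0.2, dt=1.0, threshold=10.0, max_steps=1000):
    """Return the step at which cumulative degradation first exceeds the threshold."""
    level = 0.0
    for step in range(1, max_steps + 1):
        # Wiener process with drift: increments are independent Gaussians.
        level += drift * dt + volatility * random.gauss(0.0, dt ** 0.5)
        if level >= threshold:
            return step
    return max_steps

def mean_life(n_paths=5000):
    """Monte Carlo estimate of mean time-to-failure under the assumed model."""
    return sum(simulate_path() for _ in range(n_paths)) / n_paths

if __name__ == "__main__":
    random.seed(42)
    print(f"Estimated mean time-to-failure: {mean_life():.1f} steps")
```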

Abstract:

A configurable process model provides a consolidated view of a family of business processes. It promotes the reuse of proven practices by providing analysts with a generic modelling artifact from which to derive individual process models. Unfortunately, the scope of existing notations for configurable process modelling is restricted, thus hindering their applicability. Specifically, these notations focus on capturing tasks and control-flow dependencies, neglecting equally important ingredients of business processes such as data and resources. This research fills this gap by proposing a configurable process modelling notation incorporating features for capturing resources, data and physical objects involved in the performance of tasks. The proposal has been implemented in a toolset that assists analysts during the configuration phase and guarantees the correctness of the resulting process models. The approach has been validated by means of a case study from the film industry.

Abstract:

The study will cross-fertilise Information Systems (IS) and Services Marketing ideas by reconceptualising the information system as a service (ISaaS). It addresses known limitations of arguably the two most significant dependent variables in these disciplines: Information System Success (or IS-Impact) and Service Quality. Planned efforts to synthesise analogous conceptions across these disciplines are expected to force a deeper theoretical understanding of the broad notions of success, quality, value and satisfaction and their interrelations. The aims of this research are to: (1) yield a conceptually superior and more extensively validated IS success measurement model, and (2) develop and operationalise a more rigorously validated Service Quality measurement model, while extending the ‘service’ notion to ‘operational computer-based information systems in organisations’. In developing the new models the study will address contemporary validation issues.

Abstract:

Evidence-based Practice (EBP) has recently emerged as a topic of discussion amongst professionals within the library and information services (LIS) industry. Simply stated, EBP is the process of using formal research skills and methods to assist in decision making and establishing best practice. The emerging interest in EBP within the library context serves to remind the library profession that research skills and methods can help ensure that the library industry remains current and relevant in changing times. The LIS sector faces ongoing challenges in terms of the expectation that financial and human resources will be managed efficiently, particularly if library budgets are reduced and accountability to the principal stakeholders is increased. Library managers are charged with the responsibility to deliver relevant and cost effective services, in an environment characterised by rapidly changing models of information provision, information access and user behaviours. Consequently they are called upon not only to justify the services they provide, or plan to introduce, but also to measure the effectiveness of these services and to evaluate the impact on the communities they serve. The imperative for innovation in and enhancements to library practice is accompanied by the need for a strong understanding of the processes of review, measurement, assessment and evaluation. In 2001 the Centre for Information Research was commissioned by the Chartered Institute of Library and Information Professionals (CILIP) in the UK to conduct an examination into the research landscape for library and information science. The examination concluded that research is “important for the LIS [library and information science] domain in a number of ways” (McNicol & Nankivell, 2001, p.77). At the professional level, research can inform practice, assist in the future planning of the profession, raise the profile of the discipline, and indeed the reputation and standing of the library and information service itself. At the personal level, research can “broaden horizons and offer individuals development opportunities” (McNicol & Nankivell, 2001, p.77). The study recommended that “research should be promoted as a valuable professional activity for practitioners to engage in” (McNicol & Nankivell, 2001, p.82). This chapter will consider the role of EBP within the library profession. A brief review of key literature in the area is provided. The review considers issues of definition and terminology, highlights the importance of research in professional practice and outlines the research approaches that underpin EBP. The chapter concludes with a consideration of the specific application of EBP within the dynamic and evolving field of information literacy (IL).

Abstract:

The aims of this chapter are twofold. First, we show how experiments grounded in nonlinear dynamical systems theory can provide insights into the interconnectedness of different information sources for action, including the amount of information emphasised in conventional models of cognition and action in sport and the nature of perceptual information typically emphasised in the ecological approach. Second, we show how, by examining the interconnectedness of these information sources, one can study the emergence of novel tactical solutions in sport and design experiments in which tactical/decisional creativity can be observed. Within this approach it is proposed that perceptual and affective information can be manipulated during practice so that the athlete's cognitive and action systems are transposed to a meta-stable dynamical performance region where the creation of novel action information may reside.

Abstract:

Query reformulation is a key user behavior during Web search. Our research goal is to develop predictive models of query reformulation during Web searching. This article reports results from a study in which we automatically classified the query-reformulation patterns of 964,780 Web searching sessions, comprising 1,523,072 queries, in order to predict the next query reformulation. We employed an n-gram modeling approach to describe the probability of users transitioning from one query-reformulation state to another and to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction, coverage of the dataset, and complexity of the possible pattern set. The results show that Reformulation and Assistance account for approximately 45% of all query reformulations, and that the first- and second-order models provide the best predictability: between 28% and 40% overall, and higher than 70% for some patterns. The implication is that the n-gram approach can be used to improve searching systems and searching assistance.
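As an illustration of the approach, the sketch below (with hypothetical session data and state labels) builds a first-order model over query-reformulation states, estimating the probability of moving from one state to the next and predicting the most likely next reformulation:

```python
# Minimal sketch (hypothetical session data): a first-order model over
# query-reformulation states, estimating P(next state | current state)
# from session logs and predicting the most likely next reformulation.
from collections import Counter, defaultdict

def train_first_order(sessions):
    """sessions: iterable of state sequences, e.g. ["New", "Reformulation", ...]."""
    transitions = defaultdict(Counter)
    for states in sessions:
        for current, nxt in zip(states, states[1:]):
            transitions[current][nxt] += 1
    # Normalise counts into conditional probabilities.
    return {s: {t: c / sum(counts.values()) for t, c in counts.items()}
            for s, counts in transitions.items()}

def predict_next(model, current_state):
    """Return the most probable next state, or None for an unseen state."""
    dist = model.get(current_state)
    return max(dist, key=dist.get) if dist else None

if __name__ == "__main__":
    logs = [
        ["New", "Reformulation", "Assistance", "Reformulation"],
        ["New", "Assistance", "Reformulation"],
        ["New", "Reformulation", "Reformulation"],
    ]
    model = train_first_order(logs)
    print(predict_next(model, "Reformulation"))
```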