861 results for personal information management model


Relevance: 100.00%

Publisher:

Abstract:

The recent emergence of intelligent agent technology and advances in information gathering are important steps toward efficiently managing and using the vast amount of information now available on the Web to make informed decisions. Many problems remain to be overcome, however, before information gathering research can deliver the relevant information that end users require. Good decisions cannot be made without sufficient, timely, and correct information. Traditionally it is said that knowledge is power; nowadays, sufficient, timely, and correct information is power, so gathering relevant information to meet user needs is the crucial step in making good decisions. The ideal goal of information gathering is to obtain only the information that users need (no more and no less). However, the volume of available information, the diversity of its formats, its inherent uncertainties, and its distribution across many locations (e.g., the World Wide Web) hinder the process of gathering the right information to meet user needs. Two fundamental issues affect the efficiency of information gathering: mismatch and overload. Mismatch means that some information that meets user needs has not been gathered (it is missed), whereas overload means that some gathered information is not what users need.

Traditional information retrieval has developed considerably over the past twenty years, and the introduction of the Web has changed people's perception of it. The task of information retrieval is usually considered to be leading the user to those documents that are relevant to his or her information needs; the complementary function is to filter out irrelevant documents (information filtering). Research into traditional information retrieval has produced many retrieval models and techniques for representing documents and queries. Nowadays, however, information is becoming highly distributed and increasingly difficult to gather, and user information needs contain many uncertainties. These factors motivate research into agent-based information gathering. In such systems, intelligent agents obtain commitments from their users and act on the users' behalf to gather the required information; their intelligence, autonomy, and distribution allow them to retrieve relevant information from highly distributed, uncertain environments. Current research on agent-based information gathering systems is divided into single-agent and multi-agent gathering systems. In both areas, open problems remain before such systems can retrieve uncertain information effectively from highly distributed environments.

The aim of this thesis is to develop a theoretical framework for intelligent agents that gather information from the Web, integrating the areas of information retrieval and intelligent agents. The specific contributions are an information filtering model for single-agent systems and a dynamic belief model for information fusion in multi-agent systems.

The research results are supported by the construction of real information gathering agents (e.g., a Job Agent) that help users gather useful information stored in Web sites. In this framework, information gathering agents can describe (or learn) user information needs and act like users to retrieve, filter, and/or fuse information. A rough set based information filtering model is developed to address the overload problem. The new approach allows users to describe their information needs over a user concept space rather than a document space, and it views a user information need as a rough set over the document space. Rough set decision theory is used to classify new documents into three regions: a positive region, a boundary region, and a negative region. Two experiments verify this model and show that the rough set based model provides an efficient approach to the overload problem.

A dynamic belief model for information fusion in multi-agent environments is also developed. The model has polynomial time complexity, and the fusion results are proven to be belief (mass) functions. Using this model, a collection fusion algorithm for information gathering agents is presented. The difficult case for this research arises when collections may be used by more than one agent; the algorithm uses cooperation between agents to solve this problem in distributed information retrieval systems. This thesis thus presents solutions to theoretical problems in agent-based information gathering systems, including information filtering models, agent belief modelling, and collection fusion. It also presents solutions to some of the technical problems in agent-based information systems, such as document classification, the architecture of agent-based information gathering systems, and decision making in multi-agent environments. Such information gathering agents will gather relevant information from highly distributed, uncertain environments.
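The three-region classification at the core of the filtering model can be illustrated with a short sketch. The following Python fragment is a hypothetical illustration only (the thesis does not publish code, and the scoring function and thresholds are invented): it scores a document against a weighted user concept space and routes it to the positive, boundary, or negative region in the spirit of rough set decision theory.

# Minimal sketch of rough-set-style three-region document filtering.
# Hypothetical thresholds and scoring; not the thesis's actual implementation.

def score(document_terms: set[str], concept_space: dict[str, float]) -> float:
    """Score a document as the share of user concept weight it covers."""
    total = sum(concept_space.values())
    hit = sum(w for term, w in concept_space.items() if term in document_terms)
    return hit / total if total else 0.0

def classify(document_terms: set[str], concept_space: dict[str, float],
             lower: float = 0.3, upper: float = 0.7) -> str:
    """Three-way decision: accept, defer, or reject a document."""
    s = score(document_terms, concept_space)
    if s >= upper:
        return "positive"   # confidently relevant: deliver to the user
    if s <= lower:
        return "negative"   # confidently irrelevant: discard
    return "boundary"       # uncertain: defer for further evidence

# Example: a user concept space with weighted concepts.
concepts = {"agent": 0.4, "information": 0.3, "filtering": 0.3}
print(classify({"agent", "information", "systems"}, concepts))  # -> "positive"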

Relevance: 100.00%

Publisher:

Abstract:

This paper presents a series of empirical case studies to discuss the impacts of economic globalisation on the development of performing arts organisations in Vietnam (Hanoi Youth Theatre and Vietnam National Symphony Orchestra) and Australia (Melbourne Theatre Company and Sydney Symphony Orchestra), focusing on how the Vietnamese organisations have adapted to these changes. The paper also identifies cultural policy implications for the development of the sector and for arts management training in Vietnam, so that the sector (and, more importantly, the artists) may fully benefit from the open market context. The findings indicate that Vietnamese performing arts organisations have attempted to adapt to the new market context while struggling to balance artistic quality, freedom and financial viability in the new socialist regime. The Australian case studies offer a relevant management model for Vietnamese arts management practice and training.

Relevance: 100.00%

Publisher:

Abstract:

The Gippsland Lakes region in eastern Victoria is a partially flushed coastal lake system within a diverse catchment of rural and urban communities. Pressure from lakeside development, recurring blue-green algal blooms, declining fisheries, and sedimentation and infilling of the ocean entrance has driven several decades of focussed studies and routine monitoring programs, along with a variety of engineering and management solutions. A recent review recommended that these disparate studies be brought together into a coordinated monitoring network that could improve both spatial and temporal coverage, develop a capacity to trigger responsive investigations, and serve the needs of system management. Through a series of partnerships, an integrated network was developed that comprises event and baseline monitoring of catchment loads, local meteorological forcing, and an array of water quality sampling sites within the lakes system. Most of these sites incorporate real-time telemetry that provides up-to-the-minute information to stakeholders via a web-based information management system, along with vital operational status to technical system management.
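As a rough illustration of the capacity to trigger responsive investigations described above, the sketch below checks an incoming telemetered water quality reading against alert thresholds; the field names and threshold values are hypothetical, as the paper does not describe its software.

# Hypothetical sketch: threshold checks on real-time telemetry readings.
from dataclasses import dataclass

@dataclass
class Reading:
    site: str
    parameter: str   # e.g. "chlorophyll_a_ug_L", "turbidity_ntu"
    value: float

# Illustrative alert thresholds, not values from the monitoring program.
THRESHOLDS = {"chlorophyll_a_ug_L": 20.0, "turbidity_ntu": 50.0}

def check(reading: Reading) -> bool:
    """Return True if the reading should trigger a responsive investigation."""
    limit = THRESHOLDS.get(reading.parameter)
    return limit is not None and reading.value > limit

r = Reading(site="Lake King", parameter="chlorophyll_a_ug_L", value=35.2)
if check(r):
    print(f"ALERT {r.site}: {r.parameter}={r.value} exceeds threshold")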

Relevance: 100.00%

Publisher:

Abstract:

In autonomously managed distributed systems for collaboration, provenance can facilitate the reuse of interchanged information, the repetition of successful experiments, and the provision of evidence for trust mechanisms that certain information existed at a certain time during the collaboration. In this paper, we propose a domain-independent information provenance architecture for open collaborative distributed systems. The proposed system uses XML for interchanging information and RDF to track information provenance. The use of XML and RDF also ensures that information is universally acceptable, even among heterogeneous nodes. Our proposed information provenance model can work with any operating system or workflow.
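To make the XML-plus-RDF pairing concrete, here is a small sketch using the Python rdflib library. The paper does not specify a vocabulary or toolkit, so the namespace and property names below are invented for illustration: the graph records that one interchanged XML document was derived from another, by which agent, and when, the kind of statement that can later serve as evidence that information existed at a certain time.

# Hedged sketch: recording provenance of interchanged XML documents as RDF.
# The vocabulary (ex:derivedFrom, ex:generatedAt, ex:generatedBy) is hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/provenance#")

g = Graph()
doc_v2 = URIRef("http://nodeA.example.org/docs/report-v2.xml")
doc_v1 = URIRef("http://nodeB.example.org/docs/report-v1.xml")

g.add((doc_v2, EX.derivedFrom, doc_v1))
g.add((doc_v2, EX.generatedAt,
       Literal("2009-06-01T12:00:00", datatype=XSD.dateTime)))
g.add((doc_v2, EX.generatedBy, URIRef("http://nodeA.example.org/agents/alice")))

print(g.serialize(format="turtle"))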

Relevance: 100.00%

Publisher:

Abstract:

Many parties involved in the construction industry are convinced of the importance of electronic commerce (EC) for improving business processes, cutting costs, and providing comprehensive information. Currently, however, the application of EC is relatively limited and ineffective, and existing systems are often non-interoperable, which creates problems for the stakeholders in construction projects. This paper analyses EC application systems and develops a system that allows the parties to collaborate and share information effectively. Four information flows are summarized based on a literature survey and field investigation, and an information integration model for EC is built using IFC and XML. The model should provide a useful reference for promoting the development of EC in the Chinese construction industry.
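The paper does not include code; purely as an illustration of the IFC-plus-XML pairing, the sketch below serializes a simplified IFC-style entity (a hypothetical wall record) to XML with Python's standard library, the kind of neutral representation that lets otherwise non-interoperable EC systems exchange project data.

# Hypothetical sketch: wrapping a simplified IFC-style entity in XML for exchange.
import xml.etree.ElementTree as ET

def wall_to_xml(global_id: str, name: str, length_mm: float) -> str:
    """Serialize a simplified IfcWall-like record as an XML fragment."""
    wall = ET.Element("IfcWall", GlobalId=global_id)
    ET.SubElement(wall, "Name").text = name
    ET.SubElement(wall, "Length", unit="mm").text = str(length_mm)
    return ET.tostring(wall, encoding="unicode")

print(wall_to_xml("2O2Fr$t4X7Zf8NOew3FLOH", "External wall 01", 3600.0))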

Relevance: 100.00%

Publisher:

Abstract:

Purpose – This paper develops a new decomposition method for housing market variations to analyse the housing dynamics of the eight Australian capital cities.
Design/methodology/approach – This study reviews prior research on analysing housing market variations and classifies the previous methods into four main models. On this basis, the study develops a new decomposition of the variations into regional information, home-market information and time information. The panel data regression method, unit root test and F-test are adopted to construct the model and interpret the housing market variations of the Australian capital cities.
Findings – This paper suggests that Australian home-market information has the same elasticity with respect to housing market variations across cities and time. In contrast, the elasticities of the regional information differ, although similarities exist between the west and north of Australia and between the south and east of Australia. The contribution of the time information varies over the observation period, although similarities are found in certain periods.
Originality/value – This paper introduces variation decomposition into the research on housing market variations and develops a model based on this new decomposition method.
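The abstract does not give the model's functional form. As a hedged illustration only, a panel decomposition of the kind described, with regional, home-market and time components, might be written as:

% Illustrative specification; the paper's actual model is not given.
% y_{it}: housing market variation in city i at time t
% r_i: regional information (city effect); tau_t: time information
% h_{it}: home-market information with a common elasticity beta
y_{it} = r_i + \beta\, h_{it} + \tau_t + \varepsilon_{it}

Here a common beta across i and t mirrors the finding that home-market information has the same elasticity across cities and time, while the r_i and tau_t terms carry the city-specific and period-specific effects.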

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we exploit the discrete Coxian distribution and propose a novel form of stochastic model, termed the Coxian hidden semi-Markov model (Cox-HSMM), and apply it to the task of recognising activities of daily living (ADLs) in a smart house environment. The use of the Coxian has several advantages over traditional parameterisations (e.g., multinomial or continuous distributions), including the small number of free parameters needed, its computational efficiency, and the existence of closed-form solutions. To further enrich the model for real-world applications, we also address the problem of handling missing observations in the proposed Cox-HSMM. In the domain of ADLs, we emphasise the importance of duration information and model it via the Cox-HSMM. Our experimental results show the superiority of the Cox-HSMM over the standard HMM in all cases. They further show that outstanding recognition accuracy can be achieved with a relatively small number of Coxian phases, making the Cox-HSMM particularly suitable for recognising ADLs whose movement trajectories are typically very long.
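The paper does not include code. As a hedged sketch of what the discrete Coxian buys, the fragment below samples a state duration as the time to absorption in a small left-to-right phase chain; the parameter values are illustrative. Note that only one stay probability and one absorption probability per phase are needed, reflecting the small number of free parameters the abstract mentions.

# Hedged sketch of sampling a duration from a discrete Coxian distribution:
# time to absorption in a left-to-right Markov chain of phases.
# Parameter values are illustrative, not from the paper.
import random

def sample_coxian(stay, absorb):
    """stay[i]: prob of remaining in phase i for another step;
    absorb[i]: prob of absorbing (ending) when leaving phase i."""
    duration, phase = 0, 0
    while True:
        duration += 1                      # spend one time step in this phase
        if random.random() < stay[phase]:
            continue                       # remain in the same phase
        if phase == len(stay) - 1 or random.random() < absorb[phase]:
            return duration                # chain absorbs: duration complete
        phase += 1                         # otherwise advance to the next phase

# Three illustrative phases: long, medium, short expected stays.
durations = [sample_coxian([0.9, 0.7, 0.5], [0.2, 0.3, 1.0]) for _ in range(5)]
print(durations)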

Relevance: 100.00%

Publisher:

Abstract:

Electronic service marketplaces (ESMs) have become major exchange platforms for the online outsourcing of various services – especially software development – to providers. Provider profiles on ESMs contain extensive information about providers' activities and transactions and are a main source of information for customers; such profile information significantly facilitates relationship development between customers and providers. The existing literature has focused on the impact of provider ratings but has not yet investigated the impact of the other available profile information. Building on the integrated information response model, this study investigates how information presented by providers, as well as information provided by the ESM itself, influences providers' business outcomes. Based on data collected from one of the major ESMs, we find that profile information indeed has a significant impact on the business outcomes of providers.

Relevance: 100.00%

Publisher:

Abstract:

Farmers need information at all stages of the farming life cycle to make optimal decisions. The required information includes not only prior knowledge but also real-time (dynamic) information such as market prices and current production levels. Some valuable information needed by farmers is produced by government organizations but is available in different locations and in different formats. Although the farmer is the most important stakeholder in agriculture, there has been little effort to provide essential information to farmers on a real-time basis. This lack of information creates many difficulties for farmers, as they are unable to make correct decisions about their farming activities. Through field studies we identified the information required by farmers at various stages of the farming cycle and the official sources where this information is available. We then developed an information flow model that connects these information sources to farmers' information needs. Based on these findings, we are now developing a mobile phone based information system to deliver the required information to farmers in real time.
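As a purely illustrative sketch (the paper does not publish its model as code, and the stages, needs and sources below are invented examples), the information flow model can be pictured as a mapping from farming stage to information needs and their official sources:

# Hypothetical sketch of an information flow model: stage -> need -> source.
INFORMATION_FLOW = {
    "planning": {
        "seed varieties": "Department of Agriculture",
        "expected market demand": "Ministry of Trade",
    },
    "growing": {
        "pest and disease alerts": "Agrarian Services Centre",
        "weather forecasts": "Meteorology Department",
    },
    "selling": {
        "current market prices": "Wholesale market price feed",
    },
}

def needs_for(stage: str) -> dict[str, str]:
    """Return the information needs and sources for one farming stage."""
    return INFORMATION_FLOW.get(stage, {})

for need, source in needs_for("selling").items():
    print(f"{need}: {source}")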

Relevance: 100.00%

Publisher:

Abstract:

This study describes the development of a decision framework to support a multi-disciplinary information and knowledge management model focused on integrated design and delivery solutions for all construction supply chain actors. The framework was developed within the context of two national information technology research projects in Australia. The first study used diffusion theory to explain the barriers and enablers to future adoption of advanced information technology solutions such as building information modelling (BIM). A grounded theory methodology was deployed, and a pathways model for innovative information technology diffusion accommodating diverse patterns of adoption and different levels of expertise was developed. The second study built on the findings of the first but specifically focussed on innovators and early and late adopters of BIM, and on the development of a decision framework towards advanced collaborative platform solutions. This study summarizes the empirical results of the previous studies. The core of the decision framework is the creation, use and ownership of building information sub-models and integrated models. The decision framework relies on holistic collaborative design management. Design expertise is diffused and can be found at various locations along the construction supply chain within project teams; a wide definition of design is considered, from conceptual to developed to detailed design. The recent development of the decision model offers much potential, as the early upstream decisions are often made in a creative, collaborative and uncertain environment. However, decision making needs to balance a reductionist approach with an exploratory, creative empowerment approach. Shared team expertise, competency and team mental models are explored as fundamental requirements for collaborative BIM. New skills in interdisciplinarity are discussed as an implication of future construction industry collaborative platforms.

Relevance: 100.00%

Publisher:

Abstract:

We begin by briefly examining the achievements of the IUCN Red List of Threatened Species and offering it as the model and motivator for the creation of the IUCN Red List of Ecosystems (RLE). The history of the RLE concept within IUCN is briefly summarized, from the first attempt to formally establish an RLE in 1996 to the present. Major activities since 2008, when the World Conservation Congress initiated a "consultation process for the development, implementation and monitoring of a global standard for the assessment of ecosystem status, applicable at local, regional and global levels," have included developing a research agenda to strengthen the scientific foundations of the RLE, publishing preliminary categories and criteria for examination by the scientific and conservation community, disseminating the effort widely through workshops and conferences around the world, and encouraging tests of the system for a diversity of ecosystem types and in a variety of institutional settings. Between 2009 and 2012, the Red List of Ecosystems Thematic Group of the IUCN Commission on Ecosystem Management organized 18 workshops and delivered 17 conference presentations in 20 countries on 5 continents, directly reaching hundreds of participants. Our vision for the future includes integrating the RLE with the other three key IUCN knowledge products (the IUCN Red List of Threatened Species, the World Database on Protected Areas and Key Biodiversity Areas) in an online, user-driven, freely accessible information management system for performing biodiversity assessments. In addition, we wish to pilot the integration of the RLE into land/water use planning and macro-economic planning. Fundamental challenges for the future include a substantial expansion of existing institutional and technical capacity (especially in biodiversity-rich countries in the developing world), progressive assessment of the status of all terrestrial, freshwater, marine and subterranean ecosystems, and development of a map of the ecosystems of the world. Our ultimate goal is that national, regional and global RLEs are used to inform conservation and land/water use decision-making by all sectors of society.

Relevance: 100.00%

Publisher:

Abstract:

Background/Aims Obesity has become a global epidemic and a major preventable cause of morbidity and mortality, yet management strategies and treatment protocols are poorly developed and evaluated. The aim of the Counterweight Programme is to develop an evidence-based model for the management of obesity in primary care.

Methods The Counterweight Programme is based on the theoretical model of Evidence-Based Quality Assessment and is aimed at improving the management of obese adults (18–75 years) in primary care. The model consists of four phases: (1) practice audit and needs assessment, (2) practice support and training, (3) practice nurse-led patient intervention, and (4) evaluation. The patient intervention consisted of screening and treatment pathways incorporating evidence-based approaches, including patient-centred goal setting, prescribed eating plans, a group programme, physical activity and behavioural approaches, anti-obesity medication and weight maintenance strategies. Weight Management Advisers, who are specialist obesity dietitians, facilitated programme implementation. Eighty practices were recruited, of which 18 were randomized to act as controls and receive the deferred intervention 2 years after the initial audit.

Results By February 2004, 58 of the 62 (93.5%) intervention practices had been trained to run the intervention programme, 47 (75.8%) practices were actively implementing the model, and 1256 patients had been recruited (74% female, 26% male; mean age 50.6 years, SD 14). At baseline, 75% of patients had one or more co-morbidities, and the mean body mass index (BMI) was 36.9 kg/m² (SD 5.4). Of the 1256 patients recruited, 91% received one of the core lifestyle interventions in the first 12 months. Of all patients followed up at 12 months, 34% achieved a clinically meaningful weight loss of 5% or more. A total of 51% of patients were classed as compliant, in that they attended the required level of appointments at 3, 6, and 12 months. For fully compliant patients, weight loss improved further, with 43% achieving a weight loss of 5% or more at 12 months.

Conclusion The Counterweight Programme is an evidence-based weight management model that is feasible to implement in primary care.

Relevance: 100.00%

Publisher:

Abstract:

Cyber-physical-social systems (CPSS) allow individuals to share personal information collected not only from cyberspace but also from physical space, generating large volumes of data at users' local storage. Storing large data sets is expensive for users, however, and complicates data management, so it is of critical importance to outsource the data to cloud servers, which offer users an easy, cost-effective, and flexible way to manage data. Once data are outsourced, users lose control over them, which raises challenges for the integrity of the outsourced data. Many schemes have been proposed to allow a third-party auditor to verify data integrity using users' public keys. Most of these schemes rest on the strong assumption that auditors are honest and reliable, and are thereby vulnerable to malicious auditors. Moreover, in most of these schemes an auditor needs to manage users' certificates in order to choose the correct public keys for verification. In this paper, we propose a secure certificateless public integrity verification scheme (SCLPV). The SCLPV is the first work that simultaneously supports certificateless public verification and resistance against malicious auditors for verifying the integrity of outsourced data in CPSS. A formal security proof establishes the correctness and security of our scheme, and an elaborate performance analysis demonstrates that the SCLPV is efficient and practical. Compared with the only existing certificateless public verification scheme (CLPV), the SCLPV provides stronger security guarantees by remedying the security vulnerability of the CLPV and resisting malicious auditors. In comparison with the best existing integrity verification scheme that resists malicious auditors, the communication cost between the auditor and the cloud server in the SCLPV is independent of the size of the processed data, and the auditor in the SCLPV does not need to manage certificates.
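The SCLPV itself relies on certificateless pairing-based cryptography, which is beyond a short sketch. Purely to illustrate the audit workflow the abstract describes, in which an auditor challenges a cloud server over outsourced blocks, here is a deliberately simplified, hypothetical challenge-response check using keyed hashes. It is not the SCLPV construction and provides none of its certificateless or public-verifiability properties.

# Toy challenge-response integrity audit (NOT the SCLPV scheme):
# the data owner precomputes per-block HMAC tags; an auditor holding the
# tags challenges the server for random blocks and recomputes the tags.
import hmac, hashlib, secrets

KEY = secrets.token_bytes(32)                          # owner's tag key (illustrative)
blocks = [f"block-{i}".encode() for i in range(100)]   # the outsourced data
tags = [hmac.new(KEY, b, hashlib.sha256).digest() for b in blocks]

def audit(server_blocks, tags, key, sample_size=10):
    """Spot-check randomly chosen blocks against their stored tags."""
    for i in (secrets.randbelow(len(tags)) for _ in range(sample_size)):
        expected = tags[i]
        actual = hmac.new(key, server_blocks[i], hashlib.sha256).digest()
        if not hmac.compare_digest(expected, actual):
            return False                               # block i was corrupted
    return True

print("data intact:", audit(blocks, tags, KEY))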

Relevance: 100.00%

Publisher:

Abstract:

In recent decades, library associations have advocated the adoption of privacy and confidentiality policies as practical support for the Library Code of Ethics, with a threefold purpose: (1) to define and uphold privacy practices within the library, (2) to convey privacy practices to patrons, and (3) to protect against potential liability and public relations problems. The adoption of such policies has been instrumental in providing libraries with effective responses to surveillance initiatives such as warrantless requests and the USA PATRIOT Act. Nevertheless, as recent news stories reflect, the rapid emergence of data brokerage relationships and technologies, together with libraries' increasing reliance on third-party vendor services, has increased the opportunities for data surveillers to access patrons' personal information and reading habits, which are funnelled through, and made available on, multiple online library service platforms. Additionally, the advice that libraries should "contract for the same level of privacy reflected in their privacy policies" is no longer realistic, given that multiple vendor contracts negotiated at arm's length are likely to produce varying privacy terms and even varying definitions of what constitutes personally identifiable information (PII). These conditions sharply threaten the effectiveness and relevance of library privacy policies and privacy initiatives: such policies increasingly offer false comfort by failing to reflect the privacy weaknesses of the data-sharing landscape when library-vendor contracts fail to keep up with vendor data-sharing capabilities. While some argue that library privacy ethics are antiquated and obscure in the current online sharing economy, PEW studies point to pronounced public discomfort with increasing privacy erosion. At the same time, new directions in FTC enforcement raise the possibility that public institutions' privacy policies may serve as swords against unfair or deceptive commercial trade practices, offering renewed relevance for library privacy and confidentiality policies. This dual coin of public concern and potential FTC enforcement suggests that, when crafting privacy policies, libraries must walk a knife's edge: giving patrons realistic notice about the limits of the protections the library can ensure while publicly holding vendors accountable to library privacy ethics and expectations. Potential solutions for walking this edge are developed and offered as a subject for further discussion, to assist in the modification of model policies for public and academic libraries alike.

Relevance: 100.00%

Publisher:

Abstract:

This work introduces a quality assurance model applied in particular to the provision of services by libraries. The study is based on a specific planning method, Quality Function Deployment (QFD), selected to enable analysis of the quality demanded by the customer, the critical quality characteristics, the critical services, and the critical resources. The work discusses the quality assurance system, whose implementation is under way, which forms part of the management model to be adopted by the Biblioteca da Escola de Engenharia of the Universidade Federal do Rio Grande do Sul. The analysis uses QFD with a set of matrices that facilitates and safeguards the flow of information through all stages of its development.
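To illustrate the kind of matrix analysis QFD involves (a hedged sketch with invented weights; the thesis's actual matrices are not reproduced here), the fragment below computes priority scores for quality characteristics by weighting a requirements-versus-characteristics relationship matrix by customer importance:

# Hedged QFD sketch: prioritizing quality characteristics from customer
# demands. The weights and 0/1/3/9 relationship scores are illustrative.
demands = {"fast loan service": 5, "accurate catalogue": 4, "helpful staff": 3}
characteristics = ["staff training hours", "catalogue audit rate", "queue time"]

# relationship[demand][k] scores how strongly characteristic k serves demand.
relationship = {
    "fast loan service":  [3, 0, 9],
    "accurate catalogue": [1, 9, 0],
    "helpful staff":      [9, 1, 3],
}

# Priority of each characteristic = sum over demands of weight * relationship.
priorities = [
    sum(weight * relationship[d][k] for d, weight in demands.items())
    for k in range(len(characteristics))
]
for name, p in sorted(zip(characteristics, priorities), key=lambda x: -x[1]):
    print(f"{name}: {p}")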