Abstract:
Sustainable natural resource management has been a concern of governments and legislators for the last 20 years. A key aspect of an effective management framework is easy access to information about rights and obligations in land and the natural resources in, on or below the land. Information about legal interests in land is managed through a Torrens register in each Australian State. These registers are primarily focused on the registration of a narrow group of legal interests in the land, and rights or obligations that fall outside these recognised interests cannot be registered. Practices have developed, however, for recording property rights in natural resources either on separate registers with no link to the Torrens register, or on a register managed by the Registrar of Titles but having no legal effect on the title to the land. This paper will discuss and analyse the various ways in which registers have been used in Queensland to provide access to information about rights in natural resources, and provide examples of how this approach has affected the goal of sustainable management. It will also provide a critique of the Queensland model and call for reform of the present system.
Abstract:
Over the past 20 years, the nature of rural valuation practice has required most rural valuers to undertake studies in both agriculture (farm management) and valuation, especially when carrying out valuation work for financial institutions. The additional farm financial and management information obtained by rural valuers exceeds the level of information required to value commercial, retail and industrial property by the capitalisation of net rent/profit valuation method, and is very similar to the level of information required for the valuation of commercial and retail property by the Discounted Cash Flow valuation method. On this basis, valuers specialising in rural valuation practice have the necessary skills and information to value rural properties by an income valuation method, which can focus on the long-term environmental and economic sustainability of the property being valued. This paper will review the results of an extensive survey completed by rural property valuers in Australia, in relation to the impact of farm management on rural property values and sustainable rural land use. A particular focus of the research relates to the increased awareness of the problems of rural land degradation in Australia and the subsequent impact such problems have on the productivity of rural land. These problems of sustainable land use have resulted in the need to develop an approach to rural valuation practice that allows the valuer to factor the past management practices on the subject rural property into the actual valuation figure. An analysis of past farm management and the inclusion of this data in the valuation methodology provides a much more reliable indication of a farm's sustainable economic value than the existing direct comparison valuation methodology.
Abstract:
The materials presented here are intended to: a) accompany the document Supervisor Resource and b) provide technology supervisors with materials that may be readily shared with students. These resources are not designed to be distributed to students without contextualization; rather, they are intended for use in workshops or in discussions between supervisors and students. As authors, we anticipate that supervisors or workshop facilitators are most likely to extract individual resources of interest for particular occasions. The materials have been developed from conversations with supervisors from the technology disciplines.
Abstract:
Despite the advances that have been made in the valuation of commercial, industrial and retail property, there has not been the same progress in the valuation of rural property. Although the majority of rural property valuations also require the valuer to carry out a full analysis of the economic performance of the farming operations, this information is rarely used to assess the value of the property, nor even as the basis for a secondary valuation method. Over the past 20 years, the nature of rural valuation practice has required rural valuers to undertake studies in both agriculture (farm management) and valuation, especially when carrying out valuation work for financial institutions. The additional farm financial information obtained by rural valuers exceeds the level of information required to value commercial, retail and industrial property by the capitalisation of net rent/profit valuation method, and is very similar to the level of information required for the valuation of commercial and retail property by the Discounted Cash Flow valuation method. On this basis, valuers specialising in rural valuation practice should have the necessary skills and information to value rural properties by an income valuation method. Although the direct comparison method has been sufficient in the past to value rural properties, its future use as the main valuation method is limited, and valuers need to adopt an income valuation method, at least as a secondary method, to overcome the problems associated with relying on direct comparison as the only rural property valuation method. This paper will review the results of an extensive survey completed by rural property valuers in New South Wales (NSW), Australia, in relation to the impact of farm management on rural property values and rural property income potential.
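As a point of reference for the income approach advocated above, the Discounted Cash Flow method values a property as the present value of its projected net cash flows plus a discounted terminal value. The abstract does not give a formula; a standard textbook formulation is:

```latex
% V_0: present value of the property; CF_t: projected net cash flow
% in year t; r: discount rate; TV_n: terminal (resale) value at the
% end of the n-year holding period.
V_0 = \sum_{t=1}^{n} \frac{CF_t}{(1+r)^t} + \frac{TV_n}{(1+r)^n}
```

For rural property, the cash flows would be derived from the farm's analysed financial and management performance, which is where the past-management analysis described in these abstracts would enter the calculation.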
Abstract:
An examination of Information Security (IS) and Information Security Management (ISM) research in Saudi Arabia has shown the need for more rigorous studies focusing on the implementation and adoption processes involved in IS culture and practices. Overall, there is a lack of academic and professional literature about ISM and, more specifically, IS culture in Saudi Arabia. Therefore, the overall aim of this paper is to identify the issues and factors that assist the implementation and adoption of IS culture and practices within the Saudi environment, and in particular the important conditions for creating an information security culture in Saudi Arabian organizations. We plan to use the resulting framework to investigate whether a security culture has emerged in practice in Saudi Arabian organizations.
Abstract:
Traffic congestion is an increasing problem with high financial, social and personal costs. These costs include psychological and physiological stress, aggression and fatigue caused by lengthy delays, and an increased likelihood of road crashes. Reliable and accurate traffic information is essential for the development of traffic control and management strategies. Traffic information is mostly gathered from in-road vehicle detectors such as induction loops. The Traffic Message Channel (TMC) service is a popular service that wirelessly sends traffic information to drivers. Traffic probes have been used in many cities to increase the accuracy of traffic information. We propose a simulation to estimate the number of probe vehicles required to increase the accuracy of traffic information in Brisbane. A meso-level traffic simulator has been developed to facilitate the identification of the optimal number of probe vehicles required to achieve an acceptable level of traffic reporting accuracy. Our approach to determining the optimal number of probe vehicles required to meet quality of service requirements is to run simulations with varying numbers of traffic probes. The simulated traffic represents Brisbane's typical morning traffic. The road maps used in the simulation are Brisbane's TMC maps, complete with speed limits and traffic lights. Experimental results show that the optimal number of probe vehicles required to provide a useful supplement to TMC (induction loop) data lies between 0.5% and 2.5% of vehicles on the road. With fewer than 0.25% of vehicles acting as probes, little additional information is provided, while above 5%, adding further probes has only a negligible effect on accuracy. Our findings are consistent with ongoing research on traffic probes, and show the effectiveness of using probe vehicles to supplement induction loops for accurate and timely traffic information.
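A minimal sketch of the experimental loop described above: sweep the probe penetration rate and score probe-based link-speed estimates against ground truth. The toy network, speed distributions and accuracy measure are illustrative assumptions, not the authors' meso-level simulator.

```python
import random
import statistics

random.seed(42)

N_LINKS = 200            # road links in the toy network
VEHICLES_PER_LINK = 400  # vehicles traversing each link in the period

def make_link():
    """A link with a true mean speed (km/h) and noisy individual vehicle speeds."""
    true_speed = random.uniform(15, 60)
    vehicle_speeds = [max(5, random.gauss(true_speed, 8))
                      for _ in range(VEHICLES_PER_LINK)]
    return true_speed, vehicle_speeds

links = [make_link() for _ in range(N_LINKS)]

def run(probe_fraction):
    """Estimate each link's speed from probe reports only; return the
    fraction of links observed and the mean relative accuracy."""
    errors = []
    for true_speed, vehicle_speeds in links:
        probes = [v for v in vehicle_speeds if random.random() < probe_fraction]
        if not probes:
            continue  # link unobserved by probes in this run
        estimate = statistics.mean(probes)
        errors.append(abs(estimate - true_speed) / true_speed)
    coverage = len(errors) / len(links)
    accuracy = 1 - statistics.mean(errors) if errors else 0.0
    return coverage, accuracy

# Sweep the penetration rates discussed in the abstract (0.25% .. 5%).
for pct in (0.25, 0.5, 1.0, 2.5, 5.0):
    coverage, accuracy = run(pct / 100)
    print(f"{pct:4.2f}% probes: coverage={coverage:.0%}, speed accuracy={accuracy:.1%}")
```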
Abstract:
Clearly identifying the boundary between positive and negative document streams is a significant challenge. Several attempts have used negative feedback to address it; however, two issues arise when using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern-mining-based approach to select offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update the extracted features. An iterative learning algorithm is also proposed to implement this approach on RCV1, and extensive experiments show that the proposed approach achieves encouraging performance.
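A minimal sketch of the three-way term categorisation described above (the category names are from the abstract; the weighting scheme and revision factors are illustrative assumptions, not the paper's algorithm):

```python
def categorise_terms(positive_weights, negative_weights):
    """Split terms into positive specific, general, and negative specific,
    based on which side of the feedback they appear in.
    positive_weights / negative_weights: term -> weight dictionaries mined
    from positive documents and selected negative (offender) documents."""
    positive_specific, general, negative_specific = {}, {}, {}
    for term in set(positive_weights) | set(negative_weights):
        if term in positive_weights and term in negative_weights:
            general[term] = positive_weights[term]
        elif term in positive_weights:
            positive_specific[term] = positive_weights[term]
        else:
            negative_specific[term] = negative_weights[term]
    return positive_specific, general, negative_specific

def revise_weights(positive_specific, general, negative_specific,
                   boost=1.5, damp=0.5):
    """Apply a different revision strategy per category (illustrative values):
    promote positive specific terms, dampen general terms, and give
    negative specific terms a negative weight."""
    revised = {}
    revised.update({t: w * boost for t, w in positive_specific.items()})
    revised.update({t: w * damp for t, w in general.items()})
    revised.update({t: -w for t, w in negative_specific.items()})
    return revised

pos = {"mining": 1.0, "data": 0.6}
neg = {"data": 0.4, "weather": 0.8}
ps, gen, ns = categorise_terms(pos, neg)
print(revise_weights(ps, gen, ns))
# -> {'mining': 1.5, 'data': 0.3, 'weather': -0.8}
```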
Abstract:
It has long been hypothesised that negative feedback should be very useful for improving the performance of information filtering systems; however, few effective models have been developed to support this hypothesis. This paper proposes an effective model that uses negative relevance feedback, based on a pattern mining approach, to improve extracted features. This study focuses on two main issues in using negative relevance feedback: the selection of constructive negative examples to reduce the space of negative examples, and the revision of existing features based on the selected negative examples. The former selects offender documents, where offender documents are negative documents that are most likely to be classified in the positive group. The latter groups the extracted features into three categories (positive specific, general, and negative specific) so that their weights can be easily updated. An iterative algorithm is also proposed to implement this approach on the RCV1 data collection, and extensive experiments show that the proposed approach achieves encouraging performance.
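A minimal sketch of the offender-selection step (the scoring function and the top_k cut-off are illustrative assumptions; the paper's own selection criterion may differ):

```python
def select_offenders(negative_docs, relevance_score, top_k=10):
    """Select 'offender' documents: negative documents that the current
    filter scores highest, i.e. those most likely to be misclassified
    as positive. top_k is an illustrative cut-off.

    negative_docs: iterable of documents (any representation)
    relevance_score: callable mapping a document to the filter's score
    """
    ranked = sorted(negative_docs, key=relevance_score, reverse=True)
    return ranked[:top_k]

# Example: score by overlap with the currently extracted positive terms.
positive_terms = {"mining": 1.2, "pattern": 0.9, "filtering": 0.7}
docs = [{"pattern", "mining"}, {"weather"}, {"filtering", "pattern"}]
offenders = select_offenders(
    docs, lambda d: sum(positive_terms.get(t, 0) for t in d), top_k=2)
```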
Abstract:
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users' needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches must also deal with low-frequency pattern issues. The measures used by the data mining techniques (for example, "support" and "confidence") to learn the profile have turned out to be unsuitable for filtering and can lead to a mismatch problem. This thesis uses rough set-based (term-based) reasoning and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of threshold setting have been developed using rough set decision theory. The second stage (pattern mining) aims at solving the problem of information mismatch and is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy, with the most likely relevant documents assigned higher scores by the ranking function. Because relatively few documents are left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model, the state-of-the-art term-based models, including BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
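A minimal sketch of the two-stage architecture described above (the scoring functions and threshold are illustrative placeholders, not the thesis's rough-set threshold model or PTM ranking function):

```python
def two_stage_filter(documents, topic_score, pattern_rank,
                     threshold=0.5, top_n=20):
    """Stage 1 (topic filtering, recall-oriented): cheaply discard documents
    whose topic score falls below the threshold, reducing information overload.
    Stage 2 (pattern mining, precision-oriented): apply a more expensive
    pattern-based ranking to the survivors and return the top_n highest ranked."""
    survivors = [d for d in documents if topic_score(d) >= threshold]
    return sorted(survivors, key=pattern_rank, reverse=True)[:top_n]
```

The design point this illustrates is that the expensive pattern-based ranking only runs on the small set of documents surviving stage 1, which is how the thesis reports both reduced computational cost and more accurate pattern discovery.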
Abstract:
Purpose – The introduction of Building Information Model (BIM) tools over the last 20 years is resulting in radical changes in the Architectural, Engineering and Construction industry. One of these changes concerns the use of Virtual Prototyping, an advanced technology integrating BIM with realistic graphical simulations. Construction Virtual Prototyping (CVP) has now been developed and implemented on ten real construction projects in Hong Kong in the past three years. This paper reports on a survey aimed at establishing the effects of adopting this new technology and obtaining recommendations for future development. Design/methodology/approach – A questionnaire survey was conducted in 2007 of 28 key participants involved in four major Hong Kong construction projects, these projects being chosen because the CVP approach was used in more than one stage of each project. In addition, several interviews were conducted with the project manager, planning manager and project engineer of an individual project. Findings – All the respondents and interviewees gave a positive response to the CVP approach, with the most useful software functions considered to be those relating to visualisation and communication. The CVP approach was thought to improve the collaboration efficiency of the main contractor and sub-contractors by approximately 30 percent, with a concomitant 30 to 50 percent reduction in meeting time. The most important benefits of CVP in the construction planning stage are the improved accuracy of process planning and shorter planning times, while improved fieldwork instruction and reduced rework occur in the construction implementation stage. Although project teams are hesitant to attribute any specific time savings directly to the use of CVP, it was acknowledged that the workload of project planners is decreased. Suggestions for further development of the approach include the incorporation of automatic scheduling and advanced assembly study. Originality/value – Whilst the research, development and implementation of CVP is relatively new in the construction industry, it is clear from the applications and feedback to date that the approach provides considerable added value to the organisation and management of construction projects.
Abstract:
Objective: To summarise the extent to which narrative text fields in administrative health data are used to gather information about the event resulting in presentation to a health care provider for treatment of an injury, and to highlight best practice approaches to conducting narrative text interrogation for injury surveillance purposes.----- Design: Systematic review.----- Data sources: Electronic databases searched included CINAHL, Google Scholar, Medline, Proquest, PubMed and PubMed Central. Snowballing strategies were employed by searching the bibliographies of retrieved references to identify relevant associated articles.----- Selection criteria: Papers were selected if the study used a health-related database and if the study objectives were to a) use text fields to identify injury cases or to extract additional information on injury circumstances not available from coded data, b) use text fields to assess the accuracy of coded data fields for injury-related cases, or c) describe methods/approaches for extracting injury information from text fields.----- Methods: The papers identified through the search were independently screened by two authors for inclusion, resulting in 41 papers selected for review. Due to heterogeneity between studies, meta-analysis was not performed.----- Results: The majority of papers reviewed focused on describing injury epidemiology trends using coded data and text fields to supplement the coded data (28 papers), with these studies demonstrating the value of text data for providing more specific information beyond what had been coded, enabling case selection or providing circumstantial information. Caveats were expressed regarding the consistency and completeness of recorded text information, which can result in underestimates when using these data. Four coding validation papers were reviewed, with these studies showing the utility of text data for validating and checking the accuracy of coded data. Seven studies (9 papers) described methods for interrogating injury text fields for systematic extraction of information, with a combination of manual and semi-automated methods used to refine and develop algorithms for the extraction and classification of coded data from text. Quality assurance approaches to assessing the robustness of the methods for extracting text data were discussed in only 8 of the epidemiology papers and 1 of the coding validation papers. All of the text interrogation methodology papers described systematic approaches to ensuring the quality of the approach.----- Conclusions: Manual review and coding approaches, text search methods, and statistical tools have been utilised to extract data from narrative text and translate it into usable, detailed injury event information. These techniques can be, and have been, applied to administrative datasets to identify specific injury types and add value to previously coded injury datasets. Only a few studies thoroughly described the methods used for text mining, and fewer than half of the reviewed studies used or described quality assurance methods for ensuring the robustness of the approach. New techniques utilising semi-automated computerised approaches and Bayesian/clustering statistical methods offer the potential to further develop and standardise the analysis of narrative text for injury surveillance.
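A minimal sketch of the kind of keyword and regular-expression interrogation of narrative text fields described above (the patterns and records are illustrative, not drawn from any of the reviewed studies):

```python
import re

# Illustrative patterns for flagging fall-related injury cases from
# free-text injury descriptions (not from any reviewed study).
FALL_PATTERNS = [
    re.compile(r"\bfell\b", re.IGNORECASE),
    re.compile(r"\bfall(?:ing|en)?\b", re.IGNORECASE),
    re.compile(r"\bslip(?:ped|ping)?\b", re.IGNORECASE),
    re.compile(r"\btrip(?:ped|ping)?\b", re.IGNORECASE),
]

def flag_fall_cases(records):
    """Yield (record_id, narrative) pairs whose narrative text matches
    any fall-related pattern, for manual review or coding validation."""
    for record_id, narrative in records:
        if any(p.search(narrative) for p in FALL_PATTERNS):
            yield record_id, narrative

records = [
    (1, "Patient slipped on wet floor and struck head"),
    (2, "Burn to left hand from hot oil"),
    (3, "Fell from ladder while cleaning gutters"),
]
print([rid for rid, _ in flag_fall_cases(records)])  # -> [1, 3]
```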
Abstract:
Process modeling grammars are used by analysts to describe information systems domains in terms of the business operations an organization is conducting. While prior research has examined the factors that lead to continued usage behavior, little is known about the extent to which characteristics of the users of process modeling grammars inform usage behavior. In this study, a theoretical model is advanced that incorporates determinants of continued usage behavior as well as key antecedent individual difference factors of the grammar users, such as modeling experience, modeling background and perceived grammar familiarity. Findings from a global survey of 529 grammar users support the hypothesized relationships of the model. The study offers three central contributions. First, it provides a validated theoretical model of post-adoptive modeling grammar usage intentions. Second, it discusses the effects of individual difference factors of grammar users in the context of modeling grammar usage. Third, it provides implications for research and practice.
Abstract:
1. Ecological data sets often involve clustered measurements or repeated sampling in a longitudinal design. Choosing the correct covariance structure is an important step in the analysis of such data, as the covariance describes the degree of similarity among the repeated observations. 2. Three methods for choosing the covariance are: the Akaike information criterion (AIC), the quasi-information criterion (QIC), and the deviance information criterion (DIC). We compared the methods using a simulation study and a data set that explored the effects of forest fragmentation on avian species richness over 15 years. 3. The overall success rate was 80.6% for the AIC, 29.4% for the QIC and 81.6% for the DIC. For the forest fragmentation study, the AIC and DIC selected the unstructured covariance, whereas the QIC selected the simpler autoregressive covariance. Graphical diagnostics suggested that the unstructured covariance was probably correct. 4. We recommend using the DIC for selecting the correct covariance structure.
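For reference, standard textbook forms of the three criteria compared above (these definitions are not reproduced from the paper):

```latex
% AIC: L is the maximised likelihood, k the number of estimated parameters.
\mathrm{AIC} = -2\ln L + 2k

% QIC (Pan 2001): Q(\hat{\beta}; I) is the quasi-likelihood evaluated under
% the independence working correlation I, \hat{\Omega}_I the quasi-information
% matrix under independence, and \hat{V}_R the robust (sandwich) covariance.
\mathrm{QIC} = -2\,Q(\hat{\beta}; I) + 2\,\operatorname{trace}\!\bigl(\hat{\Omega}_I \hat{V}_R\bigr)

% DIC: \bar{D} is the posterior mean deviance and
% p_D = \bar{D} - D(\bar{\theta}) the effective number of parameters.
\mathrm{DIC} = \bar{D} + p_D
```

In each case the model (here, the covariance structure) with the smallest criterion value is selected.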
Abstract:
Recommender systems are widely used online to help users find products or items that may interest them, based on what is known about the user from their profile. Often, however, user profiles contain little information, and when there is insufficient knowledge about a user it is difficult for a recommender system to make quality recommendations. This is often referred to as the cold-start problem. Here we investigate whether association rules can be used as a source of information to expand a user profile and thus avoid this problem, leading to improved recommendations for users. Our pilot study shows that it is indeed possible to use association rules to improve the performance of a recommender system. This, we believe, can lead to further work on utilising appropriate association rules to lessen the impact of the cold-start problem.
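A minimal sketch of the profile-expansion idea (association rule mining is reduced to pairwise co-occurrence counting here; the pilot study's actual mining procedure is not specified in the abstract):

```python
from collections import Counter
from itertools import combinations

def mine_pair_rules(transactions, min_support=2, min_confidence=0.5):
    """Mine simple item -> item association rules from transaction data.
    A rule (a -> b) is kept if the pair occurs at least min_support times
    and support(a, b) / support(a) >= min_confidence."""
    item_counts, pair_counts = Counter(), Counter()
    for items in transactions:
        item_counts.update(set(items))
        pair_counts.update(combinations(sorted(set(items)), 2))
    rules = {}
    for (a, b), n in pair_counts.items():
        if n < min_support:
            continue
        if n / item_counts[a] >= min_confidence:
            rules.setdefault(a, set()).add(b)
        if n / item_counts[b] >= min_confidence:
            rules.setdefault(b, set()).add(a)
    return rules

def expand_profile(profile, rules):
    """Add items implied by the mined rules to a sparse user profile."""
    inferred = set()
    for item in profile:
        inferred |= rules.get(item, set())
    return set(profile) | inferred

transactions = [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"bread", "eggs"}]
rules = mine_pair_rules(transactions)
print(expand_profile({"milk"}, rules))  # -> {'milk', 'bread'}
```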
Abstract:
Information System (IS) success may be the most arguable and important dependent variable in the IS field. The purpose of the present study is to address IS success by empirically assessing and comparing DeLone and McLean's (1992) and Gable et al.'s (2008) models of IS success in the context of Australian universities. The two models have some commonalities and several important distinctions. Both models integrate and interrelate multiple dimensions of IS success. Hence, it would be useful to compare the models to see which is superior, as it is not clear how IS researchers should respond to this controversy.