998 results for Retrieval efficiency
Abstract:
Recent literature has emphasized the pivotal role of knowledge integration in Enterprise Systems (ES) success. This research-in-progress paper, building upon the Knowledge-Based Theory of the firm (KBT), examines the efficiency of knowledge integration in the context of ES implementation and identifies the factors contributing to its enhancement. The proposed model suggests that the efficiency of knowledge integration in an ES implementation process depends upon the level of common knowledge and the level of coordination in the ES-adopting organization. It further suggests that the level of common knowledge can be enhanced by proper training, improving ES users' intrinsic and extrinsic motivations, and business process modeling; and that the level of coordination can be improved by articulating a clear, unified organizational goal for the ES adoption, forming a competent ES team, enhancing interdepartmental communication, and increasing cross-functionality in the organization structure.
Abstract:
Information skills instruction for research candidates has recently been formalised as coursework at the Queensland University of Technology. Feedback solicited from participants suggests that students benefit from such coursework in a number of ways. Their perception of the value of specific content areas to their literature review and thesis presentation is favourable. A small group of students who participated in interviews identified five ways in which the coursework assisted the research process. As instructors continue to work with the postgraduate community, it would be useful to deepen our understanding of how such instruction is perceived and the benefits which can be derived from it.
Abstract:
Big Data is a rising IT trend similar to cloud computing, social networking or ubiquitous computing. Big Data can offer beneficial scenarios in the e-health arena. However, Big Data may need to be kept secure for long periods of time in order to realise benefits such as finding cures for infectious diseases while protecting patient privacy. It is therefore beneficial to analyse Big Data to produce meaningful information while the data is stored securely, making the analysis of various database encryption techniques essential. In this study, we simulated three types of technical environments, namely Plain-text, Microsoft Built-in Encryption, and custom Advanced Encryption Standard, using a Bucket Index in Data-as-a-Service. The results showed that custom AES-DaaS has a faster range query response time than MS built-in encryption. Furthermore, while carrying out the scalability test, we found that there are performance thresholds depending on physical IT resources. Therefore, for efficient Big Data management in e-health it is important to examine these scalability limits, even in a cloud computing environment. In addition, when designing an e-health database, both patient privacy and system performance need to be treated as top priorities.
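The bucket-index idea behind such schemes can be sketched as follows: store a coarse plaintext bucket id beside each ciphertext so the server can pre-filter range queries, then let the client decrypt the candidates and discard false positives. This is an illustrative Python sketch, not the study's implementation; the XOR keystream is a toy stand-in for AES, and the record schema is invented:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy reversible cipher (XOR with a SHA-256 keystream).
    A real deployment would use AES from a crypto library."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

BUCKET_WIDTH = 10  # e.g. ages 0-9 -> bucket 0, 10-19 -> bucket 1, ...

def store(records, key):
    """Encrypt each record but keep a coarse plaintext bucket id
    so the server can pre-filter range queries."""
    table = []
    for patient_id, age in records:
        plaintext = f"{patient_id},{age}".encode()
        table.append((age // BUCKET_WIDTH, keystream_xor(key, plaintext)))
    return table

def range_query(table, key, lo, hi):
    """Server-side: select candidate buckets. Client-side: decrypt
    and discard false positives introduced by the coarse index."""
    buckets = range(lo // BUCKET_WIDTH, hi // BUCKET_WIDTH + 1)
    results = []
    for bucket, ciphertext in table:
        if bucket in buckets:
            patient_id, age = keystream_xor(key, ciphertext).decode().split(",")
            if lo <= int(age) <= hi:
                results.append((patient_id, int(age)))
    return results
```

The bucket width trades precision against privacy: wider buckets leak less about each value but force the client to decrypt and filter more candidates.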
Abstract:
Bluetooth technology is being increasingly used to track vehicles throughout their trips, within urban networks and across freeway stretches. One important opportunity offered by this type of data is the measurement of Origin-Destination patterns, emerging from the aggregation and clustering of individual trips. In order to obtain accurate estimations, however, a number of issues need to be addressed through data filtering and correction techniques. These issues mainly stem from the patterns of Bluetooth use amongst drivers and the physical properties of the Bluetooth sensors themselves. First, not all cars are equipped with discoverable Bluetooth devices, and Bluetooth-enabled vehicles may belong to particular small socio-economic groups of users. Second, Bluetooth datasets include data from various transport modes, such as pedestrians, bicycles, cars, taxis, buses and trains. Third, Bluetooth sensors may fail to detect all of the nearby Bluetooth-enabled vehicles. As a consequence, the exact journey for some vehicles may become a latent pattern that needs to be extracted from the data. Finally, sensors in close proximity to each other may have overlapping detection areas, making the task of retrieving the correct travelled path even more challenging. The aim of this paper is twofold. We first give a comprehensive overview of the aforementioned issues. We then propose a methodology that can be followed in order to cleanse, correct and aggregate Bluetooth data. We postulate that the methods introduced in this paper are the first crucial steps towards computing accurate Origin-Destination matrices in urban road networks.
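A minimal sketch of one such filtering step, classifying devices by inter-sensor speed to drop pedestrian detections (illustrative Python; the sensor distances, speed threshold and field names are assumptions, not the paper's method):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    mac: str        # anonymised device identifier
    sensor: str     # sensor site id
    time_s: float   # detection timestamp in seconds

# Illustrative inter-sensor distances in metres (assumed values).
DIST_M = {("A", "B"): 900.0, ("B", "C"): 1200.0}

def segment_speeds(detections):
    """Speed of each device between consecutive sensor passages."""
    by_mac = {}
    for d in sorted(detections, key=lambda d: d.time_s):
        by_mac.setdefault(d.mac, []).append(d)
    speeds = {}
    for mac, trip in by_mac.items():
        for a, b in zip(trip, trip[1:]):
            dist = DIST_M.get((a.sensor, b.sensor))
            if dist and b.time_s > a.time_s:
                speeds.setdefault(mac, []).append(dist / (b.time_s - a.time_s))
    return speeds

def keep_vehicles(detections, min_mps=3.0):
    """Drop devices whose median segment speed looks pedestrian
    (below roughly 3 m/s, an assumed threshold)."""
    speeds = segment_speeds(detections)
    keep = {mac for mac, v in speeds.items()
            if sorted(v)[len(v) // 2] >= min_mps}
    return [d for d in detections if d.mac in keep]
```

In practice this step would be combined with the other corrections the paper describes, such as resolving overlapping detection areas and inferring latent paths.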
Abstract:
Democratic governments raise taxes and charges and spend revenue on delivering peace, order and good government. The delivery process begins with a legislature, which can provide a framework of legally enforceable rules enacted according to the government's constitution. These rules confer rights and obligations that allow particular people to carry on particular functions at particular places and times. Metadata standards as applied to public records contain information about the functioning of government as distinct from the non-government sector of society. Metadata standards apply to database construction. Data entry, storage, maintenance, interrogation and retrieval depend on a controlled vocabulary needed to enable accurate retrieval of suitably catalogued records in a global information environment. Queensland's socioeconomic progress now depends in part on technical efficiency in database construction to address queries about who does what, where and when; under what legally enforceable authority; and how the evidence of those facts is recorded. The Survey and Mapping Infrastructure Act 2003 (Qld) addresses technical aspects of 'where' questions – typically the officially recognised name of a place and a description of its boundaries. The current 10-year review of the Survey and Mapping Regulation 2004 provides a valuable opportunity to consider whether the Regulation makes sense in the context of a number of later laws concerned with management of Public Sector Information (PSI), as well as policies for ICT hardware and software procurement. Removing ambiguities about how official place names are to be regarded on a whole-of-government basis can achieve some short-term goals. Longer-term goals depend on a more holistic approach to information management – and current aspirations for more open government and community engagement are unlikely to be realised without such a longer-term vision.
Abstract:
This paper details the participation of the Australian e-Health Research Centre (AEHRC) in the ShARe/CLEF 2013 eHealth Evaluation Lab, Task 3. This task aims to evaluate the use of information retrieval (IR) systems to aid consumers (e.g. patients and their relatives) in seeking health advice on the Web. Our submissions to the ShARe/CLEF challenge are based on language models generated from the web corpus provided by the organisers. Our baseline system is a standard Dirichlet smoothed language model. We enhance the baseline by identifying and correcting spelling mistakes in queries, as well as expanding acronyms using AEHRC's Medtex medical text analysis platform. We then consider the readability and the authoritativeness of web pages to further enhance the quality of the document ranking. Measures of readability are integrated in the language models used for retrieval via prior probabilities. Prior probabilities are also used to encode authoritativeness information derived from a list of top-100 consumer health websites. Empirical results show that correcting spelling mistakes and expanding acronyms found in queries significantly improves the effectiveness of the language model baseline. Readability priors seem to increase retrieval effectiveness for graded relevance at early ranks (nDCG@5, but not precision), but no improvements are found at later ranks or when considering binary relevance. The authoritativeness prior does not appear to provide retrieval gains over the baseline: this is likely because of the small overlap between websites in the corpus and those in the top-100 consumer-health websites we acquired.
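The core of such a system, Dirichlet-smoothed query likelihood with a document log-prior folded into the score, can be sketched in a few lines (illustrative Python; mu = 2000 and the toy documents are assumptions, not the AEHRC system):

```python
import math
from collections import Counter

def score(query_terms, doc_terms, collection, mu=2000.0, log_prior=0.0):
    """Query-likelihood score with Dirichlet smoothing:
        log P(q|d) = sum_t log( (tf(t,d) + mu * P(t|C)) / (|d| + mu) )
    plus an optional document log-prior (e.g. a readability or
    authoritativeness score encoded as a prior probability)."""
    tf = Counter(doc_terms)
    cf = Counter(collection)
    clen = len(collection)
    s = log_prior
    for t in query_terms:
        p_c = cf[t] / clen
        if p_c == 0:        # term unseen in the collection: skip it
            continue
        s += math.log((tf[t] + mu * p_c) / (len(doc_terms) + mu))
    return s
```

With equal priors the document matching the query term ranks higher; a sufficiently large log-prior (here standing in for a readability signal) can re-rank the documents, which is the mechanism the abstract describes.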
Abstract:
Entity-oriented retrieval aims to return a list of relevant entities rather than documents, providing exact answers for user queries. The nature of entity-oriented retrieval requires identifying the semantic intent of user queries, i.e., understanding the semantic role of query terms and determining the semantic categories which indicate the class of target entities. Existing methods are not able to exploit this semantic intent by capturing the semantic relationship between terms in a query and in a document that contains entity-related information. To improve the understanding of the semantic intent of user queries, we propose a concept-based retrieval method that not only automatically identifies the semantic intent of user queries, i.e., the Intent Type and Intent Modifier, but also introduces concepts represented by Wikipedia articles into user queries. We evaluate the proposed method on entity profile documents annotated with concepts from the Wikipedia category and list structure. Empirical analysis reveals that the proposed method outperforms several state-of-the-art approaches.
Abstract:
Electrospun scaffolds manufactured using conventional electrospinning configurations have an intrinsic thickness limitation, due to a charge build-up at the collector. To overcome this limitation, an electrostatic lens has been developed that, at the same relative rate of deposition, focuses the polymer jet onto a smaller area of the collector, resulting in the fabrication of thick scaffolds within a shorter period of time. We also observed that a longer deposition time (up to 13 h, without the intervention of the operator) could be achieved when the electrostatic lens was utilised, compared to 9–10 h with a conventional processing set-up and also showed that fibre fusion was less likely to occur in the modified method. This had a significant impact on the mechanical properties, as the scaffolds obtained with the conventional process had a higher elastic modulus and ultimate stress and strain at short times. However, as the thickness of the scaffolds produced by the conventional electrospinning process increased, a 3-fold decrease in the mechanical properties was observed. This was in contrast to the modified method, which showed a continual increase in mechanical properties, with the properties of the scaffold finally having similar mechanical properties to the scaffolds obtained via the conventional process at longer times. This “focusing” device thus enabled the fabrication of thicker 3-dimensional electrospun scaffolds (of thicknesses up to 3.5 mm), representing an important step towards the production of scaffolds for tissue engineering large defect sites in a multitude of tissues.
Abstract:
This paper presents two efficiency models for the regenerative dynamometer to be built at the University of Queensland. The models incorporate an accurate accounting of the losses associated with the regenerative dynamometer and the battery modelling technique used. In addition, cycle and instantaneous efficiencies are defined for a regenerative system that requires a desired torque output. Simulation of the models allowed the instantaneous and cycle efficiencies to be examined. The results show the intended dynamometer machine has significant efficiency drawbacks, but that by incorporating field-winding control the efficiency can be improved.
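The instantaneous and cycle efficiencies defined above can be sketched as follows (illustrative Python; the sample data are invented, and the paper's own loss accounting is not reproduced):

```python
def instantaneous_efficiency(p_out_w, p_in_w):
    """eta(t) = P_out / P_in at a single operating point."""
    return p_out_w / p_in_w

def cycle_efficiency(times_s, p_out_w, p_in_w):
    """Cycle efficiency = E_out / E_in over a drive cycle, with the
    energies obtained by trapezoidal integration of power samples."""
    def energy(p):
        return sum((p[i] + p[i + 1]) / 2 * (times_s[i + 1] - times_s[i])
                   for i in range(len(times_s) - 1))
    return energy(p_out_w) / energy(p_in_w)
```

The distinction matters because a machine can be efficient at one operating point yet deliver a poor cycle efficiency if the drive cycle spends time in lossy regions.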
Abstract:
The measurement of losses in high-efficiency, high-power converters is difficult. Measuring the losses directly from the difference between the input and output power results in large errors. Calorimetric methods are usually used to bypass this issue but introduce different problems, such as long measurement times, a limited power-loss measurement range and/or large set-up cost. In this paper the total losses of a converter are measured directly and the switching losses are extracted. The measurements can be taken with only three multimeters, a current probe and a standard bench power supply. After acquiring two or three power-loss versus output-current sweeps, a series of curve-fitting processes are applied and the switching losses extracted.
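One common way to separate switching from conduction losses with such sweeps is to exploit the (approximately) linear dependence of switching loss on switching frequency; the paper's exact curve-fitting procedure is not reproduced here. An illustrative Python sketch under that assumed model:

```python
def switching_energy_per_cycle(current_a, p_f1_w, p_f2_w, f1_hz, f2_hz):
    """Model the total loss at fixed output current as
        P(I, f) = P_cond(I) + f * E_sw(I),
    so two loss-vs-current sweeps taken at switching frequencies
    f1 and f2 isolate the switching energy per cycle:
        E_sw(I) = (P(I, f2) - P(I, f1)) / (f2 - f1)."""
    return {i: (p2 - p1) / (f2_hz - f1_hz)
            for i, p1, p2 in zip(current_a, p_f1_w, p_f2_w)}

def conduction_loss(current_a, p_f1_w, f1_hz, e_sw):
    """The remainder after removing the frequency-dependent part."""
    return {i: p - f1_hz * e_sw[i] for i, p in zip(current_a, p_f1_w)}
```

A third sweep at another frequency gives a consistency check on the linear-in-frequency assumption.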
Abstract:
The auxiliary-load DC-DC converters of the Sunshark solar car have never been examined. An analysis of the current design reveals it is complicated and inefficient. Some simple measures to greatly improve the efficiency are presented which will achieve a worthwhile overall power saving. Two switch-mode power supply DC-DC converter designs are presented. One is a constant-current supply for the LED brake and turn indicators, which allows them to be powered directly from the main DC bus and switched only as necessary. The second is a low-power flyback converter, which employs synchronous rectification among other techniques to achieve good efficiency and regulation over a large range of output powers. Practical results from both converters, and an indication of the overall improvement in system efficiency, will be offered.
Abstract:
This paper evaluates the efficiency of a number of popular corpus-based distributional models in performing literature-based discovery on very large document sets, including online collections. Literature-based discovery is the process of identifying previously unknown connections from text, often published literature, that could lead to the development of new techniques or technologies. It has attracted growing research interest ever since Swanson's serendipitous discovery of the therapeutic effects of fish oil on Raynaud's disease in 1986. The successful application of distributional models in automating the identification of the indirect associations underpinning literature-based discovery has been amply demonstrated in the medical domain. However, we wish to investigate the computational complexity of distributional models for literature-based discovery on much larger document collections, as they may provide computationally tractable solutions to tasks such as predicting future disruptive innovations. In this paper we perform a computational complexity analysis of four successful corpus-based distributional models to evaluate their fitness for such tasks. Our results indicate that corpus-based distributional models that store their representations in fixed dimensions provide superior efficiency on literature-based discovery tasks.
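The fixed-dimension property behind that efficiency claim can be illustrated with random indexing, one well-known fixed-dimension distributional method (an illustrative Python sketch; DIM, the window size and the toy corpus are assumptions, and the paper's four models are not reproduced):

```python
import random

DIM = 64  # fixed vector dimensionality, independent of vocabulary growth

def index_vector(term, nonzero=4):
    """Sparse ternary 'fingerprint' for a term (random indexing)."""
    rng = random.Random(term)   # deterministic per term
    v = [0] * DIM
    for pos in rng.sample(range(DIM), nonzero):
        v[pos] = rng.choice([-1, 1])
    return v

def context_vectors(tokens, window=2):
    """Accumulate each term's context as the sum of its neighbours'
    index vectors. Memory stays O(V * DIM) for any corpus size,
    unlike a full V x V co-occurrence matrix."""
    vecs = {}
    for i, t in enumerate(tokens):
        v = vecs.setdefault(t, [0] * DIM)
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                iv = index_vector(tokens[j])
                for k in range(DIM):
                    v[k] += iv[k]
    return vecs
```

Because the representation never grows with the corpus, indirect associations can be scored by comparing these fixed-length vectors even over very large collections.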
Abstract:
Defence projects are typically undertaken within a multi-project management environment where a common agenda of project managers is to achieve higher project efficiency. This study adopted a multi-facet qualitative approach to investigate factors contributing to or impeding project efficiency in the Defence sector. Semi-structured interviews were undertaken to identify factors additional to those compiled from the literature survey. This was followed by a three-round Delphi study to examine the perceived critical factors of project efficiency. The results showed that project efficiency in the Defence sector went beyond its traditional, internally focused scope to one that is externally focused. As a result, efforts are needed not only on factors related to individual projects but also on those related to project inter-dependencies and external customers. The management of these factors will help to enhance the efficiency of a project within the Defence sector.
Abstract:
Bangkok Metropolitan Region (BMR) is the centre of major activities in Thailand, including politics, industry, agriculture, and commerce. Consequently, the BMR is the most densely populated area in Thailand, and the demand for houses there is the largest in the country, especially in subdivision developments. For these reasons, subdivision development in the BMR has increased substantially over the past 20 years, generating large numbers of subdivision developments (AREA, 2009; Kridakorn Na Ayutthaya & Tochaiwat, 2010). However, this dramatic growth has caused several problems, including unsustainable development, especially of subdivision neighbourhoods. Rating tools exist that encourage sustainable neighbourhood design in subdivision development, but they still have practical problems: they do not cover the full scale of the development, and they concentrate on social and environmental conservation aspects that have not been fully accepted by developers (Boonprakub, 2011; Tongcumpou & Harvey, 1994). These factors strongly confirm the need for an appropriate rating tool for sustainable subdivision neighbourhood design in the BMR. To improve the level of acceptance from all stakeholders in the subdivision development industry, the new rating tool should be based on an approach that unites social, environmental, and economic considerations, such as the eco-efficiency principle. Eco-efficiency is a sustainability indicator introduced by the World Business Council for Sustainable Development (WBCSD) in 1992, defined as the ratio of product or service value to its environmental impact (Lehni & Pepper, 2000; Sorvari et al., 2009). The eco-efficiency indicator thus addresses business concerns while simultaneously accounting for social and environmental impacts.
This study aims to develop a new rating tool named the "Rating for Sustainable Subdivision Neighbourhood Design (RSSND)". The RSSND methodology is developed through a combination of literature reviews, field surveys, eco-efficiency model development, a trial-and-error technique, and a tool validation process. All required data were collected by field surveys from July to November 2010. The eco-efficiency model combines three mathematical models attributable to subdivision neighbourhood design: the neighbourhood property price (NPP) model, the neighbourhood development cost (NDC) model, and the neighbourhood occupancy cost (NOC) model. The NPP model is formulated using a hedonic price model approach, while the NDC and NOC models are formulated using multiple regression analysis. The trial-and-error technique is adopted to simplify the complex mathematical eco-efficiency model into a user-friendly rating tool format. The credibility of the RSSND has been validated using eight rated and non-rated subdivisions. The tool is expected to meet the requirements of all stakeholders: supporting the social activities of residents, maintaining the environmental condition of the development and surrounding areas, and meeting the economic requirements of developers.
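The hedonic price (NPP) model described above is, at its core, a multiple regression of price on design attributes. A minimal sketch (illustrative Python; the attributes, data and coefficients are invented, not the study's models):

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting.
    Each row of X is [1, x1, x2, ...] (leading 1 for the intercept)."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * n                          # back substitution
    for i in range(n - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, n))) / A[i][i]
    return beta

def hedonic_price(beta, attrs):
    """Predicted property price from neighbourhood design attributes."""
    return beta[0] + sum(b * a for b, a in zip(beta[1:], attrs))
```

In a hedonic specification the fitted coefficients are read as implicit prices of the individual design attributes, which is what lets the eco-efficiency ratio weigh design choices against value.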
Abstract:
This paper seeks to explain the lagging productivity in Singapore's manufacturing sector noted in the statements of the Economic Strategies Committee Report 2010. Two methods are employed: the Malmquist productivity index, to measure total factor productivity (TFP) change, and Simar and Wilson's (2007) bootstrapped truncated regression approach, which first derives bias-corrected efficiency estimates before regressing them against explanatory variables to help quantify sources of inefficiency. The findings reveal that growth in total factor productivity was attributable to efficiency change, with no technical progress. Gains in efficiency were attributed to worker quality and flexible work arrangements, while the use of foreign workers lowered efficiency.
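For reference, the Malmquist index used above decomposes into efficiency change (EC) and technical change (TC). A sketch of the arithmetic, given output distance function values (illustrative Python; in practice the distance values come from DEA linear programs, which are not shown):

```python
import math

def malmquist(d0_t0, d0_t1, d1_t0, d1_t1):
    """Malmquist TFP index between periods 0 and 1, from output
    distance function values D_s(x_t, y_t), where s is the reference
    technology period and t the observed period:
      d0_t0 = D0(x0, y0),  d0_t1 = D0(x1, y1),
      d1_t0 = D1(x0, y0),  d1_t1 = D1(x1, y1).
    Decomposes as M = EC * TC, with
      EC = D1(x1,y1) / D0(x0,y0)                       (efficiency change)
      TC = sqrt( (D0(x1,y1)/D1(x1,y1))
               * (D0(x0,y0)/D1(x0,y0)) )               (technical change)
    Returns (M, EC, TC)."""
    ec = d1_t1 / d0_t0
    tc = math.sqrt((d0_t1 / d1_t1) * (d0_t0 / d1_t0))
    return ec * tc, ec, tc
```

The abstract's finding maps directly onto this decomposition: EC above 1 with TC near 1 means growth came from efficiency change with no technical progress.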