96 results for library automated system
Abstract:
Masonry is one of the most ancient construction materials in the world. Compared to other civil engineering practices, masonry construction is highly labour intensive, which can adversely affect quality and productivity. With a view to improving quality, and in light of the limited skilled labour available in recent times, several innovative masonry construction methods, such as dry stack and thin bed masonry, have been developed. This paper focuses on the thin bed masonry system, which is used in many parts of Europe. The thin bed masonry system utilises a thin layer of polymer-modified mortar to connect accurately dimensioned and/or interlockable units. This assembly process has the potential to support an automated panelised construction system in an industrial setting, or to be adopted on site using less-skilled labour, without sacrificing quality. This is because, unlike conventional masonry construction, the thin bed technology uses a thinner mortar (or glue) layer, which can be controlled easily through some novel methods described in this paper. Structurally, reducing the thickness of the mortar joint has beneficial effects; for example, it increases the compressive strength of the masonry. In addition, polymer-added glue mortar enhances lateral load capacity relative to conventional masonry. This paper reviews recent research outcomes on the structural characteristics and construction practices of thin bed masonry. Finally, the suitability of thin bed masonry in developing countries, where masonry remains the most common material for residential building construction, is discussed.
Abstract:
Trusted health care outcomes are patient centric. Requirements that ensure both the quality and the sharing of patients’ health records are key to better clinical decision making. In the context of maintaining quality health care, the sharing of data and information between professionals and patients is paramount. This information sharing is challenging and costly if patients’ trust and institutional accountability are not established. This paper proposes the establishment of an Information Accountability Framework (IAF) as one such approach. The concepts behind the IAF requirements are transparent responsibilities, relevance of the information being used, and the establishment and evidence of accountability, all of which lead to the desired outcome of a trusted health care system. Once the IAF is in place, trust between the public and professionals can be built. Preserving the confidentiality and integrity of patients’ information will lead to trusted health care outcomes.
Abstract:
At NTCIR-9, we participated in the cross-lingual link discovery (Crosslink) task. In this paper we describe our approaches to discovering Chinese, Japanese, and Korean (CJK) cross-lingual links for English documents in Wikipedia. Our experimental results show that a link mining approach that mines the existing link structure for anchor probabilities, and relies on “translation” via cross-lingual document name triangulation, performs very well. The evaluation shows encouraging results for our system.
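As a concrete illustration of the link mining step, the sketch below estimates anchor probabilities from an existing link structure: for each anchor text, the fraction of its links that point to a given target. This is a minimal, assumed formulation; the pair-counting scheme and toy data are illustrative, not necessarily the system's exact computation.

```python
from collections import defaultdict

def mine_anchor_probabilities(links):
    """Estimate P(target | anchor text) from an existing link structure.

    `links` is assumed to be an iterable of (anchor_text, target_page)
    pairs extracted from article links.
    """
    anchor_totals = defaultdict(int)   # how often each anchor text links anywhere
    pair_counts = defaultdict(int)     # how often it links to a specific target
    for anchor, target in links:
        anchor_totals[anchor] += 1
        pair_counts[(anchor, target)] += 1
    # Probability that a given anchor text should link to a given target.
    return {pair: n / anchor_totals[pair[0]] for pair, n in pair_counts.items()}

# Toy example: the anchor "Tokyo" links to the city article 3 times out of 4.
links = [("Tokyo", "Tokyo (city)")] * 3 + [("Tokyo", "Tokyo (band)")]
print(mine_anchor_probabilities(links)[("Tokyo", "Tokyo (city)")])  # 0.75
```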
Abstract:
Recommender systems are a recent invention for dealing with the ever-growing information overload surrounding the selection of goods and services in a global economy. Collaborative Filtering (CF) is one of the most popular techniques in recommender systems. CF recommends items to a target user based on the preferences of a set of similar users, known as neighbours, generated from a database of past users' preferences. With sufficient item rating data its performance is promising, but research shows that it performs very poorly in a cold start situation, where there is not enough previous rating data. As an alternative to ratings, trust between users can be used to choose neighbours for making recommendations. Better recommendations can be achieved using an inferred trust network which mimics real-world "friend of a friend" recommendations. To extend the boundaries of the neighbourhood, an effective trust inference technique is required. This thesis proposes a trust inference technique called Directed Series Parallel Graph (DSPG), which performs better than other popular trust inference algorithms such as TidalTrust and MoleTrust. Another problem is that reliable explicit trust data is not always available. In real life, people trust "word of mouth" recommendations made by people with similar interests; this is often assumed in recommender systems. Through a survey, we confirm that interest similarity has a positive relationship with trust, and that this can be used to generate a trust network for recommendation. In this research, we also propose a new method, called SimTrust, for developing trust networks based on users' interest similarity in the absence of explicit trust data. To identify interest similarity, we use users' personalised tagging information. However, we are interested in which resources a user chooses to tag, rather than the text of the tags applied. The commonalities of the resources tagged by users can be used to form the neighbourhoods used in the automated recommender system. Our experimental results show that our proposed tag-similarity-based method outperforms the traditional collaborative filtering approach, which usually uses rating data.
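To make the tag-based idea concrete, the following sketch builds a neighbourhood network from the overlap of the resources users tagged, using Jaccard similarity. It is an illustrative stand-in only: the thesis's actual SimTrust formulation is not given in this abstract, and the threshold and profile data here are invented.

```python
def resource_similarity(tagged_a, tagged_b):
    """Jaccard similarity over the *resources* two users tagged,
    ignoring the tag text itself, as the abstract emphasises."""
    a, b = set(tagged_a), set(tagged_b)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

def build_trust_network(user_resources, threshold=0.3):
    """Link users whose tagged-resource overlap exceeds a threshold;
    the resulting graph stands in for an explicit trust network."""
    users = list(user_resources)
    network = {u: [] for u in users}
    for i, u in enumerate(users):
        for v in users[i + 1:]:
            if resource_similarity(user_resources[u], user_resources[v]) >= threshold:
                network[u].append(v)
                network[v].append(u)
    return network

profiles = {
    "alice": ["paper1", "paper2", "paper3"],
    "bob":   ["paper2", "paper3", "paper4"],
    "carol": ["paper9"],
}
print(build_trust_network(profiles))  # alice and bob become neighbours
```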
Abstract:
Experts’ views and commentary are highly respected in every discipline. However, unlike traditional disciplines such as medicine, mathematics and engineering, Information Systems (IS) expertise is difficult to define, even though seeking expert advice and views is common in the areas of IS project management, system implementation and evaluation. This research-in-progress paper attempts to understand the characteristics of IS experts through a comprehensive literature review of analogous disciplines, and then derives a formative research model with three main constructs. A validated construct of IS expertise will have a wide range of implications for research and practice.
Abstract:
Sustainability, smartness and safety are three core components of a modern transportation system. The objective of this study is to introduce a modern transportation system in light of a 3‘S’ approach: sustainable, smart and safe. In particular, this paper studies the transportation system of Singapore to address how it is progressing along this three-pronged path towards a modern transportation system. While sustainability targets environmental justice and social equity without compromising economic efficiency, smartness incorporates qualities such as automated sensing, processing, decision making and action taking into the transportation system. Since a system cannot be viable without being safe, the safety of a modern transportation system aims at minimizing the crash risks of all users, including motorists, motorcyclists, pedestrians and bicyclists. Various policy implications and technology applications within the transportation system of Singapore are discussed to illustrate a modern transportation system within the framework of the 3‘S’ model.
Abstract:
Background: Despite its efficacy and cost-effectiveness, exercise-based cardiac rehabilitation is undertaken by less than one-third of clinically eligible cardiac patients in every country for which data are available. Reasons for non-participation include the unavailability of hospital-based rehabilitation programs, or excessive travel time and distance. For this reason, there have been calls for the development of more flexible alternatives.
Methodology and Principal Findings: We developed a system to enable walking-based cardiac rehabilitation in which the patient's single-lead ECG, heart rate, and GPS-based speed and location are transmitted by a programmed smartphone to a secure server for real-time monitoring by a qualified exercise scientist. The feasibility of this approach was evaluated in 134 remotely monitored exercise assessment and exercise sessions in cardiac patients unable to undertake hospital-based rehabilitation. Completion rates, rates of technical problems, detection of ECG changes, pre- and post-intervention six-minute walk test (6MWT), cardiac depression and quality of life (QOL) were the key measures. The system was rated as easy and quick to use. It allowed participants to complete six weeks of exercise-based rehabilitation near their homes or worksites, or when travelling. The majority of sessions were completed without any technical problems, although periodic signal loss in areas of poor coverage was an occasional limitation. Several exercise and post-exercise ECG changes were detected. Participants showed improvements comparable to those reported for hospital-based programs, walking significantly further on the post-intervention 6MWT, 637 m (95% CI: 565–726), than on the pre-test, 524 m (95% CI: 420–655), and reporting significantly reduced levels of cardiac depression and significantly improved physical health-related QOL.
Conclusions and Significance: The system provided a feasible and very flexible alternative form of supervised cardiac rehabilitation for those unable to access hospital-based programs, with the potential to address a well-recognised deficiency in health care provision in many countries. Future research should assess its longer-term efficacy, cost-effectiveness and safety in larger samples representing the spectrum of cardiac morbidity and severity.
Abstract:
Queensland University of Technology (QUT) was one of the first universities in Australia to establish an institutional repository. Launched in November 2003, the repository, QUT ePrints (http://eprints.qut.edu.au), uses the EPrints open source repository software (from Southampton) and has enjoyed the benefit of an institutional deposit mandate since January 2004. Currently (April 2012), the repository holds over 36,000 records, including 17,909 open access publications, with another 2,434 publications embargoed but with mediated access enabled via the ‘Request a copy’ button, a feature of the EPrints software. At QUT, the repository is managed by the Library.

The repository is embedded into a number of other systems at QUT, including the staff profile system and the University’s research information system. It has also been integrated into a number of critical processes related to Government reporting and research assessment. Internally, senior research administrators often look to the repository for information to assist with decision-making and planning. While some statistics could be drawn from the advanced search feature and the existing download statistics feature, they were rarely at the level of granularity or aggregation required, and getting the information from the ‘back end’ of the repository was very time-consuming for Library staff.

In 2011, the Library funded a project to enhance the range of statistics available from the public interface of QUT ePrints. The repository team conducted a series of focus groups and individual interviews to identify and prioritise functionality requirements for a new statistics ‘dashboard’. The participants included a mix of research administrators, early career researchers and senior researchers. The repository team identified a number of business criteria (e.g. extensibility, support availability, skills required) and gave each a weighting. After considering all the known options, five software packages (IRStats, ePrintsStats, AWStats, BIRT and Google Urchin/Analytics) were thoroughly evaluated against a list of 69 criteria to determine which would be most suitable. The evaluation revealed that IRStats was the best fit for our requirements, being capable of meeting 21 of the 31 high-priority criteria. Consequently, IRStats was implemented as the basis for QUT ePrints’ new statistics dashboards, which were launched in Open Access Week, October 2011.

Statistics dashboards are now available at four levels: whole-of-repository, organisational unit, individual author and individual item. The data available include cumulative total deposits, time series deposits, deposits by item type, % full texts, % open access, cumulative downloads, time series downloads, downloads by item type, author ranking, paper ranking (by downloads), downloader geographic location, domains, internal vs external downloads, citation data (from Scopus and Web of Science), most popular search terms, and non-search referring websites. The data are displayed in chart, map and table formats.

The new statistics dashboards are a great success. Feedback received from staff and students has been very positive. Individual researchers have said that they have found the information very useful when compiling a track record. It is now very easy for senior administrators (including the Deputy Vice-Chancellor, Research) to compare the full-text deposit rates (i.e. mandate compliance rates) across organisational units. This has led to increased ‘encouragement’ from Heads of School and Deans in relation to the provision of full-text versions.
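The package evaluation described above is essentially a weighted-criteria comparison. The sketch below shows the general pattern; the criteria, weights and scores are illustrative placeholders, not the actual values from the QUT evaluation.

```python
# Weighted-criteria scoring: each criterion carries a weight, each candidate
# package a score per criterion; candidates are ranked by weighted total.
# All values here are invented for illustration.
criteria_weights = {"extensible": 3, "support_available": 2, "skills_required": 1}

candidate_scores = {
    "IRStats":      {"extensible": 4, "support_available": 4, "skills_required": 3},
    "AWStats":      {"extensible": 2, "support_available": 3, "skills_required": 4},
    "GoogleUrchin": {"extensible": 1, "support_available": 4, "skills_required": 5},
}

def weighted_total(scores):
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranking = sorted(candidate_scores,
                 key=lambda p: weighted_total(candidate_scores[p]),
                 reverse=True)
print(ranking)  # package with the highest weighted total first
```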
Abstract:
This paper describes in detail our Security-Critical Program Analyser (SCPA). SCPA is used to assess the security of a given program, based on its design or source code, with regard to data flow-based metrics. Furthermore, it allows software developers to generate a UML-like class diagram of their program and annotate its confidential classes, methods and attributes. SCPA is also capable of producing Java source code for the generated design of a given program. This source code can then be compiled, and the resulting Java bytecode can be used by the tool to assess the program's overall security according to our security metrics.
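Although the abstract does not define the metrics themselves, a data flow-based security metric of the kind SCPA computes can be illustrated as the fraction of a class's confidential attributes reachable through its public interface. The metric, class model and names below are assumptions for illustration only.

```python
# Illustrative sketch of a data flow-based security metric: the proportion
# of confidential attributes that a program's public interface can reach.
# This is an assumed metric, not necessarily one of the paper's.
from dataclasses import dataclass, field

@dataclass
class ClassDesign:
    name: str
    confidential_attrs: set = field(default_factory=set)
    # attribute -> set of public methods that read or return it
    public_accessors: dict = field(default_factory=dict)

def attribute_exposure(design: ClassDesign) -> float:
    """Fraction of confidential attributes reachable through public methods
    (lower is more secure)."""
    if not design.confidential_attrs:
        return 0.0
    exposed = {a for a in design.confidential_attrs if design.public_accessors.get(a)}
    return len(exposed) / len(design.confidential_attrs)

account = ClassDesign(
    name="Account",
    confidential_attrs={"pin", "balance"},
    public_accessors={"balance": {"getBalance"}},  # pin has no public accessor
)
print(attribute_exposure(account))  # 0.5
```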
Abstract:
Due to the increased complexity, scale and functionality of information and telecommunication (IT) infrastructures, new exploits and vulnerabilities are discovered every day. These vulnerabilities are most often used by malicious people to penetrate IT infrastructures, mainly to disrupt business or steal intellectual property. Recent incidents prove that it is no longer sufficient to perform manual security tests of an IT infrastructure based on sporadic security audits; instead, networks should be continuously tested against possible attacks. In this paper we present current results and challenges towards realising automated and scalable solutions to identify possible attack scenarios in an IT infrastructure. Specifically, we define an extensible framework which uses public vulnerability databases to identify probable multi-step attacks in an IT infrastructure, and provides recommendations in the form of patching strategies, topology changes and configuration updates.
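The core of such a framework can be pictured as path search over a graph of exploitable hosts. The sketch below enumerates multi-step attack paths; the topology and vulnerability set are invented, and a real system would derive them from public vulnerability databases such as the NVD rather than hard-coding them.

```python
# Minimal sketch of multi-step attack discovery as breadth-first graph search.
from collections import deque

# host -> set of hosts reachable once the attacker controls it (illustrative)
reachability = {
    "internet":  {"webserver"},
    "webserver": {"appserver"},
    "appserver": {"database"},
    "database":  set(),
}
# hosts considered exploitable because of a known vulnerability (illustrative)
vulnerable = {"webserver", "appserver", "database"}

def attack_paths(start, target):
    """Enumerate multi-step attack paths from `start` to `target`,
    stepping only through exploitable hosts."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        for nxt in reachability.get(path[-1], ()):
            if nxt in vulnerable and nxt not in path:
                if nxt == target:
                    paths.append(path + [nxt])
                else:
                    queue.append(path + [nxt])
    return paths

print(attack_paths("internet", "database"))
# [['internet', 'webserver', 'appserver', 'database']]
# Patching 'appserver' (which lies on every path here) would break the chain,
# which is the kind of recommendation such a framework can produce.
```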
Abstract:
QUT’s new metadata repository (data registry), Research Data Finder, has been designed to promote the visibility and discoverability of QUT research datasets. Funded by the Australian National Data Service (ANDS), it will provide a qualitative snapshot of research data outputs created or collected by members of the QUT research community that are available via open or mediated access. As a fully integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT’s research administration system, ResearchMaster, as well as QUT’s Academic Profiles system, to provide high-quality data descriptions that increase awareness of, and access to, shareable research data. In addition, the repository and its workflows are designed to foster smoother data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximize the reuse of existing research datasets.

The metadata schema used in Research Data Finder is the Registry Interchange Format - Collections and Services (RIF-CS), developed by ANDS in 2009. This comprehensive schema is potentially complex for researchers: unlike metadata for publications, which is often made publicly available with the official publication, metadata for datasets is not typically available and needs to be created. Research Data Finder therefore uses a hybrid self-deposit and mediated-deposit system. In addition to automated ingests from ResearchMaster (research project information) and the Academic Profiles system (researcher information), shareable data is identified at a number of key “trigger points” in the research cycle, including research grant proposals, ethics applications, Data Management Plans, Liaison Librarian data interviews and thesis submissions. These ingested records can be supplemented with related metadata, including links to related publications such as those in QUT ePrints.

Records deposited in Research Data Finder are harvested by ANDS and made available to a national and international audience via Research Data Australia, ANDS’ discovery service for Australian research data. Researcher and research group metadata records are also harvested by the National Library of Australia (NLA) and published in Trove (the NLA’s digital information portal). By contributing records to the national infrastructure, QUT data will become more visible. Within Australia and internationally, many funding bodies have already mandated open access to publications produced from publicly funded research projects, such as those supported by the Australian Research Council (ARC) or the National Health and Medical Research Council (NHMRC). QUT will thus be well placed to respond to the rapidly evolving climate of research data management.

This project is supported by the Australian National Data Service (ANDS). ANDS is supported by the Australian Government through the National Collaborative Research Infrastructure Strategy Program and the Education Investment Fund (EIF) Super Science Initiative.
Abstract:
This paper proposes a concrete approach for the automatic mitigation of risks detected during process enactment. Given a process model exposed to risks, e.g. a financial process exposed to the risk of approval fraud, we enact the process and, as soon as the likelihood of the associated risk(s) is no longer tolerable, generate a set of possible mitigation actions to reduce the risks' likelihood, ideally annulling the risks altogether. A mitigation action is a sequence of controlled changes applied to the running process instance, taking into account a snapshot of the process resources and data, and the current status of the system in which the process is executed. These actions are proposed as recommendations to help process administrators mitigate process-related risks as soon as they arise. The approach has been implemented in the YAWL environment and its performance evaluated. The results show that it is possible to mitigate process-related risks within a few minutes.
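The enactment-time pattern can be sketched as follows: estimate the risk's likelihood from a snapshot of the running instance and, once it crosses a tolerance threshold, rank candidate mitigation actions. Everything here (the estimator, the action set, the threshold) is an illustrative assumption, not the YAWL implementation.

```python
# Sketch of enactment-time risk mitigation: monitor a risk's likelihood and,
# once it exceeds a tolerance threshold, recommend mitigation actions ranked
# by their estimated residual risk. All values are illustrative.
TOLERANCE = 0.6

def risk_likelihood(instance):
    """Placeholder estimator for approval fraud: share of approvals made
    by the same user, computed from a snapshot of the instance's data."""
    return instance["same_user_approvals"] / max(instance["approvals"], 1)

def candidate_mitigations(instance):
    # Each action stands for a sequence of controlled changes
    # applied to the running process instance.
    return [
        {"action": "reassign approval task to a second approver", "residual": 0.1},
        {"action": "insert an extra verification task", "residual": 0.2},
    ]

def monitor(instance):
    if risk_likelihood(instance) > TOLERANCE:
        # Recommend the actions that reduce the risk the most first.
        return sorted(candidate_mitigations(instance), key=lambda a: a["residual"])
    return []

case = {"approvals": 4, "same_user_approvals": 3}
for rec in monitor(case):
    print(rec["action"])
```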
Abstract:
The Session Initiation Protocol (SIP) was developed to provide advanced voice services over IP networks. SIP unites the telephony and data worlds, permitting telephone calls to be transmitted over intranets and the Internet. Increases in network performance and new mechanisms for guaranteed quality of service encourage this consolidation, promising toll cost savings. Security emerges as one of the most important issues when voice communication and critical voice applications are considered. Beyond the security methods provided by traditional telephony systems, additional methods are required to overcome the security risks introduced by public IP networks. SIP addresses the security problems of such a consolidation and provides a security framework. Several security methods are defined within the SIP specifications and extensions, but the suggested methods cannot solve all the security problems of SIP systems with various system requirements. In this thesis, a Kerberos-based solution is proposed for SIP security problems, including SIP authentication and privacy. The proposed solution aims to establish a flexible and scalable SIP system that provides the desired level of security for voice communications and critical telephony applications.
Abstract:
Automated process discovery techniques aim to extract models from information system logs in order to shed light on the business processes supported by these systems. Existing techniques in this space are effective when applied to relatively small or regular logs, but otherwise generate large, spaghetti-like models. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. The result is a collection of process models -- each one representing a variant of the business process -- as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically by means of subprocess extraction. The proposed technique allows users to set a desired bound on the complexity of the produced models. Experiments on real-life logs show that the technique produces collections of models that are up to 64% smaller than those extracted under the same complexity bounds by applying existing trace clustering techniques.
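The divide-and-conquer loop can be sketched schematically: mine a model, and if it exceeds the complexity bound, split the log by trace clustering and recurse. The miner, complexity measure and clustering below are deliberately naive toy stand-ins, not the paper's actual components.

```python
# Runnable toy sketch: keep splitting the log until every discovered model
# fits under the user-specified complexity bound.
def discover(log):
    """Toy 'miner': the model is just the set of distinct activities."""
    return {act for trace in log for act in trace}

def complexity(model):
    return len(model)  # toy measure: number of activities in the model

def split_into_clusters(log):
    """Naive trace clustering: group traces by their first activity."""
    clusters = {}
    for trace in log:
        clusters.setdefault(trace[0], []).append(trace)
    return list(clusters.values())

def divide_and_conquer(log, bound):
    model = discover(log)
    if complexity(model) <= bound or len(split_into_clusters(log)) < 2:
        return [model]                      # within bound (or cannot split further)
    models = []
    for cluster in split_into_clusters(log):
        models.extend(divide_and_conquer(cluster, bound))
    return models                           # one model per process variant

log = [("a", "b", "c"), ("a", "c"), ("x", "y", "z", "w")]
print(divide_and_conquer(log, bound=4))     # two small variant models
```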
Abstract:
Reasons for performing study: Many domestic horses and ponies are sedentary and obese due to confinement in small paddocks and stables and a diet of infrequent, high-energy rations. Severe health consequences can be associated with this altered lifestyle. Objectives: The aims of this study were to investigate the ability of horses to learn to use a dynamic feeder system and to determine their movement and behavioural responses to the novel system. Methods: A dynamic feed station was developed to encourage horses to exercise in order to access ad libitum hay. Five pairs of horses (n = 10) were studied using a randomised crossover design, with each pair studied in a control paddock containing a standard hay feeder and in an experimental paddock containing the novel hay feeder. Horse movement was monitored by a global positioning system (GPS), and the horses were observed to assess their ability to learn to use the system and their behavioural responses to it. Results: With initial human intervention, all horses used the novel feeder within 1 h. Some aggressive behaviour was observed between horses not well matched in dominance behaviour. The median distance walked over a 4 h period was less (P = 0.002) in the control paddock (117 [57–185] m) than in the experimental paddock (630 [509–719] m). Conclusions: The use of an automated feeding system promotes increased activity levels in horses housed in small paddocks, compared with a stationary feeder.