978 results for Categorical data
Abstract:
Purpose – The purpose of this paper is to describe an innovative compliance control architecture for hybrid multi-legged robots. The approach was verified on the hybrid legged-wheeled robot ASGUARD, which was inspired by quadruped animals. The adaptive compliance controller allows the system to cope with a variety of stairs and very rough terrain, and also to move at high velocity on flat ground without changing the control parameters. Design/methodology/approach – The paper shows how this adaptivity results in a versatile controller for hybrid legged-wheeled robots. For locomotion control we use an adaptive model of motion pattern generators. The control approach takes into account proprioceptive information about the torques applied to the legs. The controller itself is embedded on an FPGA-based, custom-designed motor control board. Additional proprioceptive inclination feedback is used to make the same controller more robust in terms of stair-climbing capability. Findings – The robot is well suited for disaster mitigation as well as for urban search and rescue missions, where it is often necessary to place sensors or cameras in dangerous or inaccessible areas to give rescue personnel better situational awareness before they enter a possibly dangerous area. A rugged, waterproof and dust-proof corpus and the ability to swim are additional features of the robot. Originality/value – Contrary to existing approaches, no pre-defined walking pattern for stair-climbing was used; instead, an adaptive approach based only on internal sensor information was adopted. In contrast to many other walking-pattern-based robots, direct proprioceptive feedback was used to modify the internal control loop, adapting the compliance of each leg on-line.
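The on-line adaptation of leg compliance from proprioceptive torque feedback described above can be pictured with the minimal Python sketch below. It is purely illustrative; the class name, gains and the specific update rule are assumptions, not the authors' FPGA implementation.

class AdaptiveComplianceLeg:
    """Toy virtual-spring leg whose stiffness adapts on-line to measured torque."""
    def __init__(self, stiffness=10.0, k_min=2.0, k_max=20.0, adapt_rate=0.5):
        self.stiffness = stiffness          # virtual spring constant (Nm/rad), assumed
        self.k_min, self.k_max = k_min, k_max
        self.adapt_rate = adapt_rate        # how quickly compliance adapts to load

    def update(self, target_angle, measured_angle, measured_torque, dt):
        # High measured torque (e.g. a stair edge) -> lower stiffness (more compliant);
        # low torque (flat ground) -> stiffer leg, allowing fast locomotion.
        load = abs(measured_torque)
        k_desired = self.k_max / (1.0 + load)
        self.stiffness += self.adapt_rate * (k_desired - self.stiffness) * dt
        self.stiffness = min(max(self.stiffness, self.k_min), self.k_max)
        # Command a torque toward the motion-pattern generator's target angle.
        return self.stiffness * (target_angle - measured_angle)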
Abstract:
Large volumes of heterogeneous health data silos pose a significant challenge when exploring information to support evidence-based decision making and ensure quality outcomes. In this paper, we present a proof of concept for adopting data warehousing technology to aggregate and analyse disparate health data in order to understand the impact of various lifestyle factors on obesity. We present a practical data warehousing model, with a detailed explanation, that can be similarly adopted for studying various other health issues.
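A warehouse of this kind is typically organised as a star schema, with a fact table joined to dimension tables. The Python/SQLite sketch below shows one possible layout; the table and column names are illustrative assumptions, not the schema used in the paper.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_patient   (patient_id INTEGER PRIMARY KEY, age_group TEXT, sex TEXT);
CREATE TABLE dim_lifestyle (lifestyle_id INTEGER PRIMARY KEY, factor TEXT, level TEXT);
CREATE TABLE dim_time      (time_id INTEGER PRIMARY KEY, year INTEGER, quarter INTEGER);
CREATE TABLE fact_obesity  (patient_id INTEGER, lifestyle_id INTEGER, time_id INTEGER,
                            bmi REAL);
""")

# A typical analytical query: average BMI by lifestyle factor and age group.
cur.execute("""
    SELECT l.factor, p.age_group, AVG(f.bmi)
    FROM fact_obesity f
    JOIN dim_lifestyle l ON f.lifestyle_id = l.lifestyle_id
    JOIN dim_patient   p ON f.patient_id   = p.patient_id
    GROUP BY l.factor, p.age_group;
""")
print(cur.fetchall())   # empty here; in practice populated from the source health data silos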
Abstract:
Decision-making is such an integral aspect of health care routine that the ability to make the right decisions at crucial moments can lead to patient health improvements. Evidence-based practice (EBP), the paradigm used to make those informed decisions, relies on the use of current best evidence from systematic research such as randomized controlled trials. Limitations of the outcomes from randomized controlled trials (RCTs), such as the “quantity” and “quality” of the evidence generated, have lowered healthcare professionals’ confidence in using EBP. An alternative paradigm, Practice-Based Evidence, has evolved, the key being evidence drawn from practice settings. Through the use of health information technology, electronic health records (EHRs) capture relevant clinical practice “evidence”. A data-driven approach is proposed to capitalize on the benefits of EHRs. The issues of data privacy, security and integrity are mitigated by an information accountability concept. A data warehouse architecture completes the data-driven approach by integrating health data from the multiple source systems unique to the healthcare environment.
Abstract:
What potential do artists working with environmental data in public space have for producing new forms of engagement with local environmental conditions? Operating on the edge of heavy bureaucracy, these types of data-driven artistic experiments probe the politics of environmental metrics and explore methods of engaging audiences with issues of environmental health. This discussion considers a small collection of case studies representative of this growing field of practice. These are works by Natalie Jeremijenko and The Living, Tega Brain and Keith Deverell. The case studies considered are examples of strategic design, works that soften, reveal and potentially shift existing regulations and bureaucratic norms. In doing so they open up new possibilities and questions as to what the smart city is and how it might be realised.
Abstract:
The workshop is an activity of the IMIA Working Group ‘Security in Health Information Systems’ (SiHIS). It focuses on a growing global problem: how to protect personal health data in today’s global eHealth and digital health environment. It will review available trust-building mechanisms, security measures and privacy policies. Technology alone does not solve this complex problem, and current protection policies and legislation are considered woefully inadequate. Among other trust-building tools, certification and accreditation mechanisms are discussed in detail, and the workshop will assess their acceptance and quality. The need for further research and international collective action is discussed. This workshop provides an opportunity to address a critical and growing problem and to make pragmatic proposals for sustainable and effective solutions for global eHealth and digital health.
Abstract:
A PowerPoint presentation on increasing research data management capability within your university, presented from the university library perspective and focusing on collaborations with university partners to develop and implement university-wide data management services and infrastructure.
Abstract:
QUT Library Research Support has simplified and streamlined the process of research data management planning, storage, discovery and reuse through collaboration, the use of integrated and tailored online tools, and a simplification of the metadata schema. This poster presents the integrated data management services at QUT, including QUT’s Data Management Planning Tool, Research Data Finder, Spatial Data Finder and Software Finder, and information on the simplified Registry Interchange Format – Collections and Services (RIF-CS) Schema. The QUT Data Management Planning (DMP) Tool was built using the Digital Curation Centre’s DMP Online Tool and modified to suit QUT’s needs and policies. The tool allows researchers and Higher Degree Research students to plan how to handle research data throughout the active phase of their research. The plan is promoted as a ‘live’ document, and researchers are encouraged to update it as required. The information entered into the plan can be made private or shared with supervisors, project members and external examiners. A plan is mandatory when requesting storage space on the QUT Research Data Storage Service. QUT’s Research Data Finder is integrated with QUT’s Academic Profiles and the Data Management Planning Tool to create a seamless data management process. This process aims to encourage the creation of high-quality, rich records that facilitate the discovery and reuse of quality data. The Registry Interchange Format – Collections and Services (RIF-CS) Schema used in QUT Research Data Finder was simplified to ‘RIF-CS lite’ to reflect mandatory and optional metadata requirements. RIF-CS lite removed schema fields that were underused or surplus to the needs of the users and systems. This has reduced the number of metadata fields required from users and made the integration of systems far simpler, as field content is easily shared across services, making the process of collecting metadata as transparent as possible.
Abstract:
The proliferation of the web presents an unsolved problem of automatically analyzing billions of pages of natural language. We introduce a scalable algorithm that clusters hundreds of millions of web pages into hundreds of thousands of clusters. It does this on a single mid-range machine using efficient algorithms and compressed document representations. It is applied to two web-scale crawls covering tens of terabytes: ClueWeb09 and ClueWeb12 contain 500 and 733 million web pages respectively and were clustered into 500,000 to 700,000 clusters. To the best of our knowledge, such fine-grained clustering has not been previously demonstrated. Previous approaches clustered a sample, which limits the maximum number of discoverable clusters. The proposed EM-tree algorithm uses the entire collection in clustering and produces several orders of magnitude more clusters than existing algorithms. Fine-grained clustering is necessary for meaningful clustering in massive collections, where the number of distinct topics grows linearly with collection size. These fine-grained clusters show improved cluster quality when assessed with two novel evaluations using ad hoc search relevance judgments and spam classifications for external validation. These evaluations solve the problem of assessing the quality of clusters where categorical labeling is unavailable or infeasible.
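The central idea of clustering the entire collection with a tree of cluster centroids, rather than clustering a sample, can be sketched as a recursive splitting of compressed document vectors. The sketch below is a simplified illustration over toy dense vectors using scikit-learn's KMeans; it is not the EM-tree algorithm or the authors' code, and the branching factor, depth and stopping rule are arbitrary assumptions.

import numpy as np
from sklearn.cluster import KMeans

def tree_cluster(vectors, branching=4, depth=3, min_size=20):
    """Recursively split a set of document vectors into a cluster tree."""
    if depth == 0 or len(vectors) < max(branching, min_size):
        return {"leaf": True, "size": len(vectors)}
    km = KMeans(n_clusters=branching, n_init=3, random_state=0).fit(vectors)
    children = []
    for label in range(branching):
        subset = vectors[km.labels_ == label]
        children.append(tree_cluster(subset, branching, depth - 1, min_size))
    return {"leaf": False, "size": len(vectors), "children": children}

# Toy usage with random stand-ins for compressed document representations.
docs = np.random.rand(2000, 64).astype(np.float32)
tree = tree_cluster(docs)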
Abstract:
Clinical Data Warehousing: A Business Analytic approach for managing health data
Abstract:
In this paper we describe the design of DNA Jewelry, which is a wearable tangible data representation of personal DNA profile data. An iterative design process was followed to develop a 3D form-language that could be mapped to standard DNA profile data, with the aim of retaining readability of data while also producing an aesthetically pleasing and unique result in the area of personalised design. The work explores design issues with the production of data tangibles, contributes to a growing body of research exploring tangible representations of data and highlights the importance of approaches that move between technology, art and design.
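As a purely hypothetical illustration of mapping standard DNA profile data to parameters of a 3D form, the Python sketch below converts a few STR loci into the radius and height of individual segments. The loci, scaling factors and parameter names are invented for illustration and are not the authors' form-language.

# Hypothetical STR profile: locus -> (allele A, allele B).
STR_PROFILE = {"D3S1358": (15, 17), "vWA": (16, 18), "FGA": (21, 24)}

def profile_to_segments(profile, base_radius_mm=4.0, base_height_mm=6.0):
    """Map each locus's two allele values to the radius and height of one segment."""
    segments = []
    for locus, (allele_a, allele_b) in profile.items():
        segments.append({
            "locus": locus,
            "radius_mm": base_radius_mm + 0.2 * allele_a,   # allele A drives thickness
            "height_mm": base_height_mm + 0.3 * allele_b,   # allele B drives length
        })
    return segments

segments = profile_to_segments(STR_PROFILE)   # feed into a parametric 3D model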
Abstract:
Developing and maintaining a successful institutional repository for research publications requires a considerable investment by the institution. Most of the money is spent on developing the skill-sets of existing staff or hiring new staff with the necessary skills. The return on this investment can be magnified by using this valuable infrastructure to curate collections of other materials such as learning objects, student work, conference proceedings and institutional or local community heritage materials. When Queensland University of Technology (QUT) implemented its repository for research publications (QUT ePrints) over 11 years ago, it was one of the first institutional repositories to be established in Australia. Currently, the repository holds over 29,000 open access research publications, and the cumulative total number of full-text downloads for these documents now exceeds 16 million. The full-text deposit rate for recently published peer-reviewed papers (currently over 74%) shows how well the repository has been embraced by QUT researchers. The success of QUT ePrints has resulted in requests to accommodate a plethora of materials which are ‘out of scope’ for this repository. QUT Library saw this as an opportunity to use its repository infrastructure (software, technical know-how and policies) to develop and implement a metadata repository for its research datasets (QUT Research Data Finder), a repository for research-related software (QUT Software Finder) and to curate a number of digital collections of institutional and local community heritage materials (QUT Digital Collections). This poster describes the repositories and digital collections curated by QUT Library and outlines the value delivered to the institution, and the wider community, by these initiatives.
Abstract:
This cross disciplinary study was conducted as two research and development projects. The outcome is a multimodal and dynamic chronicle, which incorporates the tracking of spatial, temporal and visual elements of performative practice-led and design-led research journeys. The distilled model provides a strong new approach to demonstrate rigour in non-traditional research outputs including provenance and an 'augmented web of facticity'.
Abstract:
On 19 June 2015, representatives from over 40 Australian research institutions gathered in Canberra to launch their Open Data Collections. The one-day event, hosted by the Australian National Data Service (ANDS), showcased to government and a range of national stakeholders the rich variety of data collections that have been generated through the Major Open Data Collections (MODC) project. Colin Eustace attended the showcase for QUT Library and presented a poster reflecting the work that he and Jodie Vaughan generated through the project. QUT’s Blueprint 4, the University’s five-year institutional strategic plan, outlines the key priorities of developing a commitment to working in partnership with industry, as well as combining disciplinary strengths with interdisciplinary application. The Division of Technology, Information and Learning Support (TILS) has undertaken a number of ANDS-funded projects since 2009 with the aim of developing improved research data management services within the University to support these strategic aims. By leveraging existing tools and systems developed during these projects, the MODC project delivered support to multi-disciplinary collaborative research activities through partnership building between QUT researchers and Queensland government agencies, in order to add to and promote the discovery and reuse of a collection of spatially referenced datasets. The MODC project built upon existing Research Data Finder infrastructure (which uses VIVO open source software, developed by Cornell University) to develop a separate collection, Spatial Data Finder (https://researchdatafinder.qut.edu.au/spatial), as the interface to display the spatial data collection. During the course of the project, 62 dataset descriptions were added to Spatial Data Finder, seven to Research Data Finder and two to Software Finder, another separate collection. The project team met with 116 individual researchers and attended 13 school and faculty meetings to promote the MODC project and raise awareness of the Library’s services and resources for research data management.
Abstract:
This paper analyzes the limitations upon the amount of in-domain (NIST SREs) data required for training a probabilistic linear discriminant analysis (PLDA) speaker verification system based on out-domain (Switchboard) total variability subspaces. By limiting the number of speakers, the number of sessions per speaker and the length of active speech per session available in the target domain for PLDA training, we investigated the relative effect of these three parameters on PLDA speaker verification performance on the NIST 2008 and NIST 2010 speaker recognition evaluation datasets. Experimental results indicate that, while these parameters depend highly on each other, to beat out-domain PLDA training more than 10 seconds of active speech should be available for at least 4 sessions/speaker for a minimum of 800 speakers. If further data is available, considerable improvement can be made over solely out-domain PLDA training.
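The kind of in-domain data limitation described above (number of speakers, sessions per speaker, active speech per session) can be expressed as a simple selection step before PLDA training. The Python sketch below reflects the reported rule of thumb (more than 10 s of active speech, at least 4 sessions per speaker, a minimum of 800 speakers); the data structures and function name are assumptions for illustration, not the authors' experimental code.

from collections import defaultdict

def select_indomain_subset(utterances, n_speakers=800, sessions_per_speaker=4,
                           min_active_speech_s=10.0):
    """utterances: iterable of dicts with 'speaker', 'session' and 'active_speech_s' keys."""
    by_speaker = defaultdict(list)
    for utt in utterances:
        if utt["active_speech_s"] > min_active_speech_s:
            by_speaker[utt["speaker"]].append(utt)
    # Keep only speakers with enough qualifying sessions, up to the requested speaker count.
    eligible = [utts for utts in by_speaker.values() if len(utts) >= sessions_per_speaker]
    subset = []
    for utts in eligible[:n_speakers]:
        subset.extend(utts[:sessions_per_speaker])
    return subset   # used for in-domain PLDA training on top of out-domain subspaces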
Abstract:
It is important to develop reliable finite element models of real structures, not only in the design phase but also for structural health monitoring and maintenance purposes. This paper describes the experience of the authors in using ambient vibration model identification techniques together with model updating tools to develop reliable finite element models of real civil engineering structures. Case studies of two real structures are presented in this paper. One is a 10-storey concrete building, which is considered a non-slender structure with complex boundary conditions. The other is a single-span concrete footbridge, which is also a relatively inflexible planar structure with complex boundary conditions. Both structures are located at the Queensland University of Technology (QUT) and are equipped with continuous structural health monitoring systems.
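Model updating of the kind described here is often posed as minimising the mismatch between identified and model-predicted natural frequencies. The sketch below shows the idea for a toy two-degree-of-freedom shear-frame model using SciPy; the masses, measured frequencies and starting stiffnesses are invented for illustration and do not correspond to the QUT structures studied in the paper.

import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

measured_freqs_hz = np.array([1.8, 5.1])   # hypothetical identified natural frequencies
masses = np.array([2.0e5, 1.8e5])          # assumed storey masses in kg

def model_freqs(stiffness):
    k1, k2 = stiffness                     # inter-storey stiffnesses to be updated (N/m)
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    M = np.diag(masses)
    omega_sq = eigh(K, M, eigvals_only=True)
    return np.sqrt(omega_sq) / (2.0 * np.pi)

def objective(stiffness):
    if np.any(np.asarray(stiffness) <= 0.0):
        return 1e9                         # penalise non-physical stiffness values
    return float(np.sum((model_freqs(stiffness) - measured_freqs_hz) ** 2))

result = minimize(objective, x0=[5.0e7, 5.0e7], method="Nelder-Mead")
updated_k1, updated_k2 = result.x          # stiffness parameters of the updated model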