141 results for Multiresolution Visualization
Abstract:
Process models are often used to visualize and communicate workflows to involved stakeholders. Unfortunately, process modelling notations can be complex and require specific knowledge to be understood. Storyboards, as a visual language that illustrates workflows as sequences of images, provide natural visualization features that allow for better communication and give insight to people outside the process modelling domain. This paper proposes a visualization approach that uses a 3D virtual world environment to visualize storyboards for business process models. A prototype was built to demonstrate its applicability, generating output for five major process model patterns and two non-trivial use cases. Illustrative results show the promise of using a 3D virtual world to visualize complex process models in an unambiguous and intuitive manner.
Abstract:
We have developed a virtual world environment for eliciting expert information from stakeholders. The intention is that the virtual world prompts the user to remember more about their work processes. Our example shows a sparse visualisation of the University of Vienna Department of Computer Science, our collaborators in this project.
Abstract:
QUT Library continues to rethink research support, with eResearch as a primary driver. Support for the development of the Lens, an open global cyberinfrastructure, has been especially important for promoting technology transfer, and partly responds to researchers’ need to follow innovation landscapes not only within the scientific but also the patent literature. The Lens http://www.lens.org/lens/ project makes innovation more efficient, fair, transparent and inclusive. It is a joint effort between Cambia http://www.cambia.org.au and Queensland University of Technology (QUT). The Lens serves more than 84 million patent documents from around the world as open, annotatable digital public goods, integrated with scholarly and technical literature along with regulatory and business data. Users can link from search results to visualizations and document clusters; from a patent document description to its full text; and from there, where applicable, to the underlying sequence data. Figure 1 shows a BLAST alignment (DNA) using the Lens. A unique feature of the Lens is the ability to embed search and BLAST results into blogs and websites, and to provide real-time updates to them. PatSeq Explorer http://www.lens.org/lens/bio/patseqexplorer allows users to navigate patent sequences that map onto the human genome and, in the future, many other genomes. PatSeq Explorer offers three-level views of the sequence information and links each group of sequences at the chromosomal level to their corresponding patent documents in the Lens. By integrating sequence search, patent search and document clustering capabilities, users can now understand, in both broad and fine detail, the true extent and scope of genetic sequence patents. QUT Library supported Cambia in developing, testing and promoting the Lens.
This poster demonstrates QUT Library’s provision of best practice and holistic research support to a research group and how QUT Librarians have acquired new capabilities to meet the needs of the researchers beyond traditional research support practices.
Abstract:
INTRODUCTION It is known that vascular morphology and functionality are changed following closed soft tissue trauma (CSTT) [1] and bone fractures [2]. The disruption of blood vessels may lead to hypoxia and necrosis. Currently, most clinical methods for the diagnosis and monitoring of CSTT with or without bone fractures are primarily based on qualitative measures or practical experience, making the diagnosis subjective and inaccurate. There is evidence that CSTT and early vascular changes following the injury delay soft tissue and bone healing [3]. However, a precise qualitative and quantitative morphological assessment of vasculature changes after trauma is currently missing. In this research, we aim to establish a diagnostic framework to qualitatively and quantitatively assess the 3D vascular morphological changes after standardized CSTT in a rat model using contrast-enhanced micro-CT imaging. METHODS An impact device was used for the application of a controlled, reproducible CSTT to the left thigh (Biceps Femoris) of anaesthetized male Wistar rats. After euthanizing the animals at 6 hours, 24 hours, 3 days, 7 days, or 14 days after trauma, CSTT was qualitatively evaluated by macroscopic visual observation of the skin and muscles. For visualization of the vasculature, the blood vessels of sacrificed rats were flushed with heparinised saline and then perfused with a radio-opaque contrast agent (Microfil, MV 122, Flowtech, USA) using an infusion pump. After allowing the contrast agent to polymerize overnight, both hind-limbs were dissected, and then the whole injured and contra-lateral control limbs were imaged using a micro-CT scanner (µCT 40, Scanco Medical, Switzerland) to evaluate the vascular morphological changes. Correlated biopsy samples were also taken from the CSTT region of both injured and control legs.
Morphological parameters such as vessel volume ratio (VV/TV), vessel diameter (V.D), spacing (V.Sp), number (V.N), connectivity (V.Conn) and degree of anisotropy (DA) were then quantified by evaluating the scans of biopsy samples using the micro-CT imaging system. RESULTS AND DISCUSSION A qualitative evaluation of the CSTT has shown that the developed impact protocols were capable of producing a defined and reproducible injury within the region of interest (ROI), resulting in a large hematoma and moderate swelling on both the lateral and medial sides of the injured legs. Also, visualization of the vascular network using 3D images confirmed the ability to consistently perfuse the large vessels and the majority of the microvasculature (Figure 1). Quantification of the vascular morphology obtained from correlated biopsy samples demonstrated that V.D, V.N and V.Sp were significantly higher in the injured legs 24 hours after impact in comparison with the control legs (p<0.05). The evaluation of the other time points is currently in progress. CONCLUSIONS The findings of this research will contribute to a better understanding of the changes to the vascular network architecture following traumatic injuries and during the healing process. When interpreted in the context of functional changes, such as tissue oxygenation, this will allow for objective diagnosis and monitoring of CSTT and serve as validation for future non-invasive clinical assessment modalities.
Abstract:
Custom designed for display on the Cube Installation situated in the new Science and Engineering Centre (SEC) at QUT, the ECOS project is a playful interface that uses real-time weather data to simulate how a five-star energy building operates in climates all over the world. In collaboration with the SEC building managers, the ECOS Project incorporates energy consumption and generation data of the building into an interactive simulation, which is both engaging to users and highly informative, and which invites play and reflection on the roles of green buildings. ECOS focuses on the principle that humans can have both a positive and a negative impact on ecosystems, with both local and global consequences. The ECOS project draws on the practice of Eco-Visualisation, a term used to encapsulate the important merging of environmental data visualization with the philosophy of sustainability. Holmes (2007) uses the term Eco-Visualisation (EV) to refer to data visualisations that ‘display the real time consumption statistics of key environmental resources for the goal of promoting ecological literacy’. EVs are commonly artifacts of interaction design, information design, interface design and industrial design, but are informed by various intellectual disciplines that share an interest in sustainability. Surveying a number of projects, Pierce, Odom and Blevis (2008) outline strategies for designing and evaluating effective EVs, including ‘connecting behavior to material impacts of consumption, encouraging playful engagement and exploration with energy, raising public awareness and facilitating discussion, and stimulating critical reflection.’ Similarly, Froehlich and colleagues (2010) use the term ‘Eco-feedback technology’ to describe the same field.
‘Green IT’ is another variation, which Tomlinson (2010) describes as a ‘field at the juncture of two trends… the growing concern over environmental issues’ and ‘the use of digital tools and techniques for manipulating information.’ The ECOS Project team is guided by these principles but, more importantly, proposes an example of how these principles may be achieved. The ECOS Project presents a simplified interface to the very complex domain of thermodynamic and climate modeling. From a mathematical perspective, the simulation can be divided into two models, which interact and compete for balance: the comfort of ECOS’ virtual denizens and the ecological and environmental health of the virtual world. The comfort model is based on the study of psychrometrics, specifically those aspects relating to human comfort. This provides baseline micro-climatic values for what constitutes a comfortable working environment within the QUT SEC buildings. The difference between the ambient outside temperature (as determined by polling the Google Weather API for live weather data) and the internal thermostat of the building (as set by the user) allows us to estimate the energy required to either heat or cool the building. Once the energy requirements are ascertained, they are balanced against the ability of the building to produce enough power from green energy sources (solar, wind and gas) to cover its energy requirements. The relative amount of energy produced by wind and solar can be calculated by considering, in the case of solar for example, the size of the panel and the amount of solar radiation it is receiving at any given time, which in turn can be estimated from the temperature and conditions returned by the live weather API. Some of these variables can be altered by the user, allowing them to attempt to optimize the health of the building.
The variables that can be changed are the budget allocated to green energy sources such as the solar panels and wind generator, and the air conditioning that controls the internal building temperature. These variables influence the energy input and output variables, which are modelled on real energy usage statistics drawn from the SEC data provided by the building managers.
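The energy balance described above lends itself to a simple sketch. The following minimal Python model is illustrative only: the function names, the linear kilowatts-per-degree heating/cooling coefficient, and the 18% panel efficiency are assumptions for demonstration, not figures from the ECOS project.

```python
# Minimal sketch of an ECOS-style energy balance.
# All coefficients (kW per degree, panel efficiency) are illustrative assumptions.

def hvac_demand_kw(outside_c: float, thermostat_c: float,
                   kw_per_degree: float = 2.0) -> float:
    """Estimate heating/cooling load from the indoor-outdoor temperature gap."""
    return abs(outside_c - thermostat_c) * kw_per_degree

def solar_output_kw(panel_area_m2: float, irradiance_kw_m2: float,
                    efficiency: float = 0.18) -> float:
    """Estimate solar generation from panel size and current irradiance."""
    return panel_area_m2 * irradiance_kw_m2 * efficiency

def building_balance_kw(outside_c: float, thermostat_c: float,
                        panel_area_m2: float, irradiance_kw_m2: float,
                        wind_kw: float = 0.0, gas_kw: float = 0.0) -> float:
    """Positive result: green generation covers demand; negative: deficit."""
    demand = hvac_demand_kw(outside_c, thermostat_c)
    supply = solar_output_kw(panel_area_m2, irradiance_kw_m2) + wind_kw + gas_kw
    return supply - demand

# Example: hot day (35 °C), thermostat at 22 °C, 50 m² of panels in strong sun.
print(building_balance_kw(35.0, 22.0, 50.0, 0.8))  # -18.8 (deficit)
```

In a full simulation, the irradiance input would itself be estimated from the live weather feed, closing the loop between outside conditions and the building's energy health.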
Abstract:
The ability to identify and assess user engagement with transmedia productions is vital to the success of individual projects and the sustainability of this mode of media production as a whole. It is essential that industry players have access to tools and methodologies that offer the most complete and accurate picture of how audiences/users engage with their productions and which assets generate the most valuable returns on investment. Drawing upon research conducted with Hoodlum Entertainment, a Brisbane-based transmedia producer, this project involved an initial assessment of the way engagement tends to be understood, why standard web analytics tools are ill-suited to measuring it, how a customised tool could offer solutions, and why this question of measuring engagement is so vital to the future of transmedia as a sustainable industry. Working with data provided by Hoodlum Entertainment and Foxtel Marketing, the outcome of the study was a prototype for a custom data visualisation tool that allowed access, manipulation and presentation of user engagement data, both historic and predictive. The prototyped interfaces demonstrate how the visualization tool would collect and organise data specific to multiplatform projects by aggregating data across a number of platform reporting tools. Such a tool is designed to encompass not only platforms developed by the transmedia producer but also sites developed by fans. This visualisation tool accounted for multiplatform experience projects whose top level is comprised of people, platforms and content. People include characters, actors, audience, distributors and creators. Platforms include television, Facebook and other relevant social networks, literature, cinema and other media that might be included in the multiplatform experience. Content refers to discrete media texts employed within the platform, such as a tweet, a YouTube video, a Facebook post, an email, a television episode, etc.
Core content is produced by the creators of multiplatform experiences to advance the narrative, while complementary content generated by audience members offers further contributions to the experience. Equally important is the timing with which the components of the experience are introduced and how they interact with and impact upon each other. By combining, filtering and sorting these elements in multiple ways, we can better understand the value of certain components of a project. It also offers insights into the relationship between the timing of the release of components and user activity associated with them, which further highlights the efficacy (or, indeed, failure) of assets as catalysts for engagement. In collaboration with Hoodlum we have developed a number of design scenarios experimenting with the ways in which data can be visualised and manipulated to tell a more refined story about the value of user engagement with certain project components and activities. This experimentation will serve as the basis for future research.
Abstract:
In the recent decision Association for Molecular Pathology v. Myriad Genetics [1], the US Supreme Court held that naturally occurring sequences from human genomic DNA are not patentable subject matter. Only certain complementary DNAs (cDNA), modified sequences and methods to use sequences are potentially patentable. It is likely that this distinction will hold for all DNA sequences, whether animal, plant or microbial [2]. However, it is not clear whether this means that other naturally occurring informational molecules, such as polypeptides (proteins) or polysaccharides, will also be excluded from patents. The decision underscores a pressing need for precise analysis of patents that disclose and reference genetic sequences, especially in the claims. Similarly, data sets, standards compliance and analytical tools must be improved—in particular, data sets and analytical tools must be made openly accessible—in order to provide a basis for effective decision making and policy setting to support biological innovation. Here, we present a web-based platform that allows such data aggregation, analysis and visualization in an open, shareable facility. To demonstrate the potential for the extension of this platform to global patent jurisdictions, we discuss the results of a global survey of patent offices that shows that much progress is still needed in making these data freely available for aggregation in the first place.
Abstract:
Acoustic recordings of the environment are an important aid to ecologists monitoring biodiversity and environmental health. However, rapid advances in recording technology, storage and computing make it possible to accumulate thousands of hours of recordings, of which ecologists can listen to only a small fraction. The big-data challenge is to visualize the content of long-duration audio recordings on multiple scales, from hours and days to months and years. The visualization should facilitate navigation and yield ecologically meaningful information. Our approach is to extract, at one-minute resolution, acoustic indices which reflect content of ecological interest. An acoustic index is a statistic that summarizes some aspect of the distribution of acoustic energy in a recording. We combine indices to produce false-colour images that reveal acoustic content and facilitate navigation through recordings that are months or even years in duration.
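The index-to-image mapping described above can be sketched as follows. Mapping three per-minute indices onto the red, green and blue channels matches the false-colour approach in outline, but the min-max normalisation and the synthetic index values below are illustrative assumptions, not the indices the authors actually compute.

```python
# Sketch: map three per-minute acoustic indices onto RGB channels to build a
# false-colour strip, one pixel column per minute of recording.
# The normalisation scheme and the random stand-in indices are assumptions.
import numpy as np

def normalise(x: np.ndarray) -> np.ndarray:
    """Rescale an index series to [0, 1] for use as a colour channel."""
    lo, hi = x.min(), x.max()
    return np.zeros_like(x) if hi == lo else (x - lo) / (hi - lo)

def false_colour_strip(index_r, index_g, index_b) -> np.ndarray:
    """Stack three one-minute-resolution index series into an RGB image row.

    Returns shape (1, n_minutes, 3) with values in [0, 1]; repeat the row
    vertically (or tile one row per day) to form a visible image.
    """
    channels = [normalise(np.asarray(ix, dtype=float))
                for ix in (index_r, index_g, index_b)]
    return np.stack(channels, axis=-1)[np.newaxis, :, :]

# 24 hours of stand-in per-minute indices (1440 minutes).
rng = np.random.default_rng(0)
strip = false_colour_strip(rng.random(1440), rng.random(1440), rng.random(1440))
print(strip.shape)  # (1, 1440, 3)
```

Stacking one such row per day yields the multi-month navigable images the abstract describes, with each day readable at a glance.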
Abstract:
INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX 2013 evaluation campaign, which consisted of four activities addressing three themes: searching professional and user generated data (Social Book Search track); searching structured or semantic data (Linked Data track); and focused retrieval (Snippet Retrieval and Tweet Contextualization tracks). INEX 2013 was an exciting year for INEX in which we consolidated the collaboration with (other activities in) CLEF and for the second time ran our workshop as part of the CLEF labs in order to facilitate knowledge transfer between the evaluation forums. This paper gives an overview of all the INEX 2013 tracks, their aims and tasks, the test collections built, and an initial analysis of the results.
Abstract:
Human genetic association studies have shown gene variants in the α5 subunit of the neuronal nicotinic receptor (nAChR) influence both ethanol and nicotine dependence. The α5 subunit is an accessory subunit that facilitates α4* nAChRs assembly in vitro. However, it is unknown whether this occurs in the brain, as there are few research tools to adequately address this question. As the α4*-containing nAChRs are highly expressed in the ventral tegmental area (VTA) we assessed the molecular, functional and pharmacological roles of α5 in α4*-containing nAChRs in the VTA. We utilized transgenic mice α5+/+(α4YFP) and α5-/-(α4YFP) that allow the direct visualization and measurement of α4-YFP expression and the effect of the presence (α5+/+) and absence of α5 (-/-) subunit, as the antibodies for detecting the α4* subunits of the nAChR are not specific. We performed voltage clamp electrophysiological experiments to study baseline nicotinic currents in VTA dopaminergic neurons. We show that in the presence of the α5 subunit, the overall expression of α4 subunit is increased significantly by 60% in the VTA. Furthermore, the α5 subunit strengthens baseline nAChR currents, suggesting the increased expression of α4* nAChRs to be likely on the cell surface. While the presence of the α5 subunit blunts the desensitization of nAChRs following nicotine exposure, it does not alter the amount of ethanol potentiation of VTA dopaminergic neurons. Our data demonstrates a major regulatory role for the α5 subunit in both the maintenance of α4*-containing nAChRs expression and in modulating nicotinic currents in VTA dopaminergic neurons. Additionally, the α5α4* nAChR in VTA dopaminergic neurons regulates the effect of nicotine but not ethanol on currents. Together, the data suggest that the α5 subunit is critical for controlling the expression and functional role of a population of α4*-containing nAChRs in the VTA.
Abstract:
Many applications can benefit from the accurate surface temperature estimates that can be made using a passive thermal-infrared camera. However, the process of radiometric calibration which enables this can be both expensive and time consuming. An ad hoc approach for performing radiometric calibration is proposed which does not require specialized equipment and can be completed in a fraction of the time of the conventional method. The proposed approach utilizes the mechanical properties of the camera to estimate scene temperatures automatically, and uses these target temperatures to model the effect of sensor temperature on the digital output. A comparison with a conventional approach using a blackbody radiation source shows that the accuracy of the method is sufficient for many tasks requiring temperature estimation. Furthermore, a novel visualization method is proposed for displaying the radiometrically calibrated images to human operators. The representation employs an intuitive coloring scheme and allows the viewer to perceive a large variety of temperatures accurately.
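As a rough illustration of the kind of model involved, the sketch below fits a simple linear relation between digital counts, scene temperature and sensor temperature, then inverts it to estimate scene temperature. The linear form and every coefficient here are assumptions for demonstration; the paper's actual radiometric model and calibration procedure are not reproduced.

```python
# Sketch: fit D = a*T_scene + b*T_sensor + c relating raw digital counts to
# scene and sensor temperature, then invert it to estimate scene temperature.
# The linear form and the synthetic coefficients are illustrative assumptions.
import numpy as np

def fit_radiometric_model(counts, t_scene, t_sensor):
    """Least-squares fit of D = a*T_scene + b*T_sensor + c; returns (a, b, c)."""
    A = np.column_stack([t_scene, t_sensor, np.ones_like(t_scene)])
    coeffs, *_ = np.linalg.lstsq(A, counts, rcond=None)
    return coeffs

def estimate_scene_temp(counts, t_sensor, coeffs):
    """Invert the fitted model to recover scene temperature from counts."""
    a, b, c = coeffs
    return (counts - b * t_sensor - c) / a

# Synthetic calibration data generated from the assumed model (a=30, b=-5, c=8000).
t_scene = np.array([10.0, 20.0, 30.0, 40.0, 50.0])     # reference temps, °C
t_sensor = np.array([25.0, 28.0, 26.0, 29.0, 27.0])    # camera body temps, °C
counts = 30.0 * t_scene - 5.0 * t_sensor + 8000.0      # noiseless digital output
coeffs = fit_radiometric_model(counts, t_scene, t_sensor)
print(estimate_scene_temp(counts, t_sensor, coeffs))   # ≈ [10, 20, 30, 40, 50]
```

The appeal of the ad hoc approach in the abstract is that the reference temperatures come from the camera's own mechanical behaviour rather than from a blackbody source; the fitting step itself is the same in spirit.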
Abstract:
Energy auditing is an effective but costly approach for reducing the long-term energy consumption of buildings. When well-executed, energy loss can be quickly identified in the building structure and its subsystems. This then presents opportunities for improving energy efficiency. We present a low-cost, portable technology called "HeatWave" which allows non-experts to generate detailed 3D surface temperature models for energy auditing. This handheld 3D thermography system consists of two commercially available imaging sensors and a set of software algorithms which can be run on a laptop. The 3D model can be visualized in real-time by the operator so that they can monitor their degree of coverage as the sensors are used to capture data. In addition, results can be analyzed offline using the proposed "Spectra" multispectral visualization toolbox. The presence of surface temperature data in the generated 3D model enables the operator to easily identify and measure thermal irregularities such as thermal bridges, insulation leaks, moisture build-up and HVAC faults. Moreover, 3D models generated from subsequent audits of the same environment can be automatically compared to detect temporal changes in conditions and energy use over time.
Abstract:
My practice-led research explores and maps workflows for generating experimental creative work involving inertia-based motion capture technology. Motion capture has often been used as a way to bridge animation and dance, resulting in abstracted visual outcomes. In early works this process was largely done through rotoscoping, reference footage and mechanical forms of motion capture. With the evolution of technology, optical and inertial forms of motion capture are now more accessible and able to accurately capture a larger range of complex movements. The creative work titled “Contours in Motion” was the first in a series of studies on captured motion data used to generate experimental visual forms that reverberate in space and time. The source or ‘seed’ comes from using an Xsens MVN inertial motion capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. The aim of the creative work was to diverge from the standard practice of using a particle system and/or a simple re-targeting of the motion data to drive a 3D character as a means to produce abstracted visual forms. To facilitate this divergence, a virtual dynamic object was tethered to a selection of data points from a captured performance. The properties of the dynamic object were then adjusted to balance the influence of the human movement data against the influence of computer-based randomization. The resulting outcome was a visual form that surpassed simple data visualization to project the intent of the performer’s movements into the visual shape itself. The reported outcomes from this investigation have contributed to a larger study on the use of motion capture in the generative arts, furthering the understanding of, and generating theories on, practice.
Abstract:
Environmental monitoring is becoming critical as human activity and climate change place greater pressures on biodiversity, leading to an increasing need for data to make informed decisions. Acoustic sensors can help collect data across large areas for extended periods, making them attractive in environmental monitoring. However, managing and analysing large volumes of environmental acoustic data is a great challenge and is consequently hindering the effective utilization of the large datasets collected. This paper presents an overview of our current techniques for collecting, storing and analysing large volumes of acoustic data efficiently, accurately, and cost-effectively.
Abstract:
Discharge summaries and other free-text reports in healthcare transfer information between working shifts and geographic locations. Patients are likely to have difficulties in understanding their content because of their medical jargon, non-standard abbreviations, and ward-specific idioms. This paper reports on an evaluation lab that aims to support the continuum of care by developing methods and resources that make clinical reports in English easier for patients to understand, and which help them find information related to their condition.