283 results for Information content

in the Queensland University of Technology - ePrints Archive


Relevance:

70.00%

Abstract:

This paper conceptualizes a framework for bridging the BIM (building information modelling)-specifications divide by augmenting objects within BIM with specification parameters derived from a product library. We demonstrate how model information, enriched with data at various LODs (levels of development), can evolve simultaneously with design and construction, using different representations of a window object embedded in a wall as lifecycle phase exemplars at different levels of granularity. The conceptual standpoint is informed by the need to explore a methodological approach that extends beyond the limitations of current modelling platforms in enhancing the information content of BIM models. This work therefore demonstrates that BIM objects can be augmented with construction specification parameters by leveraging product libraries.
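
As a concrete illustration of the idea, the sketch below augments a simplified window object with specification parameters drawn from a product-library entry and raises its LOD as the design evolves. The class names, fields and parameter keys are hypothetical; real BIM platforms expose far richer schemas (e.g. IFC).

```python
from dataclasses import dataclass, field

@dataclass
class ProductLibraryEntry:
    """A hypothetical product-library record holding specification parameters."""
    product_code: str
    parameters: dict  # e.g. {"U-value": 1.4, "glazing": "double, low-e"}

@dataclass
class BimObject:
    """A simplified BIM object; field names here are illustrative only."""
    name: str
    lod: int  # level of development, e.g. 100-500
    properties: dict = field(default_factory=dict)

    def augment_from_library(self, entry: ProductLibraryEntry, new_lod: int) -> None:
        # Merge specification parameters into the object and raise its LOD,
        # mirroring the paper's idea of enriching objects as design evolves.
        self.properties.update(entry.parameters)
        self.properties["product_code"] = entry.product_code
        self.lod = max(self.lod, new_lod)

# Usage: a window embedded in a wall gains manufacturer data at LOD 300.
window = BimObject(name="W-01 window in wall A-3", lod=200)
entry = ProductLibraryEntry("WIN-2040", {"U-value": 1.4, "glazing": "double, low-e"})
window.augment_from_library(entry, new_lod=300)
print(window.lod, window.properties)
```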

Relevance:

60.00%

Abstract:

In this paper, we present the application of a non-linear dimensionality reduction technique for the learning and probabilistic classification of hyperspectral images. Hyperspectral imaging spectroscopy is an emerging technique for geological investigations from airborne or orbital sensors. It gives much greater information content per pixel than a normal colour image, which should greatly help with the autonomous identification of natural and man-made objects in unfamiliar terrains for robotic vehicles. However, the large information content of such data makes interpretation of hyperspectral images time-consuming and user-intensive. We propose the use of Isomap, a non-linear manifold learning technique, combined with Expectation Maximisation in graphical probabilistic models for learning and classification. Isomap is used to find the underlying manifold of the training data. This low-dimensional representation of the hyperspectral data facilitates the learning of a Gaussian Mixture Model representation, whose joint probability distributions can be calculated offline. The learnt model is then applied to the hyperspectral image at runtime and data classification can be performed.
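
A minimal sketch of this pipeline using scikit-learn, with random arrays standing in for labelled hyperspectral pixels; the neighbourhood size, embedding dimension and mixture order are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for hyperspectral pixels: n samples x b bands, with class labels.
X_train = rng.normal(size=(300, 50))
y_train = rng.integers(0, 3, size=300)

# 1. Learn the low-dimensional manifold of the training spectra.
embedder = Isomap(n_neighbors=10, n_components=3).fit(X_train)
Z_train = embedder.transform(X_train)

# 2. Fit one Gaussian mixture (via EM) per class in the embedded space.
models = {c: GaussianMixture(n_components=2, random_state=0).fit(Z_train[y_train == c])
          for c in np.unique(y_train)}

# 3. At runtime, embed new pixels and classify by maximum log-likelihood.
X_new = rng.normal(size=(10, 50))
Z_new = embedder.transform(X_new)
scores = np.column_stack([models[c].score_samples(Z_new) for c in sorted(models)])
labels = scores.argmax(axis=1)
print(labels)
```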

Relevance:

60.00%

Abstract:

Fibres are extremely common. They can originate directly from human and animal hair, and also from textiles in the form of clothing, upholstery and carpets. Hair and textile fibres are relatively easily shed and transferred, which means that it is highly likely that fibres will be found at crime scenes. If such fibres are carefully characterised they can be of immense value in the forensic environment. Vibrational spectroscopy is one of the most important methods for the characterisation of natural and synthetic fibres. The vibrational spectrum, whether mid-IR or Raman, can be considered to be a fingerprint of the molecular structure of the fibre and as such has a very high information content.
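
The "fingerprint" use of a vibrational spectrum can be illustrated with a simple library search: score an unknown fibre's spectrum against reference spectra on a common wavenumber grid. The cosine-similarity measure and the synthetic Gaussian bands below are purely illustrative; this is not the characterisation method used in forensic practice.

```python
import numpy as np

def spectral_match_score(query: np.ndarray, reference: np.ndarray) -> float:
    """Cosine similarity between two mean-centred spectra on a shared axis."""
    q = query - query.mean()
    r = reference - reference.mean()
    return float(np.dot(q, r) / (np.linalg.norm(q) * np.linalg.norm(r)))

# Toy library of reference spectra (intensity on a shared wavenumber grid).
grid = np.linspace(400, 1800, 500)
library = {
    "nylon-like": np.exp(-((grid - 1640) / 30) ** 2),
    "polyester-like": np.exp(-((grid - 1715) / 25) ** 2),
}
query = np.exp(-((grid - 1712) / 27) ** 2)  # unknown fibre spectrum

best = max(library, key=lambda name: spectral_match_score(query, library[name]))
print(best)  # polyester-like
```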

Relevance:

60.00%

Abstract:

Virtual prototyping is emerging as a technology to replace existing physical prototypes for product evaluation, which are costly and time-consuming to manufacture. Virtualization technology allows engineers and ergonomists to perform virtual builds and different ergonomic analyses on a product. Digital Human Modelling (DHM) software packages such as Siemens Jack often integrate with CAD systems to provide a virtual environment that allows investigation of operator and product compatibility. Although the integration between DHM and CAD systems allows for the ergonomic analysis of anthropometric design, musculoskeletal multi-body modelling software packages such as the AnyBody Modelling System (AMS) are required to support physiologic design. They provide muscular force analysis, estimate human musculoskeletal strain and help address human comfort assessment. However, the independent characteristics of the modelling systems Jack and AMS constrain engineers and ergonomists in conducting a complete ergonomic analysis: AMS is a stand-alone programming system without the capability to integrate into CAD environments, while Jack provides CAD-integrated human-in-the-loop capability but does not consider musculoskeletal activity. Consequently, engineers and ergonomists need to perform many redundant tasks during product and process design. Moreover, the existing biomechanical model in AMS uses a simplified estimation of body proportions, based on a segment-mass-ratio-derived scaling approach, which is insufficient for anthropometrically correct representation of user populations in AMS. In addition, sub-models are derived from different sources of morphologic data and are therefore anthropometrically inconsistent. Therefore, an interface between the biomechanical AMS and the virtual human model Jack was developed to integrate a musculoskeletal simulation with Jack posture modelling. This interface provides direct data exchange between the two human models, based on a consistent data structure and a common body model. The study assesses kinematic and biomechanical model characteristics of Jack and AMS, and defines an appropriate biomechanical model. The information content for interfacing the two systems is defined and a protocol is identified. The interface program is developed and implemented in Tcl and Jack-script (Python), and interacts with the AMS console application to operate AMS procedures.
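
The data-exchange pattern described here (a script exporting posture data and driving a console application in batch mode) might look like the following sketch. The file format, command-line arguments and function names are all hypothetical; they are not the actual Jack or AMS interfaces.

```python
import json
import subprocess
from pathlib import Path

def export_posture(joint_angles: dict, path: Path) -> None:
    """Write joint angles to a neutral interchange file (hypothetical format)."""
    path.write_text(json.dumps({"joint_angles_deg": joint_angles}, indent=2))

def run_console_analysis(console_exe: str, script: Path, posture_file: Path) -> str:
    """Drive a console application in batch mode; the flags here are
    illustrative, not the real AMS command line."""
    result = subprocess.run([console_exe, str(script), str(posture_file)],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Usage sketch: push one posture from the posture model to the simulation.
export_posture({"elbow_r": 85.0, "shoulder_r": 30.0}, Path("posture.json"))
# run_console_analysis("AnyBodyCon", Path("analysis.any"), Path("posture.json"))
```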

Relevance:

60.00%

Abstract:

Appearance-based loop closure techniques, which leverage the high information content of visual images and can be used independently of pose, are now widely used in robotic applications. The current state of the art in the field is Fast Appearance-Based Mapping (FAB-MAP), which has been demonstrated in several seminal robotic mapping experiments. In this paper, we describe OpenFABMAP, a fully open-source implementation of the original FAB-MAP algorithm. Beyond the benefits of full user access to the source code, OpenFABMAP provides a number of configurable options, including rapid codebook training and interest point feature tuning. We demonstrate the performance of OpenFABMAP on a number of published datasets and show the advantages of quick algorithm customisation. We present results from OpenFABMAP’s application in a highly varied range of robotics research scenarios.
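
For intuition about the appearance-based idea, here is a bare-bones bag-of-visual-words similarity check between two frames. This is not FAB-MAP's actual probabilistic model, which scores observations with a Chow-Liu tree approximation over a learned codebook; the codebook and descriptors below are random stand-ins.

```python
import numpy as np

def bow_histogram(descriptors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Quantise local feature descriptors against a visual codebook."""
    # Assign each descriptor to its nearest visual word (Euclidean distance).
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

def appearance_similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    """Histogram intersection; a loop closure is hypothesised when this is high."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(1)
codebook = rng.normal(size=(100, 32))          # stand-in for a trained codebook
frame_a = bow_histogram(rng.normal(size=(200, 32)), codebook)
frame_b = bow_histogram(rng.normal(size=(200, 32)), codebook)
print(appearance_similarity(frame_a, frame_b))
```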

Relevance:

60.00%

Abstract:

The redclaw crayfish Cherax quadricarinatus (von Martens) accounts for the entire commercial production of freshwater crayfish in Australia. Two forms have been recognized: an 'Eastern' form in northern Queensland and a 'Western' form in the Northern Territory and far northern Western Australia. To date, only the Eastern form has been exported overseas for culture (including to China). The genetic structure of three Chinese redclaw crayfish culture lines from three different geographical locations in China (Xiamen in Fujian Province, Guangzhou in Guangdong Province and Chongming in Shanghai) was investigated for levels and patterns of genetic diversity using microsatellite markers. Twenty-eight SSR markers were isolated and used to analyse genetic diversity in the three culture lines, with the aim of improving current understanding of the molecular genetic characteristics of imported strains of redclaw crayfish reared in China. Microsatellite analysis revealed moderate allelic and high gene diversity in all three culture lines. Polymorphism information content estimates for polymorphic loci varied between 0.1168 and 0.8040, while pairwise FST values among culture lines were moderate (0.0020-0.1244). The highest estimate of divergence was evident between the Xiamen and Guangzhou populations.
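
Polymorphism information content (PIC) for a locus is conventionally computed from allele frequencies with the Botstein et al. (1980) formula. A small sketch with made-up frequencies; the values reported in the abstract came from real genotype data.

```python
from itertools import combinations

def pic(allele_freqs) -> float:
    """Polymorphism information content (Botstein et al. 1980):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    homo = sum(p * p for p in allele_freqs)
    cross = sum(2 * (pi ** 2) * (pj ** 2)
                for pi, pj in combinations(allele_freqs, 2))
    return 1.0 - homo - cross

# A locus with four equally frequent alleles is highly informative;
# a nearly fixed locus is not.
print(round(pic([0.25] * 4), 4))    # 0.7031
print(round(pic([0.95, 0.05]), 4))  # 0.0905
```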

Relevance:

60.00%

Abstract:

Expert knowledge is used widely in the science and practice of conservation because of the complexity of problems, relative lack of data, and the imminent nature of many conservation decisions. Expert knowledge is substantive information on a particular topic that is not widely known by others. An expert is someone who holds this knowledge and who is often deferred to in its interpretation. We refer to predictions by experts of what may happen in a particular context as expert judgments. In general, an expert-elicitation approach consists of five steps: deciding how information will be used, determining what to elicit, designing the elicitation process, performing the elicitation, and translating the elicited information into quantitative statements that can be used in a model or directly to make decisions. This last step is known as encoding. Some of the considerations in eliciting expert knowledge include determining how to work with multiple experts and how to combine multiple judgments, minimizing bias in the elicited information, and verifying the accuracy of expert information. We highlight structured elicitation techniques that, if adopted, will improve the accuracy and information content of expert judgment and ensure uncertainty is captured accurately. We suggest four aspects of an expert elicitation exercise be examined to determine its comprehensiveness and effectiveness: study design and context, elicitation design, elicitation method, and elicitation output. Just as the reliability of empirical data depends on the rigor with which it was acquired, so too does that of expert knowledge.
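
The encoding step can be made concrete: one standard trick is to fit a parametric distribution to an expert's elicited quantiles so the judgment can enter a model directly. The sketch below fits a beta distribution to three hypothetical quartile judgments about a probability; the numbers and the least-squares fitting choice are illustrative, not a method prescribed by the article.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

# Encoding: turn an expert's elicited quartiles for a probability
# (e.g. "chance a species persists 50 years") into a beta distribution.
elicited_q = {0.25: 0.55, 0.50: 0.70, 0.75: 0.80}  # hypothetical judgments

def quantile_loss(params):
    a, b = np.exp(params)  # keep shape parameters positive
    return sum((beta.ppf(p, a, b) - v) ** 2 for p, v in elicited_q.items())

fit = minimize(quantile_loss, x0=np.log([2.0, 2.0]), method="Nelder-Mead")
a, b = np.exp(fit.x)
print(f"encoded Beta(a={a:.2f}, b={b:.2f}), mean={a / (a + b):.2f}")
```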

Relevance:

60.00%

Abstract:

The price formation of financial assets is a complex process. It extends beyond the standard economic paradigm of supply and demand to the understanding of the dynamic behavior of price variability, the price impact of information, and the implications of the trading behavior of market participants on prices. In this thesis, I study aggregate market and individual asset volatility, liquidity dimensions, and causes of mispricing for US equities over a recent sample period. How are volatility forecasts modeled? What determines intradaily jumps and changes in intradaily volatility, and what drives the premium of traded equity indexes? Are they induced, for example, by the information content of lagged volatility and return parameters, or by macroeconomic news, changes in liquidity and volatility? Besides satisfying our intellectual curiosity, answers to these questions are of direct importance to investors developing trading strategies, policy makers evaluating macroeconomic policies, and arbitrageurs exploiting mispricing in exchange-traded funds. Results show that the leverage effect and lagged absolute returns improve forecasts of the continuous components of daily realized volatility as well as jumps. Implied volatility does not subsume the information content of lagged returns in forecasting realized volatility and its components. The reported results are linked to the heterogeneous market hypothesis and demonstrate the validity of extending the hypothesis to returns. Depth shocks, signed order flow, the number of trades, and resiliency are the most important determinants of intradaily volatility. In contrast, spread shocks and resiliency are predictive of signed intradaily jumps. Fewer macroeconomic news announcement surprises cause extreme price movements or jumps than elevate intradaily volatility. Finally, the premium of exchange-traded funds is significantly associated with momentum in net asset value and a number of liquidity parameters, including the spread, traded volume, and illiquidity. The mispricing of industry exchange-traded funds suggests that limits to arbitrage are driven by potential illiquidity.
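
The heterogeneous market hypothesis mentioned here is commonly operationalised with a HAR-RV model (Corsi-style), which regresses next-day realized volatility on daily, weekly and monthly RV averages. A minimal sketch on synthetic data; the model choice is a standard illustration, not necessarily the exact specification used in the thesis.

```python
import numpy as np

def har_design(rv: np.ndarray):
    """Build HAR-RV regressors: daily RV plus weekly (5-day) and monthly
    (22-day) averages, each aligned to predict the next day's RV."""
    idx = np.arange(21, len(rv) - 1)            # days with a full 22-day history
    d = rv[idx]
    w = np.array([rv[i - 4:i + 1].mean() for i in idx])
    m = np.array([rv[i - 21:i + 1].mean() for i in idx])
    X = np.column_stack([np.ones_like(d), d, w, m])
    y = rv[idx + 1]                             # next-day realized volatility
    return X, y

rng = np.random.default_rng(2)
rv = rng.gamma(shape=2.0, scale=0.5e-4, size=500)  # synthetic daily RV series

X, y = har_design(rv)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit of the HAR model
print("coefficients (const, daily, weekly, monthly):", beta_hat.round(6))
print("one-step-ahead forecast:", float(X[-1] @ beta_hat))
```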

Relevance:

60.00%

Abstract:

Copyright, it is commonly said, matters in society because it encourages the production of socially beneficial, culturally significant expressive content. Our focus on copyright's recent history, however, blinds us to the social information practices that have always existed. In this Article, we examine these social information practices, and query copyright's role within them. We posit a functional model of what is necessary for creative content to move from creator to user. These are the functions dealing with the creation, selection, production, dissemination, promotion, sale, and use of expressive content. We demonstrate how centralized commercial control of information content has been the driving force behind copyright's expansion. All of the functions that copyright industries once controlled, however, are undergoing revolutionary decentralization and disintermediation. Different aspects of information technology, notably the digitization of information, widespread computer ownership, the rise of the Internet, and the development of social software, threaten the viability and desirability of centralized control over every one of the content functions. These functions are increasingly being performed by individuals and disaggregated groups. This raises an issue for copyright as the main regulatory force in information practices: copyright assumes a central control requirement that no longer applies for the development of expressive content. We examine the normative implications of this shift for our information policy in this new post-copyright era. Most notably, we conclude that copyright law needs to be adjusted in order to recognize the opportunity and desirability of decentralized content, and the expanded marketplace of ideas it promises.

Relevance:

60.00%

Abstract:

We present a novel approach to video summarisation that makes use of a Bag-of-visual-Textures (BoT) approach. Two systems are proposed: one based solely on the BoT approach, and another which exploits both colour information and BoT features. On 50 short-term videos from the Open Video Project we show that our BoT and fusion systems both achieve state-of-the-art performance, obtaining average F-measures of 0.83 and 0.86 respectively, relative improvements of 9% and 13% over the previous state-of-the-art. When applied to a new underwater surveillance dataset containing 33 long-term videos, the proposed system reduces the amount of footage by a factor of 27, with only minor degradation in the information content. This order-of-magnitude reduction in video data represents significant savings in terms of time and potential labour cost when manually reviewing such footage.
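
A generic keyframe-selection sketch in the same spirit: cluster per-frame feature histograms and keep the frame closest to each cluster centre. This is a common baseline, not the paper's exact BoT pipeline, and the feature array is a random stand-in.

```python
import numpy as np
from sklearn.cluster import KMeans

def summarise(frame_features: np.ndarray, n_keyframes: int) -> list:
    """Pick keyframes by clustering per-frame descriptors (e.g. texture or
    colour histograms) and keeping the frame nearest each cluster centre."""
    km = KMeans(n_clusters=n_keyframes, n_init=10, random_state=0).fit(frame_features)
    keyframes = []
    for c in range(n_keyframes):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(frame_features[members] - km.cluster_centers_[c], axis=1)
        keyframes.append(int(members[dists.argmin()]))
    return sorted(keyframes)

rng = np.random.default_rng(3)
features = rng.random((400, 64))  # stand-in for BoT histograms of 400 frames
print(summarise(features, n_keyframes=6))
```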

Relevance:

60.00%

Abstract:

In studies using macroinvertebrates as indicators for monitoring rivers and streams, species-level identifications, in comparison with lower-resolution identifications, can have greater information content and result in more reliable site classifications and better capacity to discriminate between sites, yet many such programmes identify specimens to the resolution of family rather than species. This is often because it is cheaper to obtain family-level data than species-level data. Choice of appropriate taxonomic resolution is a compromise between the cost of obtaining data at high taxonomic resolutions and the loss of information at lower resolutions. Optimum taxonomic resolution should be determined by the information required to address programme objectives. Costs saved in identifying macroinvertebrates to family level may not be justified if family-level data cannot give the answers required, and expending the extra cost to obtain species-level data may not be warranted if cheaper family-level data retains sufficient information to meet objectives. We investigated the influence of taxonomic resolution and sample quantification (abundance vs. presence/absence) on the representation of aquatic macroinvertebrate species assemblage patterns and species richness estimates. The study was conducted in a physically harsh dryland river system (the Condamine-Balonne River system, located in south-western Queensland, Australia), characterised by low macroinvertebrate diversity. Our 29 study sites covered a wide geographic range and a diversity of lotic conditions, and this was reflected by differences between sites in macroinvertebrate assemblage composition and richness. The usefulness of expending the extra cost necessary to identify macroinvertebrates to species was quantified via the benefits this higher-resolution data offered in its capacity to discriminate between sites and give accurate estimates of site species richness. We found that very little information (<6%) was lost by identifying taxa to family (or genus), as opposed to species, and that quantifying the abundance of taxa provided greater resolution for pattern interpretation than simply noting their presence/absence. Species richness was very well represented by genus, family and order richness, so that each of these could be used as surrogates of species richness if, for example, surveying to identify diversity hot-spots. It is suggested that sharing of common ecological responses among species within higher taxonomic units is the most plausible mechanism for the results. Based on a cost/benefit analysis, family-level abundance data is recommended as the best resolution for resolving patterns in macroinvertebrate assemblages in this system. The relevance of these findings is discussed in the context of other low-diversity, harsh, dryland river systems.
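
The information-loss question can be posed numerically: aggregate a site-by-species abundance matrix to family level and ask how well the between-site dissimilarity pattern is preserved. A rough sketch with synthetic counts; a formal analysis would use a permutation-based Mantel test rather than a plain rank correlation.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def aggregate(abund: np.ndarray, groups: np.ndarray) -> np.ndarray:
    """Sum species columns into coarser taxa (e.g. families)."""
    return np.column_stack([abund[:, groups == g].sum(axis=1)
                            for g in np.unique(groups)])

rng = np.random.default_rng(4)
species = rng.poisson(3.0, size=(29, 40))   # 29 sites x 40 species counts
families = rng.integers(0, 12, size=40)     # hypothetical family membership

d_species = pdist(species, metric="braycurtis")
d_family = pdist(aggregate(species, families), metric="braycurtis")
rho, _ = spearmanr(d_species, d_family)
print(f"pattern concordance (Spearman rho): {rho:.3f}")
```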

Relevance:

60.00%

Abstract:

Texture information in the iris image is not uniform in discriminatory information content for biometric identity verification. The bits in an iris code obtained from the image differ in their consistency from one sample to another for the same identity. In this work, errors in bit strings are systematically analysed in order to investigate the effect of light-induced and drug-induced pupil dilation and constriction on the consistency of iris texture information. The statistics of bit errors are computed for client and impostor distributions as functions of radius and angle. Under normal conditions, a V-shaped radial trend of decreasing bit errors towards the central region of the iris is obtained for client matching, and it is observed that the distribution of errors as a function of angle is uniform. When iris images are affected by pupil dilation or constriction the radial distribution of bit errors is altered. A decreasing trend from the pupil outwards is observed for constriction, whereas a more uniform trend is observed for dilation. The main increase in bit errors occurs closer to the pupil in both cases.
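
To make the radial and angular error statistics concrete, this sketch builds two synthetic iris codes for the same identity, flips bits with a radius-dependent probability shaped like the V-shaped trend described, and aggregates the XOR error map by radius and by angle. Dimensions and probabilities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_radii, n_angles = 16, 256

# Two synthetic codes for the same eye: the second flips bits with a
# V-shaped, radius-dependent probability (high near pupil and sclera,
# low mid-iris), mimicking the client error trend described above.
code_a = rng.integers(0, 2, size=(n_radii, n_angles))
flip_prob = 0.10 + 0.20 * np.abs(np.linspace(-1.0, 1.0, n_radii))[:, None]
flips = (rng.random((n_radii, n_angles)) < flip_prob).astype(code_a.dtype)
code_b = np.bitwise_xor(code_a, flips)

errors = np.bitwise_xor(code_a, code_b)   # bit-error map (radius x angle)
radial_profile = errors.mean(axis=1)      # error rate per radial band
angular_profile = errors.mean(axis=0)     # error rate per angular bin
print(radial_profile.round(2))            # V-shaped: lowest mid-iris
```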

Relevance:

60.00%

Abstract:

This paper conceptualizes a framework for bridging the BIM-specifications divide by embedding project-specific information in BIM objects by means of a product library. We demonstrate how model information, enriched with data at various levels of development (LODs), can evolve simultaneously with design and construction, using a window object embedded in a wall as life-cycle phase exemplars at different levels of granularity. The conceptual approach is informed by the need to explore an approach that takes cognizance of the limitations of current modelling tools in enhancing the information content of BIM models. Therefore, this work attempts to answer the question, “How can the modelling of building information be enhanced throughout the life-cycle phases of buildings utilizing building specification information?”

Relevance:

60.00%

Abstract:

Non-rigid image registration is an essential tool for overcoming the inherent local anatomical variations that exist between images acquired from different individuals or atlases. Furthermore, certain applications require this type of registration to operate across images acquired from different imaging modalities. One popular local approach for estimating this registration is a block matching procedure utilising the mutual information criterion. However, previous block matching procedures generate a sparse deformation field containing displacement estimates at uniformly spaced locations, neglecting the evidence that block matching results depend on the amount of local information content. This paper addresses this drawback by proposing the use of a Reversible Jump Markov Chain Monte Carlo statistical procedure to optimally select grid points of interest. Three different methods are then compared to propagate the estimated sparse deformation field to the entire image: a thin-plate spline warp, Gaussian convolution, and a hybrid fluid technique. Results show that non-rigid registration can be improved by using the proposed algorithm to optimally select grid points of interest.
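
The mutual information matching criterion at the core of the block matching step can be sketched directly from a joint intensity histogram. Below, a block from a synthetic "fixed" image is searched against horizontal displacements of a shifted "moving" image; the block size, bin count and search range are arbitrary choices for illustration.

```python
import numpy as np

def mutual_information(block_a: np.ndarray, block_b: np.ndarray, bins: int = 32) -> float:
    """Mutual information of two image blocks from their joint intensity
    histogram, the matching criterion used in block matching registration."""
    joint, _, _ = np.histogram2d(block_a.ravel(), block_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0); where pxy > 0, px and py are > 0 too
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(6)
fixed = rng.random((64, 64))
moving = np.roll(fixed, shift=3, axis=1) + 0.05 * rng.random((64, 64))

# Search a small window of displacements for the best MI match of one block.
block = fixed[16:32, 16:32]
best = max(((dx, mutual_information(block, moving[16:32, 16 + dx:32 + dx]))
            for dx in range(-5, 6)), key=lambda t: t[1])
print(best)  # the 3-pixel shift should score highest
```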