951 results for Digital Human Modelling (DHM)
Abstract:
Hydrographers have traditionally referred to the nearshore area as the "white ribbon" area due to the challenges associated with collecting elevation data in this highly dynamic transitional zone between terrestrial and marine environments. Accordingly, available information in this zone is typically characterised by a range of datasets from disparate sources. In this paper we propose a framework to 'fill' the white ribbon area of a coral reef system by integrating multiple elevation and bathymetric datasets, acquired by a suite of remote-sensing technologies, into a seamless digital elevation model (DEM). A range of datasets are integrated, including field-collected GPS elevation points, terrestrial and bathymetric LiDAR, single- and multibeam bathymetry, nautical chart depths and empirically derived bathymetry estimates from optical remote-sensing imagery. The proposed framework ranks data reliability internally, thereby avoiding the requirement to quantify absolute error, and results in a high-resolution, seamless product. Nested within this approach is an effective, spatially explicit technique for improving the accuracy of bathymetry estimates derived empirically from optical satellite imagery by modelling the spatial structure of the residuals. The approach was applied to data collected on and around Lizard Island in northern Australia. Collectively, the framework holds promise for filling the white ribbon zone in coastal areas characterised by similar data-availability scenarios. The seamless DEM is referenced to the MGA Zone 55 - GDA 1994 horizontal coordinate system and the mean sea level (MSL) vertical datum, and has a spatial resolution of 20 m.
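A minimal sketch of the reliability-ranking idea described above, assuming the input rasters are already co-registered on the common 20 m grid; the layer ordering and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

def merge_by_reliability(layers):
    """Fuse co-registered elevation rasters into one seamless DEM.

    `layers` is a list of 2-D arrays ordered from most to least
    reliable (e.g. LiDAR first, chart depths last); NaN marks no-data.
    Each cell takes its value from the most reliable layer that covers
    it, so absolute error never has to be quantified.
    """
    dem = np.full(layers[0].shape, np.nan)
    for layer in layers:
        gaps = np.isnan(dem)        # cells still unfilled
        dem[gaps] = layer[gaps]     # fill from the next-best source
    return dem

# Hypothetical ordering for a Lizard-Island-like scenario:
# dem = merge_by_reliability([lidar, multibeam, singlebeam,
#                             chart_depths, satellite_bathymetry])
```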
Abstract:
The international perspectives on these issues are especially valuable in an increasingly connected, but still institutionally and administratively diverse, world. The research addressed in several chapters of this volume includes issues around technical standards bodies like EpiDoc and the TEI, engaging with the ways these standards are implemented, documented, taught, used in the process of transcribing and annotating texts, and used to generate publications and as the basis for advanced textual or corpus research. Other chapters focus on various aspects of philological research and content creation, including collaborative or community-driven efforts, and the issues surrounding editorial oversight, curation, maintenance and sustainability of these resources. Research into ancient languages and linguistics, in particular Greek, and the language teaching that is a staple of our discipline, are also discussed in several chapters, in particular the ways in which advanced research methods can lead into language technologies and vice versa, and the ways in which teaching skills can be used for public engagement, and vice versa. A common thread through much of the volume is the importance of open-access publication and open-source development and distribution of texts, materials, tools and standards, both because of the public good provided by such models (circulating materials often already paid for out of the public purse) and because of the ability to reach non-standard audiences: those who cannot access rich university libraries or afford expensive print volumes. Linked Open Data is another technology that results in wide and free distribution of structured information both within and outside academic circles, and several chapters present academic work that includes ontologies and RDF, either as a direct research output or as an essential part of the communication and knowledge representation. Several chapters focus not on the literary and philological side of classics but on the study of cultural heritage, archaeology, and the material supports on which original textual and artistic material is engraved or otherwise inscribed, addressing the capture and analysis of artefacts in both 2D and 3D, the representation of data through archaeological standards, and the importance of sharing information and expertise between the several domains, both within and outside academia, that study, record and conserve ancient objects. Almost without exception, the authors reflect on the issues of interdisciplinarity and collaboration, the relationship between their research practice and teaching and/or communication with a wider public, and the importance of the role of the academic researcher in contemporary society and in the context of cutting-edge technologies. How research is communicated in a world of instant-access blogging and 140-character micromessaging, and how our expectations of the media affect not only how we publish but how we conduct our research, are questions about which all scholars need to be aware and self-critical.
Abstract:
Cybercrime and related malicious activity in our increasingly digital world have become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with various challenges, some of the most prominent being the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. Additionally, they embody further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. This is not aided by the fact that digital forensics today still involves manual, time-consuming tasks within the processes of identifying evidence, performing evidence acquisition and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, the industry-standard tools developed are largely evidence-oriented, have limited support for evidence integration and only automate certain precursory tasks, such as indexing and text searching. In this study, efficiency, in the form of reducing the time and human labour expended, is sought in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture is designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning, is developed and tested. Finally, the proposed architecture is reviewed and enhancements are proposed in order to further automate the architecture by introducing decentralization, particularly within the storage and processing functionality. This decentralization also improves machine-to-machine communication, supporting several digital investigation processes enabled by the architecture through harnessing the properties of various peer-to-peer overlays. Remote evidence acquisition helps to improve the efficiency (time and effort involved) of digital investigations by removing the need for proximity to the evidence. Experiments show that a single-TCP-connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition and that a multi-TCP-connection paradigm is required. The automated integration, correlation and reasoning over multiple diverse evidence sources demonstrated in the experiments improves speed and reduces the human effort needed in the analysis phase by removing the need for time-consuming manual correlation.
Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication thereby enabling automation and reducing the need for manual human intervention.
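The multi-TCP-connection finding can be illustrated with a minimal sketch. The wire protocol ("offset length\n" followed by raw bytes), endpoint and chunk size below are hypothetical; this shows the general parallel byte-range pattern the experiments favour, not the LEIA implementation:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

HOST, PORT = "device.local", 9000   # hypothetical acquisition endpoint
CHUNK = 4 * 1024 * 1024             # 4 MiB per range request

def fetch_range(offset, length, out_path):
    """Pull one byte range of the evidence image over its own TCP connection."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(f"{offset} {length}\n".encode())
        remaining, buf = length, bytearray()
        while remaining:
            data = sock.recv(min(65536, remaining))
            if not data:
                raise ConnectionError("acquisition stream truncated")
            buf.extend(data)
            remaining -= len(data)
    with open(out_path, "r+b") as f:   # preallocated image file
        f.seek(offset)
        f.write(buf)

def acquire(image_size, out_path, workers=8):
    """Acquire the full image over several concurrent TCP connections."""
    with open(out_path, "wb") as f:    # preallocate the target file
        f.truncate(image_size)
    ranges = [(o, min(CHUNK, image_size - o))
              for o in range(0, image_size, CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fetch_range, o, n, out_path) for o, n in ranges]
        for fut in futures:
            fut.result()               # propagate any transfer error
```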
Abstract:
The C2 domain is one of the most frequent and widely distributed calcium-binding motifs. Its structure comprises an eight-stranded beta-sandwich, with two structural types related as if by a circular permutation. Combining sequence, structural and modelling information, we have explored, at different levels of granularity, the functional characteristics of several families of C2 domains. At the coarsest level, the similarity correlates with key structural determinants of the C2 domain fold and, at the finest level, with the domain architecture of the proteins containing them, highlighting the functional diversity between the various subfamilies. This functional diversity appears as different conserved surface patches throughout this common fold. In some cases these patches are related to substrate-binding sites, whereas in others they correspond to interfaces of presumably permanent interaction with other domains within the same polypeptide chain. For those related to substrate-binding sites, the predictions overlap with biochemical data in addition to providing some novel observations. For those acting as protein-protein interfaces, our modelling analysis suggests that slight variations between families are the result not only of complementary adaptations in the interfaces involved but also of different domain architectures. In the light of the sequence and structural genomics projects, the work presented here shows that modelling approaches, along with careful sub-typing of protein families, will be a powerful combination for broader coverage in proteomics. (C) 2003 Elsevier Ltd. All rights reserved.
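As a rough illustration of how candidate conserved patches can be scored, the sketch below computes per-column conservation (1 minus normalised Shannon entropy) over a multiple sequence alignment of a C2 subfamily; the scoring function is a generic choice, not the authors' method:

```python
import numpy as np
from collections import Counter

def column_conservation(alignment):
    """Score per-column conservation in a protein alignment.

    `alignment` is a list of equal-length sequences (gaps as '-').
    Returns 1 - normalised Shannon entropy per column, so 1.0 means a
    fully conserved position; runs of high-scoring, surface-exposed
    positions are candidates for the conserved patches described above.
    """
    scores = []
    for col in zip(*alignment):
        counts = Counter(c for c in col if c != '-')
        total = sum(counts.values())
        if total == 0:
            scores.append(0.0)          # all-gap column carries no signal
            continue
        probs = np.array([n / total for n in counts.values()])
        entropy = -(probs * np.log2(probs)).sum()
        scores.append(1.0 - entropy / np.log2(20))   # 20 amino acids
    return np.array(scores)
```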
Abstract:
A finite-difference time-domain (FDTD) thermal model has been developed to compute the temperature elevation in the Sprague Dawley rat due to electromagnetic energy deposition in high-field magnetic resonance imaging (MRI). The field strengths examined ranged from 11.75 to 23.5 T (corresponding to 1H resonances of 0.5-1 GHz), and an N-stub birdcage resonator was used both to transmit radio-frequency energy and to receive the MRI signals. With an in-plane resolution of 1.95 mm, the inhomogeneous rat phantom forms a segmented model of 12 different tissue types, each having its electrical and thermal parameters assigned. The steady-state temperature distribution was calculated using a Pennes 'bioheat' approach. The numerical algorithm used to calculate the induced temperature distribution has been successfully validated against analytical solutions in the form of simplified spherical models with the electrical and thermal properties of rat muscle. As well as assisting with the design of MRI experiments and apparatus, the numerical procedures developed in this study could help in future research and design of tumour-treating hyperthermia applicators to be used on rats in vivo.
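For orientation, the sketch below relaxes the Pennes bioheat equation, k∇²T + w_b ρ_b c_b (T_a − T) + ρ·SAR = 0, to steady state by simple pseudo-time stepping; the tissue parameters are generic muscle-like values and the periodic boundaries are a simplification, not the paper's validated solver:

```python
import numpy as np

# Generic muscle-like parameters (orders of magnitude only, not the
# paper's segmented 12-tissue values):
k = 0.5                      # thermal conductivity, W/(m K)
rho_b, c_b = 1060.0, 3600.0  # blood density (kg/m^3), heat capacity (J/(kg K))
w_b = 5e-4                   # blood perfusion rate, 1/s
T_a = 37.0                   # arterial blood temperature, degC
rho = 1040.0                 # tissue density, kg/m^3
dx = 1.95e-3                 # grid step matching the 1.95 mm resolution

def steady_state(sar, steps=20000, tau=1e-7):
    """Relax k*lap(T) + w_b*rho_b*c_b*(T_a - T) + rho*sar = 0 to steady state.

    `sar` is the FDTD-computed power deposition (W/kg) on a 3-D grid;
    np.roll gives periodic boundaries, a simplification for brevity.
    """
    T = np.full(sar.shape, T_a)
    for _ in range(steps):
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0)
             + np.roll(T, 1, 1) + np.roll(T, -1, 1)
             + np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6.0 * T) / dx**2
        T = T + tau * (k * lap + w_b * rho_b * c_b * (T_a - T) + rho * sar)
    return T

# Example: a small hot spot in an otherwise unheated block of tissue.
sar = np.zeros((32, 32, 32))
sar[14:18, 14:18, 14:18] = 4.0   # W/kg, illustrative only
print(steady_state(sar, steps=2000).max())
```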
Abstract:
'Free will' and its corollary, the concept of individual responsibility, are keystones of the justice system. This paper shows that, if we accept a physics that disallows time reversal, the concept of 'free will' is undermined by an integrated understanding of the influence of genetics and environment on human behavioural responses. The analysis is undertaken by modelling life as a novel statistico-deterministic version of a Turing machine, i.e. as a series of transitions between states at successive instants of time. Using this model, it is proven by induction that the entire course of life is independent of the action of free will. Although determined by the prior state, the probability of transition between states in response to a standard environmental stimulus is not equal to 1, and the transitions may differ quantitatively at the molecular level and qualitatively at the level of the whole organism. Transitions between states correspond to behaviours. It is shown that the behaviour of identical twins (or clones), although determined, would be incompletely predictable and non-identical, creating an illusion of the operation of 'free will'. 'Free will' is a convenient construct for current judicial systems and social control because it allows rationalization of punishment for those whose behaviour falls outside socially defined norms. Indeed, it is conceivable that the maintenance of ideas of free will has co-evolved with community morality to reinforce its operation. If the concept of free will is to be maintained, it would require revision of our current physical theories.
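A toy version of the statistico-deterministic machine makes the twin argument concrete; the states, stimuli and probabilities below are illustrative only, not the paper's formalism:

```python
import random

# Shared transition table (the "genetics"): maps (state, stimulus) to a
# probability distribution over next states.
TRANSITIONS = {
    ("calm", "threat"):   [("flee", 0.7), ("freeze", 0.3)],
    ("flee", "threat"):   [("flee", 0.8), ("calm", 0.2)],
    ("freeze", "threat"): [("flee", 0.5), ("freeze", 0.5)],
    ("calm", "food"):     [("calm", 1.0)],
    ("flee", "food"):     [("calm", 0.6), ("flee", 0.4)],
    ("freeze", "food"):   [("calm", 0.9), ("freeze", 0.1)],
}

def live(stimuli, rng, state="calm"):
    """One 'life': a sequence of determined-but-probabilistic transitions."""
    history = [state]
    for stimulus in stimuli:
        options, weights = zip(*TRANSITIONS[(state, stimulus)])
        state = rng.choices(options, weights=weights)[0]
        history.append(state)
    return history

# Identical twins: same table, same environment, different outcomes.
environment = ["threat", "food", "threat", "food"] * 3
twin_a = live(environment, random.Random(1))
twin_b = live(environment, random.Random(2))
print(twin_a == twin_b)   # typically False: an illusion of 'free will'
```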
Abstract:
Purple acid phosphatases are a family of binuclear metallohydrolases that have been identified in plants, animals and fungi. Only one isoform, of ~35 kDa, has been isolated from animals, where it is associated with bone resorption through its phosphatase activity and with microbial killing through hydroxyl radical production. Using the sensitive PSI-BLAST search method, sequences representing new purple acid phosphatase-like proteins have been identified in mammals, insects and nematodes. These new putative isoforms are closely related to the ~55 kDa purple acid phosphatase characterized from plants. Secondary structure prediction for the new human isoform further confirms its similarity to a purple acid phosphatase from the red kidney bean. A structural model of the human enzyme was constructed based on the red kidney bean purple acid phosphatase structure. This model shows that the catalytic centre observed in other purple acid phosphatases is also present in this new isoform. These observations suggest that the sequences identified in this study represent a novel subfamily of plant-like purple acid phosphatases in animals and humans. (c) 2006 Elsevier B.V. All rights reserved.
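For readers wanting to reproduce this kind of iterative profile search, a minimal sketch using the NCBI BLAST+ psiblast binary follows; the query file, database and parameter values are illustrative, not the study's settings:

```python
import subprocess

# Iterative PSI-BLAST search for purple acid phosphatase-like sequences.
# "candidate_pap.fasta" is a hypothetical query file; "nr" is the NCBI
# non-redundant protein database, assumed to be installed locally.
subprocess.run(
    [
        "psiblast",
        "-query", "candidate_pap.fasta",
        "-db", "nr",
        "-num_iterations", "3",     # iterate until the profile converges
        "-evalue", "0.005",         # inclusion threshold, illustrative
        "-outfmt", "6",             # tabular hits for downstream parsing
        "-out", "pap_hits.tsv",
    ],
    check=True,
)
```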
Abstract:
In biologically mega-diverse countries that are undergoing rapid human landscape transformation, it is important to understand and model the patterns of land cover change. This problem is particularly acute in Colombia, where lowland forests are being rapidly cleared for cropping and ranching. We apply a conceptual model with a nested set of a priori predictions to analyse the spatial and temporal patterns of land cover change for six 50-100 km² case study areas in lowland ecosystems of Colombia. Our analysis included soil fertility, a cost-distance function, and the neighbourhood of forest and secondary vegetation cover as independent variables. Deforestation and forest regrowth are tested using logistic regression analysis and an information criterion approach to rank the models and predictor variables. The results show that: (a) overall, the process of deforestation is better predicted by the full model containing all variables, while for regrowth the model containing only the auto-correlated neighbourhood terms is a better predictor; (b) overall, consistent patterns emerge, although there are variations across regions and time; and (c) during the transformation process, both the order of importance and the significance of the drivers change. Forest cover follows a consistent logistic decline pattern across regions, with introduced pastures being the major replacement land cover type. Forest stabilizes at 2-10% of the original cover, with an average patch size of 15.4 (±9.2) ha. We discuss the implications of the observed patterns and rates of land cover change for conservation planning in countries with high rates of deforestation. (c) 2005 Elsevier Ltd. All rights reserved.
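A minimal sketch of the model-ranking step, fitting nested logistic models and comparing them by AIC; statsmodels is an assumed choice, the variable names mirror the abstract, and the data are randomly generated for illustration only:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-in for one case-study area (illustrative only).
df = pd.DataFrame({
    "soil_fertility": rng.normal(size=n),
    "cost_distance": rng.normal(size=n),
    "forest_neighbourhood": rng.normal(size=n),
    "secondary_veg_neighbourhood": rng.normal(size=n),
})
logit = 0.8 * df["forest_neighbourhood"] - 0.5 * df["cost_distance"]
df["deforested"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def fit(predictors):
    """Fit a logistic regression of deforestation on the given predictors."""
    X = sm.add_constant(df[predictors])
    return sm.Logit(df["deforested"], X).fit(disp=0)

full = fit(["soil_fertility", "cost_distance",
            "forest_neighbourhood", "secondary_veg_neighbourhood"])
neigh = fit(["forest_neighbourhood", "secondary_veg_neighbourhood"])

# Lower AIC indicates better support, mirroring the ranking approach above.
print("full model AIC:", round(full.aic, 1))
print("neighbourhood-only AIC:", round(neigh.aic, 1))
```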