960 results for Computational studies
Abstract:
Current complication rates for adolescent spinal deformity surgery are unacceptably high, and improving patient outcomes requires a simulation tool that allows the surgical strategy for an individual patient to be optimized. In this chapter we present our work to date in developing and validating patient-specific modeling techniques to simulate and predict patient outcomes for surgery to correct adolescent scoliosis deformity. While these simulation tools are currently being developed for adolescent idiopathic scoliosis patients, they will have broader application in simulating spinal disorders and optimizing surgical planning for other types of spine surgery. Our studies to date have highlighted the need not only for patient-specific anatomical data, but also for patient-specific tissue parameters and biomechanical loading data, in order to accurately predict the physiological behaviour of the spine. Even so, patient-specific computational models are the state of the art in computational biomechanics and offer considerable potential as a pre-operative surgical planning tool.
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capacity to display and manipulate information in natural and intuitive ways, such environments have found extensive application in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content falls entirely on developers and play-testers. While considerable research has been conducted in assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry changes of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used this mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D game and virtual world studios that require a scalable solution for testing their virtual world software and digital content.
Abstract:
Twin studies offer the opportunity to determine the relative contributions of genes and environment to traits of interest. Here, we investigate the extent to which variance in brain structure is reduced in monozygotic (MZ) twins with identical genetic make-up. We investigate whether using twins, as compared to a control population, reduces variability in a number of common magnetic resonance (MR) structural measures, and we investigate the location of areas under major genetic influence. This is fundamental to understanding the benefit of using twins in studies where structure is the phenotype of interest. Twenty-three pairs of healthy MZ twins were compared to matched control pairs. Volume, T2 and diffusion MR imaging were performed, as well as MR spectroscopy (MRS). Images were compared using (i) global measures of standard deviation and effect size, (ii) voxel-based analysis of similarity and (iii) intra-pair correlation. Global measures indicated a consistent increase in structural similarity in twins. The voxel-based and correlation analyses indicated a widespread pattern of increased similarity in twin pairs, particularly in frontal and temporal regions. The areas of increased similarity were most widespread for the diffusion trace and least widespread for T2. MRS showed a consistent reduction in metabolite variation that was significant for temporal lobe N-acetylaspartate (NAA). This study has shown the distribution and magnitude of reduced variability in brain volume, diffusion, T2 and metabolites in twins. The data suggest that evaluation of twins discordant for disease is indeed a valid way to attribute genetic or environmental influences to observed abnormalities in patients, since evidence is provided for the underlying assumption of decreased variability in twins.
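As a concrete illustration of analysis (iii), one standard way to quantify intra-pair similarity for a single measure is the one-way intraclass correlation, computed separately for twin pairs and matched control pairs. The sketch below is a generic implementation under that assumption; it is not taken from the study's own analysis pipeline, and the function name is illustrative.

```python
import numpy as np

def intraclass_correlation(pairs):
    """One-way random-effects intraclass correlation for paired measurements.

    `pairs` is an (n_pairs x 2) array; the result is the proportion of total
    variance attributable to differences between pairs rather than within them,
    so higher values indicate greater within-pair similarity."""
    pairs = np.asarray(pairs, dtype=float)
    n, k = pairs.shape                                   # k == 2 members per pair
    grand = pairs.mean()
    pair_means = pairs.mean(axis=1)
    ms_between = k * ((pair_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((pairs - pair_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical usage: compare within-pair similarity of a regional measure
# (e.g. a volume) for twin pairs versus matched control pairs.
twin_pairs = np.array([[1.02, 1.01], [0.95, 0.97], [1.10, 1.08], [0.99, 1.00]])
control_pairs = np.array([[1.02, 0.90], [0.95, 1.12], [1.10, 0.98], [0.99, 1.07]])
print(intraclass_correlation(twin_pairs), intraclass_correlation(control_pairs))
```

Comparing the two coefficients per measure gives a simple index of the increased within-pair similarity described above.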
Abstract:
We report on an analysis of discussions in an online community of people with chronic illness using socio-cognitively motivated, automatically produced semantic spaces. The analysis aims to further the emerging theory of "transition" (how people can learn to incorporate the consequences of illness into their lives). An automatically derived representation of the sense of self of individuals is created in the semantic space by analysing the email utterances of the community members. The movement over time of the sense of self is visualised, via projection, with respect to axes of "ordinariness" and "extra-ordinariness". Qualitative evaluation shows that the visualisation parallels the transitions of people during the course of their illness. The research aims to advance tools for the analysis of textual data in order to promote greater use of the tacit knowledge found in online virtual communities. We hope it also encourages further interest in the representation of the sense of self.
Abstract:
While there is strong interest in teaching values in Australia and internationally, there is little focus on young children's moral values learning in the classroom. Research shows that personal epistemology influences teaching and learning in a range of education contexts, including moral education. This study examines relationships between personal epistemologies (children's and teachers'), pedagogies, and school contexts for moral learning in two early years classrooms. Interviews with teachers and children and analysis of school policy revealed clear patterns of personal epistemologies and pedagogies within each school. A whole-school approach to understanding personal epistemologies and practice for moral values learning is suggested.
Abstract:
The use of vibrational spectroscopic techniques to characterise historical artefacts and art works continues to grow and to provide the archaeologist and art historian with significant information with which to understand the nature and activities of previous peoples and civilizations. In addition, conservators can gain knowledge of the composition of artworks or historical objects and so are better equipped to ensure their preservation. Both infrared and Raman spectroscopy have been widely used. Microspectroscopy is the preferred sampling technique as it requires only a very small sample, which can often be recovered. The use of synchrotron radiation in conjunction with IR microspectroscopy is increasing because of the substantial benefits in terms of improved spatial resolution and signal-to-noise ratio. The key trend for the future is the growth in the use of portable instruments, both IR and Raman, which are becoming important because they allow non-destructive measurements to be made in situ, for example at an archaeological site or in a museum.
Abstract:
Discrete Markov random field models provide a natural framework for representing images or spatial datasets. They model the spatial association present while providing a convenient Markovian dependency structure and strong edge-preservation properties. However, parameter estimation for discrete Markov random field models is difficult due to the complex form of the associated normalizing constant for the likelihood function. For large lattices, the reduced dependence approximation to the normalizing constant is based on the concept of performing computationally efficient and feasible forward recursions on smaller sublattices, which are then suitably combined to estimate the constant for the whole lattice. We present an efficient computational extension of the forward recursion approach for the autologistic model to lattices that have an irregularly shaped boundary and which may contain regions with no data; such lattices are typical in applications. Consequently, we also extend the reduced dependence approximation to these scenarios, enabling us to implement a practical and efficient non-simulation-based approach for spatial data analysis within the variational Bayesian framework. The methodology is illustrated through application to simulated data and example images. The supplemental materials include our C++ source code for computing the approximate normalizing constant and simulation studies.
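The forward recursion at the heart of this approach can be made concrete in its simplest setting: an exact, column-wise (transfer-matrix style) computation of the normalizing constant for a binary autologistic model on a small regular lattice. The sketch below is a minimal illustration under those assumptions; it is not the authors' C++ implementation, and the reduced dependence approximation itself (combining recursions over sublattices and handling irregular boundaries and missing regions) is not shown. The function name and the toy parameters are hypothetical.

```python
import itertools
import numpy as np

def log_normalising_constant(n_rows, n_cols, alpha, beta):
    """Exact log normalising constant of a binary {0,1} autologistic model on an
    n_rows x n_cols lattice, p(x) proportional to exp(alpha*sum(x) + beta*sum of
    products over nearest-neighbour pairs), computed by a column-wise forward
    recursion. Each state is a full column configuration, so this is only
    feasible for small n_rows (2**n_rows states)."""
    states = list(itertools.product([0, 1], repeat=n_rows))

    def within(col):
        # external field plus vertical (within-column) interactions
        return alpha * sum(col) + beta * sum(col[i] * col[i + 1] for i in range(n_rows - 1))

    def between(prev, col):
        # horizontal interactions linking two adjacent columns
        return beta * sum(p * c for p, c in zip(prev, col))

    # forward recursion over columns, carried in log space for stability
    log_phi = np.array([within(s) for s in states])
    for _ in range(1, n_cols):
        new = np.empty(len(states))
        for j, col in enumerate(states):
            terms = log_phi + np.array([between(prev, col) for prev in states])
            m = terms.max()
            new[j] = within(col) + m + np.log(np.exp(terms - m).sum())
        log_phi = new
    m = log_phi.max()
    return m + np.log(np.exp(log_phi - m).sum())

print(log_normalising_constant(4, 6, alpha=0.0, beta=0.5))
```

The cost grows as O(n_cols x 4^n_rows), which is why the full recursion is only feasible on narrow sublattices and why combining sublattice recursions, as in the reduced dependence approximation, is needed for large or irregular lattices.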
Abstract:
Critical futures studies is not about the careers of a few scholars; rather, it is about projects that transcend the narrow boundaries of the self. This biographical monograph examines the life and work of Richard Slaughter and Sohail Inayatullah.
Abstract:
Honing and Ladinig (2008) assert that, while the internal validity of web-based studies may be reduced, this is offset by the increase in external validity made possible when experimenters can sample a wider range of participants and experimental settings. In this paper, the issue of internal validity is examined more closely, and it is argued that there is no necessary reason why the internal validity of a web-based study should be worse than that of a lab-based one. Errors of measurement or inconsistencies of manipulation will typically balance across conditions of the experiment, and thus need not threaten the validity of a study's findings.
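The balancing argument can be illustrated with a toy simulation (not from the paper): independent measurement error of varying magnitude is added to every observation in a two-condition design, and the estimated condition difference remains essentially unbiased, with only its variance increasing. All quantities below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_effect = 200, 0.5

def estimated_effect(noise_sd):
    """Difference between condition means when every observation carries
    additional random measurement error with the given standard deviation."""
    a = rng.normal(0.0, 1.0, n) + rng.normal(0.0, noise_sd, n)
    b = rng.normal(true_effect, 1.0, n) + rng.normal(0.0, noise_sd, n)
    return b.mean() - a.mean()

# Averaged over many replications the estimate stays close to the true effect
# whether the extra error is absent (lab-like) or large (web-like); the error
# falls equally on both conditions and so does not bias the comparison.
for sd in (0.0, 1.0):
    mean_est = np.mean([estimated_effect(sd) for _ in range(2000)])
    print(f"noise sd = {sd}: mean estimated effect = {mean_est:.3f}")
```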
Abstract:
This paper studies the missing covariate problem, which is often encountered in survival analysis. Three covariate imputation methods are employed in the study, and the effectiveness of each method is evaluated within the hazard prediction framework. Data from a typical engineering asset are used in the case study. Covariate values at some time steps are deliberately discarded to generate an incomplete covariate set. It is found that although the mean imputation method is simpler than the others for solving missing covariate problems, the results it produces can differ considerably from the real values of the missing covariates. This study also shows that, in general, results obtained from the regression method are more accurate than those of the mean imputation method, but at the cost of higher computational expense. The Gaussian Mixture Model (GMM) method is found to be the most effective of the three in terms of both computational efficiency and prediction accuracy.
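To make the compared methods concrete, the sketch below gives generic versions of the two simpler ones: mean imputation and regression-based imputation of a single covariate column from the remaining columns (assumed complete). It is a minimal illustration, not the paper's implementation; the GMM approach (fitting a mixture to the complete cases and imputing from conditional expectations) is not shown, and all names are hypothetical.

```python
import numpy as np

def mean_impute(X):
    """Replace each missing entry (NaN) with the mean of its column."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return X

def regression_impute(X, target_col):
    """Impute missing values in one covariate column by ordinary least squares
    regression on the remaining columns, fitted to the complete cases.
    Assumes the other columns contain no missing values."""
    X = X.copy()
    miss = np.isnan(X[:, target_col])
    others = np.delete(X, target_col, axis=1)
    A = np.column_stack([np.ones(len(X)), others])        # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A[~miss], X[~miss, target_col], rcond=None)
    X[miss, target_col] = A[miss] @ coef
    return X
```

Mean imputation ignores the relationships between covariates, which is one plausible reason its imputed values can sit far from the true ones, whereas the regression fit exploits those relationships at the extra cost of estimating the model.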
Abstract:
Most creativity in the digital world passes unnoticed by industry practices and policies, and it is not taken into account in the cultural and economic strategies of the creative industries. We should find ways to catalyze this creative production, showing how users' contributions can advance social learning and cultural and economic development. To that end, we must know what an open creative system is and how it works. Based on this diagnosis, the author argues that interdisciplinarity is urgent and that there is also a need for a science of culture. What is at stake is a strategy of integrated development with regard to emerging innovation in its complex, productive and learning aspects.
Abstract:
This fourth edition of Communication, Cultural and Media Studies: The Key Concepts is an indispensable guide to the most important terms in the field. It offers clear explanations of the key concepts, exploring their origins, what they're used for and why they provoke discussion. The author provides a multi-disciplinary explanation and assessment of the key concepts, from ‘authorship’ to ‘censorship’; ‘creative industries’ to ‘network theory’; ‘complexity’ to ‘visual culture’. The new edition of this classic text includes:
* Over 200 entries, including 50 new entries
* All entries revised, rewritten and updated
* Coverage of recent developments in the field
* Insight into interactive media and the knowledge-based economy
* A fully updated bibliography with 400 items and suggestions for further reading throughout the text
Abstract:
There has been growing interest in how to make tertiary education more global and international, not only in context but also in approach and methodology. One area of the education sector that has come under specific focus is the higher education sector's curriculum and its design. This paper addresses the process of ‘internationalising’ the curriculum through the specific example of designing a new literary unit for undergraduate students, mainly literary studies and creative writing students. The literary unit, entitled Imagining the Americas: Contemporary American Literature and Culture, has the added complexity of being a unit about national fiction. This paper explores the practical problems and obstacles encountered in setting up this unit within a framework of internationalisation. The case study examines the practicalities of implementing strategies that reflect the overall objective of creating global thinkers within a tertiary environment.
Abstract:
The authors provide a theoretically generative definition of cyberinfrastructure (CI) by drawing from existing definitions and literature in the social sciences, law, and policy studies. They propose two models of domestic and international influencers on CI emergence, development, and implementation in the early 21st century. Based on its historical emergence and computational power, they argue that cyberinfrastructure is built on, and yet distinct from, the current notion of the internet. The authors seek to answer two research questions: first, what is cyberinfrastructure? And second, what national and international influencers shape its emergence, development and implementation (in e-science) in the early 21st century? Additionally, consideration is given to the implications of the proposed definition and models, and future directions for CI research in Internet studies are suggested.