371 results for Artefact


Relevance:

20.00%

Publisher:

Abstract:

Studies continue to report ancient DNA sequences and viable microbial cells that are many millions of years old. In this paper we evaluate some of the most extravagant claims of geologically ancient DNA. We conclude that although exciting, the reports suffer from inadequate experimental setup and insufficient authentication of results. Consequently, it remains doubtful whether amplifiable DNA sequences and viable bacteria can survive over geological timescales. To enhance the credibility of future studies and assist in discarding false-positive results, we propose a rigorous set of authentication criteria for work with geologically ancient DNA.

3D models of long bones are used in a number of fields, including orthopaedic implant design. Accurate reconstruction of 3D models is of utmost importance for designing implants that achieve a good alignment between two bone fragments. For this purpose, CT scanners are typically employed to acquire accurate bone data, exposing the individual to a high dose of ionising radiation. Magnetic resonance imaging (MRI) has been shown to be a potential alternative to computed tomography (CT) for scanning volunteers for 3D reconstruction of long bones, essentially avoiding the high radiation dose of CT. In MR imaging of long bones, artefacts due to random movements of the skeletal system create challenges, as they generate inaccuracies in 3D models reconstructed from data sets containing them. One defect observed during an initial study is a lateral shift artefact in the reconstructed 3D models. This artefact is believed to result from the volunteer moving the leg between two successive scanning stages (the lower limb has to be scanned in at least five stages owing to the limited scanning length of the scanner). As this artefact introduces inaccuracies into implants designed from these models, it needs to be corrected before the models are applied to implant design. This study therefore aimed to correct the lateral shift artefact using 3D modelling techniques. The femora of five ovine hind limbs were scanned with a 3T MRI scanner using a 3D VIBE-based protocol. The scanning was conducted in two halves, while maintaining a good overlap between them; a lateral shift was generated by moving the limb several millimetres between the two scanning stages. The 3D models were reconstructed using a multi-threshold segmentation method.
The artefact was corrected by aligning the two halves using the robust iterative closest point (ICP) algorithm, with the help of the overlapping region between them. The corrected models were compared with a reference model generated by CT scanning of the same sample. The results indicate that the artefact was corrected with an average deviation of 0.32 ± 0.02 mm between the corrected model and the reference model. In comparison, the model obtained from a single MRI scan showed an average error of 0.25 ± 0.02 mm against the reference model, and an average deviation of 0.34 ± 0.04 mm was seen when models generated after the table was moved were compared with the reference models; thus, movement of the table is also a contributing factor to motion artefacts.
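The alignment idea behind ICP can be sketched with a minimal two-dimensional point-to-point version; the point sets, iteration count and the simulated shift below are illustrative assumptions, not the study's actual data or implementation:

```python
import math

def best_rigid_transform(src, dst):
    # Closed-form 2D rigid fit (rotation + translation) between paired
    # point lists, minimising the sum of squared distances.
    n = len(src)
    cxs = sum(p[0] for p in src) / n; cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n; cyd = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs, ys = xs - cxs, ys - cys
        xd, yd = xd - cxd, yd - cyd
        sxx += xs * xd + ys * yd
        sxy += xs * yd - ys * xd
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cxd - (c * cxs - s * cys)
    ty = cyd - (s * cxs + c * cys)
    return theta, tx, ty

def transform(points, theta, tx, ty):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def icp(src, dst, iters=10):
    # Iterate: pair each source point with its nearest destination point,
    # solve for the best rigid transform on those pairs, apply, repeat.
    cur = list(src)
    for _ in range(iters):
        pairs = [min(dst, key=lambda q, p=p: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
                 for p in cur]
        cur = transform(cur, *best_rigid_transform(cur, pairs))
    return cur
```

Given two overlapping point sets where one half has been shifted laterally, iterating nearest-neighbour matching and rigid fitting pulls the shifted half back onto the other, which is the role the overlap region plays in the correction described above.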

In this workshop proposal I discuss a case study physical computing environment named Talk2Me. This work was exhibited in February 2006 at The Block, Brisbane, as an interactive installation in the early stages of its development. The major artefact in this work is a light-permeable white dome, 10 metres wide by 3 metres high. Other technologies and artefacts contained within the dome make up this interactive environment. The dome artefact has heavily shaped the design process, including the types of interactions involved, the kinds of technologies employed, and the choice of other artefacts. In this workshop paper, I chart some of the iterations Talk2Me has undergone in the design process.

Frameworks such as activity theory, distributed cognition and structuration theory, amongst others, have shown that detailed study of the contextual settings where users work (or live) can help the design of interactive systems. However, these frameworks do not adequately account for the materiality (and embodiment) of those contextual settings. Within the IST-EU funded AMIDA project (Augmented Multiparty Interaction with Distance Access) we are looking into supporting meeting practices with distance access. Meetings are inherently embodied in everyday work life, and the material artefacts associated with meeting practices play a critical role in their formation. Our eventual goal is to develop a deeper understanding of the dynamic and embodied nature of meeting practices and to design technologies to support them. In this paper we introduce the notion of "artefact ecologies" as a conceptual base for understanding embodied meeting practices with distance access. An artefact ecology is a system consisting of different digital and physical artefacts, people, their work practices and values, and the notion places emphasis on the role artefacts play in embodiment, work coordination and supporting remote awareness. We conclude by laying out our plans for designing technologies to support embodied meeting practices within the AMIDA project.

The growth of APIs and Web services on the Internet, especially as larger enterprise systems are increasingly leveraged for Cloud and software-as-a-service opportunities, poses challenges for improving the efficiency of integration with these services. Compared with the fine-grained operations of contemporary interfaces, the interfaces of enterprise systems are typically larger, more complex and overloaded: single operations carry multiple data entities and parameter sets, support varying requests, and reflect versioning across different system releases. We propose a technique to support the refactoring of service interfaces by deriving business entities and their relationships. In this paper, we focus on the behavioural aspects of service interfaces, aiming to discover the sequential dependencies of operations (otherwise known as protocol extraction) based on the entities and relationships derived. Specifically, we propose heuristics based on these relationships and, in turn, derive permissible orders in which operations may be invoked. As a result, service operations can be refactored along business entity CRUD lines, with explicit behavioural protocols as part of an interface definition. This supports flexible service discovery, composition and integration. A prototypical implementation and analysis of existing Web services, including those of commercial logistics systems (Fedex), are used to validate the algorithms proposed throughout the paper.
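To illustrate what such sequencing heuristics over derived entities can look like, the sketch below checks a candidate operation order against two plausible rules: a child entity may only be created while its parent exists, and an entity may only be deleted once its children are gone. The entity names and the rule set are assumptions for the example, not the relationships the paper actually derives:

```python
# Hypothetical entity model: each entity maps to its parent (None = root).
relations = {"Shipment": None, "Package": "Shipment", "Label": "Package"}

def permissible(sequence):
    """Return True if a sequence of CRUD operations respects the
    parent-before-child-create and child-before-parent-delete heuristics."""
    created = set()
    for op in sequence:
        verb, entity = op.split("_", 1)
        if verb == "create":
            parent = relations[entity]
            if parent is not None and parent not in created:
                return False          # parent must exist first
            created.add(entity)
        elif verb in ("read", "update", "delete"):
            if entity not in created:
                return False          # entity must exist to be used
            if verb == "delete":
                # all children must already have been deleted
                if any(p == entity and c in created
                       for c, p in relations.items()):
                    return False
                created.discard(entity)
    return True
```

Enumerating or checking orders in this way yields the "permissible orders in which operations may be invoked" that an extracted behavioural protocol records alongside the refactored, entity-oriented interface.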

We present a novel framework and algorithms for the analysis of Web service interfaces to improve the efficiency of application integration in wide-spanning business networks. Our approach addresses the notorious issue of large and overloaded operational signatures, which are becoming increasingly prevalent on the Internet as services are opened up for third-party aggregation. Extending existing techniques that refactor service interfaces based on derived artefacts of applications, namely business entities, we propose heuristics for deriving relations between business entities and, in turn, the permissible orders in which operations are invoked. As a result, service operations are refactored along business entity CRUD lines, from which behavioural protocols are generated, supporting fine-grained and flexible service discovery, composition and interaction. A prototypical implementation and analysis of Web services, including those of commercial logistics systems (Fedex), are used to validate the algorithms and open up further insights into service interface synthesis.

The electroencephalogram (EEG) is a medical technology that is used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up time, and is suitably unobtrusive and mobile to allow continuous monitoring of the patient, either in clinical or domestic environments. Consequently, the EEG is the current tool of choice with which to continuously monitor the brain where temporal resolution, ease of use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade the classification performance of automated neurological event detection. This thesis therefore contributes to the further improvement of automated neurological event detection systems by identifying some of the major obstacles to deploying these EEG systems in ambulatory and clinical environments, so that EEG technologies can emerge from the laboratory towards real-world settings, where they can have a real impact on the lives of patients.
In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) head-movement artefacts in ambulatory EEG, (ii) the high numbers of false detections in state-of-the-art automated epileptiform activity detection systems, and (iii) false detections in state-of-the-art automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system to automatically detect head-movement artefacts in ambulatory EEG, using supervised machine learning classifiers to do so. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional physiological signals, in the form of gyroscope recordings, are used to detect head movements, bringing additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection that compares favourably with other state-of-the-art systems is achieved.
Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner; blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefact from the EEG. Using these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, and in doing so the performance of the underlying diagnostic technology is improved, bringing its deployment in the real-world clinical domain one step closer.
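As a toy illustration of the fusion idea (the thesis itself uses support vector machine classifiers; the energy feature, the thresholds and the equal weights below are simplifying assumptions), a late-fusion detector can combine per-window scores from the EEG and gyroscope channels:

```python
def window_energy(signal):
    # Variance of the samples in one analysis window.
    m = sum(signal) / len(signal)
    return sum((x - m) ** 2 for x in signal) / len(signal)

def detect_artefact(eeg, gyro, eeg_thresh=1.0, gyro_thresh=0.5,
                    w_eeg=0.5, w_gyro=0.5):
    # Late fusion: each modality casts a {0, 1} vote based on its own
    # energy threshold; the weighted average is thresholded at 0.5.
    s_eeg = 1.0 if window_energy(eeg) > eeg_thresh else 0.0
    s_gyro = 1.0 if window_energy(gyro) > gyro_thresh else 0.0
    return w_eeg * s_eeg + w_gyro * s_gyro >= 0.5
```

The benefit of the extra modality is that a head movement registers on the gyroscope even when its EEG signature is ambiguous, so the fused score separates artefact windows from clean ones more reliably than either channel alone.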

The artefact- and techno-centricity of research into the architecture process needs to be counterbalanced by other approaches. An increasing amount of information is collected and used in the process, resulting in challenges related to information and knowledge management, as this research evidences through interviews with practising architects. However, emerging technologies are expected to resolve many of the traditional challenges, opening up new avenues for research. This research suggests that, among them, novel techniques addressing how architects interact with project information, especially information only indirectly related to the artefacts, and tools that better address the social nature of work, notably communication between participants, become a higher priority. In fields associated with Human-Computer Interaction, generic solutions still frequently prevail, whereas specific alternative approaches appear to be particularly in demand for the dynamic and context-dependent design process. This research identifies an opportunity for a process-centric and integrative approach to architectural practice and proposes an information management and communication software application, developed for needs discovered in close collaboration with architects. Starting from the architects' challenges, an information management software application, Mneme, was designed and developed into a working prototype. It proposes the use of visualizations as an interface to provide an overview of the process, facilitate project information retrieval and access, and visualize relationships between pieces of information. Challenges with communication about visual content, such as images and 3D files, led to the development of a communication feature that allows discussions to be attached to any file format and searched from a database.
Based on the architects' testing of the prototype and on literature recognizing the subjective side of usability, this thesis argues that visualizations, even 3D visualizations, show potential as an interface for information management in the architecture process. The architects confirmed that Mneme allowed them to gain a better project overview and to locate heterogeneous content more easily, and that it provided context for the project information. The communication feature in Mneme was seen to offer considerable potential in design projects where diverse file formats are typically used. Through an empirical understanding of the challenges in the architecture process, and through testing the resulting software proposal, this thesis suggests promising directions for future research into the architecture and design process.

Molecular phylogenetics provides a tool complementary to palaeontological and geological studies by allowing both the construction of phylogenetic relationships between species and the estimation of their divergence times. However, when a phylogenetic tree is inferred, researchers focus mostly on the topology, that is, the relative branching order of the different nodes. The branch lengths of the phylogeny are often treated as by-products, nuisance parameters carrying little information; yet they constitute the primary information for molecular dating. Saturation, the presence of multiple substitutions at the same position, is an artefact that leads to a systematic underestimation of branch lengths. We therefore set out to estimate the influence of saturation and its impact on divergence-time estimation. We chose to study the mitochondrial genome of mammals, which is assumed to have a high level of saturation and is available for many species. Moreover, the phylogenetic relationships of mammals are well known, which allowed us to fix the topology, thereby controlling one of the parameters affecting branch lengths. We used two main approaches to improve the detection of multiple substitutions: (i) increasing the number of species in order to break up the longest branches of the tree, and (ii) using more or less realistic models of sequence evolution. The results showed that the underestimation of branch lengths was very large (up to a factor of 3) and that using a large number of species improves the detection of multiple substitutions far more than improving the models of sequence evolution does.
This suggests that even the most complex evolutionary models currently available (for example, the CAT+Covarion model, which accounts for heterogeneity of the substitution process across positions and of evolutionary rates over time) are still far from capturing the full complexity of biological processes. Despite the magnitude of the branch-length underestimation, its impact on dating proved relatively small, because the underestimation is more or less homothetic; this is particularly true for the evolutionary models. However, since multiple substitutions are most effectively detected by breaking branches into the shortest possible segments through the addition of species, the problem of taxonomic sampling bias arises, a bias caused by extinction over the history of life on Earth. Because this bias leads to a non-homothetic underestimation, we consider it essential to improve models of sequence evolution, and we propose that the protocol developed in this work will make it possible to evaluate their effectiveness with respect to saturation.
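The effect of saturation on branch lengths can be made concrete with the simple Jukes-Cantor model, used here purely as a worked illustration (the study itself relies on far richer models such as CAT+Covarion): for a branch with a true length of 1.5 substitutions per site, the expected proportion of observed differences is only about 0.65 per site, so the uncorrected distance understates the true length by more than a factor of two.

```python
import math

def expected_p(d):
    # Expected proportion of observed differences under Jukes-Cantor
    # for a true distance of d substitutions per site; saturates at 0.75
    # because repeated hits overwrite earlier substitutions.
    return 0.75 * (1.0 - math.exp(-4.0 * d / 3.0))

def jc69_distance(p):
    # Jukes-Cantor correction: recovers d from the observed proportion p.
    # Diverges as p approaches 0.75 (full saturation).
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

true_d = 1.5
p = expected_p(true_d)        # proportion of sites that still look different
naive = p                     # uncorrected distance = observed p-distance
corrected = jc69_distance(p)  # model-based correction recovers true_d
```

The same logic explains why adding taxa helps: breaking a long branch into shorter segments keeps each segment's p well below the saturation plateau, where the correction (under any model) is far less sensitive to model misspecification.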

The focus of the study is on the values, priorities and arguments needed to advance ‘design for sustainability’. The discussion critiques conventions related to innovation and technology and offers a product design approach that emphasises minimal intervention, integrated thinking-and-doing, reinstating the familiar, localization and the particularities of place. These interdependent themes are discussed in terms of their relationship to design for sustainability and are clarified through the conceptualization, design, production and use of a simple functional object that is, essentially, a ‘symbolic sustainable artefact’. Although it is fully functional, its practical usefulness in contemporary society would probably be seen as marginal. Its potential contribution is as a symbol of an alternative direction. It asks us to consider aspects of our humanity that are beyond instrumental approaches. It challenges sustainable initiatives that become so caught up in practical and environmental concerns that they fail to question the underlying values, priorities and social drivers which affect how we act in the world; those behaviours and norms that have created the very problems we are so urgently trying to tackle. The discussion is accompanied by a parallel series of photographs that document the relationship between argument, locale and the creation of the conceptual artefact. These photographs convey some of the qualitative differences between the place of much contemporary artefact acquisition, i.e. the shopping mall, and the particular locale that yielded the artefact created during this study; they are also useful in conveying the potential relationships that exist between place and the aesthetics of functional objects.