896 results for Human-machine systems
Abstract:
Purpose: The aim of the present paper was to determine the effect of different types of ionizing radiation on the bond strength of three different dentin adhesive systems. Materials and Methods: One hundred twenty specimens of 60 human teeth (protocol number: 032/2007) sectioned mesiodistally were divided into 3 groups according to the adhesive systems used: SB (Adper Single Bond Plus), CB (Clearfil SE Bond) and AP (Adper Prompt Self-Etch). The adhesives were applied to dentin and photo-activated using an LED (Lec 1000, MMoptics, 1000 mW/cm²). Customized elastomer molds (0.5 mm thickness) with three orifices of 1.2 mm diameter were placed onto the bonding areas and filled with composite resin (Filtek Z-250), which was photo-activated for 20 s. Each group was subdivided into 4 subgroups for application of the different types of ionizing radiation: ultraviolet radiation (UV), diagnostic x-ray radiation (DX), therapeutic x-ray radiation (TX) and no irradiation (control group, CG). Microshear tests were carried out (Instron, model 4411), and afterwards the modes of failure were evaluated by optical and scanning electron microscopy and classified using 5 scores: adhesive failure, mixed failure (three levels) and cohesive failure. The results of the shear bond strength test were submitted to ANOVA with Tukey's and Dunnett's tests, and the data from the failure pattern evaluation were analyzed with the Mann-Whitney test (p = 0.05). Results: No change in the bond strength of CB and AP was observed after application of the different radiation types; only SB showed an increase in bond strength after UV irradiation (p = 0.0267). UV irradiation also changed the failure patterns of SB (p = 0.0001). Conclusion: The radiation-induced changes did not cause degradation of the restorations, which means that they can be exposed to these types of ionizing radiation without weakening of the bond strength.
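The statistical pipeline reported above can be reproduced with standard tools; the following is a minimal sketch, with purely illustrative bond-strength values (MPa) and subgroup data, of a one-way ANOVA followed by Tukey's pairwise comparisons and a Mann-Whitney test on the ordinal failure scores:

```python
# Illustrative sketch of the reported analysis; all data are synthetic.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical shear bond strengths (MPa) for one adhesive, per subgroup
groups = {
    "CG": rng.normal(20, 3, 10),  # control, no irradiation
    "UV": rng.normal(24, 3, 10),  # ultraviolet
    "DX": rng.normal(20, 3, 10),  # diagnostic x-ray
    "TX": rng.normal(20, 3, 10),  # therapeutic x-ray
}

f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Ordinal failure-pattern scores (1-5) compared with Mann-Whitney U
u, p_mw = stats.mannwhitneyu([1, 2, 2, 3, 1], [3, 4, 4, 5, 3])
print(f"Mann-Whitney: U = {u:.1f}, p = {p_mw:.4f}")
```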
Abstract:
The purpose of this study was to comparatively evaluate the response of human pulps after cavity preparation with different devices. Deep class I cavities were prepared in sound mandibular premolars using either a high-speed air-turbine handpiece (Group 1) or an Er:YAG laser (Group 2). Following total acid etching and the application of an adhesive system, all cavities were restored with composite resin. Fifteen days after the clinical procedure, the teeth were extracted and processed for analysis under optical microscopy. In Group 1, in which the average remaining dentin thickness (RDT) between the cavity floor and the coronal pulp was 909.5 µm, a discrete inflammatory response occurred in only one specimen, which had an RDT of 214 µm. However, tissue disorganization occurred in most specimens. In Group 2 (average RDT = 935.2 µm), a discrete inflammatory pulp response was observed in only one specimen (RDT = 413 µm). It may be concluded that the high-speed air-turbine handpiece caused greater structural alterations in the pulp, although without inducing inflammatory processes.
Abstract:
Establishing metrics to assess machine translation (MT) systems automatically is now crucial owing to the widespread use of MT over the web. In this study we show that such evaluation can be done by modeling text as complex networks. Specifically, we extend our previous work by employing additional metrics of complex networks, whose results were used as input for machine learning methods and allowed MT texts of distinct qualities to be distinguished. We also show that the node-to-node mapping between source and target texts (English-Portuguese and Spanish-Portuguese pairs) can be improved by adding further hierarchical levels for the metrics out-degree, in-degree, hierarchical common degree, cluster coefficient, inter-ring degree, intra-ring degree and convergence ratio. The results presented here amount to a proof of principle that capturing a wider context through hierarchical levels can be combined with machine learning methods to yield an approach for assessing the quality of MT systems. (C) 2010 Elsevier B.V. All rights reserved.
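As a rough illustration of the approach (not the authors' exact pipeline or metric set), the sketch below models a text as a word-adjacency network with networkx, extracts a few degree-based metrics, and feeds them to a classifier; the texts and quality labels are invented:

```python
# Minimal sketch: text as a complex network, network metrics as features
# for a classifier separating good from poor translations. Data invented.
import networkx as nx
import numpy as np
from sklearn.svm import SVC

def text_to_network(text: str) -> nx.DiGraph:
    """Directed adjacency network: an edge links each word to the next."""
    words = text.lower().split()
    g = nx.DiGraph()
    g.add_edges_from(zip(words, words[1:]))
    return g

def network_features(g: nx.DiGraph) -> list[float]:
    """Global averages of out-degree, in-degree and clustering coefficient."""
    out_deg = np.mean([d for _, d in g.out_degree()])
    in_deg = np.mean([d for _, d in g.in_degree()])
    clustering = nx.average_clustering(g.to_undirected())
    return [out_deg, in_deg, clustering]

# Hypothetical training data: feature vectors labeled by translation quality
texts = ["the cat sat on the mat", "cat the on sat mat the the"]
labels = [1, 0]  # 1 = acceptable translation, 0 = poor
X = [network_features(text_to_network(t)) for t in texts]
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict([network_features(text_to_network("the dog sat on the rug"))]))
```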
Abstract:
Shwachman-Bodian-Diamond syndrome is an autosomal recessive genetic syndrome with pleiotropic phenotypes, including pancreatic deficiencies, bone marrow dysfunctions with increased risk of myelodysplasia or leukemia, and skeletal abnormalities. This syndrome has been associated with mutations in the SBDS gene, which encodes a conserved protein showing orthologs in Archaea and eukaryotes. The Shwachman-Bodian-Diamond syndrome pleiotropic phenotypes may be an indication of different cell type requirements for a fully functional SBDS protein. RNA-binding activity has been predicted for archaeal and yeast SBDS orthologs, with the latter also being implicated in ribosome biogenesis. However, full-length SBDS orthologs function in a species-specific manner, indicating that the knowledge obtained from model systems may be of limited use in understanding major unresolved issues regarding SBDS function, namely, the effect of mutations in human SBDS on its biochemical function and the specificity of RNA interaction. We determined the solution structure and backbone dynamics of the human SBDS protein and describe its RNA binding site using NMR spectroscopy. Similarly to the crystal structures of Archaea, the overall structure of human SBDS comprises three well-folded domains. However, significant conformational exchange was observed in NMR dynamics experiments for the flexible linker between the N-terminal domain and the central domain, and these experiments also reflect the relative motions of the domains. RNA titrations monitored by heteronuclear correlation experiments and chemical shift mapping analysis identified a classic RNA binding site at the N-terminal FYSH (fungal, Yhr087wp, Shwachman) domain that concentrates most of the mutations described for the human SBDS. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
This project is based on Artificial Intelligence (AI) and Digital Image Processing (IP) for automatic condition monitoring of sleepers in the railway track. Rail inspection is a very important task in railway maintenance, both for traffic safety and for preventing dangerous situations. Monitoring railway track infrastructure is an important aspect of this, in which periodical inspection of the rail rolling plane is required. To the present day, inspection of the railroad has been carried out manually by trained personnel: a human operator walks along the railway track searching for sleeper anomalies. This way of monitoring is no longer acceptable because of its slowness and subjectivity. Hence, it is desirable to automate such intuitive human skills in order to develop more robust and reliable testing methods. Images of wooden sleepers have been used as the data for my project. The aim of this project is to present a vision-based technique for inspecting railway sleepers (wooden planks under the railway track) by automatic interpretation of Non-Destructive Test (NDT) data, using AI techniques to determine the results of the inspection.
Abstract:
For the last two decades, researchers have been working on developing systems that can assist drivers in the best way possible and make driving safe. Computer vision has played a crucial part in the design of these systems. With the introduction of vision techniques, various autonomous and robust real-time traffic automation systems have been designed, such as traffic monitoring, traffic-related parameter estimation and intelligent vehicles. Among these, the automatic detection and recognition of road signs has become an interesting research topic. Such a system can assist drivers with signs they do not recognize before passing them. The aim of this research project is to present an intelligent road sign recognition system based on a state-of-the-art technique, the Support Vector Machine. The project is an extension of the work done at the ITS research platform at Dalarna University [25]. The focus of this research work is on the recognition of the road signs under analysis. When classifying an image, its location, size and orientation in the image plane are irrelevant features, and one way to get rid of this ambiguity is to extract features which are invariant under the above-mentioned transformations. These invariant features are then used in a Support Vector Machine for classification. The Support Vector Machine is a supervised learning machine that solves problems in higher dimensions with the help of kernel functions and is best known for classification problems.
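As an illustration of the classification idea (the thesis's actual feature set may differ), the sketch below uses Hu moments, a classic descriptor invariant to position, scale and in-plane rotation, as features for an SVM, with toy shapes standing in for segmented sign images:

```python
# Illustrative sketch: rotation/scale/position-invariant features (Hu
# moments) classified with an SVM; toy shapes stand in for sign images.
import cv2
import numpy as np
from sklearn.svm import SVC

def invariant_features(image: np.ndarray) -> np.ndarray:
    """Seven Hu moments, log-scaled for numerical stability."""
    hu = cv2.HuMoments(cv2.moments(image)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def square():
    img = np.zeros((64, 64), np.uint8)
    cv2.rectangle(img, (16, 16), (48, 48), 255, -1)
    return img

def triangle():
    img = np.zeros((64, 64), np.uint8)
    pts = np.array([[32, 10], [10, 54], [54, 54]], np.int32)
    cv2.fillPoly(img, [pts], 255)
    return img

def rotate(img, angle):
    m = cv2.getRotationMatrix2D((32, 32), angle, 1.0)
    return cv2.warpAffine(img, m, img.shape[::-1])

# Tiny training set of rotated shapes: 0 = "square" sign, 1 = "triangle" sign
X, y = [], []
for angle in (0, 30, 60, 90):
    X.append(invariant_features(rotate(square(), angle))); y.append(0)
    X.append(invariant_features(rotate(triangle(), angle))); y.append(1)

clf = SVC(kernel="rbf", C=10.0).fit(X, y)
print(clf.predict([invariant_features(rotate(triangle(), 45))]))  # expect [1]
```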
Abstract:
In a global economy, manufacturers mainly compete on the cost efficiency of production, as the prices of raw materials are similar worldwide. Heavy industry has two big issues to deal with: on the one hand, there is a lot of data which needs to be analyzed in an effective manner; on the other hand, making big improvements via investments in corporate structure or new machinery is neither economically nor physically viable. Machine learning offers a promising way for manufacturers to address both these problems, as they are in an excellent position to employ learning techniques with their massive resource of historical production data. However, choosing a modelling strategy in this setting is far from trivial, and this is the objective of this article. The article investigates the characteristics of the most popular classifiers used in industry today: Support Vector Machines, Multilayer Perceptrons, Decision Trees, Random Forests, and the meta-algorithms Bagging and Boosting. Lessons from real-world implementations of these learners are also provided, together with directions on when different learners can be expected to perform well. The importance of feature selection and relevant selection methods in an industrial setting is further investigated. Performance metrics are also discussed for the sake of completeness.
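As a starting point for the kind of comparison discussed (the dataset and settings are placeholders, not the article's experiments), the sketch below cross-validates the listed classifiers on a synthetic stand-in for historical production data:

```python
# Cross-validated comparison of the classifiers discussed above, run on a
# synthetic stand-in for historical production data.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                           random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000)),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Bagging": BaggingClassifier(),
    "Boosting": AdaBoostClassifier(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:13s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```

A feature selection step (e.g. scikit-learn's SelectKBest) could be inserted into each pipeline to reflect the selection methods the article also investigates.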
Abstract:
Agent-oriented cooperation techniques and standardized electronic healthcare record exchange protocols can be used to combine information regarding different facets of a therapy received by a patient from different healthcare providers at different locations. Provenance is an innovative approach to trace events in complex distributed processes, dependencies between such events, and associated decisions by human actors. We focus on three aspects of provenance in agent-mediated healthcare systems: first, we define the provenance concept and show how it can be applied to agent-mediated healthcare applications; second, we investigate and provide a method for independent and autonomous healthcare agents to document the processes they are involved in without directly interacting with each other; and third, we show that this method solves the privacy issues of provenance in agent-mediated healthcare systems.
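A minimal sketch of the documentation idea, with illustrative names rather than the paper's actual protocol: each agent independently appends provenance assertions (events plus causal links) to an append-only store, and a treatment history is reconstructed by walking those links:

```python
# Illustrative sketch: agents record provenance assertions independently,
# without interacting with each other; names are invented for this example.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceAssertion:
    event_id: str
    actor: str                      # the asserting healthcare agent
    description: str
    caused_by: list[str] = field(default_factory=list)  # prior event ids
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class ProvenanceStore:
    """Append-only store; each agent only writes its own view of a process."""
    def __init__(self):
        self._assertions: list[ProvenanceAssertion] = []

    def record(self, assertion: ProvenanceAssertion) -> None:
        self._assertions.append(assertion)

    def trace(self, event_id: str) -> list[ProvenanceAssertion]:
        """Walk causal links backwards to reconstruct a treatment history."""
        found = [a for a in self._assertions if a.event_id == event_id]
        for a in list(found):
            for prior in a.caused_by:
                found.extend(self.trace(prior))
        return found

store = ProvenanceStore()
store.record(ProvenanceAssertion("rx-1", "gp_agent", "prescribed drug A"))
store.record(ProvenanceAssertion("lab-1", "lab_agent",
                                 "blood test ordered", caused_by=["rx-1"]))
for a in store.trace("lab-1"):
    print(a.actor, a.description)
```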
Abstract:
Many contaminants are currently unregulated by the government and do not have a set limit, known as the Maximum Contaminant Level, which is dictated by cost and the best available treatment technology. The Maximum Contaminant Level Goal, on the other hand, is based solely upon health considerations and is non-enforceable. In addition to being naturally occurring, contaminants may enter drinking water supplies through industrial sources, agricultural practices, urban pollution, sprawl, and water treatment byproducts. Exposure to these contaminants is not limited to ingestion and can also occur through dermal absorption and inhalation in the shower. Health risks for the general public include skin damage, increased risk of cancer, circulatory problems, and multiple toxicities. At low levels, these contaminants generally are not harmful in our drinking water. However, children, pregnant women, and people with compromised immune systems are more vulnerable to the health risks associated with these contaminants. These vulnerable groups should take additional precautions with drinking water. This research project was conducted in order to learn more about our local drinking water and to characterize our exposure to contaminants. We hope to increase public awareness of water quality issues by educating local residents about their drinking water in order to promote public health and minimize exposure to some of the contaminants contained within public water supplies.
Abstract:
Developing successful navigation and mapping strategies is an essential part of autonomous robot research. However, hardware limitations often make for inaccurate systems. This project investigates efficient alternatives for mapping an environment: first creating a mobile robot, and then applying machine learning to the robot and its controlling systems to increase the robustness of the overall system. My mapping system consists of a semi-autonomous robot drone in communication with a stationary Linux computer system. There are learning systems running on both the robot and the more powerful Linux system. The first stage of this project was devoted to designing and building an inexpensive robot. Utilizing my prior experience from independent studies in robotics, I designed a small mobile robot that was well suited for simple navigation and mapping research. Once the major components of the robot base were designed, I began to implement the design. This involved physically constructing the base of the robot, as well as researching and acquiring components such as sensors. Implementing the more complex sensors became a time-consuming task, involving much research and assistance from a variety of sources. A concurrent stage of the project involved researching and experimenting with different types of machine learning systems. I finally settled on neural networks as the machine learning system to incorporate into my project. Neural nets can be thought of as a structure of interconnected nodes, through which information filters. The type of neural net I chose requires a known data set that serves to train the net to produce the desired output. Neural nets are particularly well suited for use with robotic systems, as they can handle cases that lie at the extreme edges of the training set, such as may be produced by "noisy" sensor data. Through experimenting with available neural net code, I became familiar with the code and its function, and modified it to be more generic and reusable for multiple applications of neural nets.
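As a small illustration of this kind of supervised setup (the sensors, data and labels are invented, and scikit-learn's MLP stands in for the neural net code used in the project), the sketch below trains a net on a known data set to map noisy range-sensor readings to a steering decision:

```python
# Illustrative sketch: a supervised neural net trained on a known data set
# maps noisy range-sensor readings to a steering command.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def make_reading(obstacle_side: int) -> np.ndarray:
    """Three noisy range sensors (left, front, right), in centimeters."""
    base = np.array([100.0, 100.0, 100.0])
    base[obstacle_side] = 20.0           # a close obstacle on one side
    return base + rng.normal(0, 5, 3)    # "noisy" sensor data

# Training set: readings -> steer away from the obstacle
# labels: 0 = turn right (obstacle left), 1 = reverse, 2 = turn left
X = np.array([make_reading(side) for side in range(3) for _ in range(50)])
y = np.repeat([0, 1, 2], 50)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
print(net.predict([make_reading(2)]))  # obstacle on the right -> expect [2]
```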
Abstract:
This dissertation examines the general situation of road accidents in the context of road transport, where the evidence points to the human factor as the main cause of such events. A better understanding of this factor should make it possible to improve the safety of traffic and of transport operations. The study aims to highlight the importance of analyses related to the road transport activity, the variations in the demand on the traffic system and the driver's task, from an ergonomics perspective. It also aims to show the importance of such studies for better assessing the interactions among human, machine and road-environment factors and for the development of new road safety technologies and products. The literature review in the opening chapters presents the state of the art and the importance of traffic safety at the international level. It also shows that all nations suffer from the same problem on their road networks, varying according to each country's circumstances. Although traffic accidents are a phenomenon common to all nations, here they have reached the dimension of a social scourge, owing to their severity, and of an economic calamity, given the rising production costs in the road transport activity. The characteristics of the human factor that are fundamental to the driving task are analyzed, along with the causal link between human failures and the genesis of accidents within a multifactorial, interactive system. The work is grounded in an extensive literature review. The case study, developed from a review of data from an earlier survey, confirms the hypothesis that drinking and driving, regarded in the literature as the leading cause of road accidents, is present at high rates on the highways of RS, contradicting the conclusion of the earlier survey. Finally, the dissertation also offers recommendations for developing objective actions to improve road safety.
Abstract:
The work described in this thesis aims to support the distributed design of integrated systems, and considers specifically the need for collaborative interaction among designers. Particular emphasis was given to issues which were only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers, and the support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two homonymic technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 1980s within the electronic design automation community and comprises a layered software environment which aims to support CAD tool developers, CAD administrators/integrators and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we propose an object-oriented framework which includes extensible sets of design data primitives and design tool building blocks. This object-oriented framework is included within a CAD Framework, where it plays important roles in typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management and tool integration. The implemented CAD Framework - named Cave2 - follows the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the possibilities granted by the object-oriented framework foundations allowed a series of improvements which were not available in previous approaches:
- Object-oriented frameworks are extensible by design, so this is also true of the implemented sets of design data primitives and design tool building blocks. This means that both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and that such extensions and adaptations still inherit the architectural and functional aspects implemented in the object-oriented framework foundation.
- The design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows different visualization strategies for a given design data set, which gives collaborating parties the flexibility to choose individual visualization settings.
- The control of the consistency between semantics and visualization - a particularly important issue in a design environment with multiple views of a single design - is also included in the foundations of the object-oriented framework. This mechanism is generic enough to be used by further extensions of the design data model, as it is based on the inversion of control between view and semantics: the view receives the user input and propagates the event to the semantic model, which evaluates whether a state change is possible and, if so, triggers the change of state of both semantics and view. Our approach took advantage of this inversion of control and included a layer between semantics and view to take multi-view consistency into account.
- To optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his or her design views. The information about each interaction is encapsulated inside an event object, which may be propagated to the design semantics - and thus to other possible views - according to the consistency policy in use. Furthermore, the use of event pools allows a late synchronization between view and semantics in case of unavailability of a network connection between them.
- The use of proxy objects significantly raised the abstraction of the integration of design automation resources, as both remote and local tools and services are accessed through method calls on a local object. The connection to remote tools and services using a look-up protocol also completely abstracts the network location of such resources, allowing resources to be added and removed at runtime.
- The implemented CAD Framework is completely based on Java technology, so it relies on the Java Virtual Machine as the layer which grants the independence between the CAD Framework and the operating system.
All these improvements contributed to a higher abstraction of the distribution of design automation resources and also introduced a new paradigm for remote interaction between designers. The resulting CAD Framework is able to support fine-grained collaboration based on events, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment. This can increase group awareness and allow a richer transfer of experience among designers, significantly improving the collaboration potential when compared to previously proposed file-based or record-based approaches. Three case studies were conducted to validate the proposed approach, each focusing on a subset of the contributions of this thesis. The first uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second extends the foundations of the implemented object-oriented framework to support interface-based design; these extensions - design representation primitives and tool blocks - are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study regards the integration of multimedia metadata into the design data model, a possibility explored in the frame of an online educational and training platform.
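A minimal sketch of this inversion of control, in Python rather than the thesis's Java and with invented names: views forward user input as events to the semantic model, which validates the change and, only when it is valid, updates itself and refreshes every registered view:

```python
# Illustrative sketch of the view/semantics inversion of control with
# multi-view consistency; class and event names are invented.
class DesignSemantics:
    """Holds design state; the single authority on state changes."""
    def __init__(self):
        self._nets = set()
        self._views = []

    def attach(self, view):
        self._views.append(view)

    def handle(self, event):
        kind, net = event
        valid = (kind == "add_net" and net not in self._nets)
        if valid:
            self._nets.add(net)
            for view in self._views:   # propagate to all views,
                view.refresh(event)    # keeping them consistent
        return valid

class DesignView:
    """Receives user input but never changes design state directly."""
    def __init__(self, name, semantics):
        self.name = name
        self._semantics = semantics
        semantics.attach(self)

    def user_input(self, event):
        return self._semantics.handle(event)  # inversion of control

    def refresh(self, event):
        print(f"{self.name}: redraw after {event}")

model = DesignSemantics()
schematic, layout = DesignView("schematic", model), DesignView("layout", model)
schematic.user_input(("add_net", "clk"))  # accepted: both views refresh
layout.user_input(("add_net", "clk"))     # rejected: duplicate net
```

An event-pool variant would queue the event objects instead of delivering them immediately, allowing the late view/semantics synchronization described above when the network is unavailable.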
Abstract:
Developing software is still a risky business. After 60 years of experience, the community is still not able to consistently build Information Systems (IS) for organizations with predictable quality, within previously agreed budget and time constraints. Although software is changeable, we are still unable to cope with the amount and complexity of change that organizations demand for their IS. To improve results, developers have followed two alternatives: frameworks, which increase productivity but constrain the flexibility of possible solutions; or agile ways of developing software, which keep flexibility with fewer upfront commitments. With strict frameworks, specific hacks have to be put in place to get around the framework's construction options. In time this leads to inconsistent architectures that are harder to maintain due to incomplete documentation and human resource turnover. The main goal of this work is to create a new way to develop flexible IS for organizations, using web technologies, in a faster, better and cheaper way that is better suited to handling organizational change. To do so, we propose an adaptive object model that uses a new ontology for data and action with strict normalizing rules. These rules should bound the effects of changes, which can then be better tested and therefore corrected. Interfaces are built with templates of resources that can be reused and extended in a flexible way. The "state of the world" for each IS is determined by all production and coordination acts that agents have performed over time, even those performed by external systems. When bugs are found during maintenance, their past cascading effects can be checked through simulation, by re-running the log of transaction acts over time and checking the results against previous records. This work implements a prototype with part of the proposed system in order to make a preliminary assessment of its feasibility and limitations.
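A minimal sketch of the "state of the world" idea, with invented act types: state is never stored directly but derived by replaying the log of production and coordination acts, so after a bug fix the log can be re-run and compared against previously recorded state to expose past cascading effects:

```python
# Illustrative sketch of log replay; act kinds and semantics are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Act:
    kind: str        # e.g. "produce" or "coordinate"
    item: str
    quantity: int

def replay(log: list[Act]) -> dict[str, int]:
    """Rebuild current stock levels from the full history of acts."""
    state: dict[str, int] = {}
    for act in log:
        delta = act.quantity if act.kind == "produce" else -act.quantity
        state[act.item] = state.get(act.item, 0) + delta
    return state

log = [Act("produce", "widget", 10),
       Act("coordinate", "widget", 3),   # e.g. a delivery commitment
       Act("produce", "widget", 5)]

recorded_snapshot = {"widget": 12}
assert replay(log) == recorded_snapshot  # regression check after a bug fix
```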
Abstract:
As digital systems move away from traditional desktop setups, new interaction paradigms are emerging that better integrate with users' real-world surroundings and better support users' individual needs. While promising, these modern interaction paradigms also present new challenges, such as a lack of paradigm-specific tools to systematically evaluate and fully understand their use. This dissertation tackles this issue by framing empirical studies of three novel digital systems in embodied cognition - an exciting new perspective in cognitive science where the body and its interactions with the physical world take a central role in human cognition. This is achieved, first, by focusing the design of all these systems on tangible interaction, a contemporary interaction paradigm that emphasizes physical interaction; and second, by comprehensively studying user performance in these systems through a set of novel performance metrics grounded in epistemic actions, a relatively well-established and studied construct in the literature on embodied cognition. The first system presented in this dissertation is an augmented Four-in-a-row board game. Three versions of the game were developed, based on three different interaction paradigms (tangible, touch and mouse), and a repeated-measures study involving 36 participants measured the occurrence of three simple epistemic actions across these three interfaces. The results highlight the relevance of epistemic actions in such a task and suggest that the different interaction paradigms afford instantiation of these actions in different ways. Additionally, the tangible version of the system supports the most rapid execution of these actions, providing novel quantitative insights into the real benefits of tangible systems. The second system presented in this dissertation is a tangible tabletop scheduling application. Two studies with single and paired users provide several insights into the impact of epistemic actions on the user experience when these are performed outside of a system's sensing boundaries. These insights concern the form, size and location of ideal interface areas for such offline epistemic actions to occur, as well as how physical tokens can be designed to better support them. Finally, and based on the results obtained to this point, the last study presented in this dissertation directly addresses the lack of empirical tools to formally evaluate tangible interaction. It presents a video-coding framework grounded in a systematic literature review of 78 papers, and evaluates its value as a metric through a 60-participant study performed across three different research laboratories. The results highlight the usefulness and power of epistemic actions as a performance metric for tangible systems. In sum, through the use of such novel metrics in each of the three studies presented, this dissertation provides a better understanding of the real impact and benefits of designing and developing systems that feature tangible interaction.
Abstract:
Industrial automation is directly linked to the development of information technology. Better hardware solutions, as well as improvements in software development methodologies, have made possible the rapid growth of production process control. In this thesis, we propose an architecture that joins two technologies, one from the hardware field (industrial networks) and one from the software field (multi-agent systems). The objective of this proposal is to join these technologies in a multi-agent architecture that allows control strategies to be implemented in field devices. With this, we develop an agent architecture to detect and solve problems which may occur in the industrial network environment. Our work allies machine learning with the industrial context, making the proposed multi-agent architecture adaptable to unfamiliar or unexpected production environments. We use neural networks and present strategies for allocating these networks to industrial network field devices. With this we intend to improve decision support at the plant level and to allow operation independent of human intervention.
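As a rough illustration of the allocation idea (the names and weights are placeholders, not the thesis's architecture), the sketch below shows an agent on a field device evaluating a small pre-trained neural network locally, so that routine decisions avoid a round trip to the plant level:

```python
# Illustrative sketch: a lightweight agent embeds a tiny feed-forward net
# (weights would be trained offline) for local decision support.
import numpy as np

class FieldDeviceAgent:
    """Agent on a field device wrapping a small pre-trained neural net."""
    def __init__(self, w1, b1, w2, b2):
        self.w1, self.b1, self.w2, self.b2 = w1, b1, w2, b2

    def decide(self, sensors: np.ndarray) -> bool:
        """True -> handle locally (e.g. close a valve); False -> escalate."""
        hidden = np.tanh(sensors @ self.w1 + self.b1)
        score = 1.0 / (1.0 + np.exp(-(hidden @ self.w2 + self.b2)))
        return score > 0.5

# Placeholder weights stand in for a network trained on plant data
rng = np.random.default_rng(0)
agent = FieldDeviceAgent(rng.normal(size=(3, 4)), np.zeros(4),
                         rng.normal(size=4), 0.0)
print(agent.decide(np.array([0.8, 0.1, 0.4])))
```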