6 results for ELECTRONIC DIFFERENTIAL SYSTEM (EDS)
at Universidade do Minho
Abstract:
Electroactive polymers are one of the most interesting classes of polymers used as smart materials in various applications, such as sensors and actuators for biomedical applications in areas such as smart prostheses, implantable biosensors and biomechanical signal monitoring, among others. To acquire or apply the electrical signal from/to the piezoelectric material, suitable electrodes can be produced from Ti-based coatings with tailored multifunctional properties, combining conductivity with antibacterial characteristics through Ag inclusions. This work reports on Ag-TiNx electrodes deposited by d.c. and pulsed magnetron sputtering at room temperature on poly(vinylidene fluoride), PVDF, the all-round best piezoelectric polymer. The composition of the electrodes was assessed by energy-dispersive X-ray spectroscopy (EDS). The XRD results revealed that the deposition conditions preserve the polymer structure and suggested the presence of crystalline fcc-TiN and fcc-Ag phases in samples with N2 flow above 3 sccm. According to the SEM analysis, the coatings are homogeneous, and Ag clusters were found for samples with nitrogen flow above 3 sccm. With increasing nitrogen flow, the sheet resistivity tends to be lower than that of samples deposited without nitrogen, which also leads to a decrease in the piezoelectric response. It is concluded that the deposition conditions do not significantly affect the piezoelectric polymer, which maintains its characteristics for sensor/actuator applications.
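The abstract reports sheet-resistivity trends but does not state the measurement geometry. As a rough illustration only, assuming a standard collinear four-point-probe measurement on a thin film (an assumption, not stated in the abstract), the sheet resistance follows from the measured voltage and current via the thin-film geometric factor:

```python
import math

def sheet_resistance(voltage_v, current_a):
    """Sheet resistance R_s (ohm/sq) from a four-point-probe reading,
    assuming the thin-film geometric factor pi/ln(2) ~= 4.532
    (collinear probes, film much thinner than the probe spacing)."""
    return (math.pi / math.log(2)) * (voltage_v / current_a)

# Hypothetical reading: 1 mV measured while sourcing 1 mA.
rs = sheet_resistance(1e-3, 1e-3)
```

Lower R_s for the nitrogen-rich samples would be consistent with the conductivity trend the abstract describes.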
Abstract:
Hand gestures are a powerful way for human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems could be the same for all applications and thus facilitate the implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, able to achieve a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first one is a real-time system able to interpret the Portuguese Sign Language. The second one is an online system able to help a robotic soccer game referee judge a game in real time.
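The three-module split described above (segmentation, static gestures, dynamic gestures) might be wired together as sketched below. All class and parameter names are hypothetical; the toy classifiers merely stand in for the trained SVM and per-gesture HMMs:

```python
class HandGestureSystem:
    """Illustrative skeleton of the three-module architecture from the
    abstract; interfaces are assumptions, not the authors' actual code."""

    def __init__(self, static_classifier, dynamic_classifier):
        self.static = static_classifier     # stand-in for the trained SVM
        self.dynamic = dynamic_classifier   # stand-in for the per-gesture HMMs
        self.trajectory = []                # hand observations over time

    def preprocess(self, frame):
        # Placeholder for the pre-processing / hand segmentation module.
        return frame

    def process(self, frame):
        hand = self.preprocess(frame)
        posture = self.static(hand)              # static gesture module
        self.trajectory.append(hand)
        gesture = self.dynamic(self.trajectory)  # dynamic gesture module
        return posture, gesture

# Toy stand-in classifiers over 2-D "hand features":
system = HandGestureSystem(
    static_classifier=lambda h: "open" if sum(h) > 1 else "fist",
    dynamic_classifier=lambda traj: "swipe" if len(traj) > 2 else None,
)
out = [system.process((x, 0.6)) for x in (0.1, 0.2, 0.9)]
```

The point of the design, per the abstract, is that this core stays identical across applications; only the trained models and the interface on top change.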
Abstract:
Hand gestures are a powerful way for human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications and thus facilitate the implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, able to achieve a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
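Training one HMM per gesture, as both abstracts describe, means recognition reduces to scoring the observed sequence under every model and picking the most likely one. A minimal sketch with the standard forward algorithm (toy two-state models with hypothetical parameters; the papers' actual observation features and model sizes are not given):

```python
def forward_likelihood(obs, start, trans, emit):
    """P(obs | model) via the HMM forward algorithm.
    start[i], trans[i][j] and emit[i][o] are model probabilities."""
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
            for j in range(n)
        ]
    return sum(alpha)

def recognize(obs, models):
    """One HMM per gesture: return the gesture whose model assigns
    the observation sequence the highest likelihood."""
    return max(models, key=lambda g: forward_likelihood(obs, *models[g]))

# Two toy 2-state models over a binary observation alphabet {0, 1}:
models = {
    "swipe":  ([0.9, 0.1], [[0.7, 0.3], [0.3, 0.7]], [[0.9, 0.1], [0.2, 0.8]]),
    "circle": ([0.1, 0.9], [[0.7, 0.3], [0.3, 0.7]], [[0.1, 0.9], [0.8, 0.2]]),
}
best = recognize([0, 0, 1], models)  # -> "swipe"
```

In a real system the observations would be quantized hand-trajectory features rather than a binary alphabet, and log-probabilities would be used to avoid underflow on long sequences.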
Abstract:
"Lecture notes in computer science series, ISSN 0302-9743, vol. 9273"
Abstract:
Various differential cross-sections are measured in top-quark pair (tt̄) events produced in proton-proton collisions at a centre-of-mass energy of √s = 7 TeV at the LHC with the ATLAS detector. These differential cross-sections are presented in a data set corresponding to an integrated luminosity of 4.6 fb⁻¹. The differential cross-sections are presented in terms of kinematic variables of a top-quark proxy referred to as the pseudo-top-quark, whose dependence on theoretical models is minimal. The pseudo-top-quark can be defined in terms of either reconstructed detector objects or stable particles in an analogous way. The measurements are performed on tt̄ events in the lepton+jets channel, requiring exactly one charged lepton and at least four jets, with at least two of them tagged as originating from a b-quark. The hadronic and leptonic pseudo-top-quarks are defined via the leptonic or hadronic decay mode of the W boson produced by the top-quark decay in events with a single charged lepton. The cross-section is measured as a function of the transverse momentum and rapidity of both the hadronic and leptonic pseudo-top-quark, as well as the transverse momentum, rapidity and invariant mass of the pseudo-top-quark pair system. The measurements are corrected for detector effects and are presented within a kinematic range that closely matches the detector acceptance. Differential cross-section measurements of the pseudo-top-quark variables are compared with several Monte Carlo models that implement next-to-leading order or leading-order multi-leg matrix-element calculations.
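The kinematic variables in which these cross-sections are binned (transverse momentum, rapidity, pair invariant mass) all follow from the four-momentum (px, py, pz, E) of the reconstructed pseudo-top candidates. A minimal sketch of the standard definitions, with made-up four-vectors in GeV:

```python
import math

def pt(px, py, pz, e):
    """Transverse momentum: momentum component perpendicular to the beam (z) axis."""
    return math.hypot(px, py)

def rapidity(px, py, pz, e):
    """Rapidity y = 0.5 * ln((E + pz) / (E - pz))."""
    return 0.5 * math.log((e + pz) / (e - pz))

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system, m^2 = E^2 - |p|^2."""
    px, py, pz, e = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

# Illustrative back-to-back pseudo-top pair, each at rest-mass ~172.5 GeV:
t1 = (0.0, 0.0, 0.0, 172.5)
t2 = (0.0, 0.0, 0.0, 172.5)
m_tt = invariant_mass(t1, t2)  # -> 345.0 GeV
```

The particle-level pseudo-top definition in the paper applies the same formulas to stable-particle objects instead of detector-level ones, which is what keeps the measurement largely model-independent.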
Abstract:
In previous work we have presented a model capable of generating human-like movements for a dual arm-hand robot involved in human-robot cooperative tasks. However, the focus was on the generation of reach-to-grasp and reach-to-regrasp bimanual movements, and no synchrony in timing was taken into account. In this paper we extend the previous model in order to accomplish bimanual manipulation tasks by synchronously moving both arms and hands of an anthropomorphic robotic system. Specifically, the new extended model has been designed for two different tasks with different degrees of difficulty. Numerical results were obtained with the IPOPT solver embedded in our MATLAB simulator.
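The abstract does not give the movement model's equations. As a loose illustration of what timing synchrony can mean for two arms, the classic minimum-jerk profile (a standard model of human-like point-to-point motion, used here as an assumption, not the paper's actual optimal-control formulation solved with IPOPT) reaches both targets at exactly the same moment simply by sharing one duration T:

```python
def min_jerk(x0, xf, t, duration):
    """Minimum-jerk position profile for a point-to-point movement:
    x(t) = x0 + (xf - x0) * (10*tau^3 - 15*tau^4 + 6*tau^5), tau = t/T.
    A common closed-form model of smooth, human-like reaching."""
    tau = min(max(t / duration, 0.0), 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

# Synchronising both arms amounts to sharing one duration T (here 2 s),
# so both trajectories start and finish together:
T = 2.0
left = [min_jerk(0.0, 0.3, k * T / 10, T) for k in range(11)]
right = [min_jerk(0.0, -0.3, k * T / 10, T) for k in range(11)]
```

In the paper the timing is instead produced by a constrained optimal-control problem; this closed form only illustrates the synchrony constraint itself.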