983 results for Software clones Detection
Abstract:
The subject of investigation of the present research is the use of smart hydrogels with fibre optic sensor technology. The aim was to develop a cost-effective sensor platform for the detection of water in hydrocarbon media, and of dissolved inorganic analytes, namely potassium, calcium and aluminium. The fibre optic sensors in this work depend upon the use of hydrogels either to entrap chemotropic agents or to respond to external environmental changes by changing their inherent properties, such as refractive index (RI). A review of current fibre optic sensing technology showed that the main principles utilised are either the measurement of signal loss or the measurement of a change in the wavelength of the light transmitted through the system. The signal-loss principle relies on changing the conditions required for total internal reflection to occur. Hydrogels are cross-linked polymer networks that swell but do not dissolve in aqueous environments. Smart hydrogels are synthetic materials that exhibit properties in addition to those inherent in their structure. In order to control these non-inherent properties, the hydrogels were fabricated with the addition of chemotropic agents. For the detection of water, hydrogels of low refractive index were synthesized using fluorinated monomers. Sulfonated monomers were used for their extreme hydrophilicity as a means of sensing water through an RI change. To enhance the sensing capability of the hydrogel, chemotropic agents such as pH indicators and cobalt salts were used. The system comprises the smart hydrogel coated onto an exposed section of the fibre optic core, connected to an interrogation system that measures the change in the signal. The information obtained was analysed using purpose-designed software. The developed sensor platform showed that an increase in the target species caused an increase in the signal lost from the sensor system, allowing detection of the target species. The system has potential applications in areas such as clinical point of care, water detection in fuels and the detection of dissolved ions in the water industry.
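The signal-loss principle described above hinges on total internal reflection at the exposed core/hydrogel interface: only rays striking the interface beyond the critical angle remain guided, so a rise in the coating's refractive index narrows that range and increases the light lost from the fibre. A minimal Python sketch of the geometry; the refractive-index values are illustrative only and not taken from the thesis:

```python
import math

def critical_angle_deg(n_core: float, n_coating: float) -> float:
    """Critical angle (degrees) for total internal reflection at the core/coating interface."""
    if n_coating >= n_core:
        raise ValueError("TIR requires the coating index to be lower than the core index")
    return math.degrees(math.asin(n_coating / n_core))

# Illustrative values: silica core against a hydrogel coating whose RI rises on swelling
print(critical_angle_deg(1.457, 1.40))  # ~73.9 deg
print(critical_angle_deg(1.457, 1.44))  # ~81.3 deg -> fewer guided rays, more signal loss
```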
Abstract:
Purpose: The Nidek F-10 is a scanning laser ophthalmoscope that is capable of a novel fundus imaging technique, so-called ‘retro-mode’ imaging. The standard method of imaging drusen in age-related macular degeneration (AMD) is by fundus photography. The aim of the study was to assess drusen quantification using retro-mode imaging. Methods: Stereoscopic fundus photographs and retro-mode images were captured in 31 eyes of 20 patients with varying stages of AMD. Two experienced masked retinal graders independently assessed images for the number and size of drusen, using purpose-designed software. Drusen were further assessed in a subset of eight patients using optical coherence tomography (OCT) imaging. Results: Drusen observed by fundus photography (mean 33.5) were significantly fewer in number than subretinal deposits seen in retro-mode (mean 81.6; p < 0.001). The predominant deposit diameter was on average 5 µm smaller in retro-mode imaging than in fundus photography (p = 0.004). Agreement between graders for both types of imaging was substantial for number of deposits (weighted κ = 0.69) and moderate for size of deposits (weighted κ = 0.42). Retro-mode deposits corresponded to drusen on OCT imaging in all eight patients. Conclusion: The subretinal deposits detected by retro-mode imaging were consistent with the appearance of drusen on OCT imaging; however, a larger longitudinal study would be required to confirm this finding. Retro-mode imaging detected significantly more deposits than conventional colour fundus photography. Retro-mode imaging provides a rapid non-invasive technique, useful in monitoring subtle changes and progression of AMD, which may be useful in monitoring the response of drusen to future therapeutic interventions.
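The inter-grader agreement above is reported as weighted kappa. Purely as an illustration of how such a statistic is computed (the study's grading categories and weighting scheme are not given here, and the counts below are hypothetical), scikit-learn provides a weighted Cohen's kappa:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal deposit-count categories assigned per image by two graders
grader_1 = [0, 1, 2, 2, 3, 1, 0, 2]
grader_2 = [0, 1, 2, 3, 3, 1, 1, 2]

# Linear weighting penalises larger ordinal disagreements more heavily
print(cohen_kappa_score(grader_1, grader_2, weights="linear"))
```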
Abstract:
We present a study of the influence of dispersion-induced phase noise in CO-OFDM systems using software-based FFT multiplexing/IFFT demultiplexing techniques. The software-based system provides a method for a rigorous evaluation of the phase noise variance caused by Common Phase Error (CPE) and Inter-Carrier Interference (ICI), including - for the first time to our knowledge - the effect of equalization-enhanced phase noise (EEPN) in explicit form. This, in turn, leads to an analytic BER specification. Numerical results focus on a CO-OFDM system with 10-25 GS/s QPSK channel modulation. A worst-case constellation configuration is identified for the phase noise influence, and the resulting BER is compared to the BER of a conventional single-channel QPSK system with the same capacity as the CO-OFDM implementation. Results are evaluated as a function of transmission distance. For both types of system, the phase noise variance increases significantly with increasing transmission distance. For a total capacity of 400 (1000) Gbit/s, the transmission distance for which BER < 10⁻² for the worst-case CO-OFDM design is less than 800 (460) km, whereas for a single-channel QPSK system it is less than 1400 (560) km.
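The software-based FFT multiplexing/IFFT demultiplexing can be sketched in a few lines: QPSK symbols are placed on the sub-carriers, an IFFT forms the time-domain OFDM signal at the transmitter, and an FFT recovers the symbols at the receiver. This is a minimal, noiseless illustration of the principle only, not the evaluated system (which includes laser phase noise, fibre dispersion, equalization and EEPN):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub = 64                                  # number of OFDM sub-carriers (illustrative)

# Map random bits to QPSK symbols on each sub-carrier
bits = rng.integers(0, 2, size=(2, n_sub))
symbols = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

tx = np.fft.ifft(symbols)                   # software IFFT multiplexing (transmitter)
rx = np.fft.fft(tx)                         # software FFT demultiplexing (receiver)

print(np.allclose(rx, symbols))             # True on an ideal, noiseless channel
```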
Abstract:
Computer software plays an important role in business, government, society and the sciences. To solve real-world problems, it is very important to measure quality and reliability throughout the software development life cycle (SDLC). Software Engineering (SE) is the computing field concerned with designing, developing, implementing, maintaining and modifying software. The present paper gives an overview of the Data Mining (DM) techniques that can be applied to various types of SE data in order to solve the challenges posed by SE tasks such as programming, bug detection, debugging and maintenance. A specific piece of DM software is discussed, namely an analytical tool for analyzing data and summarizing the relationships that have been identified. The paper concludes that the proposed DM techniques within the domain of SE could be applied well in fields such as Customer Relationship Management (CRM), eCommerce and eGovernment. ACM Computing Classification System (1998): H.2.8.
Abstract:
The move from Standard Definition (SD) to High Definition (HD) represents a six-fold increase in the amount of data that needs to be processed. With expanding resolutions and evolving compression, there is a need for high performance with flexible architectures that allow for quick upgradability. Technology is advancing in image display resolutions, compression techniques and video intelligence. Software implementations of these systems can attain accuracy only with trade-offs among processing performance (achieving specified frame rates while working on large image data sets), power and cost constraints. New architectures are therefore needed to keep pace with the rapid innovations in video and imaging. This dissertation contains dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance. The contributions of the dissertation are as follows. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe-distance factor and develop an algorithm for detecting occlusion occurrence during target tracking; a novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. The method is analysed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature-vector size and a gradient-threshold approach for accurate classification. (4) We design a gesture recognition system using a hardware/software co-simulation neural network to exploit the high speed and low memory storage requirements provided by the FPGA. We develop an innovative maximum-distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gestures in a database may vary between applications; therefore, it is highly desirable to keep the feature vector as small as possible while maintaining the same accuracy and performance.
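Contribution (1) describes a running-average, mean-derived global threshold for background subtraction. The dissertation's exact RAMT formulation is not reproduced here; the sketch below shows the generic flavour of such an approach, with an exponential running-average background model and a single global threshold derived from the mean difference (the constants `alpha` and `k` are illustrative):

```python
import numpy as np

def update_background(background: np.ndarray, frame: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Exponential running average of the scene, used as the background model."""
    return (1.0 - alpha) * background + alpha * frame

def detect_foreground(background: np.ndarray, frame: np.ndarray, k: float = 2.5) -> np.ndarray:
    """Binary foreground mask from a single global, mean-derived threshold."""
    diff = np.abs(frame.astype(float) - background)
    threshold = k * diff.mean()     # global threshold adapted to the current frame statistics
    return diff > threshold

# Usage on a stream of grayscale frames:
# background = first_frame.astype(float)
# for frame in frames:
#     mask = detect_foreground(background, frame)
#     background = update_background(background, frame)
```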
Abstract:
The work described in this thesis was conducted with the aim of: 1) investigating the binding capabilities of calix[4]arene-functionalized microcantilevers towards specific metal ions and 2) developing a new 16-microcantilever array sensing system for the rapid and simultaneous detection of metal ions in fresh water. Part I of this thesis reports on the use of three new bimodal calix[4]arenes (methoxy, ethoxy and crown) as potential host/guest sensing layers for detecting selected ions in dilute aqueous solutions using a single-microcantilever experimental system. In this work it was shown that modifying the upper rim of the calix[4]arenes with a thioacetate end group allows the calix[4]arenes to self-assemble on Au(111), forming complete, highly ordered monolayers. It was also found that incubating microcantilevers coated with 5 nm of Inconel and 40 nm of Au for 1 h in a 1.0 M solution of calix[4]arene produced the highest sensitivity. Methoxy-functionalized microcantilevers showed a definite preference for Ca²⁺ ions over other cationic guests and were able to detect trace concentrations as low as 10⁻¹² M in aqueous solutions. Microcantilevers modified with ethoxy calix[4]arene displayed their highest sensitivity towards Sr²⁺ and, to a lesser extent, Ca²⁺ ions. Crown calix[4]arene-modified microcantilevers were, however, found to bind selectively towards Cs⁺ ions. In addition, the counter-anion was also found to contribute to the deflection. For example, the methoxy calix[4]arene-modified microcantilever was found to be more sensitive to CaCl₂ than to other water-soluble calcium salts such as Ca(NO₃)₂, CaBr₂ and CaI₂. These findings suggest that the response of calix[4]arene-modified microcantilevers should be attributed to the target ionic species as a whole instead of only to the specific cation and/or anion. Part II presents the development of a 16-microcantilever sensor setup. The implementation of this system involved the creation of data-analysis software that incorporates data from the motorized actuator and a two-axis photosensitive detector to obtain the deflection signal originating from each individual microcantilever in the array. The system was shown to be capable of simultaneous measurements of multiple microcantilevers with different coatings. A functionalization unit was also developed that allows four microcantilevers in the array to be coated with an individual sensing layer, one at a time. Because of the variability of the spring constants of different cantilevers within the array, the results presented are quoted in units of surface stress in order to compare values between the microcantilevers in the array.
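Quoting array results as surface stress rather than raw deflection, as in Part II, is conventionally done with a Stoney-type relation between tip deflection and differential surface stress. The sketch below assumes that relation and single-crystal silicon constants; the thesis may use a different calibration for its levers:

```python
def surface_stress_from_deflection(dz: float, t: float, L: float,
                                   E: float = 1.69e11, nu: float = 0.064) -> float:
    """Stoney-type conversion of tip deflection dz (m) to differential surface stress (N/m).

    t and L are the cantilever thickness and length (m); the default E and nu are
    typical values for single-crystal silicon and should be replaced as appropriate.
    """
    return E * t**2 * dz / (3.0 * (1.0 - nu) * L**2)

# Illustrative numbers: a 1 um thick, 500 um long lever deflecting by 50 nm
print(surface_stress_from_deflection(dz=50e-9, t=1e-6, L=500e-6))  # ~0.012 N/m
```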
Abstract:
Software assets are a key output of the RAGE project and can be used by applied game developers to enhance the pedagogical and educational value of their games. These software assets cover a broad spectrum of functionalities – from player analytics including emotion detection to intelligent adaptation and social gamification. In order to facilitate integration and interoperability, all of these assets adhere to a common model, which describes their properties through a set of metadata. In this paper the RAGE asset model and asset metadata model are presented, capturing the detail of assets and their potential usage within three distinct dimensions – technological, gaming and pedagogical. The paper highlights key issues and challenges in constructing the RAGE asset and asset metadata models and details the process and design of a flexible metadata editor that facilitates both adaptation and improvement of the asset metadata model.
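As a purely illustrative view of the three metadata dimensions (technological, gaming, pedagogical), the sketch below models an asset description as a simple data structure. The field names are hypothetical and greatly simplified; they are not the actual RAGE metadata schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AssetMetadata:
    """Hypothetical, simplified asset description along three dimensions."""
    name: str
    version: str
    # Technological dimension
    language: str = "unspecified"
    dependencies: List[str] = field(default_factory=list)
    # Gaming dimension
    game_genres: List[str] = field(default_factory=list)
    # Pedagogical dimension
    learning_goals: List[str] = field(default_factory=list)

emotion_asset = AssetMetadata(
    name="EmotionDetection",
    version="1.0",
    dependencies=["AssetManager"],
    game_genres=["serious game"],
    learning_goals=["affect-aware feedback"],
)
```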
Abstract:
The large upfront investments required for game development pose a severe barrier to the wider uptake of serious games in education and training. There is also a lack of well-established methods and tools that support game developers in preserving and enhancing the games’ pedagogical effectiveness. The RAGE project, which is a Horizon 2020 funded research project on serious games, addresses these issues by making available reusable software components that aim to support the pedagogical qualities of serious games. In order to easily deploy and integrate these game components in a multitude of game engines, platforms and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall system’s concept and its practical benefits. First, the Emotion Detection component uses the learners’ webcams to capture their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning-analytics data processing, which allows instructors to track and inspect learners’ progress without having to deal with the required statistics computations. Third, a set of language-processing components accommodates the analysis of learners’ textual inputs, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for data storage - e.g. for player data or game world data - across multiple software components. The presented components are representative of the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions.
Abstract:
The purpose of this thesis is to estimate the performance of the ALICE detector in reconstructing the Lambda_c baryon in PbPb collisions, using an innovative approach to particle identification. The main idea of the new approach is to replace the usual particle selection, based on cuts applied to the detector signals, with a selection that uses the probabilities derived from Bayes' theorem (hence it is called "Bayesian weighting"). To establish which method is the most efficient, a comparison with other standard approaches used in ALICE is presented. To this end, a "fast" Monte Carlo simulation software was implemented, configured with the particle abundances expected in the new LHC energy regime and with the observed performance of the detector. A realistic estimate of Lambda_c production was then derived by combining known results from previous experiments, and this was used to estimate the significance for the statistics expected in LHC Run 2 and Run 3. The physics of ALICE is described, including the Standard Model, quantum chromodynamics and the quark-gluon plasma. Some recent experimental results (RHIC and LHC) are then analysed. The operation of ALICE and its components is described, and finally the results obtained are analysed. These show that the method has a higher efficiency than the usual approaches in ALICE and that, consequently, to quantify the performance of the new method even better, a "full" simulation should be run in order to verify the results obtained in a fully realistic scenario.
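The Bayesian weighting idea can be shown compactly: for each track, the detector-response likelihoods for the competing species hypotheses are combined with prior abundances via Bayes' theorem, and the resulting posterior probabilities replace hard signal cuts. The numbers below are purely illustrative and are not ALICE values:

```python
def bayesian_pid(likelihoods: dict, priors: dict) -> dict:
    """Posterior probability of each species given detector likelihoods and expected abundances."""
    unnormalised = {s: likelihoods[s] * priors[s] for s in likelihoods}
    total = sum(unnormalised.values())
    return {s: p / total for s, p in unnormalised.items()}

# Illustrative only: likelihoods from a detector response, priors from expected abundances
likelihoods = {"pion": 0.30, "kaon": 0.55, "proton": 0.15}
priors = {"pion": 0.80, "kaon": 0.12, "proton": 0.08}
print(bayesian_pid(likelihoods, priors))  # the pion hypothesis dominates despite a lower likelihood
```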
Abstract:
This paper presents a study undertaken to examine human interaction with a pedagogical agent and the passive and active detection of such agents within a synchronous online environment. A pedagogical agent is a software application which can provide human-like interaction using a natural language interface. These may be familiar from smartphone interfaces such as ‘Siri’ or ‘Cortana’, or the virtual online assistants found on some websites, such as ‘Anna’ on the Ikea website. Pedagogical agents are characters on the computer screen with embodied life-like behaviours such as speech, emotions, locomotion, gestures, and movements of the head, the eyes, or other parts of the body. In the passive detection test, participants are not primed to the potential presence of a pedagogical agent within the online environment; in the active detection test, participants are primed to its potential presence. The purpose of the study was to examine how people passively detected pedagogical agents that were presenting themselves as humans in an online environment. In order to locate the pedagogical agent in a realistic higher-education online environment, problem-based learning online was used. Problem-based learning online provides a focus for discussion and participation without creating too much artificiality. The findings indicated that the ways in which students positioned the agent tended to influence the interaction between them. One of the key findings was that since the agent was focussed mainly on the pedagogical task, this may have hampered its interaction with the students; however, some of its non-task dialogue did improve students' perceptions of the autonomous agent’s ability to interact with them. It is suggested that future studies explore the differences between the relationships and interactions of learner and pedagogical agent within authentic situations, in order to understand whether students' interactions differ between real and virtual mentors in an online setting.
Abstract:
In cardiovascular disease, the definition and the detection of ECG parameters related to repolarization dynamics in post-MI patients are still a crucial unmet need. In addition, a 3D sensor in implantable medical devices would be a crucial means of assessing or predicting Heart Failure status, but the inclusion of such a feature is limited by hardware and firmware constraints. The aim of this thesis is the definition of a reliable surrogate of the 500 Hz ECG signal to reach the aforementioned objective. To evaluate the loss of delineation reliability due to sampling-frequency reduction, the signals were consecutively down-sampled by factors of 2, 4 and 8, thus obtaining ECG signals sampled at 250, 125 and 62.5 Hz, respectively. The final goal is a feasibility assessment of the detection of the fiducial points, in order to translate those parameters into clinically meaningful parameters for Heart Failure prediction, such as T-wave interval heterogeneity and variability of the areas under the T waves. An experimental setting for data collection on healthy volunteers was set up at the Bakken Research Center in Maastricht. A 16-channel ambulatory system, provided by TMSI, recorded the standard 12-lead ECG, two 3D accelerometers and a respiration sensor. The collection platform was set up with TMSI's proprietary software Polybench, and the data analysis of these signals was performed in Matlab. The main results of this study show that the 125 Hz sampling rate is a good candidate for reliable detection of the fiducial points. T-wave intervals proved to be consistently stable, even at 62.5 Hz. Further studies would be needed to provide a better comparison between sampling at 250 Hz and at 125 Hz for the areas under the T waves.
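The consecutive down-sampling by factors of 2, 4 and 8 (the thesis analysis itself was done in Matlab) corresponds to standard decimation, which low-pass filters the signal before discarding samples to avoid aliasing. A minimal Python equivalent with a placeholder waveform standing in for the recorded leads:

```python
import numpy as np
from scipy.signal import decimate

fs = 500                                   # original sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)          # placeholder signal; use the recorded ECG leads in practice

# Decimation by 2, 4 and 8 yields the 250, 125 and 62.5 Hz surrogates
ecg_250 = decimate(ecg, 2)                 # an anti-aliasing filter is applied before down-sampling
ecg_125 = decimate(ecg, 4)
ecg_62_5 = decimate(ecg, 8)

print(len(ecg), len(ecg_250), len(ecg_125), len(ecg_62_5))
```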
Abstract:
The recent trend in embedded system development opens new prospects for applications that were not possible in the past. Eye tracking for sleep and fatigue detection has become an important and useful application in industrial and automotive scenarios, since fatigue is one of the most prevalent causes of earth-moving equipment accidents. Typical solutions such as cameras, accelerometers and dermal analyzers are present on the market but have some drawbacks. This thesis project used the EEG signal, particularly alpha waves, to overcome them through an embedded software/hardware implementation that detects these signals in real time.
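A common software building block for this kind of alpha-wave monitoring is band-pass filtering the EEG to the alpha band (8-12 Hz) and tracking the band power over time; a sustained rise relative to an alert baseline can then be flagged as drowsiness. The sketch below is a generic illustration, not the thesis's embedded implementation, and the detection threshold is left to the caller:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def alpha_band_power(eeg: np.ndarray, fs: float) -> float:
    """Mean power of the EEG signal within the 8-12 Hz alpha band."""
    sos = butter(4, [8, 12], btype="bandpass", fs=fs, output="sos")
    alpha = sosfiltfilt(sos, eeg)
    return float(np.mean(alpha ** 2))

# Usage: compute alpha_band_power over short sliding windows and compare it
# against an eyes-open, alert baseline to decide when to raise a fatigue alarm.
```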
Abstract:
The FIREDASS (FIRE Detection And Suppression Simulation) project is concerned with the development of fine water mist systems as a possible replacement for the halon fire suppression system currently used in aircraft cargo holds. The project is funded by the European Commission under the BRITE EURAM programme. The FIREDASS consortium is made up of a combination of industrial, academic, research and regulatory partners. As part of this programme of work, a computational model has been developed to help engineers optimise the design of the water mist suppression system. This computational model is based on Computational Fluid Dynamics (CFD) and is composed of the following components: fire model, mist model, two-phase radiation model, suppression model and detector/activation model. The fire model - developed by the University of Greenwich - uses prescribed release rates for heat and gaseous combustion products to represent the fire load. Typical release rates have been determined through experimentation conducted by SINTEF. The mist model - developed by the University of Greenwich - is a Lagrangian particle-tracking procedure that is fully coupled to both the gas phase and the radiation field. The radiation model - developed by the National Technical University of Athens - is described using a six-flux radiation model. The suppression model - developed by SINTEF and the University of Greenwich - is based on an extinguishment criterion that relies on oxygen concentration and temperature. The detector/activation model - developed by Cerberus - allows many different detector and mist configurations to be tested within the computational model. These sub-models have been integrated by the University of Greenwich into the FIREDASS software package. The model has been validated using data from the SINTEF/GEC test campaigns, and it has been found that the computational model gives good agreement with these experimental results. The best agreement is obtained at the ceiling, which is where the detectors and misting nozzles would be located in a real system. In this paper the model is briefly described and some results from the validation of the fire and mist models are presented.