878 results for Automatic device
Abstract:
Images of a scene, static or dynamic, are generally acquired at different epochs and from different viewpoints. They potentially gather information about the whole scene and its relative motion with respect to the acquisition device. Data from different visual sources (in the spatial or temporal domain) can be fused together to provide a unique, consistent representation of the whole scene, even recovering the third dimension, permitting a more complete understanding of the scene content. Moreover, the pose of the acquisition device can be recovered by estimating the relative motion parameters linking different views, thus providing localization information for automatic guidance purposes. Image registration is based on the use of pattern recognition techniques to match corresponding parts of different views of the acquired scene. Depending on hypotheses or prior information about the sensor model, the motion model and/or the scene model, this information can be used to estimate global or local geometrical mapping functions between different images or different parts of them. These mapping functions contain the relative motion parameters between the scene and the sensor(s) and can be used to integrate information coming from the different sources accordingly, to build a wider or even augmented representation of the scene. Thanks to their scene-reconstruction and pose-estimation capabilities, image registration techniques based on multiple views are nowadays attracting increasing interest from the scientific and industrial communities. Depending on the application domain, the accuracy, robustness and computational load of the algorithms are important issues to be addressed, and a trade-off among them generally has to be reached. Moreover, on-line performance is desirable in order to guarantee the direct interaction of the vision device with human actors or control systems. This thesis follows a general research approach to cope with these issues, almost independently of the scene content, under the constraint of rigid motions. This approach is motivated by portability to very different domains, a highly desirable property. A general image registration approach suitable for on-line applications has been devised and assessed through two challenging case studies in different application domains. The first case study regards scene reconstruction through on-line mosaicing of optical microscopy cell images acquired with non-automated equipment, while manually moving the microscope holder. By registering the images, the field of view of the microscope can be widened, preserving the resolution while reconstructing the whole cell culture and permitting the microscopist to explore it interactively. In the second case study, the registration of terrestrial satellite images acquired by a camera integral with the satellite is used to estimate the satellite's three-dimensional orientation from visual data, for automatic guidance purposes. Critical aspects of these applications are emphasized and the choices adopted are motivated accordingly. Results are discussed in view of promising future developments.
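As an illustration of the registration step described above, the following is a minimal Python sketch assuming OpenCV, a rigid-motion model and two overlapping grayscale views; names and parameters are illustrative, not the method actually developed in the thesis. It matches local features between the two views and robustly estimates the geometrical mapping function linking them.

```python
import cv2
import numpy as np

def estimate_rigid_mapping(img_a, img_b):
    """Estimate a rigid (rotation + translation + uniform scale) mapping from
    img_a to img_b by matching local features, as in a generic registration
    pipeline. Inputs are grayscale numpy arrays."""
    orb = cv2.ORB_create(1000)                      # feature detector/descriptor
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly fit a partial affine model with RANSAC; the 2x3 matrix M
    # contains the relative motion parameters between the two views.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M
```

The returned 2x3 matrix can then be used to warp one image onto the other (e.g. with cv2.warpAffine) when accumulating views into a mosaic.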
Abstract:
Study of a mobile web interface for the XWiki platform.
Abstract:
Electronic devices based on organic semiconductors have gained increased attention in nanotechnology, especially in the fields of field-effect transistors and photovoltaics. A promising class of materials in this research field are polycyclic aromatic hydrocarbons (PAHs). Alkyl substitution of these graphene-like molecules results in self-organization into one-dimensional columnar superstructures and provides solubility and processability. The nano-phase separation between the π-stacking aromatic cores and the disordered peripheral alkyl chains leads to the formation of thermotropic mesophases. Hexa-peri-hexabenzocoronene (HBC), as an example of a PAH, exhibits some of the highest charge carrier mobility values among mesogens, which makes these materials promising candidates for electronic devices. Prerequisites for efficient charge carrier transport between electrodes are a high purity of the material, to reduce possible trapping sites for charge carriers, and a pronounced, defect-free, long-range order. Appropriate processing techniques are required to induce a high degree of aligned structures in the discotic material over macroscopic dimensions. Highly ordered supramolecular structures of different discotics, in particular of HBC derivatives, have been obtained by solution processing using the zone-casting technique, zone-melting or simple extrusion. Simplicity and the fabrication of highly oriented columnar structures over long ranges are the most essential advantages of these zone-processing methods. A close relation between molecular design, self-aggregation and the processing conditions has been revealed. The long-range order achieved by zone-casting proved to be suitable for field-effect transistors (FETs).
Towards model driven software development for Arduino platforms: a DSL and automatic code generation
Abstract:
The thesis aims to explore the production of software systems for embedded systems using techniques from the world of Model Driven Software Development. The most important phase of the development is the definition of a meta-model that characterizes the fundamental concepts of embedded systems. This model tries to abstract from the particular platform in use and to identify which abstractions characterize the world of embedded systems in general; the meta-model is therefore platform-independent. For automatic code generation a reference platform was adopted, namely Arduino. Arduino is an embedded system that is becoming increasingly popular because it combines a good level of performance with a relatively low price. The platform enables the development of special-purpose systems that use sensors and actuators of various kinds, easily connected to the available pins. The meta-model defined is an instance of the MOF meta-metamodel, formally defined by the OMG organization. This allows the developer to think of a system in the form of a model, an instance of the defined meta-model. A meta-model can also be regarded as the abstract syntax of a language, so it can be defined by a set of EBNF rules. The technology used for the definition of the meta-model was Xtext: a framework that allows writing EBNF rules and automatically generates the Ecore model associated with the defined meta-model. Ecore is the implementation of EMOF in the Eclipse environment. Xtext also generates plugins that provide an editor guided by the syntax defined in the meta-model. Automatic code generation was implemented using the Xtend2 language, which allows exploring the Abstract Syntax Tree produced by translating the model into Ecore and generating all the necessary code files. The generated code provides practically the entire schematic part of the application, while leaving the development of the business logic to the application designer. After the definition of the meta-model of an embedded system, the level of abstraction was moved higher, towards the definition of the part of the meta-model concerning the interaction of an embedded system with other systems. The perspective thus shifted towards the notion of a System, understood as a set of interacting individual systems; this definition is made from the point of view of the individual system whose model is being defined. The thesis also introduces a case study which, although fairly simple, provides an example and a tutorial for developing applications using the meta-model, and shows how the task of the application designer becomes rather simple and immediate, provided it is based on a good analysis of the problem. The results obtained were of good quality, and the meta-model is translated into code that works correctly.
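The generation step can be pictured, in spirit, with a much simpler sketch than the Xtext/Xtend2 toolchain used in the thesis: below, a hypothetical Python generator turns a tiny platform-independent model of sensors and actuators into the boilerplate of an Arduino sketch, leaving the business logic to the application designer. All names and the model format are illustrative assumptions.

```python
# Hypothetical, simplified stand-in for the Xtend2 generator described above:
# a platform-independent model of devices is translated into Arduino C boilerplate.
model = {
    "sensors":   [{"name": "lightSensor", "pin": "A0"}],
    "actuators": [{"name": "led", "pin": 13}],
}

def generate_arduino_sketch(model):
    lines = ["// Auto-generated schematic part; business logic goes in loop()."]
    for dev in model["sensors"] + model["actuators"]:
        lines.append(f"const int {dev['name']}Pin = {dev['pin']};")
    lines.append("void setup() {")
    for s in model["sensors"]:
        lines.append(f"  pinMode({s['name']}Pin, INPUT);")
    for a in model["actuators"]:
        lines.append(f"  pinMode({a['name']}Pin, OUTPUT);")
    lines.append("}")
    lines.append("void loop() {\n  // application-specific business logic\n}")
    return "\n".join(lines)

print(generate_arduino_sketch(model))
```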
Abstract:
The central objective of research in Information Retrieval (IR) is to discover new techniques to retrieve relevant information in order to satisfy an Information Need. The Information Need is satisfied when relevant information can be provided to the user. In IR, relevance is a fundamental concept which has changed over time, from popular to personal: what was considered relevant before was information for the whole population, whereas what is considered relevant now is specific information for each user. Hence, there is a need to connect the behavior of the system to the condition of a particular person and their social context; thereby an interdisciplinary sector called Human-Centered Computing was born. For the modern search engine, the information extracted for the individual user is crucial. According to Personalized Search (PS), two different techniques are necessary to personalize a search: contextualization (interconnected conditions that occur in an activity) and individualization (characteristics that distinguish an individual). This shift of focus to the individual's need undermines the rigid linearity of the classical model, overtaken by the "berry-picking" model, which explains that search terms change thanks to the informational feedback received from the search activity, introducing the concept of evolution of search terms. The development of Information Foraging theory, which observed the correlations between animal foraging and human information foraging, also contributed to this transformation through attempts to optimize the cost-benefit ratio. This thesis arose from the need to satisfy human individuality when searching for information, and it develops a synergistic collaboration between the frontiers of technological innovation and the recent advances in IR. The search method developed exploits what is relevant for the user by radically changing the way in which an Information Need is expressed, because it is now expressed through the generation of the query and its own context. As a matter of fact, the method was born with the aim of improving the quality of search by rewriting the query based on contexts automatically generated from a local knowledge base. Furthermore, the idea of optimizing each IR system led to developing it as a middleware of interaction between the user and the IR system. Thereby the system has just two possible actions: rewriting the query and reordering the results. Actions equivalent to this approach have been described in PS, which generally exploits information derived from the analysis of user behavior, while the proposed approach exploits knowledge provided by the user. The thesis went further, generating a novel method for an assessment procedure, according to the "Cranfield paradigm", in order to evaluate this type of IR system. The results achieved are interesting considering both the effectiveness achieved and the innovative approach undertaken, together with the several applications inspired by the use of a local knowledge base.
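The two middleware actions can be pictured with a minimal Python sketch (all names and data are hypothetical illustrations of the idea, not the thesis implementation), where a local knowledge base maps a user's terms to their personal context:

```python
# Minimal sketch of the two middleware actions: query rewriting from a local
# knowledge base, and result reordering. Names and data are hypothetical.
local_kb = {"jaguar": ["car", "automotive"],        # user-provided context
            "python": ["programming", "language"]}

def rewrite_query(query):
    """Expand each query term with its user-specific context terms."""
    terms = query.lower().split()
    expansion = [c for t in terms for c in local_kb.get(t, [])]
    return " ".join(terms + expansion)

def reorder_results(results, query):
    """Reorder results by overlap with the rewritten (contextualized) query."""
    ctx = set(rewrite_query(query).split())
    return sorted(results, key=lambda doc: -len(ctx & set(doc.lower().split())))

print(rewrite_query("jaguar speed"))   # -> "jaguar speed car automotive"
```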
Abstract:
The identification of people by measuring some traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis is focused on improving fingerprint recognition systems considering three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint. If the fingerprint is very noisy, we are not able to detect a reliable set of features. A new fingerprint enhancement method, which is both iterative and contextual, is proposed. This approach detects high-quality regions in fingerprints, selectively applies contextual filtering and iteratively expands like wildfire toward low-quality ones. A precise estimation of the orientation field greatly simplifies the estimation of other fingerprint features (singular points, minutiae) and improves the performance of a fingerprint recognition system. Fingerprint orientation extraction is improved along two directions. First, after the introduction of a new taxonomy of fingerprint orientation extraction methods, several variants of baseline methods are implemented and, pointing out the role of pre- and post-processing, we show how to improve the extraction. Second, the introduction of a new hybrid orientation extraction method, which follows an adaptive scheme, significantly improves orientation extraction in noisy fingerprints. Scientific papers typically propose recognition systems that integrate many modules, and therefore an automatic evaluation of fingerprint algorithms is needed to isolate the contributions that determine actual progress in the state of the art. The lack of a publicly available framework to compare fingerprint orientation extraction algorithms motivated the introduction of a new benchmark area called FOE (including fingerprints and manually marked orientation ground truth), along with fingerprint matching benchmarks, in the FVC-onGoing framework. The success of this framework is attested by relevant statistics: more than 1450 algorithms submitted and two international competitions.
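To make the baseline concrete, here is a minimal Python sketch of the classic gradient-based orientation extraction, one of the baseline families such a taxonomy covers (this is the textbook algorithm, not the hybrid method proposed in the thesis):

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def orientation_field(img, block=16):
    """Classic gradient-based fingerprint orientation estimation:
    average the doubled gradient angles over local blocks.
    img is a grayscale fingerprint image as a 2D numpy array."""
    gx = sobel(img.astype(float), axis=1)
    gy = sobel(img.astype(float), axis=0)
    # The doubled-angle representation makes orientations (defined mod pi)
    # averageable within a block without cancellation.
    gxx = uniform_filter(gx * gx, block)
    gyy = uniform_filter(gy * gy, block)
    gxy = uniform_filter(gx * gy, block)
    theta = 0.5 * np.arctan2(2 * gxy, gxx - gyy)   # dominant gradient direction
    return theta + np.pi / 2                        # ridge orientation is orthogonal
```

The pre- and post-processing discussed above (e.g. smoothing the input image, regularizing the resulting field) acts before and after this core averaging step.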
Abstract:
In this thesis the effects of plasma actuators based on Dielectric Barrier Discharge (DBD) technology on a NACA 0015 two-dimensional airfoil have been analyzed experimentally, at low Reynolds number. The work developed in this thesis was carried out in partnership with the Department of Electrical Engineering of Università di Bologna, in the wind tunnel of the Applied Aerodynamics Laboratory of the Aerospace Engineering faculty. In order to verify the effectiveness of these active control devices, the analysis has shown how the actuators succeed in preventing boundary layer separation only in certain conditions of angle of attack and Reynolds number. Moreover, the effect of the actuators' chordwise position has also been analyzed, together with the influence of steady and unsteady operation.
Abstract:
During the last decade, peach and nectarine fruit have lost considerable market share, due to increased consumer dissatisfaction with quality at retail markets. This is mainly due to the harvesting of too-immature fruit and high ripening heterogeneity. The main problem is that the traditionally used maturity indexes are unable to objectively detect the fruit maturity stage or the variability present in the field, leading to difficult post-harvest management of the product and to high fruit losses. To assess fruit ripening more precisely, other techniques and devices can be used. Recently, a new non-destructive maturity index based on vis-NIR technology, the Index of Absorbance Difference (IAD), which correlates with fruit degreening and ethylene production, was introduced, and the IAD was used to study peach and nectarine fruit ripening from the "field to the fork". In order to choose the best techniques to improve fruit quality, a detailed description of the tree structure, of fruit distribution and of ripening evolution on the tree was undertaken. In more detail, an architectural model (PlantToon®) was used to describe the tree structure, and the IAD was applied to characterize the maturity stage of each fruit. Their combined use provided an objective and precise evaluation of the fruit ripening variability, related to different training systems, crop load, fruit exposure and internal temperature. Based on simple field assessments of fruit maturity (as IAD) and growth, a model for an early prediction of harvest date and yield was developed and validated. The relationship between the non-destructive maturity index IAD and the fruit shelf-life was also confirmed. Finally, the results obtained were validated by consumer tests: fruit sorted into different maturity classes obtained different consumer acceptance. The improved knowledge led to an innovative management of peach and nectarine fruit, from "field to market".
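For reference, the index itself is simple to compute: the IAD is the difference in absorbance at two wavelengths around the chlorophyll-a absorption peak (670 and 720 nm for the DA-meter). A minimal Python sketch follows, with purely hypothetical maturity-class thresholds:

```python
def index_of_absorbance_difference(a_670, a_720):
    """IAD = A(670 nm) - A(720 nm): tracks chlorophyll-a degradation,
    and hence fruit degreening, non-destructively."""
    return a_670 - a_720

def maturity_class(iad, thresholds=(0.6, 1.4)):
    """Hypothetical class boundaries: lower IAD means less chlorophyll,
    i.e. a riper fruit. Actual thresholds are cultivar-specific."""
    if iad < thresholds[0]:
        return "ready to eat"
    elif iad < thresholds[1]:
        return "ready in a few days"
    return "immature"
```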
Abstract:
A Cognitive Radio is a device able to react to changes in the radio environment in which it operates, autonomously and dynamically modifying its own operating parameters, including frequency, transmission power and modulation. The basic principle of these devices is dynamic access to potentially unused radio resources: unlicensed users can exploit frequencies that, in a given region of space and time, are not in use, while taking care not to interfere with the users holding privileges on that part of the spectrum. The so-called "spectrum holes" or "white spaces", assigned but unused portions of spectrum, must therefore be identified; the devices take their name from them. One way for a Cognitive Radio to identify spectrum holes is to try to detect the signal intended for the primary users; this technique is known as Spectrum Sensing and essentially provides a measurement within the channel under consideration in order to determine whether a protected service is present. The sensing technique employed by a WSD operating autonomously is, however, not very efficient, since it does not guarantee good protection to the DTT receivers that use the same channel on which the WSD intends to transmit. At the European level, the solution considered most reliable for avoiding interference with DTT receivers is the use of a geo-location database operating in cooperation with the cognitive device. The purpose of this thesis is to present an algorithm that combines the two approaches, geo-location database and sensing, to define the power levels that a WSD may transmit.
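A minimal Python sketch of such a combination rule follows (purely illustrative: the threshold, back-off and function names are hypothetical assumptions, not the parameters of the algorithm presented in the thesis):

```python
def allowed_tx_power(db_limit_dbm, sensed_dbm, sensing_threshold_dbm=-114.0,
                     protection_backoff_db=6.0):
    """Combine the geo-location database limit with a spectrum-sensing
    measurement to pick the WSD transmit power on a candidate channel."""
    if sensed_dbm > sensing_threshold_dbm:
        # Sensing suggests a DTT signal the database did not account for:
        # back off to add protection for the primary receivers.
        return db_limit_dbm - protection_backoff_db
    return db_limit_dbm   # database limit already protects primary users

print(allowed_tx_power(db_limit_dbm=20.0, sensed_dbm=-110.0))  # -> 14.0
```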
Abstract:
The work carried out in this thesis consists in porting the network Monitor from Linux to Android; the Monitor is part of a more complex system known as ABPS. The role of the Monitor is to dynamically configure all the network interfaces available on the device it runs on, so as to always be connected to the best known network, e.g. the best Access Point in the case of the wireless interface.
Abstract:
“Plasmon” is a synonym for collective oscillations of the conduction electrons in a metal nanoparticle (excited by an incoming light wave), which cause strong optical responses like efficient light scattering. The scattering cross-section with respect to the light wavelength depends not only on the material, size and shape of the nanoparticle, but also on the refractive index of the embedding medium. For this reason, plasmonic nanoparticles are interesting candidates for sensing applications. Here, two novel setups for rapid spectral investigations of single nanoparticles and different sensing experiments are presented.

Precisely, the novel setups are based on an optical microscope operated in darkfield mode. For the fast single particle spectroscopy (fastSPS) setup, the entrance pinhole of a coupled spectrometer is replaced by a liquid crystal device (LCD) acting as a spatially addressable electronic shutter. This improvement allows the automatic and continuous investigation of several particles in parallel for the first time. The second novel setup (RotPOL) uses a rotating wedge-shaped polarizer and encodes the full polarization information of each particle within one image, which reveals the symmetry of the particles and their plasmon modes. Both setups are used to observe nanoparticle growth in situ on a single-particle level to extract quantitative data on nanoparticle growth.

Using the fastSPS setup, I investigate the membrane coating of gold nanorods in aqueous solution and show unequivocally the subsequent detection of protein binding to the membrane. This binding process leads to a spectral shift of the particle's resonance due to the higher refractive index of the protein compared to water. Hence, the nanosized addressable sensor platform allows for local analysis of protein interactions with biological membranes as a function of the lateral composition of phase-separated membranes.

The sensitivity to changes in the environmental refractive index depends on the particles' aspect ratio. On the basis of simulations and experiments, I could show the existence of an optimal aspect ratio range between 3 and 4 for gold nanorods for sensing applications. A further sensitivity increase can only be reached by chemical modifications of the gold nanorods. This can be achieved by synthesizing an additional porous gold cage around the nanorods, resulting in a plasmon sensitivity raise of up to 50 % for those “nanorattles” compared to gold nanorods with the same resonance wavelength. Another possibility is to coat the gold nanorods with a thin silver shell. This reduces the single particle's resonance spectral linewidth by about 30 %, which improves the resolution of the observable shift.

This silver coating evokes the interesting effect of reducing the ensemble plasmon linewidth by changing the relation connecting particle shape and plasmon resonance wavelength. This change, which I term plasmonic focusing, leads to less variation of resonance wavelengths for the same particle size distribution, which I show experimentally and theoretically.

In a system of two coupled nanoparticles, the plasmon modes of the transversal and longitudinal axes depend on the refractive index of the environmental solution, but only the latter is influenced by the interparticle distance. I show that monitoring both modes provides a self-calibrating system, where interparticle distance variations and changes of the environmental refractive index can be determined with high precision.
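The underlying sensing principle can be summarized by the standard first-order model for localized plasmon resonances, in which the resonance shift is proportional to the change of the embedding refractive index (a Python sketch with hypothetical numbers, not data from this thesis):

```python
def plasmon_shift(sensitivity_nm_per_riu, delta_n):
    """First-order LSPR sensing model: resonance shift = m * delta_n,
    with m the bulk refractive-index sensitivity (nm per RIU)."""
    return sensitivity_nm_per_riu * delta_n

# Hypothetical example: m = 250 nm/RIU, a protein layer raising the local
# index from water (1.333) to ~1.35 gives a shift of ~4 nm.
print(plasmon_shift(250.0, 1.35 - 1.333))
```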
Abstract:
Regulatory framework on Medical Devices and its evolution (2007/47, UNI CEI EN ISO 14971). Medical Device software: certification process, management of medical IT networks, roles and responsibilities (CEI 80001-1). Use cases and guidelines: MEDDEV and the Swedish guidelines, with related examples applicable to healthcare organizations.
Abstract:
In this report a new automated optical test for the next generation of photonic integrated circuits (PICs) is presented through the design and assessment of a test-bed. After a brief analysis of the critical problems of current optical tests, the main test features are defined: automation and flexibility, a relaxed alignment procedure, speed-up of the entire test, and data reliability. After studying various solutions, the test-bed components are chosen to be a lens array, a photo-detector array and a software controller. Each device is studied and calibrated; the spatial resolution and the robustness against interference at the photo-detector array are investigated. The software is programmed to manage both the PIC input and the photo-detector array output, as well as the data analysis. The test is validated by analysing a state-of-the-art 16-port PIC: the waveguide locations, power versus current, and the time-spatial power distribution are measured, as well as the optical continuity of an entire path of the PIC. Complexity, alignment tolerance and measurement time are also discussed.
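The kind of sweep mentioned above can be sketched as follows (a minimal Python illustration with hypothetical instrument callbacks; the actual software controller, drivers and channel mapping of the test-bed are not shown here):

```python
import numpy as np

def power_vs_current(set_current_ma, read_powers_mw, currents_ma):
    """Sweep the PIC drive current and record the optical power seen by each
    channel of the photo-detector array (set/read callbacks are hypothetical)."""
    rows = []
    for i in currents_ma:
        set_current_ma(i)              # drive the device under test
        rows.append(read_powers_mw())  # one reading per detector channel
    return np.array(rows)              # shape: (n_currents, n_channels)

def locate_waveguide(power_matrix):
    """The channel with maximum mean power locates the waveguide under test;
    the per-channel profile gives the spatial power distribution across ports."""
    return int(np.argmax(power_matrix.mean(axis=0)))
```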