989 results for Mecanic scanner - prototype
Abstract:
The electrocardiogram is nowadays very important in the diagnosis of heart disease. The rise of portable technology provides medical monitoring of vital signs, allowing freedom of movement and observation during the patient's normal activity. This study describes the development of a prototype of an ambulatory cardiac monitoring system using 3 leads. The system converts the analog signal, previously processed and conditioned, into a digital ECG signal, which is then processed by a microcontroller (MCU). The heartbeat rate can be observed on an LCD display, which is also used as the interface during the setup process. The whole digital data stream can be stored on an SD memory card, allowing the ECG signal to be accessed later on a PC.
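As a rough illustration of the heartbeat-rate computation such firmware performs, the sketch below estimates beats per minute from a digitized ECG channel with a naive R-peak detector; the sampling rate, threshold, and refractory period are assumed values, not the prototype's actual parameters.

```python
# Hypothetical sketch: heart rate from a digitized ECG stream via simple
# R-peak detection, analogous to what the MCU firmware would compute.
import numpy as np

def heart_rate_bpm(ecg, fs=250.0, refractory_s=0.25):
    """Estimate heart rate from one ECG channel sampled at fs Hz."""
    x = ecg - np.mean(ecg)                 # remove DC offset
    threshold = 0.6 * np.max(x)            # crude fixed threshold (assumption)
    refractory = int(refractory_s * fs)    # ignore peaks closer than 250 ms
    peaks, last = [], -refractory
    for i in range(1, len(x) - 1):
        if x[i] > threshold and x[i] >= x[i - 1] and x[i] >= x[i + 1]:
            if i - last >= refractory:
                peaks.append(i)
                last = i
    if len(peaks) < 2:
        return None
    rr = np.diff(peaks) / fs               # R-R intervals in seconds
    return 60.0 / np.mean(rr)

# Synthetic test: one spike per second (60 bpm) over 10 s at 250 Hz
fs = 250
ecg = np.zeros(10 * fs)
ecg[::fs] = 1.0
print(heart_rate_bpm(ecg, fs))             # ~60.0
```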
Abstract:
In the context of this work we evaluated a multisensor, noninvasive prototype platform for shake flask cultivations by monitoring three basic parameters (pH, pO2 and biomass). The focus is on the evaluation of the biomass sensor, which is based on backward light scattering. The application spectrum was expanded to four new organisms in addition to E. coli K12 and S. cerevisiae [1]. The sensor proved appropriate for a wide range of standard microorganisms, e.g., L. zeae, K. pastoris, A. niger and CHO-K1. The biomass sensor signal was successfully correlated and calibrated with well-known measurement methods such as OD600, cell dry weight (CDW) and cell concentration. Logarithmic and Bleasdale-Nelder-derived functions were adequate for data fitting. Measurements at low cell concentrations proved critical because of a low signal-to-noise ratio, but integrating a custom-made light shade into the shake flask improved these measurements significantly. This sensor-based measurement method has high potential to initiate a new generation of online bioprocess monitoring. Metabolic studies in particular will benefit from the multisensor data acquisition. The sensor is already used in lab-scale experiments for shake flask cultivations.
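A minimal sketch of the calibration step named above, fitting OD600 against the backscatter signal with a logarithmic and a Bleasdale-Nelder-type model; the data and parameter values are placeholders, not measurements from the study.

```python
# Hedged calibration sketch: fit backscatter signal vs. OD600 with the two
# model families named in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def logarithmic(s, a, b):
    return a * np.log(s) + b

def bleasdale_nelder(s, a, b, c):
    # Bleasdale-Nelder form y = (a + b*s)^(-1/c); base clipped for safety
    return np.clip(a + b * s, 1e-9, None) ** (-1.0 / c)

# Placeholder backscatter readings (a.u.) and OD600 values generated from a
# known Bleasdale-Nelder curve plus noise -- illustrative data only.
rng = np.random.default_rng(1)
signal = np.linspace(100.0, 6000.0, 15)
od600 = bleasdale_nelder(signal, 6.0, -9e-4, 2.0) + rng.normal(0, 0.01, signal.size)

p_bn, _ = curve_fit(bleasdale_nelder, signal, od600, p0=[5.0, -8e-4, 1.5], maxfev=20000)
p_log, _ = curve_fit(logarithmic, signal, od600)
print("Bleasdale-Nelder parameters:", p_bn)
print("logarithmic parameters:", p_log)
```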
Abstract:
In this paper we present a fast and precise method to estimate the planar motion of a lidar from consecutive range scans. For every scanned point we formulate the range flow constraint equation in terms of the sensor velocity, and we minimize a robust function of the resulting geometric constraints to obtain the motion estimate. In contrast to traditional approaches, this method does not search for correspondences but performs dense scan alignment based on the scan gradients, in the fashion of dense 3D visual odometry. The minimization problem is solved in a coarse-to-fine scheme to cope with large displacements, and a smoothing filter based on the covariance of the estimate is employed to handle uncertainty in under-constrained scenarios (e.g. corridors). Simulated and real experiments have been performed to compare our approach with two prominent scan matchers and with wheel odometry. Quantitative and qualitative results demonstrate the superior performance of our approach, which, together with its very low computational cost (0.9 milliseconds on a single CPU core), makes it suitable for robotic applications that require planar odometry. For this purpose, we also provide the code so that the robotics community can benefit from it.
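The robust minimization over per-point geometric constraints can be sketched generically: each scanned point contributes one linear constraint on the planar twist (vx, vy, w), and a robust loss downweights outliers such as moving objects. The sketch below is a schematic stand-in under that assumption, not the authors' exact range flow formulation.

```python
# Schematic of robust, correspondence-free motion estimation: every point
# yields one constraint J_i @ xi = y_i on the twist xi = (vx, vy, w); a
# Cauchy loss keeps outliers from corrupting the estimate.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
xi_true = np.array([0.3, -0.1, 0.05])        # ground-truth (vx, vy, w)

n = 500
J = rng.normal(size=(n, 3))                  # per-point constraint rows
y = J @ xi_true + rng.normal(0, 0.01, n)     # noisy observations
y[:25] += rng.normal(0, 5.0, 25)             # simulate moving-object outliers

def residuals(xi):
    return J @ xi - y

sol = least_squares(residuals, x0=np.zeros(3), loss="cauchy", f_scale=0.05)
print(sol.x)                                 # close to xi_true despite outliers
```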
Abstract:
Within the supply chain, the customer does not simply buy parts or services from suppliers, but also buys supplier capabilities, which result in quality products and services. Having a tool that handles the Inspection process as well as the Nonconformance, Complaint, Corrective Action and Concession processes is key to successfully tracking supplier performance. Taking as a case study a Supplier Quality Management (SQM) system currently in operation at an Original Equipment Manufacturer (OEM) in the automotive industry, this paper presents a platform to support a Supplier Quality Management System (SQMS) that fits the requirements of the technical specification ISO/TS 16949. The prototype is composed of a web platform and a mobile app, with flexibility and mobility as its key characteristics.
Abstract:
During the lifetime of a research project, different partners develop several research prototype tools that share many common aspects. This is equally true for researchers as individuals and as groups: over a period of time they often develop several related tools to pursue a specific research line. Making research prototype tools easily accessible to the community is of utmost importance to promote the corresponding research, get feedback, and extend the tools' lifetime beyond the duration of a specific project. One way to achieve this is to build graphical user interfaces (GUIs) that facilitate trying tools; in particular, with web interfaces one avoids the overhead of downloading and installing the tools. Building GUIs from scratch is a tedious task, in particular for web interfaces, and thus it typically gets low priority when developing a research prototype. Often we opt for copying the GUI of one tool and modifying it to fit the needs of a new related tool. Apart from code duplication, these tools will "live" separately, even though we might benefit from having them all in a common environment, since they are related. This work aims at simplifying the process of building GUIs for research prototype tools. In particular, we present EasyInterface, a toolkit based on a novel methodology that provides an easy way to make research prototype tools available via several common environments, such as a web interface, within Eclipse, etc. It includes a novel text-based output language that makes it possible to present results graphically without requiring any knowledge of GUI/web programming. For example, an output of a tool could be (a structured version of) "highlight line number 10 of file ex.c" and "when the user clicks on line 10, open a dialog box with the text ...". The environment interprets this output and converts it into the corresponding visual effects. The advantage of this approach is that the output is interpreted equally by all environments of EasyInterface, e.g., the web interface, the Eclipse plugin, etc. EasyInterface has been developed in the context of the Envisage [5] project and has been evaluated on tools developed in this project, which include static analyzers, test-case generators, compilers, simulators, etc. EasyInterface is open source and available on GitHub.
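A structured version of the quoted example output might look like the following; the actual EasyInterface command vocabulary and field names may differ.

```python
# Hypothetical structured form of the output commands quoted above; the real
# EasyInterface output language may use different actions and fields.
import json

output = {
    "commands": [
        {
            "action": "highlightLines",
            "file": "ex.c",
            "lines": [10],
            "onClick": {
                "action": "openDialog",
                "text": "Explanation shown when the user clicks line 10.",
            },
        }
    ]
}

# Any EasyInterface environment (web, Eclipse, ...) would render this the same way
print(json.dumps(output, indent=2))
```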
Abstract:
The aim of this study is to investigate the information provided by sulfur count rates obtained by X-ray fluorescence core scanning (XRF-CS) along sedimentary records. The analysis of two marine sediment cores from the Niger Delta margin shows that sulfur count rates obtained with XRF-CS at the surface of split core sections correlate both with direct quantitative pyrite concentrations, as inferred from X-ray powder diffraction (XRD) and sulfur determination by wavelength dispersive X-ray fluorescence (WD-XRF) spectrometry, and with total dissolved sulfide (TDS) contents in the sediment pore water. These findings demonstrate the potential of XRF-CS for providing continuous profiles of pyrite distribution along split sections of sediment cores. The potential of XRF-CS to detect TDS pore water enrichments in marine sediment records, even long after sediment recovery, is further discussed.
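As an illustration of the kind of correlation reported, one might compare depth-aligned XRF-CS sulfur count rates against reference pyrite concentrations; the numbers below are placeholders, not data from the Niger Delta cores.

```python
# Illustrative correlation check between scanner count rates and a reference
# method, with placeholder values for the same core depths.
import numpy as np
from scipy.stats import pearsonr

depth_cm  = np.array([10, 30, 50, 70, 90, 110, 130])
s_counts  = np.array([850, 1200, 2100, 1900, 3400, 2800, 4100])  # counts/s
pyrite_wt = np.array([0.4, 0.6, 1.1, 1.0, 1.8, 1.5, 2.2])        # wt% from XRD

r, p = pearsonr(s_counts, pyrite_wt)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```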
Abstract:
This thesis concerns the concentrating photovoltaic-thermal (C-PVT) solar collectors of the company Solarus AB and has two main objectives. The first is to design variants of the current Solarus design, together with several variants of the MaReCo (Maximum Reflector Collector) design and of the pure parabola. These designs incorporate new solar cell cuts and are based on 4-busbar solar cells, enabling an in-depth analysis comparing receiver and structure designs. The second objective is to analyze the electrical and thermal performance of the collectors that Solarus AB has installed at the University of Gävle (HiG). The data were obtained through simulations and dedicated software and subsequently analyzed in Microsoft Excel®. Two smaller projects were also carried out at the company: one consists of a market study of photovoltaic-thermal solar collectors, the other of a guide to the solar collector production process. The new receiver and structure designs have been left ready for the upcoming construction of prototypes, and future work has been planned to continue the project. Regarding the analysis of the university installation, the electrical and thermal performance was markedly lower than estimated.
Abstract:
The particulate matter (PM) distribution trends that exist in catalyzed particulate filters (CPFs) after loading, passive oxidation, active regeneration, and post loading conditions are not clearly understood. These data are required to optimize the operation of CPFs, prevent damage to CPFs caused by non-uniform distributions, and develop accurate CPF models. To develop an understanding of PM distribution trends, multiple tests were conducted and the PM distribution was measured in three dimensions using a terahertz wave scanner. The results of this work indicate that loading, passive oxidation, active regeneration, and post loading can all cause non-uniform PM distributions. The density of the PM in the substrate after loading and the amount of PM oxidized during passive oxidations and active regenerations affect the uniformity of the distribution. Post loading that occurs after active regenerations results in distributions that are less uniform than post loading that occurs after passive oxidations.
Abstract:
The purpose of this research study was to demonstrate the practical linguistic study and evaluation of dissertations by using two examples of the latest technology, the microcomputer and the optical scanner. That involved developing efficient methods for data entry and creating computer algorithms appropriate for personal linguistic studies. The goal was to develop a prototype investigation which demonstrated practical solutions for maximizing the linguistic potential of the dissertation data base. Text was entered with a Dest PC Scan 1000 optical scanner, whose function was to copy the complete stack of educational dissertations from the Florida Atlantic University Library into an IBM XT microcomputer. The optical scanner demonstrated its practical value by copying 15,900 pages of dissertation text directly into the microcomputer. A total of 199 dissertations, or 72% of the entire stack of education dissertations (277), were successfully copied into the microcomputer's word processor, where each dissertation was analyzed for a variety of syntax frequencies. The results of the study demonstrated the practical use of the optical scanner for data entry, the microcomputer for data and statistical analysis, and the availability of the college library as a natural setting for text studies. A supplemental benefit was the establishment of a computerized dissertation corpus which could be used for future research and study. The final step was to build a linguistic model of the differences in dissertation writing styles by creating 7 factors from 55 dependent variables through principal components factor analysis. The 7 factors (textual components) were then named and described on a hypothetical construct defined as a continuum from a conversational, interactional style to a formal, academic writing style. The 7 factors were then grouped through discriminant analysis to create discriminant functions for each of the 7 independent variables. The results indicated that a conversational, interactional writing style was associated with more recent dissertations (1972-1987), an increase in author's age, females, and the department of Curriculum and Instruction. A formal, academic writing style was associated with older dissertations, younger authors, males, and the department of Administration and Supervision. It was concluded that there were no significant differences in writing style due to subject matter (community college studies) compared to other subject matter. It was also concluded that there were no significant differences in writing style due to the location of dissertation origin (Florida Atlantic University, University of Central Florida, Florida International University).
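The factor-extraction and discriminant steps described can be approximated in modern terms as follows; PCA stands in for principal components factor analysis, and the variable counts and groups are placeholders rather than the study's actual data.

```python
# Hedged sketch: reduce per-document syntax frequencies to a few factors,
# then discriminate style groups along them.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(199, 55))      # 199 dissertations x 55 syntax frequencies
y = rng.integers(0, 2, size=199)    # e.g., 0 = formal group, 1 = conversational

factors = PCA(n_components=7).fit_transform(X)   # 7 textual components
lda = LinearDiscriminantAnalysis().fit(factors, y)
print("classification accuracy on the factors:", lda.score(factors, y))
```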
Abstract:
Selling devices in retail stores comes with the big challenge of grabbing the customer's attention. Nowadays people have many offers at their disposal, and new marketing techniques must emerge to differentiate the products. When it comes to smartphones and tablets, those devices can make the difference by themselves if we use their computing power and capabilities to create something unique and interactive. With that in mind, three prototypes were developed during an internship: a face-recognition-based Customer Detection, a face-tracking solution with an Avatar, and interactive cross-app Guides. All three proved to have the potential to be differentiating solutions in a retail store, not only raising the chance of a customer taking notice of the device but also of interacting with it to learn more about its features. The results were meant to be only proofs of concept and therefore were not tested in the real world.
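A minimal sketch of the face-detection step behind a Customer Detection feature, assuming OpenCV's bundled Haar cascade and the default camera; this is illustrative, not the internship code.

```python
# Hedged sketch: flag frames containing a face so the device can switch to an
# interactive demo when a customer approaches.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)              # assumed front camera of the demo device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        print("customer detected -> switch to interactive demo")
    cv2.imshow("preview", frame)
    if cv2.waitKey(30) & 0xFF == 27:   # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```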
Abstract:
This paper presents the conception of an original superconducting frictionless Zero Field Cooling bearing virtual prototype. In previous work, also presented at this conference, a viability study of a Zero Field Cooling superconducting bearing concept was conducted, showing that the virtual prototype is feasible. Moreover, the simulation studies showed that a Zero Field Cooling superconducting track provides not only effective lateral stability but also higher levitation forces than the commonly used Field Cooling tracks. In this paper the new Zero Field Cooling bearing virtual prototype is modeled in 3D. The virtual prototype was designed with the following in mind: i) a future implementation in high-density polyurethane, for low-temperature robustness; ii) future manufacturing on a three-axis CNC milling machine; and iii) future implementation of some parts using an additive manufacturing technique.
Abstract:
The Electron-Ion Collider is a future particle accelerator that will deepen our knowledge of the strong interaction through collisions of electrons with nuclei and protons. One of the designs currently considered for the detector, the dual-radiator RICH, foresees the use of two Cherenkov radiators, on which photodetectors will be mounted to detect the emission of Cherenkov light and infer the mass of the particles. The baseline option for these detectors is silicon SiPM sensors. This thesis work studies the performance of a prototype for acquiring the data recorded by the SiPMs, which exploits the thermoelectric effect to cool the area where the sensors are located. The analysis of the acquired data led to the conclusion that the performance of the prototype is comparable to that measured inside a climate chamber at the same temperature.
Abstract:
The city of Bologna is certainly famous for its two towers, but also for the dense hydraulic network now hidden beneath its streets and buildings. The first canals of Bologna were built between the 12th and 16th centuries following the construction of two fundamental works: the locks of San Ruffillo and Casalecchio. These two intake works served, and still serve today, to feed the Navile, Reno, Savena, Cavaticcio and Moline canals. Besides these canals fed by river water, streams flow beneath the city of Bologna that drain mostly meteoric water from the foothill area of the city, such as the Ravone stream. The present thesis work takes precisely the latter as its case study. The Ravone originally flowed in the open across the city, but to allow urbanization outside the walls it was culverted in several stretches and in different periods. The scarcity of information about its exact position and the lack of drawings describing its geometry led the authorities to request an accurate survey from the LARIG laboratory of UniBo. The survey operations were carried out using geomatic techniques for three-dimensional modeling, such as aerial photogrammetry and acquisition with a terrestrial laser scanner. In order to georeference the acquired data, topographic surveying techniques such as GNSS positioning and the measurement of angles and distances with a total station were used. The first chapters of this work are devoted to the theoretical foundations of geodesy and of the surveying techniques used to produce the three-dimensional model. The last chapters are instead devoted to the description of the survey phases and to the data analysis, paying particular attention to the georeferencing of point clouds acquired in confined environments, such as the culverted stretches of the Ravone stream.
Abstract:
Planning is an important subfield of artificial intelligence (AI) focusing on letting intelligent agents deliberate on the most adequate course of action to attain their goals. Thanks to the recent boost in the number of critical domains and systems which exploit planning for their internal procedures, there is an increasing need for planning systems to become more transparent and trustworthy. Along this line, planning systems are now required to produce not only plans but also explanations about those plans, or about the way they were attained. To address this issue, a new research area is emerging in the AI panorama: eXplainable AI (XAI), within which explainable planning (XAIP) is a pivotal subfield. As a recent domain, XAIP is far from mature. No consensus has been reached in the literature about what explanations are, how they should be computed, and what they should explain in the first place. Furthermore, existing contributions are mostly theoretical, and software implementations are rarely more than preliminary. To overcome such issues, in this thesis we design an explainable planning framework bridging the gap between theoretical contributions from the literature and software implementations. More precisely, taking inspiration from the state of the art, we develop a formal model for XAIP, and a software tool enabling its practical exploitation. Accordingly, the contribution of this thesis is fourfold. First, we review the state of the art of XAIP, supplying an outline of its most significant contributions from the literature. We then generalise those contributions into a unified model for XAIP, aimed at supporting model-based contrastive explanations. Next, we design and implement an algorithm-agnostic library for XAIP based on our model. Finally, we validate our library from a technological perspective, via an extensive testing suite, and we assess its performance and usability through a set of benchmarks and end-to-end examples.
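As a hedged sketch of what an algorithm-agnostic, model-based contrastive-explanation API could look like (all names and signatures are illustrative, not the thesis library's actual interface):

```python
# Hypothetical contrastive XAIP interface: answer "why action A rather than
# B?" by replanning with the user's alternative (the foil) forced in.
from dataclasses import dataclass
from typing import Callable, List

Plan = List[str]                      # a plan as a sequence of action names

@dataclass
class ContrastiveExplanation:
    original_plan: Plan
    alternative_plan: Plan
    def cost_difference(self) -> int:
        # With unit-cost actions, plan length stands in for plan cost
        return len(self.alternative_plan) - len(self.original_plan)

def explain_why_not(planner: Callable[[List[str]], Plan],
                    constraints: List[str],
                    foil: str) -> ContrastiveExplanation:
    """Compare the planner's plan with the plan obtained under the foil."""
    original = planner(constraints)
    alternative = planner(constraints + [f"use:{foil}"])
    return ContrastiveExplanation(original, alternative)

# Toy planner: forcing the foil makes the plan one step longer
toy = lambda cs: ["a", "b"] + (["detour"] if any("use:" in c for c in cs) else [])
exp = explain_why_not(toy, [], foil="c")
print(exp.cost_difference())          # 1: the user's alternative costs one more step
```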
Abstract:
The IoT is growing more and more each year and is becoming so ubiquitous that it includes heterogeneous devices with different hardware and software constraints, leading to a highly fragmented ecosystem. Devices use different protocols with different paradigms and are not compatible with each other; some devices use request-response protocols like HTTP or CoAP, while others use publish-subscribe protocols like MQTT. Integration in IoT is still an open research topic. When handling and testing IoT sensors there are some common tasks that people may be interested in: reading and visualizing the current value of the sensor; performing aggregations on a set of values in order to compute statistical features; saving the history of the data to a time-series database; forecasting future values to react in advance to a future condition; and bridging the protocol of the sensor in order to integrate the device with other tools. In this work we show a working implementation of a low-code, flow-based prototype tool that supports the common operations mentioned above, based on Node-RED and Python. Since this system is just a prototype, it has some issues and limitations that are discussed in this work.
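A minimal sketch of the protocol-bridging task named above, republishing MQTT readings to an HTTP consumer; the broker, topic, and endpoint are placeholders, and the actual prototype realizes this with Node-RED flows plus Python.

```python
# Hedged bridging sketch: subscribe to MQTT sensor topics and forward each
# reading to a request-response (HTTP) endpoint as JSON.
import json
import urllib.request
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Forward every publish-subscribe reading to the HTTP consumer
    payload = json.dumps({"topic": msg.topic, "value": msg.payload.decode()})
    req = urllib.request.Request(
        "http://localhost:8080/readings",      # placeholder endpoint
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

client = mqtt.Client()                         # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("localhost", 1883)              # placeholder broker
client.subscribe("sensors/#")
client.loop_forever()
```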