930 results for Lab-On-A-Chip Devices
Abstract:
This thesis introduces Android as a hardware and application platform and describes how the user interface of an Android game application can be kept consistent across different display devices by means of scaling factors and anchoring. The second part of the work covers simple ways to improve the performance of game applications; of these, a low-resolution drawing buffer and the culling of off-screen objects were selected for more detailed measurements. In the measurements, the selected methods affected the performance of the demo application considerably. The work is restricted to Android programming in Java without external libraries, so that its results can easily be applied in as many different use cases as possible.
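To illustrate the scaling-factor-and-anchoring idea mentioned in the abstract above (a minimal sketch in my own notation; the design resolution and anchor terminology are assumptions, not taken from the thesis), a uniform scale factor and an anchored screen position can be written as:

```latex
s = \min\!\left(\frac{W_{\mathrm{screen}}}{W_{\mathrm{design}}},\ \frac{H_{\mathrm{screen}}}{H_{\mathrm{design}}}\right),
\qquad
(x, y) = \left(x_{\mathrm{anchor}} + s\,\Delta x,\ y_{\mathrm{anchor}} + s\,\Delta y\right),
```

where (Δx, Δy) is a UI element's offset from its anchor point in design coordinates.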
Abstract:
The purpose of this thesis is to examine how mobile banking and mobile payment services will change the banking sector in Finland, and what role non-bank companies from the IT and telecom industries will play in this process. The thesis consists of a literature review and a qualitative study. The literature review forms a comprehensive overview of mobile banking and mobile payment services. The qualitative research was conducted as a descriptive study, focusing on the views of bank and non-bank players. The results show that banks have a significant advantage over their IT and telecom rivals with regard to their service offering, financial buffer, and status as trustworthy institutions. The banks’ embrace of mobile financial services will change the Finnish banking sector into one with a light branch network focused on sales power and a heavy emphasis on new mobile devices providing service power regardless of time and place.
Abstract:
Power consumption is still an issue in wearable computing applications. The aim of the present paper is to raise awareness of the power consumption of wearable computing devices in specific scenarios, so that energy-efficient wireless sensors for context recognition in wearable computing applications can be designed in the future. The approach is based on a hardware study. The objective of this paper is to analyze and compare the total power consumption of three representative wearable computing devices in realistic scenarios such as Display, Speaker, Camera and Microphone, Transfer by Wi-Fi, Outdoor Physical Activity Monitoring, and Pedometer. A scenario-based energy model is also developed. The Samsung Galaxy Nexus I9250 smartphone, the Vuzix M100 Smart Glasses, and the SimValley Smartwatch AW-420.RX are the three devices representative of their form factors. Power consumption is measured using PowerTutor, an Android energy profiler application with a logging option; since some of its model parameters are unknown, the measurements are adjusted with a USB meter. The results show that screen size is the main parameter influencing power consumption. The power consumption for an identical scenario varies across the wearable devices, meaning that other components, parameters, or processes may affect power consumption, and further study is needed to explain these variations. The paper also shows that different inputs (a touchscreen is more efficient than button controls) and outputs (the speaker is more efficient than the display) affect energy consumption in different ways. Finally, the paper gives recommendations, based on the energy model, for reducing energy consumption in healthcare wearable computing applications.
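A scenario-based energy model of the kind mentioned above can be sketched (my notation, not the paper's) as a sum of per-component power draws weighted by the time each component is active during the scenario:

```latex
E_{\mathrm{scenario}} \;=\; \sum_{c \in \mathcal{C}} P_c \, t_c ,
```

where 𝒞 is the set of components exercised by the scenario (screen, CPU, Wi-Fi radio, sensors, ...), P_c the average power of component c, and t_c its active time within the scenario.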
Abstract:
This thesis describes the sonic imaginary as it was transformed by the appearance of reproduction devices (telephone, phonograph, and radio) at the end of the 19th century and the beginning of the 20th century. While these sound-reproduction devices mark a new sociocultural context enabling the capture, preservation, and transmission of sensory manifestations, they also transform the way sound is conceived, modify the status of hearing relative to the other senses, and reconfigure an imaginary that expresses a relationship to oneself, to others, and to the world. This literary study of sound reproducibility proposes a reflection between technology and poetics by questioning the idea of communication. The specific element characterizing sound-reproduction devices is a technical object called the "transducer". I consider the transducer both as a metaphor and as a materiality of mediation; conceived in terms of a transduction apparatus, this concept allows a different understanding of the social practices and of the imaginary that constitute this cultural artifact.
Abstract:
All of my work was carried out using free software.
Abstract:
clRNG and clProbDist are two application programming interfaces (APIs) that we developed for the generation of uniform and non-uniform random numbers on parallel computing devices using the OpenCL environment. The first interface makes it possible to create, on a host computer, stream objects that act as virtual parallel generators and can be used both on the host and on parallel devices (graphics processing units, multi-core CPUs, etc.) to generate sequences of random numbers. The second interface makes it possible to generate, on these devices, random variates from various continuous and discrete probability distributions. In this thesis, we recall basic notions about random number generators and describe heterogeneous systems as well as techniques for parallel random number generation. We also present the different models that make up the architecture of the OpenCL environment and detail the structure of the developed APIs. For clRNG, we distinguish the functions that create streams, the functions that generate uniform random variates, and those that manipulate stream states. clProbDist contains the functions for generating non-uniform random variates by the inversion technique, as well as functions that return various statistics of the implemented distributions. We evaluate these programming interfaces with two simulations: a simplified inventory model and a financial option example. Finally, we report experimental results on the performance of the implemented generators.
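A minimal device-side sketch of how such host-created streams are typically consumed in an OpenCL kernel is shown below. The kernel itself is illustrative, not code from the thesis; the header, type, and function names follow the public clRNG interface for the MRG31k3p generator as I recall it and should be checked against the library's documentation.

```c
/* Illustrative OpenCL C kernel (not from the thesis). It assumes the host
 * created one stream per work-item with clrngMrg31k3pCreateStreams() and
 * copied the resulting stream array into the 'streams' buffer. */
#define CLRNG_SINGLE_PRECISION
#include <clRNG/mrg31k3p.clh>

__kernel void fill_uniform(__global clrngMrg31k3pHostStream *streams,
                           __global float *out)
{
    int gid = get_global_id(0);

    /* Copy this work-item's stream from global to private memory. */
    clrngMrg31k3pStream s;
    clrngMrg31k3pCopyOverStreamsFromGlobal(1, &s, &streams[gid]);

    /* Draw one uniform (0,1) variate from the private stream. */
    out[gid] = clrngMrg31k3pRandomU01(&s);
}
```

The same stream objects can also be advanced on the host, which is what the abstract refers to when it says the streams are usable "both on the host and on parallel devices".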
Abstract:
In this thesis, we explore the design, computation, and experimental analysis of photonic crystals, with a special emphasis on structures and devices that make a connection with practically realizable systems. First, we analyze the properties of photonic crystals: periodic dielectric structures that have a band gap for propagation. The band gap of periodically loaded air columns on a dielectric substrate is computed using eigensolvers in a plane-wave basis. This idea is then extended to planar filters and antennas in the microwave regime. The main objectives covered in this thesis are:
• Computation of the band gap origin in photonic crystals with the aid of Maxwell's equations and the Bloch-Floquet theorem
• Extension of the band gap to planar structures in the microwave regime
• Prediction of the dielectric constant (synthesized dielectric constant) of substrates loaded with Photonic Band Gap (PBG) structures in a microstrip transmission line
• Identification of the resonant characteristic of the PBG cell and extraction of the equivalent circuit based on PBG cell and substrate parameters for a microstrip transmission line
• Miniaturization of PBG as Defected Ground Structures (DGS) and their use in planar filters with microstrip transmission lines
• Extension of the band-stop effect of PBG / DGS to coplanar waveguides and asymmetric coplanar waveguides
• Formulation of design equations for the PBG / DGS filters
• Use of these PBG / DGS ground planes as ground planes of microstrip antennas
• Analysis of filters and antennas using the FDTD method
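For reference (these are standard results from electromagnetic theory, not findings of the thesis), the plane-wave band-structure computation referred to above solves Maxwell's equations recast as an eigenvalue problem for the magnetic field, with eigenmodes in Bloch-Floquet form:

```latex
\nabla \times \left( \frac{1}{\varepsilon(\mathbf{r})}\, \nabla \times \mathbf{H}(\mathbf{r}) \right)
= \left( \frac{\omega}{c} \right)^{2} \mathbf{H}(\mathbf{r}),
\qquad
\mathbf{H}_{\mathbf{k}}(\mathbf{r}) = e^{\,i\mathbf{k}\cdot\mathbf{r}}\, \mathbf{u}_{\mathbf{k}}(\mathbf{r}),
```

where ε(r) is the periodic dielectric function and u_k(r) shares the lattice periodicity; frequency ranges ω for which no eigenmode exists for any wavevector k constitute the photonic band gap.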
Abstract:
The miniaturization of the microelectronics industry is an unquestionable fact, and CMOS technology is no exception. Consequently, the scientific community has set itself two great challenges: first, to push CMOS technology as far as possible ('Beyond CMOS'), developing high-performance systems such as microprocessors, micro- and nanosystems, or pixel systems; and second, to start a new generation of electronics based on completely different technologies within the field of nanotechnology. All these advances demand constant research and innovation in the remaining complementary areas, such as packaging. Packaging must fulfil three basic functions: provide the electrical interface between the system and the outside world, provide mechanical support for the system, and provide a heat dissipation path. Therefore, considering that most of these high-performance devices demand a large number of inputs and outputs, multi-chip modules (MCMs) and flip-chip technology are a very interesting solution for this type of device. The objective of this thesis is to develop a multi-chip module technology based on flip-chip interconnections for the integration of hybrid pixel detectors, which includes: 1) the development of a bumping technology based on eutectic Sn/Ag solder bumps deposited by electroplating with a 50 µm pitch, and 2) the development of a gold via-in-silicon technology that allows chips to be interconnected and stacked vertically (3D packaging) with a 100 µm pitch. Finally, the high interconnection capability of flip-chip packages has allowed traditionally monolithic pixel systems to evolve towards more compact and complex hybrid systems; in this thesis this is reflected in the transfer of the developed technology to the field of high-energy physics, specifically by implementing the bump-bonding system of a digital mammography unit. In addition, a modular hybrid detector device for real-time 3D image reconstruction has also been implemented, which has given rise to a patent.
Abstract:
Hybrid multiprocessor architectures which combine re-configurable computing and multiprocessors on a chip are being proposed to transcend the performance of standard multi-core parallel systems. Both fine-grained and coarse-grained parallel algorithm implementations are feasible in such hybrid frameworks. A compositional strategy for designing fine-grained multi-phase regular processor arrays to target hybrid architectures is presented in this paper. The method is based on deriving component designs using classical regular array techniques and composing the components into a unified global design. Run-time phase changes and data routing are characteristic of the resulting designs. In order to describe the data transfer between phases, the concept of a communication domain is introduced so that the producer–consumer relationship arising from multi-phase computation can be treated in a unified way as a data routing phase. This technique is applied to derive new designs of multi-phase regular arrays with different dataflow between phases of computation.
Abstract:
In this article, we review the state-of-the-art techniques in mining data streams for mobile and ubiquitous environments. We start the review with a concise background of data stream processing, presenting the building blocks for mining data streams. In a wide range of applications, data streams must be processed on small ubiquitous devices like smartphones and sensor devices. Mobile and ubiquitous data mining targets these applications with tailored techniques and approaches addressing scarcity of resources and mobility issues. Two categories can be identified for mobile and ubiquitous mining of streaming data: single-node and distributed. This survey covers both categories. Mining mobile and ubiquitous data requires algorithms able to monitor and adapt their working conditions to the available computational resources. We identify the key characteristics of these algorithms and present illustrative applications. Distributed data stream mining in the mobile environment is then discussed, presenting the Pocket Data Mining framework. The mobility of users stimulates the adoption of context-awareness in this area of research. Context-awareness and collaboration are discussed in the context of Collaborative Data Stream Mining, where agents share knowledge to learn adaptive, accurate models.
Abstract:
This paper proposes a parallel hardware architecture for image feature detection based on the Scale Invariant Feature Transform algorithm and applied to the Simultaneous Localization And Mapping problem. The work also proposes specific hardware optimizations considered fundamental to embed such a robotic control system on-a-chip. The proposed architecture is completely stand-alone; it reads the input data directly from a CMOS image sensor and provides the results via a field-programmable gate array coupled to an embedded processor. The results may either be used directly in an on-chip application or accessed through an Ethernet connection. The system is able to detect features at up to 30 frames per second (320 x 240 pixels) and has accuracy similar to a PC-based implementation. The achieved system performance is at least one order of magnitude better than a PC-based solution, a result achieved by investigating the impact of several hardware-oriented optimizations on performance, area, and accuracy.
Abstract:
Mobile learning involves the use of mobile devices to participate in learning activities. Most e-learning activities are available to participants through learning systems such as learning content management systems (LCMS). Due to certain challenges, LCMS are not equally accessible on all mobile devices. This study investigates actual use, perceived usefulness, and user experiences of LCMS use on mobile phones at Makerere University in Uganda. The study identifies challenges pertaining to use and discusses how to improve LCMS use on mobile phones. Such solutions are a cornerstone in enabling and improving mobile learning. Data was collected by means of focus group discussions, an online survey designed based on the Technology Acceptance Model (TAM), and LCMS log files of user activities. Data was collected from two courses where Moodle was used as a learning platform. The results indicate positive attitudes towards the use of LCMS on phones, but also huge challenges which are content-related and technical in nature.
Abstract:
The migration of educational materials to portable devices, such as tablet computers, makes it possible to offer high levels of interactivity in the presentation of animations; research is therefore needed to evaluate the pedagogical value of incorporating sophisticated interactivity features into lessons for portable devices. Engineering students (in Experiment 1) and higher-education students from other fields (in Experiment 2) studied for 5 minutes an animation showing, on a tablet computer, the six steps of a maintenance procedure for a mechanical device called a Power Take-Off (Tomada de Força). The animation involved either a low level of interactivity, in which students could play, pause, fast-forward, and rewind the animation through buttons on the touch screen; a high level of interactivity, in which students could also touch and slide a finger on the screen to rotate the animation, or touch the screen with two fingers and spread or pinch them to zoom the animation in or out; or no interactivity (in Experiment 2 only). Overall, in both experiments, students who used the high level of interactivity reported greater interest but did not show better learning compared with the low- or no-interactivity groups. However, in Experiment 2, students who classified themselves as verbal learners showed greater interest and obtained higher learning scores with high interactivity than with low or no interactivity; this pattern was not found among visual learners. Also in Experiment 2, verbal learners and learners with low self-regulation of learning who reported a high level of interest obtained higher learning scores than visual learners and learners with high self-regulation of learning who reported a low level of interest, respectively.
Abstract:
The increasing demand for processing power in recent years has pushed the integrated circuit industry to look for ways of providing even more processing power with less heat dissipation, power consumption, and chip area. This goal has been pursued by increasing the circuit clock, but since there are physical limits to this approach, a new solution has emerged: the multiprocessor system-on-chip (MPSoC). This approach demands new tools and basic software infrastructure to take advantage of the inherent parallelism of these architectures. One of the first activities of the oil exploration industry is the decision on whether to explore oil fields; those decisions are aided by reservoir simulations demanding high processing power, and the MPSoC may offer greater performance if its parallelism can be used well. This work presents a proposal for a micro-kernel operating system and auxiliary libraries aimed at the STORM MPSoC platform, analyzing its influence on the reservoir simulation problem.
Abstract:
Background: The sequencing and publication of the cattle genome and the identification of single nucleotide polymorphism (SNP) molecular markers have provided new tools for animal genetic evaluation and genomically enhanced selection. These new tools aim to increase the accuracy and scope of selection while decreasing the generation interval. The objective of this study was to evaluate the gain in accuracy obtained by using genomic information (Clarifide® - Pfizer) in the genetic evaluation of Brazilian Nellore cattle. Review: The application of genome-wide association studies (GWAS) is recognized as one of the most practical approaches to modern genetic improvement. Genomic selection is perhaps most suited to the improvement of traits with low heritability in zebu cattle. The primary interest in livestock genomics has been to estimate the effects of all the markers on the chip, conduct cross-validation to determine accuracy, and apply the resulting information in GWAS either alone [9] or in combination with bull test and pedigree-based genetic evaluation data. The cost of SNP50K genotyping, however, limits the commercial application of GWAS based on all the SNPs on the chip. Nevertheless, reasonable predictability and accuracy can be achieved in GWAS by using an assay that contains an optimally selected predictive subset of markers, as opposed to all the SNPs on the chip. The best way to integrate genomic information into genetic improvement programs is to include it in traditional genetic evaluations. This approach combines traditional expected progeny differences based on phenotype and pedigree with genomic breeding values based on the markers. Including the different sources of information in a multiple-trait genetic evaluation model for within-breed dairy cattle selection is already working with excellent results. However, given the wide genetic diversity of zebu breeds, the high-density panel used for genomic selection in dairy cattle (Illumina BovineSNP50 array) appears insufficient for across-breed genomic predictions and selection in beef cattle. Today there is only one breed-specific targeted SNP panel with genomic predictions developed using animals from across the entire population of the Nellore breed (www.pfizersaudeanimal.com), which enables genomically enhanced selection. Genomic profiles are a way to enhance our current selection tools and achieve more accurate predictions for younger animals. Material and Methods: We analyzed age at first calving (AFC), accumulated productivity (ACP), stayability (STAY), and heifer pregnancy at 30 months (HP30) in Nellore cattle, fitting two different animal models: 1) a traditional single-trait model, and 2) a two-trait model in which the genomic breeding value, or molecular value prediction (MVP), was included as a correlated trait. All mixed-model analyses were performed using the statistical software ASREML 3.0. Results: Genetic correlation estimates between AFC, ACP, STAY, HP30 and their respective MVPs ranged from 0.29 to 0.46. Results also showed increases of 56%, 36%, 62%, and 19% in the estimated accuracy of AFC, ACP, STAY, and HP30, respectively, when MVP information was included in the animal model. Conclusion: Depending on the trait, integration of MVP information into the genetic evaluation increased accuracy by 19% to 62% compared to traditional genetic evaluation. GE-EPD (genomically enhanced expected progeny differences) will be an effective tool to enable faster genetic improvement through more dependable selection of young animals.
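In standard animal-model notation (a sketch in my own notation; the abstract does not give the actual ASREML model specification), the two-trait analysis that treats the MVP as a correlated trait can be written as:

```latex
\begin{aligned}
\begin{bmatrix} \mathbf{y}_t \\ \mathbf{y}_m \end{bmatrix}
&=
\begin{bmatrix} \mathbf{X}_t & \mathbf{0} \\ \mathbf{0} & \mathbf{X}_m \end{bmatrix}
\begin{bmatrix} \mathbf{b}_t \\ \mathbf{b}_m \end{bmatrix}
+
\begin{bmatrix} \mathbf{Z}_t & \mathbf{0} \\ \mathbf{0} & \mathbf{Z}_m \end{bmatrix}
\begin{bmatrix} \mathbf{a}_t \\ \mathbf{a}_m \end{bmatrix}
+
\begin{bmatrix} \mathbf{e}_t \\ \mathbf{e}_m \end{bmatrix},
\\[4pt]
\operatorname{var}\!\begin{bmatrix} \mathbf{a}_t \\ \mathbf{a}_m \end{bmatrix}
&= \mathbf{G}_0 \otimes \mathbf{A},
\qquad
\mathbf{G}_0 =
\begin{bmatrix}
\sigma^2_{a_t} & \sigma_{a_t a_m} \\
\sigma_{a_t a_m} & \sigma^2_{a_m}
\end{bmatrix},
\end{aligned}
```

where y_t is the phenotype for the trait of interest (e.g. AFC), y_m the MVP treated as a second trait, A the pedigree-based numerator relationship matrix, and G_0 the 2x2 additive genetic (co)variance matrix whose implied genetic correlation is what the study estimates at 0.29 to 0.46.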