54 results for Lab-On-A-Chip Devices
at Université de Lausanne, Switzerland
Abstract:
PDMS-based microfluidic devices combined with lanthanide-based immunocomplexes have been successfully tested for the multiplex detection of biomarkers on cancerous tissues, showing enhanced sensitivity compared with classical organic dyes.
Abstract:
The lanthanide binuclear helicate [Eu₂(L^(C2(CO₂H)))₃] is coupled to avidin to yield a luminescent bioconjugate EuB1 (Q = 9.3%, τ(⁵D₀) = 2.17 ms). MALDI-TOF mass spectrometry confirms the covalent binding of the Eu chelate, and UV-visible spectroscopy gives a luminophore/protein ratio of 3.2. Bio-affinity assays involving the recognition of a mucin-like protein expressed on human breast cancer MCF-7 cells by a biotinylated monoclonal antibody, 5D10, to which EuB1 is attached via avidin-biotin coupling, demonstrate that (i) avidin activity is little affected by the coupling reaction and (ii) detection limits obtained by time-resolved (TR) luminescence with EuB1 and a commercial Eu-avidin conjugate are one order of magnitude lower than those of an organic conjugate (FITC-streptavidin). In the second part of the paper, conditions are established for growing MCF-7 cells in 100-200 μm wide microchannels engraved in PDMS; we demonstrate that EuB1 detects tumour-associated antigens as effectively on this lab-on-a-chip device as on MCF-7 cells grown in normal culture vials. To exploit the versatility of the ligand used for self-assembling [Ln₂(L^(C2(CO₂H)))₃] helicates, which sensitizes the luminescence of both Eu(III) and Tb(III) ions, a dual on-chip assay is proposed in which estrogen receptors (ERs) and human epidermal growth factor receptors (Her2/neu) are detected simultaneously on human breast cancer tissue sections. The Ln helicates are coupled to two secondary antibodies: ERs are visualized by the red-emitting EuB4 conjugate via goat anti-mouse IgG, and Her2/neu receptors by the green-emitting TbB5 conjugate via goat anti-rabbit IgG. The assay is more than 6 times faster and requires 5 times fewer reagents than conventional immunohistochemical assays, essential advantages over conventional immunohistochemistry for future clinical biomarker detection.
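As an aside, the luminophore/protein ratio quoted above is the kind of number that follows from two UV-visible absorbance readings via the Beer-Lambert law (A = ε·c·l). The short Python sketch below shows that arithmetic; the extinction coefficients and absorbances are hypothetical placeholders, not values taken from the paper.

# Sketch: estimating a luminophore/protein labelling ratio from UV-vis
# absorbances via Beer-Lambert (A = epsilon * c * l). All numbers below
# are assumed for illustration, not taken from the paper.
EPS_HELICATE = 2.6e4   # M^-1 cm^-1 at the ligand band (assumed)
EPS_PROTEIN = 1.0e5    # M^-1 cm^-1 for avidin at 280 nm (assumed)
PATH_CM = 1.0          # cuvette path length in cm

def labelling_ratio(a_ligand: float, a_protein: float) -> float:
    """Return the luminophore/protein molar ratio from two absorbances."""
    c_helicate = a_ligand / (EPS_HELICATE * PATH_CM)
    c_protein = a_protein / (EPS_PROTEIN * PATH_CM)
    return c_helicate / c_protein

print(round(labelling_ratio(0.083, 0.100), 1))  # -> 3.2 with these inputs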
Abstract:
Contamination with arsenic is a recurring problem in both industrialized and developing countries. Drinking water supplies for large populations can have concentrations much higher than the permissible levels (10 μg As per L for most European countries and the United States; 50 μg As per L elsewhere). Arsenic analysis requires high-end instruments, which are largely unavailable in developing countries. Bioassays based on genetically engineered bacteria have been proposed as suitable alternatives, but such tests would profit from better standardization and direct incorporation into sensing devices. The goal of this work was to develop and test microfluidic devices in which bacterial bioreporters could be embedded and exposed, and reporter signals detected, as a further step towards a complete miniaturized bacterial biosensor. The signal element in the biosensor is a non-pathogenic laboratory strain of Escherichia coli, which produces a variant of the green fluorescent protein on contact with arsenite and arsenate. E. coli bioreporter cells were encapsulated in agarose beads and incorporated into a microfluidic device, where they were captured in 500 × 500 μm² cages and exposed to aqueous samples containing arsenic. Cell beads frozen at -20 °C in the microfluidic chip retained inducibility for up to a month, and arsenic samples with 10 or 50 μg L⁻¹ could be reproducibly discriminated from the blank. In the 0-50 μg L⁻¹ range and with an exposure time of 200 minutes, the rate of signal increase was linearly proportional to the arsenic concentration. The time needed to reliably and reproducibly detect a concentration of 50 μg L⁻¹ was 75-120 minutes, and 120-180 minutes for a concentration of 10 μg L⁻¹.
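The linear relationship reported above lends itself to a simple calibration. The sketch below, a minimal illustration in Python with made-up rate values rather than the study's measurements, fits a line to rate-versus-concentration data and inverts it to estimate an unknown sample.

import numpy as np

# Fictitious calibration data for the 0-50 ug/L linear range described above
conc = np.array([0.0, 10.0, 25.0, 50.0])    # arsenic, ug/L
rate = np.array([0.02, 0.11, 0.24, 0.47])   # signal increase per minute (assumed)

slope, intercept = np.polyfit(conc, rate, 1)  # least-squares line fit

def estimate_arsenic(observed_rate: float) -> float:
    """Invert the linear calibration to estimate concentration in ug/L."""
    return (observed_rate - intercept) / slope

print(estimate_arsenic(0.20))  # ~20 ug/L for these fictitious numbers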
Abstract:
Integrated approaches using different in vitro methods in combination with bioinformatics can (i) increase the success rate and speed of drug development; (ii) improve the accuracy of toxicological risk assessment; and (iii) increase our understanding of disease. Three-dimensional (3D) cell culture models are important building blocks of this strategy, which has emerged in recent years. The majority of these models are organotypic, i.e., they aim to reproduce major functions of an organ or organ system. This implies in many cases that more than one cell type forms the 3D structure, and often matrix elements play an important role. This review summarizes the state of the art concerning commonalities of the different models. For instance, the theory of mass transport/metabolite exchange in 3D systems and the special analytical requirements for test endpoints in organotypic cultures are discussed in detail. In the next part, 3D model systems for selected organs (liver, lung, skin, brain) are presented and characterized in dedicated chapters, and 3D approaches to the modeling of tumors are presented and discussed. All chapters give a historical background, illustrate the large variety of approaches, and highlight advantages and drawbacks as well as specific requirements. Moreover, they refer to applications in disease modeling, drug discovery and safety assessment. Finally, consensus recommendations indicate a roadmap for the successful implementation of 3D models in routine screening. It is expected that the use of such models will accelerate progress by reducing error rates and wrong predictions from compound testing.
Abstract:
Living bacteria or yeast cells are frequently used as bioreporters for the detection of specific chemical analytes or conditions of sample toxicity. In particular, bacteria or yeast equipped with synthetic gene circuitry that produces a reliable non-cognate signal (e.g., fluorescent protein or bioluminescence) in response to a defined target make robust and flexible analytical platforms. We report here how bacterial cells expressing a fluorescence reporter ("bactosensors"), which are mostly used for batch sample analysis, can be deployed for automated semi-continuous target analysis in a single concise biochip. Escherichia coli-based bactosensor cells were continuously grown in a 13 or 50 nanoliter reactor on a two-layer polydimethylsiloxane-on-glass microfluidic chip. Physiologically active cells were directed from the nl reactor to a dedicated sample exposure area, where they were concentrated and, within 40 minutes, reacted to the target chemical by localized emission of the fluorescent reporter signal. We demonstrate the functioning of the bactosensor chip by the automated detection of 50 μg arsenite-As L⁻¹ in water on consecutive days and after one week of constant operation. The best induction of the bactosensors, 6-9-fold in response to 50 μg L⁻¹, was found at an apparent dilution rate of 0.12 h⁻¹ in the 50 nl microreactor. The bactosensor-chip principle could be widely applicable for constructing automated monitoring devices for a variety of targets in different environments.
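The "apparent dilution rate" above is standard chemostat arithmetic: D = Q/V, the feed flow rate divided by the reactor volume. A back-of-the-envelope sketch in Python (our illustration, not code from the paper):

REACTOR_VOLUME_NL = 50.0   # the 50 nl microreactor described above
TARGET_D_PER_H = 0.12      # h^-1, the dilution rate giving best induction

# D = Q / V  =>  Q = D * V
flow_nl_per_h = TARGET_D_PER_H * REACTOR_VOLUME_NL
print(f"feed flow: {flow_nl_per_h:.1f} nl/h")              # 6.0 nl/h
print(f"mean residence time: {1 / TARGET_D_PER_H:.1f} h")  # ~8.3 h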
Abstract:
We present a new lab-on-a-chip system for electrophysiological measurements on Xenopus oocytes. Xenopus oocytes are widely used host cells in pharmacological studies and drug development. We developed a novel non-invasive technique using immobilized, non-devitellinized cells that replaces the traditional two-electrode voltage-clamp (TEVC) method. In particular, rapid fluidic exchange was implemented on-chip to allow recording of fast kinetic events of exogenous ion channels expressed in the cell membrane. Reducing the exchange times of extracellular reagent solutions is a great challenge with these large, millimetre-sized cells. Fluidic switching is achieved by shifting the laminar flow interface in a perfusion channel under the cell by means of integrated polydimethylsiloxane (PDMS) microvalves. Reagent solution exchange times down to 20 ms have been achieved. An on-chip purging system makes it possible to perform complex pharmacological protocols, making the system suitable for screening ion-channel ligand libraries. The performance of the integrated rapid fluidic exchange system was demonstrated by investigating the self-inhibition of human epithelial sodium channels (ENaC). Our results show that the response of this ion channel to a specific reactant is about an order of magnitude faster than could be estimated with the traditional TEVC technique.
Abstract:
BACKGROUND: In many countries, primary care physicians determine whether or not older drivers are fit to drive. Little, however, is known regarding the effects of cognitive decline on driving performance and the means to detect it. This study explores to what extent the trail making test (TMT) can give clinicians an indication of their older patients' on-road driving performance in the context of cognitive decline. METHODS: This translational study was nested within a cohort study and an exploratory psychophysics study. The target population was older drivers without important cognitive or physical disorders. We therefore recruited and tested 404 home-dwelling drivers, aged 70 years or more and holding valid drivers' licenses, who volunteered to participate in a driving refresher course. Forty-five drivers also agreed to undergo further testing at our lab. On-road driving performance was evaluated by instructors during a 45-minute validated open-road circuit, and drivers were classified as excellent, good, moderate, or poor based on a standardized evaluation of on-road driving performance. RESULTS: The area under the receiver operating characteristic curve for detecting poorly performing drivers was 0.668 (95% CI 0.558 to 0.778) for the TMT-A, and 0.662 (95% CI 0.542 to 0.783) for the TMT-B. TMT performance was related to contrast sensitivity, motion direction, orientation discrimination, working memory, verbal fluency, and literacy. Older patients with a TMT-A ≥ 54 seconds or a TMT-B ≥ 150 seconds have a threefold (95% CI 1.3 to 7.0) increased risk of performing poorly in the on-road evaluation. The TMT had a sensitivity of 63.6%, a specificity of 64.9%, a positive predictive value of 9.5%, and a negative predictive value of 96.9%. CONCLUSION: In screening settings, the TMT would lead clinicians to needlessly consider driving cessation in nine out of ten drivers who screen positive. Given the important negative impact this could have on older drivers, this study confirms that the TMT is not specific enough for clinicians to justify driving cessation without complementary investigation of driving behavior.
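The predictive values quoted follow from Bayes' rule once a prevalence of poor on-road performance is fixed. The sketch below reproduces that arithmetic in Python; the 5.5% prevalence is an illustrative assumption chosen to land near the reported figures, not a number taken from the study.

def predictive_values(sens: float, spec: float, prev: float):
    """Positive and negative predictive values via Bayes' rule."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

ppv, npv = predictive_values(sens=0.636, spec=0.649, prev=0.055)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # ~9.5% and ~96.8%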
Abstract:
PURPOSE: Congenital stationary night blindness (CSNB) is a clinically and genetically heterogeneous retinal disease. Although electroretinographic (ERG) measurements can discriminate clinical subgroups, identification of the underlying genetic defects has been complicated in CSNB by genetic heterogeneity, uncertainty about the mode of inheritance, and time-consuming and costly mutation scanning and direct sequencing approaches. METHODS: To overcome these challenges and to create a time- and cost-efficient mutation screening tool, the authors developed a CSNB genotyping microarray based on arrayed primer extension (APEX) technology. To cover as many mutations as possible, a comprehensive literature search was performed, and DNA samples from a cohort of patients with CSNB were first sequenced directly in known CSNB genes. Subsequently, oligonucleotides representing 126 sequence variations in RHO, CABP4, CACNA1F, CACNA2D4, GNAT1, GRM6, NYX, PDE6B, and SAG were designed and spotted on the chip. RESULTS: Direct sequencing of genes known to be associated with CSNB in the study cohort revealed 21 mutations (12 novel and 9 previously reported). The resulting microarray, whose oligonucleotides allow the detection of 126 known and novel mutations, was 100% effective in determining the expected sequence changes in all known samples assessed. In addition, investigation of 34 previously ungenotyped patients with CSNB revealed sequence variants in 18%, of which 15% are thought to be disease-causing mutations. CONCLUSIONS: This relatively inexpensive first-pass genetic testing device for patients with a diagnosis of CSNB will improve molecular diagnostics and the genetic counseling of patients and their families, and makes it possible to analyze whether, for example, more progressive disorders such as cone or cone-rod dystrophies arise from the same gene defects.
Abstract:
The motivation for this research stems from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and business applications because they cost significantly less than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely the industrial automation industry behind PLC, DCS, SCADA and robot control systems. This industry today employs over 200'000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law, and these developments created the high-performance, low-energy System-on-Chip (SoC) architecture. Unlike CISC, RISC processor architecture is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which give customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability on a very narrow customer base thanks to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation technology (ICAT) industry. Lately the Internet of Things (IoT) and Weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will build for the industrial automation market to face, in due course, an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of global software support for complex systems. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes industrial automation could take, given the looming rise of the Internet of Things (IoT) and Weightless networks. Industrial automation is an industry dominated by a handful of global players, each focused on maintaining its own proprietary solutions. The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, created the new markets of personal computers, smartphones and tablets, and will eventually also reshape industrial automation through game-changing commoditization and the related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
Pioneering work on iontophoresis undertaken by David Maurice during the 1970s and 1980s laid the groundwork for its potential implementation as a promising ocular therapeutic modality. A better understanding of tissue interactions within the eye during electric current application, along with better designs of drug delivery devices, has enabled us to pursue David Maurice's original ideas and take them from the bench to the bedside. In the present study we demonstrate the potential application of an iontophoresis device (Eyegate, Optis, France) for the treatment of certain human eye diseases. Seventeen patients who had received a penetrating keratoplasty (PKP) at various intervals before presentation came to our clinic with active graft rejection and were treated using this iontophoresis device. Methylprednisolone sodium succinate (MP) 62.5 mg/ml was infused into the Eyegate ocular probe container and an electrical current of 1.5 mA was delivered for 4 min with the negative pole connected to the ocular probe. Patients were treated on an ambulatory basis and received a standard course of three iontophoresis applications, given once a day over 3 consecutive days. After treatment, 15 of the 17 treated eyes (88%) demonstrated a complete reversal of the rejection process. In two eyes, only a partial and temporary improvement was observed. The mean best corrected visual acuity of all 17 patients at the last follow-up visit was 0.37 ± 0.2, compared with 0.06 ± 0.05 before initiation of the iontophoresis treatment. The mean follow-up time was 13.7 months (range 5-29 months). No significant side effects associated with the iontophoresis treatment were observed. Thus, for the management of active corneal graft rejection, iontophoresis of MP can be an alternative to very frequent instillation of eye drops or to pulsed intravenous corticosteroid therapy.
Abstract:
SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences through steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on chromatin immunoprecipitation followed by high-throughput DNA sequencing. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unrecognized artifacts of the method. The distribution of sequence tags in the genome is not uniform, and we found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence-tag accumulations create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool for ChIP-Seq data that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random-sampling algorithm and used this methodology to reveal some of the important biological properties of Nuclear Factor I (NFI) DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that NFI transcription factors mainly act as activators of transcription and are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors interact only with DNA wrapped around the nucleosome. We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
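A minimal sketch of the two analysis ideas above, assuming sequence tags are held in memory as (chromosome, position) tuples: a hot-spot filter that discards positions with implausible tag pile-ups, and unbiased random down-sampling of a tag list. The cutoff and data layout are illustrative, not those used in the thesis.

import random
from collections import Counter

def filter_hotspots(tags, max_tags_per_position=10):
    """Drop tags mapping to positions whose pile-up exceeds the cutoff."""
    counts = Counter(tags)
    return [t for t in tags if counts[t] <= max_tags_per_position]

def random_sample(tags, n, seed=0):
    """Draw n tags uniformly at random, without replacement."""
    return random.Random(seed).sample(tags, n)

tags = [("chr1", 1000)] * 50 + [("chr1", 2000), ("chr2", 500)] * 5
clean = filter_hotspots(tags)      # removes the artifactual 50-tag pile-up
subset = random_sample(clean, 4)   # unbiased subsample for further analysis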
Abstract:
OBJECTIVE: To reach a consensus on the clinical use of ambulatory blood pressure monitoring (ABPM). METHODS: A task force on the clinical use of ABPM wrote this overview in preparation for the Seventh International Consensus Conference (23-25 September 1999, Leuven, Belgium). This article was amended to account for opinions aired at the conference and to reflect the common ground reached in the discussions. POINTS OF CONSENSUS: The Riva-Rocci/Korotkoff technique, although prone to error, is easy and cheap to perform and remains the worldwide standard procedure for measuring blood pressure. ABPM should be performed only with properly validated devices as an accessory to conventional measurement of blood pressure. Ambulatory recording of blood pressure requires considerable investment in equipment and training, and its use for screening purposes cannot be recommended. ABPM is most useful for identifying patients with white-coat hypertension (WCH), also known as isolated clinic hypertension, which is arbitrarily defined as a clinic blood pressure of more than 140 mmHg systolic or 90 mmHg diastolic in a patient with daytime ambulatory blood pressure below 135 mmHg systolic and 85 mmHg diastolic. Some experts consider a daytime blood pressure below 130 mmHg systolic and 80 mmHg diastolic optimal. Whether WCH predisposes subjects to sustained hypertension remains debated. However, outcome is better correlated with the ambulatory blood pressure than with the conventional blood pressure. Antihypertensive drugs lower the clinic blood pressure in patients with WCH but not the ambulatory blood pressure, and also do not improve prognosis. Nevertheless, WCH should not be left unattended. If no previous cardiovascular complications are present, treatment could be limited to follow-up and hygienic measures, which should also account for risk factors other than hypertension. ABPM is superior to conventional measurement of blood pressure not only for selecting patients for antihypertensive drug treatment but also for assessing the effects of both non-pharmacological and pharmacological therapy. The ambulatory blood pressure should be reduced by treatment to below the thresholds applied for diagnosing sustained hypertension. ABPM makes the diagnosis and treatment of nocturnal hypertension possible and is especially indicated for patients with borderline hypertension, the elderly, pregnant women, patients with treatment-resistant hypertension and patients with symptoms suggestive of hypotension. In centres with sufficient financial resources, ABPM could become part of the routine assessment of patients with clinic hypertension. For patients with WCH, it should be repeated at annual or 6-monthly intervals. Variation of blood pressure throughout the day can be monitored only by ABPM, but several advantages of the latter technique can also be obtained by self-measurement of blood pressure, a less expensive method that is probably better suited to primary practice and use in developing countries. CONCLUSIONS: ABPM or equivalent methods for tracing the white-coat effect should become part of the routine diagnostic and therapeutic procedures applied to treated and untreated patients with elevated clinic blood pressures. Results of long-term outcome trials should better establish the advantage of further integrating ABPM, as an accessory to conventional sphygmomanometry, into the routine care of hypertensive patients and should provide more definite information on its long-term cost-effectiveness. Because such trials are not likely to be funded by the pharmaceutical industry, governments and health insurance companies should take responsibility in this regard.
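For illustration, the arbitrary WCH definition stated above reduces to a simple rule. A minimal Python sketch (thresholds exactly as in the consensus text; the function name and interface are ours):

def is_white_coat_hypertension(clinic_sys, clinic_dia, day_amb_sys, day_amb_dia):
    """Clinic BP above 140/90 mmHg with daytime ambulatory BP below 135/85 mmHg."""
    clinic_high = clinic_sys > 140 or clinic_dia > 90
    ambulatory_normal = day_amb_sys < 135 and day_amb_dia < 85
    return clinic_high and ambulatory_normal

print(is_white_coat_hypertension(152, 94, 128, 79))  # True: elevated only in clinic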
Abstract:
Ventricular assist devices (VADs) are used in the treatment of terminal heart failure or as a bridge to transplantation. We created a biventricular assist device (biVAD) from artificial muscles (AMs) that supports both ventricles at the same time, and developed a test bench (TB) as an in vitro evaluation system to measure its performance. The biVAD exerts different pressures on the left and right ventricles, as the heart does physiologically. A heart model based on a child's heart was constructed in silicone and fitted with the biVAD. Two water-filled pipettes, each topped with an ultrasonic sensor and attached to a ventricle, reproduced the preload and afterload of each ventricle through real-time measurement of the fluid-height variation, which is proportional to the exerted pressure. LabVIEW software derived the displaced volume and the pressure generated by each side of our biVAD. The development of a standardized protocol permitted the validation of the TB for in vitro evaluation, the measurement of the AM biVAD's performance reported herein, and reproducible data.
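The test bench's pressure readout rests on simple hydrostatics: a water column of height h exerts P = ρgh. A Python sketch of that conversion (the column heights and units are our assumptions, not values from the paper):

RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def column_pressure_mmHg(height_m: float) -> float:
    """Convert a water-column height (m) to pressure in mmHg (1 mmHg = 133.322 Pa)."""
    return RHO_WATER * G * height_m / 133.322

print(round(column_pressure_mmHg(0.30), 1))  # a 30 cm column is ~22.1 mmHg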
Abstract:
Hypoglycemia, if recurrent, may have severe consequences for the cognitive and psychomotor development of neonates. Therefore, screening for hypoglycemia is a daily routine in every facility caring for newborn infants. Point-of-care testing (POCT) devices are attractive for neonatal use: they are easy to handle, measurements can be performed at the bedside, the required blood volume is small, and results are readily available. However, such whole-blood measurements are challenged by the wide variation of hematocrit in neonates and by normal glucose concentrations lying at the lower end of the test range. We conducted a prospective trial to check the precision and accuracy of the most suitable POCT device for neonatal use from each of three leading companies in Europe. Of the three devices tested (Precision Xceed, Abbott; Elite XL, Bayer; Aviva Nano, Roche), the Aviva Nano exhibited the best precision. None completely fulfilled the ISO 15197 accuracy criteria (2003 or 2011). The Aviva Nano fulfilled these criteria in 92% of cases, while the others remained below 87%. The Precision Xceed reached the 95% limit of the 2003 ISO criteria for values ≤4.2 mmol/L, but not for the higher range (71%). Although validated for adults, new POCT devices need to be specifically evaluated in newborn infants before being adopted for routine use in neonatology.
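As commonly stated, the ISO 15197:2003 criterion requires at least 95% of meter readings to fall within ±0.83 mmol/L of the reference below 4.2 mmol/L, and within ±20% at or above it. A sketch of that check with made-up reading pairs (our paraphrase of the criterion, not the normative text):

def within_iso_2003(meter: float, reference: float) -> bool:
    """True if a single reading meets the 2003 accuracy band."""
    if reference < 4.2:
        return abs(meter - reference) <= 0.83
    return abs(meter - reference) <= 0.20 * reference

def passes_iso_2003(pairs) -> bool:
    """True if >= 95% of (meter, reference) pairs are within the bands."""
    ok = sum(within_iso_2003(m, r) for m, r in pairs)
    return ok / len(pairs) >= 0.95

pairs = [(2.5, 2.9), (3.8, 4.0), (7.0, 5.5), (8.2, 7.8)]
print(passes_iso_2003(pairs))  # False: 7.0 vs 5.5 misses the 20% band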
Abstract:
While mobile technologies can provide highly personalized services to mobile users, they also threaten users' privacy. This personalization-privacy paradox is particularly salient for context-aware mobile applications, where users' behaviors, movements and habits can be linked to their personal identity. In this thesis, I study privacy issues in the mobile context, focusing in particular on the design of an adaptive privacy management system for context-aware mobile devices, and I explore the role of personalization and of control over users' personal data. This allowed me to make multiple contributions, both theoretical and practical. On the theoretical side, I propose and prototype an adaptive single sign-on solution that uses contextual information to protect users' private information on smartphones. To validate this solution, I first showed that a user's context is a unique identifier and that context-awareness technology can increase both users' perceived ease of use of the system and the service provider's authentication security. I then followed a design science research paradigm and implemented this solution in a mobile application called "Privacy Manager". I evaluated its utility through several focus-group interviews; overall, the proposed solution fulfilled its intended function and users expressed the intention to use the application. To better understand the personalization-privacy paradox, I built on the theoretical foundations of the privacy calculus and the technology acceptance model to conceptualize a theory of users' mobile privacy management. I also examined the role of personalization and control in my model and how these two elements interact with the privacy calculus and the mobile technology model. In the practical realm, this thesis contributes to the understanding of the trade-off between the benefits of personalized services and the privacy concerns they may cause. By pointing out new opportunities to rethink how users' context information can protect private data, it also suggests new elements for privacy-related business models.