857 results for cash-in-advance
Abstract:
Density-driven instabilities arise in a wide variety of flows. One example is the geological sequestration of carbon dioxide in porous media. The gas is injected at high pressure into deep saline aquifers. The density difference between the brine saturated with dissolved CO2 and the surrounding brine induces favorable currents that transport it toward the deep geological layers. Density gradients can also cause the undesired transport of toxic substances, which may eventually lead to soil and groundwater pollution. The range of scales involved in this type of phenomenon is very wide. It extends from the pore scale, at which the instabilities grow, up to the aquifer scale, at which long-term phenomena take place. A faithful numerical reproduction of the physics therefore remains a challenge because of the multiscale nature of these phenomena in both space and time. It requires the development of efficient algorithms and the use of modern computational tools. Combined with iterative solution methods, multiscale methods make it possible to solve large systems of algebraic equations efficiently. These methods were introduced as upscaling and downscaling techniques for the simulation of flow in porous media, in order to handle strong heterogeneities of the permeability field. The principle relies on the parallel use of two grids: the first is chosen according to the resolution of the permeability field (fine grid), while the second (coarse grid) is used to approximate the fine-scale problem at lower cost. The quality of the multiscale solution can be improved iteratively to prevent excessive errors when the permeability field is complex.
Adaptive methods that restrict the update procedures to regions with large gradients make it possible to limit the additional computational costs. In the case of density-driven instabilities, the scale of the phenomena varies over time. Consequently, adaptive multiscale methods are required to account for these dynamics. The objective of this thesis is to develop efficient adaptive multiscale algorithms for the simulation of density-driven instabilities. To this end, we build on the Multiscale Finite-Volume (MsFV) method, which offers the advantage of solving transport phenomena while conserving mass exactly. In the first part, we demonstrate that the approximations of the MsFV method give rise to unphysical fingering, whose suppression requires iterative correction operations. The additional computational costs of these operations can, however, be offset by adaptive methods. We also propose using the MsFV method as a downscaling technique: the coarse grid is used in zones where the flow is relatively homogeneous, while the finer grid is used to resolve large gradients. In the second part, the multiscale method is extended to an arbitrary number of levels. We show that the generalized method remains efficient for solving large systems of algebraic equations. In the last part, we focus our study on the scales that determine the evolution of density-driven instabilities. Identifying the local as well as the global structure of the flow makes it possible to upscale the instabilities at late times, while small-scale structures are preserved during the onset of the instability.
The results presented in this work extend the understanding of MsFV methods and offer efficient multiscale formulations for the simulation of density-driven instabilities. - Density-driven instabilities in porous media are of interest for a wide range of applications, for instance, for geological sequestration of CO2, during which CO2 is injected at high pressure into deep saline aquifers. Due to the density difference between the CO2-saturated brine and the surrounding brine, a downward migration of CO2 into deeper regions, where the risk of leakage is reduced, takes place. Similarly, undesired spontaneous mobilization of potentially hazardous substances that might endanger groundwater quality can be triggered by density differences. Over the last years, these effects have been investigated with the help of numerical groundwater models. Major challenges in simulating density-driven instabilities arise from the different scales of interest involved, i.e., the scale at which instabilities are triggered and the aquifer scale over which long-term processes take place. An accurate numerical reproduction is possible only if the finest scale is captured. For large aquifers, this leads to problems with a large number of unknowns. Advanced numerical methods are required to efficiently solve these problems with today's available computational resources. Beside efficient iterative solvers, multiscale methods are available to solve large numerical systems. Originally, multiscale methods were developed as upscaling-downscaling techniques to resolve strong permeability contrasts. In this case, two static grids are used: one is chosen with respect to the resolution of the permeability field (fine grid); the other (coarse grid) is used to approximate the fine-scale problem at low computational costs.
The quality of the multiscale solution can be iteratively improved to avoid large errors in case of complex permeability structures. Adaptive formulations, which restrict the iterative update to domains with large gradients, enable limiting the additional computational costs of the iterations. In case of density-driven instabilities, additional spatial scales appear which change with time. Flexible adaptive methods are required to account for these emerging dynamic scales. The objective of this work is to develop an adaptive multiscale formulation for the efficient and accurate simulation of density-driven instabilities. We consider the Multiscale Finite-Volume (MsFV) method, which is well suited for simulations including the solution of transport problems as it guarantees a conservative velocity field. In the first part of this thesis, we investigate the applicability of the standard MsFV method to density-driven flow problems. We demonstrate that approximations in MsFV may trigger unphysical fingers and that iterative corrections are necessary. Adaptive formulations (e.g., limiting a refined solution to domains with large concentration gradients where fingers form) can be used to balance the extra costs. We also propose to use the MsFV method as a downscaling technique: the coarse discretization is used in areas without significant change in the flow field, whereas the problem is refined in the zones of interest. This enables accounting for the dynamic change in scales of density-driven instabilities. In the second part of the thesis, the MsFV algorithm, which originally employs one coarse level, is extended to an arbitrary number of coarse levels. We show that this keeps the MsFV method efficient for problems with a large number of unknowns. In the last part of this thesis, we focus on the scales that control the evolution of density fingers.
The identification of local and global flow patterns allows a coarse description at late times while conserving fine-scale details during the onset stage. Results presented in this work advance the understanding of the Multiscale Finite-Volume method and offer efficient dynamic multiscale formulations to simulate density-driven instabilities. - Groundwater aquifers characterized by porous structures and highly permeable fractures are of particular interest to hydrogeologists and environmental engineers. In these media, a wide variety of flows can be observed. The most common are the transport of contaminants by groundwater, reactive transport, and the simultaneous flow of several immiscible phases, such as oil and water. The scale that characterizes these flows is defined by the interaction between geological heterogeneity and physical processes. A fluid at rest in the pore space of a porous medium can be destabilized by density gradients. These can be induced by local changes in temperature or by the dissolution of a chemical compound. Density-driven instabilities are of particular interest since they can potentially compromise water quality. A striking example is the salinization of fresh water in aquifers through the penetration of denser salt water into deep regions. In the case of density-driven flows, the characteristic scales of the flow extend from the pore scale, at which the instabilities grow, up to the aquifer scale, over which long-term phenomena take place. Since in-situ investigations are practically impossible, numerical models are used to predict and assess the risks associated with density-driven instabilities.
A correct description of these phenomena relies on resolving all the scales of the flow, which can span eight to ten orders of magnitude in the case of large aquifers. This results in large numerical problems that are very expensive to solve. Sophisticated numerical schemes are therefore necessary to perform accurate simulations of hydrodynamic instabilities at large scale. In this work, we present various numerical methods that allow density-driven instabilities to be simulated efficiently and accurately. These new methods are based on the Multiscale Finite-Volume method. The idea is to project the original problem onto a larger scale, where it is less expensive to solve, and then to map the coarse solution back to the original scale. This technique is particularly well suited to problems in which a wide range of scales is involved and evolves in space and time. It reduces computational costs by limiting the detailed description of the problem to regions that contain a moving concentration front. The achievements are illustrated by the simulation of phenomena such as seawater intrusion and carbon dioxide sequestration.
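The project-to-coarse-scale-and-map-back idea described in this abstract can be sketched with a minimal two-grid solve for a 1D pressure (Poisson) problem. This is only the upscale/downscale skeleton, not the actual MsFV construction with basis and correction functions; the grid sizes and function names are illustrative.

```python
import numpy as np

def solve_poisson(n, f, h):
    """Dense finite-difference solve of -u'' = f on n interior points,
    spacing h, zero Dirichlet boundary conditions."""
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return np.linalg.solve(A, f)

def two_grid_approximation(f_fine, coarsen=4):
    """Approximate the fine-grid solution by restricting the problem to a
    coarser grid, solving there cheaply, and interpolating back.
    Assumes (len(f_fine) + 1) is divisible by `coarsen`."""
    n_fine = f_fine.size
    h_fine = 1.0 / (n_fine + 1)
    x_fine = np.linspace(h_fine, 1.0 - h_fine, n_fine)
    # Restriction: keep every `coarsen`-th node of the fine grid.
    x_coarse = x_fine[coarsen - 1::coarsen]
    f_coarse = f_fine[coarsen - 1::coarsen]
    h_coarse = coarsen * h_fine
    u_coarse = solve_poisson(f_coarse.size, f_coarse, h_coarse)
    # Prolongation: linear interpolation back to the fine grid,
    # including the homogeneous boundary values.
    xp = np.concatenate(([0.0], x_coarse, [1.0]))
    up = np.concatenate(([0.0], u_coarse, [0.0]))
    return np.interp(x_fine, xp, up)
```

With 255 fine cells and a coarsening factor of 4, the coarse solve involves 63 unknowns instead of 255. In the MsFV setting, the simple linear interpolation above is replaced by locally computed basis and correction functions, which is what recovers fine-scale details and conservative fluxes.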
Abstract:
This paper shows how to calculate recursively the moments of the accumulated and discounted value of cash flows when the instantaneous rates of return follow a conditional ARMA process with normally distributed innovations. We investigate various moment-based approaches to approximate the distribution of the accumulated value of cash flows, and we assess their performance through Monte Carlo simulations. We discuss the potential use in insurance, especially in the context of asset-liability management for pension funds.
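The paper's recursive moment formulas are not reproduced in the abstract; as an illustration of the quantity being studied, the first two moments of the accumulated value of unit annual cash flows can be estimated by Monte Carlo under an AR(1) rate process, a special case of the ARMA processes considered. All parameter values below are arbitrary.

```python
import numpy as np

def accumulated_value_moments(n_years=10, n_sims=100_000, phi=0.6,
                              mu=0.04, sigma=0.01, seed=0):
    """Monte Carlo mean and variance of the accumulated value of unit
    annual cash flows when the yearly log-return delta_t follows an AR(1):
        delta_t = mu + phi * (delta_{t-1} - mu) + eps_t,  eps_t ~ N(0, sigma^2).
    (AR(1) stands in for the general conditional ARMA process of the paper.)"""
    rng = np.random.default_rng(seed)
    delta = np.full(n_sims, mu)            # start at the long-run mean
    value = np.zeros(n_sims)
    for _ in range(n_years):
        delta = mu + phi * (delta - mu) + rng.normal(0.0, sigma, n_sims)
        value = (value + 1.0) * np.exp(delta)   # pay 1, then accrue one year
    return value.mean(), value.var()
```

With the defaults above, the expected accumulated value of ten unit payments is about 12.5. The point of the paper is that such moments can be obtained exactly by recursion; Monte Carlo plays only the benchmarking role it has in the paper.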
Abstract:
Background/Objective: Little is known about the precise role of parental migrant status (MS) and educational level (EL) in adiposity and various eating habits in young children. Therefore, we assessed their independent contribution in preschoolers. Subjects/Methods: Of 655 randomly selected preschoolers, 542 (5.1±0.6 years; 71% with parental MS and 37% with low parental EL) were analysed. Body composition was measured by bioelectrical impedance. Eating habits were assessed using a semiqualitative food frequency questionnaire and analysed according to five messages developed by the Swiss Society for Nutrition, based on factors implicated in childhood obesity: (1) 'Drinking water and decreasing sweetened drinks', (2) 'Eating fruit and vegetables', (3) 'Decreasing breakfast skipping', (4) 'Reducing fatty and sweet foods' and (5) 'Reducing the intake of meals and snacks in front of television'. Results: Children of migrant and low-EL parents had higher body fat, ate more meals and snacks while watching television, and had more fruit and fatty foods compared with their respective counterparts (all P<0.04). Children of low-EL parents also consumed less water and fewer vegetables compared with their counterparts (all P<0.04). In most instances, we found an independent contribution of parental MS and EL to adiposity and eating habits. A more pronounced effect was found if both parents were migrants or of low EL. Differences in adiposity and eating habits were relatively similar to the joint parental data when maternal and paternal MS and EL were assessed individually. Conclusions: Parental MS and EL are independently related to adiposity and various eating habits in preschoolers. European Journal of Clinical Nutrition advance online publication, 3 November 2010; doi:10.1038/ejcn.2010.248.
Abstract:
The purpose of this review and analysis is to provide a basic understanding of the issues related to worldwide hypoxic zones and the range of economic questions sorely in need of answers. We begin by describing the causes and extent of hypoxic zones worldwide, followed by a review of the evidence concerning ecological effects of the condition and impacts on ecosystem services. We describe what is known about abatement options and cost effective policy design before turning to an analysis of the large, seasonally recurring hypoxic zone in the Gulf of Mexico. We advance the understanding of this major ecological issue by estimating the relationship between pollutants (nutrients) and the areal extent of the hypoxic zone. This “production function” relationship suggests that both instantaneous and legacy contributions of nutrients contribute to annual predictions of the size of the zone, highlighting concerns that ecologists have raised about lags in the recovery of the system and affirms the importance of multiple nutrients as target pollutants. We conclude with a discussion of critical research needs to provide input to policy formation.
Abstract:
PURPOSE OF REVIEW: Despite progress in the understanding of the pathophysiology of invasive candidiasis, and the development of new classes of well tolerated antifungals, invasive candidiasis remains a disease that is difficult to diagnose and is associated with significant morbidity and mortality. Early antifungal treatment may be useful in selected groups of patients who remain difficult to identify prospectively. The purpose of this review is to summarize the recent development of risk-identification strategies targeting early identification of ICU patients likely to benefit from preemptive or empirical antifungal treatment. RECENT FINDINGS: Combinations of different risk factors are useful in identifying high-risk patients. Among the many risk factors predisposing to invasive candidiasis, colonization has been identified as one of the most important. In contrast to prospective surveillance of the dynamics of colonization (colonization index), integration of clinical colonization status into risk score models significantly improves their accuracy in identifying patients at risk of invasive candidiasis. SUMMARY: To date, despite limited prospective validation, clinical models targeted at early identification of patients at risk of developing invasive candidiasis represent a major advance in the management of patients at risk of invasive candidiasis. Moreover, large clinical studies using such risk scores or predictive rules are underway.
Abstract:
This research consisted of five laboratory experiments designed to address the following two objectives in an integrated analysis: (1) to discriminate between the symbol Stop Ahead warning sign and a small set of other signs (which included the word-legend Stop Ahead sign); and (2) to analyze sign detection, recognizability, and processing characteristics by drivers. A set of 16 signs was used in each of three experiments. A tachistoscope was used to display each sign image to a respondent for a brief interval in a controlled viewing experiment. The first experiment was designed to test detection of a sign in the driver's visual field; the second, the driver's ability to recognize a given sign in the visual field; and the third, the speed and accuracy of a driver's response to each sign as a command to perform a driving action. A fourth experiment tested the meanings drivers associated with an eight-sign subset of the 16 signs used in the first three experiments. A fifth experiment required all participants to select which (if any) signs they considered appropriate for use on two scale-model county road intersections. The conclusions are that word-legend Stop Ahead signs are more effective driver communication devices than symbol Stop Ahead signs; that it is helpful to drivers to have a word plate supplementing the symbol sign if a symbol sign is used; and that the guidance in the Manual on Uniform Traffic Control Devices on the placement of advance warning signs should not supplant engineering judgment in providing proper sign communication at an intersection.
Abstract:
BACKGROUND/AIM: With the evolving boundaries of sports science and a greater understanding of the driving factors in human performance physiology, one of the limiting factors has now become the technology. The growing scientific interest in the practical application of hypoxic training for intermittent activities such as team and racket sports legitimises the development of innovative technologies serving athletes in a sport-specific setting. METHODS: Description of a new mobile inflatable simulated hypoxic equipment. RESULTS: The system comprises two inflatable units, a tunnel and a rectangular design, each with a 215 m³ volume, plus a hypoxic trailer generating over 3000 Lpm of hypoxic air with FiO₂ between 0.21 and 0.10 (a simulated altitude up to 5100 m). The inflatable units offer a 45 m running lane (width = 1.8 m, height = 2.5 m) as well as an 8 m × 10 m dome tent. FiO₂ is stable within a range of 0.1% in normal conditions inside the tunnel. The air supplied is very dry, typically 10-15% relative humidity. CONCLUSIONS: This mobile inflatable simulated hypoxic equipment is a promising technological advance within the sport sciences. It offers an opportunity for team-sport players to train under hypoxic conditions, both for repeated sprints (tunnel configuration) and for small-sided games (rectangular configuration).
Abstract:
This contract extension was granted to analyze data obtained in the original contract period at a level of detail not called for in the original contract nor permitted by the time constraints of the original contract schedule. These further analyses focused on two primary questions: 1. What sources of variation can be isolated within the overall pattern of driver recognition errors reported previously for the 16 signs tested in Project HR-256? 2. Were there systematic relations among data on the placement of signs in a simulated signing exercise, data on the respondents' ability to detect the presence of a sign in a visual field, their ability to recognize quickly and correctly a sign shown to them, and the speed with which these same persons can respond to a sign for a driver decision?
Abstract:
Independence, respect, and equality are values important to all people. These values help define the concepts of autonomy (independence and freedom) and self-determination (the right to make decisions for oneself). Because these rights are so valued in our society and are something that most of us would value for ourselves, the "least restrictive alternative" should always be considered before taking away a person's civil and legal rights to make decisions for him or herself. The least restrictive alternative is an option that allows a person to keep as much autonomy and self-determination as possible while providing only the level of protection and supervision that is necessary. Some examples include: a representative payee for certain government benefit checks, joint bank accounts, or advance directives for health care.
Space Competition and Time Delays in Human Range Expansions. Application to the Neolithic Transition
Abstract:
Space competition effects are well known in many microbiological and ecological systems. Here we analyze such an effect in human populations. The Neolithic transition (the change from foraging to farming) was mainly the outcome of a demographic process that spread gradually throughout Europe from the Near East. In Northern Europe, archaeological data show a slowdown in the Neolithic rate of spread that can be related to a high indigenous (Mesolithic) population density hindering the advance, as a result of the space competition between the two populations. We measure this slowdown from a database of 902 Early Neolithic sites and develop a time-delayed reaction-diffusion model with space competition between Neolithic and Mesolithic populations to predict the observed speeds. The comparison of the predicted speed with the observations and with a previous non-delayed model shows that both effects, the time delay due to the generation lag and the space competition between populations, are crucial in order to understand the observations.
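The abstract contrasts a non-delayed front-speed model with a time-delayed one. A minimal sketch of the two baseline speeds is given below; the delayed form is quoted from the hyperbolic reaction-diffusion literature and is only an illustration, since the paper's model additionally includes space competition with the Mesolithic population. The growth rate a, diffusivity D and generation delay T used in the test are placeholder values, not the paper's estimates.

```python
import math

def fisher_speed(a, D):
    """Classical (non-delayed) Fisher front speed, v = 2*sqrt(a*D),
    for growth rate a and diffusivity D."""
    return 2.0 * math.sqrt(a * D)

def delayed_speed(a, D, T):
    """Time-delayed (hyperbolic reaction-diffusion) front speed,
    v = 2*sqrt(a*D) / (1 + a*T/2), where T is the generation lag.
    Reduces to the Fisher speed as T -> 0."""
    return 2.0 * math.sqrt(a * D) / (1.0 + a * T / 2.0)
```

The delayed speed is always below the Fisher speed for T > 0, which is the direction of the correction the generation lag introduces; the additional slowdown from space competition acts on top of this.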
Abstract:
Background: The combined serum creatinine (SCreat) and cystatin C (CysC) CKD-EPI formula constitutes a new advance for glomerular filtration rate (GFR) estimation in adults. Using inulin clearances (iGFRs), the revised SCreat formula and the combined Schwartz formula, this study aims to evaluate the applicability of the combined CKD-EPI formula in children. Method: 201 iGFRs from 201 children were analyzed and divided by chronic kidney disease (CKD) stage (iGFRs ≥90 ml/min/1.73 m², 90 > iGFRs > 60, and iGFRs ≤59), and by age group (<10, 10-15, and >15 years). Medians with 95% confidence intervals of bias, precision, and accuracy within 30% of the iGFRs, for all three formulas, were compared using the Wilcoxon signed-rank test. Results: For the entire cohort and for all CKD and age groups, medians of bias for the CKD-EPI formula were significantly higher (p < 0.001) and precision was significantly lower than for the SCreat-only and the combined SCreat and CysC Schwartz formulas. We also found that, using the CKD-EPI formula, bias decreased and accuracy increased as the age group increased, with better formula performance above 15 years of age. However, the CKD-EPI formula's accuracy is 58%, compared to 93 and 92% for the SCreat and combined Schwartz formulas, in this adolescent group. Conclusions: The performance of the combined CKD-EPI formula improves in adolescence compared with younger ages. Nevertheless, the CKD-EPI formula performs more poorly than the SCreat and the combined Schwartz formulas in the pediatric population. © 2013 S. Karger AG, Basel.
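The agreement statistics named in this abstract (bias, precision, accuracy within 30% of measured GFR) can be computed as sketched below. The exact conventions used in the study are not given in the abstract; here bias is the median error and precision is taken as the interquartile range of the error, so treat these definitions as illustrative.

```python
import numpy as np

def gfr_agreement(igfr, egfr):
    """Agreement statistics for an eGFR formula against measured (inulin)
    GFR: median bias, precision (IQR of the error), and P30, the
    percentage of estimates within 30% of the measured value."""
    igfr = np.asarray(igfr, dtype=float)
    egfr = np.asarray(egfr, dtype=float)
    err = egfr - igfr
    bias = np.median(err)
    precision = np.percentile(err, 75) - np.percentile(err, 25)
    p30 = 100.0 * np.mean(np.abs(err) <= 0.30 * igfr)
    return bias, precision, p30
```

The reported "accuracy within 30%" figures (58% for CKD-EPI versus 92-93% for the Schwartz formulas in adolescents) correspond to the P30 quantity computed here.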
Abstract:
The motivation for this research originated from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and business applications due to their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200'000 people in a profitable slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike with CISC processors, the RISC processor architecture business is a separate industry from the RISC chip manufacturing industry. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choices through hardware-independent, real-time-capable software applications.
An architecture disruption emerged, and the smartphone and tablet market was formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers and is now being further extended by tablets. An underlying additional element of this transition is the increasing role of open source technologies, both in software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-per-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries. Industrial automation incumbents continue to supply systems based on vertically integrated designs consisting of proprietary software and proprietary, mainly microprocessor-based, hardware.
They enjoy admirable profitability levels on a very narrow customer base due to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation technology (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation, subject to the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes industrial automation could take, considering the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining their own proprietary solutions.
The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created new markets for personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control point and business model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
During the recent economic crisis, school district budgets have been impacted by state school aid funding shortfalls and state aid reductions due to across-the-board general fund reductions in fiscal year 2009 and fiscal year 2010. Additionally, in fiscal year 2011, the state school aid appropriation was capped and was 156.1 million dollars short of fully funding the state portion of the school aid formula. This issue review examines the impact that the reductions in state aid have had on school district cash reserve levies.
Abstract:
BACKGROUND: Video-laryngoscopes are marketed for intubation in difficult airway management. They provide a better view of the larynx and may facilitate tracheal intubation, but there is no adequately powered study comparing different types of video-laryngoscopes in a difficult airway scenario or in a simulated difficult airway situation. METHODS/DESIGN: The objective of this trial is to evaluate and compare the clinical performance of three video-laryngoscopes with a guiding channel for intubation (Airtraq, A.P. Advance, King Vision) and three video-laryngoscopes without an integrated tracheal tube guidance (C-MAC, GlideScope, McGrath) in a simulated difficult airway situation in surgical patients. The working hypothesis is that each video-laryngoscope provides at least a 90% first-intubation success rate (lower limit of the 95% confidence interval >0.9). It is a prospective, patient-blinded, multicenter, randomized controlled trial in 720 patients scheduled for elective surgery under general anesthesia requiring tracheal intubation at one of the three participating hospitals. A difficult airway will be created using an extrication collar and taping the patient's head to the operating table to substantially reduce mouth opening and to minimize neck movement. Tracheal intubation will be performed with one of the six devices according to randomization. Insertion success, time necessary for intubation, Cormack-Lehane grade and percentage of glottic opening (POGO) score at laryngoscopy, optimization maneuvers required to aid tracheal intubation, adverse events and technical problems will be recorded. The primary outcome is intubation success at the first attempt. DISCUSSION: We will simulate the difficult airway and evaluate different video-laryngoscopes in this highly realistic and clinically challenging scenario, independently from the manufacturers of the devices.
Because of the sufficiently powered multicenter design, this study will deliver important and cutting-edge results that will help clinicians decide which device to use for intubation of the expected and unexpected difficult airway. TRIAL REGISTRATION: NCT01692535.
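The working hypothesis quoted in this abstract (lower limit of the 95% confidence interval for first-attempt success above 0.9) can be checked with a standard score interval for a proportion. The trial's actual analysis method is not specified in the abstract, so the Wilson interval below is an assumption used purely for illustration.

```python
import math

def wilson_lower_bound(successes, n, z=1.96):
    """Lower limit of the two-sided 95% Wilson score interval for a
    binomial proportion.  The trial's hypothesis (success rate >= 90%)
    would be supported when this lower limit exceeds 0.9."""
    p = successes / n
    denom = 1.0 + z**2 / n
    center = p + z**2 / (2.0 * n)
    margin = z * math.sqrt(p * (1.0 - p) / n + z**2 / (4.0 * n**2))
    return (center - margin) / denom
```

For example, with 120 patients per device, 115 first-attempt successes (95.8%) yield a lower limit just above 0.9, whereas 100 successes (83.3%) fall clearly below it; this is the kind of margin the 720-patient sample size is meant to secure.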
Abstract:
Breakthrough technologies that now enable the sequencing of individual genomes will irreversibly modify the way diseases are diagnosed, predicted, prevented and treated. For these technologies to reach their full potential requires, upstream, access to high-quality biomedical data and samples from large numbers of properly informed and consenting individuals and, downstream, the possibility to transform the emerging knowledge into clinical utility. The Lausanne Institutional Biobank was designed as an integrated, highly versatile infrastructure to harness the power of these emerging technologies, catalyse the discovery and development of innovative therapeutics and biomarkers, and advance the field of personalised medicine. Described here are its rationale, design and governance, as well as parallel initiatives that have been launched locally to address the societal, ethical and technological issues associated with this new bio-resource. Since January 2013, inpatients admitted to the Lausanne CHUV University Hospital have been systematically invited to provide a general consent for the use of their biomedical data and samples for research, to complete a standardised questionnaire, to donate a 10-ml sample of blood for future DNA extraction and to be re-contacted for future clinical trials. Over the first 18 months of operation, 14,459 patients were contacted, and 11,051 agreed to participate in the study. This initial 18-month experience illustrates that a systematic hospital-based biobank is feasible; it shows a strong engagement in research from the patient population in this University Hospital setting, and the need for a broad, integrated approach for the future of medicine to reach its full potential.