102 results for Interactive design
at Université de Lausanne, Switzerland
Abstract:
The PulseCath iVAC 3L left ventricular assist device is an option for treating transitory left heart failure or dysfunction after cardiac surgery. Assisted blood flow should reach up to 3 l/min. In the present in vitro model, exact pump flow was examined as a function of various frequencies and afterloads. Optimal flow was achieved at inflation/deflation frequencies of about 70-80/min. The maximal flow rate, about 2.5 l/min, was achieved at a minimal afterload of 22 mmHg. Handling of the device was easy owing to its connection to a standard intra-aortic balloon pump console. With increasing afterload (up to a simulated mean systemic pressure of 66 mmHg), flow rate and cardiac support are to some extent limited.
Abstract:
Double trouble: A hybrid organic-inorganic (organometallic) inhibitor was designed to target glutathione transferases. The metal center is used to direct protein binding, while the organic moiety acts as the active-site inhibitor (see picture). The mechanism of inhibition was studied using a range of biophysical and biochemical methods.
Abstract:
Locating new wind farms is of crucial importance for the energy policies of the next decade. Selecting a new location requires an accurate picture of the wind fields. However, characterizing wind fields is a difficult task, since the phenomenon is highly nonlinear and related to complex topographical features. In this paper, we propose both a nonparametric model to estimate wind speed at different time instants and a procedure to discover underrepresented topographic conditions, where new measuring stations could be added. Compared to space-filling techniques, this latter approach privileges optimization of the output space, thus locating new potential measuring sites through the uncertainty of the model itself.
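The abstract does not specify which nonparametric estimator is used. As a hedged illustration of the general idea only, a k-nearest-neighbour regression over topographic features could estimate wind speed at an unmeasured site; the feature names (elevation, slope, roughness) and all station data below are invented for this sketch, not taken from the paper.

```python
# Illustrative sketch: k-NN regression as one possible nonparametric
# wind-speed estimator over topographic features. All data are hypothetical.
import math

def knn_estimate(train, query, k=3):
    """Predict wind speed at `query` as the mean speed of its k nearest
    training stations in feature space (Euclidean distance)."""
    dists = sorted((math.dist(features, query), speed) for features, speed in train)
    nearest = dists[:k]
    return sum(speed for _, speed in nearest) / len(nearest)

# Hypothetical stations: (elevation_km, slope, roughness) -> wind speed (m/s)
stations = [
    ((0.4, 0.1, 0.2), 4.5),
    ((1.2, 0.3, 0.1), 7.8),
    ((0.5, 0.2, 0.3), 5.0),
    ((1.0, 0.4, 0.2), 7.1),
]
print(knn_estimate(stations, (0.45, 0.15, 0.25), k=2))  # prints 4.75
```

The model's own uncertainty (e.g. the spread among the k neighbours) is what the paper's second contribution exploits to flag underrepresented topographic conditions.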
Abstract:
The Smart cannula concept allows for collapsed cannula insertion and self-expansion within a vein of the body. (A) Computational fluid dynamics and (B) bovine experiments (76±3.8 kg) were performed for comparative analyses, prior to (C) the first clinical application. For an 18F access, a given flow of 4 l/min (A) resulted in a pressure drop of 49 mmHg for the smart cannula versus 140 mmHg for the control; the corresponding Reynolds numbers are 680 versus 1170, respectively. (B) For a 28F access, the maximal flow for the smart cannula was 5.8±0.5 l/min versus 4.0±0.1 l/min for the standard (P<0.0001); for 24F, 5.5±0.6 l/min versus 3.2±0.4 l/min (P<0.0001); and for 20F, 4.1±0.3 l/min versus 1.6±0.3 l/min (P<0.0001). The flow obtained with the smart cannula was 270±45% (20F), 172±26% (24F), and 134±13% of standard (28F) (one-way ANOVA, P=0.014). (C) The first clinical application (1.42 m²) with a smart cannula showed 3.55 l/min (100% of predicted) without additional fluids. All three assessment steps confirm the superior performance of the smart cannula design.
Abstract:
An active, solvent-free solid sampler was developed for the collection of 1,6-hexamethylene diisocyanate (HDI) aerosol and prepolymers. The sampler was made of a filter impregnated with 1-(2-methoxyphenyl)piperazine contained in a filter holder. Interferences with HDI were observed when a set of cellulose acetate filters and a polystyrene filter holder were used; a glass fiber filter and a polypropylene filter cassette gave better results. The applicability of the sampling and analytical procedure was validated with a test chamber constructed for the dynamic generation of HDI aerosol and prepolymers in commercial two-component spray paints (Desmodur® N75) used in car refinishing. The particle size distribution, temporal stability, and spatial uniformity of the simulated aerosol were established in order to test the sampler. The monitoring of aerosol concentrations was conducted with the solid sampler paired with the reference impinger technique (impinger flasks contained 10 mL of 0.5 mg/mL 1-(2-methoxyphenyl)piperazine in toluene) under a controlled atmosphere in the test chamber. Analyses of derivatized HDI and prepolymers were carried out by high-performance liquid chromatography with ultraviolet detection. The correlation between the solvent-free and impinger techniques appeared fairly good (Y = 0.979X - 0.161; R = 0.978) when the tests were conducted in the range of 0.1 to 10 times the threshold limit value (TLV) for HDI monomer and up to 60 μg/m³ (3 U.K. TLVs) for total -N=C=O groups.
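A calibration line of the form Y = 0.979X - 0.161 with R = 0.978 is what an ordinary least-squares fit of paired sampler readings produces. A minimal sketch of that computation, using invented data points (not the study's measurements):

```python
# Sketch: ordinary least-squares fit and Pearson correlation for paired
# measurements from two sampling techniques. Data below are hypothetical.
import math

def ols(xs, ys):
    """Return (slope, intercept, r) of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / math.sqrt(sxx * syy)  # Pearson correlation coefficient
    return slope, intercept, r

# Hypothetical paired readings: impinger (x) vs. solvent-free sampler (y)
x = [0.5, 1.0, 2.0, 4.0, 8.0]
y = [0.35, 0.85, 1.80, 3.70, 7.70]
slope, intercept, r = ols(x, y)
```

With these invented points the fit comes out near the reported coefficients, which is only meant to show the shape of the calculation.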
Abstract:
The comparison of consecutively manufactured tools and firearms has provided much, but not all, of the basis for the profession of firearm and toolmark examination. The authors accept the fundamental soundness of this approach but appeal to the experimental community to close two minor gaps in the experimental procedure. We suggest that "blinding" and attention to the appropriateness of other experimental conditions would consolidate the foundations of our profession. We do not suggest that previous work is unsound.
Abstract:
Objectives: Skin can be partially regenerated after full-thickness defects by collagen matrices. In this study, we identified the main limitations of induced regeneration, aiming to improve the design of dermal matrices. Methods: Individual mice received a 1 cm², full-thickness skin wound on the dorsum, which was grafted with a collagen-GAG matrix or left ungrafted. The healing modulation induced by the collagen-GAG matrices was compared to spontaneous healing and to custom-designed, bioactive, poly-N-acetyl-glucosamine (NAG) matrices. Wound staging was based on macroscopic, histological, and immunohistochemical analysis on days 3, 7, 10, and 21 post wounding. Results: Cell density was higher in spontaneously granulating wounds than in grafted wounds. While grafted wounds exhibited increased levels of cell proliferation on days 7 and 10, vascularity was dramatically reduced. NAG scaffolds accelerated both angiogenesis and wound re-epithelialization. Conclusions: Since slow integration and revascularization severely limit the engraftment of clinically used dermal scaffolds, the design of dermal matrices using bioactive materials represents the next step in skin regeneration.
Abstract:
ABSTRACT: BACKGROUND: There is no recommendation to screen ferritin levels in blood donors, even though several studies have noted the high prevalence of iron deficiency after blood donation, particularly among menstruating females. Furthermore, some clinical trials have shown that non-anaemic women with unexplained fatigue may benefit from iron supplementation. Our objective is to determine the clinical effect of iron supplementation on fatigue in female blood donors without anaemia but with a mean serum ferritin ≤ 30 ng/ml. METHODS/DESIGN: In a double-blind randomised controlled trial, we will measure the blood count and ferritin level of women under age 50 who donate blood to the University Hospital of Lausanne Blood Transfusion Department, at the time of the donation and after 1 week. One hundred and forty donors with a ferritin level ≤ 30 ng/ml and a haemoglobin level ≥ 120 g/l (non-anaemic) a week after the donation will be included in the study and randomised. A one-month course of oral ferrous sulphate (80 mg/day of elemental iron) will be introduced vs. placebo. Self-reported fatigue will be measured using a visual analogue scale. Secondary outcomes are: score of fatigue (Fatigue Severity Scale), maximal aerobic power (Chester Step Test), quality of life (SF-12), and mood disorders (Prime-MD). Haemoglobin and ferritin concentrations will be monitored before and after the intervention. DISCUSSION: Iron deficiency is a potential problem for all blood donors, especially menstruating women. To our knowledge, no other intervention study has yet evaluated the impact of iron supplementation on subjective symptoms after a blood donation. TRIAL REGISTRATION: NCT00689793.
Abstract:
Objective: The main objective of this study was to assess mother-child patterns of interaction in relation to later quality of attachment in a group of children with an orofacial cleft compared with children without a cleft. Design: Families were contacted when the child was 2 months old for a direct assessment of mother-child interaction, and then at 12 months for a direct assessment of the child's attachment. Data concerning socioeconomic information and posttraumatic stress symptoms in mothers were collected at the first appointment. Participants: Forty families of children with a cleft and 45 families of children without a cleft were included in the study. Families were recruited at birth in the University Hospital of Lausanne. Results: Children with a cleft were more difficult and less cooperative during interaction with their mother at 2 months of age compared with children without a cleft. No significant differences were found in mothers or in dyadic interactive styles. Concerning the child's attachment at 12 months, no differences were found in attachment security. However, secure children with a cleft were significantly more avoidant with their mother during the reunion episodes than secure children without a cleft. Conclusion: Despite the facial disfigurement and the stress engendered by treatment during the first months of the infant's life, children with a cleft and their mothers do as well as families without a cleft with regard to the mothers' mental health, mother-child relationships, and later quality of attachment. A potential explanation for this absence of difference may be the multidisciplinary support that families of children with a cleft benefit from in Lausanne.
Abstract:
SUMMARY: Eukaryotic DNA interacts with nuclear proteins via non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences based on steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by high-throughput DNA sequencing. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unknown artifacts of the method. The sequence tag distribution in the genome is not uniform, and we have found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence tag accumulations will create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some important biological properties of Nuclear Factor I DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with the DNA wrapped around the nucleosome.
We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
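The thesis describes its own unbiased random sampling algorithm, which the abstract does not detail. As a hedged sketch of the underlying idea only, the standard reservoir-sampling technique (Algorithm R, not necessarily the thesis's method) draws a fixed-size uniform subsample from a tag stream of arbitrary size in a single pass:

```python
# Sketch: uniform subsampling of ChIP-Seq sequence tags via reservoir
# sampling. The (chromosome, position) tag stream below is hypothetical.
import random

def reservoir_sample(tags, k, seed=None):
    """Return k items drawn uniformly at random from an iterable of
    arbitrary (possibly huge) size, using a single pass over the data."""
    rng = random.Random(seed)
    reservoir = []
    for i, tag in enumerate(tags):
        if i < k:
            reservoir.append(tag)        # fill the reservoir first
        else:
            j = rng.randint(0, i)        # uniform index in [0, i]
            if j < k:
                reservoir[j] = tag       # replace with decreasing probability
    return reservoir

# Hypothetical tag stream: (chromosome, position) mapping coordinates
stream = ((f"chr{c}", p) for c in (1, 2) for p in range(0, 1000, 10))
subsample = reservoir_sample(stream, k=20, seed=0)
```

Repeating such subsampling at increasing depths is one way to test whether conclusions drawn from a massive dataset are stable, which is the spirit of the analysis the abstract describes.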
Abstract:
Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, which are strategic situations in which the players choose only once and simultaneously, and dynamic games, which are strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. Indeed, a dynamic game is said to exhibit perfect information whenever, at any point of the game, every player has full informational access to all choices that have been made so far. In the case of imperfect information, by contrast, some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. Firstly, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which rather sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in a more detailed way as a tree. In fact, it is standard to formalize static games with the normal form and dynamic games with the extensive form. Secondly, solution concepts are developed to solve models of games, in the sense of identifying the choices that should be taken by rational players. Indeed, the ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character.
This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players. Indeed, before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states, on which players base their decisions, are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated. Also, Aumann's sufficient conditions for backward induction are presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness. Then, sufficient conditions for backward induction are derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some of its implications for games are considered.
In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened in the sense that possible contexts are provided in which agents can indeed agree to disagree.
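Backward induction, for which the thesis derives sufficient conditions, can be illustrated on a finite perfect-information game tree: solve the last decision nodes first, then propagate the resulting payoffs upward. The two-player tree and its payoffs below are a toy example invented for this sketch, not a game from the thesis.

```python
# Sketch: backward induction on a finite perfect-information game tree.
# Internal node: (player, {action: subtree}); leaf: payoff tuple (u0, u1).

def backward_induction(node):
    """Return (payoff_vector, action_path) reached by rational play."""
    if not isinstance(node[1], dict):    # leaf: a payoff vector
        return node, []
    player, actions = node
    best_action, best_value, best_path = None, None, None
    for action, subtree in actions.items():
        value, path = backward_induction(subtree)
        # The player to move picks the action maximizing her own utility.
        if best_value is None or value[player] > best_value[player]:
            best_action, best_value, best_path = action, value, path
    return best_value, [best_action] + best_path

# Player 0 moves first ("L"/"R"), then player 1 ("l"/"r"); payoffs (u0, u1).
game = (0, {
    "L": (1, {"l": (2, 1), "r": (0, 0)}),
    "R": (1, {"l": (3, 0), "r": (1, 2)}),
})
value, play = backward_induction(game)   # value == (2, 1), play == ["L", "l"]
```

Here player 0 avoids "R" because player 1 would rationally answer with "r" (payoff 1 for player 0), so anticipating the opponent's last move drives the first choice, which is exactly the reasoning the epistemic conditions are meant to justify.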
Abstract:
Epidemiological methods have become useful tools for assessing the effectiveness and safety of health care technologies. Experimental methods, namely randomized controlled trials (RCTs), give the best evidence of the effect of a technology. However, ethical issues and the very nature of the intervention under study sometimes make it difficult to carry out an RCT. Therefore, quasi-experimental and non-experimental study designs are also applied. The critical issues concerning these designs are discussed. The results of evaluative studies are important for decision-makers in health policy. The measurement of the impact of a medical technology should go beyond a statement of its effectiveness, because the essential outcome of an intervention or programme is the health status and quality of life of the individuals and populations concerned.
Abstract:
ABSTRACT: The drug discovery process has recently been profoundly changed by the adoption of computational methods that help design new drug candidates more rapidly and at lower cost. In silico drug design consists of a collection of tools that support rational decisions at the different steps of the drug discovery process, such as the identification of a biomolecular target of therapeutic interest, the selection or design of new lead compounds, and their modification to obtain better affinities as well as better pharmacokinetic and pharmacodynamic properties. Among the different tools available, particular emphasis is placed in this review on molecular docking, virtual high-throughput screening, and fragment-based ligand design.