921 results for Brams, Steven J.: The win-win solution
Abstract:
3 Summary. 3.1 English. The pharmaceutical industry has been facing several challenges in recent years, and the optimization of its drug discovery pipeline is believed to be the only viable solution. High-throughput techniques contribute actively to this optimization, especially when complemented by computational approaches that rationalize the enormous amount of information they produce. In silico techniques, such as virtual screening or rational drug design, are now routinely used to guide drug discovery. Both rely heavily on predicting the molecular interaction (docking) between drug-like molecules and a therapeutically relevant target. Several software packages are available for this purpose, but despite the very promising picture drawn in most benchmarks, they still have several hidden weaknesses. As pointed out in several recent reviews, the docking problem is far from solved, and there is now a need for methods able to identify binding modes with high accuracy, which is essential for reliably computing the binding free energy of the ligand. This quantity is directly linked to the ligand's affinity and can be related to its biological activity. Accurate docking algorithms are thus critical for both the discovery and the rational optimization of new drugs. In this thesis, a new docking program aiming at this goal is presented: EADock. It uses a hybrid evolutionary algorithm with two fitness functions, in combination with a sophisticated management of diversity, and is interfaced with the CHARMM package for energy calculations and coordinate handling. A validation was carried out on 37 crystallized protein-ligand complexes featuring 11 different proteins.
The search space was defined as a sphere of 15 Å around the center of mass of the ligand position in the crystal structure, and in contrast to other benchmarks, our algorithm was fed with optimized ligand positions up to 10 Å root-mean-square deviation (RMSD) from the crystal structure. This validation illustrates the efficiency of our sampling heuristic: correct binding modes, defined by an RMSD to the crystal structure lower than 2 Å, were identified and ranked first for 68% of the complexes. The success rate increases to 78% when considering the five best-ranked clusters, and to 92% when all clusters present in the last generation are taken into account. Most failures in this benchmark could be explained by the presence of crystal contacts in the experimental structure. EADock has been used to understand molecular interactions involved in the regulation of the Na,K-ATPase and in the activation of the nuclear hormone peroxisome proliferator-activated receptor α (PPARα). It also helped to explain the action of common pollutants (phthalates) on PPARγ, and the impact of biotransformations of the anticancer drug Imatinib (Gleevec®) on its binding mode to the Bcr-Abl tyrosine kinase. Finally, a fragment-based rational drug design approach using EADock was developed, which led to the successful design of new peptidic ligands for the α5β1 integrin and for human PPARα. In both cases, the designed peptides presented activities comparable to those of well-established ligands such as the anticancer drug Cilengitide and Wy14,643, respectively.
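The success criterion above (a binding mode counts as correct if its RMSD to the crystal structure is below 2 Å) can be sketched as a plain coordinate RMSD, assuming the two poses share the same reference frame and atom ordering; the coordinates here are invented for illustration, not taken from the benchmark:

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-square deviation between two (N, 3) coordinate arrays.

    Assumes matching atom order and a common reference frame
    (no superposition step is performed here)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# Hypothetical docked pose vs. crystal pose (coordinates in Å)
crystal = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]])
docked  = np.array([[0.1, 0.0, 0.0], [1.4, 0.1, 0.0], [1.6, 1.5, 0.1]])
is_correct = rmsd(crystal, docked) < 2.0  # the 2 Å success criterion
```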
Abstract:
Much of the analytical modeling of morphogen profiles is based on simplistic scenarios, where the source is abstracted as point-like and fixed in time, and where only the steady-state solution of the morphogen gradient in one dimension is considered. Here we develop a general formalism that allows modeling diffusive gradient formation from an arbitrary source. This mathematical framework, based on the Green's function method, applies to various diffusion problems. In this paper, we illustrate our theory with the explicit example of the establishment of the Bicoid gradient in Drosophila embryos. The gradient forms by protein translation from an mRNA distribution, followed by morphogen diffusion with linear degradation. We investigate quantitatively the influence of the spatial extension and time evolution of the source on the morphogen profile. For different biologically meaningful cases, we obtain explicit analytical expressions for both the steady-state and time-dependent 1D problems. We show that extended sources, whether of finite size or normally distributed, give rise to more realistic gradients than a single point source at the origin. Furthermore, the steady-state solutions are fully compatible with a decreasing exponential behavior of the profile. We also consider the case of a dynamic source (e.g. bicoid mRNA diffusion), for which a protein profile similar to the ones obtained from static sources can be achieved.
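As a minimal numerical illustration of the Green's-function approach described above, the sketch below computes the steady-state 1D profile for diffusion with linear degradation (steady-state Green's function G(x) = e^{−|x|/λ}/(2√(Dk)), λ = √(D/k)) by convolving G with a point-like and with a normally distributed source. All parameter values are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative parameters (not from the paper)
D, k = 1.0, 0.25           # diffusion constant, linear degradation rate
lam = np.sqrt(D / k)       # decay length of the gradient

x = np.linspace(-20.0, 20.0, 2001)
dx = x[1] - x[0]

def greens(x):
    """Steady-state Green's function of D c'' - k c = -delta(x) on the line."""
    return np.exp(-np.abs(x) / lam) / (2.0 * np.sqrt(D * k))

def steady_profile(source):
    """Convolve an arbitrary source distribution with the Green's function."""
    return np.convolve(source, greens(x), mode="same") * dx

point = np.zeros_like(x); point[len(x) // 2] = 1.0 / dx        # delta-like source
gauss = np.exp(-x**2 / (2 * 2.0**2)); gauss /= gauss.sum() * dx  # extended source

c_point = steady_profile(point)
c_gauss = steady_profile(gauss)
# Far from the source both profiles decay as exp(-|x| / lam),
# while the extended source flattens the peak near the origin.
```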
Abstract:
The goal of this Master's thesis was to investigate which functions and technologies a new, flexible carton packaging line comprises, and which trends in the packaging industry will be important in the future. The main functions of the packaging line were examined on the basis of current and future technologies; in particular, the potential of laser applications in the different sub-functions of the line was studied. An overview of the future prospects of the packaging industry was compiled from the literature and earlier studies, on the basis of which the thesis assumes that customized and multifunctional packages will become more common. Set-up times between production batches must be minimized, but by what means can such flexibility be achieved? One solution in package manufacturing is to use a robot cell, which is probably feasible at least for cup-shaped packages; robotics is in any case becoming more common in the packaging industry. Advances in digital printing technologies have made customized printing possible. In the future, printing can be done at the end of the packaging line, even after filling and sealing. Laser cutting is already in use, but in the future sealing and conventional creasing can also be performed with laser applications. An advanced, fieldbus-based control system will be indispensable in a flexible packaging line, and remotely operated fault diagnostics over the Internet will be taken for granted. Cost savings can be achieved by using a modular structure in the packaging line, and the use of standard parts and standard sub-assemblies also reduces operating and maintenance costs. It is important to remember, however, that flexibility cannot be achieved with a single feature or technology alone but by combining several methods. The operation of the packaging line being designed should also be verified with the help of modeling and simulation.
Abstract:
To succeed in today's world, people have come to rely on one another, forming various communities and networks. A defining characteristic of these communities is that the ways of thinking of their active members converge. New ideas and inventions are born within communities, but communicating them to the outside world is often problematic. Digital technology, the Internet, and many other new technologies offer one solution to this problem. One channel enabled by new technology is community television, through which a community's communications can be delivered effectively. Communities, however, lack the technical skills to implement such a service. Communities such as small and medium-sized enterprises, schools, clubs, associations, and even private individuals must therefore be offered a ready-made concept that is easy for them to use. This Master's thesis serves as the technical foundation for the community-TV service concept of Finnish Satellite Television Oy, which will be rolled out on a large scale during 2007. The thesis reviews the defining features of communities and the sense of community, and surveys the early history, current state, and various implementations of community television, as well as international and domestic community-TV trials and pilot projects. The technical part investigates the technologies that enable community television, the transmission paths, and digital production systems. Finally, the thesis draws together the technical implementation options best suited, in terms of usability, mobility, and cost-effectiveness, for deploying the concept.
Abstract:
The main goal of this paper is to propose a convergent finite volume method for a reaction-diffusion system with cross-diffusion. First, we sketch an existence proof for a class of cross-diffusion systems. Then the standard two-point finite volume fluxes are used in combination with a nonlinear positivity-preserving approximation of the cross-diffusion coefficients. Existence and uniqueness of the approximate solution are addressed, and it is also shown that the scheme converges to the corresponding weak solution of the studied model. Furthermore, we provide a stability analysis to study pattern-formation phenomena, and we perform two-dimensional numerical examples which exhibit the formation of nonuniform spatial patterns. The simulations also show that the experimental rates of convergence are slightly below second order. The convergence proof uses two ingredients of interest for various applications, namely discrete Sobolev embedding inequalities with general boundary conditions and a space-time $L^1$ compactness argument that mimics the compactness lemma due to Kruzhkov. The proofs of these results are given in the Appendix.
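To fix ideas, a two-point finite volume flux can be sketched in its simplest scalar form, an explicit 1D reaction-diffusion step with zero-flux boundaries. This is only an illustration of the flux construction: the paper's scheme is implicit, two-dimensional, and treats nonlinear cross-diffusion coefficients with a positivity-preserving approximation, none of which is reproduced here:

```python
import numpy as np

def fv_step(c, D, dt, h, f):
    """One explicit finite-volume step for c_t = D c_xx + f(c) on a uniform 1D grid.

    Two-point flux F_{i+1/2} = -D (c_{i+1} - c_i) / h, with zero-flux
    (Neumann) boundary conditions at both walls."""
    flux = -D * np.diff(c) / h                    # fluxes at interior faces
    flux = np.concatenate(([0.0], flux, [0.0]))   # zero-flux walls
    return c - dt / h * np.diff(flux) + dt * f(c)

# Illustrative run: logistic reaction, step-shaped initial datum
h, dt, D = 0.1, 0.001, 0.01                       # grid size, time step, diffusivity
c = np.where(np.arange(50) < 25, 1.0, 0.0)
for _ in range(1000):
    c = fv_step(c, D, dt, h, lambda u: u * (1.0 - u))
# The front spreads while c stays within [0, 1] (positivity is preserved
# here because the explicit step satisfies dt * D / h**2 <= 1/2).
```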
Abstract:
The Institute of Radiation Physics (IRA) is attached to the Department of Medical Radiology at the Vaud University Hospital Center (CHUV) in Lausanne. The Institute's main tasks are strongly linked to the medical activities of the Department: radiotherapy, radiodiagnostics, interventional radiology and nuclear medicine. The Institute also works in the fields of operational radiation protection, radiation metrology and radioecology. In the case of an accident involving radioactive materials, the emergency services are able to call on the assistance of radiation protection specialists. In order to avoid having to create and maintain a specific structure, both burdensome and rarely needed, Switzerland decided to unite all existing emergency services for such events. Thus, the IRA was invited to participate in this network. The challenge is therefore to integrate a university structure, used to academic collaborations and the scientific approach, into an interventional organization accustomed to strict policies, a military-style command structure and "drilled" procedures. The IRA's solution entails mobilizing existing resources and the expertise developed through professional experience. The main asset of this solution is that it involves the participation of committed collaborators who remain in a familiar environment and are able to use proven materials and mastered procedures, even if the atmosphere of an accident situation differs greatly from regular laboratory routines. However, this solution requires both a commitment to education and training in emergency situations, and a commitment in terms of discipline by each collaborator in order to be integrated into a response plan supervised by an operational command center.
Abstract:
Many biological specimens do not arrange themselves in ordered assemblies (tubular or flat 2D crystals) suitable for electron crystallography, nor in perfectly ordered 3D crystals for X-ray diffraction; many others are simply too large to be approached by NMR spectroscopy. Single-particle analysis has therefore become a progressively more important technique for the structural determination of large isolated macromolecules by cryo-electron microscopy. Nevertheless, the low signal-to-noise ratio (SNR) and the high electron-beam sensitivity of biological samples remain two main resolution-limiting factors when the specimens are observed in their native state. Cryo-negative staining is a recently developed technique that allows the study of biological samples with the electron microscope. The samples are observed at low temperature, in the vitrified state, but in the presence of a stain (ammonium molybdate). In the present work, the advantages of this novel technique are investigated: it is shown that cryo-negative staining can generally overcome most of the problems encountered with cryo-electron microscopy of vitrified native suspensions of biological particles. The specimens are faithfully represented with a 10-times higher SNR than in the case of unstained samples. Beam damage is found to be considerably reduced by comparison of multiple-exposure series of both stained and unstained samples. The present report also demonstrates that cryo-negative staining is capable of high-resolution analysis of biological macromolecules. The vitrified stain solution surrounding the sample does not forbid access to the internal features (i.e. the secondary structure) of a protein. This finding is of direct interest for the structural biologist trying to combine electron microscopy and X-ray data. Finally, several application examples demonstrate the advantages of this newly developed electron microscopy technique.
Abstract:
Woven monofilament, multifilament, and spun-yarn filter media have long been the standard media in liquid filtration equipment. While the energy for a solid-liquid separation process is determined by the engineering work, it is the interface between the slurry and the equipment, the filter medium, that greatly affects the performance characteristics of the unit operation. Those skilled in the art are well aware that a poorly designed filter medium may endanger the whole operation, whereas a well-performing filter medium can make the operation smooth and economical. As mineral and pulp producers seek to produce ever finer and more refined fractions of their products, it is becoming increasingly important to be able to dewater slurries with average particle sizes around 1 µm using conventional, high-capacity filtration equipment. Furthermore, the surface properties of the media must not allow sticky and adhesive particles to adhere to the media. The aim of this thesis was to test how the dirt-repellency, electrical resistance and high-pressure filtration performance of selected woven filter media can be improved by modifying the fabric or yarn with coating, chemical treatment and calendering. The results achieved by chemical surface treatments clearly show that the surface properties of woven media can be modified to achieve lower electrical resistance and improved dirt-repellency. The main challenge with the chemical treatments is abrasion resistance: while the experimental results indicate that the treatment is sufficiently permanent to resist standard weathering conditions, it may still prove inadequately strong in actual use. From the pressure filtration studies in this work, it seems obvious that conventional woven multifilament fabrics still perform surprisingly well against the coated media in terms of filtrate clarity and cake build-up.
Especially in cases where the feed slurry concentration was low and the pressures moderate, the conventional media seemed to outperform the coated media. In cases where the feed slurry concentration was high, the tightly woven media performed well against the monofilament reference fabrics but seemed to do worse than some of the coated media. This result is somewhat surprising in that the high initial specific resistance of the coated media would suggest that they blind more easily than the plain woven media. The results indicate, however, that it is actually the woven media that gradually clog during the course of filtration. In conclusion, there appears to be a pressure limit above which a woven medium loses its capacity to keep solid particles from penetrating the structure. This finding suggests that for extreme pressures the only foreseeable solution is a coated fabric supported by a woven fabric strong enough to hold the structure together. That said, the high-pressure filtration process seems to follow somewhat different laws than the more conventional processes; based on the results, it may well be that the role of the cloth is above all to support the cake, and the main performance-determining factor is a long lifetime. Measuring the pore size distribution with a commercially available porometer gives a fairly accurate picture of the pore size distribution of a fabric, but fails to give insight into which of the pore sizes is the most important in determining the flow through the fabric. Historically, air and sometimes water permeability measurements have been the standard in evaluating media filtration performance, including particle retention. Permeability, however, is a function of a multitude of variables and does not directly allow estimation of the effective pore size. In this study, a new method for estimating the effective pore size and open pore area in a densely woven multifilament fabric was developed.
The method combines a simplified equation for the electrical resistance of the fabric with the Hagen-Poiseuille flow equation to estimate the effective pore size of a fabric and the total open area of pores. The results are validated by comparison with the measured values of the largest pore size (bubble point) and the average pore size, and show good correlation with the measured values. However, the measured and estimated values tend to diverge in high-weft-density fabrics. This phenomenon is thought to result from the more tortuous flow path of denser fabrics, and could most probably be cured by using another value for the tortuosity factor.
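The Hagen-Poiseuille side of such an estimate can be sketched as follows. Treating the fabric as n parallel capillaries per unit area of length L, the flow per pore is Q = πr⁴ΔP/(8μL), so the effective radius can be backed out from the measured superficial velocity. The thesis's simplified electrical-resistance equation is not reproduced here, and all input values (pore density, thickness, pressure) are assumed for illustration:

```python
import math

def effective_pore_radius(q, dP, mu, L, n):
    """Effective pore radius [m] from Hagen-Poiseuille flow through
    n parallel capillaries per unit area.

    q  : superficial velocity through the fabric [m^3/(m^2 s)]
    dP : pressure drop across the fabric [Pa]
    mu : fluid dynamic viscosity [Pa s]
    L  : pore channel length, roughly the fabric thickness [m]
    n  : pore count per unit area [1/m^2]
    """
    return (8.0 * mu * L * q / (math.pi * n * dP)) ** 0.25

# Water through a 0.5 mm fabric at 1 kPa with 1e8 pores/m^2 (all values assumed)
r = effective_pore_radius(q=0.01, dP=1e3, mu=1e-3, L=5e-4, n=1e8)
open_area_fraction = 1e8 * math.pi * r ** 2   # total open area per unit area
```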
Abstract:
The ultimate goal of any research in the mechanism/kinematics/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions are obtained by closed-form classical or modern algebraic solution methods or by numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are (ia) limitations on the number of design specifications and (iia) failure to handle design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are (ib) the difficulty of choosing a proper initial linkage and (iib) the difficulty of finding more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but cannot handle inequality constraints.
Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground-pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis, the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. The literature review first shows that the algebraic and numerical solution methods used in computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis, the problem of positive-dimensional solution sets is solved by adopting the main principles of the mathematical field of algebraic geometry for solving parametric algebraic systems of n equations in at least n+1 variables (parametric in the sense that all parameter values for which the system is solvable are considered, including the degenerate cases). By applying the developed solution method to the dyadic equations in direct polynomial form for two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be resolved. The positive-dimensional solution sets associated with the poles may contain physically meaningful solutions in the form of optimal, defect-free mechanisms. Traditionally, the optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process, which results in optimal component design rather than optimal system-level design.
Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis, a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed method is based on combining the two-precision-point formulation with the optimisation of substructures (using mathematical programming techniques or optimisation methods based on probability and statistics) against criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) are eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when integrated with mechanical system simulation techniques.
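The dyadic precision-point equations mentioned above have a standard complex-number form, W(e^{iβⱼ}−1) + Z(e^{iαⱼ}−1) = δⱼ. The following sketch solves the textbook three-precision-point case: choosing the crank rotations βⱼ freely linearizes the problem into a 2×2 complex system. This is the classical formulation only, with illustrative values, not the thesis's treatment of positive-dimensional solution sets:

```python
import numpy as np

# Standard-form dyad equations W(e^{i b_j} - 1) + Z(e^{i a_j} - 1) = d_j, j = 2, 3.
# With the crank rotations b_j chosen freely, the three-precision-point problem
# reduces to a linear 2x2 complex system in the dyad vectors W and Z.
a = np.deg2rad([20.0, 45.0])             # prescribed coupler rotations (positions 2, 3)
d = np.array([1.0 + 0.5j, 2.0 + 0.8j])   # prescribed coupler-point displacements
b = np.deg2rad([30.0, 70.0])             # free choices for the crank rotations

A = np.array([[np.exp(1j * b[0]) - 1, np.exp(1j * a[0]) - 1],
              [np.exp(1j * b[1]) - 1, np.exp(1j * a[1]) - 1]])
W, Z = np.linalg.solve(A, d)             # dyad vectors in the complex plane

# Residual check: the dyad reaches the prescribed positions exactly
res = A @ np.array([W, Z]) - d
```

Varying the free choices b sweeps out the family of exact solutions, which is how a ground-pivot map of candidate dyads can be generated.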
Abstract:
Membrane filtration has become increasingly attractive in the processing of both food and biotechnological products. However, the poor selectivity of the membranes and fouling are the critical factors limiting the development of UF systems for the specific fractionation of protein mixtures. This thesis gives an overview of the fractionation of proteins from model protein solutions and from biological solutions. An attempt was made to improve the selectivity of the available membranes by modifying them and by exploiting the different electrostatic interactions between the proteins and the membrane pore surfaces. The fractionation and UF behavior of proteins in the model solutions and in the corresponding biological solutions were compared. Characterization of the membranes and protein adsorption to the membrane were investigated with combined flux and streaming-potential studies. It was shown that fouling of the membranes can be reduced using "self-rejecting" membranes at pH values where electrostatic repulsion is achieved between the membrane and the proteins in solution. This effect is best seen in the UF of dilute single-protein solutions at low ionic strengths and low pressures. Fractionation of model proteins in single, binary, and ternary solutions was carried out, and the results were compared with those obtained from the fractionation of biological solutions. It was generally observed that the fractionation of proteins from biological solutions is more difficult to carry out owing to the presence of non-studied protein components with different properties. It can be generally concluded that it is easier to enrich the smaller protein in the permeate, but it is also possible to enrich the larger protein in the permeate at pH values close to its isoelectric point. It should be possible to find an optimal flux and membrane modification to effectively improve the fractionation of proteins even with very similar molar masses.
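Fractionation performance of the kind discussed above is commonly quantified by observed sieving coefficients and their ratio (the selectivity); a minimal sketch with assumed concentrations, not measured values from the thesis:

```python
def sieving(c_permeate, c_feed):
    """Observed sieving coefficient S_o = c_p / c_f for one protein."""
    return c_permeate / c_feed

def selectivity(cp_a, cf_a, cp_b, cf_b):
    """Fractionation selectivity: transmission of protein A relative to B."""
    return sieving(cp_a, cf_a) / sieving(cp_b, cf_b)

# Illustrative numbers (g/L): the smaller protein passes readily,
# the larger one is mostly retained by electrostatic exclusion.
psi = selectivity(cp_a=0.8, cf_a=1.0, cp_b=0.05, cf_b=1.0)
```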
Abstract:
Regular use of mouth rinses modifies the oral habitat, since during treatment bacterial populations are subjected to a high selective pressure exerted by the active presence of the disinfectant. Most mouth rinses are based on the antibacterial effect of chlorhexidine, triclosan, essential oils and other antibacterials, although other pharmaceutical characteristics can also affect their effectiveness. In this paper we compare "in vitro" the antibacterial effect of different oral rinsing solutions. Minimal Inhibitory Concentrations (MIC) and Minimal Bactericidal Concentrations (MBC) were determined, as well as the kinetics of bacterial death in the presence of lethal concentrations of the mouth rinses. MIC values, expressed as the Maximal Inhibitory Dilution (MID) of the mouth rinse, ranged from 1 to 1/2048 depending on the microorganism and the product, whereas the MBC, expressed as the Maximal Biocidal Dilution (MBD), ranged from 1 to 1/1024, being in general one dilution step below the MIC. The Maximal Biocidal Dilution is a good tool for measuring the actual efficiency of mouth-washing solutions; however, the kinetics of death seems to be a better one. In our work, killing curves demonstrate that bacterial populations are mostly eliminated during the first minute after contact between the bacterial suspension and the mouth-washing solution. In all tested bacterial species, the mouth-washing solutions were able to reduce the treated suspension, except 1 and 5.
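The MID values above come from a two-fold serial dilution series (1, 1/2, 1/4, ..., 1/2048). A minimal helper for reading off the MID from such a series can be sketched as follows; this is an illustrative reconstruction of the standard procedure, not code from the paper:

```python
def maximal_inhibitory_dilution(growth):
    """Return the MID denominator from a two-fold serial dilution series.

    growth[i] is True if bacteria grew at dilution 1/2**i (i = 0 is the
    undiluted rinse). The MID is the highest (most dilute) level that still
    inhibits growth; returns None if even the undiluted product allows growth.
    """
    mid = None
    for i, grew in enumerate(growth):
        if grew:
            break            # growth appeared: weaker dilutions are not inhibitory
        mid = 2 ** i         # report as the denominator, i.e. MID = 1/mid
    return mid

# Inhibition down to 1/4, growth from 1/8 onward -> MID = 1/4
mid = maximal_inhibitory_dilution([False, False, False, True, True])
```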
Abstract:
This work studied the selective recovery of precious metals from chloride solutions using synthetic polymer resins. The laboratory experiments focused on the recovery of gold with a hydrophilic polymethacrylate-based adsorbent. The starting material was a platinum concentrate containing, in addition to gold, platinum, palladium, silver, copper, iron, bismuth, selenium, and tellurium. The adsorption of the various metals and metalloids onto the resin was studied with equilibrium, kinetic, and column experiments. A computer program intended for the dynamic modeling of multicomponent separations was also used to simulate the adsorption; the required parameters were estimated from the experimental data. Equilibrium experiments with single-metal solutions showed that the resin adsorbs gold efficiently at all hydrochloric acid concentrations studied (1-6 M). Gold forms strongly adsorbing tetrachloroaurate(III) ions, [AuCl4]-, which remain highly stable down to low chloride concentrations. The hydrochloric acid concentration mattered only for the adsorption of iron, which increased markedly with increasing acid concentration owing to the tendency of iron to form strongly adsorbing [FeCl4]- ions in concentrated hydrochloric acid. The adsorption of the other elements studied remained low at all acid concentrations. Equilibrium experiments with the concentrate solution showed that the adsorption capacity for gold depends strongly on the other components present. The competitive adsorption was described with a Langmuir-Freundlich isotherm. Column experiments showed that, in addition to gold, the resin also adsorbs small amounts of iron and tellurium, which could nevertheless be eluted completely with a 5 M hydrochloric acid wash followed by a 1 M hydrochloric acid wash. A mixture of acetone and 1 M hydrochloric acid proved to be an effective solution for desorbing the gold. The different stages of the column separation could be described satisfactorily with the simulation model.
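The Langmuir-Freundlich description of competitive adsorption mentioned above can be sketched as follows. The single-component (Sips) form and its common multicomponent extension are shown; the functional form is standard, but all parameter values here are illustrative assumptions, not the fitted thesis values:

```python
def langmuir_freundlich(c, qmax, K, n):
    """Single-component Langmuir-Freundlich (Sips) loading q(c)."""
    return qmax * (K * c) ** n / (1.0 + (K * c) ** n)

def competitive_lf(c, qmax, K, n):
    """Common multicomponent extension for competitive adsorption:
    q_i = qmax_i (K_i c_i)^n_i / (1 + sum_j (K_j c_j)^n_j)."""
    terms = [(K[j] * c[j]) ** n[j] for j in range(len(c))]
    denom = 1.0 + sum(terms)
    return [qmax[j] * terms[j] / denom for j in range(len(c))]

# Gold adsorbs strongly; a weakly adsorbing competitor (e.g. iron at low
# acid concentration) depresses the gold loading relative to the pure case.
q_au_alone = langmuir_freundlich(1.0, qmax=2.0, K=5.0, n=1.0)
q_au, q_fe = competitive_lf([1.0, 1.0], qmax=[2.0, 2.0], K=[5.0, 0.5], n=[1.0, 1.0])
```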
Abstract:
This Master's thesis presents the first part of a project to develop a continuously operating on-line element analyzer. The aim of this part is to find a new commercial processor board to replace the analyzer's old central unit, and to design and implement on the new central unit the software required for the analyzer's operation. The belt analyzer is an embedded real-time system. The thesis reviews common practices and solutions in the design and implementation of embedded systems, and discusses the advantages and disadvantages of the different implementation alternatives. The implementation uses commercial off-the-shelf units conforming to the PC/104 standard. This extension of the ISA standard is well suited for use in embedded systems. The new central unit can be connected to the remaining units of the analyzer through a separate adapter board. The implementation chosen as the outcome of the work allows unrestricted further development of the analyzer system, which was not possible with the old implementation. New features can now be developed for the analyzer, and its current operation is also under better control.
Abstract:
Empower Oy is a company providing services in the energy sector. Its energy management system is used to manage and maintain energy data and to present the data to end users; the screens and reports of the service are delivered through a web-based user interface. The company launched a major project to replace the old energy management system: the old system had been taken into use in 1995, and the EMS project was started in 2001. This Master's thesis was carried out as part of the EMS project. Its goals were to evaluate the functionality and suitability of the database solution used by the core system and to examine different database models theoretically. In addition, the work included implementing separate query and update components and interfaces, through which data can be retrieved from and modified in the object-relational database underlying the core system. The core system's database, called the DOR (Domain Object Repository), is an object-oriented data store from which data is retrieved by specifying the type of the object to be fetched and the types linked to it; the properties to be included in the result are specified separately for each type. When querying and updating object-based DOR data, the data models used by the system must be followed. The query and update components were implemented with Microsoft's .NET technology. The theoretical examination of database models helped in understanding the database solution underlying the system. The work showed that the object-relational database used by the core system is well suited to its purpose. The implementation of the query and update components succeeded, and they serve as an easy-to-use interface to the energy management system's database.
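The DOR query pattern described above (name the root type, the linked types to follow, and the properties wanted from each type) can be illustrated with a toy model. This is not the actual DOR or .NET API, which is proprietary; the store, link table and `dor_query` helper below are hypothetical constructs built only to show the shape of such a query.

```python
# Toy model (not the real DOR API) of a type-plus-linked-types query:
# a query names a root object type, the linked types to traverse, and,
# per type, the properties to include in the result rows.

store = {  # hypothetical object store: type name -> list of objects
    "Meter": [{"id": 1, "name": "M1", "site": 10}],
    "Site":  [{"id": 10, "name": "Plant A", "city": "Lahti"}],
}
links = {("Meter", "Site"): "site"}  # link attribute between types

def dor_query(root_type, linked_types, props):
    """Return root objects joined with their linked objects, keeping
    only the properties requested for each type."""
    rows = []
    for obj in store[root_type]:
        row = {f"{root_type}.{p}": obj[p] for p in props[root_type]}
        for lt in linked_types:
            key = links[(root_type, lt)]  # attribute holding the link
            target = next(o for o in store[lt] if o["id"] == obj[key])
            row.update({f"{lt}.{p}": target[p] for p in props[lt]})
        rows.append(row)
    return rows

rows = dor_query("Meter", ["Site"],
                 {"Meter": ["name"], "Site": ["name", "city"]})
```

The per-type property lists are the point: the caller never writes a join, only declares which types are connected and which attributes of each it wants back.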
Abstract:
A variable temperature field places exacting demands on a structure under mechanical load. Above all, the lifetime of the rotating drum structure depends on the temperature differences between the parts inside the drum. The temperature difference was known from measurements made previously. The list of requirements was created on the basis of customers' needs, and the scope of this work was limited to the inner structure of the drum. Idea generation for the inner structure was started with an open mind. The main principle in the creation process was to generate new ideas for the function of the product with the help of sub-functions, which were kept as independent of one another as possible. The best sub-functions were combined, and new working principles were created from them. Every working principle was calculated separately and critically assessed at the end of the calculation process. The main objective was to create a new kind of structure that does not rely too heavily on the old, inoperative structure. The effect of the inner structure's own weight on the stress values was quite small, but it was nevertheless taken into account when calculating the maximum stress of the structure. Because of the very complex geometries, all calculations were made with the Pro/ENGINEER Mechanica software. A fatigue analysis was also performed for the best structural solution.
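The superposition of the small own-weight stress on the thermally driven stress, followed by a fatigue check, can be sketched as follows. The thesis performed these calculations in Pro/ENGINEER Mechanica; the sketch below instead uses a textbook modified-Goodman criterion with hypothetical stress and material values, purely to illustrate why even a small mean-stress contribution is carried into the fatigue assessment.

```python
# Illustrative sketch (numbers are hypothetical, not thesis results):
# the cyclic thermal stress is treated as the alternating component,
# the quasi-static own-weight stress as the mean component, and the
# pair is checked with the modified-Goodman criterion.

def goodman_safety_factor(sigma_a, sigma_m, s_e, s_u):
    """Modified Goodman: 1/n = sigma_a/S_e + sigma_m/S_u, where S_e is
    the endurance limit and S_u the ultimate strength (all in MPa)."""
    return 1.0 / (sigma_a / s_e + sigma_m / s_u)

sigma_thermal = 180.0  # MPa, alternating stress from temperature cycling
sigma_weight = 8.0     # MPa, small quasi-static stress from own weight
n = goodman_safety_factor(sigma_a=sigma_thermal, sigma_m=sigma_weight,
                          s_e=250.0, s_u=520.0)  # assumed steel data
# n > 1 indicates the combination lies inside the Goodman line.
```

Even though the own-weight term barely moves the maximum stress, it shifts the mean stress and therefore the safety factor, which is why the thesis included it despite its small magnitude.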