969 results for intervention modelling experiments


Relevance: 30.00%

Abstract:

With the world population projected to grow, and with more countries becoming developed, the increasing demand for basic chemical building blocks such as ethylene and propylene will have to be addressed in the coming decades. The methanol-to-olefins (MTO) reaction is an attractive route to these alkenes from coal, natural gas or alternative sources such as biomass, via syngas as the feedstock for methanol production. The technology has been applied industrially since 1985, and most processes use zeolites as catalysts, particularly ZSM-5. Although its selectivity is not particularly biased towards light olefins, ZSM-5 resists rapid deactivation by coke deposition, which makes it attractive in industrial settings; nevertheless, the reaction is highly exothermic, which makes it difficult to control and to anticipate problems such as temperature runaways or hot spots inside the catalytic bed. The main focus of this project is to study those temperature effects on two fronts: an experimental one, in which catalytic performance and temperature profiles are studied, and a modelling one, consisting of a five-step strategy to predict weight fractions and catalyst activity. A catalytic-testing mindset underlies all the assays developed. Selectivity towards light olefins was found to increase with temperature, although this also leads to much faster catalyst deactivation. To counter this effect, experiments were carried out with a diluted bed, which increased the catalyst lifetime by 32% to 47%. Additionally, experiments with three thermocouples placed inside the catalytic bed were performed to analyse the deactivation wave and the temperature peaks along the bed. Regeneration was performed between consecutive runs, and it was concluded that it can be a powerful means of increasing catalyst lifetime while maintaining constant selectivity towards light olefins, through loss of acid strength in a steam-stabilised zeolitic structure. On the modelling front, a first basic model able to predict weight fractions was constructed; it still needs to be tuned before it can serve as a tool for predicting deactivation and temperature profiles.
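As a rough illustration of the kind of lumped description referred to above, the sketch below fits a hypothetical first-order deactivation law, a(t) = exp(-kd·t), to activity data and compares catalyst lifetimes for an undiluted and a diluted bed. The rate constants, lifetime threshold and data points are invented placeholders, not values from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def activity(t, kd):
    """First-order catalyst deactivation: a(t) = exp(-kd * t)."""
    return np.exp(-kd * t)

def lifetime(kd, a_min=0.5):
    """Time-on-stream at which activity drops below a_min."""
    return -np.log(a_min) / kd

# Hypothetical conversion-derived activity data (time in hours).
t_obs = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
a_undiluted = np.array([1.00, 0.78, 0.62, 0.38, 0.24, 0.15])
a_diluted = np.array([1.00, 0.84, 0.71, 0.50, 0.36, 0.26])

kd_u, _ = curve_fit(activity, t_obs, a_undiluted, p0=[0.2])
kd_d, _ = curve_fit(activity, t_obs, a_diluted, p0=[0.2])

gain = lifetime(kd_d[0]) / lifetime(kd_u[0]) - 1.0
print(f"undiluted kd = {kd_u[0]:.3f} 1/h, diluted kd = {kd_d[0]:.3f} 1/h")
print(f"lifetime gain from bed dilution: {gain:.0%}")
```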

Relevance: 30.00%

Abstract:

In the present paper the classical concept of the corpuscular gene is dissected in order to show the inconsistency of some genetical and cytological explanations based on it. The author begins by asking how the genes perform their specific functions. Geneticists say that colour in plants is sometimes due to the presence, in the cytoplasm of epidermal cells, of an organic complex belonging to the anthocyanins, and that this complex is produced by genes. The author then asks how a gene can produce an anthocyanin. According to Haldane's view, the first product of a gene may be a free copy of the gene itself, which is released into the nucleus and then into the cytoplasm, where it reacts with other gene products. If, then, the different substances which react in the cell to prepare the characters of the organism are copies of the genes, the chromosome must be a very extravagant thing: a chain of the most diverse and heterogeneous substances (the genes) such as agglutinins, precipitins, antibodies, hormones, enzymes, coenzymes, proteins, hydrocarbons, acids, bases, salts, water-soluble and insoluble substances! It would be very strange if such an assortment of chemical genes did not react with one another but remained, on the contrary, indefinitely the same, in spite of the possibility of approaching and touching given the state of extreme distension of the chromosomes moving within the fluid medium of the resting nucleus. If a given medium becomes acid by virtue of the presence of a free copy of an acid gene, then gene and character must be essentially the same thing; the difference between genotype and phenotype disappears, epigenesis gives up its place to preformation, and genetics goes back to its most remote beginnings. The author discusses the complete lack of arguments in support of the view that genes are corpuscular entities. To show the embarrassing situation of the geneticist who defends the idea of corpuscular genes, Dobzhansky's (1944) assertions that "Discrete entities like genes may be integrated into systems, the chromosomes, functioning as such. The existence of organs and tissues does not preclude their cellular organization" are discussed. In the opinion of the present writer, affirmations such as these abrogate one of the most important characteristics of the genes, namely their functional independence. Indeed, if the genes are independent, each one being capable of undergoing mutational alterations or of separating from its neighbours without changing them, as Dobzhansky says, then the chromosome, genetically speaking, does not constitute a system. If, on the other hand, the chromosome really is a system, it will suffer, as such, from the alteration or suppression of the elements composing it, and in that case the genes cannot be independent. We have therefore to decide: either the chromosome is a system and the genes are not independent, or the genes are independent and the chromosome is not a system. What certainly cannot exist is a system (the chromosome) formed by independent organs (the genes), as Dobzhansky admits. The parallel drawn by Dobzhansky between chromosomes and tissues seems to the author inadequate, because we cannot compare heterogeneous things: a chromosome considered as a system made up of different organs (the genes) with a tissue formed, as we know, of the same organ (the cells) represented many times. The writer considers the chromosome a true system and therefore gives no credit to the genes as independent elements.
Geneticists explain position effects in the following way: the products elaborated by the genes react with each other or with substances previously formed in the cell by the action of other gene products. Supposing that, of two neighbouring genes A and B, the former reacts with a certain substance of the cellular medium (X), giving a product C which will suffer the action of the latter (B), it follows that if gene B changes its position to a place far apart from A, the product it elaborates will take more time to come into contact with the substance C resulting from the action of A upon X, whose concentration is greater in the proximity of A. In this condition another gene product may anticipate the product of B in reacting with C, the normal course of reactions being altered from that point on. Let us see how many incongruities and contradictions exist in such an explanation. Firstly, it has been established by geneticists that the reactions due to gene activities are specific and develop in a definite order, so that each reaction prepares the medium for the following one. Therefore, if the medium C resulting from the action of A upon X is the specific medium for the activity of B, it follows that no other gene, in consequence of its specificity, can work in this medium. It is only after the interference of B, changing the medium, that a new gene may enter into action. Since the genotype has not been modified by the change of place of the gene, it is evident that the only result to be expected is a small delay, without serious consequence, in the beginning of the reaction of the product of B with its specific substratum C. This delay would be largely compensated by the greater amount of the substance C which the product of B would find already prepared. Moreover, the explanation does not take into account the fact that the genes work in the resting nucleus and that at this stage the chromosomes, very long and thin, form a network plunged into the nuclear sap in which they are surely not still, so that the distance separating any two genes of the same chromosome or of different ones changes from cell to cell and, in the same cell, from time to time. The idea that the genes may react directly with each other, and not by means of their products, would lead to the concept of Goldschmidt and Piza, according to which the chromosomes function as wholes. Really, if a gene B, accustomed to work between A and C (as, for instance, in the chromosome ABCDEF), comes to function differently only because an inversion has transferred it to the neighbourhood of F (as in AEDCBF), the gene F must equally be changed, since we cannot admit that, of two reacting genes, only one is modified. The genes E and A will be altered in the same way owing to the change of place of the former. Assuming that any modification in a gene causes a compensatory modification in its neighbour in order to re-establish the equilibrium of the reactions, we conclude that all the genes are modified in consequence of an inversion. The same would happen with mutations. The transformation of B into B' would change A and C into A' and C' respectively. The latter, reacting with D, would transform it into D', and soon the whole chromosome would be modified. A localized change would therefore transform a primitive whole T into a new one T', as Piza maintains. Point-to-point attraction between the chromosomes is denied by the present writer. Arguments and facts favouring the view that chromosomes attract one another as wholes are presented.
A fact which, in the opinion of the author, seriously compromises the idea of specific gene-to-gene attraction is found in the behaviour of the mutated gene. As we know, in homozygosis the same gene is represented twice, in corresponding loci of the chromosomes. A mutation in one of them, sometimes so strong that it is capable of changing one sex into the opposite one or even of killing the individual, has, notwithstanding that, no effect on the previously existing mutual attraction of the corresponding loci. It would seem reasonable to conclude that, if the genes A and A attract one another specifically, the attraction should disappear in consequence of the mutation. But, since in heterozygosis the genes continue to attract one another in the same way as before, it follows that the attraction is not specific and therefore is not an attribute of the gene. Since homologous genes attract one another whatever their constitution, how do we understand the lack of attraction between non-homologous genes, or between the genes of the same chromosome? Chromosome pairing is considered to be governed by the same principles which govern the copulation of gametes or the conjugation of Ciliata. Modern researches on the mating types of Ciliata offer solid ground for such an interpretation. Chromosomes conjugate like Ciliata of the same variety but of different mating types. In a cell there are n different sorts of chromosomes, comparable to the varieties of Ciliata of the same species which do not mate. Of each sort there are in the cell only two chromosomes, belonging to different mating types (homologous chromosomes). The chromosomes which will conjugate (belonging to the same "variety" but to different "mating types") produce a gamone-like substance that promotes their union, being without action upon the other chromosomes. In this simple way a single substance brings about the same result that, in the case of point-to-point attraction, would be reached only through the cooperation of as many different substances as there are genes in the chromosome. The chromosomes, like the Ciliata, divide many times before they conjugate (gonial chromosomes). Like the Ciliata, when they reach maturity, they copulate (cyte chromosomes). Again, like the Ciliata, which aggregate into clumps before mating, the chromosomes join together on one side of the nucleus before pairing (synizesis). Like the Ciliata, which come out of the clumps paired two by two, the chromosomes leave the synizesis knot also in pairs (pachytene). The chromosomes, like the Ciliata, begin pairing at any part of their body. After some time the latter adjust their mouths, the former their kinetochores. During conjugation the Ciliata, as well as the chromosomes, exchange parts. Finally, the ones as the others separate to initiate a new cycle of divisions. It seems to the author that the analogies are too many to be overlooked. When two chemical compounds react with one another, both are transformed and new products appear at the end of the reaction. In the reactions in which protoplasm takes part, a sharp difference is to be noted: the protoplasm, contrary to what happens with chemical substances, does not enter directly into reaction, but acts by means of the products of its physiological activities. More than that, while the compounds with which it reacts are changed, it preserves its constitution indefinitely. Here is one of the most important differences in the behaviour of living and lifeless matter. Genes, accordingly, do not alter their constitution when they enter into reaction.
Geneticists contradict themselves when they affirm, on the one hand, that genes are entities which maintain their chemical composition indefinitely and, on the other hand, that mutation is a change in the chemical composition of the genes. They are thus conferring on the genes properties of both living and lifeless substances. The protoplasm, as we know, can synthesize different kinds of compounds, such as enzymes, hormones and the like, without changing its composition. A mutation, in the opinion of the writer, would then be a new property acquired by the protoplasm without alteration of its chemical composition. With regard to the activities of enzymes in the cells, the author writes: owing to the specificity of the enzymes, what determines the order in which they come into play is the chemical composition of the substances appearing in the protoplasm. Suppose that a nucleoprotein comes into contact with a protoplasm in which the following enzymes are present: a protease which breaks the nucleoprotein into protein and nucleic acid; a polynucleotidase which fragments the nucleic acid into nucleotides; a nucleotidase which decomposes the nucleotides into nucleosides and phosphoric acid; and, finally, a nucleosidase which attacks the nucleosides with production of sugar and purine or pyrimidine bases. Now, it is evident that none of the enzymes which act on the nucleic acid and its products can enter into activity before the decomposition of the nucleoprotein by the protease present in the medium takes place. Likewise, the nucleosidase cannot work without the nucleotidase previously decomposing the nucleotides, nor can the latter act before the polynucleotidase has come into activity to liberate the nucleotides. The number of enzymes which may work at a given time depends upon the substances present in the protoplasm. The start and the end of enzyme activities, the direction of the reactions towards the decomposition or the synthesis of chemical compounds, and the duration of the reactions all depend, respectively, on the nature of the substances, on the end products being left in or removed from the medium, and on the amount of material present. The velocity of the reaction is conditioned by different factors, such as temperature, the pH of the medium, and others. Geneticists fall again into contradiction when they say that genes act like enzymes, controlling the reactions in the cells. They forget that to control a reaction means to mark its beginning, to determine its direction, to regulate its velocity, and to stop it. Enzymes, as we have seen, enjoy none of these properties improperly attributed to them. If, therefore, genes work like enzymes, they do not control reactions but are, on the contrary, controlled by the substances and conditions present in the protoplasm. A gene, like an enzyme, cannot come into play in the absence of the substance to which it is specific. The genes are considered as having two roles in the organism: one, preparing the characters attributed to them, and the other, preparing the medium for the activities of other genes. At first glance it seems that only the former is specific. But, if we consider that each gene acts only when the appropriate medium has been prepared for it, it follows that the medium is as specific to the gene as the gene is to the medium.
The author concludes, from the analysis of the manner in which genes perform their function, that all the genes work at the same time everywhere in the organism, and that every character results from the activities of all the genes. A gene therefore does not wait for a given medium, because it is always in an appropriate medium. If the substratum in which it operates changes, its activity changes correspondingly. Genes are permanently at work. It is true that a gene awaits an adequate medium to develop a certain activity, but this does not mean that it rests while the required cellular environment is being prepared. It never rests. While waiting for certain conditions, it operates in the previous ones. It passes from medium to medium, from activity to activity, without stopping anywhere. Geneticists are acquainted with situations in which the expected results do not appear. To resolve these situations they usually appeal to the interference of other genes (modifiers, suppressors, activators, intensifiers, dilutors, and so on), doing nothing in this manner but displacing the problem. To make genetical systems function, geneticists confer on their hypothetical entities truly miraculous faculties. To affirm, as they do with such simplicity, that a gene produces an anthocyanin, an enzyme, a hormone, or the like, is to attribute to the gene activities that only very complex structures, like cells or glands, would be capable of performing. Geneticists try to avoid this difficulty by saying that the gene works in collaboration with all the other genes as well as with the cytoplasm. Of course, such an affirmation merely means that what works at each moment is not the gene but the whole cell. Consequently, if it is the whole cell which is at work in every situation, it follows that the complete set of genes is permanently in activity, their activity changing in accordance with the part of the organism in which they are working. Transplantation experiments carried out between creeper and normal fowl embryos are discussed in order to show that there is no local gene action, at least in some cases in which geneticists are accustomed to recognize such an action. The author thinks that the concept of pleiotropism should be applied only to the effects and not to the causes. A pleiotropic gene would be one that, in a single action upon a more primitive structure, is capable of producing, by means of secondary influences, a multiple effect. This definition, however, does not preclude localized gene action; it only displaces it. But if genetics goes back to the egg and places in it the starting point for all the events which, in the course of development, end by producing the visible characters of the organism, this will signify great progress. From the analysis of the results of the study of phenocopies the author concludes that, since agents other than genes are also capable of determining the same characters as the genes, these entities lose much of their credit as the unique makers of the organism. Insisting on some points already discussed, the author once more lays stress upon the manner in which the genes exercise their activities, emphasizing that the complete set of genes works jointly, in collaboration with the other elements of the cell, and that this work changes with development in the different parts of the organism. To defend this point of view the author starts from the premise that a nerve cell is different from a muscle cell.
Taking this for granted, the author goes on to say that those cells have been differentiated as systems, that is, all their parts have been changed during development. The nucleus of the nerve cell is therefore different from the nucleus of the muscle cell, not only in shape but also in function. Though fundamentally formed of the same parts, these cells differ integrally from one another through specialization. Without losing any of its essential properties, the protoplasm differentiates itself into distinct kinds of cells, as living beings differentiate into species. The modified cells within the organism are comparable to the modified organisms within the species. A nerve and a muscle cell of the same organism are therefore like two species originated from a common ancestor: integrally distinct. Like the cytoplasm, the nucleus of a nerve cell differs from that of a muscle cell in all its peculiarities, and accordingly nerve cell chromosomes are different from muscle cell chromosomes. We cannot understand the differentiation of only a part of a cell; the differentiation must be of the whole cell as a system. When a cell in the course of development becomes a nerve cell or a muscle cell, it undoubtedly acquires nerve cell or muscle cell cytoplasm and nucleus respectively. It is not admissible that the cytoplasm alone has been changed, the nucleus remaining the same in both kinds of cells. It is therefore legitimate to conclude that a nerve cell has nerve cell chromosomes and a muscle cell, muscle cell chromosomes. Consequently, the genes, representing as they do specific functions of the chromosomes, are different in different sorts of cells. After discussing the development of the amphibian egg in the light of modern researches, the author says: we have seen up to now that the development of the egg is almost finished and the larva about to become a free-swimming tadpole and, notwithstanding this, the genes have not yet entered with their specific work. If the head and tail positions are determined without the concourse of the genes; if the dorso-ventrality and bilaterality of the embryo are not due to specific gene actions; if the unequal division of the blastula cells, the different speeds with which the cells multiply in each hemisphere, and the differential repartition of the substances present in the cytoplasm do not depend on genes; if gastrulation, neurulation, the division of the embryo body into morphogenetic fields, the definitive determination of primordia, and the histological differentiation of the organism go on without the specific cooperation of the genes, then we must ask what the genes are for. Based on the mechanism of plant gall formation by gall insects, and on the manner in which organizers and their products exercise their activities in the developing organism, the author interprets gene action in the following way: the genes alter structures which have been formed without their specific intervention. Working on a substratum whose existence does not depend on them, the genes would be capable of modelling in it the particularities which make it characteristic of a given individual. Thus, the tegument of an animal, as a fundamental structure of the organism, is not due to gene action, but the presence or absence of hair, scales, tubercles or spines, the colour, or any other particularity of the skin may be decided by the genes. The organizer decides whether a primordium will be an eye or a gill.
The details of these organs, however, are left to the genetic potentiality of the tissue which received the induction. For instance, the Urodele mouth organizer induces Anura presumptive epidermis to develop into a mouth, but this mouth will be fashioned in the Anura manner. Finally, the author presents his own concept of the genes. The genes are not independent material particles charged with specific activities, but specific functions of the whole chromosome. To say that a given chromosome has n genes means that this chromosome, in different circumstances, may exercise n distinct activities. Thus, under the influence of a leg evocator the chromosome, as a whole, develops its "leg" activity, while within the field of influence of an eye evocator it will develop its "eye" activity. Translocations, deficiencies and inversions will transform, more or less deeply, one whole into another. This new whole may continue to produce the same activities it had formerly, in addition to those which may have been induced by the grafted fragment; it may lose some functions, or it may acquire entirely new properties, that is, properties that none of them had previously. The theoretical possibility of the chromosomes acquiring new genetical properties in consequence of an exchange of parts, postulated by the present writer, has been experimentally confirmed by Dobzhansky, who verified that, when any two Drosophila pseudoobscura II-chromosomes exchange parts, the crossover chromosomes show new "synthetic" genetical effects.

Relevance: 30.00%

Abstract:

Bacteria are the dominant form of life on the planet: they can survive in very adverse environments and, in some cases, can generate substances that are toxic to us when ingested. Their presence in food makes predictive microbiology an indispensable field within food microbiology for guaranteeing food safety. A bacterial culture can pass through four growth phases: lag, exponential, stationary and death. This work advances the understanding of the phenomena underlying the lag phase, which is of great interest in predictive microbiology. The study, carried out over four years, was approached using Individual-based Modelling (IbM) with the INDISIM simulator (INDividual DIScrete SIMulation), which was improved for this purpose. INDISIM made it possible to study two causes of the lag phase separately and to address the behaviour of the culture from a mesoscopic perspective. It was found that the lag phase has to be studied as a dynamic process rather than defined by a single parameter. Studying the evolution of variables such as the distribution of individual properties across the population (for example, the mass distribution) or the growth rate made it possible to distinguish two stages within the lag phase, an initial stage and a transition stage, and to deepen the understanding of what happens at the cellular level. Several results predicted by the simulations were observed experimentally by flow cytometry. The agreement between simulations and experiments is neither trivial nor coincidental: the system studied is a complex system, and the agreement over time of several interrelated parameters therefore supports the methodology used in the simulations. It can thus be stated that the soundness of the INDISIM methodology has been verified experimentally.
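A minimal, hypothetical individual-based sketch in the spirit of INDISIM (not the actual simulator): each cell has a mass, grows at a mass-dependent rate, and divides at a threshold, so a broad initial mass distribution produces an apparent population-level lag before exponential growth. All parameter values are invented for illustration.

```python
import random

def simulate(n0=200, steps=120, dt=0.1, mu=0.6, m_div=2.0, seed=1):
    """Toy individual-based model: cells grow in mass and divide
    into two equal daughters when they reach m_div."""
    rng = random.Random(seed)
    # Start from a broad, sub-threshold mass distribution (one lag cause).
    cells = [rng.uniform(0.2, 1.0) for _ in range(n0)]
    counts = []
    for _ in range(steps):
        new_cells = []
        for m in cells:
            m *= 1.0 + mu * dt          # individual mass growth
            if m >= m_div:              # division at threshold mass
                new_cells.extend([m / 2.0, m / 2.0])
            else:
                new_cells.append(m)
        cells = new_cells
        counts.append(len(cells))
    return counts

counts = simulate()
# The population stays near n0 for a while (apparent lag), then grows
# exponentially once enough individuals reach the division threshold.
print(counts[::10])
```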

Relevance: 30.00%

Abstract:

This study investigates the issue of self-selection of stakeholders into participation and collaboration in policy-relevant experiments. We document and test the implications of self-selection in the context of a randomised policy experiment we conducted in primary schools in the UK. The main questions we ask are (1) is there evidence of selection on key observable characteristics likely to matter for the outcome of interest, and (2) does selection matter for the estimates of treatment effects. The experimental work consists of testing the effects of an intervention aimed at encouraging children to make healthier choices at lunch. We recruited schools through local authorities and randomised schools across two incentive treatments and a control group. We document the selection taking place both at the level of local authorities and at the school level. Overall, we find mild evidence of selection on key observables such as obesity levels and socio-economic characteristics. We find evidence of selection along indicators of involvement in healthy lifestyle programmes at the school level, but the magnitude is small. Moreover, we do not find significant differences in the treatment effects of the experiment across variables which, albeit to a mild degree, are correlated with selection into the experiment. To our knowledge, this is the first study providing direct evidence on the magnitude of self-selection in field experiments.
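A stylised sketch of the two checks described above, using invented data and variable names rather than the study's dataset: a comparison of an observable (a school-level obesity rate here) between participating and non-participating schools, and a test of whether the treatment effect varies with that observable via an interaction term.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical school-level data: participation, an observable, treatment, outcome.
n = 300
df = pd.DataFrame({
    "participates": rng.integers(0, 2, n),
    "obesity_rate": rng.normal(0.18, 0.04, n),
})
df["treated"] = np.where(df["participates"] == 1, rng.integers(0, 2, n), 0)
df["healthy_choices"] = (0.3 + 0.05 * df["treated"]
                         - 0.2 * df["obesity_rate"] + rng.normal(0, 0.05, n))

# (1) Selection on observables: compare participants with non-participants.
t, p = stats.ttest_ind(df.loc[df.participates == 1, "obesity_rate"],
                       df.loc[df.participates == 0, "obesity_rate"])
print(f"obesity rate, participants vs non-participants: t={t:.2f}, p={p:.3f}")

# (2) Does the treatment effect vary with the selection-related observable?
model = smf.ols("healthy_choices ~ treated * obesity_rate",
                data=df[df.participates == 1]).fit()
print(model.params[["treated", "treated:obesity_rate"]])
```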

Relevance: 30.00%

Abstract:

Occupational exposure assessment is an important stage in the management of chemical exposures. Few direct measurements are carried out in workplaces, and exposures are often estimated on the basis of expert judgement. There is therefore a major need for simple, transparent tools to help occupational health specialists define exposure levels. The aim of the present research is to develop and improve modelling tools in order to predict exposure levels. In a first step, a survey was made among professionals to define their expectations of modelling tools (what types of results, models and potential observable parameters). It was found that models are rarely used in Switzerland and that exposures are mainly estimated from the past experience of the expert. Moreover, chemical emissions and their dispersion near the source were also considered to be key parameters. Experimental and modelling studies were also performed in some specific cases in order to test the flexibility and drawbacks of existing tools. In particular, models were applied to assess occupational exposure to CO in different situations and compared with the exposure levels found in the literature for similar situations. Further, exposure to waterproofing sprays was studied as part of an epidemiological study on a Swiss cohort. In this case, some laboratory investigations were undertaken to characterize the waterproofing overspray emission rate. A classical two-zone model was used to assess the aerosol dispersion in the near and far field during spraying. Experiments were also carried out to better understand the processes of emission and dispersion of tracer compounds, focusing on the characterization of near-field exposure. An experimental set-up was developed to perform simultaneous measurements with direct-reading instruments at several points. It was found that, from a statistical point of view, the compartmental theory makes sense, but that the attribution to a given compartment could not be done by simple geometric considerations. In a further step the experimental data were complemented by observations made in about 100 different workplaces, including exposure measurements and observation of predefined determinants. The various data obtained were used to improve an existing two-compartment exposure model. A tool was developed to include specific determinants in the choice of the compartment, thus largely improving the reliability of the predictions. All these investigations helped to improve our understanding of modelling tools and to identify their limitations. The integration of more accessible determinants, in accordance with experts' needs, may indeed enhance model application in field practice. Moreover, by increasing the quality of modelling tools, this research will not only encourage their systematic use, but may also improve the conditions in which expert judgements take place, and therefore the protection of workers' health.
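For reference, a minimal sketch of the classical two-zone (near-field/far-field) model mentioned above, integrating the standard mass-balance ODEs; the emission rate G, inter-zone airflow beta, room ventilation Q and zone volumes are assumed values for illustration, not numbers from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative parameters (not from the study).
G = 5.0        # emission rate, mg/min
beta = 2.0     # near-field/far-field air exchange, m3/min
Q = 10.0       # room ventilation rate, m3/min
V_nf = 1.0     # near-field volume, m3
V_ff = 50.0    # far-field (room) volume, m3

def two_zone(t, c):
    """Standard two-zone mass balance for near-field (NF) and far-field (FF)."""
    c_nf, c_ff = c
    dc_nf = (G + beta * c_ff - beta * c_nf) / V_nf
    dc_ff = (beta * c_nf - beta * c_ff - Q * c_ff) / V_ff
    return [dc_nf, dc_ff]

sol = solve_ivp(two_zone, (0.0, 30.0), [0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 30.0, 7)
c_nf, c_ff = sol.sol(t)
for ti, nf, ff in zip(t, c_nf, c_ff):
    print(f"t={ti:4.1f} min  NF={nf:6.2f} mg/m3  FF={ff:5.2f} mg/m3")
```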

Relevance: 30.00%

Abstract:

The carbohydrate-binding specificity of lectins from the seeds of Canavalia maritima and Dioclea grandiflora was studied by hapten inhibition of haemagglutination using various sugars and sugar derivatives as inhibitors, including N-acetylneuraminic acid and N-acetylmuramic acid. Despite some discrepancies, both lectins exhibited a carbohydrate-binding specificity very similar to that previously reported for other lectins from Diocleinae (tribe Phaseoleae, sub-tribe Diocleinae). Accordingly, both lectins exhibited almost identical hydropathic profiles, and their three-dimensional models built up from the atomic coordinates of ConA looked very similar. However, docking experiments of glucose and mannose into their monosaccharide-binding sites, by comparison with the ConA-mannose complex used as a model, revealed conformational changes in the side chains of the amino acid residues involved in the binding of monosaccharides. These results fully agree with crystallographic data showing that binding of specific ligands to ConA requires conformational changes of its monosaccharide-binding site.

Relevance: 30.00%

Abstract:

Cry11Bb is an insecticidal crystal protein produced by Bacillus thuringiensis subsp. medellin during its stationary phase; this δ-endotoxin is active against dipteran insects and has great potential for mosquito-borne disease control. Here, we report the first theoretical model of the three-dimensional structure of a Cry11 toxin. The three-dimensional structure of the Cry11Bb toxin was obtained by homology modelling on the structures of the Cry1Aa and Cry3Aa toxins. In this work we give a brief description of our model and hypothesize which residues of the Cry11Bb toxin could be important in receptor recognition and pore formation. This model will serve as a starting point for the design of mutagenesis experiments aimed at improving toxicity, and will provide a new tool for elucidating the mechanism of action of these mosquitocidal proteins.

Relevance: 30.00%

Abstract:

OBJECTIVES: Reassessment of ongoing antibiotic therapy is an important step towards appropriate use of antibiotics. This study was conducted to evaluate the impact of a short questionnaire designed to encourage reassessment of intravenous antibiotic therapy after 3 days. PATIENTS AND METHODS: Patients hospitalized on the surgical and medical wards of a university hospital and treated with an intravenous antibiotic for 3-4 days were randomly allocated to either an intervention or a control group. The intervention consisted of mailing to the physician in charge of the patient a three-item questionnaire referring to possible adaptation of the antibiotic therapy. The primary outcome was the time elapsed from randomization until a first modification of the initial intravenous antibiotic therapy. It was compared between the two groups using Cox proportional-hazards modelling. RESULTS: One hundred and twenty-six eligible patients were randomized to the intervention group and 125 to the control group. Time to modification of intravenous antibiotic therapy was 14% shorter in the intervention group (adjusted hazard ratio for modification 1.28, 95% CI 0.99-1.67, P = 0.06). It was significantly shorter in the intervention group compared with a similar group of 151 patients observed during a 2 month period preceding the study (adjusted hazard ratio 1.17, 95% CI 1.03-1.32, P = 0.02). CONCLUSION: The results suggest that a short questionnaire, easily adaptable to automation, has the potential to foster reassessment of antibiotic therapy.
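A minimal sketch of the kind of Cox proportional-hazards analysis described above, using the lifelines library and a small invented data frame (column names, values and the extra covariate are hypothetical, not the trial data); a hazard ratio above 1 for the intervention indicator would correspond to earlier modification of the intravenous therapy.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient-level data (invented for illustration):
# time_to_change = days from randomization to first modification of IV therapy,
# modified = 1 if the therapy was modified, 0 if censored,
# intervention = 1 if the physician received the three-item questionnaire.
df = pd.DataFrame({
    "time_to_change": [2, 5, 3, 7, 4, 6, 2, 8, 3, 5],
    "modified":       [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
    "intervention":   [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "medical_ward":   [1, 1, 0, 0, 1, 0, 1, 1, 0, 0],  # example covariate
})

# Fit the Cox proportional-hazards model and report hazard ratios.
cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_change", event_col="modified")
cph.print_summary()
```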

Relevance: 30.00%

Abstract:

In this project, we have investigated new ways of modelling and analysing human vasculature from medical images. The research was divided into two main areas: cerebral vasculature analysis and coronary artery modelling. Regarding cerebral vasculature analysis, we have studied cerebral aneurysms, the internal carotid artery and the Circle of Willis (CoW). Aneurysms are abnormal vessel enlargements that can rupture, causing severe cerebral damage or death. Understanding this pathology, together with its virtual treatment and image-based diagnosis and prognosis, requires identification and detailed measurement of the aneurysms. In this context, we have proposed two automatic aneurysm isolation methods to separate the abnormal part of the vessel from the healthy part, in order to homogenize and speed up the processing pipeline usually employed to study this pathology [Cardenes2011TMI, arrabide2011MedPhys]. The results obtained with both methods have also been compared and validated in [Cardenes2012MBEC]. A second important task was the analysis of the internal carotid artery [Bogunovic2011Media] and the automatic labelling of the CoW [Bogunovic2011MICCAI, Bogunovic2012TMI]. The second area of research covers the study of the coronary arteries, especially coronary bifurcations, because that is where the formation of atherosclerotic plaque is most common and where intervention is most challenging. We therefore proposed a novel modelling method based on Computed Tomography Angiography (CTA) images, combined with Conventional Coronary Angiography (CCA), to obtain realistic vascular models of coronary bifurcations, presented in [Cardenes2011MICCAI] and fully validated, including phantom experiments, in [Cardene2013MedPhys]. The realistic models obtained with this method are being used to simulate stenting procedures and to investigate haemodynamic variables in coronary bifurcations in the work submitted in [Morlachi2012, Chiastra2012]. Additionally, preliminary work has been done to reconstruct the coronary tree from rotational angiography, published in [Cardenes2012ISBI].

Relevance: 30.00%

Abstract:

The main objective of this research is to link granular physics with the modelling of rock avalanches. The laboratory experiments consist of finding a suitable granular material, in terms of grain size and physical behaviour, and testing it on a simple slope geometry. Once the appropriate sliding material was selected, we attempted to model the debris avalanche and its spreading on a slope with different substrata, in order to understand the relationship between the volume and the reach angle, i.e. the angle of the line joining the top of the scar and the end of the deposit. For a better understanding of the mass spreading, the deposits are scanned with a laser scanner. The datasets are compared to see how grain size and volume influence a debris avalanche. The relationship between the roughness and the grain size of the substratum shows that the spreading of the sliding mass increases when the roughness of the substratum becomes comparable to or greater than the grain size of the flowing mass. The runout distance displays a more complex relationship, because a long runout distance implies that the grains are less spread. This means that the distance diminishes if the substratum is too rough, and also if it is too smooth, because the effect on the apparent friction decreases. Up to now our findings do not allow us to validate any previous model (Melosh, 1987; Bagnold, 1956).
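A small sketch of the reach-angle (apparent friction) calculation used in this kind of analysis: the slope of the line joining the top of the scar to the distal end of the deposit, computed from a fall height H and a runout length L. The numbers are illustrative laboratory-scale values, not measurements from the study.

```python
import math

def reach_angle(drop_height_m, runout_length_m):
    """Reach angle (apparent friction angle): slope of the line joining
    the top of the scar to the end of the deposit, tan(alpha) = H / L."""
    return math.degrees(math.atan2(drop_height_m, runout_length_m))

# Illustrative laboratory-scale values (assumed).
for H, L in [(0.50, 1.20), (0.50, 1.60), (0.50, 2.00)]:
    alpha = reach_angle(H, L)
    print(f"H={H:.2f} m, L={L:.2f} m -> reach angle = {alpha:.1f} deg, H/L = {H/L:.2f}")
```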

Relevance: 30.00%

Abstract:

Experimental and theoretical investigations of the growth of silicon nanoparticles (4 to 14 nm) in a radio-frequency discharge were carried out. Growth processes were performed with gas mixtures of SiH4 and Ar in a plasma chemical reactor at low pressure. A distinctive feature of the presented kinetic model of nanoparticle generation and growth (compared with our earlier model) is its ability to investigate small "critical" cluster dimensions, which determine the rate of particle production, and to take into account the influence of SiH2 and Si2Hm dimer radicals. The experiments in the present study were extended to higher pressure (≥20 Pa) and discharge power (≥40 W). Model calculations were compared with experimental measurements of the silicon nanoparticle size as a function of time, discharge power, gas mixture, total pressure, and gas flow.
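A toy sketch of the simplest surface-growth estimate that this kind of kinetic model refines: particle diameter increasing at a constant rate set by an assumed radical flux. All values are invented for illustration; the actual model tracks nucleation, critical cluster size and SiH2/Si2Hm chemistry.

```python
import numpy as np

def diameter_nm(t_s, d0_nm=1.0, growth_rate_nm_per_s=0.8):
    """Toy constant-rate surface growth: d(t) = d0 + r * t.
    This is only a placeholder relationship for illustration; the
    published model is far more detailed."""
    return d0_nm + growth_rate_nm_per_s * np.asarray(t_s)

t = np.linspace(0.0, 15.0, 6)           # seconds of growth, assumed
print(np.round(diameter_nm(t), 1))      # nm, spanning roughly the 4-14 nm range
```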

Relevance: 30.00%

Abstract:

One of the most important issues in molecular biology is to understand the regulatory mechanisms that control gene expression. Gene expression is often regulated by proteins called transcription factors, which bind to short (5 to 20 base pairs), degenerate segments of DNA. Experimental efforts towards understanding the sequence specificity of transcription factors are laborious and expensive, but can be substantially accelerated with the use of computational predictions. This thesis describes the use of algorithms and resources for transcription factor binding site analysis in addressing quantitative modelling, where probabilistic models are built to represent the binding properties of a transcription factor and can be used to find new functional binding sites in genomes. Initially, an open-access database (HTPSELEX) was created, holding high-quality binding sequences for two eukaryotic families of transcription factors, namely CTF/NF1 and LEF1/TCF. The binding sequences were elucidated using a recently described experimental procedure called HTP-SELEX, which allows the generation of a large number (>1000) of binding sites using mass sequencing technology. For each HTP-SELEX experiment we also provide accurate primary experimental information about the protein material used, details of the wet-lab protocol, an archive of sequencing trace files, and assembled clone sequences of binding sequences. The database also offers reasonably large SELEX libraries obtained with conventional low-throughput protocols. The database is available at http://wwwisrec.isb-sib.ch/htpselex/ and ftp://ftp.isrec.isb-sib.ch/pub/databases/htpselex. The Expectation-Maximisation (EM) algorithm is one of the most frequently used methods for estimating probabilistic models that represent the sequence specificity of transcription factors. We present computer simulations to estimate the precision of EM-estimated models as a function of data set parameters (such as the length of the initial sequences, the number of initial sequences, and the percentage of non-binding sequences). We observed a remarkable robustness of the EM algorithm with regard to the length of the training sequences and the degree of contamination. The HTPSELEX database and the benchmarked results of the EM algorithm formed part of the foundation for the subsequent project, in which a statistical framework based on hidden Markov models was developed to represent the sequence specificity of the transcription factors CTF/NF1 and LEF1/TCF using the HTP-SELEX experiment data. The hidden Markov model framework is capable of both predicting and classifying CTF/NF1 and LEF1/TCF binding sites. A covariance analysis of the binding sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism. We next tested the LEF1/TCF model by computing binding scores for a set of LEF1/TCF binding sequences for which relative affinities had been determined experimentally using non-linear regression. The predicted and experimentally determined binding affinities correlated well.
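To illustrate the kind of probabilistic model estimated from SELEX data (a simple position weight matrix here, rather than the thesis's hidden Markov model), the sketch below counts base frequencies in aligned binding sites and scores a candidate sequence by log-odds against a uniform background. The sequences are invented for illustration.

```python
import numpy as np

BASES = "ACGT"

def pwm_from_sites(sites, pseudocount=0.5):
    """Position weight matrix (log-odds vs. uniform background) from
    equal-length, aligned binding sites."""
    length = len(sites[0])
    counts = np.full((4, length), pseudocount)
    for site in sites:
        for pos, base in enumerate(site):
            counts[BASES.index(base), pos] += 1
    freqs = counts / counts.sum(axis=0)
    return np.log2(freqs / 0.25)

def score(pwm, seq):
    """Sum of per-position log-odds scores for a candidate site."""
    return sum(pwm[BASES.index(b), i] for i, b in enumerate(seq))

# Invented aligned binding sites (in reality, >1000 HTP-SELEX reads).
sites = ["TTCAAAG", "TTCAAAG", "CTCAAAG", "TTCATAG", "TTCAAAG"]
pwm = pwm_from_sites(sites)
print(round(score(pwm, "TTCAAAG"), 2), round(score(pwm, "GGGGGGG"), 2))
```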

Relevance: 30.00%

Abstract:

Despite the high degree of automation in the turning industry, a few key problems prevent the complete automation of turning. One of these problems is tool wear. This work focuses on implementing an automatic system for measuring wear, especially flank wear, using machine vision. The wear measurement system removes the need for manual measurement and minimizes the time spent on measuring tool wear. In addition to the measurement, the modelling and prediction of wear are studied. The automatic measurement system was placed inside the lathe and was successfully integrated with external systems. The experiments showed that the measurement system is able to measure tool wear in the system's real operating environment. The measurement system can also withstand the disturbances that are common for machine vision systems. Tool wear modelling was studied with several different methods, including neural networks and support vector regression. The experiments showed that the studied models were able to predict the degree of tool wear from the machining time used. The best results were obtained with neural networks using Bayesian regularization.
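A small sketch of the kind of regression model mentioned above, fitting flank wear as a function of machining time with scikit-learn's support vector regression; the data points and hyperparameters are invented for illustration (the thesis found Bayesian-regularized neural networks to perform best).

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical measurements: cutting time (min) vs. flank wear VB (mm).
t = np.array([2, 5, 8, 12, 16, 20, 25, 30], dtype=float).reshape(-1, 1)
vb = np.array([0.04, 0.07, 0.10, 0.13, 0.17, 0.20, 0.26, 0.33])

# Support vector regression with an RBF kernel; inputs are standardized.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(t, vb)

# Predict wear for unseen machining times (minutes).
t_new = np.array([[10.0], [22.0], [35.0]])
print(np.round(model.predict(t_new), 3))
```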

Relevance: 30.00%

Abstract:

The results shown in this thesis are based on selected publications from the 2000s. The work was carried out in several national and EC-funded public research projects and in close cooperation with industrial partners. The main objective of the thesis was to study and quantify the most important phenomena of circulating fluidized bed (CFB) combustors by developing and applying suitable experimental and modelling methods using laboratory-scale equipment. An understanding of these phenomena plays an essential role in the development of combustion and emission performance and of the availability and controls of CFB boilers. Experimental procedures to study fuel combustion behaviour under CFB conditions are presented in the thesis. Steady-state and dynamic measurements under well-controlled conditions were carried out to produce the data needed for the development of high-efficiency, utility-scale CFB technology. The importance of combustion control and furnace dynamics is emphasized when CFB boilers are scaled up with a once-through steam cycle. Qualitative information on fuel combustion characteristics was obtained directly by comparing flue gas oxygen responses during impulse-change experiments with the fuel feed. A one-dimensional, time-dependent model was developed to analyse the measurement data. Emission formation was studied together with fuel combustion behaviour. Correlations were developed for NO, N2O, CO and char loading as a function of temperature and oxygen concentration in the bed area. An online method to characterize char loading under CFB conditions was developed and validated with pilot-scale CFB tests. Finally, a new method to control the air and fuel feeds in CFB combustion was introduced. The method is based on models and on an analysis of the fluctuation of the flue gas oxygen concentration. The effect of high oxygen concentrations on fuel combustion behaviour was also studied to evaluate the potential of CFB boilers to apply oxygen-firing technology for CCS. In future studies, it will be necessary to go through the whole scale-up chain, from laboratory phenomena devices through pilot-scale test rigs to large-scale commercial boilers, in order to validate the applicability and scalability of the results. This thesis covers the chain between the laboratory-scale phenomena test rig (bench scale) and the CFB process test rig (pilot scale). CFB technology has been scaled up successfully from an industrial scale to a utility scale during the last decade. The work presented in the thesis has, for its part, supported this development by producing new detailed information on combustion under CFB conditions.
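A schematic sketch of the kind of dynamic analysis described above: the flue-gas O2 response to an impulse change in fuel feed approximated by a simple first-order-plus-dead-time form. The gain, time constant and delay are assumed values for illustration; the thesis itself used a one-dimensional, time-dependent model.

```python
import numpy as np

def o2_impulse_response(t_s, gain=-0.8, tau_s=25.0, delay_s=5.0):
    """First-order-plus-dead-time approximation of the flue-gas O2
    deviation after an impulse in fuel feed: zero before the delay,
    then an exponentially decaying deviation with time constant tau."""
    t = np.asarray(t_s, dtype=float)
    resp = np.zeros_like(t)
    active = t >= delay_s
    resp[active] = gain / tau_s * np.exp(-(t[active] - delay_s) / tau_s)
    return resp

t = np.linspace(0.0, 120.0, 13)              # seconds after the fuel impulse
print(np.round(o2_impulse_response(t), 4))   # O2 deviation, arbitrary units
```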

Relevance: 30.00%

Abstract:

The condensation rate has to be high in the pressure suppression pool systems of Boiling Water Reactors (BWR) in order for them to fulfil their safety function. The phenomena associated with such a high direct contact condensation (DCC) rate are very challenging to analyse, whether with experiments or with numerical simulations. In this thesis, the suppression pool experiments carried out in the POOLEX facility of Lappeenranta University of Technology were simulated. Two different condensation modes were modelled using the two-phase CFD codes NEPTUNE CFD and TransAT. The DCC models applied were those typically used for separated flows in channels, and their applicability to the rapidly condensing flow in the condensation pool context had not been tested earlier. A low Reynolds number case was the first to be simulated. The POOLEX experiment STB-31 was operated near the conditions between the 'quasi-steady oscillatory interface condensation' mode and the 'condensation within the blowdown pipe' mode. The condensation models of Lakehal et al. and of Coste & Laviéville predicted the condensation rate quite accurately, while the other models tested overestimated it. It was possible to get the direct phase change solution to settle near the measured values, but a very high grid resolution was needed. Secondly, a high Reynolds number case corresponding to the 'chugging' mode was simulated. The POOLEX experiment STB-28 was chosen because various standard and high-speed video samples of bubbles were recorded during it. In order to extract numerical information from the video material, a pattern recognition procedure was programmed. The bubble size distributions and the chugging frequencies were calculated with this procedure. With the statistical data on the bubble sizes and the temporal data on bubble/jet appearance, it was possible to compare the condensation rates between the experiment and the CFD simulations. In the chugging simulations, a spherically curvilinear calculation grid at the blowdown pipe exit improved the convergence and decreased the required cell count. The compressible flow solver with complete steam tables was beneficial for the numerical success of the simulations. The Hughes-Duffey model and, to some extent, the Coste & Laviéville model produced realistic chugging behaviour. The initial level of the steam/water interface was an important factor in determining the initiation of chugging. If the interface was initialized with a water level high enough inside the blowdown pipe, the vigorous penetration of a water plug into the pool created a turbulent wake which triggered self-sustaining chugging. A 3D simulation with a suitable DCC model produced qualitatively very realistic shapes of the chugging bubbles and jets. The comparative FFT analysis of the bubble size data and the pool bottom pressure data gave useful information for distinguishing the eigenmodes of chugging, bubbling, and pool structure oscillations.
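A minimal sketch of the comparative FFT analysis mentioned at the end: estimating dominant frequencies from a sampled pool-bottom pressure signal. The signal here is synthetic, with an assumed 1.4 Hz chugging-like component, a weaker structural component and noise, not POOLEX data.

```python
import numpy as np

fs = 100.0                              # sampling frequency, Hz (assumed)
t = np.arange(0.0, 60.0, 1.0 / fs)      # 60 s record

# Synthetic pool-bottom pressure: a 1.4 Hz chugging-like component,
# a weaker 7 Hz structural component, and measurement noise.
rng = np.random.default_rng(3)
p = (1.0 * np.sin(2 * np.pi * 1.4 * t)
     + 0.3 * np.sin(2 * np.pi * 7.0 * t)
     + 0.2 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(p - p.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

# Report the strongest spectral peaks (candidate eigenmodes).
peaks = freqs[np.argsort(spectrum)[-3:]]
print("dominant frequencies [Hz]:", np.round(np.sort(peaks), 2))
```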