990 results for Addition techniques
Abstract:
The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items. Ipsative items are items that require the respondent to make between-scale comparisons within each item. The selected option determines to which scale the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items, every respondent is allotted an equal amount, i.e. the total score, that each can distribute differently over the scales. Therefore this type of response format yields data that can be considered compositional from its inception. Methodologically oriented psychologists have heavily criticized this type of item format, since the resulting data is also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline for when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year students in psychology according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of the second comparison not only offers the advantage of a better-balanced research strategy. In principle it also allows for parametric testing in the evaluation
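To make the compositional point concrete, here is a minimal sketch (not the study's actual analysis pipeline): forced-choice scoring fixes each respondent's total, and a centred log-ratio (clr) transform, a standard compositional-data tool, maps the constrained scores into an unconstrained space where parametric tests become possible. The scale counts below are hypothetical.

```python
import numpy as np

def ipsative_scores(choices, n_scales):
    """Tally forced-choice answers: each item adds one point to the
    scale of the chosen option, so every respondent's scores sum to
    the number of items (a fixed total -> compositional data)."""
    scores = np.zeros(n_scales)
    for scale in choices:
        scores[scale] += 1
    return scores

def clr(composition, eps=0.5):
    """Centred log-ratio transform; eps offsets zero counts."""
    x = np.asarray(composition, dtype=float) + eps
    logx = np.log(x)
    return logx - logx.mean()

# Hypothetical respondent: 20 forced-choice items over 4 scales.
choices = np.random.default_rng(0).integers(0, 4, size=20)
scores = ipsative_scores(choices, n_scales=4)
print("ipsative scores:", scores, "sum =", scores.sum())
print("clr-transformed:", clr(scores))
```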
Abstract:
Objective: The aim of the study was to assess whether coadministration of S(+) ketamine or ketorolac would enhance or prolong the local analgesic effect of bupivacaine after inguinal hernia repair. Design: Prospective double-blind randomized study evaluating pain intensity after surgery under general anesthesia. Setting: Outpatient facilities of the University Hospital of Lausanne. Patients: Thirty-six ASA I-II outpatients scheduled for elective day-case inguinal herniorrhaphy. Intervention: The analgesia strategy consisted of a wound infiltration and an inguinal field block with either 30 mL bupivacaine (0.5%) (n=12), the same volume of a mixture of 27 mL bupivacaine (0.5%) + 3 mL S(+) ketamine (75 mg) (n=12), or 28 mL bupivacaine (0.5%) + 2 mL ketorolac (60 mg) (n=12). The postoperative analgesic regimen was standardized. Outcome Measures: Pain intensity was assessed with a Visual Analog Scale, a verbal rating score, and by pressure algometry 2, 4, 6, 24, and 48 hours after surgery. Results: All 3 groups of patients experienced the highest Visual Analog Scale pain score at 24 hours, which was significantly different from the scores at 6 and 48 hours (P < 0.05). Apart from a significantly lower pain sensation (verbal rating score) in the ketorolac group at 24 and 48 hours and only at 48 hours with ketamine, there were no other differences in pain scores, pain pressure thresholds, or rescue analgesic consumption between groups throughout the 48-hour study period.
Conclusion: The addition of S(+) ketamine or ketorolac only minimally improves the analgesic effect of bupivacaine. This may be related to the tension-free hernia repair technique, which is associated with low postoperative pain.
Abstract:
Iterative image reconstruction algorithms provide significant improvements over traditional filtered back projection in computed tomography (CT). Clinically available through recent advances in modern CT technology, iterative reconstruction enhances image quality through cyclical image calculation, suppressing image noise and artifacts, particularly blooming artifacts. The advantages of iterative reconstruction are apparent in traditionally challenging cases, for example in obese patients, those with significant artery calcification, or those with coronary artery stents. In addition, as clinical use of CT has grown, so have concerns over the ionizing radiation associated with CT examinations. Through noise reduction, iterative reconstruction has been shown to permit radiation dose reduction while preserving diagnostic image quality. This approach is becoming increasingly attractive as the routine use of CT for pediatric and repeated follow-up evaluation grows ever more common. Cardiovascular CT in particular, with its focus on detailed structural and functional analyses, stands to benefit greatly from the promising iterative solutions that are readily available.
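As a toy illustration of the "cyclical image calculation" idea, the sketch below runs a generic SIRT-style update loop on a made-up two-pixel system; it is not any vendor's clinical algorithm.

```python
import numpy as np

def sirt(A, b, n_iters=50, relax=1.0):
    """Toy SIRT-style iterative reconstruction: repeatedly project the
    current image estimate, compare with the measured data, and
    back-project the normalized residual as a correction."""
    m, n = A.shape
    row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1
    col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1
    x = np.zeros(n)
    for _ in range(n_iters):
        residual = (b - A @ x) / row_sums         # forward-project, compare
        x += relax * (A.T @ residual) / col_sums  # back-project correction
        x = np.clip(x, 0, None)                   # enforce non-negativity
    return x

# Hypothetical 2-pixel "phantom" observed through a 3-ray system matrix.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_image = np.array([2.0, 3.0])
b = A @ true_image
print(sirt(A, b))  # converges toward [2, 3]
```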
Abstract:
Mass spectrometry-based proteomics is the study of the proteome - the set of all expressed proteins in a cell, tissue or organism - using mass spectrometry. Proteins are cut into smaller pieces - peptides - using proteolytic enzymes and separated using different separation techniques. The different fractions, each containing several hundred peptides, are then analyzed by mass spectrometry. The masses of the peptides entering the instrument are recorded and each peptide is sequentially fragmented to obtain its amino acid sequence. Each peptide sequence with its corresponding mass is then searched against a protein database to identify the protein to which it belongs. This thesis presents new method developments in this field. In a first part, the thesis describes the development of identification methods. It shows the importance of protein enrichment methods to gain access to medium-to-low-abundance proteins in a human milk sample. It uses repeated injections to increase protein coverage and confidence in identification and demonstrates the impact of new database releases on protein identification lists. In addition, it successfully uses mass spectrometry as an alternative to antibody-based assays to validate the presence of 34 different recombinant constructs of Staphylococcus aureus pathogenic proteins expressed in a Lactococcus lactis strain. In a second part, the development of quantification methods is described. It shows new stable isotope labeling approaches based on N- and C-terminus labeling of proteins and describes the first method of labeling carboxylic groups at the protein level using 13C stable isotopes.
In addition, a new quantitative approach called ANIBAL, which labels all amino and carboxylic groups at the protein level, is described.
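As a hedged illustration of the identification step described above, the sketch below computes peptide monoisotopic masses and matches an observed mass against a small hypothetical database; real search engines additionally score fragment spectra.

```python
# Monoisotopic residue masses (Da) for a few amino acids.
RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203,
                "P": 97.05276, "V": 99.06841, "L": 113.08406,
                "K": 128.09496, "R": 156.10111}
WATER = 18.01056  # mass of H2O added for an intact peptide

def peptide_mass(seq):
    """Neutral monoisotopic mass of a peptide from its sequence."""
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

def match(observed_mass, candidates, tol_ppm=10.0):
    """Return candidate peptides whose computed mass lies within a
    ppm tolerance of the observed mass, as in database searching."""
    return [p for p in candidates
            if abs(peptide_mass(p) - observed_mass) / observed_mass * 1e6 <= tol_ppm]

db = ["GASP", "VLKR", "GGKL"]  # hypothetical tryptic peptides
print(match(peptide_mass("VLKR"), db))  # -> ['VLKR']
```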
Abstract:
The coverage and volume of geo-referenced datasets are extensive and incessantly growing. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in the exploratory data analysis and the extraction of knowledge embedded in these data. However, new challenges in visualization and clustering are posed when dealing with the special characteristics of this data, for instance its complex structures, large quantity of samples, variables involved in a temporal context, high dimensionality and large variability in cluster shapes. The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization, in order to assist the knowledge extraction from spatio-temporal geo-referenced data, thus improving decision-making processes. I present two original algorithms, one for clustering, the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and the second for exploratory visual data analysis, the Tree-structured Self-organizing Maps Component Planes. In addition, I present methodologies that, combined with FGHSON and the Tree-structured SOM Component Planes, allow the integration of space and time seamlessly and simultaneously in order to extract knowledge embedded in a temporal context. The originality of the FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability of cluster shapes, variances, densities and number of clusters. The most important characteristics of the FGHSON include: (1) it does not require an a-priori setup of the number of clusters; (2) the algorithm executes several self-organizing processes in parallel, hence, when dealing with large datasets, the processes can be distributed, reducing the computational cost; (3) only three parameters are necessary to set up the algorithm. In the case of the Tree-structured SOM Component Planes, the novelty of this algorithm lies in its ability to create a structure that allows the visual exploratory data analysis of large high-dimensional datasets. This algorithm creates a hierarchical structure of Self-Organizing Map Component Planes, arranging similar variables' projections in the same branches of the tree. Hence, similarities in variables' behavior can be easily detected (e.g. local correlations, maximal and minimal values and outliers). Both FGHSON and the Tree-structured SOM Component Planes were applied to several agroecological problems, proving to be very efficient in the exploratory analysis and clustering of spatio-temporal datasets. In this thesis I also tested three soft competitive learning algorithms: two well-known unsupervised soft competitive algorithms, namely the Self-Organizing Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs), and a third that is our original contribution, the FGHSON. Although the algorithms presented here have been used in several areas, to my knowledge there is no work applying and comparing the performance of those techniques when dealing with spatio-temporal geospatial data, as is presented in this thesis. I propose original methodologies to explore spatio-temporal geo-referenced datasets through time. Our approach uses time windows to capture temporal similarities and variations by using the FGHSON clustering algorithm.
The developed methodologies are used in two case studies. In the first, the objective was to find similar agroecozones through time, and in the second it was to find similar environmental patterns shifted in time. Several results presented in this thesis have led to new contributions to agroecological knowledge, for instance in sugar cane and blackberry production. Finally, in the framework of this thesis we developed several software tools: (1) a Matlab toolbox that implements the FGHSON algorithm, and (2) a program called BIS (Bio-inspired Identification of Similar agroecozones), an interactive graphical user interface tool which integrates the FGHSON algorithm with Google Earth in order to show zones with similar agroecological characteristics.
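For orientation, the following is a minimal sketch of the competitive-learning step that self-organizing approaches such as FGHSON build on (a plain SOM, not the FGHSON itself; the data and parameters are illustrative).

```python
import numpy as np

def train_som(data, grid_w=5, grid_h=5, n_iters=500,
              lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM: find the best-matching unit (BMU) for each sample
    and pull the BMU and its grid neighbours toward that sample."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates of each unit, used for the neighbourhood kernel.
    yy, xx = np.mgrid[0:grid_h, 0:grid_w]
    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        # Competition: the unit whose weight vector is closest to x wins.
        dists = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(dists.argmin(), dists.shape)
        # Cooperation: Gaussian neighbourhood, shrinking over time.
        frac = t / n_iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        h = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights

data = np.random.default_rng(1).random((200, 3))  # toy spatio-temporal features
som = train_som(data)
print(som.shape)  # (5, 5, 3): one prototype vector per map unit
```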
Abstract:
Wet pavement friction is known to be one of the most important roadway safety parameters. In this research, the frictional properties of flexible (asphalt) pavements were investigated. As a part of this study, a laboratory device to polish asphalt specimens was refined and a procedure to evaluate mixture frictional properties was proposed. Following this procedure, 46 different Superpave mixtures, one stone matrix asphalt (SMA) mixture and one porous friction course (PFC) mixture were tested. In addition, 23 different asphalt and two concrete field sections were also tested for friction and noise. The results of both field and laboratory measurements were used to develop an International Friction Index (IFI)-based protocol for laboratory measurement of the frictional characteristics of asphalt pavements. Based on the results of the study, it appears that the content of high-friction aggregate should be 20% or more of the total aggregate blend when used with other, polish-susceptible coarse aggregates; the frictional properties increased substantially as the friction aggregate content increased above 20%. Both steel slag and quartzite were found to improve the frictional properties of the blend, though steel slag had a lower polishing rate. In general, mixes containing soft limestone demonstrated lower friction values than comparable mixes with hard limestone or dolomite. Larger nominal maximum aggregate size mixes had better overall frictional performance than smaller-sized mixes. In addition, mixes with higher fineness moduli generally had higher macrotexture and friction.
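For reference, the general form of the IFI harmonization (after ASTM E1960) can be sketched as follows; the calibration constants A and B are device-specific, and the values below are placeholders rather than results from this study.

```python
import math

def ifi_f60(frs, slip_speed, mpd, A=0.045, B=0.925):
    """General form of the International Friction Index: harmonize a
    friction reading FRS taken at a given slip speed (km/h) to the
    60 km/h reference, using the macrotexture-based speed constant Sp.
    A and B are device calibration constants (placeholder values)."""
    sp = 14.2 + 89.7 * mpd                    # speed constant from MPD (mm)
    fr60 = frs * math.exp((slip_speed - 60.0) / sp)
    return A + B * fr60                       # harmonized friction number F60

# Hypothetical locked-wheel test: FRS = 0.50 at 64 km/h, MPD = 0.8 mm.
print(round(ifi_f60(0.50, 64.0, 0.8), 3))
```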
Abstract:
The widespread implementation of GIS-based 3D topographical models has been a great aid in the development and testing of archaeological hypotheses. In this paper, a topographical reconstruction of the ancient city of Tarraco, the Roman capital of the Tarraconensis province, is presented. This model is based on topographical data obtained through archaeological excavations, old photographic documentation, georeferenced archive maps depicting the pre-modern city topography, modern detailed topographical maps and differential GPS measurements. The addition of the Roman urban architectural features to the model offers the possibility of testing hypotheses concerning the ideological background manifested in the city shape. This is accomplished mainly through the use of 3D views from the main city accesses. These techniques ultimately demonstrate the ‘theatre-shaped’ layout of the city (to quote Vitruvius) as well as its southwest-oriented architecture, whose monumental character was conceived to present a striking aspect to visitors, particularly those arriving from the sea.
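As a hedged sketch of the kind of visibility test behind "3D views from the main city accesses": a bare-bones line-of-sight check on a toy elevation grid, not the GIS toolchain used in the paper.

```python
import numpy as np

def line_of_sight(dem, observer, target, eye_height=1.7):
    """True if the target cell is visible from the observer cell:
    sample elevations along the straight line between them and check
    that no terrain sample rises above the sight line."""
    (r0, c0), (r1, c1) = observer, target
    n = max(abs(r1 - r0), abs(c1 - c0)) + 1
    rows = np.linspace(r0, r1, n).round().astype(int)
    cols = np.linspace(c0, c1, n).round().astype(int)
    z0 = dem[r0, c0] + eye_height
    z1 = dem[r1, c1]
    sight = np.linspace(z0, z1, n)      # elevation of the sight line
    terrain = dem[rows, cols]
    return bool(np.all(terrain[1:-1] <= sight[1:-1]))

# Toy DEM: a ridge between a harbour (row 0) and a monument (row 4).
dem = np.array([[ 0,  0,  0],
                [ 5,  5,  5],
                [20, 20, 20],
                [ 5,  5,  5],
                [30, 30, 30]], dtype=float)
print(line_of_sight(dem, (0, 1), (4, 1)))  # ridge blocks the view -> False
```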
Abstract:
The high cost of feed ingredients, the use of non-renewable sources of phosphate and the dramatic increase in the environmental load resulting from the excessive land application of manure are major challenges for the livestock industry. Precision feeding is proposed as an essential approach to improve the utilization of dietary nitrogen, phosphorus and other nutrients and thus reduce feeding costs and nutrient excretion. Precision feeding requires accurate knowledge of the nutritional value of feedstuffs and animal nutrient requirements, the formulation of diets in accordance with environmental constraints, and the gradual adjustment of the dietary nutrient supply to match the requirements of the animals. After the nutritional potential of feed ingredients has been precisely determined and has been improved by the addition of enzymes (e.g. phytases) or feed treatments, the addition of environmental objectives to the traditional feed formulation algorithms can promote the sustainability of the swine industry by reducing nutrient excretion in swine operations with small increases in feeding costs. Increasing the number of feeding phases can also contribute to significant reductions in nutrient excretion and feeding costs. However, the use of precision feeding techniques in which pigs are fed individually with daily tailored diets can further improve the efficiency with which pigs utilize dietary nutrients. Precision feeding involves the use of feeding techniques that allow the provision of the right amount of feed with the right composition at the right time to each pig in the herd. Using this approach, it has been estimated that feeding costs can be reduced by more than 4.6%, and nitrogen and phosphorus excretion can both be reduced by more than 38%. Moreover, the integration of precision feeding techniques into large-group production systems can provide real-time off-farm monitoring of feed and animals for optimal slaughter and production strategies, thus improving the environmental sustainability of pork production, animal well-being and meat-product quality.
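A minimal sketch of the formulation idea follows: least-cost diet selection under a nutrient requirement plus an added environmental (phosphorus) cap, posed as a linear program. All ingredient names, prices and nutrient values are invented for illustration.

```python
from scipy.optimize import linprog

# Hypothetical ingredients: corn, soybean meal, phytase premix.
cost = [0.12, 0.35, 2.0]             # $/kg
protein = [0.08, 0.44, 0.0]          # kg protein / kg ingredient
phosphorus = [0.003, 0.007, 0.001]   # kg P / kg ingredient

# Minimize cost subject to: protein >= 0.16 kg/kg of diet,
# total P <= 0.005 kg/kg of diet (environmental constraint),
# and ingredient fractions summing to 1.
res = linprog(
    c=cost,
    A_ub=[[-p for p in protein], phosphorus],
    b_ub=[-0.16, 0.005],
    A_eq=[[1, 1, 1]], b_eq=[1],
    bounds=[(0, 1)] * 3,
)
print(res.x, res.fun)  # optimal fractions and cost per kg of diet
```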
Abstract:
Post-testicular sperm maturation occurs in the epididymis. The ion concentration and proteins secreted into the epididymal lumen, together with testicular factors, are believed to be responsible for the maturation of spermatozoa. Disruption of the maturation of spermatozoa in the epididymis provides a promising strategy for generating a male contraceptive. However, little is known about the proteins involved. For drug development, it is also essential to have tools to study the function of these proteins in vitro. One approach for screening novel targets is to study the secretory products of the epididymis or the G protein-coupled receptors (GPCRs) that are involved in the maturation process of the spermatozoa. The modified Ca2+ imaging technique to monitor release from PC12 pheochromocytoma cells can also be applied to monitor secretory products involved in the maturational processes of spermatozoa. PC12 pheochromocytoma cells were chosen for evaluation of this technique as they release catecholamines from their cell body, thus behaving like endocrine secretory cells. The results of the study demonstrate that depolarisation of nerve growth factor-differentiated PC12 cells releases factors which activate nearby randomly distributed HEL erythroleukemia cells. Thus, during the release process, the ligands reach concentrations high enough to activate receptors even in cells some distance from the release site. This suggests that communication between randomly dispersed cells is possible even if the actual quantities of transmitter released are extremely small. The development of a novel method to analyse GPCR-dependent Ca2+ signalling in living slices of mouse caput epididymis is an additional tool for screening for drug targets. By this technique it was possible to analyse functional GPCRs in the epithelial cells of the ductus epididymis. The results revealed that both P2X- and P2Y-type purinergic receptors are responsible for the rapid and transient Ca2+ signal detected in the epithelial cells of caput epididymides. Immunohistochemical and reverse transcriptase-polymerase chain reaction (RT-PCR) analyses showed the expression of at least P2X1, P2X2, P2X4 and P2X7, and P2Y1 and P2Y2 receptors in the epididymis. Searching for epididymis-specific promoters for transgene delivery into the epididymis is of key importance for the development of specific models for drug development. We used EGFP as the reporter gene to identify proper promoters to deliver transgenes into the epithelial cells of the mouse epididymis in vivo. Our results revealed that the 5.0 kb murine Glutathione peroxidase 5 (GPX5) promoter can be used to target transgene expression into the epididymis, while the 3.8 kb Cysteine-rich secretory protein-1 (CRISP-1) promoter can be used to target transgene expression into the testis. Although the visualisation of EGFP in living cells in culture usually poses few problems, the detection of EGFP in tissue sections can be more difficult because soluble EGFP molecules can be lost if the cell membrane is damaged by freezing, sectioning, or permeabilisation. Furthermore, the fluorescence of EGFP is dependent on its conformation. Therefore, fixation protocols that immobilise EGFP may also destroy its usefulness as a fluorescent reporter. We therefore developed novel tissue preparation and preservation techniques for EGFP.
In addition, fluorescence spectrophotometry with epididymal epithelial cell lines in suspension revealed the expression of functional purinergic, adrenergic, cholinergic and bradykinin receptors in these cell lines (mE-Cap27 and mE-Cap28). In conclusion, we developed new tools for studying the role of the epididymis in sperm maturation. We developed a new technique to analyse GPCR-dependent Ca2+ signalling in living slices of mouse caput epididymis. In addition, we improved the method of detecting reporter gene expression. Furthermore, we characterised two epididymis-specific gene promoters, analysed the expression of GPCRs in epididymal epithelial cells and developed a novel technique for the measurement of secretion from cells.
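As a hedged illustration of the kind of trace analysis behind Ca2+ imaging of GPCR responses (synthetic data; the thesis's actual acquisition and analysis pipeline is not described in the abstract):

```python
import numpy as np

def delta_f_over_f(trace, baseline_frames=50):
    """Normalize a fluorescence trace as (F - F0) / F0, with F0 taken
    as the mean of the pre-stimulus baseline frames."""
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

def responders(traces, threshold=0.2, baseline_frames=50):
    """Flag cells whose peak dF/F0 after the baseline exceeds a
    threshold - a crude stand-in for 'receptor activated'."""
    out = []
    for trace in traces:
        dff = delta_f_over_f(trace, baseline_frames)
        out.append(dff[baseline_frames:].max() > threshold)
    return out

# Synthetic example: 3 cells, 200 frames, a transient added to cell 0.
rng = np.random.default_rng(0)
traces = 100 + rng.normal(0, 1, (3, 200))
traces[0, 60:90] += 40  # a transient Ca2+ rise after stimulation
print(responders(traces))  # -> [True, False, False]
```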
Abstract:
Formal software development processes and well-defined development methodologies are nowadays seen as the definitive way to produce high-quality software within time limits and budgets. The variety of such high-level methodologies is huge, ranging from rigorous process frameworks like CMMI and RUP to more lightweight agile methodologies. The need for managing this variety, and the fact that practically every software development organization has its own unique set of development processes and methods, have created a profession of software process engineers. Different kinds of informal and formal software process modeling languages are essential tools for process engineers. These are used to define processes in a way which allows easy management of processes, for example process dissemination, process tailoring and process enactment. The process modeling languages are usually used as a tool for process engineering, where the main focus is on the processes themselves. This dissertation has a different emphasis. The dissertation analyses modern software development process modeling from the software developers' point of view. The goal of the dissertation is to investigate whether software process modeling and software process models aid software developers in their day-to-day work, and what the main mechanisms for this are. The focus of the work is on the Software Process Engineering Metamodel (SPEM) framework, which is currently one of the most influential process modeling notations in software engineering. The research theme is elaborated through six scientific articles which represent the dissertation research done on process modeling during an approximately five-year period. The research follows the classical engineering research discipline, where the current situation is analyzed, a potentially better solution is developed and finally its implications are analyzed. The research applies a variety of different research techniques, ranging from literature surveys to qualitative studies done amongst software practitioners. The key finding of the dissertation is that software process modeling notations and techniques are usually developed in process engineering terms. As a consequence, the connection between the process models and actual development work is loose. In addition, modeling standards like SPEM are partially incomplete when it comes to pragmatic process modeling needs, like lightweight modeling and combining pre-defined process components. This leads to a situation where the full potential of process modeling techniques for aiding daily development activities cannot be achieved. Despite these difficulties, the dissertation shows that it is possible to use modeling standards like SPEM to aid software developers in their work. The dissertation presents a lightweight modeling technique which software development teams can use to quickly analyze their work practices in a more objective manner. The dissertation also shows how process modeling can be used to more easily compare different software development situations and to analyze their differences in a systematic way. Models also help to share this knowledge with others. A qualitative study done amongst Finnish software practitioners verifies the conclusions of other studies in the dissertation. Although processes and development methodologies are seen as an essential part of software development, process modeling techniques are rarely used during daily development work.
However, the potential of these techniques intrigues practitioners. In conclusion, the dissertation shows that process modeling techniques, most commonly used as tools for process engineers, can also be used as tools for organizing daily software development work. This work presents theoretical solutions for bringing process modeling closer to ground-level software development activities. These theories are shown to be feasible through several case studies where the modeling techniques are used, e.g., to find differences in the work methods of the members of a software team and to share process knowledge with a wider audience.
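As a hedged illustration of what lightweight modeling with SPEM-like concepts could look like in code: the class names below mirror SPEM's Task/Role/WorkProduct vocabulary, but this is an illustrative toy, not the dissertation's actual technique.

```python
from dataclasses import dataclass, field

# SPEM-like vocabulary: roles perform tasks that consume and produce
# work products.
@dataclass
class Role:
    name: str

@dataclass
class WorkProduct:
    name: str

@dataclass
class Task:
    name: str
    performer: Role
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

def undefined_inputs(tasks):
    """A quick sanity check a team could run on its own process model:
    find inputs that no task in the model produces."""
    produced = {wp.name for t in tasks for wp in t.outputs}
    return {wp.name for t in tasks for wp in t.inputs} - produced

dev = Role("Developer")
spec = WorkProduct("Specification")
code = WorkProduct("Code")
tasks = [Task("Implement feature", dev, inputs=[spec], outputs=[code])]
print(undefined_inputs(tasks))  # -> {'Specification'}: produced outside the model
```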
Abstract:
Forty-one wild house mice (Mus musculus) were trapped in an urban area, near railways, in Santa Fe city, Argentina. Both kidneys from each mouse were removed for bacteriological and histological examination. One kidney was inoculated into Fletcher semi-solid medium and isolates were serologically typed. The other kidney was microscopically examined after hematoxylin-eosin, silver impregnation and immunohistochemical stains. Leptospires, all of them belonging to the Ballum serogroup, were isolated from 16 (39%) out of 41 samples. The presence of the agent was recorded in 18 (44%) and in 19 (46%) out of 41 silver-impregnated and immunohistochemically stained samples, respectively. Additionally, leptospires were detected in high numbers on the apical surface of epithelial cells and in the lumen of medullary tubules, and were less frequently seen on the apical surface of epithelial cells or in the lumen of the cortical tubules, which represents an unusual finding in carrier animals. Microscopic lesions consisting of focal mononuclear interstitial nephritis, glomerular shrinkage and desquamation of tubular epithelial cells were observed in 13 of 19 infected and in 10 of 22 non-infected mice; differences in the presence of lesions between infected and non-infected animals were not statistically significant (P = 0.14). The three techniques, culture, silver impregnation and immunohistochemistry, showed high agreement (κ ≥ 0.85) and no significant differences between them were detected (P > 0.05). In addition, an unusual location of leptospires in kidneys of carrier animals was reported, but a relationship between lesions and the presence of leptospires could not be established.
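For reference, the agreement statistic cited above can be computed as Cohen's kappa; a minimal sketch with hypothetical positive/negative calls on ten kidneys:

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two binary raters: observed agreement
    corrected for the agreement expected by chance."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    p_yes_a = sum(a) / n
    p_yes_b = sum(b) / n
    pe = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
    return (po - pe) / (1 - pe)

# Hypothetical positive/negative calls on 10 kidneys by two techniques.
culture = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
ihc     = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
print(round(cohen_kappa(culture, ihc), 2))  # -> 0.8
```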
Abstract:
Chromosome abnormalities and the mitotic index in lymphocyte cultures, and micronuclei in buccal mucosa cells, were investigated in a sample of underground mineral coal miners from Southern Brazil. A decreased mitotic index, an excess of micronuclei and a higher frequency of chromosome abnormalities (fragments, polyploidy and overall chromosome alterations) were observed in the miners when compared to age-matched normal controls from the same area. An alternative assay for clastogenesis in occupational exposure was tested by submitting lymphocytes from non-exposed individuals to a pool of plasmas from the exposed population. This assay proved to be very convenient, as the lymphocytes obtained from the same individuals can be used as target as well as control cells. Also, it yielded a larger number of metaphases and of successful cultures than common lymphocyte cultures from miners. A significantly higher frequency of chromatid gaps, fragments and overall alterations was observed when lymphocytes from control subjects were exposed to miner plasma pools. Control plasma pools did not significantly induce any type of chromosome alteration in the cultures of normal subjects, thus indicating that the results are not due to the effect of the addition of plasma pools per se.
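A hedged sketch of the kind of count comparison such an assay calls for, using Fisher's exact test on a 2x2 table; the counts below are invented placeholders, not the study's data.

```python
from scipy.stats import fisher_exact

# Hypothetical counts: metaphases with/without chromatid aberrations in
# cultures exposed to miner vs. control plasma pools (placeholder data).
exposed = [18, 182]   # aberrant, normal metaphases
control = [6, 194]
odds_ratio, p = fisher_exact([exposed, control])
print(f"OR = {odds_ratio:.2f}, P = {p:.4f}")
```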
Abstract:
In this thesis, stepwise titration with hydrochloric acid was used to obtain the chemical reactivities and dissolution rates of ground limestones and dolostones of varying geological backgrounds (sedimentary, metamorphic or magmatic). Two different ways of conducting the calculations were used: 1) a first-order mathematical model was used to calculate extrapolated initial reactivities (and dissolution rates) at pH 4, and 2) a second-order mathematical model was used to acquire integrated mean specific chemical reaction constants (and dissolution rates) at pH 5. The calculations of the reactivities and dissolution rates were based on the rate of change of pH and on the particle size distributions of the sample powders obtained by laser diffraction. The initial dissolution rates at pH 4 were repeatedly higher than previously reported literature values, whereas the dissolution rates at pH 5 were consistent with former observations. Reactivities and dissolution rates varied substantially for dolostones, whereas for limestones and calcareous rocks the variation can be primarily explained by relatively large sample standard deviations. In decreasing order of initial reactivity at pH 4, the dolostone samples ranked as follows: 1) metamorphic dolostones with a calcite/dolomite ratio higher than about 6%, 2) sedimentary dolostones without calcite, and 3) metamorphic dolostones with a calcite/dolomite ratio lower than about 6%. The reactivity and dissolution rate measurements were accompanied by a wide range of experimental techniques to characterise the samples, to reveal how different rocks changed during the dissolution process, and to find out which factors had an influence on their chemical reactivities. An emphasis was put on chemical and morphological changes taking place at the surfaces of the particles via X-ray Photoelectron Spectroscopy (XPS) and Scanning Electron Microscopy (SEM). Supporting chemical information was obtained with X-Ray Fluorescence (XRF) measurements of the samples, and with Inductively Coupled Plasma-Mass Spectrometry (ICP-MS) and Inductively Coupled Plasma-Optical Emission Spectrometry (ICP-OES) measurements of the solutions used in the reactivity experiments. Information on mineral (modal) compositions and their occurrence was provided by X-Ray Diffraction (XRD), Energy Dispersive X-ray analysis (EDX) and the study of thin sections with a petrographic microscope. BET (Brunauer, Emmett, Teller) surface areas were determined from nitrogen physisorption data. The factors found to increase the chemical reactivity of dolostones and calcareous rocks were sedimentary origin, higher calcite concentration and lower quartz concentration. It is also assumed that finer grain size and larger BET surface areas increase the reactivity, although no definite correlation was found in this thesis. Atomic concentrations did not correlate with the reactivities. Sedimentary dolostones, unlike metamorphic ones, were found to have porous surface structures after dissolution. In addition, conventional (XPS) and synchrotron-based (HRXPS) X-ray Photoelectron Spectroscopy were used to study bonding environments on calcite and dolomite surfaces. Both samples are insulators, which is why neutralisation measures such as an electron flood gun and a conductive mask were used. Surface core level shifts of 0.7 ± 0.1 eV for the Ca 2p spectrum of calcite and 0.75 ± 0.05 eV for the Mg 2p and Ca 3s spectra of dolomite were obtained. Some satellite features of the Ca 2p, C 1s and O 1s spectra have been suggested to be bulk plasmons.
The origin of carbide bonds was suggested to be beam-assisted interaction with hydrocarbons found on the surface. The results presented in this thesis are of particular importance for choosing raw materials for wet Flue Gas Desulphurisation (FGD) and for the construction industry. Wet FGD benefits from high reactivity, whereas the construction industry can take advantage of the slow reactivity of the carbonate rocks often used in the facades of fine buildings. Information on chemical bonding environments may help to create more accurate models for the water-rock interactions of carbonates.
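A minimal sketch of fitting the first-order model mentioned above to titration data (synthetic data; the thesis's actual rate equations and pH-stat procedure are more involved):

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c_max, k):
    """First-order dissolution: the dissolved amount approaches c_max
    with rate constant k, c(t) = c_max * (1 - exp(-k t))."""
    return c_max * (1 - np.exp(-k * t))

# Synthetic acid-consumption data (mmol) from a stepwise titration.
t = np.linspace(0, 30, 16)                      # minutes
rng = np.random.default_rng(0)
c = first_order(t, 5.0, 0.2) + rng.normal(0, 0.05, t.size)

(c_max, k), _ = curve_fit(first_order, t, c, p0=(4.0, 0.1))
initial_rate = c_max * k    # slope at t = 0, cf. "initial reactivity"
print(f"c_max = {c_max:.2f} mmol, k = {k:.3f} 1/min, "
      f"initial rate = {initial_rate:.3f} mmol/min")
```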
Abstract:
Adenoviral vectors are currently the most widely used gene therapy vectors, but their inability to integrate into host chromosomal DNA shortens their transgene expression and limits their use in clinical trials. In this project, we initially planned to develop a technique to test the effect of the early region 1 (E1) on adenovirus integration by comparing the integration efficiencies between an E1-deleted adenoviral vector (SubE1) and an E1-containing vector (SubE3). However, we did not harvest any SubE3 virus, even though we repeated the transfection and successfully rescued the SubE1 virus (2/4 transfections generated viruses) and the positive control virus (6/6). The failure to rescue SubE3 could be caused by the instability of the genomic plasmid pFG173, as it showed frequent internal deletions while we were purifying it. Therefore, we developed techniques to test the effect of E1 on homologous recombination (HR), since the literature suggested that adenovirus integration is initiated by HR. We attempted to silence E1 in 293 cells by transfecting E1A/B-specific small interfering RNA (siRNA). However, no silenced phenotype was observed, even when we varied the concentrations of E1A/B siRNA (from 30 nM to 270 nM) and checked the silencing effects at different time points (48, 72, 96 h). One possible explanation would be that the E1A/B siRNA sequences are not potent enough to induce the silenced phenotype. For evaluating HR efficiencies, an HR assay system based on bacterial transformation was designed. We constructed two plasmids (designated pUC19-dl1 and pUC19-dl2) containing different defective lacZα cassettes (forming white colonies after transformation) that can generate a functional lacZα cassette (forming blue colonies) through HR after transfection into 293 cells. The HR efficiencies would be expressed as the percentage of blue colonies among all colonies. Unfortunately, after transformation of plasmid isolated from 293 cells, no colony was found, even at a transformation efficiency of 1.8x10^ colonies/µg pUC19, suggesting the sensitivity of this system was low. To enhance the sensitivity, PCR was used. We designed a set of primers that can only amplify the recombinant plasmid formed through HR. Therefore, the HR efficiencies among different treatments can be evaluated by the amplification results, and this system could be used to test the effect of the E1 region on adenovirus integration. In addition, to our knowledge there were no previous studies using PCR/real-time PCR to evaluate HR efficiency, so this system also provides a PCR-based method to carry out HR assays.
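As a hedged sketch of the two read-outs described, the blue/white colony ratio and a PCR-based relative comparison: the 2^-ddCt form below is a generic real-time PCR quantification, assumed here rather than taken from the thesis, and all Ct values are hypothetical.

```python
def hr_efficiency(blue, white):
    """HR efficiency as the fraction of recombinant (blue, lacZ-alpha
    restored) colonies among all transformants."""
    total = blue + white
    return blue / total if total else 0.0

def relative_hr_ddct(ct_rec, ct_ref, ct_rec_ctrl, ct_ref_ctrl):
    """Generic 2^-ddCt relative quantification: recombinant-specific
    amplification normalized to a reference amplicon, treatment vs.
    control (assumes ~100% PCR efficiency)."""
    dct_treat = ct_rec - ct_ref
    dct_ctrl = ct_rec_ctrl - ct_ref_ctrl
    return 2 ** -(dct_treat - dct_ctrl)

print(hr_efficiency(blue=3, white=997))          # colony-based read-out
print(relative_hr_ddct(27.0, 20.0, 29.5, 20.2))  # ~ fold change in HR
```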
Abstract:
This case study explored strategies and techniques to assist individuals with learning disabilities in their academic achievement. Of particular focus was how a literacy-based program, titled The Spring Reading Program, utilizes effective tactics and approaches that result in academic growth. The Spring Reading Program, offered by the Learning Disabilities Association of Niagara Region (LDANR) in partnership with John McNamara from Brock University, supports children with reading disabilities academically. In addition, the program helps children increase their confidence and motivation towards literacy. I began this study by outlining the importance of reading, followed by an exploration of what educators and researchers have demonstrated regarding effective literacy instruction for children with learning disabilities. I studied effective strategies and techniques in the Spring Reading Program by conducting a qualitative case study of the program. This case study subsequently presents, in depth, 4 specific strategies: hands-on activities, motivation, engagement, and one-on-one instruction. The effectiveness of each strategy is demonstrated through literature and examples from the Spring Reading Program.