940 results for Distance-based techniques
Abstract:
Three recombinant antigens of the Treponema pallidum Nichols strain were fused with GST, cloned and expressed in Escherichia coli, resulting in high levels of GST-rTp47 and GST-rTp17 expression; supplementation with arginine tRNA for the AGR codon was needed to obtain GST-rTp15 overexpression. Purified fusion protein yields were 1.9, 1.7 and 5.3 mg/l of cell culture for GST-rTp47, GST-rTp17 and GST-rTp15, respectively. The identities of the antigens obtained were confirmed by automated DNA sequencing using an ABI Prism 310 and by peptide mapping on a Finnigan LC/MS. These recombinant antigens were evaluated by immuno-slot blot techniques applied to 137 serum samples: from patients with a clinical and laboratory diagnosis of syphilis (61 samples), from healthy blood donors (50 samples), from individuals with sexually transmitted diseases other than syphilis (3 samples), and from individuals with other spirochetal diseases such as Lyme disease (20 samples) and leptospirosis (3 samples). The assay had a sensitivity of 95.1% (95% CI, 86.1 to 98.7%) and a specificity of 94.7% (95% CI, 87.0 to 98.7%); stronger reactivity was observed with fraction rTp17. The immunoreactivity results showed that immuno-slot blot techniques based on recombinant fusion antigens are suitable for use in diagnostic assays for syphilis.
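The reported accuracy figures follow from the sample counts given above. A minimal Python sketch, assuming exact (Clopper-Pearson) binomial intervals and the integer counts implied by the percentages (58/61 true positives among the syphilis sera, 72/76 true negatives among the 76 control sera); the abstract does not state which interval method was actually used:

```python
# Hedged sketch: recompute sensitivity/specificity and 95% CIs from the
# sample counts in the abstract. The exact (Clopper-Pearson) interval and
# the integer counts 58/61 and 72/76 are assumptions, not stated in the text.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact binomial confidence interval for k successes out of n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

for label, k, n in [("sensitivity", 58, 61), ("specificity", 72, 76)]:
    lo, hi = clopper_pearson(k, n)
    print(f"{label}: {k}/{n} = {k/n:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```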
Abstract:
The objective of this thesis is to develop and further generalize a differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest-prototype-vector-based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during its training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is here generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the optimal distance measure is selected from a predefined pool of alternative measures systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, the values of the possible control parameters related to the selected distance measure are also optimized. Specifically, a pool of alternative distance measures is first created, and the differential evolution algorithm is then applied to select the optimal distance measure, the one that yields the highest classification accuracy on the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated into a single total distance measure, which is applied to the final classification decisions. The actual classification process is still based on the nearest-prototype-vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class vectors, selects the optimal distance measures, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with this method confirm that the choice of distance measure is one of the most crucial factors for obtaining high classification accuracy. The results also demonstrate that it is possible to build a classifier that selects the optimal distance measure for a given data set automatically and systematically. After the optimal distance measures and their parameters are found, the resulting distances are aggregated into a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the earlier method, the differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
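A minimal sketch of the core idea: the prototype vectors, the distance-measure choice, and the free distance parameters are all encoded into one parameter vector that differential evolution optimizes against training accuracy. The two-member distance pool (Euclidean, and Minkowski with a free order p) and the encoding are illustrative assumptions, not the exact setup of the thesis:

```python
# Hedged sketch of a nearest-prototype DE classifier with a small distance pool.
import numpy as np
from scipy.optimize import differential_evolution

def decode(theta, n_classes, n_features):
    protos = theta[: n_classes * n_features].reshape(n_classes, n_features)
    choice = int(round(theta[-2]))   # 0 = Euclidean, 1 = Minkowski
    p = theta[-1]                    # free control parameter of the Minkowski distance
    return protos, choice, p

def predict(X, protos, choice, p):
    diff = np.abs(X[:, None, :] - protos[None, :, :])
    if choice == 0:
        d = np.sqrt((diff ** 2).sum(-1))
    else:
        d = (diff ** p).sum(-1) ** (1.0 / p)
    return d.argmin(axis=1)         # class of the nearest prototype vector

def fit(X, y, n_classes):
    n_features = X.shape[1]
    def loss(theta):                # training error = 1 - classification accuracy
        protos, choice, p = decode(theta, n_classes, n_features)
        return np.mean(predict(X, protos, choice, p) != y)
    bounds = [(X.min(), X.max())] * (n_classes * n_features) + [(0, 1), (1.0, 4.0)]
    res = differential_evolution(loss, bounds, seed=0, maxiter=200)
    return decode(res.x, n_classes, n_features)
```

In use, one would call fit(X, y, n_classes) on a training split and reuse the decoded prototypes and distance settings with predict on unseen samples.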
Abstract:
This thesis studies metamaterial-inspired mirrors which provide the most general control over the amplitude and phase of the reflected wavefront. The goal is to explore practical possibilities in designing fully reflective electromagnetic structures with full control over the reflection phase. The first part of the thesis describes a planar focusing metamirror with a focal distance less than the operating wavelength. Its practical applicability from the viewpoint of aberrations, when the incidence angle deviates from the normal, is verified numerically and experimentally. The results indicate that the proposed focusing metamirror can be efficiently employed in many different applications due to its advantages over conventional mirrors. In the second part of the thesis a new theoretical concept of reflecting metasurface operation is introduced, based on Huygens' principle. This concept, in contrast to known approaches, takes into account all the requirements of perfect metamirror operation. The theory shows a route to improve the previously proposed metamirrors by tilting the individual inclusions of the structure at a chosen angle from the normal. It is numerically tested, and the results demonstrate improvements over the previous design.
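One concrete element of the first part can be written in closed form: the local reflection phase a flat mirror must impose to focus a normally incident plane wave at focal distance F. A short sketch under that textbook assumption; the frequency and sub-wavelength focal distance below are illustrative and not taken from the thesis:

```python
# Hedged sketch: textbook phase profile for a flat focusing reflector.
import numpy as np

wavelength = 0.03            # e.g. 10 GHz in metres; illustrative only
F = 0.5 * wavelength         # sub-wavelength focal distance, as targeted above
x = np.linspace(-2 * wavelength, 2 * wavelength, 201)  # positions on the mirror

# Each point must cancel the extra path length to the focal point so that
# all reflected contributions arrive there in phase.
phi = (2 * np.pi / wavelength) * (np.sqrt(x**2 + F**2) - F)
phi_wrapped = np.mod(phi, 2 * np.pi)   # realizable local reflection phase
```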
Abstract:
The purpose of the present study was to compare the sensitivity and specificity of V3 enzyme immunoassay (solid phase EIA and EIA inhibition) and restriction fragment length polymorphism (RFLP) with the DNA sequencing "gold standard" to identify the Brazilian HIV-1 variants of subtype B and B"-GWGR. Peripheral blood mononuclear cells were collected from 61 HIV-1-infected individuals attending a clinic in São Paulo. Proviral DNA was amplified and sequentially cleaved with the Fok I restriction enzyme. Plasma samples were submitted to a V3-loop biotinylated synthetic peptide EIA. Direct partial DNA sequencing of the env gene was performed on all samples. Based on EIA results, the sensitivity for detecting B-GPGR was 70%, compared to 64% for the Brazilian variant B"-GWGR, while the specificity of B-GPGR detection was 85%, compared to 88% for GWGR. The assessment of RFLP revealed 68% sensitivity and 94% specificity for the B-GPGR strain, compared to 84% and 90% for the B"-GWGR variant. Moreover, direct DNA sequencing was able to detect different base sequences corresponding to amino acid sequences at the tip of the V3 loop in 22 patients. These results show a similar performance of V3 serology and RFLP in identifying the Brazilian variant GWGR. However, V3 peptide serology may give indeterminate results. Therefore, we suggest that V3 serology be used instead of DNA sequencing where resources are limited. Samples giving indeterminate results by V3 peptide serology should be analyzed by direct DNA sequencing to distinguish between B-GPGR and the Brazilian variant B"-GWGR.
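The distinction the sequencing "gold standard" resolves, GPGR versus GWGR at the tip of the V3 loop, amounts to translating the crown codons and inspecting the motif. A hypothetical sketch using Biopython; the helper function and its input are illustrative and not the study's actual pipeline:

```python
# Hedged sketch: classify a V3 crown as subtype B (GPGR) or the Brazilian
# variant B"-GWGR from translated env DNA. Illustrative only.
from Bio.Seq import Seq

def v3_crown_variant(env_dna: str) -> str:
    protein = str(Seq(env_dna).translate())
    if "GPGR" in protein:
        return "B-GPGR"
    if "GWGR" in protein:
        return 'B"-GWGR'
    return "indeterminate"  # candidate for direct DNA sequencing

# Illustrative codons for Gly-Trp-Gly-Arg at the V3 tip
print(v3_crown_variant("GGTTGGGGAAGA"))  # -> B"-GWGR
```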
Abstract:
In the present review, we describe a systematic study of the sulfated polysaccharides from marine invertebrates, which led to the discovery of a carbohydrate-based mechanism of sperm-egg recognition during sea urchin fertilization. We have described unique polymers present in these organisms, especially sulfated fucose-rich compounds found in the egg jelly coat of sea urchins. The polysaccharides have simple, linear structures consisting of repeating units of oligosaccharides. They differ among the various species of sea urchins in specific patterns of sulfation and/or position of the glycosidic linkage within their repeating units. These polysaccharides show species specificity in inducing the acrosome reaction in sea urchin sperm, providing a clear-cut example of a signal transduction event regulated by sulfated polysaccharides. This distinct carbohydrate-mediated mechanism of sperm-egg recognition coexists with the bindin-protein system. Possibly, the genes involved in the biosynthesis of these sulfated fucans did not evolve in concordance with evolutionary distance but underwent a dramatic change near the tip of the Strongylocentrotid tree. Overall, we established a direct causal link between the molecular structure of a sulfated polysaccharide and a cellular physiological event - the induction of the sperm acrosome reaction in sea urchins. Small structural changes modulate an entire system of sperm-egg recognition and species-specific fertilization in sea urchins. We demonstrated that sulfated polysaccharides - in addition to their known function in cell proliferation, development, coagulation, and viral infection - mediate fertilization, and respond to evolutionary mechanisms that lead to species diversity.
Abstract:
In the field of molecular biology, scientists adopted for decades a reductionist perspective in their inquiries, being predominantly concerned with the intricate mechanistic details of subcellular regulatory systems. However, integrative thinking had been applied at a smaller scale in molecular biology to understand the underlying processes of cellular behaviour for at least half a century. It was not until the genomic revolution at the end of the previous century that model building was required to account for systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by drastic limitations in our capability of predicting cellular behaviour as it emerges from system dynamics and system structures. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity. Modern biology produces a high volume of data, whose comprehension cannot even be attempted in the absence of computational support. Computational modelling hence bridges modern biology to computer science, enabling a number of assets that prove invaluable in the analysis of complex biological systems, such as a rigorous characterization of the system structure, simulation techniques, and perturbation analysis. Computational biomodels have grown considerably in size in recent years, with major contributions made towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating with whole-cell models, tissue-level models, organ models and full-scale patient models. The simulation and analysis of models of such complexity very often requires the integration of various sub-models, entwined at different levels of resolution and whose organization spans several levels of hierarchy. This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology. The thesis proposes a sound computational framework for the stepwise augmentation of a biomodel. One starts with an abstract, high-level representation of a biological phenomenon, which is materialised into an initial model that is validated against a set of existing data. Subsequently, the model is refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and the second for the ErbB signalling pathway. The thesis spans several formalisms used in computational systems biology, all inherently quantitative: reaction-network models, rule-based models and Petri net models, as well as a recent, intrinsically qualitative formalism: reaction systems. The choice of modelling formalism is, however, determined by the nature of the question the modeller aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the compilation of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
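A minimal sketch of the refinement step on a toy reaction network: species B of a basic model A -> B is refined into subtypes B1 and B2, with rate constants chosen so the refined model preserves the aggregate dynamics of the original (B1(t) + B2(t) = B(t)). The network, rates and weights are invented for illustration and are not one of the thesis's case studies:

```python
# Hedged sketch of quantitative model refinement on a toy reaction network.
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0                      # rate constant of A -> B
w1, w2 = 0.7, 0.3            # refinement weights, w1 + w2 = 1

def basic(t, y):             # y = [A, B]
    A, B = y
    return [-k * A, k * A]

def refined(t, y):           # y = [A, B1, B2]; B split into two subtypes
    A, B1, B2 = y
    return [-k * A, w1 * k * A, w2 * k * A]

t = np.linspace(0, 5, 50)
sol0 = solve_ivp(basic, (0, 5), [1.0, 0.0], t_eval=t, rtol=1e-8)
sol1 = solve_ivp(refined, (0, 5), [1.0, 0.0, 0.0], t_eval=t, rtol=1e-8)
# The refinement is correct: the subtypes sum to the original species.
assert np.allclose(sol0.y[1], sol1.y[1] + sol1.y[2], atol=1e-5)
```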
Abstract:
Increased awareness and evolving consumer habits have set more demanding standards for the quality and safety control of food products. The production of foodstuffs that fulfil these standards can be hampered by various low-molecular-weight contaminants, for example residues of veterinary antibiotics or mycotoxins. The extremely small size of these compounds has hindered the development of analytical methods suitable for routine use, and the methods currently in use require expensive instrumentation and qualified personnel to operate them. There is a need for new, cost-efficient and simple assay concepts that can be used for field testing and are capable of processing large sample quantities rapidly. Immunoassays have been considered the gold standard for such rapid on-site screening methods. The introduction of directed antibody engineering and in vitro display technologies has facilitated the development of novel antibody-based methods for the detection of low-molecular-weight food contaminants. The primary aim of this study was to generate and engineer antibodies against low-molecular-weight compounds found in various foodstuffs. The three antigen groups selected as targets of antibody development cause food safety and quality defects in a wide range of products: 1) fluoroquinolones, a family of synthetic broad-spectrum antibacterial drugs used to treat a wide range of human and animal infections; 2) deoxynivalenol, a type B trichothecene mycotoxin and a widely recognized problem for crops and animal feeds globally; and 3) skatole (3-methylindole), one of the two compounds responsible for boar taint, found in the meat of monogastric animals. This study describes the generation and engineering of antibodies with versatile binding properties against low-molecular-weight food contaminants, and the subsequent development of immunoassays for the detection of the respective compounds.
Abstract:
The project scope is to utilize the Six Sigma DMAIC approach and lean principles to improve production quality at the case company. Six Sigma tools and techniques are explored through a literature review and later used in the quality control phase. The focus is set on Pareto analysis to identify the most evident development areas in production. Materials that are not delivered to the customer, or that are damaged during transportation, comprise the biggest share of all customer feedback. The goal is set to reduce this feedback by 50%. Production observation showed that not only material shortages but also over-production occur daily. As a result, an initial picking list showing both purchased and own-production components was created; reducing over- and under-production and improving material marking are seen as the most competitive options for reaching the goal. Development of the picking list should continue to ensure that it can be used not only in the case study but also at industrial scale. The reduction of the missing-material category cannot be evaluated reliably sooner than in a few years, because it takes time to gather the needed statistical information.
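A minimal sketch of the Pareto analysis used to single out the most evident development areas: rank feedback categories by frequency and flag the "vital few" that cover about 80% of all feedback. The category names and counts below are invented for illustration:

```python
# Hedged sketch: Pareto ranking of customer feedback categories.
from collections import Counter

feedback = Counter({
    "material not delivered": 48,
    "damaged in transportation": 31,
    "wrong component picked": 12,
    "documentation error": 6,
    "other": 3,
})

total = sum(feedback.values())
cumulative = 0.0
for category, count in feedback.most_common():
    cumulative += count / total
    flag = " <- vital few" if cumulative <= 0.80 else ""
    print(f"{category:28s} {count:3d}  cum {cumulative:5.1%}{flag}")
```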
Abstract:
While a red-green-blue (RGB) image of the retina carries quite limited information, retinal multispectral images provide both spatial and spectral information, which could enhance the capability to explore eye-related problems in their early stages. In this thesis, two learning-based algorithms for reconstructing spectral retinal images from RGB images are developed in a two-step manner. First, related previous techniques are reviewed and studied. Then, the most suitable methods are enhanced and combined into new algorithms for the reconstruction of spectral retinal images. The proposed approaches are based on a radial basis function network that learns a mapping from tristimulus colour space to multispectral space. The resemblance between the reproduced spectral images and the originals is estimated using the spectral distance metrics spectral angle mapper, spectral correlation mapper, and spectral information divergence, which show promising results for the suggested algorithms.
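A minimal sketch of two of the named ingredients: a Gaussian radial basis function network mapping RGB triplets to multispectral vectors, fitted by least squares, and the spectral angle mapper used to score the reconstruction. Centre selection, kernel width and band count are assumptions for illustration:

```python
# Hedged sketch: RBF mapping from RGB to spectra, plus the SAM metric.
import numpy as np

def rbf_design(X, centres, sigma=0.1):
    """Gaussian RBF design matrix for inputs X (n,3) and centres (m,3)."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def fit_rbf(rgb, spectra, centres, sigma=0.1):
    """Least-squares output weights mapping RGB (n,3) to spectra (n,bands)."""
    Phi = rbf_design(rgb, centres, sigma)
    W, *_ = np.linalg.lstsq(Phi, spectra, rcond=None)
    return W

def predict_rbf(rgb, centres, W, sigma=0.1):
    return rbf_design(rgb, centres, sigma) @ W

def spectral_angle(a, b):
    """SAM: angle (radians) between measured and reconstructed spectra."""
    cos = (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))
    return np.arccos(np.clip(cos, -1.0, 1.0))
```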
Abstract:
Although local grape growers view bird depredation as a significant economic issue, the most recent research on the problem in the Niagara Peninsula is three decades old. Peer-reviewed publications on the subject are rare, and researchers have struggled to develop bird-damage assessment techniques useful for facilitating management programmes. I used a variation of Stevenson and Virgo's (1971) visual estimation procedure to quantify spatial and temporal trends in bird damage to grapes within single vineyard plots at two locations near St. Catharines, Ontario. I present a novel approach to managing the rank data from visual estimates, which is unprecedented in its sensitivity to spatial trends in bird damage, and I review its valid use in comparative statistical analysis. Spatial trends in 3 of 4 study plots confirmed a priori predictions about localisation in bird damage based on optimal foraging from a central location (staging area). Damage to grape clusters was: (1) greater near the edges of vineyard plots, decreasing with distance towards the centre; (2) greater in areas adjacent to staging areas for birds; and (3) vertically stratified, with upper-tier clusters sustaining more damage than lower-tier clusters. From a management perspective, this predictive approach provides vineyard owners with the ability to identify the portions of plots likely to be most susceptible to bird damage, and thus the opportunity to focus deterrent measures in these areas. Other management considerations at Henry of Pelham were: (1) wind damage to ice-wine Riesling and Vidal was much higher than bird damage; (2) plastic netting with narrow mesh provided more effective protection against birds than nylon netting with wider mesh; and (3) no trends in relative susceptibility of varietals by colour (red vs. green) were evident.
Abstract:
This investigation of the geochemistry and mineralogy of heavy metals in fine-grained (<63 μm) sediment of the Welland River was undertaken to: 1) describe metal dispersion patterns relative to a source, identify minerals forming and existing in the outfall region, and relate sediment particle size to chemistry; and 2) delineate sample handling and preparation, and evaluate, modify and develop analytical methods for heavy metal analysis of complex environmental samples. A joint project between Brock University and Geoscience Laboratories was initiated to test a contaminated site on the Welland River at the base of Atlas Specialty Steels Co. Methods were developed and utilized for particle size separation and two acid extraction techniques: 1) partial extraction; 2) total extraction. The mineralogical assessment identified calcite, dolomite, quartz and clays. These minerals are typical of the carbonate-shale basement rock of the Niagara Peninsula. Minerals such as mullite and ferrocolumbite were found in the outfall region. These are not typical of the local geology and are generally associated with industrial pollutants. Partial and total extraction techniques were used to characterize the sediments based on chemical distribution, elemental behaviour and analytical differences. The majority of elements were lower in concentration with the partial extraction technique, suggesting these elements are bound in an acid-extractable phase (exchangeable, organic and carbonate phases). The total extraction technique yielded higher elemental concentrations, taking difficult-to-dissolve oxides and silicates into solution. Geochemical analyses of grain size separates revealed that heavy metal (Co, Ni, V, Mn, Fe, Ba) concentrations did not increase with decreasing grain size. This is a function of the anthropogenic mill scale input into the river. The background elements (Sc, Y, Sr, Mg, Al and Ti) showed an increase in concentration towards the finest grain size, suggesting a direct relation to the local mineralogy and geology. Dispersion patterns of metals fall into two distinct categories: 1) the heavy metals (Co, Cu, Ni, Zn, V and Cr), and 2) the background elements (Be, Sc, Y, Sr, Al and Ti). The heavy metals show a marked increase in the outfall region, while the background elements show a significant decrease at the outfall. This pattern is attributed to a "dilution effect" of the natural sediments by the anthropogenic mill scale sediments. Multivariate statistical analysis and correlation coefficient matrix results clearly support these conclusions. These results indicate that the outfall region of the Welland River is highly contaminated with heavy metals from the industrialized area of Welland. A short distance downstream, metal concentrations return to baseline geochemical levels. It appears that contaminants rapidly come out of suspension and are deposited in close proximity to the source. Therefore, dredging the sediment from the river may cause resuspension of contaminated sediments, but may not distribute the sediment as far as initially anticipated.
Abstract:
Adenoviral vectors are currently the most widely used gene therapy vectors, but their inability to integrate into host chromosomal DNA shortens their transgene expression and limits their use in clinical trials. In this project, we initially planned to develop a technique to test the effect of the early region 1 (E1) on adenovirus integration by comparing integration efficiencies between an E1-deleted adenoviral vector (SubE1) and an E1-containing vector (SubE3). However, we did not harvest any SubE3 virus, even though we repeated the transfection and successfully rescued the SubE1 virus (2/4 transfections generated viruses) and a positive control virus (6/6). The failure to rescue SubE3 could be caused by instability of the genomic plasmid pFG173, as it suffered frequent internal deletions while we were purifying it. We therefore developed techniques to test the effect of E1 on homologous recombination (HR), since the literature suggests that adenovirus integration is initiated by HR. We attempted to silence E1 in 293 cells by transfecting E1A/B-specific small interfering RNA (siRNA). However, no silenced phenotype was observed, even though we varied the concentration of E1A/B siRNA (from 30 nM to 270 nM) and checked the silencing effects at different time points (48, 72 and 96 h). One possible explanation is that the E1A/B siRNA sequences are not potent enough to induce the silenced phenotype. For evaluating HR efficiencies, an HR assay system based on bacterial transformation was designed. We constructed two plasmids (designated pUC19-dl1 and pUC19-dl2) containing different defective lacZα cassettes (forming white colonies after transformation) that can generate a functional lacZα cassette (forming blue colonies) through HR after transfection into 293 cells. HR efficiencies would be expressed as the percentage of blue colonies among all colonies. Unfortunately, after transformation with plasmid isolated from 293 cells, no colonies were found, even at a transformation efficiency of 1.8x10^ colonies/µg pUC19, suggesting that the sensitivity of this system was low. To enhance the sensitivity, PCR was used. We designed a set of primers that can amplify only the recombinant plasmid formed through HR. The HR efficiencies among different treatments can therefore be evaluated from the amplification results, and this system could be used to test the effect of the E1 region on adenovirus integration. In addition, to our knowledge no previous studies have used PCR or real-time PCR to evaluate HR efficiency, so this system also provides a PCR-based method for carrying out HR assays.
Abstract:
This study assessed the effectiveness of a reciprocal teaching program as a method of teaching reading comprehension, using narrative text material in a typical grade seven classroom. In order to determine the effectiveness of the reciprocal teaching program, this method was compared to two other reading instruction approaches that, unlike reciprocal teaching, did not include social interaction components. Two intact grade seven classes, and a grade seven teacher, participated in this study. Students were appropriately assigned to three treatment groups by reading achievement level, as determined from a norm-referenced test. Training proceeded for a five-week intervention period during regularly scheduled English periods. Throughout the program, curriculum-based tests were administered. These tests were designed to assess comprehension in two distinct ways: character analysis components as they relate to narrative text, and strategy use components as they contribute to student understanding of narrative and expository text. Pre-, post-, and maintenance tests were administered to measure overall training effects. Moreover, during the intervention, training probes were administered in the last period of each week to evaluate treatment group performance. All curriculum-based tests were coded, and comparisons of pre-, post-, and maintenance tests and training probes were presented in graph form. Results showed that the reciprocal group achieved some improvement in reading comprehension scores on the strategy use component of the tests. No improvements were observed for the character analysis components of the curriculum-based tests or for the norm-referenced tests. At pre- and post-intervention, interviews requiring students to respond to questions that addressed metacomprehension awareness of study strategies were administered. The interviews were coded and comparisons were made between the two interviews. No significant improvements were observed regarding student awareness of the ten identified study strategies. This study indicated that reciprocal teaching is a viable approach that can be utilized to help students acquire more effective comprehension strategies. However, the maximum utility of the technique when administered to a population of grade seven students performing at average to above-average levels of reading achievement has yet to be determined. In order to explore this issue, the refinement of training materials and curriculum-based measurements needs to be explored. As well, this study revealed that reciprocal teaching placed heavier demands on the classroom teacher than the other reading instruction methods. This may suggest that innovative and intensive teacher training techniques are required before it is feasible to use this method in the classroom.
Abstract:
This study assessed the usefulness of a cognitive behavior modification (CBM) intervention package with mentally retarded students in overcoming learned helplessness and improving learning strategies. It also examined the feasibility of instructing teachers in the use of such a training program in a classroom setting. A modified single-subject design across individuals was employed, using two groups of three subjects. Three students from each of two segregated schools for the mentally retarded were selected using a teacher questionnaire and a pupil checklist identifying the most learned-helpless students enrolled there. Three additional learned helplessness assessments were conducted on each subject before and after the intervention in order to evaluate the usefulness of the program in alleviating learned helplessness. A classroom environment was created, with the three students from each school engaged in three twenty-minute work sessions a week with the experimenter and a tutor experimenter (TE) as instructors. Baseline measurements were established on seven targeted behaviors for each subject: task-relevant speech, task-irrelevant speech, speech denoting a positive evaluation of performance, speech denoting a negative evaluation of performance, proportion of time on task, non-verbal positive evaluation of performance, and non-verbal negative evaluation of performance. The intervention package combined a variety of CBM techniques, such as Meichenbaum's (1977) Stop, Look and Listen approach, role rehearsal, and feedback. During the intervention each subject met with his TE twice a week for an individual half-hour session and one joint twenty-minute session with all three students, the experimenter and one TE. Five weeks after the end of the experiment, one follow-up probe was conducted. All baseline, post-intervention and probe sessions were videotaped. The seven targeted behaviors were coded, and comparisons of baseline, post-intervention, and probe testing were presented in graph form. Results showed a reduction in learned helplessness in all subjects. Improvement was noted in each of the seven targeted behaviors for each of the six subjects. This study indicated that mentally retarded children can be taught to reduce learned helplessness with the aid of a CBM intervention package. It also showed that CBM is a viable approach to helping mentally retarded students acquire more effective learning strategies. Because the TEs (tutor experimenters) had no trouble learning and implementing this program, it was considered feasible for teachers to use similar methods in the classroom.
Abstract:
This study had three purposes related to the effective implementation and practice of computer-mediated online distance education (C-MODE) at the elementary level: (a) to identify a preliminary framework of criteria or guidelines for effective implementation and practice, (b) to identify areas of C-MODE for which criteria or guidelines of effectiveness have not yet been developed, and (c) to develop an implementation and practice criteria questionnaire based on a review of the distance education literature, and to use the questionnaire in an exploratory survey of elementary C-MODE practitioners. Using the survey instrument, the beliefs and attitudes of 16 elementary C-MODE practitioners about what constitutes effective implementation and practice principles were investigated. Respondents, who included both administrators and instructors, provided information about themselves and the program in which they worked. They rated 101 individual criteria statements on a 5-point Likert scale with the values: 1 (Strongly Disagree), 2 (Disagree), 3 (Neutral or Undecided), 4 (Agree), 5 (Strongly Agree). Respondents also provided qualitative data by commenting on the individual statements or suggesting other statements they considered important. Eighty-two different statements or guidelines related to the successful implementation and practice of computer-mediated online education at the elementary level were endorsed. Responses to a small number of statements differed significantly by gender and years of experience. A new area for investigation, namely the role of parents, which has received little attention in the online distance education literature, emerged from the findings. The study also identified a number of other areas within an elementary context where additional research is necessary. These included: (a) differences in the factors that determine learning in a distance education setting versus traditional settings, (b) elementary students' ability to function in an online setting, (c) the role and workload of instructors, (d) the importance of effective, timely communication with students and parents, and (e) the use of a variety of media.