959 results for field methods


Relevance: 30.00%

Publisher:

Abstract:

Background: Accelerating bone healing around dental implants can shorten the period between implant insertion and functional rehabilitation. Objective: This in vivo study evaluated the effect of a constant electromagnetic field (CEF) on bone healing around dental implants in dogs. Materials and methods: Eight dental implants were placed immediately after extraction of the first premolar and molar teeth in the mandibles of two male dogs and divided into experimental (CEF) and control groups. A CEF with a magnetic flux density of 0.8 mT, a pulse width of 25 μs, and a frequency of 1.5 MHz was applied to the implants for 20 min per day for 2 weeks. Result and conclusion: After qualitative histological analysis, a small quantity of newly formed bone was observed in the gap between the implant surface and the alveolar bone in both groups.

Relevance: 30.00%

Publisher:

Abstract:

In this work, we employ renormalization group methods to study the general behavior of field theories possessing anisotropic scaling in the spacetime variables. The Lorentz symmetry breaking that accompanies these models is either soft, if no higher spatial derivatives are present, or has a more complex structure if higher spatial derivatives are also included. Both situations are discussed in models containing only scalar fields and in models that also include fermions, as in a Yukawa-like model.
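Schematically, the anisotropic (Lifshitz-type) scaling and a representative z = 2 scalar action take the form below; this is the standard textbook setup, not an equation taken from the paper:

```latex
t \to b^{z}\, t , \qquad x^{i} \to b\, x^{i} \qquad (z \neq 1 \ \text{breaks Lorentz symmetry}),

S_{z=2} \;=\; \int \mathrm{d}t\, \mathrm{d}^{d}x \,\Big[ \tfrac{1}{2}\,\dot{\phi}^{2}
  \;-\; \tfrac{1}{2}\,(\Delta \phi)^{2}
  \;-\; \tfrac{c^{2}}{2}\, \partial_{i}\phi\, \partial_{i}\phi \;-\; V(\phi) \Big] .
```

Under the z = 2 scaling the (Δφ)² term is invariant, while the lower-derivative c² term is a relevant deformation that breaks the anisotropic scaling softly.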

Relevance: 30.00%

Publisher:

Abstract:

Background: The field cancerization concept in photodamaged patients suggests that the entire sun-exposed surface of the skin has an increased risk for the development of (pre)malignant lesions, mainly epithelial tumours. Topical photodynamic therapy (PDT) is a noninvasive therapeutic method for multiple actinic keratoses (AK) with excellent outcomes. Objectives: To evaluate the clinical, histological and immunohistochemical changes in human skin with field cancerization after multiple sessions of PDT with methyl aminolaevulinate (MAL). Methods: Twenty-six patients with photodamaged skin and multiple AK on the face received three consecutive sessions of MAL-PDT with red light (37 J cm⁻²), 1 month apart. Biopsies were taken from normal-appearing skin on the field-cancerized area before and 3 months after the last treatment session. Immunohistochemical staining was performed for TP-53, procollagen-I, metalloproteinase-1 (MMP-1) and tenascin-C (Tn-C). Results: All 26 patients completed the study. The global score for photodamage improved considerably in all patients (P < 0.001). The AK clearance rate was 89.5% at the end of the study. Two treatment sessions were as effective as three MAL-PDT sessions. A significant decrease in atypia grade and in the extent of keratinocyte atypia was observed histologically (P < 0.001). A significant increase in collagen deposition (P = 0.001) and improvement of solar elastosis (P = 0.002) were also noticed after PDT. However, immunohistochemistry showed only a trend toward decreased TP-53 expression (not significant), increased procollagen-I and MMP-1 expression (not significant), and an increased expression of Tn-C (P = 0.024). Conclusions: Clinical and histological improvement in field cancerization after multiple sessions of MAL-PDT was demonstrated. The decrease in the severity and extent of keratinocyte atypia, associated with a decreased expression of TP-53, suggests a reduced carcinogenic potential of the sun-damaged area. The significant increase in new collagen deposition and the reduction of solar elastosis explain the clinical improvement of photodamaged skin.

Relevance: 30.00%

Publisher:

Abstract:

The magnetic field in the local interstellar medium (ISM) provides a key indicator of the galactic environment of the Sun and influences the shape of the heliosphere. We have studied the interstellar magnetic field (ISMF) in the solar vicinity using polarized starlight for stars within 40 pc of the Sun and 90° of the heliosphere nose. In Frisch et al. (Paper I), we developed a method for determining the local ISMF direction by finding the best match to a group of interstellar polarization position angles obtained toward nearby stars, based on the assumption that the polarization is parallel to the ISMF. In this paper, we extend the analysis by utilizing weighted fits to the position angles and by including new observations acquired for this study. We find that the local ISMF is pointed toward the galactic coordinates l, b = 47° ± 20°, 25° ± 20°. This direction is close to the direction of the ISMF that shapes the heliosphere, l, b = 33° ± 4°, 55° ± 4°, as traced by the center of the "Ribbon" of energetic neutral atoms discovered by the Interstellar Boundary Explorer (IBEX) mission. Both the magnetic field direction and the kinematics of the local ISM are consistent with a scenario where the local ISM is a fragment of the Loop I superbubble. A nearby ordered component of the local ISMF has been identified in the region l ≈ 0°-80° and b ≈ 0°-30°, where PlanetPol data show a distance-dependent increase of polarization strength. The ordered component extends to within 8 pc of the Sun and implies a weak curvature in the nearby ISMF of ±0.25° pc⁻¹. This conclusion is conditioned on the small sample of stars available for defining this rotation. Variations from the ordered component suggest a turbulent component of ±23°.
The ordered component and standard relations between polarization, color excess, and H° column density predict a reasonable increase of N(H) with distance in the local ISM. The similarity of the ISMF directions traced by the polarizations, the IBEX Ribbon, and pulsars inside the Local Bubble in the third galactic quadrant suggests that the ISMF is relatively uniform over spatial scales of 8-200 pc and is more similar to interarm than to spiral-arm magnetic fields. The ISMF direction from the polarization data is also consistent with small-scale spatial asymmetries detected in GeV-TeV cosmic rays of galactic origin. The peculiar geometrical relation found earlier between the cosmic microwave background dipole moment, the heliosphere nose, and the ISMF direction is supported by this study. The interstellar radiation field at ~975 Å does not appear to play a role in grain alignment for the low-density ISM studied here.
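The position-angle fitting described above can be illustrated with a small grid search: given the stars' galactic coordinates and observed polarization position angles, find the field direction whose sky-projected orientation best matches them in a weighted least-squares sense. The geometry below (galactic Cartesian basis, position angle measured from +b toward +l) is a plausible convention for a sketch, not necessarily the one used in the paper.

```python
import numpy as np

def unit(l, b):
    """Unit vector for galactic longitude/latitude given in radians."""
    return np.array([np.cos(b) * np.cos(l),
                     np.cos(b) * np.sin(l),
                     np.sin(b)])

def local_basis(l, b):
    """Tangent-plane basis at a star: toward +b (north) and +l."""
    e_b = np.array([-np.sin(b) * np.cos(l), -np.sin(b) * np.sin(l), np.cos(b)])
    e_l = np.array([-np.sin(l), np.cos(l), 0.0])
    return e_b, e_l

def position_angle(field, l, b):
    """Position angle of the field projected on the plane of the sky at (l, b)."""
    s = unit(l, b)
    p = field - np.dot(field, s) * s          # remove line-of-sight component
    e_b, e_l = local_basis(l, b)
    return np.arctan2(np.dot(p, e_l), np.dot(p, e_b))

def fit_field_direction(stars, pa_obs, weights, n=73):
    """Grid search for the (l, b) direction minimizing the weighted sum of
    sin^2(PA_model - PA_obs); PAs are orientations, i.e. defined modulo pi,
    so the fit cannot distinguish a direction from its antipode."""
    best, best_cost = None, np.inf
    for l0 in np.linspace(0.0, 2.0 * np.pi, n):
        for b0 in np.linspace(-np.pi / 2, np.pi / 2, n):
            field = unit(l0, b0)
            cost = sum(w * np.sin(position_angle(field, l, b) - pa) ** 2
                       for (l, b), pa, w in zip(stars, pa_obs, weights))
            if cost < best_cost:
                best, best_cost = (l0, b0), cost
    return best
```

With synthetic position angles generated from a known field direction, the search recovers that direction (or its antipode) to within the grid resolution.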

Relevance: 30.00%

Publisher:

Abstract:

Background. The prevalence of early childhood caries (ECC) is high in developing countries; thus, sensitive methods for the early diagnosis of ECC are of prime importance for implementing appropriate preventive measures. Aim. To investigate the effects of adding early caries lesions (ECL) to the WHO threshold caries detection criteria on the prevalence of caries in primary teeth and on the epidemiological profile of the studied population. Design. In total, 351 3- to 4-year-old preschoolers participated in this cross-sectional study. Clinical exams were conducted by one calibrated examiner using the WHO and WHO + ECL criteria. During the exams, a mirror, a ball-ended probe, gauze, and an artificial light were used. The data were analysed by Wilcoxon and McNemar's tests (α = 0.05). Results. Good intra-examiner Kappa values at the tooth/surface level were obtained for the WHO and WHO + ECL criteria (0.93/0.87 and 0.75/0.78, respectively). The dmfs scores were significantly higher (P < 0.05) when the WHO + ECL criteria were used. ECL were the predominant caries lesions in the majority of teeth. Conclusions. The results strongly suggest that the WHO + ECL diagnostic method could be used to identify ECL in young children under field conditions, increasing the reported prevalence and refining the classification of caries activity, and providing valuable information for the early establishment of preventive measures.
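The paired comparison of the two criteria can be sketched with an exact McNemar test on the discordant pairs (children classified as caries-free by WHO but carious by WHO + ECL, and vice versa). The function below is a generic implementation of that test, not the study's analysis script.

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test on discordant-pair counts b and c.

    Under H0 the b discordant pairs going one way are Binomial(b+c, 0.5),
    so the p-value is the doubled binomial tail of the smaller count."""
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)
```

For example, if 40 children are carious only under WHO + ECL and 2 only under WHO, the criteria differ significantly; with counts 5 vs. 4 they do not.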

Relevance: 30.00%

Publisher:

Abstract:

Context. CoRoT is a pioneering space mission whose primary goals are stellar seismology and the search for extrasolar planets. Its surveys of large stellar fields generate numerous planetary candidates whose light curves have transit-like features. An extensive analytical and observational follow-up effort is undertaken to classify these candidates. Aims. We present the list of planetary transit candidates from the CoRoT LRa01 star field in the Monoceros constellation, toward the Galactic anti-center direction. The CoRoT observations of LRa01 lasted from 24 October 2007 to 3 March 2008. Methods. We acquired and analyzed 7470 chromatic and 3938 monochromatic light curves. Instrumental noise and stellar variability were treated with several filtering tools by different teams from the CoRoT community. Different transit search algorithms were applied to the light curves. Results. Fifty-one stars were classified as planetary transit candidates in LRa01. Thirty-seven (i.e., 73% of all candidates) are "good" planetary candidates based on photometric analysis only. Thirty-two (i.e., 87% of the "good" candidates) have been followed up. At the time of writing, twenty-two cases had been solved and five planets had been discovered: three transiting hot Jupiters (CoRoT-5b, CoRoT-12b, and CoRoT-21b), the first terrestrial transiting planet (CoRoT-7b), and another planet in the same system (CoRoT-7c, detected by radial-velocity survey only). Evidence of another non-transiting planet in the CoRoT-7 system, namely CoRoT-7d, was recently found as well.
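As a rough illustration of the transit search algorithms mentioned above, the sketch below implements a crude box-matched search: phase-fold the light curve at trial periods and look for the box-shaped dip with the highest signal-to-noise proxy. It is a toy version of BLS-style searches, not CoRoT's detection pipeline.

```python
import numpy as np

def box_search(t, flux, periods, n_phase=50, q=0.05):
    """Crude box-transit search.

    For each trial period, fold the light curve and scan phase windows of
    fractional width q; score each window by depth * sqrt(N_in) / sigma.
    Returns (best score, best period, best phase)."""
    best = (0.0, None, None)
    sigma = np.std(flux)
    for p in periods:
        phase = (t % p) / p
        for p0 in np.linspace(0.0, 1.0 - q, n_phase):
            inbox = (phase >= p0) & (phase < p0 + q)
            if inbox.sum() < 3:
                continue
            depth = flux[~inbox].mean() - flux[inbox].mean()
            snr = depth * np.sqrt(inbox.sum()) / sigma
            if snr > best[0]:
                best = (snr, p, p0)
    return best
```

On a synthetic light curve with a periodic 1% dip, the search recovers the injected period from the trial grid.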

Relevance: 30.00%

Publisher:

Abstract:

Purpose: To investigate the rate of visual field and optic disc change in patients with distinct patterns of glaucomatous optic disc damage. Design: Prospective longitudinal study. Participants: A total of 131 patients with open-angle glaucoma with focal (n = 45), diffuse (n = 42), and sclerotic (n = 44) optic disc damage. Methods: Patients were examined every 4 months with standard automated perimetry (SAP, SITA Standard, 24-2 test, Humphrey Field Analyzer, Carl Zeiss Meditec, Dublin, CA) and confocal scanning laser tomography (CSLT, Heidelberg Retina Tomograph, Heidelberg Engineering GmbH, Heidelberg, Germany) for a period of 4 years. During this time, patients were treated according to a predefined protocol to achieve a target intraocular pressure (IOP). Rates of change were estimated by robust linear regression of visual field mean deviation (MD) and global optic disc neuroretinal rim area with follow-up time. Main Outcome Measures: Rates of change in MD and rim area. Results: Rates of visual field change in patients with focal optic disc damage (mean −0.34, standard deviation [SD] 0.69 dB/year) were faster than in patients with sclerotic (mean −0.14, SD 0.77 dB/year) and diffuse (mean +0.01, SD 0.37 dB/year) optic disc damage (P = 0.003, Kruskal-Wallis). Rates of optic disc change in patients with focal optic disc damage (mean −11.70, SD 25.5 ×10⁻³ mm²/year) were faster than in patients with diffuse (mean −9.16, SD 14.9 ×10⁻³ mm²/year) and sclerotic (mean −0.45, SD 20.6 ×10⁻³ mm²/year) optic disc damage, although the differences were not statistically significant (P = 0.11). Absolute IOP reduction from untreated levels was similar among the groups (P = 0.59). Conclusions: Patients with focal optic disc damage had faster rates of visual field change and a tendency toward faster rates of optic disc deterioration when compared with patients with diffuse and sclerotic optic disc damage, despite similar IOP reductions during follow-up.
Financial Disclosure(s): Proprietary or commercial disclosure may be found after the references. Ophthalmology 2012; 119: 294-303. © 2012 by the American Academy of Ophthalmology.
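The rates above come from robust linear regression of MD against follow-up time. As one illustrative robust estimator (the abstract does not specify which was used), a Theil-Sen fit takes the median of all pairwise slopes, which makes a single aberrant visit barely affect the estimated dB/year rate:

```python
import numpy as np

def theil_sen(t, y):
    """Median-of-pairwise-slopes estimate of slope and intercept.

    A robust alternative to ordinary least squares: up to ~29% of
    points can be outliers without breaking the slope estimate."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i in range(len(t)) for j in range(i + 1, len(t))]
    slope = float(np.median(slopes))
    intercept = float(np.median(np.asarray(y) - slope * np.asarray(t)))
    return slope, intercept
```

With 13 four-monthly visits over 4 years and one grossly outlying MD value, the fitted rate still matches the underlying trend.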

Relevance: 30.00%

Publisher:

Abstract:

A specific separated-local-field NMR experiment, dubbed Dipolar-Chemical-Shift Correlation (DIPSHIFT), is frequently used to study molecular motions by probing reorientations through the changes in the XH dipolar coupling and T2. In systems where the coupling is weak or the reorientation angle is small, a recoupled variant of the DIPSHIFT experiment is applied, in which the effective dipolar coupling is amplified by a REDOR-like π-pulse train. However, a previously described constant-time variant of this experiment is not sensitive to the motion-induced T2 effect, which precludes the observation of motions over a wide range of rates, from hundreds of Hz to around a MHz. We present a DIPSHIFT implementation that amplifies the dipolar couplings and is still sensitive to T2 effects. Spin-dynamics simulations, analytical calculations and experiments demonstrate the sensitivity of the technique to molecular motions and suggest the best experimental conditions for avoiding imperfections. Furthermore, an in-depth theoretical analysis of the interplay of REDOR-like recoupling and proton decoupling, based on Average Hamiltonian Theory, explains the origin of many artifacts found in literature data. © 2012 Elsevier Inc. All rights reserved.
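The effect of motional averaging on the dipolar oscillation that DIPSHIFT probes can be illustrated with a powder-averaged dephasing curve for an axially averaged coupling d_eff = S · d_rigid. This is only a static-powder, fast-motion-limit sketch (no MAS, no pulse sequence, no T2), and the 22 kHz rigid coupling is an assumed illustrative value:

```python
import numpy as np

def dipshift_curve(d_rigid, order_param, t1, n_powder=2000, seed=0):
    """Powder-averaged dipolar dephasing curve <cos(omega(theta) * t1)>.

    Fast molecular motion scales the rigid coupling by the order
    parameter S, so a mobile segment (small S) dephases more slowly."""
    rng = np.random.default_rng(seed)
    cos_theta = rng.uniform(-1.0, 1.0, n_powder)   # isotropic powder
    p2 = 0.5 * (3.0 * cos_theta ** 2 - 1.0)        # second Legendre polynomial
    d_eff = order_param * d_rigid
    phase = 2.0 * np.pi * d_eff * p2[:, None] * t1[None, :]
    return np.mean(np.cos(phase), axis=0)
```

Comparing S = 1 (rigid) with S = 0.3 (mobile) at the same evolution time shows the slower apparent dephasing that the experiment exploits.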

Relevance: 30.00%

Publisher:

Abstract:

Background: Accurate malaria diagnosis is mandatory for the treatment and management of severe cases. Moreover, individuals with asymptomatic malaria are not usually screened by health care facilities, which further complicates disease control efforts. The present study compared the performances of a malaria rapid diagnostic test (RDT), the thick blood smear method and nested PCR for the diagnosis of symptomatic malaria in the Brazilian Amazon. In addition, an innovative computational approach was tested for the diagnosis of asymptomatic malaria. Methods: The study was divided into two parts. In the first part, passive case detection was performed in 311 individuals with malaria-related symptoms from a recently urbanized community in the Brazilian Amazon. A cross-sectional investigation compared the diagnostic performance of the RDT Optimal-IT, nested PCR and light microscopy. The second part of the study involved active case detection of asymptomatic malaria in 380 individuals from riverine communities in Rondônia, Brazil. The performances of microscopy, nested PCR and an expert computational system based on artificial neural networks (MalDANN) using epidemiological data were compared. Results: Nested PCR proved to be the gold standard for diagnosis of both symptomatic and asymptomatic malaria because it detected the largest number of cases and presented the maximum specificity. Surprisingly, the RDT was superior to microscopy in the diagnosis of cases with low parasitaemia. Nevertheless, the RDT could not discriminate the Plasmodium species in 12 cases of mixed infections (Plasmodium vivax + Plasmodium falciparum). Moreover, microscopy performed poorly in the detection of asymptomatic cases (61.25% of correct diagnoses). The MalDANN system using epidemiological data alone performed worse than light microscopy (56% of correct diagnoses).
However, when information on plasma levels of interleukin-10 and interferon-gamma was included as input, the MalDANN performance increased markedly (80% correct diagnoses). Conclusions: An RDT for malaria diagnosis may find promising use in the Brazilian Amazon as part of a rational diagnostic approach. Despite the low performance of the MalDANN test using solely epidemiological data, an approach based on neural networks may be feasible in cases where simpler methods for discriminating individuals below and above threshold cytokine levels are available.
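A minimal stand-in for a MalDANN-style classifier is a small feed-forward neural network mapping per-patient features (for example, epidemiological variables plus IL-10 and IFN-gamma levels) to a malaria probability. The architecture and training details below are illustrative assumptions, not the published model:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.5, epochs=2000, seed=0):
    """One-hidden-layer MLP (sigmoid units, cross-entropy loss),
    trained by full-batch gradient descent. Returns a predict function."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden);      b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)          # hidden activations
        p = sig(H @ W2 + b2)          # predicted probability of malaria
        dz2 = (p - y) / n             # cross-entropy gradient wrt output logit
        dW2 = H.T @ dz2; db2 = dz2.sum()
        dH = np.outer(dz2, W2) * H * (1.0 - H)
        dW1 = X.T @ dH; db1 = dH.sum(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return lambda Xq: sig(sig(Xq @ W1 + b1) @ W2 + b2) > 0.5
```

On clearly separated synthetic feature vectors the network reaches high training accuracy, mirroring the jump the study saw once informative cytokine features were added.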

Relevance: 30.00%

Publisher:

Abstract:

Hermite interpolation is increasingly proving to be a powerful numerical solution tool, as applied to different kinds of second-order boundary value problems. In this work we present two Hermite finite element methods to solve viscous incompressible flow problems, in both two- and three-dimensional space. In the two-dimensional case we use the Zienkiewicz triangle to represent the velocity field, and in the three-dimensional case an extension of this element to tetrahedra, still called a Zienkiewicz element. Taking the Stokes system as a model, the pressure is approximated with continuous functions, either piecewise linear or piecewise quadratic, according to the version of the Zienkiewicz element in use, that is, with either incomplete or complete cubics. The methods employ either the standard Galerkin formulation or the Petrov-Galerkin formulation first proposed in Hughes et al. (1986) [18], based on the addition of a balance-of-force term. A priori error analyses point to optimal convergence rates for the PG approach, and for the Galerkin formulation too, at least in some particular cases. From the point of view of both accuracy and the global number of degrees of freedom, the new methods are shown to have a favorable cost-benefit ratio compared with velocity Lagrange finite elements of the same order, especially if the Galerkin approach is employed.
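The defining property of Hermite elements, matching derivative as well as value degrees of freedom, is easiest to see in one dimension. The cubic Hermite interpolant below is the 1-D analogue of the value-plus-gradient nodal unknowns used by Zienkiewicz-type elements (a didactic sketch, not the paper's 2-D/3-D construction):

```python
def hermite_cubic(x0, x1, f0, f1, df0, df1):
    """Cubic Hermite interpolant on [x0, x1] matching function values
    (f0, f1) and first derivatives (df0, df1) at both endpoints."""
    h = x1 - x0
    def p(x):
        s = (x - x0) / h                      # map to reference interval [0, 1]
        h00 = (1.0 + 2.0 * s) * (1.0 - s) ** 2   # value basis at x0
        h10 = s * (1.0 - s) ** 2                 # derivative basis at x0
        h01 = s ** 2 * (3.0 - 2.0 * s)           # value basis at x1
        h11 = s ** 2 * (s - 1.0)                 # derivative basis at x1
        return h00 * f0 + h * h10 * df0 + h01 * f1 + h * h11 * df1
    return p
```

Because the basis spans all cubics, the interpolant reproduces f(x) = x³ exactly from its endpoint values and derivatives.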

Relevance: 30.00%

Publisher:

Abstract:

Human movement analysis (HMA) aims to measure the ability of a subject to stand or to walk. In the field of HMA, tests are performed daily in research laboratories, hospitals and clinics, with the aim of diagnosing a disease, distinguishing between disease entities, monitoring the progress of a treatment and predicting the outcome of an intervention [Brand and Crowninshield, 1981; Brand, 1987; Baker, 2006]. To achieve these purposes, clinicians and researchers use measurement devices such as force platforms, stereophotogrammetric systems, accelerometers, baropodometric insoles, etc. This thesis focuses on the force platform (FP) and in particular on the quality assessment of FP data. The principal objective of this work was the design and experimental validation of a portable system for the in situ calibration of FPs. The thesis is structured as follows:

Chapter 1. Description of the physical principles underlying the functioning of an FP: how these principles are used to create force transducers, such as strain gauges and piezoelectric transducers. Then, a description of the two categories of FPs, three- and six-component, of signal acquisition (hardware structure), and of signal calibration. Finally, a brief description of the use of FPs in HMA, for balance or gait analysis.

Chapter 2. Description of inverse dynamics, the most common method used in the field of HMA. This method uses the signals measured by an FP to estimate kinetic quantities, such as joint forces and moments. These variables cannot be measured directly without very invasive techniques; consequently, they can only be estimated using indirect techniques such as inverse dynamics. Finally, a brief description of the sources of error present in gait analysis.

Chapter 3. State of the art in FP calibration. The selected literature is divided into sections, each describing: systems for the periodic control of FP accuracy; systems for error reduction in FP signals; and systems and procedures for the construction of an FP. In particular, a calibration system designed by our group, based on the theoretical method proposed by ?, is described in detail. This system was the "starting point" for the new system presented in this thesis.

Chapter 4. Description of the new system, divided into its parts: 1) the algorithm; 2) the device; and 3) the calibration procedure for the correct performance of the calibration process. The characteristics of the algorithm were optimized by a simulation approach, and the results are presented here. In addition, the different versions of the device are described.

Chapter 5. Experimental validation of the new system, achieved by testing it on 4 commercial FPs. The effectiveness of the calibration was verified by measuring, before and after calibration, the accuracy of the FPs in measuring the center of pressure of an applied force. The new system can estimate local and global calibration matrices; from these, the non-linearity of the FPs was quantified and locally compensated. Furthermore, a non-linear calibration is proposed. This calibration compensates for the non-linear effect in FP functioning due to the bending of its upper plate. The experimental results are presented.

Chapter 6. Influence of FP calibration on the estimation of kinetic quantities with the inverse dynamics approach.

Chapter 7. The conclusions of this thesis: the need for calibration of FPs and the consequent enhancement of kinetic data quality.

Appendix: Calibration of the load cell (LC) used in the presented system. Different calibration set-ups of a 3D force transducer are presented, and the optimal set-up is proposed, with particular attention to the compensation of non-linearities. The optimal set-up is verified by experimental results.
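At its simplest, estimating a global calibration matrix reduces to a linear least-squares problem: given n ≥ 6 known applied loads and the corresponding raw six-component FP outputs, solve for the 6×6 matrix that maps one to the other. This generic sketch is not the thesis's algorithm (which also handles local and non-linear effects):

```python
import numpy as np

def estimate_calibration_matrix(raw, reference):
    """Least-squares estimate of a global 6x6 calibration matrix C such
    that reference ≈ raw @ C, where raw and reference are n x 6 arrays of
    uncalibrated FP outputs and known applied loads (n >= 6)."""
    C, *_ = np.linalg.lstsq(raw, reference, rcond=None)
    return C
```

If the raw outputs are an exact linear distortion of the applied loads, the estimate recovers the distortion matrix exactly.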

Relevance: 30.00%

Publisher:

Abstract:

Motivation: A current issue of great interest, from both a theoretical and an applicative perspective, is the analysis of biological sequences to disclose the information that they encode. The development of new technologies for genome sequencing in recent years opened new fundamental problems, since huge amounts of biological data still await interpretation. Indeed, sequencing is only the first step of the genome annotation process, which consists in the assignment of biological information to each sequence. Hence, given the large amount of available data, in silico methods have become useful and necessary for extracting relevant information from sequences. The availability of data from Genome Projects gave rise to new strategies for tackling the basic problems of computational biology, such as the determination of the three-dimensional structures of proteins, their biological function and their reciprocal interactions. Results: The aim of this work has been the implementation of predictive methods that allow the extraction of information on the properties of genomes and proteins starting from the nucleotide and amino acid sequences, by taking advantage of the information provided by the comparison of genome sequences from different species. In the first part of the work, a comprehensive large-scale genome comparison of 599 organisms is described. 2.6 million sequences from 551 prokaryotic and 48 eukaryotic genomes were aligned and clustered on the basis of their sequence identity. This procedure led to the identification of classes of proteins that are peculiar to the different groups of organisms. Moreover, the adopted similarity threshold produced clusters that are homogeneous from a structural point of view and that can be used for the structural annotation of uncharacterized sequences.
The second part of the work focuses on the characterization of thermostable proteins and on the development of tools able to predict the thermostability of a protein starting from its sequence. By means of Principal Component Analysis, the codon composition of a non-redundant database comprising 116 prokaryotic genomes was analyzed, and it was shown that a cross-genomic approach allows the extraction of common determinants of thermostability at the genome level, leading to an overall accuracy of 95% in discriminating thermophilic coding sequences. This result outperforms those obtained in previous studies. Moreover, we investigated the effect of multiple mutations on protein thermostability. This issue is of great importance in the field of protein engineering, since thermostable proteins are generally more suitable than their mesostable counterparts for technological applications. A Support Vector Machine-based method was trained to predict whether a set of mutations can enhance the thermostability of a given protein sequence. The developed predictor achieves 88% accuracy.
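A toy version of such a sequence-based thermostability predictor: represent each protein by its amino-acid composition and train a linear SVM with the Pegasos sub-gradient method. The feature choice, the linear kernel, and the "thermophilic sequences enriched in E/K" pattern used in the usage example are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dimensional amino-acid composition vector of a protein sequence."""
    v = np.array([seq.count(a) for a in AA], dtype=float)
    return v / max(len(seq), 1)

def train_linear_svm(X, y, lam=1e-3, epochs=200, seed=0):
    """Linear SVM trained with the Pegasos sub-gradient method.

    X: n x d feature matrix; y: labels in {-1, +1}. Returns (w, b)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d); b = 0.0; t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (X[i] @ w + b)
            w *= (1.0 - eta * lam)            # regularization shrinkage
            if margin < 1.0:                  # hinge-loss sub-gradient step
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b
```

Trained on synthetic "thermophilic" sequences biased toward charged residues versus uniformly random "mesophilic" ones, the classifier separates the two groups well above chance.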

Relevance: 30.00%

Publisher:

Abstract:

The vast majority of known proteins have not yet been experimentally characterized, and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances related to the specific type of amino acid pair, which are encoded in the so-called contact map. An interesting new way of analyzing those structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps.
Provided that residue contacts are known for a protein sequence, the major features of its 3D structure could be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point toward the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function. Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of annotated proteins in the Homo sapiens genome have been experimentally characterized.
A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in the assignment of sequences to a specific group of functionally related sequences that have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates on the length of protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, using the information available in current databases of molecular functions and structures.
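The clustering step behind "inheritance through homology" can be sketched as single-linkage clustering over pairs of sequences that pass both thresholds. The union-find implementation below is generic, and the identity/coverage cutoff values are placeholders, not the ones adopted in the study:

```python
def cluster_by_identity(pairs, n, min_id=0.9, min_cov=0.9):
    """Single-linkage clustering of n sequences from pairwise alignment
    statistics. Each pair is (i, j, identity, coverage); two sequences
    join the same cluster only if both thresholds are met."""
    parent = list(range(n))
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra
    for i, j, ident, cov in pairs:
        if ident >= min_id and cov >= min_cov:
            union(i, j)
    clusters = {}
    for k in range(n):
        clusters.setdefault(find(k), []).append(k)
    return list(clusters.values())
```

Note how a high-identity but low-coverage hit (e.g., a shared single domain in a multi-domain protein) does not merge two clusters, which is precisely the point of the double constraint.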

Relevance: 30.00%

Publisher:

Abstract:

Nano(bio)science and nano(bio)technology attract tremendous and growing interest in both academia and industry. They are developing rapidly on many fronts, such as genomics, proteomics, systems biology, and medical applications. However, the lack of characterization tools for nano(bio)systems is currently considered a major limiting factor to the final establishment of nano(bio)technologies. Flow Field-Flow Fractionation (FlFFF) is a separation technique that is definitely emerging in the bioanalytical field, and the number of applications to nano(bio)analytes such as high-molar-mass proteins and protein complexes, sub-cellular units, viruses, and functionalized nanoparticles is constantly increasing. This can be ascribed to the intrinsic advantages of FlFFF for the separation of nano(bio)analytes. FlFFF is ideally suited to separate particles over a broad size range (1 nm-1 μm) according to their hydrodynamic radius (rh). The fractionation is carried out in an empty channel by a flow stream of a mobile phase of any composition. For these reasons, fractionation proceeds without surface interaction of the analyte with packing or gel media, and there is no stationary phase able to induce mechanical or shear stress on nanosized analytes, which are therefore kept in their native state. Characterization of nano(bio)analytes is made possible after fractionation by interfacing the FlFFF system with detection techniques for morphological, optical or mass characterization. For instance, coupling FlFFF with multi-angle light scattering (MALS) detection allows for absolute molecular weight and size determination, and mass spectrometry has made FlFFF enter the field of proteomics. The potential of coupling FlFFF with multi-detection systems is discussed in the first section of this dissertation.
The second and third sections are dedicated to new methods that have been developed for the analysis and characterization of different samples of interest in the fields of diagnostics, pharmaceutics, and nanomedicine. The second section focuses on biological samples such as protein complexes and protein aggregates. In particular, it focuses on FlFFF methods developed to give new insights into: a) the chemical composition and morphological features of blood serum lipoprotein classes, b) the time-dependent aggregation pattern of the amyloid protein Aβ1-42, and c) the aggregation state of antibody therapeutics in their formulation buffers. The third section is dedicated to the analysis and characterization of structured nanoparticles designed for nanomedicine applications. The discussed results indicate that FlFFF with on-line MALS and fluorescence detection (FD) may become an unparalleled methodology for the analysis and characterization of new, structured, fluorescent nanomaterials.
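The size information in FlFFF comes from retention theory: in the high-retention limit, t_r ≈ V̇c·w²/(6·D·V̇out), and the Stokes-Einstein relation D = kT/(6πη·r_h) converts the measured diffusion coefficient into a hydrodynamic radius. The helper below simply chains these two textbook relations; the channel thickness and flow values in the test are illustrative, not instrument defaults:

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(t_r, w, v_cross, v_out, temp=298.15, eta=8.9e-4):
    """Estimate r_h (m) from FlFFF retention time t_r (s).

    Uses the high-retention approximation t_r ≈ Vc*w^2 / (6*D*Vout),
    where w is the channel thickness (m) and Vc/Vout are the cross-flow
    and outlet flow rates, then Stokes-Einstein with viscosity eta (Pa s)."""
    D = v_cross * w ** 2 / (6.0 * v_out * t_r)      # diffusion coefficient, m^2/s
    return KB * temp / (6.0 * np.pi * eta * D)
```

A round trip (radius → retention time → radius) recovers the input, which is a quick sanity check on the algebra.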

Relevância:

30.00%

Publicador:

Resumo:

The subject of this Ph.D. research thesis is the development and application of multiplexed analytical methods based on bioluminescent whole-cell biosensors. One of the main goals of analytical chemistry is multianalyte testing, in which two or more analytes are measured simultaneously in a single assay. The advantages of multianalyte testing are work simplification, high throughput, and a reduction in the overall cost per test. The availability of multiplexed portable analytical systems is of particular interest for on-field analysis of clinical, environmental, or food samples as well as for the drug-discovery process. To allow highly sensitive and selective analysis, these devices should combine biospecific molecular recognition with ultrasensitive detection systems. To address the current need for rapid, highly sensitive, and inexpensive devices able to extract more data from each sample, genetically engineered whole-cell biosensors serving as biospecific recognition elements were combined with ultrasensitive bioluminescence detection techniques. Genetically engineered cell-based sensing systems were obtained by introducing into bacterial, yeast, or mammalian cells a vector expressing a reporter protein whose expression is controlled by regulatory proteins and promoter sequences. The regulatory protein is able to recognize the presence of the analyte (e.g., compounds with hormone-like activity, heavy metals…) and to consequently activate the expression of the reporter protein, which can be readily measured and directly related to the bioavailable concentration of the analyte in the sample. Bioluminescence represents the ideal detection principle for miniaturized analytical devices and multiplexed assays thanks to its high detectability in small sample volumes, which allows accurate signal localization and quantification.
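The relationship between reporter signal and bioavailable analyte concentration described above is commonly calibrated with a four-parameter logistic (Hill) curve; the abstract does not specify the calibration model used in the thesis, so the function below is an illustrative sketch with hypothetical parameter values:

```python
def hill_response(conc, bottom=0.0, top=1.0, ec50=1e-8, n=1.0):
    """Four-parameter logistic (Hill) model: normalized reporter
    signal as a function of bioavailable analyte concentration
    (mol/L). Parameter values here are hypothetical examples."""
    return bottom + (top - bottom) / (1 + (ec50 / conc) ** n)

# At the EC50 the normalized signal is halfway between bottom and top.
print(hill_response(1e-8))  # -> 0.5
# Well above the EC50 the response saturates toward `top`.
print(round(hill_response(1e-6), 2))
```

Fitting such a curve to measured luminescence at known concentrations yields the calibration needed to quantify the analyte in unknown samples.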
The first chapter of this dissertation discusses the development of improved bioluminescent proteins emitting at different wavelengths, with increased thermostability, improved emission decay kinetics, and better spectral resolution. The second chapter is mainly focused on the use of these proteins in the development of whole-cell-based assays with improved analytical performance. The main drawback of whole-cell biosensors is the high variability of their analyte-specific response, mainly caused by variations in cell viability due to nonspecific effects of the sample matrix; an additional bioluminescent reporter was therefore introduced to correct the analytical response, thus increasing the robustness of the bioassays. The feasibility of combining two or more bioluminescent proteins to obtain biosensors with internal signal correction, or for the simultaneous detection of multiple analytes, was demonstrated by developing a dual-reporter yeast-based biosensor for androgenic activity measurement and a triple-reporter mammalian cell-based biosensor for the simultaneous monitoring of the activation of two CYP450 enzymes involved in cholesterol degradation, using two spectrally resolved intracellular luciferases and a secreted luciferase as a control for cell viability. The third chapter presents the development of a portable multianalyte detection system. To develop a portable system that can be used outside the laboratory environment, even by non-specialist personnel, cells were immobilized in a new biocompatible and transparent polymeric matrix within a modified clear-bottom black 384-well microtiter plate to obtain a bioluminescent cell array. The cell array was placed in contact with a portable charge-coupled device (CCD) light sensor able to localize and quantify the luminescent signal produced by the different bioluminescent whole-cell biosensors.
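The internal signal-correction scheme described above, in which the analyte-specific luminescence is divided by that of a viability-control luciferase, can be sketched as follows (function names and RLU values are illustrative, not taken from the thesis):

```python
def viability_corrected(analyte_rlu, control_rlu):
    """Divide the analyte-specific luminescence (RLU) by the
    viability-control luminescence to cancel nonspecific matrix
    effects on cell number and viability."""
    if control_rlu <= 0:
        raise ValueError("control signal must be positive")
    return analyte_rlu / control_rlu

def fold_induction(sample, blank):
    """Corrected sample response relative to a blank (no analyte);
    each argument is an (analyte_rlu, control_rlu) tuple."""
    return viability_corrected(*sample) / viability_corrected(*blank)

# Example: the matrix halves viability in the sample well, yet the
# ratio still reports a genuine 3-fold induction.
print(fold_induction((1500.0, 500.0), (1000.0, 1000.0)))  # -> 3.0
```

The same ratio logic extends to the triple-reporter format, where each spectrally resolved intracellular luciferase is normalized against the single secreted viability control.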
This multiplexed biosensing platform containing whole-cell biosensors was successfully used to measure the overall toxicity of a given sample, to obtain dose-response curves for heavy metals, and to detect hormonal activity in clinical samples (PCT/IB2010/050625: “Portable device based on immobilized cells for the detection of analytes.” Michelini E, Roda A, Dolci LS, Mezzanotte L, Cevenini L, 2010). At the end of the dissertation, future development steps are also discussed toward a point-of-care testing (POCT) device combining portability, minimal sample pre-treatment, and highly sensitive multiplexed assays within a short assay time. In this POCT perspective, field-flow fractionation (FFF) techniques, in particular the gravitational variant (GrFFF), which exploits the Earth's gravitational field to drive the separation, have been investigated for cell fractionation, characterization, and isolation. Thanks to the simplicity of its equipment, which is amenable to miniaturization, GrFFF appears particularly suited to implementation in POCT devices; it may serve as an integrated pre-analytical module, applied directly to raw samples, to drive target analytes to the modules where biospecific recognition reactions based on ultrasensitive bioluminescence detection occur, increasing the overall analytical output.