937 results for Quantification methodologies
Abstract:
Axial X-ray computed tomography (CT) scanning provides a convenient means of recording the three-dimensional form of soil structure. The technique has been used for nearly two decades, but initial development concentrated on qualitative description of images. More recently, increasing effort has been put into quantifying the geometry and topology of macropores likely to contribute to preferential flow in soils. Here we describe a novel technique for tracing connected macropores in CT scans. After object extraction, three-dimensional mathematical morphological filters are applied to quantify the reconstructed structure. These filters consist of sequences of so-called erosions and/or dilations of a 32-face structuring element to describe object distances and volumes of influence. The tracing and quantification methodologies were tested on a set of undisturbed soil cores collected in a Swiss pre-alpine meadow, where a new earthworm species (Aporrectodea nocturna) was accidentally introduced. Given the small number of samples analysed in this study, the results presented only illustrate the potential of the method to reconstruct and quantify macropores. Our results suggest that the introduction of the new species induced very limited change to the soil structure; for example, no difference in total macropore length or mean diameter was observed. However, in the zone colonised by the new species, individual macropores tended to have a longer average length, to be more vertical and to be further apart at some depths. Overall, the approach proved well suited to the analysis of the three-dimensional architecture of macropores. It provides a framework for the analysis of complex structures, which are less satisfactorily observed and described using 2D imaging. (C) 2002 Elsevier Science B.V. All rights reserved.
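A minimal sketch of the kind of morphological quantification described above, assuming a binary 3D CT volume where macropore voxels have already been segmented. The 32-face structuring element of the paper is approximated here by a standard 26-connected 3x3x3 element from scipy; the metrics (object volumes, dilation-based volumes of influence, erosion-based granulometry) are illustrative stand-ins for the paper's filters, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def macropore_metrics(volume: np.ndarray, voxel_size_mm: float = 0.1):
    """volume: boolean 3D array, True = macropore voxel (hypothetical input)."""
    struct = ndimage.generate_binary_structure(3, 3)  # 26-connected stand-in

    # Label connected macropores and measure their volumes.
    labels, n_objects = ndimage.label(volume, structure=struct)
    voxel_counts = np.bincount(labels.ravel())[1:]          # skip background
    volumes_mm3 = voxel_counts * voxel_size_mm ** 3

    # Successive dilations give a "volume of influence": the fraction of the
    # sample within k dilation steps of any macropore.
    influence = [volume.mean()]
    dilated = volume.copy()
    for _ in range(5):
        dilated = ndimage.binary_dilation(dilated, structure=struct)
        influence.append(dilated.mean())

    # Successive erosions give a granulometric size estimate: the step at
    # which a pore disappears reflects its characteristic radius.
    eroded = volume.copy()
    surviving = [int(voxel_counts.sum())]
    while eroded.any():
        eroded = ndimage.binary_erosion(eroded, structure=struct)
        surviving.append(int(eroded.sum()))

    return n_objects, volumes_mm3, influence, surviving
```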
Abstract:
The electrooxidative behavior of pravastatin (PRV) in aqueous media was studied by square-wave voltammetry at a glassy carbon electrode (GCE) and at a screen-printed carbon electrode (SPCE). Maximum peak current intensities in a pH 5.0 buffer were obtained at +1.3 V vs. AgCl/Ag and +1.0 V vs. Ag for the GCE and SPCE surfaces, respectively. Validation of the developed methodologies revealed good performance characteristics and confirmed their applicability to the quantification of PRV in pharmaceutical products without significant sample pretreatment. A comparative analysis of the two electrode types showed that SPCEs are preferred as an electrode surface because of their higher sensitivity and because they eliminate the need to clean the electrode surface for its renewal, which is frequently, if not always, the rate-limiting step in voltammetric analysis.
Abstract:
Dissertation for obtaining the Degree of Doctor in Mechanical Engineering
Abstract:
The human body is composed of a huge number of cells acting together in a concerted manner. The current understanding is that proteins perform most of the necessary activities in keeping a cell alive. The DNA, on the other hand, stores the information on how to produce the different proteins in the genome. Regulating gene transcription is the first important step that can thus affect the life of a cell, modify its functions and its responses to the environment. Regulation is a complex operation that involves specialized proteins, the transcription factors. Transcription factors (TFs) can bind to DNA and activate the processes leading to the expression of genes into new proteins. Errors in this process may lead to diseases. In particular, some transcription factors have been associated with a lethal pathological state, commonly known as cancer, associated with uncontrolled cellular proliferation, invasiveness of healthy tissues and abnormal responses to stimuli. Understanding cancer-related regulatory programs is a difficult task, often involving several TFs interacting together and influencing each other's activity. This Thesis presents new computational methodologies to study gene regulation. In addition, we present applications of our methods to the understanding of cancer-related regulatory programs. The understanding of transcriptional regulation is a major challenge. We address this difficult question by combining computational approaches with large collections of heterogeneous experimental data. In detail, we design signal processing tools to recover transcription factor binding sites on the DNA from genome-wide surveys such as chromatin immunoprecipitation assays on tiling arrays (ChIP-chip). We then use the localization of TF binding to explain expression levels of regulated genes. In this way we identify a regulatory synergy between two TFs, the oncogene C-MYC and SP1. C-MYC and SP1 bind preferentially at promoters, and when SP1 binds next to C-MYC on the DNA, the nearby gene is strongly expressed. The association between the two TFs at promoters is reflected by the conservation of their binding sites across mammals and by the permissive underlying chromatin states; it represents an important control mechanism involved in cellular proliferation, and thereby in cancer. Secondly, we identify the characteristics of the target genes of the TF estrogen receptor alpha (hERa) and we study the influence of hERa in regulating transcription. hERa, upon hormone estrogen signaling, binds to DNA to regulate transcription of its targets in concert with its co-factors. To overcome the scarcity of experimental data about the binding sites of other TFs that may interact with hERa, we conduct in silico analysis of the sequences underlying the ChIP sites using the position weight matrices (PWMs) of hERa partners, the TFs FOXA1 and SP1. We combine ChIP-chip and ChIP-paired-end-diTag (ChIP-PET) data about hERa binding on DNA with the sequence information to explain gene expression levels in a large collection of cancer tissue samples and also in studies of the response of cells to estrogen. We confirm that hERa binding sites are distributed throughout the genome. However, we distinguish between binding sites near promoters and binding sites along the transcripts. The first group shows weak binding of hERa and a high occurrence of SP1 motifs, in particular near estrogen-responsive genes.
The second group shows strong binding of hERa and a significant correlation between the number of binding sites along a gene and the strength of gene induction in the presence of estrogen. Some binding sites of the second group also show the presence of FOXA1, but the role of this TF still needs to be investigated. Different mechanisms have been proposed to explain hERa-mediated induction of gene expression. Our work supports the model of hERa activating gene expression from distal binding sites by interacting with promoter-bound TFs, like SP1. hERa has been associated with survival rates of breast cancer patients, though explanatory models are still incomplete: this result is important to better understand how hERa can control gene expression. Thirdly, we address the difficult question of regulatory network inference. We tackle this problem by analyzing time series of biological measurements such as quantification of mRNA levels or protein concentrations. Our approach uses well-established penalized linear regression models, where we impose sparseness on the connectivity of the regulatory network. We extend this method by enforcing the coherence of the regulatory dependencies: a TF must coherently behave as an activator, or a repressor, on all its targets. This requirement is implemented as constraints on the signs of the regressed coefficients in the penalized linear regression model. Our approach is better at reconstructing meaningful biological networks than previous methods based on penalized regression. The method was tested on the DREAM2 challenge of reconstructing a five-gene/TF regulatory network, obtaining the best performance in the "undirected signed excitatory" category. Thus, these bioinformatics methods, which are reliable, interpretable and fast enough to cover large biological datasets, have enabled us to better understand gene regulation in humans.
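An illustrative sketch, not the thesis code, of sign-coherent sparse regression in the spirit described above: a sparse fit per target gene, a consensus sign per TF, and a refit with box constraints so each TF acts as a pure activator or repressor across all its targets. Data shapes, the two-stage heuristic, and the use of an unpenalized refit are assumptions made for brevity.

```python
import numpy as np
from sklearn.linear_model import Lasso
from scipy.optimize import lsq_linear

def coherent_network(tf_activity, expression, alpha=0.1):
    """tf_activity: (timepoints, n_tfs); expression: (timepoints, n_genes)."""
    n_tfs = tf_activity.shape[1]
    n_genes = expression.shape[1]

    # Step 1: unconstrained sparse fit per target gene.
    raw = np.zeros((n_tfs, n_genes))
    for g in range(n_genes):
        model = Lasso(alpha=alpha, max_iter=10000).fit(tf_activity, expression[:, g])
        raw[:, g] = model.coef_

    # Step 2: consensus sign per TF (activator if its coefficients sum > 0).
    tf_sign = np.sign(raw.sum(axis=1))
    tf_sign[tf_sign == 0] = 1.0

    # Step 3: refit each gene with bounds matching the consensus signs.
    lower = np.where(tf_sign > 0, 0.0, -np.inf)
    upper = np.where(tf_sign > 0, np.inf, 0.0)
    coherent = np.zeros_like(raw)
    for g in range(n_genes):
        res = lsq_linear(tf_activity, expression[:, g], bounds=(lower, upper))
        coherent[:, g] = res.x
    return coherent
```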
Abstract:
Free radicals induce lipid peroxidation, which plays an important role in pathological processes. The injury mediated by free radicals can be measured via conjugated dienes, malondialdehyde (MDA), 4-hydroxynonenal, and other products. However, malondialdehyde has been pointed out as the main product for evaluating lipid peroxidation. Most assays determine malondialdehyde by its reaction with thiobarbituric acid, which can be measured by indirect (spectrometric) or direct (chromatographic) methodologies. Although there is some controversy among the methodologies, the selective HPLC-based assays provide a more reliable measure of lipid peroxidation. This review describes significant aspects of MDA determination, its importance in pathologies, and the treatment of biological samples.
Abstract:
The increasing presence of products derived from genetically modified (GM) plants in human and animal diets has led to the development of detection methods to distinguish biotechnology-derived foods from conventional ones. Conventional and real-time PCR have been used, respectively, to detect and quantify GM residues in highly processed foods. DNA extraction is a critical step of the analysis process. Factors such as DNA degradation, matrix effects, and the presence of PCR inhibitors imply that a detection or quantification limit established for a given method is restricted to the matrix used during validation and cannot be extrapolated to any other matrix outside the scope of the method. In Brazil, sausage samples were the main class of processed products in which Roundup Ready® (RR) soybean residues were detected. Thus, the validation of methodologies for the detection and quantification of those residues is absolutely necessary. Sausage samples were submitted to two different methods of DNA extraction: a modified Wizard method and the CTAB method. The yield and quality of the extracted DNA were compared for both methods. DNA samples were analyzed by conventional and real-time PCR for the detection and quantification of Roundup Ready® soybean. At least 200 ng of total sausage DNA was necessary for reliable quantification; reactions containing less DNA led to large variations from the expected GM percentage. In conventional PCR, the detection limit varied from 1.0 to 500 ng, depending on the GM soybean content of the sample. The precision, performance, and linearity were relatively high, indicating that the method used for analysis was satisfactory.
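A hedged sketch of the standard-curve arithmetic behind real-time PCR quantification of RR soybean content: Ct values for the transgene and for an endogenous soybean gene (e.g. lectin) are converted to copy numbers with per-assay calibration curves, and GM content is their ratio. The curve slopes, intercepts, and example Ct values below are placeholders, not validated values from the study.

```python
def copies_from_ct(ct, slope, intercept):
    """Standard curve: Ct = slope * log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

def gm_percentage(ct_transgene, ct_endogenous,
                  transgene_curve=(-3.32, 40.0),    # hypothetical calibration
                  endogenous_curve=(-3.32, 39.0)):  # hypothetical calibration
    rr_copies = copies_from_ct(ct_transgene, *transgene_curve)
    soy_copies = copies_from_ct(ct_endogenous, *endogenous_curve)
    return 100.0 * rr_copies / soy_copies

# Example: one reaction on a sausage DNA extract (illustrative numbers only).
print(round(gm_percentage(ct_transgene=31.5, ct_endogenous=27.2), 2), "% RR soybean")
```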
Abstract:
In this thesis, the applications of recurrence quantification analysis (RQA) to metal cutting operations in a lathe, with the specific objective of detecting tool wear and chatter, are presented. This study is based on the finding that the process dynamics in a lathe are low-dimensional chaotic, which implies that the machine dynamics are controllable using principles of chaos theory. This understanding stands to transform the feature extraction methodologies used in condition monitoring systems, as conventional linear methods or models are incapable of capturing the critical and strange behaviours associated with the metal cutting process. Since sensor-based approaches provide an automated and cost-effective way to monitor and control, an efficient feature extraction methodology based on nonlinear time series analysis is in demand. The task is more complex when the information has to be deduced solely from sensor signals, because traditional methods do not address how to treat the noise and non-stationarity present in real-world processes. To address these two issues as far as possible, this thesis adopts the recurrence quantification analysis methodology, since this feature extraction technique is robust against noise and non-stationarity in the signals. The work consists of two different sets of experiments in a lathe: set 1 and set 2. The set 1 experiments study the influence of tool wear on the RQA variables, whereas set 2 is carried out to identify the RQA variables sensitive to machine tool chatter, followed by validation in actual cutting. To obtain the bounds of the spectrum of the significant RQA variable values, in set 1 a fresh tool and a worn tool are used for cutting. The first part of the set 2 experiments uses a stepped shaft in order to create chatter at a known location. The second part uses a conical section with a uniform taper along the axis so that chatter onsets at some distance from the smaller end, by gradually increasing the depth of cut while keeping the spindle speed and feed rate constant. The study concludes by revealing the unambiguous dependence of certain RQA variables (percent determinism, percent recurrence and entropy) on tool wear and chatter. The results establish this methodology as viable for the detection of tool wear and chatter in metal cutting operations in a lathe. The key reason is that the dynamics of the system under study are nonlinear, and recurrence quantification analysis can characterize them adequately. This work establishes that the principles and practice of machining can be considerably benefited and advanced by using nonlinear dynamics and chaos theory.
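A simplified sketch of the RQA variables named above (percent recurrence, percent determinism, entropy), computed from a scalar sensor signal via time-delay embedding and a thresholded recurrence matrix. The embedding parameters and threshold are illustrative choices, not the thesis settings.

```python
import numpy as np

def rqa(signal, dim=3, delay=2, eps=0.1, lmin=2):
    # Time-delay embedding of the scalar signal.
    n = len(signal) - (dim - 1) * delay
    emb = np.column_stack([signal[i * delay:i * delay + n] for i in range(dim)])

    # Recurrence matrix: two embedded points recur if closer than eps.
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    rec = (dist < eps).astype(int)
    percent_recurrence = 100.0 * rec.sum() / rec.size

    # Diagonal line lengths (above the main diagonal) for determinism/entropy.
    lengths = []
    for k in range(1, n):
        run = 0
        for v in np.diagonal(rec, offset=k):
            if v:
                run += 1
            elif run:
                lengths.append(run)
                run = 0
        if run:
            lengths.append(run)
    lengths = np.array(lengths)
    long_lines = lengths[lengths >= lmin]

    recurrent_off_diag = lengths.sum()
    percent_determinism = (100.0 * long_lines.sum() / recurrent_off_diag
                           if recurrent_off_diag else 0.0)

    # Shannon entropy of the diagonal line length distribution.
    if long_lines.size:
        counts = np.bincount(long_lines)[lmin:]
        p = counts[counts > 0] / counts.sum()
        entropy = float(-(p * np.log(p)).sum())
    else:
        entropy = 0.0
    return percent_recurrence, percent_determinism, entropy
```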
Abstract:
AEA Technology has provided an assessment of the probability of α-mode containment failure for the Sizewell B PWR. After a preliminary review of the methodologies available it was decided to use the probabilistic approach described in the paper, based on an extension of the methodology developed by Theofanous et al. (Nucl. Sci. Eng. 97 (1987) 259–325). The input to the assessment is 12 probability distributions; the bases for the quantification of these distributions are discussed. The α-mode assessment performed for the Sizewell B PWR has demonstrated the practicality of the event-tree method with input data represented by probability distributions. The assessment itself has drawn attention to a number of topics, which may be plant and sequence dependent, and has indicated the importance of melt relocation scenarios. The α-mode failure probability following an accident that leads to core melt relocation to the lower head for the Sizewell B PWR has been assessed as a few parts in 10 000, on the basis of current information. This assessment has been the first to consider elevated pressures (6 MPa and 15 MPa) besides atmospheric pressure, but the results suggest only a modest sensitivity to system pressure.
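A hedged illustration, not the AEA Technology model, of the general idea of an event-tree assessment whose inputs are probability distributions rather than point values: sample each uncertain branch probability, multiply along the failure sequence, and summarize the resulting distribution. The three-branch tree and the beta distributions below are placeholders standing in for the 12 input distributions of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Placeholder input distributions for conditional probabilities of the
# successive events needed for alpha-mode failure (all hypothetical).
p_energetic_melt_relocation = rng.beta(2, 20, n_samples)
p_steam_explosion_given_relocation = rng.beta(2, 30, n_samples)
p_vessel_and_containment_failure = rng.beta(1, 50, n_samples)

# Alpha-mode failure requires every branch of this simplified sequence.
p_alpha = (p_energetic_melt_relocation
           * p_steam_explosion_given_relocation
           * p_vessel_and_containment_failure)

print("mean          :", p_alpha.mean())
print("95th percentile:", np.quantile(p_alpha, 0.95))
```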
Abstract:
Of the many sources of urban greenhouse gas (GHG) emissions, solid waste is the only one for which management decisions are undertaken primarily by municipal governments themselves and is hence often the largest component of cities’ corporate inventories. It is essential that decision-makers select an appropriate quantification methodology and have an appreciation of methodological strengths and shortcomings. This work compares four different waste emissions quantification methods, including Intergovernmental Panel on Climate Change (IPCC) 1996 guidelines, IPCC 2006 guidelines, U.S. Environmental Protection Agency (EPA) Waste Reduction Model (WARM), and the Federation of Canadian Municipalities-Partners for Climate Protection (FCM-PCP) quantification tool. Waste disposal data for the greater Toronto area (GTA) in 2005 are used for all methodologies; treatment options (including landfill, incineration, compost, and anaerobic digestion) are examined where available in methodologies. Landfill was shown to be the greatest source of GHG emissions, contributing more than three-quarters of total emissions associated with waste management. Results from the different landfill gas (LFG) quantification approaches ranged from an emissions source of 557 kt carbon dioxide equivalents (CO2e) (FCM-PCP) to a carbon sink of −53 kt CO2e (EPA WARM). Similar values were obtained between IPCC approaches. The IPCC 2006 method was found to be more appropriate for inventorying applications because it uses a waste-in-place (WIP) approach, rather than a methane commitment (MC) approach, despite perceived onerous data requirements for WIP. MC approaches were found to be useful from a planning standpoint; however, uncertainty associated with their projections of future parameter values limits their applicability for GHG inventorying. MC and WIP methods provided similar results in this case study; however, this is case specific because of similarity in assumptions of present and future landfill parameters and quantities of annual waste deposited in recent years being relatively consistent.
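A simplified sketch of the two landfill-gas accounting philosophies compared above: a methane-commitment (MC) estimate attributes all future methane to the deposit year, while a waste-in-place first-order-decay (FOD) estimate in the style of IPCC 2006 counts only the methane generated in the inventory year from all waste already in place. The factors and tonnages below are generic illustrative values, not GTA-specific inputs.

```python
import math

DOC, DOC_F, MCF, F, K = 0.15, 0.5, 1.0, 0.5, 0.05   # assumed IPCC-style factors
L0 = DOC * DOC_F * MCF * F * 16.0 / 12.0             # t CH4 per t waste, lifetime

def methane_commitment(waste_t):
    """All methane ever generated by this year's waste, attributed now (MC)."""
    return waste_t * L0

def fod_waste_in_place(deposits, inventory_year):
    """deposits: {year: tonnes}. CH4 generated during inventory_year only (WIP/FOD)."""
    ch4 = 0.0
    for year, waste_t in deposits.items():
        age = inventory_year - year
        if age >= 0:
            # Fraction of the lifetime methane released within this single year.
            ch4 += waste_t * L0 * (math.exp(-K * age) - math.exp(-K * (age + 1)))
    return ch4

deposits = {y: 900_000 for y in range(1985, 2006)}   # hypothetical annual tonnages
print("MC  (2005 waste):", round(methane_commitment(deposits[2005])), "t CH4")
print("FOD (2005 year) :", round(fod_waste_in_place(deposits, 2005)), "t CH4")
```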
Abstract:
A novel analytical approach, based on a miniaturized extraction technique, microextraction by packed sorbent (MEPS), followed by ultrahigh pressure liquid chromatography (UHPLC) separation combined with photodiode array (PDA) detection, has been developed and validated for the quantitative determination of sixteen biologically active phenolic constituents of wine. In addition to routine experiments establishing the validity of the assay against internationally accepted criteria (linearity, sensitivity, selectivity, precision, accuracy), experiments were included to assess the effect of the important experimental parameters on MEPS performance, such as the type of sorbent material (C2, C8, C18, SIL, and M1), the number of extraction cycles (extract-discard), elution volume, sample volume, and ethanol content. The optimal MEPS extraction conditions were obtained using the C8 sorbent and small sample volumes (250 μL) in five extraction cycles and within a short time (about 5 min for the entire sample preparation step). The wine bioactive phenolics were eluted with 250 μL of a mixture containing 95% methanol and 5% water, and the separation was carried out on a HSS T3 analytical column (100 mm × 2.1 mm, 1.8 μm particle size) using a binary mobile phase composed of aqueous 0.1% formic acid (eluent A) and methanol (eluent B) in gradient elution mode (10 min of total analysis). The method gave satisfactory results in terms of linearity, with r2 values > 0.9986 within the established concentration range. The LOD varied from 85 ng mL−1 (ferulic acid) to 0.32 μg mL−1 ((+)-catechin), whereas the LOQ values ranged from 0.028 μg mL−1 (ferulic acid) to 1.08 μg mL−1 ((+)-catechin). Typical recoveries ranged between 81.1 and 99.6% for red wines and between 77.1 and 99.3% for white wines, with relative standard deviations (RSD) no larger than 10%. The extraction yields of the MEPSC8/UHPLC–PDA methodology were between 78.1% (syringic acid) and 99.6% (o-coumaric acid) for red wines and between 76.2 and 99.1% for white wines. The inter-day precision, expressed as the relative standard deviation (RSD%), varied between 0.2% (p-coumaric and o-coumaric acids) and 7.5% (gentisic acid), while the intra-day precision varied between 0.2% (o-coumaric and cinnamic acids) and 4.7% (gallic acid and (−)-epicatechin). On the basis of the analytical validation, the MEPSC8/UHPLC–PDA methodology proves to be an improved, reliable, and ultra-fast approach for the analysis of wine bioactive phenolics, owing to its capability to determine several bioactive metabolites simultaneously in a single chromatographic run, with high sensitivity, selectivity and resolving power, within only 10 min. Preliminary studies were carried out on 34 real whole wine samples in order to assess the performance of the described procedure. The new approach offers decreased sample preparation and analysis time, and is moreover cheaper, more environmentally friendly and easier to perform than traditional methodologies.
Abstract:
In this work, the reduction reaction of the herbicide paraquat was used to obtain analytical signals using the electrochemical techniques of differential pulse voltammetry, square wave voltammetry and multiple square wave voltammetry. Analytes were prepared with laboratory purified water and with natural water samples (from the Mogi-Guacu River, SP). The electrochemical techniques were applied to 1.0 mol L⁻¹ Na2SO4 solutions, at pH 5.5, containing different concentrations of paraquat in the range of 1 to 10 μmol L⁻¹, using a gold ultramicroelectrode. Five replicate experiments were conducted, and in each the mean peak currents obtained at -0.70 V vs. Ag/AgCl yielded excellent linear relationships with pesticide concentration. The slope values of the calibration plots (method sensitivity) were 4.06 × 10⁻³, 1.07 × 10⁻² and 2.95 × 10⁻² A mol⁻¹ L for purified water by differential pulse voltammetry, square wave voltammetry and multiple square wave voltammetry, respectively. For river water samples, the slope values were 2.60 × 10⁻³, 1.06 × 10⁻² and 3.35 × 10⁻² A mol⁻¹ L, respectively, showing only a small interference from the natural matrix components in paraquat determinations. The detection limits for paraquat were calculated by two distinct methodologies, i.e., as proposed by IUPAC and by a statistical method. The values obtained with multiple square wave voltammetry were 0.002 and 0.12 μmol L⁻¹, respectively, for pure water electrolytes. When the detection limit obtained from the IUPAC recommendation is inserted into the calibration curve equation, the resulting analytical signal is smaller than the one experimentally observed for the blank solution under the same experimental conditions. This is inconsistent with the definition of the detection limit, so the IUPAC methodology requires further discussion. The same conclusion can be drawn from the analysis of the detection limits obtained with the other techniques studied.
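A sketch of the two detection-limit calculations contrasted above, using made-up calibration data: an IUPAC-style estimate (three times the blank standard deviation divided by the calibration slope) and a statistical estimate from the regression residuals, followed by the back-substitution consistency check the authors describe. All numbers are illustrative, not the paper's measurements.

```python
import numpy as np

conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])           # umol/L (illustrative)
current = np.array([0.03, 0.06, 0.12, 0.18, 0.23, 0.30])   # uA     (illustrative)
blank_replicates = np.array([0.004, 0.006, 0.005, 0.007, 0.005])

slope, intercept = np.polyfit(conc, current, 1)

# IUPAC-style LOD: 3 * s(blank) / slope.
lod_iupac = 3 * blank_replicates.std(ddof=1) / slope

# Statistical LOD from the standard error of the regression residuals.
residuals = current - (slope * conc + intercept)
s_res = np.sqrt((residuals ** 2).sum() / (len(conc) - 2))
lod_stat = 3.3 * s_res / slope

# Back-substitution check: the signal predicted at the IUPAC LOD should not
# fall below the measured blank signal.
signal_at_lod = slope * lod_iupac + intercept
print(f"IUPAC LOD {lod_iupac:.3f} umol/L, statistical LOD {lod_stat:.3f} umol/L")
print("signal at IUPAC LOD:", signal_at_lod, " mean blank:", blank_replicates.mean())
```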
Abstract:
3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down the translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with in-silico preliminary studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified as the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to enlarge the convergence basin towards the optimal pose, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies. The mono-planar analysis may be enough for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics. A mixed region-growing and level-set segmentation method was proposed, which halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics.
Preliminary evaluations showed that the presented methodology represents a feasible gold-standard for the validation of skin marker based foot kinematics protocols.
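A toy illustration, not the dissertation's registration pipeline, of the pose-estimation step discussed above: recovering a 6-DOF pose by minimizing a mismatch cost, once with a local optimizer started from a neutral pose and once with a global search over the same bounds (standing in for the memetic algorithm). The cost here is a simple point-cloud distance after rigid transformation, far smoother than a real contour/projection mismatch, so it only illustrates the two strategies, not their relative accuracy.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(1)
model = rng.normal(size=(200, 3))                            # synthetic bone model
true_pose = np.array([20.0, -15.0, 30.0, 5.0, -3.0, 8.0])    # deg x3, mm x3

def transform(points, pose):
    rot = Rotation.from_euler("xyz", pose[:3], degrees=True)
    return rot.apply(points) + pose[3:]

target = transform(model, true_pose)

def cost(pose):
    # Mean squared distance between the posed model and the reference cloud.
    return np.mean((transform(model, pose) - target) ** 2)

bounds = [(-45, 45)] * 3 + [(-20, 20)] * 3

local_fit = minimize(cost, x0=np.zeros(6), method="Nelder-Mead")      # neutral start
global_fit = differential_evolution(cost, bounds, seed=1, tol=1e-8)   # no start pose

print("local  max pose error:", np.abs(local_fit.x - true_pose).max())
print("global max pose error:", np.abs(global_fit.x - true_pose).max())
```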
Abstract:
This thesis reports an integrated analytical and physicochemical approach for the study of natural substances and new drugs, based on mass spectrometry techniques combined with liquid chromatography. In particular, Chapter 1 concerns the study of berberine, a natural substance with pharmacological activity for the treatment of hepatobiliary and intestinal diseases. The first part focuses on the relationships between the physicochemical properties, pharmacokinetics and metabolism of berberine and its metabolites. For this purpose, a sensitive HPLC-ES-MS/MS method has been developed, validated and used to determine these compounds during the study of their physicochemical properties and to measure the plasma levels of berberine and its metabolites, including berberrubine (M1), demethylenberberine (M3) and jatrorrhizine (M4), in humans. Data show that M1 could have efficient intestinal absorption by passive diffusion, due to a keto-enol tautomerism confirmed by NMR studies, consistent with its higher plasma concentration. In the second part of Chapter 1, the in vivo biodistributions of M1 and BBR in the rat are compared. In Chapter 2, a new HPLC-ES-MS/MS method for the simultaneous determination and quantification of glucosinolates, such as glucoraphanin, glucoerucin and sinigrin, and isothiocyanates, such as sulforaphane and erucin, has been developed and validated. This method has been used for the analysis of functional foods enriched with vegetable extracts. Chapter 3 focuses on a physicochemical study of the interaction between the bile acid sequestrants used in the treatment of hypercholesterolemia, including colesevelam and cholestyramine, and obeticholic acid (OCA), a potent agonist of the nuclear farnesoid X receptor (FXR). In particular, a new experimental model for the determination of the equilibrium binding isotherm was developed. Chapter 4 focuses on methodological aspects of a new hard-ionization interface coupled with liquid chromatography (Direct-EI-UHPLC-MS), not yet commercially available and potentially useful for qualitative analysis and for molecules that are "transparent" to soft ionization techniques. This method was applied to the analysis of several steroid derivatives.
Abstract:
One of the most serious problems of modern medicine is the growing emergence of antibiotic resistance among pathogenic bacteria. In this circumstance, different and innovative approaches for treating infections caused by multidrug-resistant bacteria are imperatively required. Bacteriophage therapy is one of the fascinating approaches to be taken into account. It consists of the use of bacteriophages, viruses that infect bacteria, in order to defeat specific bacterial pathogens. Phage therapy is not a new idea; indeed, it was widely used around the world in the 1930s and 1940s to treat various infectious diseases, and it is still used in Eastern Europe and the former Soviet Union. Nevertheless, Western scientists mostly lost interest in the further use and study of phage therapy and abandoned it after the discovery and spread of antibiotics. The advancement of scientific knowledge in recent years, together with encouraging results from recent animal studies using phages to treat bacterial infections, and above all the urgent need for novel and effective antimicrobials, has prompted additional rigorous research in this field. In particular, in the synthetic biology laboratory of the Department of Life Sciences at the University of Warwick, a novel approach was adopted, starting from the original concept of phage therapy, in order to study a concrete alternative to antibiotics. The innovative idea of the project consists in the development of experimental methodologies that allow a programmable synthetic phage system to be engineered using a combination of directed evolution, automation and microfluidics. The main aim is to make "the therapeutics of tomorrow individualized, specific, and self-regulated" (Jaramillo, 2015). In this context, one of the most important key points is bacteriophage quantification. Therefore, in this research work, a mathematical model describing the complex dynamics occurring in biological systems involving continuous growth of bacteriophages, modulated by the performance of the host organisms, was implemented as algorithms in working software using MATLAB. The developed program is able to predict different unknown concentrations of phages much faster than the classical overnight plaque assay. Moreover, it gives meaning and an explanation to the obtained data by making inferences about the model parameters, which are representative of the bacteriophage-host interaction.
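A minimal sketch, in Python rather than the MATLAB of the thesis, of the kind of phage-host dynamic model such a quantification program builds on: susceptible bacteria S, infected bacteria I and free phage P, coupled through adsorption, a latent period and a burst size. The parameter values are generic textbook-style choices, not fitted estimates from the project.

```python
from scipy.integrate import solve_ivp

def phage_host(t, y, r=0.7, K=1e9, delta=1e-9, tau=0.5, burst=100.0):
    S, I, P = y
    growth = r * S * (1 - (S + I) / K)   # logistic host growth
    infection = delta * S * P            # adsorption (mass action)
    lysis = I / tau                      # infected cells lyse after ~tau hours
    dS = growth - infection
    dI = infection - lysis
    dP = burst * lysis - infection
    return [dS, dI, dP]

y0 = [1e7, 0.0, 1e4]                     # initial bacteria (cfu/mL) and phage (pfu/mL)
sol = solve_ivp(phage_host, (0.0, 10.0), y0, dense_output=True, rtol=1e-8)

# Predicted free-phage concentration after 6 hours of co-culture.
print(f"phage at 6 h: {sol.sol(6.0)[2]:.3e} pfu/mL")
```

Fitting the model parameters to growth-curve measurements (rather than using fixed values as above) is what lets such a program infer an unknown phage titre without waiting for an overnight plaque assay.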
Abstract:
This paper presents the results of research aimed at formulating a general model for supporting the implementation and management of an urban road pricing scheme. After preliminary work to define the state of the art in the field of sustainable urban mobility strategies, the problem was set up theoretically in terms of transport economics, introducing the concept of external costs, duly translated into the principle of pricing for the use of public infrastructures. The research is based on the definition of a set of direct and indirect indicators to qualify urban areas by land use, mobility, environmental and economic conditions. These indicators were calculated for a selected set of typical urban areas in Europe on the basis of the results of a survey carried out by means of a specific questionnaire. Once the most typical and interesting applications of the road pricing concept had been identified in cities such as London (Congestion Charging), Milan (Ecopass), Stockholm (Congestion Tax) and Rome (ZTL), a large benchmarking exercise and the cross-analysis of direct and indirect indicators made it possible to define a simple general model, guidelines and key requirements for the implementation of a pricing-based traffic restriction scheme in a generic urban area. The model was finally applied to the design of a road pricing scheme for a particular area of Madrid, and to the quantification of the expected results of its implementation from a land use, mobility, environmental and economic perspective.