913 results for MS-based methods
Abstract:
With recent advances in remote-sensing processing technology, it has become feasible to begin analyzing the enormous historical archive of remotely sensed data. These data provide valuable information on a wide variety of topics and, if processed correctly and in a timely manner, can influence the lives of millions of people. One such field of benefit is landslide mapping and inventory, which gives those who live near high-risk areas a historical reference so that future disasters may be avoided. To map landslides remotely, an optimal method must first be determined. Historically, mapping has been attempted with pixel-based methods such as unsupervised and supervised classification. These methods are limited in that they characterize an image spectrally from single-pixel values alone, producing results that are prone to false positives and often lack meaningful objects. Recently, several reliable methods of Object-Oriented Analysis (OOA) have been developed that use a full range of spectral, spatial, textural, and contextual parameters to delineate regions of interest. A comparison of the two approaches on a historical dataset of the landslide-affected city of San Juan La Laguna, Guatemala, demonstrated the benefits of OOA over unsupervised classification. Overall accuracies of 96.5% and 94.3% and F-scores of 84.3% and 77.9% were achieved for the OOA and unsupervised classification methods, respectively. The larger difference in F-score results from the low precision of unsupervised classification, caused by poor false-positive removal, the greatest shortcoming of that method.
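The accuracy and F-score comparison above follows directly from confusion-matrix counts. A minimal Python sketch (the counts below are illustrative, not taken from the study) shows how a high false-positive count depresses precision and hence the F-score:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F-score from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts: many false positives (poor false-positive removal)
p1, r1, f1_many_fp = precision_recall_f1(tp=80, fp=40, fn=10)  # precision ≈ 0.67
p2, r2, f1_few_fp = precision_recall_f1(tp=80, fp=10, fn=10)   # precision ≈ 0.89
```

With recall held fixed, the method that removes false positives poorly ends up with the lower F-score, which is the pattern the abstract reports.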
Abstract:
Traditional engineering design methods are based on Simon's (1969) concept of function, and as such collectively suffer from both theoretical and practical shortcomings. Researchers in the field of affordance-based design have borrowed from ecological psychology in an attempt to address the blind spots of function-based design, developing alternative ontologies and design processes. This dissertation presents function and affordance theory as both compatible and complementary. We first present a hybrid approach to design for technology change, followed by a reconciliation and integration of function and affordance ontologies for use in design. We explore the integration of a standard function-based design method with an affordance-based design method, and demonstrate how affordance theory can guide the early application of function-based design. Finally, we discuss the practical and philosophical ramifications of embracing affordance theory's roots in ecology and ecological psychology, and explore the insights and opportunities made possible by an ecological approach to engineering design. The primary contribution of this research is the development of an integrated ontology for describing and designing technological systems using both function- and affordance-based methods.
Abstract:
Antigen design is generally driven by the need to obtain enhanced stability, efficiency, and safety in vaccines. Unfortunately, antigen modification rarely proceeds in parallel with the development of analytical characterization tools. Such tools are required throughout the vaccine manufacturing pipeline, for production modifications, improvements, or regulatory requirements. Despite the relevance of bioconjugate vaccines, robust and consistent analytical tools to evaluate the extent of carrier glycosylation are missing. Bioconjugation is a glycoengineering technology for producing N-glycoproteins in vivo in E. coli cells, based on the PglB-dependent system of C. jejuni, and has been applied to the production of several glycoconjugate vaccines. This applicability is due to the ability of glycocompetent E. coli to produce site-selectively glycosylated proteins that, after a few purification steps, can be used as vaccines able to elicit both humoral and cell-mediated immune responses. Here, S. aureus Hla bioconjugated with CP5 was used to perform a rational, analytically driven design of the glycosylation sites for quantification of the glycosylation extent by mass spectrometry. The aim of the study was to develop an MS-based approach to quantify the glycosylation extent for in-process monitoring of bioconjugate production and for final product characterization. The three designed consensus sequences differ by a single amino acid residue and fulfill the prerequisites for an engineered bioconjugate that is more appropriate from an analytical perspective. We aimed to achieve optimal MS detectability of the peptide carrying the consensus sequences while complying with the well-characterized requirements for N-glycosylation by PglB. Hla carrier isoforms bearing these consensus sequences allowed a recovery of about 20 ng/μg of periplasmic protein, glycosylated at 40%. The SRM-MS method developed here was successfully applied to evaluate differential site occupancy when the carrier protein presents two glycosites. The glycosylation extent at each glycosite was determined, and the differences among the isoforms were influenced both by the overall amount of protein produced and by the position of the glycosite insertion. The analytically driven design of the bioconjugated antigen and the development of an accurate, precise, and robust analytical method allowed fine characterization of the vaccine.
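As a sketch of the kind of site-occupancy calculation the abstract describes: given SRM peak areas for the glycosylated and non-glycosylated forms of a glycosite peptide, the glycosylation extent is the glycosylated fraction of the total signal. This assumes comparable response factors for the two forms; the numbers are illustrative, not from the study:

```python
def glycosylation_extent(area_glyco, area_nonglyco):
    """Fractional site occupancy from SRM peak areas of the glycosylated and
    non-glycosylated forms of the same peptide (assumes the two forms have
    comparable ionization response)."""
    return area_glyco / (area_glyco + area_nonglyco)

# Illustrative peak areas for a site glycosylated at 40%
occupancy = glycosylation_extent(4.0e5, 6.0e5)  # 0.4
```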
Abstract:
Recent research trends in computer-aided drug design have shown increasing interest in advanced approaches able to deal with large amounts of data. This demand arose from awareness of the complexity of biological systems and from the availability of data provided by high-throughput technologies. As a consequence, drug research has embraced this paradigm shift, exploiting approaches such as those based on networks. Indeed, the process of drug discovery can benefit from the implementation of network-based methods at different steps, from target identification to drug repurposing. From this broad range of opportunities, this thesis focuses on three main topics: (i) chemical space networks (CSNs), which are designed to represent and characterize bioactive compound data sets; (ii) drug-target interaction (DTI) prediction through a network-based algorithm that predicts missing links; and (iii) COVID-19 drug research, explored by implementing COVIDrugNet, a network-based tool for COVID-19-related drugs. The main highlight emerging from this thesis is that network-based approaches are useful methodologies for tackling different issues in drug research. In detail, CSNs are valuable coordinate-free, graphically accessible representations of the structure-activity relationships of bioactive compound data sets, especially for medium-to-large libraries of molecules. DTI prediction through the random walk with restart algorithm on heterogeneous networks can be a helpful method for target identification. COVIDrugNet is an example of the usefulness of network-based approaches for studying drugs related to a specific condition, i.e., COVID-19, and the same 'systems-based' approaches can be used for other diseases.
To conclude, network-based tools are proving to be suitable in many applications in drug research and provide the opportunity to model and analyze diverse drug-related data sets, even large ones, also integrating different multi-domain information.
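The random walk with restart (RWR) used above for DTI prediction scores nodes by the stationary probability of a walker that, at each step, either follows an edge or jumps back to the seed node. The following is a generic numpy sketch on a toy graph, not the thesis's heterogeneous-network implementation; the graph, restart probability, and tolerance are illustrative:

```python
import numpy as np

def random_walk_with_restart(A, seed, restart=0.3, tol=1e-10):
    """Iterate p <- (1 - c) * W @ p + c * p0 to convergence, where W is the
    column-normalized adjacency matrix and p0 is concentrated on the seed."""
    W = A / A.sum(axis=0, keepdims=True)   # column-stochastic transition matrix
    p0 = np.zeros(A.shape[0])
    p0[seed] = 1.0
    p = p0.copy()
    while True:
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy 4-node path graph 0-1-2-3; score nodes by proximity to seed node 0
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = random_walk_with_restart(A, seed=0)
```

In a link-prediction setting, candidate targets would be ranked by their scores; nodes topologically closer to the seed (here, node 1 versus node 3) receive higher probability mass.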
Abstract:
In this paper, space adaptivity is introduced to control the error in the numerical solution of hyperbolic systems of conservation laws. The reference numerical scheme is a new version of the discontinuous Galerkin method, which uses an implicit diffusive term in the direction of the streamlines, for stability purposes. The decision whether to refine or to unrefine the grid in a certain location is taken according to the magnitude of wavelet coefficients, which are indicators of local smoothness of the numerical solution. Numerical solutions of the nonlinear Euler equations illustrate the efficiency of the method. © Springer 2005.
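The refinement criterion described above, wavelet coefficient magnitude as a local smoothness indicator, can be illustrated with interpolating-wavelet detail coefficients: the detail at an odd-indexed fine-grid point is the mismatch between the stored value and the linear interpolation of its coarse-grid neighbours. The sketch below is a generic illustration of that idea, not the paper's scheme:

```python
import numpy as np

def refinement_flags(u, threshold):
    """Flag odd-indexed points whose interpolating-wavelet detail coefficient
    (value minus the linear interpolation of its even-indexed neighbours)
    exceeds a threshold, i.e. where the solution is locally non-smooth."""
    details = np.abs(u[1:-1:2] - 0.5 * (u[:-2:2] + u[2::2]))
    return details > threshold

# Smooth data produce small details; a jump (shock-like feature) is flagged
u_smooth = np.linspace(0.0, 1.0, 9) ** 2
u_jump = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0])
```

Cells flagged this way would be refined, and unflagged cells coarsened, concentrating grid points near steep features such as shocks in the Euler equations.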
Abstract:
The application of laser-induced breakdown spectrometry (LIBS) to the direct analysis of plant materials is a great challenge that still requires effort for its development and validation. To this end, a series of experimental approaches has been carried out to show that LIBS can be used as an alternative to wet-acid-digestion-based methods for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, compared with univariate regression developed from line emission intensities. In the present work, the performance of univariate and multivariate calibration, the latter based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. Developing a specific PLSR model for each analyte and selecting spectral regions containing only lines of the analyte of interest provided the best conditions for the analysis. In this particular application, the models showed similar performance, but PLSR seemed to be more robust owing to a lower occurrence of outliers in comparison with the univariate method. The data suggest that efforts dealing with sample presentation and the fitness of standards for LIBS analysis are needed to fulfill the boundary conditions for matrix-independent development and validation. (C) 2009 Elsevier B.V. All rights reserved.
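A minimal PLS1 (NIPALS) regression in numpy illustrates the multivariate-calibration idea that the abstract compares against univariate regression. The synthetic "spectra" below are illustrative stand-ins for LIBS data, not measurements:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 via NIPALS: returns (b, x_mean, y_mean) such that
    predictions are (X - x_mean) @ b + y_mean."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)          # weight vector
        t = Xc @ w                      # scores
        p = Xc.T @ t / (t @ t)          # X loadings
        q = (yc @ t) / (t @ t)          # y loading
        Xc = Xc - np.outer(t, p)        # deflate X and y
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    b = W @ np.linalg.solve(P.T @ W, np.array(Q))
    return b, x_mean, y_mean

# Synthetic calibration set: the analyte concentration drives one "emission line"
rng = np.random.default_rng(0)
conc = rng.uniform(0.0, 10.0, 30)           # reference concentrations
spectra = rng.normal(0.0, 0.01, (30, 10))   # background noise
spectra[:, 0] += conc                       # analyte line intensity
b, xm, ym = pls1_fit(spectra, conc, n_components=1)
pred = (spectra - xm) @ b + ym
```

Restricting the input variables to a spectral window containing only the analyte's lines, as the abstract recommends, corresponds here to choosing which columns of `spectra` enter the model.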
Abstract:
Steady-state and time-resolved fluorescence measurements are reported for several crude oils and their saturates, aromatics, resins, and asphaltenes (SARA) fractions; the saturates, aromatics, and resins were isolated from the maltene after pentane precipitation of the asphaltenes. There is a clear relationship between the American Petroleum Institute (API) grade of the crude oils and their fluorescence emission intensity and maxima. Dilution of the crude oil samples with cyclohexane results in a significant increase of emission intensity and a blue shift, a clear indication of energy-transfer processes between the emissive chromophores present in the crude oil. Both the fluorescence spectra and the mean fluorescence lifetimes of the three SARA fractions and their mixtures indicate that the aromatics and resins are the major contributors to the emission of crude oils. Total synchronous fluorescence scan (TSFS) spectral maps are preferable to steady-state fluorescence spectra for discriminating between the fractions, making TSFS maps a particularly interesting choice for the development of fluorescence-based methods for the characterization and classification of crude oils. More detailed studies, using a much wider range of excitation and emission wavelengths, are necessary to determine the utility of time-resolved fluorescence (TRF) data for this purpose. Preliminary models constructed using TSFS spectra from 21 crude oil samples show a very good correlation (R(2) > 0.88) between the calculated and measured values of API and the SARA fraction concentrations. The use of models based on a fast fluorescence measurement may thus be an alternative to tedious and time-consuming chemical analysis in refineries.
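A synchronous fluorescence scan reads an excitation-emission matrix (EEM) along a diagonal λ_em = λ_ex + Δλ, and a TSFS map stacks such scans over a range of offsets. A small index-based sketch of that resampling, assuming both axes share the same wavelength step (the matrix here is illustrative):

```python
import numpy as np

def tsfs_map(eem, offsets):
    """Build a total synchronous fluorescence map from an EEM: row j holds the
    intensities along the diagonal em_index = ex_index + offsets[j]; cells that
    fall outside the EEM are left as NaN."""
    n_ex, n_em = eem.shape
    out = np.full((len(offsets), n_ex), np.nan)
    for j, d in enumerate(offsets):
        for i in range(n_ex):
            if 0 <= i + d < n_em:
                out[j, i] = eem[i, i + d]
    return out

eem = np.arange(12.0).reshape(3, 4)   # 3 excitation x 4 emission points
tsfs = tsfs_map(eem, offsets=[0, 1])
```

Flattened rows of such a map (one per sample) are the kind of input a regression model relating fluorescence to API grade or SARA composition would use.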
Abstract:
The leaf area index (LAI) of fast-growing Eucalyptus plantations is highly dynamic both seasonally and interannually, and is spatially variable depending on pedo-climatic conditions. LAI is very important in determining the carbon and water balance of a stand, but is difficult to measure during a complete stand rotation and at large scales. Remote-sensing methods allowing the retrieval of LAI time series with accuracy and precision are therefore necessary. Here, we tested two methods for LAI estimation from MODIS 250 m resolution red and near-infrared (NIR) reflectance time series. The first method involved the inversion of a coupled model of leaf reflectance and transmittance (PROSPECT4), soil reflectance (SOILSPECT) and canopy radiative transfer (4SAIL2). Model parameters other than the LAI were either fixed to measured constant values, or allowed to vary seasonally and/or with stand age according to trends observed in field measurements. The LAI was assumed to vary throughout the rotation following a series of alternately increasing and decreasing sigmoid curves. The parameters of each sigmoid curve that allowed the best fit of simulated canopy reflectance to MODIS red and NIR reflectance data were obtained by minimization techniques. The second method was based on a linear relationship between the LAI and values of the GEneralized Soil Adjusted Vegetation Index (GESAVI), which was calibrated using destructive LAI measurements made in two seasons on Eucalyptus stands of different ages and productivity levels. The ability of each approach to reproduce field-measured LAI values was assessed, and uncertainty in the results and parameter sensitivities were examined. Both methods offered a good fit between measured and estimated LAI (R(2) = 0.80 and R(2) = 0.62 for the model inversion and GESAVI-based methods, respectively), but the GESAVI-based method overestimated the LAI at young ages. (C) 2010 Elsevier Inc. All rights reserved.
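The fitting step of the first method, choosing sigmoid parameters that minimize the misfit to the observations, can be illustrated with a toy least-squares grid search over a single increasing sigmoid. The parameter grids and data below are illustrative, and the actual study fits simulated reflectance through the radiative-transfer model rather than LAI directly:

```python
import numpy as np

def sigmoid_lai(t, lai_max, t_mid, rate=0.05):
    """Increasing sigmoid LAI trajectory over time t (days)."""
    return lai_max / (1.0 + np.exp(-rate * (t - t_mid)))

def fit_sigmoid(t, lai_obs, lai_grid, tmid_grid):
    """Grid-search least squares over sigmoid amplitude and midpoint."""
    best, best_err = None, np.inf
    for lm in lai_grid:
        for tm in tmid_grid:
            err = np.sum((sigmoid_lai(t, lm, tm) - lai_obs) ** 2)
            if err < best_err:
                best, best_err = (lm, tm), err
    return best

t = np.arange(0, 365, 16)                          # 16-day composite dates
lai_obs = sigmoid_lai(t, lai_max=3.0, t_mid=120.0) # noiseless toy observations
params = fit_sigmoid(t, lai_obs, [2.0, 2.5, 3.0, 3.5], [80.0, 120.0, 160.0])
```

In practice a gradient-based minimizer would replace the grid search, and several alternately increasing and decreasing sigmoids would be chained to cover a full rotation.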
Abstract:
The reconstruction of a complex scene from multiple images is a fundamental problem in the field of computer vision. Volumetric methods have proven to be a strong alternative to traditional correspondence-based methods due to their flexible visibility models. In this paper we analyse existing methods for volumetric reconstruction and identify three key properties of voxel colouring algorithms: a water-tight surface model, a monotonic carving order, and causality. We present a new Voxel Colouring algorithm which embeds all reconstructions of a scene into a single output. While modelling exact visibility for arbitrary camera locations, Embedded Voxel Colouring removes the need for the a priori threshold selection present in previous work. An efficient implementation is given along with results demonstrating the advantages of a posteriori threshold selection.
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful, and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can consist of text, categorical, or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market-basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied to decision-making problems, and sequential and time-series mining algorithms can be used for predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. So far, a number of classification algorithms have been put into practice. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision-tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include associative classification (Liu et al., 1998) and ensemble classification (Tumer, 1996).
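Of the families listed, the example-based k-nearest-neighbors classifier is simple enough to sketch directly. This is a generic numpy toy with made-up points, shown only to make the "classify by the labels of the nearest stored examples" idea concrete:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest_labels = y_train[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest_labels, return_counts=True)
    return labels[np.argmax(counts)]

# Two toy clusters around (0, 0) and (5, 5)
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y_train = np.array([0, 0, 0, 1, 1, 1])
```

Unlike decision trees or neural networks, no model is built at training time; all work happens at query time, which is why such methods are called example-based (or lazy) learners.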
Abstract:
Although immunosuppressive regimens are effective, rejection occurs in up to 50% of patients after orthotopic liver transplantation (OLT), and there is concern about side effects from long-term therapy. Knowledge of clinical and immunogenetic variables may allow tailoring of immunosuppressive therapy to patients according to their potential risks. We studied the association between transforming growth factor-beta, interleukin-10, and tumor necrosis factor alpha (TNF-alpha) gene polymorphisms and graft rejection and renal impairment in 121 white liver transplant recipients. Clinical variables were collected retrospectively, and creatinine clearance was estimated using the formula of Cockcroft and Gault. Biallelic polymorphisms were detected using polymerase chain reaction-based methods. Thirty-seven of 121 patients (30.6%) developed at least 1 episode of rejection. Multivariate analysis showed that Child-Pugh score (P = .001), immune-mediated liver disease (P = .018), normal pre-OLT creatinine clearance (P = .037), and fewer HLA class I mismatches (P = .038) were independently associated with rejection. Renal impairment occurred in 80% of patients and was moderate or severe in 39%. Clinical variables independently associated with renal impairment were female sex (P = .001), pre-OLT renal dysfunction (P = .0001), and a diagnosis of viral hepatitis (P = .0008). There was a significant difference in the frequency of TNF-alpha -308 alleles among the primary liver diseases. After adjustment for potential confounders and a Bonferroni correction, the association between the TNF-alpha -308 polymorphism and graft rejection approached significance (P = .06). Recipient cytokine genotypes do not have a major independent role in graft rejection or renal impairment after OLT. Additional studies of immunogenetic factors require analysis of large numbers of patients with appropriate phenotypic information to avoid population stratification, which may lead to inappropriate conclusions.
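The Cockcroft-Gault estimate used above for creatinine clearance is a one-line formula; a small sketch (inputs in years, kg, and mg/dL; output in mL/min):

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula;
    the result is multiplied by 0.85 for women."""
    crcl = (140.0 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return 0.85 * crcl if female else crcl

# A 40-year-old, 72 kg man with serum creatinine 1.0 mg/dL
crcl = cockcroft_gault(40, 72.0, 1.0)  # 100.0 mL/min
```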
Abstract:
Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that the correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.
Abstract:
Background - Marfan syndrome (MS) is a genetic disorder caused by a mutation in the fibrillin gene FBN1. Bicuspid aortic valve (BAV) is a congenital heart malformation of unknown cause. Both conditions are associated with ascending aortic aneurysm and premature death. This study examined the relationship between the secretion of the extracellular matrix proteins fibrillin, fibronectin, and tenascin and vascular smooth muscle cell (VSMC) apoptosis. The role of matrix metalloproteinase (MMP)-2 in VSMC apoptosis was studied in MS aneurysm. Methods and Results - Aneurysm tissue was obtained from patients undergoing surgery (MS: 4 M, 1 F, age 27-45 years; BAV: 3 M, 2 F, age 28-65 years). Normal aorta from subjects with nonaneurysmal disease was also collected (4 M, 1 F, age 23-93 years). MS and BAV aneurysm histology showed areas of cystic medial necrosis (CMN) without inflammatory infiltrate. Immunohistochemical study of cultured MS and BAV VSMCs showed intracellular accumulation and reduced extracellular distribution of fibrillin, fibronectin, and tenascin. Western blot showed no increase in expression of fibrillin, fibronectin, or tenascin in MS or BAV VSMCs and increased expression of MMP-2 in MS VSMCs. There was a 4-fold increase in loss of cultured VSMCs incubated in serum-free medium for 24 hours in both MS (27 +/- 8%) and BAV (32 +/- 14%) compared with control (7 +/- 5%). Conclusions - In MS and BAV there is alteration in both the amount and the quality of secreted proteins and an increased degree of VSMC apoptosis. Up-regulation of MMP-2 might play a role in VSMC apoptosis in MS VSMCs. The findings suggest the presence of a fundamental cellular abnormality in the BAV thoracic aorta, possibly of genetic origin.
Abstract:
This dissertation presents the development of a multimodal signal acquisition and processing platform. The proposed project falls within the development of multimodal interfaces for robotic devices intended for motor rehabilitation, adapting the control of these devices according to the user's intention. The developed interface acquires, synchronizes, and processes electroencephalographic (EEG) and electromyographic (EMG) signals together with signals from inertial measurement units (IMUs). Data were acquired in experiments with healthy subjects performing lower-limb motor tasks. The objective is to analyze movement intention, muscle activation, and the effective onset of the executed movements through the EEG, EMG, and IMU signals, respectively. To this end, an offline analysis was performed using processing techniques for the biological signals and for the signals from the inertial sensors, from which the knee joint angles were also measured throughout the movements. An experimental test protocol was proposed for the tasks performed. The results showed that the proposed system was able to acquire, synchronize, process, and classify the signals in combination. Analyses of the accuracy of the classifiers showed that the interface identified movement intention in 76.0 ± 18.2% of the movements. The largest mean movement-anticipation time, 716.0 ± 546.1 milliseconds, was obtained from the EEG signal; from the EMG signal alone, this value was 88.34 ± 67.28 milliseconds. The results of the biological-signal processing stages, the joint-angle measurements, and the accuracy and anticipation-time values are consistent with the current related literature.
Abstract:
In the initial stage of this work, two potentiometric methods were used to determine the salt (sodium chloride) content in bread and dough samples from several cities in the north of Portugal. A reference method (potentiometric precipitation titration) and a newly developed ion-selective chloride electrode (ISE) were applied. Both methods determine the sodium chloride content through the quantification of chloride. To evaluate the accuracy of the ISE, bread and respective dough samples were analyzed by both methods. Statistical analysis (0.05 significance level) indicated that the results of these methods did not differ significantly. Therefore the ISE is an adequate alternative for the determination of chloride in the analyzed samples. To compare the results of these chloride-based methods with a sodium-based method, sodium was quantified in the same samples by a reference method (atomic absorption spectrometry). Significant differences between the results were verified. In several cases the sodium chloride content exceeded the legal limit when the chloride-based methods were used, but when the sodium-based method was applied this was not the case. This could lead to the erroneous application of fines and therefore the authorities should supply additional information regarding the analytical procedure for this particular control.
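The two routes to a sodium chloride figure differ only in the stoichiometric factor applied: chloride-based methods scale the measured chloride mass by M(NaCl)/M(Cl), while sodium-based methods scale the measured sodium mass by M(NaCl)/M(Na). A small sketch of the conversions (molar masses in g/mol):

```python
M_NA, M_CL, M_NACL = 22.990, 35.453, 58.443  # molar masses, g/mol

def nacl_from_chloride(chloride_g):
    """NaCl equivalent of a measured chloride mass (chloride-based methods)."""
    return chloride_g * M_NACL / M_CL

def nacl_from_sodium(sodium_g):
    """NaCl equivalent of a measured sodium mass (sodium-based method)."""
    return sodium_g * M_NACL / M_NA
```

If part of a sample's chloride comes from salts other than NaCl, or part of its sodium from non-chloride sources, the two estimates diverge, which is one way the chloride-based and sodium-based results can fall on opposite sides of a legal limit.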