916 results for Topology-based methods
Abstract:
Recent research trends in computer-aided drug design have shown an increasing interest in advanced approaches able to deal with large amounts of data. This demand arose from the awareness of the complexity of biological systems and from the availability of data provided by high-throughput technologies. As a consequence, drug research has embraced this paradigm shift, exploiting approaches such as those based on networks. Indeed, the process of drug discovery can benefit from the implementation of network-based methods at different steps, from target identification to drug repurposing. From this broad range of opportunities, this thesis focuses on three main topics: (i) chemical space networks (CSNs), which are designed to represent and characterize bioactive compound data sets; (ii) drug-target interaction (DTI) prediction through a network-based algorithm that predicts missing links; (iii) COVID-19 drug research, explored by implementing COVIDrugNet, a network-based tool for COVID-19 related drugs. The main highlight emerging from this thesis is that network-based approaches are useful methodologies for tackling different issues in drug research. In detail, CSNs are valuable coordinate-free, graphically accessible representations of the structure-activity relationships of bioactive compound data sets, especially for medium-to-large libraries of molecules. DTI prediction through the random walk with restart algorithm on heterogeneous networks can be a helpful method for target identification. COVIDrugNet is an example of the usefulness of network-based approaches for studying drugs related to a specific condition, i.e., COVID-19, and the same ‘systems-based’ approaches can be used for other diseases. To conclude, network-based tools are proving suitable for many applications in drug research and provide the opportunity to model and analyze diverse drug-related data sets, even large ones, while also integrating multi-domain information.
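As an illustration of the link-prediction step mentioned above, the following is a minimal sketch of a random walk with restart on a small drug-target network. The adjacency matrix, restart probability and node indices are invented for the example and are not taken from the thesis.

# Minimal sketch of random walk with restart (RWR) for link prediction on a
# drug-target network; adjacency matrix, restart probability and node roles
# below are illustrative assumptions, not data from the thesis.
import numpy as np

def rwr_scores(adjacency, seed_index, restart_prob=0.3, tol=1e-8, max_iter=1000):
    """Return steady-state visiting probabilities of an RWR started at seed_index."""
    # Column-normalize the adjacency matrix to obtain transition probabilities.
    col_sums = adjacency.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # avoid division by zero for isolated nodes
    transition = adjacency / col_sums
    n = adjacency.shape[0]
    restart = np.zeros(n)
    restart[seed_index] = 1.0              # restart vector concentrated on the query drug
    p = restart.copy()
    for _ in range(max_iter):
        p_next = (1 - restart_prob) * transition @ p + restart_prob * restart
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# Toy heterogeneous network: nodes 0-2 are drugs, nodes 3-5 are targets.
A = np.array([[0, 1, 0, 1, 0, 0],
              [1, 0, 1, 1, 1, 0],
              [0, 1, 0, 0, 0, 1],
              [1, 1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0]], dtype=float)
scores = rwr_scores(A, seed_index=0)
print("Candidate targets ranked for drug 0:", np.argsort(-scores[3:]) + 3)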
Abstract:
In this paper, space adaptivity is introduced to control the error in the numerical solution of hyperbolic systems of conservation laws. The reference numerical scheme is a new version of the discontinuous Galerkin method, which uses an implicit diffusive term in the direction of the streamlines for stability purposes. The decision whether to refine or coarsen the grid at a given location is taken according to the magnitude of the wavelet coefficients, which are indicators of the local smoothness of the numerical solution. Numerical solutions of the nonlinear Euler equations illustrate the efficiency of the method.
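The refinement criterion described above can be illustrated with a short sketch: detail (wavelet) coefficients of a one-dimensional profile are thresholded to flag cells for refinement or coarsening. The test profile, the Haar wavelet and the tolerances are assumptions made only for this example (PyWavelets is used for the transform).

# Minimal sketch of a wavelet-based refinement indicator: cells whose detail
# coefficients exceed a threshold are flagged for refinement, cells well below
# it for coarsening.  Profile and tolerances are assumptions for illustration.
import numpy as np
import pywt

x = np.linspace(0.0, 1.0, 256)
u = np.where(x < 0.5, 1.0, 0.0) + 0.05 * np.sin(8 * np.pi * x)   # solution with a discontinuity

# One level of the Haar transform: detail coefficients measure local smoothness.
_, detail = pywt.dwt(u, 'haar')

refine_tol, coarsen_tol = 1e-2, 1e-4
refine = np.abs(detail) > refine_tol          # large coefficients -> poorly resolved -> refine
coarsen = np.abs(detail) < coarsen_tol        # tiny coefficients -> locally smooth -> coarsen

print(f"{refine.sum()} coarse cells flagged for refinement, "
      f"{coarsen.sum()} flagged for coarsening")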
Abstract:
The application of laser-induced breakdown spectrometry (LIBS) for the direct analysis of plant materials is a great challenge that still requires effort for its development and validation. To this end, a series of experimental approaches has been carried out to show that LIBS can be used as an alternative to methods based on wet acid digestion for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, compared with univariate regression based on line emission intensities. In the present work, the performance of univariate and multivariate calibration, the latter based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest were the best conditions for the analysis. In this particular application, the models showed similar performance, but PLSR seemed to be more robust owing to a lower occurrence of outliers in comparison with the univariate method. The data suggest that efforts dealing with sample presentation and the fitness of standards for LIBS analysis must be made in order to fulfil the boundary conditions for matrix-independent development and validation.
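The univariate-versus-PLSR comparison can be sketched as follows on simulated "LIBS-like" spectra; the spectra, concentrations, spectral window and number of latent variables are invented for illustration and do not reproduce the study's data.

# Illustrative comparison of univariate and PLS calibration on synthetic
# "LIBS-like" spectra (one analyte line plus background and an interferent).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples, n_channels, analyte_line = 40, 200, 120
conc = rng.uniform(1.0, 50.0, n_samples)                      # analyte concentration
spectra = rng.normal(0.0, 0.5, (n_samples, n_channels))       # background/matrix noise
spectra[:, analyte_line] += 0.8 * conc                        # emission line of the analyte
spectra[:, 60] += 0.3 * rng.uniform(1, 30, n_samples)         # interfering line

# Univariate calibration: intensity of the single analyte line vs concentration.
uni_pred = cross_val_predict(LinearRegression(),
                             spectra[:, [analyte_line]], conc, cv=5)

# Multivariate calibration: PLS on a spectral window around the analyte line.
pls_pred = cross_val_predict(PLSRegression(n_components=3),
                             spectra[:, 100:140], conc, cv=5).ravel()

print(f"R2 univariate: {r2_score(conc, uni_pred):.3f}")
print(f"R2 PLSR:       {r2_score(conc, pls_pred):.3f}")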
Abstract:
Steady-state and time-resolved fluorescence measurements are reported for several crude oils and their saturates, aromatics, resins, and asphaltenes (SARA) fractions, with the saturates, aromatics and resins isolated from the maltene after pentane precipitation of the asphaltenes. There is a clear relationship between the American Petroleum Institute (API) gravity of the crude oils and their fluorescence emission intensity and maxima. Dilution of the crude oil samples with cyclohexane results in a significant increase in emission intensity and a blue shift, which is a clear indication of energy-transfer processes between the emissive chromophores present in the crude oil. Both the fluorescence spectra and the mean fluorescence lifetimes of the three SARA fractions and their mixtures indicate that the aromatics and resins are the major contributors to the emission of crude oils. Total synchronous fluorescence scan (TSFS) spectral maps are preferable to steady-state fluorescence spectra for discriminating between the fractions, making TSFS maps a particularly interesting choice for the development of fluorescence-based methods for the characterization and classification of crude oils. More detailed studies, using a much wider range of excitation and emission wavelengths, are necessary to determine the utility of time-resolved fluorescence (TRF) data for this purpose. Preliminary models constructed using TSFS spectra from 21 crude oil samples show a very good correlation (R² > 0.88) between the calculated and measured values of API gravity and the SARA fraction concentrations. The use of models based on a fast fluorescence measurement may thus be an alternative to tedious and time-consuming chemical analysis in refineries.
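For readers unfamiliar with TSFS, the sketch below shows how a synchronous map is assembled from an excitation-emission matrix by reading, for each offset Δλ, the intensity at (λ_ex, λ_ex + Δλ). The wavelength grids, offsets and the placeholder matrix are assumptions for illustration only.

# Minimal sketch of assembling a total synchronous fluorescence scan (TSFS)
# map from an excitation-emission matrix: for each offset dl, the synchronous
# intensity is I(ex, ex + dl).  The EEM here is random placeholder data.
import numpy as np

ex = np.arange(250, 501, 5)              # excitation wavelengths (nm), assumed grid
em = np.arange(260, 701, 5)              # emission wavelengths (nm), assumed grid
eem = np.random.rand(ex.size, em.size)   # placeholder excitation-emission matrix

offsets = np.arange(20, 201, 20)         # wavelength offsets (nm) spanning the map
tsfs = np.full((ex.size, offsets.size), np.nan)
for j, dl in enumerate(offsets):
    target_em = ex + dl                  # emission wavelength paired with each excitation
    for i, lam in enumerate(target_em):
        k = np.searchsorted(em, lam)
        if k < em.size and em[k] == lam:
            tsfs[i, j] = eem[i, k]       # synchronous intensity at (ex_i, ex_i + dl)

print("TSFS map shape (excitation x offset):", tsfs.shape)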
Abstract:
The leaf area index (LAI) of fast-growing Eucalyptus plantations is highly dynamic both seasonally and interannually, and is spatially variable depending on pedo-climatic conditions. LAI is very important in determining the carbon and water balance of a stand, but is difficult to measure over a complete stand rotation and at large scales. Remote-sensing methods allowing the retrieval of LAI time series with accuracy and precision are therefore necessary. Here, we tested two methods for LAI estimation from MODIS 250 m resolution red and near-infrared (NIR) reflectance time series. The first method involved the inversion of a coupled model of leaf reflectance and transmittance (PROSPECT4), soil reflectance (SOILSPECT) and canopy radiative transfer (4SAIL2). Model parameters other than the LAI were either fixed to measured constant values, or allowed to vary seasonally and/or with stand age according to trends observed in field measurements. The LAI was assumed to vary throughout the rotation following a series of alternately increasing and decreasing sigmoid curves. The parameters of each sigmoid curve that gave the best fit of simulated canopy reflectance to the MODIS red and NIR reflectance data were obtained by minimization techniques. The second method was based on a linear relationship between the LAI and values of the GEneralized Soil Adjusted Vegetation Index (GESAVI), which was calibrated using destructive LAI measurements made in two seasons on Eucalyptus stands of different ages and productivity levels. The ability of each approach to reproduce field-measured LAI values was assessed, and uncertainty in the results and parameter sensitivities were examined. Both methods offered a good fit between measured and estimated LAI (R² = 0.80 and R² = 0.62 for the model inversion and GESAVI-based methods, respectively), but the GESAVI-based method overestimated the LAI at young ages.
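A hedged sketch of the second (GESAVI-based) method follows: the index is computed from red and NIR reflectance using its usual form (NIR − b·RED − a)/(RED + Z), with soil-line slope b, intercept a and adjustment Z, and a linear LAI-GESAVI relationship is then fitted. All numerical values are placeholders, not the calibration reported in the study.

# Hedged sketch of the GESAVI-based approach: compute the index from red/NIR
# reflectance and fit a linear LAI-GESAVI relationship.  Soil-line parameters,
# reflectances and LAI values below are placeholders.
import numpy as np

def gesavi(nir, red, a=0.03, b=1.1, z=0.35):
    return (nir - b * red - a) / (red + z)

# Placeholder field calibration data: reflectances and destructively measured LAI.
red = np.array([0.08, 0.06, 0.05, 0.04, 0.03])
nir = np.array([0.30, 0.34, 0.38, 0.41, 0.44])
lai_measured = np.array([1.5, 2.2, 2.9, 3.4, 3.9])

g = gesavi(nir, red)
slope, intercept = np.polyfit(g, lai_measured, 1)   # linear LAI-GESAVI relationship
lai_estimated = intercept + slope * gesavi(nir=0.36, red=0.05)
print(f"LAI = {intercept:.2f} + {slope:.2f} * GESAVI; example estimate: {lai_estimated:.2f}")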
Abstract:
The reconstruction of a complex scene from multiple images is a fundamental problem in the field of computer vision. Volumetric methods have proven to be a strong alternative to traditional correspondence-based methods due to their flexible visibility models. In this paper we analyse existing methods for volumetric reconstruction and identify three key properties of voxel colouring algorithms: a water-tight surface model, a monotonic carving order, and causality. We present a new Voxel Colouring algorithm which embeds all reconstructions of a scene into a single output. While modelling exact visibility for arbitrary camera locations, Embedded Voxel Colouring removes the need for the a priori threshold selection present in previous work. An efficient implementation is given, along with results demonstrating the advantages of a posteriori threshold selection.
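The core test behind voxel colouring methods can be sketched as a photo-consistency check: a voxel is projected into every image in which it is visible and kept if the observed colours agree. The toy cameras, images and variance threshold below are assumptions for illustration and do not reproduce the Embedded Voxel Colouring algorithm itself.

# Illustrative photo-consistency test: project a voxel centre into every image
# in which it is visible and accept it if the colour variance stays below a
# threshold.  Cameras, images and the threshold are toy assumptions.
import numpy as np

def project(P, X):
    """Project homogeneous 3D point X (4,) with a 3x4 camera matrix P to pixel coords."""
    x = P @ X
    return x[:2] / x[2]

def photo_consistent(voxel, cameras, images, visible, threshold=30.0):
    samples = []
    for P, img, vis in zip(cameras, images, visible):
        if not vis:
            continue                                   # occluded view: skip
        u, v = project(P, voxel)
        u, v = int(round(u)), int(round(v))
        if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
            samples.append(img[v, u].astype(float))
    if len(samples) < 2:
        return True                                    # nothing to contradict the voxel
    return np.mean(np.var(np.stack(samples), axis=0)) < threshold

# Toy setup: two nearby cameras looking down the z axis, two uniform grey images.
K = np.array([[100.0, 0, 64], [0, 100.0, 64], [0, 0, 1]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([np.eye(3), np.array([[0.1], [0.0], [0.0]])])
imgs = [np.full((128, 128, 3), 128, dtype=np.uint8)] * 2
voxel = np.array([0.0, 0.0, 2.0, 1.0])
print("consistent:", photo_consistent(voxel, [P0, P1], imgs, visible=[True, True]))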
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured. Data can be text, categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are now in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include associative classification (Liu et al., 1998) and ensemble classification (Tumer, 1996).
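As a brief illustration of three of the classifier families cited above (decision trees, k-nearest neighbors and SVM), the following sketch runs them on a standard toy dataset; the scores are for demonstration only, not a benchmark.

# Cross-validated comparison of three classifier families on the iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("k-NN (k=5)", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM (RBF)", SVC())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:14s} mean 5-fold accuracy: {scores.mean():.3f}")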
Abstract:
Although immunosuppressive regimens are effective, rejection occurs in up to 50% of patients after orthotopic liver transplantation (OLT), and there is concern about side effects from long-term therapy. Knowledge of clinical and immunogenetic variables may allow immunosuppressive therapy to be tailored to patients according to their potential risks. We studied the association between transforming growth factor-beta, interleukin-10, and tumor necrosis factor alpha (TNF-alpha) gene polymorphisms and graft rejection and renal impairment in 121 white liver transplant recipients. Clinical variables were collected retrospectively, and creatinine clearance was estimated using the formula of Cockcroft and Gault. Biallelic polymorphisms were detected using polymerase chain reaction-based methods. Thirty-seven of 121 patients (30.6%) developed at least 1 episode of rejection. Multivariate analysis showed that Child-Pugh score (P = .001), immune-mediated liver disease (P = .018), normal pre-OLT creatinine clearance (P = .037), and fewer HLA class I mismatches (P = .038) were independently associated with rejection. Renal impairment occurred in 80% of patients and was moderate or severe in 39%. Clinical variables independently associated with renal impairment were female sex (P = .001), pre-OLT renal dysfunction (P = .0001), and a diagnosis of viral hepatitis (P = .0008). There was a significant difference in the frequency of TNF-alpha -308 alleles among the primary liver diseases. After adjustment for potential confounders and a Bonferroni correction, the association between the TNF-alpha -308 polymorphism and graft rejection approached significance (P = .06). Recipient cytokine genotypes do not have a major independent role in graft rejection or renal impairment after OLT. Additional studies of immunogenetic factors require analysis of large numbers of patients with appropriate phenotypic information to avoid population stratification, which may lead to inappropriate conclusions.
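The creatinine clearance estimate referred to above (Cockcroft-Gault) can be written as a small helper; the example inputs are illustrative, and serum creatinine is assumed to be expressed in mg/dL with weight in kg.

# Cockcroft-Gault estimate of creatinine clearance; inputs are illustrative.
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Estimated creatinine clearance in mL/min."""
    crcl = ((140 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl   # 15% reduction for female patients

print(f"{cockcroft_gault(55, 70, 1.1, female=True):.1f} mL/min")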
Abstract:
Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses at a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.
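The idea of injecting the derivative of the source waveform can be illustrated with a generic one-dimensional FDTD loop; this sketch is not the article's MRI gradient model, and the grid size, Courant number and Gaussian pulse parameters are assumptions.

# Generic 1D FDTD leapfrog update with a soft source driven by the time
# derivative of a Gaussian pulse; all parameters are illustrative.
import numpy as np

nx, nt = 200, 500
ez = np.zeros(nx)
hy = np.zeros(nx)
courant = 0.5                     # Courant number (c*dt/dx), assumed

t0, spread = 40.0, 12.0
def gaussian(t):
    return np.exp(-0.5 * ((t - t0) / spread) ** 2)

def d_gaussian(t):                # analytic time derivative of the Gaussian pulse
    return -(t - t0) / spread**2 * gaussian(t)

src = nx // 2
for t in range(nt):
    hy[:-1] += courant * (ez[1:] - ez[:-1])        # update magnetic field
    ez[1:] += courant * (hy[1:] - hy[:-1])         # update electric field
    ez[src] += d_gaussian(t)                       # soft source: derivative waveform

print("peak |Ez| after", nt, "steps:", np.abs(ez).max())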
Abstract:
In the initial stage of this work, two potentiometric methods were used to determine the salt (sodium chloride) content of bread and dough samples from several cities in the north of Portugal. A reference method (potentiometric precipitation titration) and a newly developed chloride ion-selective electrode (ISE) were applied. Both methods determine the sodium chloride content through the quantification of chloride. To evaluate the accuracy of the ISE, bread samples and their respective doughs were analyzed by both methods. Statistical analysis (0.05 significance level) indicated that the results of the two methods did not differ significantly; the ISE is therefore an adequate alternative for the determination of chloride in the analyzed samples. To compare the results of these chloride-based methods with a sodium-based method, sodium was quantified in the same samples by a reference method (atomic absorption spectrometry). Significant differences between the results were found. In several cases the sodium chloride content exceeded the legal limit when the chloride-based methods were used, but not when the sodium-based method was applied. This could lead to the erroneous application of fines; the authorities should therefore specify the analytical procedure to be used for this particular control.
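The statistical comparison described above amounts to a paired test on the same samples measured by two methods; a minimal sketch is given below, with invented placeholder values rather than the bread and dough data.

# Paired comparison of two methods on the same samples at the 0.05 level.
import numpy as np
from scipy import stats

reference = np.array([1.42, 1.38, 1.55, 1.47, 1.60, 1.51])   # g NaCl / 100 g, method A
ise       = np.array([1.40, 1.41, 1.53, 1.49, 1.58, 1.52])   # g NaCl / 100 g, method B

t_stat, p_value = stats.ttest_rel(reference, ise)
alpha = 0.05
print(f"p = {p_value:.3f};", "no significant difference" if p_value > alpha
      else "significant difference", "at the 0.05 level")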
Abstract:
In the field of appearance-based robot localization, the mainstream approach uses a quantized representation of local image features. An alternative strategy is the exploitation of raw feature descriptors, thus avoiding approximations due to quantization. In this work, the quantized and non-quantized representations are compared with respect to their discriminativity, in the context of the robot global localization problem. Having demonstrated the advantages of the non-quantized representation, the paper proposes mechanisms to reduce the computational burden this approach would carry when applied in its simplest form. This reduction is achieved through a hierarchical strategy which gradually discards candidate locations and by exploiting two simplifying assumptions about the training data. The potential of the non-quantized representation is exploited by resorting to the entropy-discriminativity relation. The idea behind this approach is that the non-quantized representation facilitates the assessment of the distinctiveness of features through the entropy measure. Building on this finding, the robustness of the localization system is enhanced by modulating the importance of features according to the entropy measure. Experimental results support the effectiveness of this approach, as well as the validity of the proposed methods for reducing computation.
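The entropy-based modulation of feature importance can be sketched as follows: each feature's similarities over candidate locations are normalized into a distribution, its entropy is computed, and low-entropy (distinctive) features receive higher weights. The similarity matrix is a random placeholder, not data from the paper.

# Minimal sketch of the entropy-discriminativity idea: features whose
# similarity mass spreads over many locations (high entropy) are down-weighted.
import numpy as np

rng = np.random.default_rng(1)
similarities = rng.random((50, 20))            # 50 query features x 20 candidate locations

# Normalize each feature's similarities into a distribution over locations.
p = similarities / similarities.sum(axis=1, keepdims=True)
entropy = -(p * np.log(p + 1e-12)).sum(axis=1)

# Modulate feature importance: low entropy (distinctive) -> weight near 1.
max_entropy = np.log(similarities.shape[1])
weights = 1.0 - entropy / max_entropy

location_scores = (weights[:, None] * p).sum(axis=0)
print("most likely location:", int(np.argmax(location_scores)))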
Abstract:
In this paper we propose the use of least-squares based methods for obtaining digital rational approximations (IIR filters) to fractional-order integrators and differentiators of type s^α, α ∈ ℝ. The adoption of the Padé, Prony and Shanks techniques is suggested. These techniques are usually applied in the signal modeling of deterministic signals. They yield suboptimal solutions to the problem, requiring only the solution of a set of linear equations. The results reveal that the least-squares approach gives similar or superior approximations in comparison with other widely used methods. Their effectiveness is illustrated, both in the time and frequency domains, as well as in the fractional differintegration of some standard time-domain functions.
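A generic least-squares fit of this kind can be sketched with a Levy-type linearized formulation: the target is the Tustin (bilinear) image of s^α on the unit circle, and the filter coefficients are obtained from one linear least-squares solve. The orders, sampling period and α are illustrative, and this is a stand-in for, not a reproduction of, the Padé, Prony and Shanks variants discussed in the paper.

# Levy-type linearized least-squares fit of an IIR filter to the Tustin image
# of s**alpha; orders, sampling period and alpha are illustrative assumptions.
import numpy as np

alpha, T = 0.5, 0.01                      # half-order differentiator, sampling period
M = N = 4                                 # numerator and denominator orders
w = np.linspace(0.01, np.pi - 0.01, 400)  # digital frequencies (rad/sample)
z = np.exp(1j * w)

# Target: Tustin (bilinear) image of s**alpha evaluated on the unit circle.
H = ((2.0 / T) * (1 - 1 / z) / (1 + 1 / z)) ** alpha

# Linearized equations: B(z^-1) - H * (A(z^-1) - 1) = H, with a0 fixed to 1.
cols_b = [z ** (-m) for m in range(M + 1)]
cols_a = [-H * z ** (-n) for n in range(1, N + 1)]
E = np.column_stack(cols_b + cols_a)
A_real = np.vstack([E.real, E.imag])
rhs = np.concatenate([H.real, H.imag])
theta, *_ = np.linalg.lstsq(A_real, rhs, rcond=None)
b, a = theta[:M + 1], np.concatenate([[1.0], theta[M + 1:]])

fit = (np.column_stack(cols_b) @ b) / (np.column_stack(
    [np.ones_like(z)] + [z ** (-n) for n in range(1, N + 1)]) @ a)
print("max relative magnitude error:", np.max(np.abs(np.abs(fit) - np.abs(H)) / np.abs(H)))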
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation to Obtain the Degree of Master in Biomedical Engineering
Abstract:
This work presents the development of a low-cost sensor device for the point-of-care diagnosis of breast cancer, made with new synthetic biomimetic materials embedded in plasticized poly(vinyl chloride) (PVC) membranes for subsequent potentiometric detection. The concept was applied to a conventional biomarker of breast cancer, Breast Cancer Antigen (CA15-3). The new biomimetic material was obtained by molecular imprinting technology, in which a plastic antibody was obtained by polymerizing around the biomarker, which acted as an obstacle to the growth of the polymeric matrix. The imprinted polymer was synthesized by electropolymerization on FTO conductive glass using cyclic voltammetry (40 cycles between -0.2 and 1.0 V). The polymerization mixture included the monomer (pyrrole, 5.0×10⁻³ mol/L) and the protein (CA15-3, 100 U/mL), both prepared in phosphate-buffered saline (PBS) at pH 7.2 with 1% ethylene glycol. The biomarker was removed from the imprinted sites by the proteolytic action of proteinase K. The biomimetic material was employed in the construction of potentiometric sensors and tested with regard to its affinity and selectivity for binding CA15-3, by checking the analytical performance of the obtained electrodes. For this purpose, the biomimetic material was dispersed in plasticized PVC membranes, with or without a lipophilic ionic additive, and applied to a solid graphite conductive support. The analytical behaviour was evaluated in buffer and in synthetic serum, with regard to linear range, limit of detection, repeatability, and reproducibility. This antibody-like material was tested in synthetic serum, and good results were obtained. The best devices were able to detect CA15-3 at levels five times lower than those required for clinical use. Selectivity assays were also performed, showing that the various serum components did not interfere with the determination of this biomarker. Overall, the potentiometric methods showed several advantages over other methods reported in the literature. The analytical process was simple, providing fast responses for a reduced amount of analyte, with low cost and feasible miniaturization. It also allowed detection over a wide range of concentrations, reducing the effort required in prior sample pretreatment stages.
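The potentiometric read-out described above is conventionally treated by fitting a (near-)Nernstian calibration, E = E0 + S·log10(C), over the linear range; a minimal sketch follows, with invented potentials and concentrations rather than the CA15-3 calibration of this work.

# Fit of a potentiometric calibration curve and back-calculation of an unknown;
# the standards and readings below are placeholder values.
import numpy as np

conc = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])      # U/mL, assumed standards
emf  = np.array([212.0, 188.0, 165.0, 141.0, 118.0])  # mV, assumed readings

slope, e0 = np.polyfit(np.log10(conc), emf, 1)
print(f"calibration: E = {e0:.1f} mV + ({slope:.1f} mV/decade) * log10(C)")

# Back-calculate the concentration of an unknown sample from its potential.
emf_sample = 150.0
conc_sample = 10 ** ((emf_sample - e0) / slope)
print(f"estimated concentration: {conc_sample:.1f} U/mL")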