916 results for Data Mining and its Application
Abstract:
A xylanase was cloned from Aspergillus niveus and successfully expressed in Aspergillus nidulans (XAN). The full-length gene consisted of 890 bp and encoded a mature protein of 275 amino acids with a calculated mass of 31.3 kDa. The deduced amino acid sequence was highly homologous to xylanases belonging to family 11 of the glycoside hydrolases. The recombinant protein was purified to electrophoretic homogeneity by anion-exchange chromatography and gel filtration. The pH and temperature optima of the recombinant enzyme were 5.0 and 65 degrees C, respectively. The thermal stability of the recombinant xylanase was greatly improved by covalent immobilization on glyoxyl agarose, with 91.4% residual activity after 180 min at 60 degrees C, whereas the free xylanase showed a half-life of 9.9 min at the same temperature. Affinity chromatography on Concanavalin A- and Jacalin-agarose columns followed by SDS-PAGE analyses showed that XAN carries both O- and N-glycans. XAN promotes hydrolysis of xylan into xylobiose, xylotriose and xylotetraose. Intermediate degradation of xylan into xylo-oligomers is appealing for functional foods, as the beneficial effect of oligosaccharides on the gastrointestinal microflora includes preventing the proliferation of pathogenic intestinal bacteria and facilitating the digestion and absorption of nutrients. © 2011 Elsevier Ltd. All rights reserved.
Abstract:
OBJECTIVE: We present observations of the anatomy of the sylvian fissure region and their clinical application in neuroimaging, microsurgery for middle cerebral artery aneurysms and insular lesions, frontobasal resections, and epilepsy surgery. METHODS: Sixty adult cadaveric hemispheres and 12 adult cadaveric heads were studied after perfusion of the arteries and veins with colored latex. The anatomic information was applied in more than 200 microsurgeries in and around the sylvian fissure region in the past 15 years. RESULTS: The sylvian fissure extends from the basal to the lateral surface of the brain and presents 2 compartments on each surface, 1 superficial (the stem and its rami) and 1 deep (anterior and lateral operculoinsular compartments). The temporal operculum is in opposition to the frontal and parietal opercula (planum polare versus inferior frontal and precentral gyri, Heschl's gyrus versus postcentral gyrus, planum temporale versus supramarginal gyrus). The inferior frontal, precentral, and postcentral gyri cover the anterior, middle, and posterior thirds of the lateral surface of the insula, respectively. The pars triangularis covers the apex of the insula, located immediately distal to the genu of the middle cerebral artery. The anatomic information presented in this article finds clinical application in angiography, middle cerebral artery aneurysm surgery, insular resection, frontobasal resection, amygdalohippocampectomy, and hemispherotomy. CONCLUSION: The anatomic relationships of the sylvian fissure region can be helpful in preoperative planning and can serve as reliable intraoperative navigation landmarks in microsurgery involving that region.
Abstract:
Mexiletine (MEX), hydroxymethylmexiletine (HMM) and p-hydroxymexiletine (PHM) were analyzed in rat plasma by LC-MS/MS. The plasma samples were prepared by liquid-liquid extraction using methyl tert-butyl ether as the extracting solvent. The MEX, HMM, and PHM enantiomers were resolved on a Chiralpak® AD column. Validation of the method showed relative standard deviations (precision) and relative errors (accuracy) of less than 15% for all analytes studied. Quantification limits were 0.5 ng/ml for the MEX enantiomers and 0.2 ng/ml for the HMM and PHM enantiomers. The validated method was successfully applied to quantify the enantiomers of MEX and its metabolites in plasma samples of rats (n = 6) treated with a single oral dose of racemic MEX. Chirality 21:648-656, 2009. © 2008 Wiley-Liss, Inc.
Abstract:
We present whole-rock and zircon rare earth element (REE) data from two early Archaean gneisses (3.81 Ga and 3.64 Ga) from the Itsaq gneiss complex, south-west Greenland. Both gneisses represent extremely rare examples of unaltered, fresh and relatively undeformed igneous rocks of such antiquity. Cathodoluminescence imaging of their zircons indicates a single crystallisation episode, with no evidence for either later metamorphic and/or anatectic reworking or inheritance of earlier grains. Uniform, single-population U/Pb age data confirm the structural simplicity of these zircons. One sample, a 3.64 Ga granodioritic gneiss from the Godthåbsfjord, yields a chondrite-normalised REE pattern with a positive slope from La to Lu as well as a substantial positive Ce anomaly and a slight negative Eu anomaly, features generally considered typical of igneous zircon. In contrast, the second sample, a 3.81 Ga tonalite from south of the Isua Greenstone Belt, has variable but generally much higher light REE abundances, with similar middle to heavy REE. Calculation of zircon/melt distribution coefficients (D_REE(zircon/melt)) from each sample yields markedly different values for the trivalent REE (i.e. with Ce and Eu omitted), and simple application of one set of D_REE(zircon/melt) to model the melt composition for the other sample yields concentrations that are in error by up to two orders of magnitude for the light REE (La-Nd). The observed light REE overabundance in the 3.81 Ga tonalite is a commonly observed feature in terrestrial zircons, for which a number of explanations ranging from lattice strain to disequilibrium crystallisation have been proposed and are further investigated herein. Regardless of the cause of the light REE overabundance, our study shows that simple application of zircon/melt distribution coefficients is not an unambiguous method for ascertaining original melt composition. In this context, recent studies that use REE data to claim that > 4.3 Ga Hadean detrital zircons originally crystallised from an evolved magma, in turn suggesting the operation of geological processes in the early Earth analogous to those of the present day (e.g. subduction and melting of hydrated oceanic crust), must be regarded with caution. Indeed, comparison of terrestrial Hadean and > 3.9 Ga lunar highland zircons shows remarkable similarities in the light REE, even though the subduction processes invoked to explain the terrestrial zircons have never operated on the Moon. © 2002 Elsevier Science B.V. All rights reserved.
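For reference, the zircon/melt distribution coefficient discussed above follows the standard definition, from which a melt composition is back-calculated (the notation here is mine, not the paper's):

```latex
% Zircon/melt partition coefficient for a given REE, and the melt
% composition back-calculated from a measured zircon composition.
D_{\mathrm{REE}}^{\mathrm{zircon/melt}}
  = \frac{C_{\mathrm{REE}}^{\mathrm{zircon}}}{C_{\mathrm{REE}}^{\mathrm{melt}}}
\quad\Longrightarrow\quad
C_{\mathrm{REE}}^{\mathrm{melt}}
  = \frac{C_{\mathrm{REE}}^{\mathrm{zircon}}}{D_{\mathrm{REE}}^{\mathrm{zircon/melt}}}
```

The paper's point is that D values derived from one sample, applied through the right-hand relation to another sample's zircons, can misestimate the light REE of the melt by up to two orders of magnitude.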
Abstract:
Background. Based on the well-described excess of schizophrenia births in winter and spring, we hypothesised that individuals with schizophrenia (a) would be more likely to be born during periods of decreased perinatal sunshine, and (b) those born during periods of less sunshine would have an earlier age of first registration. Methods. We undertook an ecological analysis of long-term trends in perinatal sunshine duration and schizophrenia birth rates based on two mental health registers (Queensland, Australia, n = 6630; The Netherlands, n = 24,474). For each of the 480 months between 1931 and 1970, the agreement between the slopes of the trends in the psychosis and long-term sunshine duration series was assessed. Age at first registration was assessed by quartiles of long-term trends in perinatal sunshine duration. Males and females were assessed separately. Results. Both the Dutch and Australian data showed a statistically significant association between falling long-term trends in sunshine duration around the time of birth and rising schizophrenia birth rates for males only. In both the Dutch and Australian data there were significant associations between earlier age of first registration and reduced long-term trends in sunshine duration around the time of birth for both males and females. Conclusions. A measure of long-term trends in perinatal sunshine duration was associated with two epidemiological features of schizophrenia in two separate data sets. Exposures related to sunshine duration warrant further consideration in schizophrenia research. © 2002 Elsevier Science B.V. All rights reserved.
Abstract:
When electrostatic spraying is used correctly, it provides advantages over conventional systems; however, many factors can affect its efficiency. The objective of this study was therefore to evaluate the charge/mass ratio (Q/M) at different spraying distances (0, 1, 2, 3, 4 and 5 m) and the liquid deposition efficiency on the target. The Q/M ratio was evaluated using the Faraday cage method, and deposition efficiency was evaluated with artificial targets positioned longitudinally and transversely to the spray jet. It was found that the spraying distance affects the Q/M ratio and, consequently, the liquid deposition efficiency. At the distance closest to the target the Q/M ratio was 4.11 mC/kg, and at distances of 1, 2, 3, 4 and 5 m the ratio decreased to 1.38, 0.64, 0.31, 0.17 and 0.005 mC/kg, respectively. Liquid deposition with the electrostatic system was affected by target orientation and spraying distance: targets positioned transversely to the liquid jet showed no improvement in deposition, whereas longitudinal positioning increased deposition up to a distance of 3 m.
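The quoted decline of Q/M with distance is roughly exponential over the first few metres. As a purely illustrative exercise (not part of the study), the values above can be fitted with a decay curve; note the 5 m point falls off faster than a single exponential predicts:

```python
# Sketch: fit an exponential decay Q/M = a * exp(-b * d) to the
# charge/mass values quoted in the abstract (illustrative analysis,
# not performed by the original authors).
import numpy as np
from scipy.optimize import curve_fit

distance = np.array([0, 1, 2, 3, 4, 5])                      # m
q_over_m = np.array([4.11, 1.38, 0.64, 0.31, 0.17, 0.005])   # mC/kg

def decay(d, a, b):
    return a * np.exp(-b * d)

(a, b), _ = curve_fit(decay, distance, q_over_m, p0=(4.0, 1.0))
print(f"Q/M ~= {a:.2f} * exp(-{b:.2f} * d) mC/kg")
```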
Abstract:
This paper deals with the establishment of a methodology for characterizing the electric power profiles of medium voltage (MV) consumers. The characterization is supported by the knowledge discovery in databases (KDD) process. Data mining techniques are used to obtain typical load profiles of MV customers and specific knowledge of their consumption habits. In order to form the different customer classes and to find a set of representative consumption patterns, a hierarchical clustering algorithm and a clustering ensemble combination approach (WEACS) are used. Taking into account the typical consumption profile of the class to which each customer belongs, new tariff options were defined and new energy price coefficients were proposed. Finally, based on the results obtained, the consequences for the interaction between customers and electric power suppliers are analyzed.
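A minimal sketch of the hierarchical clustering step described above, applied to synthetic daily load profiles; the data, the number of classes, and the use of Ward linkage are assumptions, and the WEACS ensemble combination is not reproduced here:

```python
# Sketch: hierarchical clustering of daily load profiles into typical
# classes, in the spirit of the methodology above (synthetic data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# 100 consumers x 24 hourly readings, normalized to each consumer's peak
profiles = rng.random((100, 24))
profiles /= profiles.max(axis=1, keepdims=True)

Z = linkage(profiles, method="ward")             # agglomerative tree
labels = fcluster(Z, t=5, criterion="maxclust")  # cut into 5 classes

# A typical load profile per class: the centroid of its members
for k in np.unique(labels):
    centroid = profiles[labels == k].mean(axis=0)
    print(f"class {k}: peak hour = {centroid.argmax()}h")
```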
Abstract:
This paper presents a methodology, supported by the knowledge discovery in databases (KDD) process, for estimating the failure probability of electrical equipment belonging to a real high-voltage network. Data mining (DM) techniques are used to discover a set of failure probabilities and, therefore, to extract knowledge concerning the unavailability of equipment such as power transformers and high-voltage power lines. The framework includes several steps: analysis of the real database, data pre-processing, application of the DM algorithms, and, finally, interpretation of the discovered knowledge. To validate the proposed methodology, a case study based on real databases is used. Because these data carry heavy uncertainty due to climate conditions, fuzzy logic was used to determine the failure probabilities of the electrical components involved in re-establishing service. The results reflect the interesting potential of this approach and encourage further research on the topic.
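As an illustration of how fuzzy logic can fold climate uncertainty into a failure probability, the sketch below uses triangular membership functions over a weather-severity score; the variables, rule outputs, and defuzzification are hypothetical, not the paper's actual fuzzy system:

```python
# Sketch: triangular fuzzy sets over a weather-severity score used to
# scale an equipment failure probability (all values illustrative).
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def failure_probability(base_p, severity):
    """Blend low/medium/high weather rules (severity in [0, 1])."""
    mu = {"low": tri(severity, -0.5, 0.0, 0.5),
          "mid": tri(severity, 0.0, 0.5, 1.0),
          "high": tri(severity, 0.5, 1.0, 1.5)}
    factor = {"low": 1.0, "mid": 1.5, "high": 3.0}  # assumed rule outputs
    num = sum(mu[k] * factor[k] for k in mu)
    den = sum(mu.values())
    return min(1.0, base_p * num / den)  # weighted-average defuzzification

print(failure_probability(0.02, 0.8))  # stormy conditions raise the estimate
```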
Abstract:
Master's degree in Electrical Engineering – Electric Power Systems
Abstract:
Dissertation submitted for the degree of Master in Informatics Engineering
Abstract:
Doctoral Thesis in Information Systems and Technologies, Area of Engineering and Management of Information Systems
Abstract:
This paper discusses the results of applied research in the eco-driving domain based on a huge data set produced by a fleet of Lisbon's public transportation buses over a three-year period. This data set is based on events automatically extracted from the controller area network (CAN) bus and enriched with GPS coordinates, weather conditions, and road information. We apply online analytical processing (OLAP) and knowledge discovery (KD) techniques to deal with the high volume of this data set, to determine the major factors that influence average fuel consumption, and then to classify the drivers involved according to their driving efficiency. Consequently, we identify the most appropriate driving practices and styles. Our findings show that introducing simple practices, such as optimal clutch use, engine rotation management, and reduced engine idling, can reduce fuel consumption on average by 3 to 5 l/100 km, meaning a saving of 30 l per bus in one day. These findings have been strongly considered in the drivers' training sessions.
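As a back-of-the-envelope check of the quoted saving, the arithmetic below reproduces 30 l per bus per day under an assumed daily distance (the abstract does not state the distance driven):

```python
# Illustrative arithmetic only: the 600 km daily distance is an
# assumption chosen to reproduce the quoted 30 l saving.
reduction_l_per_100km = 5.0   # upper end of the reported 3-5 l/100 km
daily_km = 600                # assumed daily distance per bus
saving_l = reduction_l_per_100km * daily_km / 100
print(f"{saving_l:.0f} l saved per bus per day")  # -> 30 l
```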
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10].

Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data into statistically independent components. Given that hyperspectral data are, under given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
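Before turning to the second approach, it is worth restating the linear mixing model underlying the discussion above; this is a minimal formulation with notation chosen here, not quoted verbatim from the chapter:

```latex
% Observed pixel spectrum r (L bands) as a linear mixture of p
% endmember signatures (the columns of M), with abundance vector
% alpha subject to nonnegativity and full additivity, plus noise n.
\mathbf{r} = \mathbf{M}\boldsymbol{\alpha} + \mathbf{n},
\qquad \alpha_i \ge 0 \;\; (i = 1, \dots, p),
\qquad \sum_{i=1}^{p} \alpha_i = 1
```

It is precisely the sum-to-one constraint that induces the statistical dependence among abundances discussed next.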
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, source densities and noise covariance are estimated from the observed data by maximum likelihood; second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises IFA performance, as in the ICA case.

Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
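As an illustration of the constrained least-squares formulation mentioned earlier, the following sketch estimates abundances under the nonnegativity and sum-to-one constraints using the classical trick of appending a weighted sum-to-one row to the endmember matrix; the toy data, weight, and matrix sizes are assumptions for illustration, not the chapter's experiments:

```python
# Sketch: fully constrained least-squares unmixing, with nonnegativity
# handled by SciPy's NNLS and sum-to-one enforced by an appended row.
import numpy as np
from scipy.optimize import nnls

def fcls(r, M, delta=1e3):
    """Estimate abundances for pixel spectrum r given endmembers M (L x p)."""
    L, p = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, p))])  # sum-to-one row
    r_aug = np.append(r, delta)
    alpha, _ = nnls(M_aug, r_aug)
    return alpha

# Toy example: 3 endmembers over 50 bands, 60/30/10 mixture plus noise
rng = np.random.default_rng(1)
M = rng.random((50, 3))
alpha_true = np.array([0.6, 0.3, 0.1])
r = M @ alpha_true + 0.001 * rng.standard_normal(50)
print(fcls(r, M).round(3))  # close to [0.6, 0.3, 0.1]
```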
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief review of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
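To make the Dirichlet abundance prior concrete, the short sketch below draws abundance fractions from a Dirichlet distribution, which satisfies positivity and full additivity by construction; the concentration parameters are arbitrary illustrative values:

```python
# Sketch: simulating abundance fractions as Dirichlet-distributed
# sources, which meet the positivity and sum-to-one constraints that
# the chapter's proposed model enforces.
import numpy as np

rng = np.random.default_rng(2)
alpha = rng.dirichlet([2.0, 5.0, 3.0], size=4)  # 4 pixels, 3 endmembers
print(alpha)                # every entry >= 0
print(alpha.sum(axis=1))    # each row sums to 1
```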
Abstract:
This paper addresses the characterization of medium voltage (MV) electric power consumers based on a data clustering approach. It is intended to identify typical load profiles by selecting the best partition of a power consumption database from a pool of data partitions produced by several clustering algorithms. The best partition is selected using several cluster validity indices. These methods are intended to be used in a smart grid environment to extract useful knowledge about customers' behavior. The data-mining-based methodology presented throughout the paper consists of several steps, namely the data pre-processing phase, the application of the clustering algorithms, and the evaluation of the quality of the partitions. To validate our approach, a case study with a real database of 1,022 MV consumers was used.
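A minimal sketch of the partition-selection step described above, scoring candidate partitions with two common validity indices (silhouette and Davies-Bouldin); the synthetic profiles, the choice of k-means as the candidate algorithm, and the index set are assumptions, not the paper's actual pool:

```python
# Sketch: choose the best partition of consumer load profiles by
# comparing cluster validity indices across candidate partitions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score

rng = np.random.default_rng(3)
profiles = rng.random((1022, 96))  # 1022 consumers, 15-min readings

best = None
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(profiles)
    sil = silhouette_score(profiles, labels)       # higher is better
    db = davies_bouldin_score(profiles, labels)    # lower is better
    print(f"k={k}: silhouette={sil:.3f}, Davies-Bouldin={db:.3f}")
    if best is None or sil > best[1]:
        best = (k, sil)
print(f"best partition by silhouette: k={best[0]}")
```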