9 results for Concept-based Retrieval

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo


Relevance:

80.00%

Publisher:

Abstract:

The preserved activity of immobilized biomolecules in layer-by-layer (LbL) films can be exploited in various applications, including biosensing. In this study, cholesterol oxidase (COX) layers were alternated with layers of poly(allylamine hydrochloride) (PAH) in LbL films whose morphology was investigated with atomic force microscopy (AFM). The adsorption kinetics of the COX layers comprised two regimes: a fast, first-order process followed by a slow process fitted with a Johnson-Mehl-Avrami (JMA) function with an exponent close to 2, characteristic of aggregates growing as disks. The concept of using sensor arrays to increase sensitivity, widely employed in electronic tongues, was extended to biosensing with impedance spectroscopy measurements. Using three sensing units, made of LbL films of PAH/COX and PAH/PVS (polyvinyl sulfonic acid) and a bare gold interdigitated electrode, we were able to detect cholesterol in aqueous solutions down to the 10⁻⁶ M level. This high sensitivity is attributed to the molecular-recognition interaction between COX and cholesterol, and opens the way for clinical tests with low-cost, fast experimental procedures. © 2008 Published by Elsevier B.V.
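The two-regime adsorption kinetics described above can be sketched numerically. This is a minimal illustration only: the amplitudes and rate constants (`a1`, `k1`, `a2`, `k2`) are arbitrary choices, not values from the study; only the functional forms (first-order uptake plus a JMA term with exponent near 2) come from the abstract.

```python
import math

def first_order(t, k):
    """Fast regime: simple first-order uptake, 1 - exp(-k t)."""
    return 1.0 - math.exp(-k * t)

def jma(t, k, n):
    """Slow regime: Johnson-Mehl-Avrami transformed fraction,
    1 - exp(-(k t)^n); n ~ 2 corresponds to disk-like aggregate growth."""
    return 1.0 - math.exp(-((k * t) ** n))

def adsorbed_amount(t, a1=0.6, k1=0.5, a2=0.4, k2=0.02, n=2.0):
    """Total adsorbed fraction: a fast first-order step plus a slower
    JMA contribution, normalized so the long-time limit is a1 + a2."""
    return a1 * first_order(t, k1) + a2 * jma(t, k2, n)
```

Fitting real quartz-crystal or AFM adsorption data would amount to adjusting these five parameters against measured coverage versus time.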

Relevance:

30.00%

Publisher:

Abstract:

The diffusive gradients in thin films (DGT) technique has shown enormous potential for monitoring labile metals in fresh water, owing to its preconcentration, time-integrated sampling, matrix-interference removal and speciation capabilities. In this work, the coupling of energy-dispersive X-ray fluorescence (EDXRF) with paper-based DGT devices was evaluated for the direct determination of Mn, Co, Ni, Cu, Zn and Pb in fresh water. The DGT samplers were assembled with cellulose (Whatman 3 MM chromatography paper) as the diffusion layer and a cellulose phosphate ion-exchange membrane (Whatman P81 paper) as the binding agent. The diffusion coefficients of the analytes in 3 MM chromatography paper were calculated by deploying the DGT samplers in synthetic solutions containing 500 µg L⁻¹ of Mn, Co, Ni, Cu, Zn and Pb (4 L at pH 5.5 and ionic strength of 0.05 mol L⁻¹). After retrieval, the DGT units were disassembled and the P81 papers were dried and analysed directly by EDXRF. The diffusion coefficients in 3 MM chromatography paper ranged from 1.67 to 1.87 × 10⁻⁶ cm² s⁻¹. The homogeneity of metal retention and of the phosphate groups on the P81 membrane was studied by spot analysis with a diameter of 1 mm. The proposed approach (DGT-EDXRF coupling) was applied to determine the analytes at five sampling sites (48 h in situ deployment) in the Piracicaba river basin, and the results (labile fraction) were compared with the 0.45 µm dissolved fractions determined by synchrotron-radiation-excited total reflection X-ray fluorescence (SR-TXRF). The limits of detection of the DGT-EDXRF coupling for the analytes (from 7.5 to 26 µg L⁻¹) were similar to those obtained with the sensitive SR-TXRF technique (3.8 to 9.1 µg L⁻¹). © 2012 Elsevier B.V. All rights reserved.
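For reference, the labile concentration in any DGT measurement follows from the mass accumulated in the binding layer via the standard DGT equation C = M·Δg/(D·A·t). The sketch below encodes that equation; the sample numbers in the usage note (accumulated mass, diffusion-layer thickness, exposure area) are hypothetical, and only the diffusion-coefficient magnitude and the 48 h deployment come from the abstract.

```python
def dgt_concentration(mass_ng, delta_g_cm, d_cm2_s, area_cm2, time_s):
    """Time-averaged labile concentration (ng cm^-3) from the DGT equation
    C = M * delta_g / (D * A * t), where M is the mass accumulated in the
    binding layer, delta_g the diffusion-layer thickness, D the diffusion
    coefficient, A the exposure area and t the deployment time."""
    return mass_ng * delta_g_cm / (d_cm2_s * area_cm2 * time_s)
```

For example, a hypothetical 500 ng accumulated over a 48 h deployment (172 800 s), with D = 1.8 × 10⁻⁶ cm² s⁻¹ (within the range reported above), yields the labile concentration directly; inverting the same equation recovers the accumulated mass.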

Relevance:

30.00%

Publisher:

Abstract:

The classification of texts has become a major endeavor with so much electronic material available, for it is an essential task in several applications, including search engines and information retrieval. There are different ways to define similarity for grouping similar texts into clusters, as the concept of similarity may depend on the purpose of the task. For instance, in topic extraction similar texts are those within the same semantic field, whereas in author recognition stylistic features should be considered. In this study, we introduce ways to classify texts employing concepts of complex networks, which may be able to capture syntactic, semantic and even pragmatic features. The interplay between various metrics of the complex networks is analyzed in three applications, namely identification of machine translation (MT) systems, evaluation of the quality of machine-translated texts, and authorship recognition. We show that topological features of the networks representing texts can enhance the ability to identify MT systems in particular cases. For evaluating the quality of MT texts, on the other hand, high correlation was obtained with methods capable of capturing semantics. This was expected because the gold standards used are themselves based on word co-occurrence. Notwithstanding, the Katz similarity, which combines semantics and structure in the comparison of texts, achieved the highest correlation with the NIST measure, indicating that in some cases the combination of both approaches can improve the ability to quantify quality in MT. In authorship recognition, the topological features were again relevant in some contexts, though for the books and authors analyzed good results were also obtained with semantic features.
Because hybrid approaches encompassing semantic and topological features have not been extensively used, we believe that the methodology proposed here may enhance text classification considerably, as it combines well-established strategies. © 2012 Elsevier B.V. All rights reserved.
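The network representation mentioned above can be sketched in a few lines: word types become nodes and co-occurrence within a small window becomes edges, from which simple topological features can be computed. This is an illustrative minimal version, not the paper's implementation; the function names, the `window` parameter and the choice of average degree as the feature are ours.

```python
from collections import defaultdict

def cooccurrence_network(text, window=1):
    """Word co-occurrence (adjacency) network: nodes are word types,
    undirected edges link words appearing within `window` positions
    of each other in the token sequence."""
    words = text.lower().split()
    edges = defaultdict(set)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + 1 + window, len(words))):
            if words[j] != w:          # skip self-loops
                edges[w].add(words[j])
                edges[words[j]].add(w)
    return edges

def average_degree(edges):
    """One simple topological feature usable for text classification."""
    return sum(len(nbrs) for nbrs in edges.values()) / len(edges)
```

In a classification setting, each text would be mapped to a vector of such network metrics (degree statistics, clustering, shortest paths, etc.) and fed to a standard classifier.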

Relevance:

30.00%

Publisher:

Abstract:

XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for (i) discovering the structural commonalities between sub-trees, (ii) identifying sub-tree semantic resemblances, (iii) computing tree-based edit operations costs, and (iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance.
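To make the tree-edit-distance idea concrete, here is a deliberately simplified sketch for ordered labeled trees: a relabel cost at the root plus a Wagner-Fischer alignment of the child sequences, where deleting or inserting an unmatched child costs its whole subtree size. It is illustrative only; the algorithms referenced above (e.g. Zhang-Shasha) also permit single-node insertions and deletions at arbitrary depths, and this sketch is not the paper's framework.

```python
from functools import lru_cache

def tree(label, *children):
    """A tiny ordered labeled tree: (label, (child, child, ...))."""
    return (label, tuple(children))

def size(t):
    return 1 + sum(size(c) for c in t[1])

@lru_cache(maxsize=None)
def dist(a, b):
    """Simplified top-down edit distance between ordered labeled trees."""
    cost = 0 if a[0] == b[0] else 1          # relabel cost at the root
    ca, cb = a[1], b[1]
    m, n = len(ca), len(cb)
    # Wagner-Fischer DP over the two child sequences
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + size(ca[i - 1])
    for j in range(1, n + 1):
        dp[0][j] = dp[0][j - 1] + size(cb[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(
                dp[i - 1][j] + size(ca[i - 1]),                 # delete subtree
                dp[i][j - 1] + size(cb[j - 1]),                 # insert subtree
                dp[i - 1][j - 1] + dist(ca[i - 1], cb[j - 1]),  # match/recurse
            )
    return cost + dp[m][n]
```

Two XML documents modeled this way differ by 1 when a single leaf element is renamed, which is the behavior a structural comparison framework builds on before semantic costs are layered in.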

Relevance:

30.00%

Publisher:

Abstract:

Dimensionality reduction is employed in visual data analysis as a way of obtaining reduced spaces for high-dimensional data or of mapping data directly into 2D or 3D spaces. Although techniques have evolved to improve data segregation in reduced or visual spaces, they have limited capabilities for adjusting the results according to the user's knowledge. In this paper, we propose a novel approach to handling both dimensionality reduction and visualization of high-dimensional data that takes the user's input into account. It employs Partial Least Squares (PLS), a statistical tool that retrieves latent spaces focused on the discriminability of the data. The method uses a training set to build a highly precise model that can then be applied very effectively to a much larger data set. The reduced data set can be displayed using various existing visualization techniques. The training data is important for encoding the user's knowledge into the loop; however, this work also devises a strategy for calculating PLS reduced spaces when no training data is available. The approach produces increasingly precise visual mappings as the user feeds back his or her knowledge, and it is capable of working with small and unbalanced training sets.
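The core of a PLS projection can be sketched compactly: the first latent direction is the feature-space direction most covariant with the response (here, class labels), and the scores along it form a supervised one-dimensional embedding. This is a minimal NIPALS-style single-component sketch under our own naming, not the authors' multi-component pipeline.

```python
import math

def pls_first_component(X, y):
    """First PLS direction: weight vector w proportional to X^T y after
    centering (the direction most covariant with the labels); the scores
    t = Xc w give a 1-D supervised projection of the samples."""
    n, p = len(X), len(X[0])
    # center the data and the response
    mx = [sum(row[j] for row in X) / n for j in range(p)]
    my = sum(y) / n
    Xc = [[row[j] - mx[j] for j in range(p)] for row in X]
    yc = [v - my for v in y]
    # covariance-maximizing weight vector, normalized to unit length
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = math.sqrt(sum(v * v for v in w)) or 1.0
    w = [v / norm for v in w]
    scores = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    return w, scores
```

A 2D visual mapping as in the paper would extract a second component after deflating `Xc` by the first; libraries such as scikit-learn's `PLSRegression` implement the full multi-component procedure.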

Relevance:

30.00%

Publisher:

Abstract:

Models are becoming increasingly important in the software development process. As a consequence, the number of models being used is increasing, and so is the need for efficient mechanisms to search them. Various existing search engines could be used for this purpose, but they lack features to properly search models, mainly because they are strongly focused on text-based search. This paper presents Moogle, a model search engine that uses metamodeling information to create richer search indexes and to allow more complex queries to be performed. The paper also presents the results of an evaluation of Moogle, which showed that the metamodel information improves the accuracy of the search.
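The idea of a metamodel-aware index can be illustrated with a toy sketch: each model element is indexed both by its text (name) and by its metamodel type, so queries can constrain the kind of element rather than matching raw text alone. This is our own hypothetical illustration of the general idea, not Moogle's actual index structure or query language.

```python
from collections import defaultdict

class ModelIndex:
    """Toy metamodel-aware search index. A plain text engine would only
    have `by_text`; adding `by_type` allows typed queries such as
    'a Class named Customer' rather than any element mentioning it."""

    def __init__(self):
        self.by_text = defaultdict(set)   # name -> element ids
        self.by_type = defaultdict(set)   # metamodel type -> element ids

    def add(self, element_id, name, meta_type):
        self.by_text[name.lower()].add(element_id)
        self.by_type[meta_type].add(element_id)

    def search(self, name, meta_type=None):
        hits = set(self.by_text.get(name.lower(), set()))
        if meta_type is not None:         # richer, metamodel-filtered query
            hits &= self.by_type.get(meta_type, set())
        return hits
```

The accuracy gain reported above corresponds to the typed query returning only the intended kind of element where a text-only search returns every match.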

Relevance:

30.00%

Publisher:

Abstract:

Classical Pavlovian fear conditioning to painful stimuli has provided the generally accepted view of a core system, centered in the central amygdala, that organizes fear responses. Ethologically based models using other sources of threat likely to be encountered in a natural environment, such as predators or aggressive dominant conspecifics, have challenged this concept of a unitary core circuit for fear processing. We discuss here what the ethologically based models have told us about the neural systems organizing fear responses. We explore the concept that parallel paths process different classes of threats and that these paths influence distinct regions in the periaqueductal gray, a critical element in the organization of all kinds of fear responses. Despite this parallel processing of different kinds of threats, we also discuss an interesting emerging view that common cortical-hippocampal-amygdalar paths seem to be engaged in fear conditioning to painful stimuli, to predators and, perhaps, to aggressive dominant conspecifics as well. Overall, the aim of this review is to bring into focus a more global and comprehensive view of the systems organizing fear responses.

Relevance:

30.00%

Publisher:

Abstract:

Brazil is expected to have 19.6 million patients with diabetes by the year 2030. A key concept in the treatment of type 2 diabetes mellitus (T2DM) is establishing individualized glycemic goals based on each patient's clinical characteristics, which impact the choice of antihyperglycemic therapy. Targets for glycemic control, including fasting blood glucose, postprandial blood glucose, and glycated hemoglobin (A1C), are often not reached with antihyperglycemic therapy alone, and insulin therapy is often required. Basal insulin is considered an initial strategy; however, premixed insulins are convenient and equally or more effective, especially for patients who require both basal and prandial control but desire a simpler strategy involving fewer daily injections than a basal-bolus regimen. Most physicians are reluctant to transition patients to insulin treatment because of inappropriate assumptions and insufficient information. We conducted a nonsystematic review in PubMed and identified the most relevant and recently published articles comparing the use of premixed insulin versus basal insulin analogues, used alone or in combination with rapid-acting insulin analogues before meals, in patients with T2DM. These studies suggest that premixed insulin analogues are equally or more effective in reducing A1C than basal insulin analogues alone, despite a small increase in the risk of nonsevere hypoglycemic events and clinically nonsignificant weight gain. Premixed insulin analogues can be used in insulin-naïve patients, in patients already on basal insulin therapy, and in those using basal-bolus therapy who are noncompliant with blood glucose self-monitoring and the titration of multiple insulin doses. We additionally provide practical aspects related to titration for the specific premixed insulin analogue formulations commercially available in Brazil.

Relevance:

30.00%

Publisher:

Abstract:

At present, solid thin films are recognized for their well-established and mature processing technology, which can produce components that, depending on their main characteristics, perform either passive or active functions. Additionally, Si-based materials in the form of thin films perfectly match the concept of miniaturized, low-consumption devices, as required in various modern technological applications. Some of these aspects were considered in the present work, which studied optical micro-cavities based entirely on silicon and silicon nitride thin films. The structures were prepared by sputter deposition, which, owing to the adopted conditions (atmosphere and deposition rate) and the arrangement of layers, provided cavities operating either in the visible (at ~670 nm) or in the near-infrared (at ~1560 nm) wavelength range. The main novelty of the work lies in the construction of optical microcavities with a reduced number of periods whose main properties can be changed by thermal annealing treatments. The work also discusses the angle-dependent behavior of the optical transmission profiles, as well as the use of the COMSOL software package to simulate the microcavities.
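Stacks of this kind are commonly analyzed with the characteristic (transfer) matrix method, which the sketch below implements for normal incidence with non-dispersive, lossless indices. The refractive indices used in the usage example (3.5 for Si, 2.0 for silicon nitride) and the number of periods are illustrative guesses, not the films actually deposited in the work.

```python
import cmath
import math

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of one homogeneous layer at normal
    incidence, with phase thickness delta = 2*pi*n*d/lambda."""
    delta = 2 * math.pi * n * d / wavelength
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
            [1j * n * cmath.sin(delta), cmath.cos(delta)]]

def matmul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def reflectance(layers, wavelength, n_in=1.0, n_out=1.0):
    """Reflectance of a thin-film stack given as a list of (n, d) pairs,
    ordered from the incidence side, between media n_in and n_out."""
    M = [[1, 0], [0, 1]]
    for n, d in layers:
        M = matmul(M, layer_matrix(n, d, wavelength))
    num = n_in * M[0][0] + n_in * n_out * M[0][1] - M[1][0] - n_out * M[1][1]
    den = n_in * M[0][0] + n_in * n_out * M[0][1] + M[1][0] + n_out * M[1][1]
    return abs(num / den) ** 2
```

A quarter-wave high/low-index mirror built from such pairs reflects strongly at the design wavelength, and inserting a half-wave spacer between two mirrors produces the cavity resonance, with the reflectance collapsing toward zero at the design wavelength.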