18 results for structuration of lexical data bases
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
We report the synthesis and spectroscopic/electrochemical properties of iron(II) complexes of polydentate Schiff bases generated from 2-acetylpyridine and 1,3-diaminopropane, acetylpyrazine and 1,3-diaminopropane, and from 2-acetylpyridine and L-histidine. The complexes exhibit bis(diimine)iron(II) chromophores in association with pyrazine, pyridine or imidazole groups displaying contrasting π-acceptor properties. In spite of their open geometry, their properties are much closer to those of macrocyclic tetraimineiron(II) complexes. An electrochemical/spectroscopic correlation between E°(Fe(III/II)) and the energies of the lowest MLCT band has been observed, reflecting the stabilization of the HOMO levels as a consequence of the increasing backbonding effects in the series of compounds. Mössbauer data have also confirmed the similarities in their electronic structure, as deduced from the spectroscopic and theoretical data. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Geographic Data Warehouses (GDW) are one of the main technologies used in decision-making processes and spatial analysis, and the literature proposes several conceptual and logical data models for GDW. However, little effort has been focused on studying how spatial data redundancy affects SOLAP (Spatial On-Line Analytical Processing) query performance over GDW. In this paper, we investigate this issue. Firstly, we compare redundant and non-redundant GDW schemas and conclude that redundancy is related to high performance losses. We also analyze the issue of indexing, aiming at improving SOLAP query performance on a redundant GDW. Comparisons of the SB-index approach, the star-join aided by R-tree and the star-join aided by GiST indicate that the SB-index significantly reduces the elapsed time in query processing, by 25% up to 99%, with regard to SOLAP queries defined over the spatial predicates of intersection, enclosure and containment and applied to roll-up and drill-down operations. We also investigate the impact of the increase in data volume on the performance. The increase did not impair the performance of the SB-index, which greatly reduced the elapsed time in query processing. Performance tests also show that the SB-index is far more compact than the star-join, requiring at most 0.20% of its volume. Moreover, we propose a specific enhancement of the SB-index to deal with spatial data redundancy. This enhancement improved performance by 80% to 91% for redundant GDW schemas.
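The three spatial predicates the abstract names (intersection, enclosure, containment) can be illustrated with a minimal sketch over minimum bounding rectangles. This is an illustration of the predicates only, not of the SB-index structure; all names and coordinates are hypothetical:

```python
# Illustrative sketch (not the SB-index itself): evaluating the three
# spatial predicates -- intersection, containment and enclosure --
# over minimum bounding rectangles (MBRs).

def intersects(a, b):
    """True if MBRs a and b overlap. MBR = (xmin, ymin, xmax, ymax)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def contains(outer, inner):
    """True if 'outer' fully contains 'inner' (containment)."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def enclosed_by(inner, outer):
    """Enclosure is containment seen from the query window's side."""
    return contains(outer, inner)

def solap_filter(fact_mbrs, window, predicate):
    """Filter fact-table rows whose spatial attribute satisfies the
    predicate against the query window, as a SOLAP query would."""
    return [i for i, mbr in enumerate(fact_mbrs) if predicate(mbr, window)]

cities = [(0, 0, 2, 2), (1, 1, 3, 3), (5, 5, 6, 6)]
window = (0, 0, 4, 4)
print(solap_filter(cities, window, intersects))   # rows overlapping the window
print(solap_filter(cities, window, enclosed_by))  # rows fully inside it
```

In a real GDW these MBR tests are only the coarse filtering step; exact geometries are checked afterwards.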
Abstract:
The MINOS experiment at Fermilab has recently reported a tension between the oscillation results for neutrinos and antineutrinos. We show that this tension, if it persists, can be understood in the framework of nonstandard neutrino interactions (NSI). While neutral current NSI (nonstandard matter effects) are disfavored by atmospheric neutrinos, a new charged current coupling between tau neutrinos and nucleons can fit the MINOS data without violating other constraints. In particular, we show that loop-level contributions to flavor-violating tau decays are sufficiently suppressed. However, conflicts with existing bounds could arise once the effective theory considered here is embedded into a complete renormalizable model. We predict the future sensitivity of the T2K and NOvA experiments to the NSI parameter region favored by the MINOS fit, and show that both experiments are excellent tools to test the NSI interpretation of the MINOS data.
Abstract:
Agricultural management practices that promote net carbon (C) accumulation in the soil have been considered as an important potential mitigation option to combat global warming. The change in the sugarcane harvesting system, to one which incorporates C into the soil from crop residues, is the focus of this work. The main objective was to assess and discuss the changes in soil organic C stocks caused by the conversion of burnt to unburnt sugarcane harvesting systems in Brazil, when considering the main soils and climates associated with this crop. For this purpose, a dataset was obtained from a literature review of soils under sugarcane in Brazil. Although not necessarily from experimental studies, only paired comparisons were examined, and for each site the dominant soil type, topography and climate were similar. The results show a mean annual C accumulation rate of 1.5 Mg ha-1 year-1 for the surface to 30-cm depth (0.73 and 2.04 Mg ha-1 year-1 for sandy and clay soils, respectively) caused by the conversion from a burnt to an unburnt sugarcane harvesting system. The findings suggest that soil should be included in future studies related to life cycle assessment and C footprint of Brazilian sugarcane ethanol.
Abstract:
The Brazilian Network of Food Data Systems (BRASILFOODS) has been maintaining the Brazilian Food Composition Database-USP (TBCA-USP) (http://www.fcf.usp.br/tabela) since 1998. Besides the constant compilation, analysis and update work on the database, the network tries to innovate through the introduction of food information that may contribute to decreasing the risk of non-transmissible chronic diseases, such as the profile of carbohydrates and flavonoids in foods. In 2008, individually analyzed carbohydrate data for 112 foods, and 41 data points on the glycemic response produced by foods widely consumed in the country, were included in the TBCA-USP. A total of 773 data points on the different flavonoid subclasses of 197 Brazilian foods were compiled, and the quality of each data point was evaluated according to the USDA's data quality evaluation system. In 2007, BRASILFOODS/USP and INFOODS/FAO organized the 7th International Food Data Conference "Food Composition and Biodiversity". This conference was a unique opportunity for interaction between renowned researchers and participants from several countries, and it allowed the discussion of aspects that may improve the food composition area. During the period, the LATINFOODS Regional Technical Compilation Committee and BRASILFOODS disseminated to Latin America the Form and Manual for Data Compilation, version 2009, taught a Food Composition Data Compilation course and developed many activities related to data production and compilation. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
Functional magnetic resonance imaging (fMRI) is currently one of the most widely used methods for studying human brain function in vivo. Although many different approaches to fMRI analysis are available, the most widely used methods employ so-called "mass-univariate" modeling of responses in a voxel-by-voxel fashion to construct activation maps. However, it is well known that many brain processes involve networks of interacting regions, and for this reason multivariate analyses might seem to be attractive alternatives to univariate approaches. The current paper focuses on one multivariate application of statistical learning theory: statistical discrimination maps (SDM) based on the support vector machine, and seeks to establish some possible interpretations when the results differ from univariate approaches. In fact, when there are changes not only in the activation level of two conditions but also in functional connectivity, SDM seems more informative. We addressed this question using both simulations and applications to real data. We have shown that the combined use of univariate approaches and SDM yields significant new insights into brain activations not available using univariate methods alone. In the application to visual working memory fMRI data, we demonstrated that the interaction among brain regions plays a role in SDM's power to detect discriminative voxels. (C) 2008 Elsevier B.V. All rights reserved.
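The "mass-univariate" baseline the abstract contrasts with SDM can be sketched in a few lines: a statistic is computed independently for every voxel, producing an activation map. The data below are synthetic and the two-sample t statistic is a standard stand-in, not the authors' exact model:

```python
import numpy as np

# Hedged illustration of mass-univariate fMRI analysis: a two-sample
# t statistic computed voxel by voxel gives an activation map.
# Synthetic data; only the first 5 voxels truly differ between conditions.

rng = np.random.default_rng(0)
n_voxels, n_a, n_b = 50, 20, 20
cond_a = rng.normal(0.0, 1.0, size=(n_a, n_voxels))
cond_b = rng.normal(0.0, 1.0, size=(n_b, n_voxels))
cond_b[:, :5] += 1.5          # simulated activation difference

def voxelwise_t(a, b):
    """Two-sample t statistic per voxel (equal-variance form)."""
    ma, mb = a.mean(0), b.mean(0)
    va, vb = a.var(0, ddof=1), b.var(0, ddof=1)
    na, nb = len(a), len(b)
    sp = np.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (mb - ma) / (sp * np.sqrt(1 / na + 1 / nb))

t_map = voxelwise_t(cond_a, cond_b)
print(np.argsort(-np.abs(t_map))[:5])  # voxels with strongest statistics
```

A multivariate method such as SDM instead fits one classifier to all voxels jointly, which is how it can pick up connectivity differences this voxel-by-voxel scheme misses.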
Abstract:
Searching in a dataset for elements that are similar to a given query element is a core problem in applications that manage complex data, and has been aided by metric access methods (MAMs). A growing number of applications require indices that must be built faster and repeatedly, while also providing faster responses to similarity queries. The increase in main memory capacity and its lowering cost also motivate the use of memory-based MAMs. In this paper, we propose the Onion-tree, a new and robust dynamic memory-based MAM that slices the metric space into disjoint subspaces to provide quick indexing of complex data. It introduces three major characteristics: (i) a partitioning method that controls the number of disjoint subspaces generated at each node; (ii) a replacement technique that can change the leaf node pivots in insertion operations; and (iii) range and k-NN extended query algorithms to support the new partitioning method, including a new visit order of the subspaces in k-NN queries. Performance tests with both real-world and synthetic datasets showed that the Onion-tree is very compact. Comparisons of the Onion-tree with the MM-tree and a memory-based version of the Slim-tree showed that the Onion-tree was always faster to build the index. The experiments also showed that the Onion-tree significantly improved range and k-NN query processing performance and was the most efficient MAM, followed by the MM-tree, which in turn outperformed the Slim-tree in almost all the tests. (C) 2010 Elsevier B.V. All rights reserved.
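The disjoint-subspace idea can be sketched minimally: two pivots per node split the metric space into regions by comparing each object's distance to the pivots against a splitting radius. This is an illustrative simplification, not the actual Onion-tree algorithm (whose partitioning, pivot replacement and query algorithms are richer):

```python
import math

# Simplified sketch of the disjoint-subspace idea behind memory-based
# MAMs such as the Onion-tree: two pivots split the space into four
# disjoint regions. Illustrative only; pivots and objects are made up.

def dist(p, q):
    return math.dist(p, q)  # any metric works; Euclidean for the demo

def region(obj, p1, p2):
    """Assign obj to one of 4 disjoint subspaces defined by pivots p1, p2."""
    r = dist(p1, p2) / 2                   # splitting radius
    inside1 = dist(obj, p1) <= r
    inside2 = dist(obj, p2) <= r
    return (inside1, inside2)              # 4 combinations = 4 regions

p1, p2 = (0.0, 0.0), (4.0, 0.0)
objs = [(0.5, 0.0), (3.8, 0.2), (2.0, 5.0)]
parts = {}
for o in objs:
    parts.setdefault(region(o, p1, p2), []).append(o)
print(parts)
```

Because the regions are disjoint, a range query only needs to visit the subspaces whose distance bounds can intersect the query ball, which is what makes this family of indices fast.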
Abstract:
This paper is concerned with the computational efficiency of fuzzy clustering algorithms when the data set to be clustered is described by a proximity matrix only (relational data) and the number of clusters must be automatically estimated from such data. A fuzzy variant of an evolutionary algorithm for relational clustering is derived and compared against two systematic (pseudo-exhaustive) approaches that can also be used to automatically estimate the number of fuzzy clusters in relational data. An extensive collection of experiments involving 18 artificial and two real data sets is reported and analyzed. (C) 2011 Elsevier B.V. All rights reserved.
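One ingredient of fuzzy relational clustering can be sketched directly: given only a proximity (distance) matrix, fuzzy memberships can be computed with the standard fuzzy c-means formula, here with medoid objects standing in for cluster prototypes. This is an assumption-laden illustration; the paper's evolutionary algorithm and its estimation of the number of clusters are not reproduced:

```python
# Hedged sketch: fuzzy memberships from relational data (a distance
# matrix), using medoids as prototypes and the standard FCM membership
# formula u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).

def fuzzy_memberships(D, medoids, m=2.0):
    """u[i][k]: membership of object i in the cluster of medoid k,
    computed only from relational distances D[i][j]."""
    u = []
    for i in range(len(D)):
        d = [D[i][k] for k in medoids]
        if 0.0 in d:                      # object coincides with a medoid
            u.append([1.0 if x == 0.0 else 0.0 for x in d])
            continue
        row = []
        for dk in d:
            row.append(1.0 / sum((dk / dj) ** (2.0 / (m - 1.0)) for dj in d))
        u.append(row)
    return u

# Tiny symmetric proximity matrix: objects 0,1 are close; 2,3 are close.
D = [[0.0, 1.0, 9.0, 8.0],
     [1.0, 0.0, 8.0, 9.0],
     [9.0, 8.0, 0.0, 1.0],
     [8.0, 9.0, 1.0, 0.0]]
U = fuzzy_memberships(D, medoids=[0, 2])
print(U)
```

An evolutionary or pseudo-exhaustive wrapper would then search over medoid sets and cluster counts, scoring each candidate with a validity index.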
Abstract:
We review some issues related to the implications of different missing data mechanisms on statistical inference for contingency tables and consider simulation studies to compare the results obtained under such models to those where the units with missing data are disregarded. We confirm that although, in general, analyses under the correct missing at random (MAR) and missing completely at random (MCAR) models are more efficient even for small sample sizes, there are exceptions where they may not improve the results obtained by ignoring the partially classified data. We show that under the missing not at random (MNAR) model, estimates on the boundary of the parameter space as well as lack of identifiability of the parameters of saturated models may be associated with undesirable asymptotic properties of maximum likelihood estimators and likelihood ratio tests; even in standard cases the bias of the estimators may be low only for very large samples. We also show that the probability of a boundary solution obtained under the correct MNAR model may be large even for large samples and that, consequently, we may not always conclude that a MNAR model is misspecified because the estimate is on the boundary of the parameter space.
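The baseline fact the comparison rests on can be shown with a tiny simulation: under MCAR, discarding incomplete units still gives an approximately unbiased estimate; it only wastes information. The proportion, sample sizes and missingness rate below are arbitrary illustration choices:

```python
import random

# Illustrative simulation: complete-case analysis under MCAR.
# Binary outcome with true proportion 0.3; 40% of units go missing
# independently of the outcome (that independence is what MCAR means).

random.seed(42)
p_true, n, miss_rate, reps = 0.3, 500, 0.4, 200
estimates = []
for _ in range(reps):
    y = [1 if random.random() < p_true else 0 for _ in range(n)]
    observed = [yi for yi in y if random.random() >= miss_rate]
    estimates.append(sum(observed) / len(observed))
mean_est = sum(estimates) / len(estimates)
print(round(mean_est, 3))
```

Under MNAR, by contrast, missingness depends on the unobserved value itself, and the same complete-case estimator becomes biased, which is where the boundary and identifiability problems discussed in the abstract arise.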
Dynamic Changes in the Mental Rotation Network Revealed by Pattern Recognition Analysis of fMRI Data
Abstract:
We investigated the temporal dynamics and changes in connectivity in the mental rotation network through the application of spatio-temporal support vector machines (SVMs). The spatio-temporal SVM [Mourao-Miranda, J., Friston, K. J., et al. (2007). Dynamic discrimination analysis: A spatial-temporal SVM. Neuroimage, 36, 88-99] is a pattern recognition approach that is suitable for investigating dynamic changes in the brain network during a complex mental task. It does not require a model describing each component of the task or the precise shape of the BOLD impulse response. By defining a time window including a cognitive event, one can use spatio-temporal fMRI observations from two cognitive states to train the SVM. During the training, the SVM finds the discriminating pattern between the two states and produces a discriminating weight vector encompassing both voxels and time (i.e., spatio-temporal maps). We showed that by applying the spatio-temporal SVM to an event-related mental rotation experiment, it is possible to discriminate between different degrees of angular disparity (0 degrees vs. 20 degrees, 0 degrees vs. 60 degrees, and 0 degrees vs. 100 degrees), and the discrimination accuracy is correlated with the difference in angular disparity between the conditions. For the comparison with highest accuracy (0 degrees vs. 100 degrees), we evaluated how the most discriminating areas (visual regions, parietal regions, supplementary motor, and premotor areas) change their behavior over time. The frontal premotor regions became highly discriminating earlier than the superior parietal cortex. The parietal regions appear to be parcellated, with the inferior parietal lobe becoming discriminative earlier than the superior parietal lobe during mental rotation. The SVM also identified a network of regions that showed a decrease in BOLD responses during the 100 degrees condition relative to the 0 degrees condition (posterior cingulate, frontal, and superior temporal gyrus).
This network was also highly discriminating between the two conditions. In addition, we investigated changes in functional connectivity between the most discriminating areas identified by the spatio-temporal SVM. We observed an increase in functional connectivity between almost all areas activated during the 100 degrees condition (bilateral inferior and superior parietal lobe, bilateral premotor area, and SMA) but not between the areas that showed a decrease in BOLD response during the 100 degrees condition.
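The core mechanics described above (flatten each trial's voxels-by-time window into one feature vector, train a linear SVM on the two states, reshape the weight vector back into a spatio-temporal map) can be sketched on synthetic data. The SVM below is a minimal Pegasos-style subgradient solver, not the toolchain the authors used, and all dimensions are made up:

```python
import numpy as np

# Hedged sketch of the spatio-temporal SVM idea on synthetic "fMRI" data.
# Each trial is a (voxels x time) window; the linear SVM weight vector,
# reshaped, is the spatio-temporal discriminating map.

rng = np.random.default_rng(1)
n_trials, n_vox, n_time = 40, 30, 6
X = rng.normal(size=(n_trials, n_vox, n_time))
y = np.repeat([1, -1], n_trials // 2)
X[y == 1, :5, 3:] += 1.0      # class difference: 5 voxels, late time points

def pegasos(X2d, y, lam=0.01, epochs=50):
    """Minimal linear SVM via the Pegasos subgradient method."""
    w, t = np.zeros(X2d.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            violated = y[i] * (X2d[i] @ w) < 1   # hinge-loss margin check
            w *= (1 - eta * lam)                 # regularization shrinkage
            if violated:
                w += eta * y[i] * X2d[i]
    return w

w = pegasos(X.reshape(n_trials, -1), y)
st_map = w.reshape(n_vox, n_time)      # spatio-temporal discriminating map
print(np.unravel_index(np.abs(st_map).argmax(), st_map.shape))
```

Inspecting when (which time columns) and where (which voxel rows) the map's weights are large is the sketch-level analogue of the paper's analysis of how discriminating areas change their behavior over time.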
Abstract:
Background: High-density tiling arrays and new sequencing technologies are generating rapidly increasing volumes of transcriptome and protein-DNA interaction data. Visualization and exploration of this data is critical to understanding the regulatory logic encoded in the genome by which the cell dynamically affects its physiology and interacts with its environment. Results: The Gaggle Genome Browser is a cross-platform desktop program for interactively visualizing high-throughput data in the context of the genome. Important features include dynamic panning and zooming, keyword search and open interoperability through the Gaggle framework. Users may bookmark locations on the genome with descriptive annotations and share these bookmarks with other users. The program handles large sets of user-generated data using an in-process database and leverages the facilities of SQL and the R environment for importing and manipulating data. A key aspect of the Gaggle Genome Browser is interoperability. By connecting to the Gaggle framework, the genome browser joins a suite of interconnected bioinformatics tools for analysis and visualization with connectivity to major public repositories of sequences, interactions and pathways. To this flexible environment for exploring and combining data, the Gaggle Genome Browser adds the ability to visualize diverse types of data in relation to its coordinates on the genome. Conclusions: Genomic coordinates function as a common key by which disparate biological data types can be related to one another. In the Gaggle Genome Browser, heterogeneous data are joined by their location on the genome to create information-rich visualizations yielding insight into genome organization, transcription and its regulation and, ultimately, a better understanding of the mechanisms that enable the cell to dynamically respond to its environment.
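The conclusion's point that genomic coordinates function as a common key can be made concrete with a tiny interval-overlap join; the track names, features and coordinates below are hypothetical, and this is not code from the Gaggle Genome Browser:

```python
# Sketch: genomic coordinates as a join key. Heterogeneous track data
# (hypothetical genes and ChIP peaks) are related by interval overlap
# on the shared coordinate axis.

def overlaps(a_start, a_end, b_start, b_end):
    """Half-open interval overlap test."""
    return a_start < b_end and b_start < a_end

def join_by_location(genes, peaks):
    """Pair every gene with every peak overlapping it on the same chromosome."""
    return [(g[0], p[0])
            for g in genes for p in peaks
            if g[1] == p[1] and overlaps(g[2], g[3], p[2], p[3])]

# (name, chromosome, start, end)
genes = [("geneA", "chr1", 100, 500), ("geneB", "chr2", 300, 900)]
peaks = [("peak1", "chr1", 450, 520), ("peak2", "chr2", 1000, 1100)]
print(join_by_location(genes, peaks))  # geneA pairs with peak1 only
```

Real genome browsers use interval indices rather than this quadratic loop, but the join semantics are the same.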
Abstract:
Taking as its starting point the acknowledgment that the principles and methods used to build and manage documentary systems are dispersed and lack systematization, this study hypothesizes that the notion of structure, by assuming mutual relationships among its elements, promotes more organic systems and assures better quality and consistency in retrieving information relevant to users' needs. Accordingly, it aims to explore the fundamentals of information records and documentary systems, starting from the notion of structure. To this end, it presents basic concepts and issues related to documentary systems and information records. It then surveys the theoretical groundwork on the notion of structure laid by Benveniste, Ferrater Mora, Levi-Strauss, Lopes, Penalver Simo, and Saussure, as well as by Ducrot, Favero, and Koch. Appropriations of this notion already made in Documentation by Paul Otlet, Garcia Gutierrez, and Moreiro Gonzalez are presented as a further topic. The study concludes that the adopted notion of structure, by making explicit a hypothesis of real systematization, yields more organic systems and provides a pedagogical reference for documentary tasks.
Abstract:
The paper discusses the availability of biomass in Brazil to supply charcoal to the steel industry, on the basis of an initial global assessment of land potentially available for plantations and of Brazilian data that allow refining the assessment and specifying the issue of practical availability. Technical potentials are first assessed through a series of simple rules against direct competition with agriculture, forests and protected areas, and through quantitative criteria, whether geo-climatic (rainfall), demographic (population density) or legal (reserves). Institutional, social and economic factors are then identified and discussed so as to account for the practical availability of Brazilian biomass through six criteria. The ranking of nine Brazilian states according to these criteria brings out the necessary trade-offs in the selection of land for plantations that would efficiently supply charcoal to the steel industry. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Dietary changes associated with drug therapy can reduce high serum cholesterol levels and dramatically decrease the risk of coronary artery disease, stroke, and overall mortality. Statins are hypolipemic drugs that are effective in the reduction of serum cholesterol levels, attenuating cholesterol synthesis in the liver through competitive inhibition of the molecular target HMG-CoA reductase with respect to its substrate. We have used computer-aided molecular design tools (flexible docking, virtual screening of large databases, and molecular interaction fields) to propose novel potential HMG-CoA reductase inhibitors that are promising for the treatment of hypercholesterolemia.
Abstract:
Dherte PM, Negrao MPG, Mori Neto S, Holzhacker R, Shimada V, Taberner P, Carmona MJC - Smart Alerts: Development of a Software to Optimize Data Monitoring. Background and objectives: Monitoring is useful for following vital signs and for the prevention, diagnosis, and treatment of several events in anesthesia. Although alarms can be useful in monitoring, they can cause dangerous user desensitization. The objective of this study was to describe the development of specific software to integrate intraoperative monitoring parameters, generating "smart alerts" that can help decision making, besides indicating possible diagnoses and treatments. Methods: A system was designed that allowed flexibility in the definition of alerts, combining individual alarms of the monitored parameters to generate a more elaborate alert system. After investigating a set of smart alerts considered relevant in the surgical environment, a prototype was designed and evaluated, and additional suggestions were implemented in the final product. To verify the occurrence of smart alerts, the system underwent testing with data previously obtained during intraoperative monitoring of 64 patients. The system allows continuous analysis of monitored parameters, verifying the occurrence of smart alerts defined in the user interface. Results: With this system, a potential 92% reduction in alarms was observed. We observed that most situations that did not generate alerts involved individual alarms that did not represent a risk to the patient. Conclusions: Implementation of the software can allow integration of the monitored data and generate information such as possible diagnoses or interventions. An expressive potential reduction in the number of alarms during surgery was observed. Information displayed by the system can often be more useful than analysis of isolated parameters.
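The idea of combining individual parameter alarms into one "smart alert" can be sketched as a simple rule over a monitoring sample. The thresholds and the two rules below are illustrative assumptions, not the clinical rules implemented in the paper's software:

```python
# Hedged sketch of a "smart alert": combine monitored parameters with a
# rule instead of alarming on each parameter separately. Thresholds and
# rules are illustrative assumptions only.

def smart_alerts(sample):
    """Return combined alerts for one monitoring sample (dict of values)."""
    alerts = []
    low_bp = sample["mean_arterial_pressure"] < 65     # mmHg
    tachy = sample["heart_rate"] > 100                 # bpm
    low_spo2 = sample["spo2"] < 90                     # %
    if low_bp and tachy:
        alerts.append("possible hypovolemia: hypotension + tachycardia")
    if low_spo2 and tachy:
        alerts.append("possible hypoxemia with compensatory tachycardia")
    return alerts

stable = {"mean_arterial_pressure": 80, "heart_rate": 72, "spo2": 98}
shocky = {"mean_arterial_pressure": 55, "heart_rate": 125, "spo2": 97}
print(smart_alerts(stable))  # no combined alert fires
print(smart_alerts(shocky))
```

Because an alert fires only when a clinically meaningful combination occurs, many isolated threshold crossings never reach the user, which is the mechanism behind the reduction in alarm volume the study reports.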