997 results for Computational Lexical Semantics
Abstract:
Cape Verde is a bilingual country where two languages coexist: the mother tongue, the Creole of Cape Verde (CCV), also called the Capeverdean Language (LCV), and the non-maternal language, Portuguese, which is the official language and therefore the language used in teaching and learning. This situation generates conflicts at both the linguistic and the cultural level. The two languages present some lexical resemblances, which often lead to misconceptions and linguistic errors that hinder children's learning, in particular of reading, which is the basis for all other learning. Learning to read in the non-maternal language requires the development of oral Portuguese, which stimulates the child's reasoning through playful exercises and cognitivist and constructivist approaches. Phonological processing skills are therefore important in the acquisition of reading skills: they support the discrimination of written text and favor the learning and development of reading. Through discovery, the child begins to elaborate concepts and to build a functional relationship with written language. Adopting a case-study methodology based on questionnaires, direct observation, and the collection of documentary information, this dissertation presents and analyzes aspects of the literacy of Capeverdean children at the beginning of schooling and of learning to read as a basic support for learning the non-maternal language.
The findings gathered by the study and presented in this dissertation will contribute to the progress of reading instruction, to the successful implementation of students' learning to read, to the development of reading practice, to expectations of uncovering the multiple dimensions of experience in that domain, and to a better understanding of modes of writing and reading.
Abstract:
Agent-based computational economics is becoming widely used in practice. This paper explores the consistency of some of its standard techniques. We focus in particular on prevailing wholesale electricity trading simulation methods. We include different supply and demand representations and propose the Experience-Weighted Attraction method to include several behavioural algorithms. We compare the results across assumptions and against economic theory predictions. The match is good under best-response and reinforcement learning but not under fictitious play. The simulations perform well under flat and upward-sloping supply bidding, and also for plausible demand elasticity assumptions. Learning is influenced by the number of bids per plant and by the initial conditions. The overall conclusion is that agent-based simulation assumptions are far from innocuous. We link their performance to underlying features, and identify those that are better suited to model wholesale electricity markets.
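A minimal sketch of the Experience-Weighted Attraction update referenced above, in the style of Camerer and Ho, with a logit choice rule; the parameters `phi`, `delta`, `rho`, `lam` and the payoff setup are illustrative assumptions, not the paper's calibration:

```python
import math

def ewa_update(attractions, N_prev, payoffs, chosen, phi=0.9, delta=0.5, rho=0.9):
    """One Experience-Weighted Attraction (EWA) update.
    attractions: current attractions A_j for each strategy j
    N_prev: experience weight N(t-1)
    payoffs: payoff pi_j each strategy would have earned this round
    chosen: index of the strategy actually played
    """
    N_new = rho * N_prev + 1.0
    A_new = []
    for j, (A, pi) in enumerate(zip(attractions, payoffs)):
        # chosen strategies get full weight, foregone ones weight delta
        weight = delta + (1.0 - delta) * (1.0 if j == chosen else 0.0)
        A_new.append((phi * N_prev * A + weight * pi) / N_new)
    return A_new, N_new

def choice_probabilities(attractions, lam=1.0):
    """Logit choice rule over attractions."""
    exps = [math.exp(lam * a) for a in attractions]
    total = sum(exps)
    return [e / total for e in exps]
```

Setting `delta=1` recovers fictitious-play-like belief learning, while `delta=0` recovers pure reinforcement learning, which is how a single framework can cover the behavioural algorithms compared in the paper.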
Abstract:
Statistical computing in which input/output is driven by a graphical user interface is considered. A proposal is made for automatic control of computational flow, ensuring that only strictly required computations are actually carried out. The computational flow is modeled by a directed graph, for implementation in any object-oriented programming language with symbolic manipulation capabilities. A complete implementation example is presented to compute and display frequency-based piecewise linear density estimators such as histograms or frequency polygons.
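The directed-graph control of computational flow described above can be sketched as follows; the `Node`/`Source` names and the histogram example are illustrative, not the paper's implementation. Each node caches its result and recomputes only when an upstream change has marked it stale:

```python
class Node:
    """A computation node in a directed dependency graph.
    value() recomputes only when the node is stale, so a GUI change
    triggers only the strictly required computations downstream."""
    def __init__(self, func, *parents):
        self.func = func
        self.parents = list(parents)
        self.children = []
        for p in parents:
            p.children.append(self)
        self.stale = True
        self.cache = None
        self.evals = 0  # number of actual computations, for demonstration

    def invalidate(self):
        if not self.stale:
            self.stale = True
            for c in self.children:
                c.invalidate()

    def value(self):
        if self.stale:
            self.cache = self.func(*[p.value() for p in self.parents])
            self.evals += 1
            self.stale = False
        return self.cache

class Source(Node):
    """A leaf node holding user input (e.g. data entered via the GUI)."""
    def __init__(self, value):
        super().__init__(lambda: None)
        self.cache, self.stale = value, False

    def set(self, value):
        self.cache = value
        for c in self.children:
            c.invalidate()
```

For example, a frequency table node built on a data source is computed once, served from cache on repeated display requests, and recomputed only after the source data changes.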
Abstract:
The paper presents a new model based on the basic Maximum Capture model, MAXCAP. The new Chance Constrained Maximum Capture model introduces a stochastic threshold constraint, which recognises the fact that a facility can be open only if a minimum level of demand is captured. A metaheuristic based on the MAX-MIN Ant System and a tabu search procedure is presented to solve the model. This is the first time that the MAX-MIN Ant System has been adapted to solve a location problem. Computational experience and an application to a 55-node network are also presented.
Abstract:
A new direction of research in competitive location theory incorporates theories of consumer choice behavior into its models. Following this direction, this paper studies the importance of consumer behavior with respect to distance or transportation costs for the optimality of locations obtained by traditional competitive location models. To do this, it considers different ways of defining a key parameter in the basic Maximum Capture model (MAXCAP). This parameter reflects various ways of taking distance into account, based on several consumer choice behavior theories. For each model, the optimal locations are computed, together with the deviation in captured demand when the optimal locations of the other models are used instead of the true ones. A metaheuristic based on GRASP and a tabu search procedure is presented to solve all the models. Computational experience and an application to a 55-node network are also presented.
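A toy illustration of the maximum-capture idea solved with a tabu-search metaheuristic: a demand node is captured when our nearest open facility beats the competitor's distance. The instance data, tenure, and swap neighbourhood below are invented for illustration; the papers' actual MAXCAP formulations and GRASP / MAX-MIN Ant System components are not reproduced here.

```python
import random

def captured(open_sites, dist, comp_dist, demand):
    """Demand captured: node i is captured if our nearest open facility
    is strictly closer than the competitor's nearest facility."""
    total = 0.0
    for i, w in enumerate(demand):
        ours = min(dist[j][i] for j in open_sites)
        if ours < comp_dist[i]:
            total += w
    return total

def tabu_maxcap(candidates, p, dist, comp_dist, demand,
                iters=50, tenure=3, seed=0):
    """Tabu search over swap moves for selecting p sites."""
    rng = random.Random(seed)
    current = set(rng.sample(candidates, p))
    best = set(current)
    best_val = captured(current, dist, comp_dist, demand)
    tabu = {}  # (removed, added) -> iteration until which the reverse move is tabu
    for t in range(iters):
        moves = []
        for out in current:
            for inn in candidates:
                if inn in current:
                    continue
                cand = (current - {out}) | {inn}
                val = captured(cand, dist, comp_dist, demand)
                # tabu moves are skipped unless they beat the best (aspiration)
                if tabu.get((out, inn), -1) >= t and val <= best_val:
                    continue
                moves.append((val, out, inn, cand))
        if not moves:
            break
        val, out, inn, cand = max(moves, key=lambda m: m[0])
        current = cand
        tabu[(inn, out)] = t + tenure  # forbid reversing this swap for a while
        if val > best_val:
            best, best_val = set(cand), val
    return best, best_val
```

On a tiny instance the swap neighbourhood plus aspiration reliably recovers the brute-force optimum, which is the sanity check one would run before scaling to a 55-node network.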
Abstract:
Behavioral and brain responses to identical stimuli can vary with experimental and task parameters, including the context of stimulus presentation or attention. More surprisingly, computational models suggest that noise-related random fluctuations in brain responses to stimuli would alone be sufficient to engender perceptual differences between physically identical stimuli. In two experiments combining psychophysics and EEG in healthy humans, we investigated brain mechanisms whereby identical stimuli are (erroneously) perceived as different (higher vs lower in pitch or longer vs shorter in duration) in the absence of any change in the experimental context. Even though, as expected, participants' percepts to identical stimuli varied randomly, a classification algorithm based on a mixture of Gaussians model (GMM) showed that there was sufficient information in single-trial EEG to reliably predict participants' judgments of the stimulus dimension. By contrasting electrical neuroimaging analyses of auditory evoked potentials (AEPs) to the identical stimuli as a function of participants' percepts, we identified the precise timing and neural correlates (strength vs topographic modulations) as well as intracranial sources of these erroneous perceptions. In both experiments, AEP differences first occurred ∼100 ms after stimulus onset and were the result of topographic modulations following from changes in the configuration of active brain networks. Source estimations localized the origin of variations in perceived pitch of identical stimuli within right temporal and left frontal areas and of variations in perceived duration within right temporoparietal areas. We discuss our results in terms of providing neurophysiologic evidence for the contribution of random fluctuations in brain activity to conscious perception.
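As a simplified stand-in for the mixture-of-Gaussians classification described above, the sketch below fits one Gaussian per response class to a single-trial feature (e.g. an AEP amplitude at a given latency) and predicts the participant's judgment by maximum posterior; the feature, the class labels, and the single-Gaussian-per-class simplification are assumptions here, not the study's actual GMM pipeline:

```python
import math
from statistics import mean, pstdev

def fit_gaussian(samples):
    """Fit a 1-D Gaussian (mean, stdev) to training samples of one class."""
    sigma = pstdev(samples)
    return mean(samples), sigma if sigma > 0 else 1e-9

def log_density(x, mu, sigma):
    """Log of the Gaussian density at x."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def classify(x, models, priors):
    """Maximum-posterior label for a single-trial feature value x."""
    scores = {label: math.log(priors[label]) + log_density(x, mu, sigma)
              for label, (mu, sigma) in models.items()}
    return max(scores, key=scores.get)
```

In the study's setting, above-chance accuracy of such a classifier on trials with physically identical stimuli is exactly what demonstrates that single-trial EEG carries information predicting the (erroneous) percept.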
Abstract:
The drug discovery process has recently been deeply transformed by the use of computational ligand-based or structure-based methods, helping the identification and optimization of lead compounds, and finally the delivery of new drug candidates more quickly and at lower cost. Structure-based computational methods for drug discovery mainly involve ligand-protein docking and rapid binding free energy estimation, both of which require force field parameterization for many drug candidates. Here, we present a fast force field generation tool, called SwissParam, able to generate, for an arbitrary small organic molecule, topologies and parameters based on the Merck molecular force field, but in a functional form that is compatible with the CHARMM force field. Output files can be used with CHARMM or GROMACS. The topologies and parameters generated by SwissParam are used by the docking software EADock2 and EADock DSS to describe the small molecules to be docked, whereas the protein is described by the CHARMM force field, allowing them to reach success rates ranging from 56 to 78%. We have also developed a rapid binding free energy estimation approach, using SwissParam for ligands and CHARMM22/27 for proteins, which requires only a short minimization to reproduce the experimental binding free energy of 214 ligand-protein complexes involving 62 different proteins, with a standard error of 2.0 kcal mol(-1) and a correlation coefficient of 0.74. Together, these results demonstrate the relevance of using SwissParam topologies and parameters to describe small organic molecules in computer-aided drug design applications, together with a CHARMM22/27 description of the target protein. SwissParam is available free of charge for academic users at www.swissparam.ch.
Abstract:
Protein-protein interactions encode the wiring diagram of cellular signaling pathways, and their deregulation underlies a variety of diseases, such as cancer. Inhibiting protein-protein interactions with peptide derivatives is a promising way to develop new biological and therapeutic tools. Here, we develop a general framework to computationally handle hundreds of non-natural amino acid sidechains and predict the effect of inserting them into peptides or proteins. We first generate all structural files (pdb and mol2), as well as parameters and topologies for standard molecular mechanics software (CHARMM and Gromacs). Accurate predictions of rotamer probabilities are provided using a novel combined knowledge- and physics-based strategy. Non-natural sidechains are useful to increase peptide ligand binding affinity. Our results obtained on non-natural mutants of a BCL9 peptide targeting beta-catenin show very good correlation between predicted and experimental binding free energies, indicating that such predictions can be used to design new inhibitors. Data generated in this work, as well as PyMOL and UCSF Chimera plug-ins for user-friendly visualization of non-natural sidechains, are all available at http://www.swisssidechain.ch. Our results enable researchers to rapidly and efficiently work with hundreds of non-natural sidechains.
Abstract:
We develop a mathematical programming approach for the classical PSPACE-hard restless bandit problem in stochastic optimization. We introduce a hierarchy of n (where n is the number of bandits) increasingly stronger linear programming relaxations, the last of which is exact and corresponds to the (exponential size) formulation of the problem as a Markov decision chain, while the other relaxations provide bounds and are efficiently computed. We also propose a priority-index heuristic scheduling policy from the solution to the first-order relaxation, where the indices are defined in terms of optimal dual variables. In this way we propose a policy and a suboptimality guarantee. We report results of computational experiments that suggest that the proposed heuristic policy is nearly optimal. Moreover, the second-order relaxation is found to provide strong bounds on the optimal value.
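The shape of a priority-index policy can be illustrated on a toy restless-bandit instance: each period the m bandits with the highest current index are activated, and all states evolve. The index rule and dynamics below (reward proportional to time since last activation) are invented for illustration and are not the paper's dual-variable indices:

```python
def priority_index_policy(states, index_fns, m):
    """Activate the m bandits with the highest current index value;
    ties are broken by lower bandit id."""
    ranked = sorted(range(len(states)),
                    key=lambda i: (-index_fns[i](states[i]), i))
    return set(ranked[:m])

def simulate(rewards, m, horizon):
    """Toy restless bandits: state = periods since last activation.
    Activating bandit i yields rewards[i] * state and resets its state;
    passive bandits' states keep growing (they remain 'restless')."""
    n = len(rewards)
    states = [1] * n
    index_fns = [lambda s, r=r: r * s for r in rewards]  # index = immediate reward
    total = 0.0
    for _ in range(horizon):
        active = priority_index_policy(states, index_fns, m)
        for i in range(n):
            if i in active:
                total += rewards[i] * states[i]
                states[i] = 0
            else:
                states[i] += 1
    return total
```

With two bandits, one activation per period, and rewards (2, 1), the policy alternates between them because the passive bandit's index grows while it waits.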
Abstract:
This article starts a computational study of congruences of modular forms and modular Galois representations modulo prime powers. Algorithms are described that compute the maximum integer modulo which two monic coprime integral polynomials have a root in common, in a sense that is defined. These techniques are applied to the study of congruences of modular forms and modular Galois representations modulo prime powers. Finally, some computational results with implications for the (non-)liftability of modular forms modulo prime powers and possible generalisations of level raising are presented.
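One concrete handle on that question: if two monic coprime integral polynomials share a root modulo m, then m divides their resultant, so the resultant bounds any such modulus. The article computes a sharper, carefully defined invariant; the sketch below only evaluates the classical resultant via the Sylvester matrix, with exact rational elimination:

```python
from fractions import Fraction

def det(matrix):
    """Exact determinant by fraction-based Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in matrix]
    n = len(m)
    sign, result = 1, Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col] != 0), None)
        if piv is None:
            return 0
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            sign = -sign
        result *= m[col][col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c2 in range(col, n):
                m[r][c2] -= f * m[col][c2]
    return int(sign * result)

def resultant(f, g):
    """Resultant of two polynomials given as coefficient lists
    (highest degree first), via the Sylvester matrix."""
    df, dg = len(f) - 1, len(g) - 1
    rows = []
    for i in range(dg):                      # dg shifted copies of f
        rows.append([0] * i + f + [0] * (dg - 1 - i))
    for i in range(df):                      # df shifted copies of g
        rows.append([0] * i + g + [0] * (df - 1 - i))
    return det(rows)
```

For example, x-1 and x+1 have resultant 2 and indeed share the root 1 modulo 2, while x^2+1 and x-2 have resultant 5 and share the root 2 modulo 5.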
Abstract:
The network choice revenue management problem models customers as choosing from an offer set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, under the choice-set paradigm, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints that project onto subsets of intersections. In addition we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement, and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation, and we believe it is a very promising approach for quickly approximating the CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.
Abstract:
The analysis of conservation between the human and mouse genomes resulted in the identification of a large number of conserved nongenic sequences (CNGs). The functional significance of this nongenic conservation remains unknown, however. The availability of the sequence of a third mammalian genome, the dog, allows for a large-scale analysis of evolutionary attributes of CNGs in mammals. We have aligned 1638 previously identified CNGs and 976 conserved exons (CODs) from human chromosome 21 (Hsa21) with their orthologous sequences in mouse and dog. Attributes of selective constraint, such as sequence conservation, clustering, and direction of substitutions were compared between CNGs and CODs, showing a clear distinction between the two classes. We subsequently performed a chromosome-wide analysis of CNGs by correlating selective constraint metrics with their position on the chromosome and relative to their distance from genes. We found that CNGs appear to be randomly arranged in intergenic regions, with no bias to be closer or farther from genes. Moreover, conservation and clustering of substitutions of CNGs appear to be completely independent of their distance from genes. These results suggest that the majority of CNGs are not typical of previously described regulatory elements in terms of their location. We propose models for a global role of CNGs in genome function and regulation, through long-distance cis or trans chromosomal interactions.
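Two of the selective-constraint attributes mentioned above, sequence conservation and clustering of substitutions, can be computed on a toy three-way alignment; these are crude stand-ins invented for illustration, not the metrics used in the study:

```python
def conservation(seqs):
    """Fraction of alignment columns identical across all sequences
    (e.g. human, mouse, dog orthologues of a CNG)."""
    cols = list(zip(*seqs))
    same = sum(1 for col in cols if len(set(col)) == 1)
    return same / len(cols)

def longest_mismatch_run(seqs):
    """Length of the longest run of non-identical columns: a crude
    proxy for how clustered the substitutions are."""
    run = best = 0
    for col in zip(*seqs):
        run = run + 1 if len(set(col)) > 1 else 0
        best = max(best, run)
    return best
```

Comparing distributions of such scores between CNGs and conserved exons (CODs), and correlating them with distance from genes, is the kind of chromosome-wide analysis the abstract describes.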
Abstract:
The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based on two GB/SA parameters solely, was tested on two different external sets of molecules. On the Martel druglike test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and tridimensional molecular graphics capability lay the foundations for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
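The model-building loop described above (fit a linear model on solvation-energy descriptors, validate by cross-validation with MAE and RMSE) can be sketched with a single assumed descriptor; the GB/SA terms and the data below are placeholders, not the iLOGP model:

```python
import math

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b with one descriptor
    (the paper's model uses two GB/SA parameters; same idea)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def kfold_metrics(xs, ys, k=5):
    """k-fold cross-validated MAE and RMSE: hold each fold out,
    fit on the rest, collect held-out prediction errors."""
    n = len(xs)
    errs = []
    for fold in range(k):
        test = set(range(fold, n, k))
        tr_x = [x for i, x in enumerate(xs) if i not in test]
        tr_y = [y for i, y in enumerate(ys) if i not in test]
        a, b = fit_linear(tr_x, tr_y)
        errs += [ys[i] - (a * xs[i] + b) for i in test]
    mae = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return mae, rmse
```

On exactly linear data the cross-validated errors vanish; on real log Po/w data they play the role of the MAE and RMSE figures quoted in the abstract.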
Abstract:
Models are presented for the optimal location of hubs in airline networks that take congestion effects into consideration. Hubs, which are the most congested airports, are modeled as M/D/c queuing systems, that is, Poisson arrivals, deterministic service time, and {\em c} servers. A formula is derived for the probability of a given number of customers in the system, which is then used to propose a probabilistic constraint. This constraint limits the probability of {\em b} airplanes waiting in queue to be less than a value $\alpha$. Due to the computational complexity of the formulation, the model is solved using a metaheuristic based on tabu search. Computational experience is presented.