135 results for Semantic extraction
Abstract:
Many studies suggest a large capacity memory for briefly presented pictures of whole scenes. At the same time, visual working memory (WM) of scene elements is limited to only a few items. We examined the role of retroactive interference in limiting memory for visual details. Participants viewed a scene for 5 s and then, after a short delay containing either a blank screen or 10 distracter scenes, answered questions about the location, color, and identity of objects in the scene. We found that the influence of the distracters depended on whether they were from a similar semantic domain, such as "kitchen" or "airport." Increasing the number of similar scenes reduced, and eventually eliminated, memory for scene details. Although scene memory was firmly established over the initial study period, this memory was fragile and susceptible to interference. This may help to explain the discrepancy in the literature between studies showing limited visual WM and those showing a large capacity memory for scenes.
Abstract:
In most previous research on distributional semantics, Vector Space Models (VSMs) of words are built either from topical information (e.g., documents in which a word is present), or from syntactic/semantic types of words (e.g., dependency parse links of a word in sentences), but not both. In this paper, we explore the utility of combining these two representations to build a VSM for the task of semantic composition of adjective-noun phrases. Through extensive experiments on benchmark datasets, we find that even though a type-based VSM is effective for semantic composition, it is often outperformed by a VSM built using a combination of topic- and type-based statistics. We also introduce a new evaluation task wherein we predict the composed vector representation of a phrase from the brain activity of a human subject reading that phrase. We exploit a large syntactically parsed corpus of 16 billion tokens to build our VSMs, with vectors for both phrases and words, and make them publicly available.
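As a rough illustration of the combination this abstract describes, the sketch below concatenates unit-normalised topic- and type-based vectors and composes an adjective-noun phrase additively; the function names, dimensionality, weighting scheme, and additive composition are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def combine_representations(topic_vec, type_vec, alpha=0.5):
    """Concatenate L2-normalised topic- and type-based word vectors.
    `alpha` trades off the two blocks; the weighting is an assumption."""
    t = topic_vec / (np.linalg.norm(topic_vec) + 1e-12)
    s = type_vec / (np.linalg.norm(type_vec) + 1e-12)
    return np.concatenate([alpha * t, (1.0 - alpha) * s])

def compose_adjective_noun(adj_vec, noun_vec):
    """Additive composition, a common baseline for phrase vectors."""
    return adj_vec + noun_vec

# Hypothetical 300-d topic and type vectors for "red" and "car".
rng = np.random.default_rng(0)
red = combine_representations(rng.normal(size=300), rng.normal(size=300))
car = combine_representations(rng.normal(size=300), rng.normal(size=300))
red_car = compose_adjective_noun(red, car)
```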
Abstract:
This paper proposes a method for wind turbine mode identification using the multivariable output-error state-space (MOESP) identification algorithm. The paper incorporates a fast moving-window QR decomposition and a propagator method from array signal processing, yielding a moving-window subspace identification algorithm. The algorithm assumes that the system order is known a priori and remains constant during identification. For the purpose of extracting modal information for turbines modelled as a linear parameter varying (LPV) system, the algorithm is applicable since a nonlinear system can be approximated as a piecewise time-invariant system in consecutive data windows. The algorithm is exemplified using numerical simulations, which show that the moving-window algorithm can track the modal information. The paper also demonstrates that the low computational burden of the algorithm, compared to conventional batch subspace identification, has significant implications for online implementation.
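The moving-window idea can be sketched by re-running a textbook ordinary-MOESP step on successive data windows and watching the estimated eigenvalues drift; the paper's fast QR update and propagator method are omitted here, and the signals, window sizes, and model order are invented for illustration.

```python
import numpy as np

def block_hankel(x, rows):
    """Stack a scalar signal into a Hankel matrix with `rows` rows."""
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

def moesp_modes(u, y, order, rows=20):
    """One ordinary-MOESP step on a single window (SISO, simplified).
    Returns eigenvalues of the estimated A matrix (the modal content)."""
    U, Y = block_hankel(u, rows), block_hankel(y, rows)
    # LQ factorisation via QR of the transpose.
    L = np.linalg.qr(np.vstack([U, Y]).T)[1].T
    L22 = L[rows:, rows:]                  # output part orthogonal to inputs
    Us = np.linalg.svd(L22)[0]
    Ob = Us[:, :order]                     # extended observability matrix
    # Shift invariance: Ob[:-1] @ A = Ob[1:], solved in least squares.
    A = np.linalg.lstsq(Ob[:-1], Ob[1:], rcond=None)[0]
    return np.linalg.eigvals(A)

# Slide the window over the record; slow parameter variation shows up
# as drifting eigenvalues from one window to the next.
rng = np.random.default_rng(1)
u = rng.normal(size=2000)
y = np.convolve(u, [0.5, 0.3, -0.2], mode="same") + 0.01 * rng.normal(size=2000)
for start in range(0, len(u) - 400, 200):
    print(start, np.abs(moesp_modes(u[start:start + 400],
                                    y[start:start + 400], order=2)))
```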
Abstract:
A new approach for extracting stress intensity factors (SIFs) by the element-free Galerkin (EFG) class of methods through a modified crack closure integral (MCCI) scheme is proposed. Its primary feature is that it allows accurate calculation of mode I and mode II SIFs with a relatively simple and straightforward analysis even when a coarser nodal density is employed. The details of the adoption of the MCCI technique in the EFG method are described. Its performance is demonstrated through a number of case studies including mixed-mode and thermal problems in linear elastic fracture mechanics (LEFM). The results are compared with published theoretical solutions and those based on the displacement method, stress method, crack closure integral in conjunction with local smoothing (CCI–LS) technique, as well as the M-integral method. Its advantages are discussed.
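The crack-closure reasoning behind MCCI can be shown with a minimal one-step, 2-D sketch, assuming LEFM and Irwin's relation K = sqrt(G * E'); the nodal forces, displacements, and material constants below are hypothetical, and the EFG-specific ingredients (meshfree shape functions, nodal density) are not modelled.

```python
import math

def sif_from_crack_closure(f_y, f_x, dv, du, da, t, E, nu, plane_strain=True):
    """One-step crack closure integral (2-D) sketch.
    f_y, f_x : nodal forces at the crack tip (opening / sliding)
    dv, du   : relative opening / sliding displacements behind the tip
    da, t    : virtual crack extension and thickness
    """
    area = 2.0 * da * t
    G_I, G_II = f_y * dv / area, f_x * du / area   # energy release rates
    E_eff = E / (1.0 - nu**2) if plane_strain else E
    return math.sqrt(G_I * E_eff), math.sqrt(G_II * E_eff)

# Illustrative numbers only (N, m, Pa); not taken from the paper.
K_I, K_II = sif_from_crack_closure(f_y=1200.0, f_x=150.0, dv=2.0e-6,
                                   du=3.0e-7, da=1.0e-3, t=1.0,
                                   E=200e9, nu=0.3)
print(f"K_I = {K_I / 1e6:.2f} MPa*sqrt(m), K_II = {K_II / 1e6:.2f} MPa*sqrt(m)")
```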
Abstract:
The Supreme Court of the United States in Feist v. Rural (Feist, 1991) specified that compilations or databases, and other works, must have a minimal degree of creativity to be copyrightable. The significance and global diffusion of the decision is only matched by the difficulties it has posed for interpretation. The judgment does not specify what is to be understood by creativity, although it does give a full account of the negative of creativity, as ‘so mechanical or routine as to require no creativity whatsoever’ (Feist, 1991, p.362). The negative of creativity as highly mechanical has particularly diffused globally.
A recent interpretation has correlated ‘so mechanical’ (Feist, 1991) with an automatic mechanical procedure or computational process, using a rigorous exegesis fully to correlate the two uses of mechanical. The negative of creativity is then understood as an automatic computation and as a highly routine process. Creativity itself is conversely understood as non-computational activity, above a certain level of routinicity (Warner, 2013).
The distinction between the negative of creativity and creativity is strongly analogous to an independently developed distinction between forms of mental labour, between semantic and syntactic labour. Semantic labour is understood as human labour motivated by considerations of meaning and syntactic labour as concerned solely with patterns. Semantic labour is distinctively human while syntactic labour can be directly humanly conducted or delegated to machine, as an automatic computational process (Warner, 2005; 2010, pp.33-41).
The value of the analogy is to greatly increase the intersubjective scope of the distinction between semantic and syntactic mental labour. The global diffusion of the standard for extreme absence of copyrightability embodied in the judgment also indicates the possibility that the distinction fully captures the current transformation in the distribution of mental labour, where syntactic tasks which were previously humanly performed are now increasingly conducted by machine.
The paper has substantive and methodological relevance to the conference themes. Substantively, it is concerned with human creativity, with rationality as not reducible to computation, and has relevance to the language myth, through its indirect endorsement of a non-computable or not mechanical semantics. These themes are supported by the underlying idea of technology as a human construction. Methodologically, it is rooted in the humanities and conducts critical thinking through exegesis and empirically tested theoretical development.
References
Feist. (1991). Feist Publications, Inc. v. Rural Tel. Service Co., Inc., 499 U.S. 340.
Warner, J. (2005). Labor in information systems. Annual Review of Information Science and Technology, 39, 551-573.
Warner, J. (2010). Human Information Retrieval (History and Foundations of Information Science). Cambridge, MA: MIT Press.
Warner, J. (2013). Creativity for Feist. Journal of the American Society for Information Science and Technology, 64(6), 1173-1192.
Abstract:
Heterocyclic aromatic amines (HCAs) are carcinogenic mutagens formed during the cooking of proteinaceous foods, particularly meat. To assist in the ongoing search for biomarkers of HCA exposure in blood, a method is described for the extraction from human plasma of the most abundant HCAs: 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP), 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline (MeIQx) and 2-amino-3,4,8-trimethylimidazo[4,5-f]quinoxaline (4,8-DiMeIQx) (and its isomer 7,8-DiMeIQx), using Hollow Fibre Membrane Liquid-Phase Microextraction. This technique employs 2.5 cm lengths of porous polypropylene fibres impregnated with organic solvent to facilitate simultaneous extraction from an alkaline aqueous sample into a low-volume acidic acceptor phase. This low-cost protocol is extensively optimised for fibre length, extraction time, sample pH and volume. Detection is by UPLC-MS/MS using positive-mode electrospray ionisation with a 3.4 min runtime, with optimum peak shape, sensitivity and baseline separation being achieved at pH 9.5. To our knowledge this is the first description of HCA chromatography under alkaline conditions. The application of fixed ion ratio tolerances for confirmation of analyte identity is discussed. Assay precision is between 4.5 and 8.8%, while lower limits of detection between 2 and 5 pg/mL are below the concentrations postulated for acid-labile HCA-protein adducts in blood.
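The ion-ratio confirmation step mentioned at the end can be sketched as a simple tolerance check on the qualifier/quantifier transition ratio; the ±20% window and peak areas below are illustrative assumptions, not the assay's validated values.

```python
def confirm_identity(quant_area, qual_area, ref_ratio, tolerance=0.2):
    """Accept a peak when the qualifier/quantifier ion ratio lies within
    a fixed relative tolerance of the reference ratio from standards."""
    if quant_area <= 0:
        return False
    ratio = qual_area / quant_area
    return abs(ratio - ref_ratio) <= tolerance * ref_ratio

# Hypothetical MRM peak areas for a PhIP quantifier/qualifier pair.
print(confirm_identity(quant_area=15800, qual_area=4200, ref_ratio=0.27))
```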
Abstract:
A series of imprinted polymers targeting nucleoside metabolites, prepared using a template analogue approach, is presented. These were prepared following selection of the optimum functional monomer by solution association studies using ¹H-NMR titrations, whereby methacrylic acid was shown to be the strongest receptor with an affinity constant of 621 ± 51 L mol⁻¹ vs. 110 ± 16 L mol⁻¹ for acrylamide. The best performing polymers were prepared using methanol as a porogenic co-solvent and, although average binding site affinities were marginally reduced, 2.3 × 10⁴ L mol⁻¹ vs. 2.7 × 10⁴ L mol⁻¹ measured for a polymer prepared in acetonitrile, these polymers contained the highest number of binding sites, 5.27 μmol g⁻¹ vs. 1.64 μmol g⁻¹, while also exhibiting enhanced selectivity for methylated guanosine derivatives. When applied as sorbents in the extraction of nucleoside-derivative cancer biomarkers from synthetic urine samples, significant sample clean-up and recoveries of up to 90% for 7-methylguanosine were achieved.
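Affinity constants like those quoted above are typically obtained by fitting a 1:1 fast-exchange binding isotherm to the titration shifts; a sketch of that fit follows, with invented concentrations and shift changes rather than the reported data.

```python
import numpy as np
from scipy.optimize import curve_fit

def isotherm(conc, K_a, d_max):
    """1:1 binding isotherm: observed shift change saturates at d_max."""
    return d_max * K_a * conc / (1.0 + K_a * conc)

# Hypothetical titration: monomer concentration (mol/L) vs. change in a
# template proton shift (ppm).
conc = np.array([0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1])
shift = np.array([0.11, 0.20, 0.38, 0.55, 0.72, 0.88, 0.95])

(K_a, d_max), _ = curve_fit(isotherm, conc, shift, p0=(100.0, 1.0))
print(f"K_a = {K_a:.0f} L/mol, max shift change = {d_max:.2f} ppm")
```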
Abstract:
Many modeling problems require estimating a scalar output from one or more time series. Such problems are usually tackled by extracting a fixed number of features from the time series (such as their statistical moments), with a consequent loss of information that leads to suboptimal predictive models. Moreover, feature extraction techniques usually make assumptions that are not met by real-world settings (e.g. uniformly sampled time series of constant length), and fail to deliver a thorough methodology for dealing with noisy data. In this paper a methodology based on functional learning is proposed to overcome these problems; the proposed Supervised Aggregative Feature Extraction (SAFE) approach derives continuous, smooth estimates of time series data (yielding aggregate local information), while simultaneously estimating a continuous shape function that yields optimal predictions. The SAFE paradigm enjoys several properties, such as a closed-form solution, the incorporation of first- and second-order derivative information into the regressor matrix, interpretability of the generated functional predictor, and the possibility of exploiting the Reproducing Kernel Hilbert Space setting to yield nonlinear predictive models. Simulation studies are provided to highlight the strengths of the new methodology with respect to standard unsupervised feature selection approaches.
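The contrast the abstract draws can be made concrete with the unsupervised baseline it improves on: project each irregularly sampled, variable-length series onto a fixed smooth basis and regress on the coefficients. The sketch below does exactly that (SAFE's supervised, joint estimation of the shape function is omitted); all data, basis choices, and sizes are synthetic assumptions.

```python
import numpy as np

def fourier_basis(t, n_basis):
    """Evaluate a small Fourier basis on (possibly non-uniform) times in [0, 1]."""
    cols = [np.ones_like(t)]
    for k in range(1, n_basis // 2 + 1):
        cols += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
    return np.column_stack(cols)[:, :n_basis]

def functional_features(times, values, n_basis=7, ridge=1e-3):
    """Ridge-regularised projection of one noisy series onto the basis;
    the coefficients are smooth, length-independent features."""
    B = fourier_basis(times, n_basis)
    return np.linalg.solve(B.T @ B + ridge * np.eye(n_basis), B.T @ values)

# Series of different lengths and sampling grids map to fixed-size
# features, then a linear model predicts the scalar output.
rng = np.random.default_rng(2)
X, y = [], []
for _ in range(50):
    t = np.sort(rng.uniform(0, 1, size=rng.integers(40, 80)))
    a = rng.normal()
    v = a * np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)
    X.append(functional_features(t, v))
    y.append(a)                        # scalar target tied to series shape
X, y = np.array(X), np.array(y)
w = np.linalg.lstsq(X, y, rcond=None)[0]
print("training residual:", np.linalg.norm(X @ w - y))
```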
Abstract:
No abstract available
Abstract:
This paper describes the extraction of C5–C8 linear α-olefins from olefin/paraffin mixtures of the same carbon number via reversible complexation with a silver salt (silver bis(trifluoromethylsulfonyl)imide, Ag[Tf2N]) to form room-temperature ionic liquids [Ag(olefin)x][Tf2N]. From the experimental (liquid + liquid) equilibrium data for the olefin/paraffin mixtures and Ag[Tf2N], 1-pentene showed the best separation performance, while C7 and C8 olefins could only be separated from the corresponding mixtures on addition of water, which also improves the selectivity at lower carbon numbers such as C5 and C6. Using infrared and Raman spectroscopy of the complex and of Ag[Tf2N] saturated with olefin, the mechanism of the extraction was found to be based on both chemical complexation and the physical solubility of the olefin in the ionic liquid ([Ag(olefin)x][Tf2N]). These experiments further support the use of such extraction techniques for the separation of olefins from paraffins.
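Separation performance in such (liquid + liquid) equilibrium studies is usually summarised by distribution coefficients and their ratio; a minimal sketch, with hypothetical mole fractions for a single tie line, is shown below.

```python
def selectivity(x_olefin_il, x_paraffin_il, x_olefin_org, x_paraffin_org):
    """Olefin/paraffin selectivity: the ratio of the two distribution
    coefficients between the ionic-liquid and hydrocarbon phases."""
    return (x_olefin_il / x_olefin_org) / (x_paraffin_il / x_paraffin_org)

# Hypothetical mole fractions for a 1-pentene/pentane tie line.
print(selectivity(x_olefin_il=0.30, x_paraffin_il=0.02,
                  x_olefin_org=0.40, x_paraffin_org=0.55))
```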
Abstract:
In the semiconductor manufacturing environment it is very important to understand which factors have the most impact on process outcomes and to control them accordingly. This is usually achieved through design of experiments at process start-up and long-term observation of production, and as such it relies heavily on the expertise of the process engineer. In this work, we present an automatic approach to extracting useful insights about production processes and equipment based on state-of-the-art Machine Learning techniques. The main goal of this activity is to provide process engineers with tools that accelerate the learning-by-observation phase of process analysis. Using a Metal Deposition process as an example, we highlight various ways in which the extracted information can be employed.
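One plausible reading of "extracting useful insights" is ranking process variables by their predictive importance for an outcome; the sketch below does this with a random forest and permutation importance on synthetic data. The variable names, outcome, and model choice are assumptions for illustration, not the paper's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Hypothetical metal-deposition dataset: equipment/recipe readings as
# inputs, a film-quality metric as the outcome to be explained.
rng = np.random.default_rng(3)
names = ["chamber_temp", "rf_power", "gas_flow", "pressure", "tool_age"]
X = rng.normal(size=(500, len(names)))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(names, imp.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:13s} {score:.3f}")
```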