63 results for semantic extension
Abstract:
The ionic liquid 1-ethyl-3-methylimidazolium bis{(trifluoromethyl)sulfonyl}amide ([C(2)mim][NTf2]) was tested as a solvent for the separation of aromatic and aliphatic hydrocarbons containing 7 or 8 carbon atoms (the C-7 and C-8 fractions). The liquid-liquid equilibria (LLE) of the ternary systems (heptane + toluene + [C(2)mim][NTf2]) and (octane + ethylbenzene + [C(2)mim][NTf2]) at 25 degrees C were determined experimentally. The performance of the ionic liquid as the solvent in these systems was evaluated by calculating the solute distribution ratio and the selectivity. The results were compared with those previously reported for the extraction of benzene from its mixtures with hexane using the same ionic liquid, thereby analysing the influence of hydrocarbon size. The ionic liquid was found to be effective for extracting the aromatic compounds of the C-7 and C-8 fractions as well, although a greater amount of ionic liquid is needed to achieve a separation as efficient as that obtained for the C-6 fraction. It is also discussed how [C(2)mim][NTf2] compares favourably with the conventional solvent sulfolane. The original 'Non-Random Two-Liquid' (NRTL) equation adequately correlated the experimental LLE data.
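For reference, the solute distribution ratio and the selectivity mentioned above are conventionally defined on a mole-fraction basis in liquid-liquid extraction (the paper may use this or a closely related form); for a solute i distributed between the ionic-liquid-rich and hydrocarbon-rich phases:

```latex
\beta_i = \frac{x_i^{\text{IL-rich}}}{x_i^{\text{HC-rich}}},
\qquad
S = \frac{\beta_{\text{aromatic}}}{\beta_{\text{aliphatic}}}
```

A selectivity above unity indicates preferential extraction of the aromatic solute over the aliphatic one.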
Abstract:
A novel non-linear dimensionality reduction method, called Temporal Laplacian Eigenmaps, is introduced to process time-series data efficiently. In this embedding-based approach, temporal information is intrinsic to the objective function, which produces descriptions of low-dimensional spaces with time coherence between data points. Since the proposed scheme also includes bidirectional mapping between data and embedded spaces and automatic tuning of key parameters, it offers the same benefits as mapping-based approaches. Experiments on two computer vision applications demonstrate the superiority of the new approach over other dimensionality reduction methods in terms of accuracy. Moreover, its lower computational cost and generalisation abilities suggest it is scalable to larger datasets. © 2010 IEEE.
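The abstract does not give the optimisation details, but the standard Laplacian Eigenmaps machinery it builds on can be sketched as follows. The `temporal_weight` link between consecutive samples is an illustrative assumption standing in for the temporal term of the objective, not the authors' exact formulation:

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_neighbors=5, dim=2, temporal_weight=0.0):
    """Embed rows of X into `dim` dimensions via a graph Laplacian.

    If temporal_weight > 0, consecutive rows (assumed time-ordered)
    are additionally linked, crudely mimicking a temporal constraint.
    """
    n = X.shape[0]
    # squared pairwise distances
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma = np.median(D2) + 1e-12
    W = np.zeros((n, n))
    for i in range(n):
        # k nearest neighbours (skip self), heat-kernel weights
        idx = np.argsort(D2[i])[1:n_neighbors + 1]
        W[i, idx] = np.exp(-D2[i, idx] / sigma)
    W = np.maximum(W, W.T)  # symmetrize the graph
    if temporal_weight > 0:
        for i in range(n - 1):
            W[i, i + 1] = W[i + 1, i] = max(W[i, i + 1], temporal_weight)
    Dg = np.diag(W.sum(1))
    L = Dg - W
    # generalized eigenproblem L v = lambda Dg v;
    # drop the trivial constant eigenvector (eigenvalue 0)
    vals, vecs = eigh(L, Dg)
    return vecs[:, 1:dim + 1]
```

The bidirectional mapping and automatic parameter tuning claimed in the abstract are not reproduced here; this is only the core spectral embedding step.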
Abstract:
Latent semantic indexing (LSI) is a technique used for intelligent information retrieval (IR). It can be used as an alternative to traditional keyword matching IR and is attractive in this respect because of its ability to overcome problems with synonymy and polysemy. This study investigates various aspects of LSI: the effect of the Haar wavelet transform (HWT) as a preprocessing step for the singular value decomposition (SVD) in the key stage of the LSI process; and the effect of different threshold types in the HWT on the search results. The developed method allows the visualisation and processing of the term document matrix, generated in the LSI process, using HWT. The results have shown that precision can be increased by applying the HWT as a preprocessing step, with better results for hard thresholding than soft thresholding, whereas standard SVD-based LSI remains the most effective way of searching in terms of recall value.
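As background for the key LSI stage described above, a minimal rank-k retrieval step (truncated SVD of the term-document matrix, query folding, cosine ranking) might look like the sketch below; in the study, the HWT preprocessing would be applied to the term-document matrix before this SVD:

```python
import numpy as np

def lsi_rank(A, q, k=2):
    """Rank documents against a query with rank-k latent semantic indexing.

    A : term-document matrix (terms x docs); q : query vector over the same terms.
    Returns (ranking of document indices, cosine similarities).
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]
    docs = Vtk.T                  # document coordinates in the k-dim latent space
    q_hat = (q @ Uk) / sk         # fold the query into the same space
    # cosine similarity between the query and every document
    sims = docs @ q_hat / (np.linalg.norm(docs, axis=1)
                           * np.linalg.norm(q_hat) + 1e-12)
    return np.argsort(-sims), sims
```

Because matching happens in the latent space rather than on raw terms, a document can score highly without containing the query terms verbatim, which is how LSI mitigates synonymy.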
Abstract:
The project comprises the re-ordering and extension of a 19th-century country house in the extreme south west of Ireland. The original house is what can be termed an Irish house of the middle size. A common typology in 19th-century Ireland, the classical house of the middle size is characterised by a highly ordered plan containing a variety of rooms within a square or rectangular form. A strategy of elaborating the threshold between the reception rooms of the house and the garden was adopted by wrapping the house in a notional forest of columns, creating deep verandas to the south and west of the main living spaces. The grid of structural columns derived its proportions directly from the house. The columns became analogous with the mature oak and pine trees in the garden beyond, while the floor and ceiling were considered as landscapes in their own right, with the black floor forming hearth stone, kitchen island and basement cellar, and the concrete roof inflected to hold roof lights, a chimney and a landscape of pleasure on the roof above.
Aims / Objectives / Questions
1. To restore and extend a “house of the middle size”, a historic Irish typology, in a sympathetic manner.
2. To address the new-build accommodation in a sustainable manner through strategies associated with orientation, microclimates, materiality, and both mechanical and structural engineering.
3. To explore and develop an understanding of two spatial orders: the enfilade room and the non-directional space of the grid.
4. The creation of deep threshold space.
5. Marbling as a finish in fair-faced concrete.
6. Concrete as a sustainable building material.
Abstract:
Introduction: Juvenile idiopathic arthritis (JIA) comprises a poorly understood group of chronic autoimmune diseases with variable clinical outcomes. We investigated whether the synovial fluid (SF) proteome could distinguish a subset of patients in whom disease extends to affect a large number of joints.
Methods: SF samples from 57 patients were obtained around the time of initial diagnosis of JIA, labeled with Cy dyes, and separated by two-dimensional electrophoresis. Multivariate analyses were used to isolate a panel of proteins that distinguishes the patient subgroups. Proteins were identified using MALDI-TOF mass spectrometry, with expression verified by immunochemical methods. Protein glycosylation status was confirmed by hydrophilic interaction liquid chromatography.
Results: A truncated isoform of vitamin D binding protein (VDBP) is present at significantly reduced levels in the SF of oligoarticular patients at risk of disease extension, relative to other subgroups (p < 0.05). Furthermore, sialylated forms of immunopurified synovial VDBP were significantly reduced in extended oligoarticular patients (p < 0.005).
Conclusion: Reduced conversion of VDBP to a macrophage activation factor may be used to stratify patients to determine risk of disease extension in JIA patients.
Abstract:
We report four repetitions of Falk and Kosfeld's (Am. Econ. Rev. 96(5):1611-1630, 2006) low and medium control treatments with 476 subjects. Each repetition employs a sample drawn from a standard subject pool of students, and demographics vary across samples. We largely confirm the existence of hidden costs of control but, contrary to the original study, hidden costs of control are usually not substantial enough to significantly undermine the effectiveness of economic incentives. At the end of the experimental session, our subjects were asked to complete a questionnaire in which they had to state their work motivation in hypothetical scenarios. Our questionnaires are identical to those administered in Falk and Kosfeld's questionnaire study. In contrast to the game-play data, our questionnaire data are similar to those of the original questionnaire study. In an attempt to resolve this puzzle, we report an extension with 228 subjects in which performance-contingent earnings are absent, i.e. both principals and agents are paid a flat participation fee. We observe that hidden costs significantly outweigh the benefits of control under hypothetical incentives.
Turning the tide: A critique of Natural Semantic Metalanguage from a translation studies perspective
Abstract:
Starting from the premise that human communication is predicated on translational phenomena, this paper applies theoretical insights and practical findings from Translation Studies to a critique of Natural Semantic Metalanguage (NSM), a theory of semantic analysis developed by Anna Wierzbicka. Key tenets of NSM, i.e. (1) culture-specificity of complex concepts; (2) the existence of a small set of universal semantic primes; and (3) definition by reductive paraphrase, are discussed critically with reference to the notions of untranslatability, equivalence, and intra-lingual translation, respectively. It is argued that a broad spectrum of research and theoretical reflection in Translation Studies may successfully feed into the study of cognition, meaning, language, and communication. The interdisciplinary exchange between Translation Studies and linguistics may be properly balanced, with the former not only being informed by but also informing and interrogating the latter.
Abstract:
Web databases are now pervasive. Such a database can be accessed only via its query interface (usually an HTML query form). Extracting Web query interfaces is a critical step in data integration across multiple Web databases: it creates a formal representation of a query form by extracting the set of query conditions in it. This paper presents a novel approach to extracting Web query interfaces. In this approach, a generic set of query condition rules is created to define query conditions that are semantically equivalent to SQL search conditions. Query condition rules represent the semantic roles that labels and form elements play in query conditions, and how they are hierarchically grouped into the constructs of query conditions. To group labels and form elements in a query form, we exploit both their structural proximity in the hierarchy of structures in the query form, which is captured by the tree of nested tags in the HTML code of the form, and their semantic similarity, which is captured by the various short texts used in labels, form elements and their properties. We have implemented the proposed approach, and our experimental results show that it is highly effective.
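As a toy illustration of the kind of label-to-field grouping described (not the paper's rule system, which also uses semantic similarity and hierarchical constructs), a parser can pair each form field with the nearest preceding label text using Python's standard `html.parser`:

```python
from html.parser import HTMLParser

class QueryFormExtractor(HTMLParser):
    """Pair each form field with the nearest preceding text label.

    A crude structural-proximity heuristic: real query-interface
    extraction must also handle nested tags and semantic similarity.
    """
    def __init__(self):
        super().__init__()
        self.last_text = ""
        self.conditions = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.last_text = text  # remember the latest label candidate

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "select", "textarea"):
            a = dict(attrs)
            if a.get("type") in ("submit", "hidden"):
                return  # not a query condition
            self.conditions.append({"label": self.last_text,
                                    "field": a.get("name", "")})

html_form = """
<form>
  Title: <input name="title" type="text">
  Price from <input name="min_price"> to <input name="max_price">
  <input type="submit" value="Search">
</form>
"""
parser = QueryFormExtractor()
parser.feed(html_form)
```

Each entry in `parser.conditions` roughly corresponds to one query condition (label plus form element) that would then be mapped to an SQL-style search condition.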
Abstract:
A 42-year-old man has been under long-term follow-up since childhood for congenital glaucoma and buphthalmos in both eyes. His left eye best corrected visual acuity (BCVA) was counting fingers, due to end-stage glaucoma. He was on maximal medical therapy with an intraocular pressure (IOP) maintained in the mid-to-low twenties. His right eye, the only seeing eye, had a BCVA of 6/9. This eye had undergone multiple glaucoma laser and surgical procedures, including an initial Molteno drainage device inserted superonasally that failed in April 2003 due to a fibrotic membrane over the tube opening. As a result, he subsequently had a second Molteno drainage device inserted inferotemporally. To further maximize his vision, he had an uncomplicated cataract extraction and intraocular lens implant in December 2004, after which he developed postoperative cystoid macular edema and corneal endothelial failure. He underwent a penetrating keratoplasty in the right eye in March 2007. After approximately a year, the second Molteno device developed drainage-tube retraction, which was managed surgically to maintain optimum IOP in the right eye. His right eye vision to date is maintained at 6/12. © 2011 Mustafa and Azuara-Blanco.
Abstract:
In this paper, we introduce an application of matrix factorization to produce corpus-derived, distributional models of semantics that demonstrate cognitive plausibility. We find that word representations learned by Non-Negative Sparse Embedding (NNSE), a variant of matrix factorization, are sparse, effective, and highly interpretable. To the best of our knowledge, this is the first approach that yields semantic representations of words satisfying these three desirable properties. Through extensive experimental evaluations on multiple real-world tasks and datasets, we demonstrate the superiority of semantic models learned by NNSE over other state-of-the-art baselines.
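NNSE's exact constrained solver is not given in the abstract; in its spirit, a simplified non-negative sparse factorization of a non-negative word-context matrix, X ≈ A·D with an L1 penalty on the word codes A, can be sketched with standard multiplicative updates (a rough stand-in, not the authors' algorithm):

```python
import numpy as np

def sparse_nmf(X, k=10, lam=0.1, iters=200, seed=0):
    """Factor non-negative X (words x contexts) as A @ D.

    A : word codes (non-negative, pushed toward sparsity by lam);
    D : dictionary of interpretable latent dimensions.
    Multiplicative updates for ||X - A D||^2 + lam * sum(A).
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    A = rng.random((n, k))
    D = rng.random((k, m))
    eps = 1e-9
    for _ in range(iters):
        # update codes: the lam term in the denominator shrinks A toward 0
        A *= (X @ D.T) / (A @ (D @ D.T) + lam + eps)
        # update dictionary
        D *= (A.T @ X) / ((A.T @ A) @ D + eps)
    return A, D
```

Non-negativity is what makes each latent dimension read as an additive, interpretable feature; the L1 term keeps each word's code supported on only a few dimensions.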
Abstract:
Computational models of meaning trained on naturally occurring text successfully model human performance on tasks involving simple similarity measures, but they characterize meaning in terms of undifferentiated bags of words or topical dimensions. This has led some to question their psychological plausibility (Murphy, 2002; Schunn, 1999). We present here a fully automatic method for extracting a structured and comprehensive set of concept descriptions directly from an English part-of-speech-tagged corpus. Concepts are characterized by weighted properties, enriched with concept-property types that approximate classical relations such as hypernymy and function. Our model outperforms comparable algorithms in cognitive tasks pertaining not only to concept-internal structures (discovering properties of concepts, grouping properties by property type) but also to inter-concept relations (clustering into superordinates), suggesting the empirical validity of the property-based approach. Copyright © 2009 Cognitive Science Society, Inc. All rights reserved.
Abstract:
Achieving a clearer picture of categorial distinctions in the brain is essential for our understanding of the conceptual lexicon, but much more fine-grained investigations are required in order for this evidence to contribute to lexical research. Here we present a collection of advanced data-mining techniques that allows the category of individual concepts to be decoded from single trials of EEG data. Neural activity was recorded while participants silently named images of mammals and tools, and category could be detected in single trials with an accuracy well above chance, both when considering data from single participants, and when group-training across participants. By aggregating across all trials, single concepts could be correctly assigned to their category with an accuracy of 98%. The pattern of classifications made by the algorithm confirmed that the neural patterns identified are due to conceptual category, and not any of a series of processing-related confounds. The time intervals, frequency bands and scalp locations that proved most informative for prediction permit physiological interpretation: the widespread activation shortly after appearance of the stimulus (from 100 ms) is consistent both with accounts of multi-pass processing, and distributed representations of categories. These methods provide an alternative to fMRI for fine-grained, large-scale investigations of the conceptual lexicon. © 2010 Elsevier Inc.
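The decoding pipeline described (single-trial features, cross-validated classification, comparison against chance) can be illustrated with a deliberately simple stand-in; the nearest-centroid classifier and the trials-by-features layout below are assumptions for illustration, not the authors' data-mining techniques:

```python
import numpy as np

def decode_trials(X, y, folds=5, seed=0):
    """Cross-validated nearest-centroid decoding of trial category.

    X : trials x features (e.g., band power per channel/time window);
    y : 0/1 category labels. Returns mean held-out accuracy.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    accs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        # z-score features using training statistics only (no test leakage)
        mu, sd = X[train].mean(0), X[train].std(0) + 1e-12
        Xtr, Xte = (X[train] - mu) / sd, (X[test] - mu) / sd
        c0 = Xtr[y[train] == 0].mean(0)   # centroid of category 0
        c1 = Xtr[y[train] == 1].mean(0)   # centroid of category 1
        pred = (np.linalg.norm(Xte - c1, axis=1) <
                np.linalg.norm(Xte - c0, axis=1)).astype(int)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))
```

Single-trial accuracies well above 0.5 on such a scheme would mirror the above-chance decoding reported; aggregating predictions over many trials of the same concept then pushes category assignment toward the near-perfect figures quoted.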