109 results for Translation memory
Abstract:
One-dimensional ferroelectric nanostructures, carbon nanotubes (CNT) and CNT-inorganic oxides have recently been studied due to their potential applications in microelectronics. Here, we report the coating of a registered array of aligned multi-wall carbon nanotubes (MWCNT) grown on silicon substrates with the functional ferroelectric Pb(Zr,Ti)O3 (PZT), which produces structures suitable for commercial prototype memories. Microstructural analysis reveals the crystalline nature of PZT, with small nanocrystals aligned in different directions. First-order Raman modes of MWCNT and PZT/MWCNT/n-Si show the high structural quality of the CNT before and after PZT deposition at elevated temperature. PZT exists mostly in the monoclinic Cc/Cm phase, which is the origin of the high piezoelectric response in the system. Low-loss square piezoelectric hysteresis obtained for the 3D bottom-up structure confirms the switchability of the device. Current-voltage mapping of the device by conducting atomic force microscopy (c-AFM) indicates very low transient current. The fabrication and functional properties of these hybrid ferroelectric-carbon nanotube structures are a first step towards miniaturization for future nanotechnology sensors, actuators, transducers and memory devices. © 2012 IOP Publishing Ltd.
Abstract:
This paper introduces a rule-based classification of single-word and compound verbs into a statistical machine translation approach. By substituting verb forms with the lemma of their head verb, the data sparseness problem caused by highly inflected languages can be successfully addressed. In addition, the information from seen verb forms can be used to generate new translations for unseen verb forms. Translation results for an English-to-Spanish task are reported, showing a significant performance improvement.
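A minimal sketch of the verb-substitution idea described in this abstract, assuming a toy hand-written lemma dictionary and pre-tokenized input; the paper's actual rule-based classifier and SMT pipeline are not reproduced here, and all names below (VERB_LEMMAS, lemmatize_verbs) are hypothetical.

```python
# Illustrative preprocessing step: replace inflected (single-word or compound)
# verb forms with the lemma of their head verb before training/decoding.

# Hypothetical mapping from surface verb forms to head-verb lemmas.
VERB_LEMMAS = {
    "was going": "go",
    "went": "go",
    "has eaten": "eat",
    "ate": "eat",
}

def lemmatize_verbs(tokens: list[str]) -> list[str]:
    """Replace known verb forms (longest match first) with their head-verb lemma."""
    out, i = [], 0
    while i < len(tokens):
        # Try compound (two-token) forms before single tokens.
        for span in (2, 1):
            phrase = " ".join(tokens[i:i + span])
            if phrase in VERB_LEMMAS:
                out.append(VERB_LEMMAS[phrase])
                i += span
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

if __name__ == "__main__":
    sentence = "she was going home after she ate".split()
    print(lemmatize_verbs(sentence))
    # ['she', 'go', 'home', 'after', 'she', 'eat']
```

Mapping both "was going" and "went" onto the single lemma "go" is what reduces the number of distinct source-side verb tokens the translation model must estimate, which is how this kind of substitution mitigates sparseness for highly inflected languages.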
Abstract:
In this paper we describe MARIE, an N-gram-based statistical machine translation decoder. It is implemented using a beam search strategy with distortion (reordering) capabilities. The underlying translation model is based on an N-gram approach, extended to introduce reordering at the phrase level. The search graph structure is designed to allow very accurate comparisons, which permits a high level of pruning and improves the decoder's efficiency. We report several techniques for efficiently pruning the search space. The combinatorial explosion of the search space derived from the search graph structure is reduced by limiting the number of reorderings a given translation is allowed to perform, as well as the maximum distance a word (or phrase) is allowed to be reordered. Finally, we report translation accuracy results on three different translation tasks.
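A small sketch of the two reordering constraints mentioned in this abstract (a cap on the number of reordering jumps per hypothesis and a cap on how far a phrase may be moved); this is not MARIE's implementation, and the function name, limits, and hypothesis fields used below are assumptions for illustration only.

```python
# Illustrative distortion constraints for a beam-search SMT decoder:
# restrict which uncovered source positions a partial hypothesis may
# translate next, given how many reordering jumps it has already made.

MAX_JUMPS = 3        # assumed maximum reorderings per translation hypothesis
MAX_DISTANCE = 5     # assumed maximum distance a word/phrase may be reordered

def allowed_next_positions(covered: set[int], n_source: int,
                           last_end: int, jumps_used: int) -> list[int]:
    """Return the source start positions a hypothesis may extend to next."""
    positions = []
    for start in range(n_source):
        if start in covered:
            continue
        if start == last_end:
            # Monotone continuation (left to right) is always allowed.
            positions.append(start)
            continue
        # A non-monotone extension counts as one reordering jump.
        if jumps_used >= MAX_JUMPS:
            continue
        if abs(start - last_end) > MAX_DISTANCE:
            continue
        positions.append(start)
    return positions

if __name__ == "__main__":
    # 10-word source sentence, words 0-2 already translated, no jumps yet.
    print(allowed_next_positions({0, 1, 2}, 10, last_end=3, jumps_used=0))
    # [3, 4, 5, 6, 7, 8]
```

Constraints of this kind shrink the number of reachable hypothesis expansions per beam state, which is the mechanism by which the combinatorial growth of the search space is kept manageable.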