918 results for symbols.
Abstract:
Structural analysis in handwritten mathematical expressions focuses on interpreting the recognized symbols using geometrical information such as the relative sizes and positions of the symbols. Most existing approaches rely on hand-crafted grammar rules to identify semantic relationships among the recognized mathematical symbols. They can easily fail when writing errors occur. Moreover, they assume the availability of the whole mathematical expression before the semantic information of the expression can be analyzed. To tackle these problems, we propose a progressive structural analysis (PSA) approach for dynamic recognition of handwritten mathematical expressions. The proposed PSA approach provides an analysis result immediately after each input symbol is written. This has the advantage that users can detect recognition errors immediately and correct only the misrecognized symbols rather than the whole expression. Experiments conducted on the 57 most commonly used mathematical expressions show that the PSA approach achieves very good performance.
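To illustrate the kind of geometric reasoning such an analysis relies on, the sketch below classifies a newly written symbol's relation to the previous one from bounding boxes alone; the thresholds and relation names are illustrative assumptions, not the PSA paper's actual rules.

```python
def spatial_relation(prev_box, new_box):
    """Classify how a newly written symbol relates to the previous one,
    using only bounding-box geometry (x, y, w, h), with y growing downwards.

    Thresholds and relation names are illustrative assumptions; the PSA
    paper's actual geometric rules are not reproduced here.
    """
    px, py, pw, ph = prev_box
    nx, ny, nw, nh = new_box
    new_center_y = ny + nh / 2.0
    if nh < 0.7 * ph and new_center_y < py:        # smaller and raised above the previous symbol
        return "superscript"
    if nh < 0.7 * ph and new_center_y > py + ph:   # smaller and lowered below it
        return "subscript"
    if ny > py + ph:                               # entirely below the previous symbol
        return "below"
    return "horizontal"                            # ordinary left-to-right continuation
```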
Abstract:
Much research pursues machine intelligence through better representation of semantics. What is semantics? People in different areas view semantics from different facets, although it has accompanied interaction throughout civilization. Some researchers believe that humans have some innate structure in mind for processing semantics. Then, what is that structure like? Some argue that humans evolve a structure for processing semantics through constant learning. Then, what is that process like? Humans have invented various symbol systems to represent semantics. Can semantics be accurately represented? Turing machines are good at processing symbols according to algorithms designed by humans, but they are limited in their ability to process semantics and to interact actively. Supercomputers and high-speed networks do not help solve this issue, as they do not have any semantic worldview and cannot reflect on themselves. Can a future cyber-society have semantic images that enable machines and individuals (humans and agents) to reflect on themselves and interact with each other while knowing the social situation through time? This paper concerns these issues in the context of studying an interactive semantics for the future cyber-society. It first distinguishes social semantics from natural semantics, and then explores interactive semantics in the category of social semantics. Interactive semantics consists of an interactive system and its semantic image, which co-evolve and influence each other. The semantic worldview and the interactive semantic base are proposed as the semantic basis of interaction. The process of building and explaining a semantic image can be based on an evolving structure incorporating an adaptive multi-dimensional classification space and a self-organized semantic link network. A semantic lens is proposed to enhance the potential of the structure and to help individuals build and retrieve semantic images from different facets, abstraction levels and scales through time.
Abstract:
We report the performance of a group of adult dyslexics and matched controls in an array-matching task where two strings of either consonants or symbols are presented side by side and have to be judged to be the same or different. The arrays may differ either in the order or in the identity of two adjacent characters. This task does not require naming – which has been argued to be the cause of dyslexics' difficulty in processing visual arrays – but instead has a strong serial component, as demonstrated by the fact that, in both groups, reaction times (RTs) increase monotonically with the position of a mismatch. The dyslexics are clearly impaired in all conditions, and performance in the identity conditions predicts performance across orthographic tasks even after age, performance IQ and phonology are partialled out. Moreover, the shapes of the serial position curves are revealing of the underlying impairment. In the dyslexics, RTs increase with position at the same rate as in the controls (the lines are parallel), ruling out reduced processing speed or difficulties in shifting attention. Instead, error rates show a catastrophic increase for positions that are either searched later or more subject to interference. These results are consistent with a reduction in the attentional capacity needed in a serial task to bind together identity and positional information. This reduction is best seen as a decrease in the number of spotlights into which attention can be split to process information at different locations, rather than as a more generic reduction of resources that would also affect processing the details of single objects.
Abstract:
This paper presents the digital imaging results of a collaborative research project working toward the generation of an online interactive digital image database of signs from ancient cuneiform tablets. An important aim of this project is the application of forensic analysis to the cuneiform symbols to identify scribal hands. Cuneiform tablets are amongst the earliest records of written communication and could be considered one of the original information technologies: an accessible, portable and robust medium for communication across distance and time. The earliest examples are up to 5,000 years old, and the writing technique remained in use for some 3,000 years. Unfortunately, only a small fraction of these tablets can be made available for display in museums, and much important academic work has yet to be performed on the very large numbers of tablets to which there is necessarily restricted access. Our paper describes the challenges encountered in the 2D image capture of a sample set of tablets held in the British Museum, explaining the motivation for attempting 3D imaging and the results of initial experiments scanning the smaller, more densely inscribed cuneiform tablets. We also discuss the tractability of 3D digital capture, representation and manipulation, and investigate the requirements for scalable data compression and transmission methods. Additional information can be found on the project website: www.cuneiform.net
Abstract:
This thesis explores the interaction that occurred between Micros (<10 employees) from non-creative sectors and website designers ("Creatives") when creating a website of a higher order than a basic template site. The research used the Straussian Grounded Theory Method with a longitudinal design in order to identify what knowledge transferred to the Micros during the collaboration, how it transferred, what factors affected the transfer, and the outcomes of the transfer, including behavioural additionality. To identify whether the research could be extended beyond this, five other design areas were also examined, as well as five Small to Medium Enterprises (SMEs) engaged in website and branding projects. The findings were that, at the start of the design process, many Micros could not articulate their customer knowledge and had poor marketing and visual language skills, knowledge that is core to web design because it enables targeted communication to customers through images. Despite these gaps, most Micros still tried to lead the process. To overcome this disconnect, the majority of the designers used a knowledge transfer strategy termed in this thesis 'Bi-Modal Knowledge Transfer', in which the Creative was aware of the transfer but the Micro was unaware, both for drawing out customer knowledge from the Micro and for transferring visual language skills to the Micro. Two models were developed to represent this process. Two further models were created to map changes in the knowledge landscapes of customer knowledge and visual language – the Knowledge Placement Model and the Visual Language Scale. The Knowledge Placement Model was used to map the placement of customer knowledge within the consciousness, extending the known Automatic-Unconscious-Conscious model by adding two more locations – Peripheral Consciousness and Occasional Consciousness. Peripheral Consciousness is where potential knowledge is held but not used. Occasional Consciousness is where potential knowledge is held but used only for specific tasks. The Visual Language Scale was created to measure visual language ability, from visually responsive, where the participant only responds personally to visual symbols, to visually multi-lingual, where the participant can use visual symbols to communicate with multiple thought-worlds. With successful Bi-Modal Knowledge Transfer, the outcome included not only an effective website but also changes in the knowledge landscape of the Micros and ongoing behavioural changes, especially in marketing. These effects were not seen in the other design projects, and were seen in only two of the SME projects. The key factors behind this difference between SMEs and Micros appeared to be an expectation of knowledge by the Creatives and a failure by the SMEs to transfer knowledge within the company.
Abstract:
In this letter, we experimentally study the statistical properties of a received QPSK-modulated signal and compare various bit error rate (BER) estimation methods for coherent optical orthogonal frequency division multiplexing (CO-OFDM) transmission. We show that the statistical BER estimation method based on the probability density function of the received QPSK symbols offers the most accurate estimate of the system performance.
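As an illustration of this idea, the sketch below estimates the BER of a Gray-coded QPSK constellation from the empirical distribution of the received symbols, assuming roughly Gaussian noise on each quadrature after carrier recovery; the exact procedure used in the letter may differ.

```python
import numpy as np
from scipy.special import erfc

def qpsk_ber_from_pdf(rx_symbols):
    """Statistical BER estimate for Gray-coded QPSK from received symbols.

    Each symbol is folded onto the first quadrant so all four constellation
    points contribute to one Gaussian fit per rail; the BER is then the
    probability that noise pushes a rail across the decision boundary at
    zero.  Assumes moderate-to-high SNR so the folding bias is negligible.
    """
    folded = np.abs(rx_symbols.real) + 1j * np.abs(rx_symbols.imag)
    rails = np.concatenate([folded.real, folded.imag])  # pooled I and Q samples
    mu, sigma = rails.mean(), rails.std()
    return 0.5 * erfc(mu / (sigma * np.sqrt(2)))
```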
Abstract:
We explored the role of modularity as a means to improve evolvability in populations of adaptive agents. We performed two sets of artificial life experiments. In the first, the adaptive agents were neural networks controlling the behavior of simulated garbage-collecting robots, where modularity referred to the networks' architectural organization and evolvability to the capacity of the population to adapt to environmental changes, measured by the agents' performance. In the second, the agents were programs that control the changes in a network's synaptic weights (learning algorithms), the modules were emergent clusters of symbols with a well-defined function, and evolvability was measured through the level of symbol diversity across programs. We found that the presence of modularity (either imposed by construction or as an emergent property in a favorable environment) is strongly correlated with the presence of very fit agents adapting effectively to environmental changes. In the case of learning algorithms we also observed that character diversity and modularity are strongly correlated quantities. © 2014 Springer Science+Business Media New York.
Abstract:
A general technique for transforming a timed finite state automaton into an equivalent automated planning domain based on a numerical parameter model is introduced. Timed transition automata have many applications in control systems and agent models; they are used to describe sequential processes in which actions label automaton transitions subject to temporal constraints. The language of timed words accepted by a timed automaton, i.e. the possible sequences of system or agent behaviour, can be described in terms of an appropriate planning domain encapsulating the timed action patterns and constraints. The timed-word recognition problem is then posed as a planning problem where the goal is to reach a final state through a sequence of actions which correspond to the timed symbols labelling the automaton transitions. The transformation is proved to be correct and complete, and it is linear in space and time in the size of the automaton. Experimental results show that the performance of the planning domain obtained by the transformation is scalable for real-world applications. A major advantage of the planning-based approach, besides solving the parsing problem, is that plan recognition, plan synthesis and plan optimisation can all be represented in a single automated reasoning framework.
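To make the encoding concrete, here is a minimal, hypothetical sketch of how a single timed transition could be rendered as a numeric-fluent planning action; the names, the clock-reset convention and the dictionary encoding are illustrative assumptions rather than the paper's actual domain definition.

```python
from dataclasses import dataclass

@dataclass
class TimedTransition:
    """One transition of a timed automaton: reading `symbol` in state `source`
    moves to `target`, and is allowed only while the clock lies in [t_min, t_max]."""
    source: str
    target: str
    symbol: str
    t_min: float
    t_max: float

def transition_to_action(tr: TimedTransition) -> dict:
    """Encode the transition as a planning action over the fluents `at`
    (current automaton state) and `clock` (advanced by separate
    time-passing actions).  Resetting the clock on firing is an assumption."""
    return {
        "name": f"read-{tr.symbol}-from-{tr.source}",
        "preconditions": [f"at == {tr.source}",
                          f"clock >= {tr.t_min}",
                          f"clock <= {tr.t_max}"],
        "effects": [f"at := {tr.target}", "clock := 0"],
    }
```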
Abstract:
Coherent optical orthogonal frequency division multiplexing (CO-OFDM) is an attractive transmission technique to virtually eliminate intersymbol interference caused by chromatic dispersion and polarization-mode dispersion. Design, development, and operation of CO-OFDM systems require simple, efficient, and reliable methods of their performance evaluation. In this paper, we demonstrate an accurate bit error rate estimation method for QPSK CO-OFDM transmission based on the probability density function of the received QPSK symbols. By comparing with other known approaches, including data-aided and nondata-aided error vector magnitude, we show that the proposed method offers the most accurate estimate of the system performance for both single channel and wavelength division multiplexing QPSK CO-OFDM transmission systems. © 2014 IEEE.
Abstract:
* Supported by projects CCG08-UAM TIC-4425-2009 and TEC2007-68065-C03-02
Abstract:
2000 Mathematics Subject Classification: 35E45
Abstract:
A partition of a positive integer n is a way of writing it as the sum of positive integers without regard to order; the summands are called parts. The number of partitions of n, usually denoted by p(n), is determined asymptotically by the famous partition formula of Hardy and Ramanujan [5]. We shall introduce the uniform probability measure P on the set of all partitions of n assuming that the probability 1/p(n) is assigned to each n-partition. The symbols E and Var will be further used to denote the expectation and variance with respect to the measure P. Thus, each conceivable numerical characteristic of the parts in a partition can be regarded as a random variable.
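For reference, the Hardy and Ramanujan asymptotic formula referred to above is

```latex
p(n) \sim \frac{1}{4n\sqrt{3}} \exp\!\left(\pi\sqrt{\frac{2n}{3}}\right),
\qquad n \to \infty .
```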
Abstract:
We demonstrate an accurate BER estimation method for QPSK CO-OFDM transmission based on the probability density function of the received QPSK symbols. Using a 112 Gb/s QPSK CO-OFDM transmission as an example, we show that this method offers the most accurate estimate of the system's performance in comparison with other known approaches.
Abstract:
We experimentally demonstrate, for the first time, an effective multiplier-free blind phase noise estimation technique for CO-OFDM systems, based on the statistical properties of the received symbols' phases. Our technique operates in polar coordinates, providing very low implementation complexity.
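As a rough illustration of a polar-coordinate common-phase-error estimator in this spirit (not necessarily the authors' exact algorithm), one can fold each received phase modulo π/2 to strip the QPSK modulation and average the deviation from the folded ideal point; in hardware the angles would typically come from a CORDIC stage, which is what makes such estimators multiplier-free.

```python
import numpy as np

def estimate_cpe_polar(rx_symbols):
    """Estimate the common phase error of one OFDM symbol from the phases
    of its QPSK subcarriers, working entirely in polar coordinates.

    Folding each phase modulo pi/2 removes the data modulation (ideal
    points all fold onto pi/4), so the mean deviation from pi/4 is the
    common phase rotation.  Assumes the residual phase error plus noise
    stays within +/- pi/4, otherwise the fold wraps.
    """
    phases = np.angle(rx_symbols)        # angle extraction (CORDIC in hardware)
    folded = np.mod(phases, np.pi / 2)   # strip the four-fold QPSK symmetry
    return np.mean(folded - np.pi / 4)   # small-error CPE estimate
```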
Abstract:
We propose a Wiener-Hammerstein (W-H) channel estimation algorithm for Long-Term Evolution (LTE) systems. The LTE standard provides known data as pilot symbols and exploits them through coherent detection to improve system performance. These pilots are placed in a hybrid pattern so as to cover both the time and frequency domains. Our aim is to adapt the W-H equalizer (W-H/E) to the LTE standard to compensate for both the linear and nonlinear effects induced by power amplifiers and multipath channels. We evaluate the performance of the W-H/E for a downlink LTE system in terms of BLER, EVM and throughput versus SNR. Afterwards, we compare the results with a traditional least-mean-squares (LMS) equalizer. It is shown that the W-H/E can significantly reduce both linear and nonlinear distortions compared to LMS and improve LTE downlink system performance.
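For context, the baseline against which the W-H/E is compared is a standard pilot-driven LMS equalizer; a minimal sketch of such an equalizer (not the paper's implementation) is given below.

```python
import numpy as np

def lms_equalizer(x, pilots, num_taps=11, mu=0.01):
    """Complex LMS equalizer trained on known pilot symbols.

    x      : received baseband samples
    pilots : transmitted symbols aligned with x (known at the receiver)
    Returns the equalized output and the final tap weights.
    """
    w = np.zeros(num_taps, dtype=complex)
    y = np.zeros(len(x), dtype=complex)
    for n in range(num_taps, len(x)):
        xn = x[n - num_taps:n][::-1]      # most recent samples first
        y[n] = np.vdot(w, xn)             # filter output, w^H x
        e = pilots[n] - y[n]              # error against the known pilot
        w += mu * np.conj(e) * xn         # stochastic-gradient tap update
    return y, w
```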