888 results for Input and outputs


Relevance: 80.00%

Publisher:

Abstract:

Studies in Northern Europe and North America have found that reservoirs are typically mercury-sensitive ecosystems, and methylmercury contamination of fish caused by newly built reservoirs has drawn close attention from scientists, while research in this area in China remains comparatively weak. This thesis selected six reservoirs in the Wujiang River basin as study sites and, based on age, divided them into three evolutionary stages: Hongjiadu, Yinzidu, and Suofengying reservoirs in the early stage; Puding and Dongfeng reservoirs in the intermediate stage; and Wujiangdu reservoir in the advanced stage. The input and output fluxes of total mercury and methylmercury for these six reservoirs were studied, and the "source/sink" roles of reservoirs at different evolutionary stages were explored. The main research contents were: (1) the distribution of mercury in the inflow and outflow rivers of each reservoir; (2) mercury concentrations and deposition fluxes in precipitation; and (3) the input and output fluxes of mercury for the different reservoirs of the Wujiang basin. The main conclusions are as follows:
1. The annual mean concentrations of total mercury, particulate mercury, dissolved mercury, reactive mercury, total methylmercury, and dissolved methylmercury in the rivers of the Wujiang basin were 3.41±1.98, 2.05±1.73, 1.36±0.44, 0.24±0.11, 0.15±0.06, and 0.08±0.03 ng•L-1, respectively. Compared with other rivers in China and abroad, total mercury was clearly lower than in polluted rivers abroad and slightly higher than in unpolluted ones; dissolved mercury, reactive mercury, and methylmercury were slightly lower than in polluted rivers and roughly comparable to unpolluted rivers. Compared with the inflow and outflow rivers of Aha, Hongfeng, and Baihua lakes, also in the Guizhou karst region, the concentrations of total mercury, dissolved mercury, reactive mercury, methylmercury, and dissolved methylmercury were all clearly lower.
2. Reservoir construction significantly lowered the concentrations of total mercury and particulate mercury in outflow rivers, raised total methylmercury and dissolved methylmercury, and increased the proportions of dissolved mercury, reactive mercury, and total methylmercury relative to total mercury in outflow rivers. The longitudinal distributions of the different mercury species show that the construction of the cascade reservoirs has altered the river's original mercury biogeochemistry, raising methylmercury in several reaches of the Wujiang River; as the reservoir ecosystems continue to evolve, the methylmercury exported by the reservoirs will increase, and methylmercury in downstream river water is likely to keep rising.
3. The concentrations of total mercury, dissolved mercury, particulate mercury, reactive mercury, and total methylmercury in precipitation were 7.49~149, 1.23~10.0, 5.76~142, 0.56~2.94, and 0.08~0.82 ng•L-1, respectively, dominated by particulate mercury, which accounted for about 87% of total mercury. Total mercury, dissolved mercury, particulate mercury, and methylmercury showed clear seasonal trends, higher in winter and spring than in summer and autumn, with no obvious spatial pattern. In 2006 the annual wet deposition fluxes of total mercury and methylmercury were 34.7±5.80 and 0.18±0.03 µg•m-2•yr-1, controlled mainly by rainfall amount. Total mercury concentrations and wet deposition fluxes in the Wujiang basin were far higher than in North America and Japan but lower than in some urban areas of China (such as Changchun and Beijing), while methylmercury concentrations and fluxes were comparable to other regions.
4. Across the reservoirs of the Wujiang basin, the fluxes of total mercury and methylmercury delivered by rainfall were governed mainly by rainfall amount and reservoir surface area, with no correlation to mercury concentrations in the rain. Riverine inputs of total mercury were controlled mainly by river discharge, whereas inputs of methylmercury and particulate matter depended on both discharge and concentration. Outputs of total mercury, methylmercury, and particulate matter in the discharged water depended on concentration and flow. Because the ratio of catchment area to water-surface area is large, inputs of water, total mercury, methylmercury, and particulate matter were dominated by the rivers, accounting for 87%, 80%, 85%, and 86% of total inputs, respectively. Outputs were dominated by dam discharge, which carried 80%, 77%, 86%, and 79% of the total outputs of water, total mercury, methylmercury, and particulate matter, respectively.
5. The input-output flux results show that every reservoir acted as a "sink" for river-borne particulate matter; all reservoirs except Wujiangdu acted as "sinks" for total mercury; and for methylmercury, Yinzidu, Hongjiadu, and Suofengying reservoirs acted as "sinks," whereas Puding, Dongfeng, and Wujiangdu acted as "sources."
6. The storage rates of total mercury in Puding and Hongjiadu reservoirs were 56% and 57%, clearly higher than in the other reservoirs, indicating that the presence of an upstream reservoir weakens a reservoir's role as a total-mercury "sink." The net methylmercury fluxes of Puding, Dongfeng, and Wujiangdu reservoirs were +69.4, +368, and +857 g•yr-1, with conversion rates of 13%, 73%, and 84%, indicating that the net flux and conversion rate of methylmercury are related to a reservoir's evolutionary stage, increasing as the stage advances; as a reservoir continues to evolve, methylmercury shifts from "sink" to "source."
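The sink/source bookkeeping behind conclusions 4-6 is a simple mass balance over measured fluxes. A minimal sketch (all names and numbers below are hypothetical; the thesis reports measured values):

```python
# Illustrative input-output mercury budget for a reservoir.
def net_flux(river_in, rain_in, discharge_out, other_out=0.0):
    """Net flux (g/yr): positive -> the reservoir is a source, negative -> a sink."""
    return (discharge_out + other_out) - (river_in + rain_in)

def role(net):
    return "source" if net > 0 else "sink"

# Hypothetical methylmercury budget (g/yr) for an advanced-stage reservoir
inputs_river, inputs_rain = 800.0, 150.0
outputs_discharge = 1500.0
net = net_flux(inputs_river, inputs_rain, outputs_discharge)
print(f"net MeHg flux: {net:+.1f} g/yr -> {role(net)}")  # +550.0 g/yr -> source
```

With the thesis's measured fluxes substituted, the same arithmetic yields the sink/source classifications of conclusion 5.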

Relevance: 80.00%

Publisher:

Abstract:

Handheld communication devices are developing rapidly, gaining users and spreading into new application fields, and have a promising future. This study investigated the acceptance of a multimodal text entry method and the behavioral characteristics of its use. Building on the general information-processing model of a bimodal system and human-factors studies of multimodal map systems, the present study focused on a hand-speech bimodal text entry method. For acceptance, the study investigated the subjective perception of speech recognition accuracy using a Wizard of Oz (WOz) experiment and a questionnaire. Results showed a linear relationship between speech recognition accuracy and subjective accuracy. Furthermore, as familiarity increased, the difference between acceptable accuracy and subjective accuracy gradually decreased. In addition, the semantic similarity between the speech recognition output and the correct sentence was an important referential criterion. The second study investigated three aspects of the bimodal text entry method: input, error recovery, and modality shifts. The first experiment examined users' behavioral characteristics during error recovery. Results indicated that participants preferred to correct errors by handwriting, regardless of the input modality. The second experiment examined users' behavioral characteristics when entering various types of text. Results showed that users preferred speech input for both words and sentences, a preference that was highly consistent across individuals, while no significant difference was found between handwriting and speech input for single characters. Participants used the direct strategy more than the jumping strategy to deal with mixed text, especially Chinese-English mixed text. The third experiment examined the cognitive load of different modality shifts; results suggested significant differences between shifts. Moreover, relatively little time was needed to shift from speech input to hand input. Based on the main findings, the implications are as follows. First, when evaluating a speech recognition system, attention should be paid to the fact that speech recognition accuracy is not equal to subjective accuracy. Second, to make a speech input system more acceptable, a good method is to train users and provide feedback on accuracy during training, which improves familiarity with and sensitivity to the system. Third, both universal and individual behavioral patterns should be considered when improving the error recovery method. Fourth, to ease the learning and use of speech input, its operations should be simpler. Fifth, a more convenient input method for non-Chinese text entry should be provided. Finally, the shifting time between hand input and speech input provides an important parameter for the design of automatically invoked speech recognition systems.

Relevance: 80.00%

Publisher:

Abstract:

We first pose the following problem: to develop a program which takes line-drawings as input and constructs three-dimensional objects as output, such that the output objects are the same as the ones we see when we look at the input line-drawing. We then introduce the principle of minimum standard-deviation of angles (MSDA) and discuss a program based on MSDA. We present the results of testing this program with a variety of line-drawings and show that the program constitutes a solution to the stated problem over the range of line-drawings tested. Finally, we relate this work to its historical antecedents in the psychological and computer-vision literature.
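The MSDA principle can be illustrated with a toy reconstruction: assign a depth to each 2D vertex and search for depths that minimize the standard deviation of the angles at junctions. Everything below is hypothetical (a single Y-junction and a crude random search standing in for the original optimizer):

```python
import numpy as np

# Toy MSDA sketch: lift 2D line-drawing vertices to 3D by choosing vertex
# depths that minimize the standard deviation of the junction angles.
verts2d = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9], [0.5, 0.3]])
edges = [(0, 3), (1, 3), (2, 3)]               # a simple Y-junction

def angles(z):
    p = np.column_stack([verts2d, z])          # 3D points under candidate depths
    out = []
    for i, (a, b) in enumerate(edges):         # every pair of edges sharing a
        for c, d in edges[i + 1:]:             # vertex defines one angle
            shared = set((a, b)) & set((c, d))
            if not shared:
                continue
            s = shared.pop()
            u = p[b if a == s else a] - p[s]
            v = p[d if c == s else c] - p[s]
            cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            out.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return np.array(out)

def msda(z):
    return angles(z).std()                     # objective: spread of angles

rng = np.random.default_rng(0)
best_z, best = np.zeros(4), msda(np.zeros(4))
for _ in range(2000):                          # crude random search stands in
    z = best_z + rng.normal(0.0, 0.1, 4)       # for a proper optimizer
    if msda(z) < best:
        best_z, best = z, msda(z)
print(f"angle std: {msda(np.zeros(4)):.4f} -> {best:.4f}")
```

The optimizer tilts the three edges out of the picture plane until the junction angles equalize, which is the 3D interpretation MSDA favors.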

Relevance: 80.00%

Publisher:

Abstract:

This paper explores automating the qualitative analysis of physical systems. It describes a program, called PLR, that takes parameterized ordinary differential equations as input and produces a qualitative description of the solutions for all initial values. PLR approximates intractable nonlinear systems with piecewise linear ones, analyzes the approximations, and draws conclusions about the original systems. It chooses approximations that are accurate enough to reproduce the essential properties of their nonlinear prototypes, yet simple enough to be analyzed completely and efficiently. It derives additional properties, such as boundedness or periodicity, by theoretical methods. I demonstrate PLR on several common nonlinear systems and on published examples from mechanical engineering.
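PLR's core move, replacing a nonlinear right-hand side with piecewise-linear chords that can each be analyzed exactly, can be sketched in a few lines. The example below is illustrative only (dx/dt = sin(x) as the nonlinear prototype), not the actual program:

```python
import numpy as np

f = np.sin                                   # nonlinear prototype: dx/dt = sin(x)
knots = np.linspace(-np.pi, np.pi, 9)        # breakpoints of the approximation

def pieces(f, knots):
    """(slope, intercept, lo, hi) of each linear chord of f between knots."""
    out = []
    for lo, hi in zip(knots[:-1], knots[1:]):
        a = (f(hi) - f(lo)) / (hi - lo)
        b = f(lo) - a * lo
        out.append((a, b, lo, hi))
    return out

# Each linear piece dx/dt = a*x + b is solvable exactly: a fixed point x* in
# the piece is stable iff a < 0 and unstable iff a > 0.
fps = {}
for a, b, lo, hi in pieces(f, knots):
    if a == 0:
        continue
    x_star = -b / a
    if lo - 1e-9 <= x_star <= hi + 1e-9:     # tolerance for boundary roots
        fps[round(x_star, 6)] = "stable" if a < 0 else "unstable"
for x_star, kind in sorted(fps.items()):
    print(f"fixed point at x = {x_star:+.3f} ({kind})")
```

The piecewise-linear analysis recovers the qualitative picture of the prototype: an unstable fixed point at 0 and stable ones at ±π.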

Relevance: 80.00%

Publisher:

Abstract:

A fundamental problem in artificial intelligence is obtaining coherent behavior in rule-based problem solving systems. A good quantitative measure of coherence is time behavior; a system that never, in retrospect, applied a rule needlessly is certainly coherent; a system suffering from combinatorial blowup is certainly behaving incoherently. This report describes a rule-based problem solving system for automatically writing and improving numerical computer programs from specifications. The specifications are in terms of "constraints" among inputs and outputs. The system has solved program synthesis problems involving systems of equations, determining that methods of successive approximation converge, transforming recursion to iteration, and manipulating power series (using differing organizations, control structures, and argument-passing techniques).
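One of the synthesis problems mentioned, determining that a method of successive approximation converges, rests on the classical contraction criterion: iteration x ← g(x) converges near a fixed point x* when |g'(x*)| < 1. A small hand-written illustration of that criterion (not the report's system) using Heron's square-root iteration, for which g'(√a) = 0:

```python
def heron_sqrt(a, x=1.0, tol=1e-12, max_iter=100):
    """Square root by successive approximation (Heron's method).

    g(x) = (x + a/x)/2 has g'(sqrt(a)) = 0 < 1, so the iteration converges.
    """
    for _ in range(max_iter):
        nxt = 0.5 * (x + a / x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise ArithmeticError("did not converge")

print(heron_sqrt(2.0))   # converges to sqrt(2) in a handful of iterations
```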

Relevance: 80.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to outline the unique learning experiences that virtual/e-internships can offer small and medium-sized enterprises and start-up organizations. Design/methodology/approach – We interviewed 18 experts on e-internships (interns and managers of internships) across several countries to learn more about the learning experiences for both organizations and interns. The information from these interviews was also used to formulate a number of recommendations. Findings – The interviews provided insights into how e-internships can provide development opportunities for interns, managers and staff within these organizations. One important benefit pertains to the skill development of both interns and managers. The interns gain unique working experience that also benefits the organizations in terms of their creativity, input and feedback. In return, managers get a unique learning experience that helps them expand their project management, interpersonal and mentoring skills. Practical implications – We outline a number of recommendations that consider skill development, the benefit of diversity in numerous forms, as well as mutual benefits for enterprises and start-ups. Originality/value – The discussion of the various benefits, and of the conditions under which virtual internships will succeed in organizations, provides practitioners with insight into the organizational opportunities available to them given the right investment in e-interns and internship schemes.

Relevance: 80.00%

Publisher:

Abstract:

Many real-world image analysis problems, such as face recognition and hand pose estimation, involve recognizing a large number of classes of objects or shapes. Large margin methods, such as AdaBoost and Support Vector Machines (SVMs), often provide competitive accuracy rates, but at the cost of evaluating a large number of binary classifiers, thus making it difficult to apply such methods when thousands or millions of classes need to be recognized. This thesis proposes a filter-and-refine framework whereby, given a test pattern, a small number of candidate classes can be identified efficiently at the filter step, and computationally expensive large margin classifiers are used to evaluate these candidates at the refine step. Two different filtering methods are proposed, ClassMap and OVA-VS (One-vs.-All classification using Vector Search). ClassMap is an embedding-based method that works for both boosted classifiers and SVMs and tends to map the patterns and their associated classes close to each other in a vector space. OVA-VS maps OVA classifiers and test patterns to vectors based on the weights and outputs of weak classifiers of the boosting scheme. At runtime, finding the strongest-responding OVA classifier becomes a classical vector search problem, where well-known methods can be used to gain efficiency. In our experiments, the proposed methods achieve significant speed-ups, in some cases up to two orders of magnitude, compared to exhaustive evaluation of all OVA classifiers. This was achieved in hand pose recognition and face recognition systems where the number of classes ranges from 535 to 48,600.
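The OVA-VS reduction described above can be sketched as follows. All shapes and data are hypothetical; `W` stands in for learned boosted-classifier weights, and `h` for the weak-classifier outputs of a test pattern:

```python
import numpy as np

# Sketch: each one-vs-all boosted classifier is a weight vector over weak-
# classifier outputs, so the strongest-responding class is the arg-max dot
# product -- i.e., a vector search problem.
rng = np.random.default_rng(0)
n_classes, n_weak = 1000, 64
W = rng.normal(size=(n_classes, n_weak))    # one OVA weight vector per class

def classify(h):
    """h: vector of weak-classifier outputs for a test pattern."""
    scores = W @ h                           # responses of all OVA classifiers
    return int(np.argmax(scores))            # strongest responder wins

h = rng.normal(size=n_weak)
print("predicted class:", classify(h))
```

At large class counts, an approximate nearest-neighbor index would replace the exhaustive matrix product, which is where the reported speed-ups come from.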

Relevance: 80.00%

Publisher:

Abstract:

Spotting patterns of interest in an input signal is a very useful task in many different fields including medicine, bioinformatics, economics, speech recognition and computer vision. Example instances of this problem include spotting an object of interest in an image (e.g., a tumor), a pattern of interest in a time-varying signal (e.g., audio analysis), or an object of interest moving in a specific way (e.g., a human's body gesture). Traditional spotting methods, which are based on Dynamic Time Warping or hidden Markov models, use some variant of dynamic programming to register the pattern and the input while accounting for temporal variation between them. At the same time, those methods often suffer from several shortcomings: they may give meaningless solutions when input observations are unreliable or ambiguous, they require a high complexity search across the whole input signal, and they may give incorrect solutions if some patterns appear as smaller parts within other patterns. In this thesis, we develop a framework that addresses these three problems, and evaluate the framework's performance in spotting and recognizing hand gestures in video. The first contribution is a spatiotemporal matching algorithm that extends the dynamic programming formulation to accommodate multiple candidate hand detections in every video frame. The algorithm finds the best alignment between the gesture model and the input, and simultaneously locates the best candidate hand detection in every frame. This allows for a gesture to be recognized even when the hand location is highly ambiguous. The second contribution is a pruning method that uses model-specific classifiers to reject dynamic programming hypotheses with a poor match between the input and model. Pruning improves the efficiency of the spatiotemporal matching algorithm, and in some cases may improve the recognition accuracy. 
The pruning classifiers are learned from training data, and cross-validation is used to reduce the chance of overpruning. The third contribution is a subgesture reasoning process that models the fact that some gesture models can falsely match parts of other, longer gestures. By integrating subgesture reasoning, the spotting algorithm can avoid the premature detection of a subgesture when the longer gesture is actually being performed. Subgesture relations between pairs of gestures are automatically learned from training data. The performance of the approach is evaluated on two challenging video datasets: hand-signed digits gestured by users wearing short-sleeved shirts in front of a cluttered background, and American Sign Language (ASL) utterances gestured by ASL native signers. The experiments demonstrate that the proposed method is more accurate and efficient than competing approaches. The proposed approach can be generally applied to alignment or search problems with multiple input observations that use dynamic programming to find a solution.
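The first contribution's dynamic program, aligning a gesture model with the input while simultaneously choosing one candidate hand detection per frame, can be caricatured as follows (hypothetical cost model and toy data, not the thesis code):

```python
import numpy as np

# DP over (frame t, model state s, candidate k): align a gesture model of S
# states with T frames, each offering K candidate hand positions.
def spot(model, candidates):
    S, (T, K, _) = len(model), candidates.shape
    INF = np.inf
    D = np.full((T, S, K), INF)
    cost = lambda t, s, k: np.linalg.norm(candidates[t, k] - model[s])
    for k in range(K):
        D[0, 0, k] = cost(0, 0, k)
    for t in range(1, T):
        for s in range(S):
            for k in range(K):
                # stay in the same state or advance, from any prior candidate
                prev = min(D[t-1, s].min(), D[t-1, s-1].min() if s else INF)
                D[t, s, k] = cost(t, s, k) + prev
    return D[T-1, S-1].min()   # best alignment ending in the final state

model = np.array([[0.0, 0.0], [1.0, 1.0]])
cands = np.array([[[0.1, 0.0], [5.0, 5.0]],
                  [[0.9, 1.1], [4.0, 4.0]]])   # 2 frames x 2 candidates each
print(f"alignment cost: {spot(model, cands):.3f}")  # 0.241
```

Because the minimization runs over candidates as well as states, the gesture is recognized even though each frame contains a spurious detection far from the true hand.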

Relevance: 80.00%

Publisher:

Abstract:

How does the brain make decisions? Speed and accuracy of perceptual decisions covary with certainty in the input, and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by basal ganglia, simulates dynamic properties of decision-making in response to ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas that estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not propose the neocortical mechanisms that enable perception and make decisions. The present model explains behavioral and neurophysiological decision-making data without an appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction time properties, during both correct and error trials at different levels of input ambiguity in both fixed duration and reaction time tasks. Model MT/MST interactions compute the global direction of random dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.
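The "recurrent competitive network with self-normalizing choice properties" belongs to the shunting on-center off-surround family. A toy sketch of that self-normalization (illustrative parameters and dynamics only, not the paper's model):

```python
import numpy as np

# Two "choice" cells driven by motion evidence I_i, with shunting competition:
# dx_i/dt = -A x_i + (B - x_i) I_i - x_i * (sum_j I_j - I_i)
A, B, dt = 1.0, 1.0, 0.01

def run(inputs, steps=2000):
    I = np.asarray(inputs, dtype=float)
    x = np.zeros(len(I))
    for _ in range(steps):
        x += dt * (-A * x + (B - x) * I - x * (I.sum() - I))
    return x

x = run([3.0, 1.0])        # 75% vs 25% of the evidence for the two choices
print(np.round(x, 3), "ratio:", round(x[0] / x[1], 2))
```

At equilibrium x_i = B·I_i/(A + ΣI_j): total activity stays bounded regardless of input strength, while the ratio of activities preserves the ratio of the evidence, which is the self-normalizing choice property.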

Relevance: 80.00%

Publisher:

Abstract:

Grid cells in the dorsal segment of the medial entorhinal cortex (dMEC) show remarkable hexagonal activity patterns, at multiple spatial scales, during spatial navigation. How these hexagonal patterns arise has excited intense interest. It has previously been shown how a self-organizing map can convert firing patterns across entorhinal grid cells into hippocampal place cells that are capable of representing much larger spatial scales. Can grid cell firing fields also arise during navigation through learning within a self-organizing map? A neural model is proposed that converts path integration signals into hexagonal grid cell patterns of multiple scales. This GRID model creates only grid cell patterns with the observed hexagonal structure, predicts how these hexagonal patterns can be learned from experience, and can process biologically plausible neural input and output signals during navigation. These results support a unified computational framework for explaining how entorhinal-hippocampal interactions support spatial navigation.

Relevance: 80.00%

Publisher:

Abstract:

Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART and supervised fuzzy ARTMAP synthesize fuzzy logic and ART networks by exploiting the formal similarity between the computations of fuzzy subsethood and the dynamics of ART category choice, search, and learning. Fuzzy ART self-organizes stable recognition categories in response to arbitrary sequences of analog or binary input patterns. It generalizes the binary ART 1 model, replacing the set-theoretic intersection (∩) with the fuzzy intersection (∧), or component-wise minimum. A normalization procedure called complement coding leads to a symmetric theory in which the fuzzy intersection and the fuzzy union (∨), or component-wise maximum, play complementary roles. Complement coding preserves individual feature amplitudes while normalizing the input vector, and prevents a potential category proliferation problem. Adaptive weights start equal to one and can only decrease in time. A geometric interpretation of fuzzy ART represents each category as a box that increases in size as weights decrease. A matching criterion controls search, determining how close an input and a learned representation must be for a category to accept the input as a new exemplar. A vigilance parameter (ρ) sets the matching criterion and determines how finely or coarsely an ART system will partition inputs. High vigilance creates fine categories, represented by small boxes. Learning stops when boxes cover the input space. With fast learning, fixed vigilance, and an arbitrary input set, learning stabilizes after just one presentation of each input. A fast-commit slow-recode option allows rapid learning of rare events yet buffers memories against recoding by noisy inputs. Fuzzy ARTMAP unites two fuzzy ART networks to solve supervised learning and prediction problems.
A Minimax Learning Rule controls ARTMAP category structure, conjointly minimizing predictive error and maximizing code compression. Low vigilance maximizes compression but may therefore cause very different inputs to make the same prediction. When this coarse grouping strategy causes a predictive error, an internal match tracking control process increases vigilance just enough to correct the error. ARTMAP automatically constructs a minimal number of recognition categories, or "hidden units," to meet accuracy criteria. An ARTMAP voting strategy improves prediction by training the system several times using different orderings of the input set. Voting assigns confidence estimates to competing predictions given small, noisy, or incomplete training sets. ARPA benchmark simulations illustrate fuzzy ARTMAP dynamics. The chapter also compares fuzzy ARTMAP to Salzberg's Nested Generalized Exemplar (NGE) and to Simpson's Fuzzy Min-Max Classifier (FMMC); and concludes with a summary of ART and ARTMAP applications.
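The fuzzy ART computations the chapter summarizes (complement coding, the choice function, the vigilance match, fast learning) fit in a few lines. The sketch below uses illustrative parameter values and data, not any published simulation:

```python
import numpy as np

alpha, rho, beta = 0.001, 0.6, 1.0           # choice, vigilance, learning rate

def complement_code(a):
    return np.concatenate([a, 1.0 - a])      # normalizes while keeping amplitudes

def fuzzy_art_step(x, W):
    """One presentation: x is complement-coded; W is the list of categories."""
    order = sorted(range(len(W)), key=lambda j:            # choice function T_j
                   -np.minimum(x, W[j]).sum() / (alpha + W[j].sum()))
    for j in order:
        if np.minimum(x, W[j]).sum() / x.sum() >= rho:     # vigilance match
            W[j] = beta * np.minimum(x, W[j]) + (1 - beta) * W[j]  # fast learning
            return j
    W.append(x.copy())                       # no resonance: commit a new category
    return len(W) - 1

W, cats = [], []
for a in [np.array([0.2, 0.8]), np.array([0.25, 0.75]), np.array([0.9, 0.1])]:
    cats.append(fuzzy_art_step(complement_code(a), W))
print("categories chosen:", cats)            # similar inputs share a category
```

The two similar inputs resonate with one category box while the dissimilar input fails the vigilance test and commits a new one; raising ρ would split the first two as well.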

Relevance: 80.00%

Publisher:

Abstract:

This work considers the effect of hardware constraints that typically arise in practical power-aware wireless sensor network systems. A rigorous methodology is presented that quantifies the effect of output power limit and quantization constraints on bit error rate performance. The approach uses a novel, intuitively appealing means of addressing the output power constraint, wherein the attendant saturation block is mapped from the output of the plant to its input and compensation is then achieved using a robust anti-windup scheme. A priori levels of system performance are attained using a quantitative feedback theory approach on the initial, linear stage of the design paradigm. This hybrid design is assessed experimentally using a fully compliant 802.15.4 testbed where mobility is introduced through the use of autonomous robots. A benchmark comparison between the new approach and a number of existing strategies is also presented.
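The idea of mapping the output-power saturation to the controller and compensating with anti-windup can be illustrated with the simplest back-calculation scheme (illustrative gains and plantless loop, not the paper's QFT design):

```python
# PI controller with back-calculation anti-windup under an output power limit.
def step(ctrl, err, dt=0.01, kp=1.0, ki=5.0, kt=1.0, u_max=1.0):
    u_unsat = kp * err + ki * ctrl["i"]
    u = max(-u_max, min(u_max, u_unsat))     # saturation (output power limit)
    # bleed the integrator by the saturation excess so it cannot wind up
    ctrl["i"] += dt * (err + kt * (u - u_unsat))
    return u

ctrl = {"i": 0.0}
for _ in range(100):                         # persistent error saturates u,
    u = step(ctrl, err=2.0)                  # yet the integrator stays bounded
print(f"u = {u:.2f}, integrator = {ctrl['i']:.3f}")
```

Without the back-calculation term the integrator would grow without bound during saturation and overshoot badly once the error changed sign; with it, the integrator settles at a finite value.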

Relevance: 80.00%

Publisher:

Abstract:

The image on the retina may move because the eyes move, or because something in the visual scene moves. The brain is not fooled by this ambiguity. Even as we make saccades, we are able to detect whether visual objects remain stable or move. Here we test whether this ability to assess visual stability across saccades is present at the single-neuron level in the frontal eye field (FEF), an area that receives both visual input and information about imminent saccades. Our hypothesis was that neurons in the FEF report whether a visual stimulus remains stable or moves as a saccade is made. Monkeys made saccades in the presence of a visual stimulus outside of the receptive field. In some trials, the stimulus remained stable, but in other trials, it moved during the saccade. In every trial, the stimulus occupied the center of the receptive field after the saccade, thus evoking a reafferent visual response. We found that many FEF neurons signaled, in the strength and timing of their reafferent response, whether the stimulus had remained stable or moved. Reafferent responses were tuned for the amount of stimulus translation, and, in accordance with human psychophysics, tuning was better (more prevalent, stronger, and quicker) for stimuli that moved perpendicular, rather than parallel, to the saccade. Tuning was sometimes present as well for nonspatial transaccadic changes (in color, size, or both). Our results indicate that FEF neurons evaluate visual stability during saccades and may be general purpose detectors of transaccadic visual change.

Relevance: 80.00%

Publisher:

Abstract:

While the number of traditional laptops and computers sold has dipped slightly year over year, manufacturers have developed new hybrid laptops with touch screens to build on the tactile trend. This market is moving quickly to make touch the rule rather than the exception, and sales of these devices have tripled since the launch of Windows 8 in 2012, to reach more than sixty million units sold in 2015. Unlike tablets, which benefit from easy-to-use applications specially designed for tactile interactions, hybrid laptops are intended to be used with regular user interfaces. Hence, one could ask whether tactile interactions are suited for every task and activity performed with such interfaces. Since hybrid laptops are increasingly used in educational settings, this study focuses on information search tasks, which are commonly performed for learning purposes. It is hypothesized that tasks requiring complex and/or less common gestures will increase users' cognitive load and impair task performance in terms of efficacy and efficiency. A study was carried out in a usability laboratory with 30 participants whose prior experience with tactile devices was controlled. They were asked to perform information search tasks on an online encyclopaedia using only the touch screen of a hybrid laptop. Tasks were selected with respect to their level of cognitive demand (amount of information that had to be maintained in working memory) and the complexity of gestures needed (left and/or right clicks, zoom, text selection and/or input), and grouped into four sets accordingly. Task performance was measured by the number of tasks succeeded (efficacy) and time spent on each task (efficiency). Perceived cognitive load was assessed with a questionnaire given after each set of tasks. An eye tracking device was used to monitor users' attention allocation and to provide objective cognitive load measures based on pupil dilation and the Index of Cognitive Activity. Each experimental run took approximately one hour. The results of this within-subjects design indicate that tasks involving complex gestures led to lower efficacy, especially when the tasks were cognitively demanding. Regarding efficiency, there were no significant differences between sets of tasks, except that tasks with low cognitive demand and complex gestures required more time to complete. Surprisingly, users who reported the most experience with tactile devices spent more time than less frequent users. Cognitive load measures indicate that participants reported devoting more mental effort to the interaction when they had to use complex gestures.

Relevance: 80.00%

Publisher:

Abstract:

The U&I programme's Critical Friends (CFs) are developing guidelines on the role of the Critical Friend and the way in which it links with the U&I programme model, projects and outputs. The Critical Friends are also in the process of building a new online community of shared effective practice for current and future Critical Friends. The CF Benefits Realisation project aims to synthesise existing CF U&I, JISC Curriculum Design and Delivery, JISC Institutional Innovation and related programmes, activities, methodologies and approaches, in order to produce a range of specialist guidelines and other outputs for effective CF practice within the context of the aims and objectives of the JISC U&I programme. We aim to disseminate these, following consultation, to a wide range of interests within the JISC HE-FE communities.