918 results for Chinese information processing
Abstract:
The aim of this study was to identify and describe the types of errors in clinical reasoning that contribute to poor diagnostic performance at different levels of medical training and experience. Three cohorts of subjects, second- and fourth- (final) year medical students and a group of general practitioners, completed a set of clinical reasoning problems. The responses of those whose scores fell below the 25th centile were analysed to establish the stage of the clinical reasoning process - identification of relevant information, interpretation or hypothesis generation - at which most errors occurred and whether this was dependent on problem difficulty and level of medical experience. Results indicate that hypothesis errors decrease as expertise increases but that identification and interpretation errors increase. This may be due to inappropriate use of pattern recognition or to failure of the knowledge base. Furthermore, although hypothesis errors increased in line with problem difficulty, identification and interpretation errors decreased. A possible explanation is that as problem difficulty increases, subjects at all levels of expertise are less able to differentiate between relevant and irrelevant clinical features and so give equal consideration to all information contained within a case. It is concluded that the development of clinical reasoning in medical students throughout the course of their pre-clinical and clinical education may be enhanced by both an analysis of the clinical reasoning process and a specific focus on each of the stages at which errors commonly occur.
Abstract:
This paper describes the 3D Water Chemistry Atlas - an open source, Web-based system that enables three-dimensional (3D) sub-surface visualization of groundwater monitoring data, overlaid on the local geological model. Following a review of existing technologies, the system adopts Cesium (an open source Web-based 3D mapping and visualization interface) together with a PostgreSQL/PostGIS database for the technical architecture. In addition, a range of search, filtering, browsing and analysis tools was developed that enables users to interactively explore the groundwater monitoring data and interpret it spatially and temporally relative to the local geological formations and aquifers via the Cesium interface. The result is an integrated 3D visualization system that enables environmental managers and regulators to assess groundwater conditions, identify inconsistencies in the data, manage impacts and risks, and make more informed decisions about activities such as coal seam gas extraction, wastewater extraction and re-use.
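A minimal sketch of the kind of back-end query such an architecture implies, assuming a hypothetical PostGIS schema (the table `water_samples` and its columns are illustrative, not taken from the Atlas itself); the results would be handed to a Cesium front end for 3D display.

```python
# Hypothetical schema: fetch groundwater samples from a PostgreSQL/PostGIS
# store for display in a Cesium-based 3D viewer. Table and column names
# (water_samples, geom, depth_m, analyte, value) are assumptions.
import json
import psycopg2

def fetch_samples(bbox, max_depth_m):
    """Return samples inside a lon/lat bounding box and above a depth limit."""
    conn = psycopg2.connect("dbname=atlas user=reader")
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT ST_X(geom), ST_Y(geom), depth_m, analyte, value
            FROM water_samples
            WHERE ST_Intersects(geom, ST_MakeEnvelope(%s, %s, %s, %s, 4326))
              AND depth_m <= %s
            """,
            (*bbox, max_depth_m),
        )
        rows = cur.fetchall()
    # Package as simple records a Cesium front end could ingest.
    return [
        {"lon": lon, "lat": lat, "depth_m": d, "analyte": a, "value": v}
        for lon, lat, d, a, v in rows
    ]

if __name__ == "__main__":
    print(json.dumps(fetch_samples((152.0, -28.0, 153.0, -27.0), 200.0), indent=2))
```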
Abstract:
Given two simple polygons, the Minimal Vertex Nested Polygon Problem is that of finding a polygon, nested between the given polygons, that has the minimum number of vertices. In this paper, we suggest efficient approximation algorithms for interesting special cases of this problem, using shortest-path graph algorithms.
Abstract:
The starting point of this thesis is the notion that in order for organisations to understand what customers value and how customers experience service, they need to learn about customers. The first and perhaps most important link in an organisation-wide learning process directed at customers is the frontline contact person. Service- and sales organisations can only learn about customers if the individual frontline contact persons learn about customers. Even though it is commonly recognised that learning about customers is the basis for an organisation’s success, few contributions within marketing investigate the fundamental nature of the phenomenon as it occurs in everyday customer service. Thus, what learning about customers is and how it takes place in a customer-service setting is an issue that is neglected in marketing research. In order to explore these questions, this thesis presents a socio-cultural approach to understanding learning about customers. Hence, instead of considering learning equal to cognitive processes in the mind of the frontline contact person or learning as equal to organisational information processing, the interactive, communication-based, socio-cultural aspect of learning about customers is brought to the fore. Consequently, the theoretical basis of the study can be found both in socio-cultural and practice-oriented lines of reasoning, as well as in the fields of service- and relationship marketing. As it is argued that learning about customers is an integrated part of everyday practices, it is also clear that it should be studied in a naturalistic and holistic way as it occurs in a customer-service setting. This calls for an ethnographic research approach, which involves direct, first-hand experience of the research setting during an extended period of time. Hence, the empirical study employs participant observations, informal discussions and interviews among car salespersons and service advisors at a car retailing company. Finally, as a synthesis of theoretically and empirically gained understanding, a set of concepts are developed and they are integrated into a socio-cultural model of learning about customers.
Abstract:
Biological motion has successfully been used for the analysis of a person's mood and other psychological traits. Efforts are being made to use human gait as a non-invasive biometric. In this work, we study the effectiveness of biological gait motion as a cue for biometric person recognition. The data is 3D in nature and hence carries more information than cues obtained from video-based gait patterns. The high person recognition accuracies obtained using a simple linear model of data representation and simple neighborhood-based classifiers suggest that it is the nature of the data, more than the recognition scheme employed, that matters.
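A minimal sketch of the kind of pipeline described: a simple linear model of the motion data (PCA is used here as an assumed stand-in) followed by a nearest-neighbour classifier. The gait feature vectors and labels below are random placeholders; extracting features from real 3D gait captures is assumed to happen elsewhere.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 90))      # placeholder gait feature vectors
y = rng.integers(0, 10, size=200)   # placeholder subject labels

# Linear model of the data (PCA) + 1-nearest-neighbour person recognition.
model = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=1))
scores = cross_val_score(model, X, y, cv=5)
print("mean recognition accuracy:", scores.mean())
```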
Abstract:
Separation of printed text blocks from non-text areas containing signatures, handwritten text, logos and other such symbols is a necessary first step for an OCR system for printed text recognition. In the present work, we compare the efficacy of several feature-classifier combinations for this separation task. We have selected the length-normalized horizontal projection profile (HPP) as the starting point, under the assumption that printed text blocks contain lines of text which generate HPPs with some regularity; this assumption is demonstrated to be valid. Our features are the HPP and its two transformed versions, namely the eigen and Fisher profiles. Four well-known classifiers, namely nearest neighbor, linear discriminant function, SVMs and artificial neural networks, have been considered, and the efficiency of combining these classifiers with the above features is compared. A sequential floating feature selection technique has been adopted to enhance the efficiency of the separation task. The results give an average accuracy of about 96%.
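A minimal sketch of the feature described above, assuming binarized block images (text pixels = 1): compute a length-normalized HPP and separate text from non-text blocks with a nearest-neighbour rule. The training blocks and labels are placeholders.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def hpp(block, n_bins=64):
    """Row-wise ink counts, resampled to a fixed length and scaled to [0, 1]."""
    profile = block.sum(axis=1).astype(float)
    profile /= profile.max() + 1e-9
    # Resample so blocks of different heights yield comparable feature vectors.
    x_old = np.linspace(0, 1, len(profile))
    x_new = np.linspace(0, 1, n_bins)
    return np.interp(x_new, x_old, profile)

# Placeholder training data: labelled text / non-text blocks.
train_blocks = [np.random.randint(0, 2, (h, 200)) for h in (60, 80, 100, 120)]
train_labels = [1, 1, 0, 0]  # 1 = printed text, 0 = non-text

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit([hpp(b) for b in train_blocks], train_labels)
print(clf.predict([hpp(np.random.randint(0, 2, (90, 200)))]))
```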
Abstract:
This correspondence describes a method for automated segmentation of speech. The proposed method uses a specially designed filter-bank, called the Bach filter-bank, which makes use of 'music'-related perception criteria. The speech signal is treated as a continuously time-varying signal, as opposed to a short-time stationary model. A comparative study has been made of the performances obtained using Mel, Bark and Bach scale filter banks. The preliminary results show up to 80% matches within 20 ms of the manually segmented data, without any information about the content of the text and without any language dependence. The Bach filters are seen to marginally outperform the other filters.
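A minimal sketch of filter-bank-based segmentation. The Bach filter-bank is specific to this work, so an ordinary mel filter-bank is used here as an assumed stand-in, with boundaries placed at peaks of frame-to-frame spectral change; the input file name is hypothetical.

```python
import numpy as np
import librosa
from scipy.signal import find_peaks

y, sr = librosa.load("utterance.wav", sr=16000)   # hypothetical input file
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40, hop_length=160)
log_mel = np.log(mel + 1e-9)

# Spectral flux between consecutive 10 ms frames.
flux = np.sqrt((np.diff(log_mel, axis=1) ** 2).sum(axis=0))
peaks, _ = find_peaks(flux, distance=5, prominence=flux.std())

boundaries_ms = peaks * 160 / sr * 1000.0
print("candidate segment boundaries (ms):", np.round(boundaries_ms, 1))
```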
Abstract:
Extraction of text areas from document images with complex content and layout is a challenging task. A few texture-based techniques have already been proposed for extraction of such text blocks. Most of these techniques are computationally expensive and hence far from realizable for real-time implementation. In this work, we propose a modification to two of the existing texture-based techniques to reduce the computation. This is accomplished with Harris corner detectors. The efficiencies of these two texture-based algorithms, one based on Gabor filters and the other on the log-polar wavelet signature, are compared. A combination of Gabor-feature-based texture classification performed on a smaller set of Harris corner detected points is observed to deliver both accuracy and efficiency.
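A minimal sketch of the speed-up idea, assuming OpenCV: detect Harris corners first, then evaluate Gabor texture responses only around those points instead of over the whole page. The file name and thresholds are illustrative.

```python
import cv2
import numpy as np

img = cv2.imread("document_page.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
gray = np.float32(img)

# 1. Harris corners restrict where texture features are computed.
harris = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
ys, xs = np.where(harris > 0.01 * harris.max())

# 2. Bank of Gabor filters at a few orientations.
kernels = [cv2.getGaborKernel((21, 21), sigma=4.0, theta=t, lambd=10.0, gamma=0.5)
           for t in np.arange(0, np.pi, np.pi / 4)]
responses = [cv2.filter2D(gray, cv2.CV_32F, k) for k in kernels]

# 3. Texture feature vectors sampled only at the corner locations.
features = np.stack([r[ys, xs] for r in responses], axis=1)
print("corner points:", len(xs), "feature vector length:", features.shape[1])
```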
Abstract:
This paper proposes and compares four methods of binarizing text images captured using a camera mounted on a cell phone. The advantages and disadvantages (image clarity and computational complexity) of each method relative to the others are demonstrated through binarized results. The images are of VGA or lower resolution.
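The four methods compared in the paper are not named in the abstract; as an assumed illustration only, this sketch contrasts two standard binarization approaches (global Otsu vs. locally adaptive thresholding) on a camera-captured image, with a hypothetical input file name.

```python
import cv2

img = cv2.imread("phone_capture.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Global threshold chosen by Otsu's method.
_, otsu = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Locally adaptive threshold, more robust to the uneven illumination typical
# of hand-held camera captures.
adaptive = cv2.adaptiveThreshold(
    img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,
    blockSize=31, C=10,
)

cv2.imwrite("otsu.png", otsu)
cv2.imwrite("adaptive.png", adaptive)
```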
Abstract:
Geometric phases have been used in NMR to implement controlled phase shift gates for quantum information processing, but only in weakly coupled systems in which the individual spins can be identified as qubits. In this work, we implement controlled phase shift gates in strongly coupled systems by using nonadiabatic geometric phases, obtained by evolving the magnetization of fictitious spin-1/2 subspaces over a closed loop on the Bloch sphere. The dynamical phase accumulated during the evolution of the subspaces is refocused by a spin echo pulse sequence and by setting the delay of transition-selective pulses such that the evolution under the homonuclear coupling makes a complete 2π rotation. A detailed theoretical explanation of nonadiabatic geometric phases in NMR is given using single-transition operators. Controlled phase shift gates, the two-qubit Deutsch-Jozsa algorithm, and a parity algorithm in a qubit-qutrit system have been implemented in various strongly dipolar coupled systems obtained by orienting the molecules in liquid crystal media.
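For context, the standard textbook relations behind such experiments (not taken from this paper): the nonadiabatic geometric phase acquired by a (fictitious) spin-1/2 whose magnetization traverses a closed loop C on the Bloch sphere, and the controlled phase shift gate it is used to realize.

```latex
% Geometric phase for a closed loop C on the Bloch sphere, and the resulting
% controlled phase shift gate (standard relations, stated here for context;
% the dynamical phase is assumed to have been refocused as described above).
\[
  \phi_{\mathrm{geo}} \;=\; -\tfrac{1}{2}\,\Omega(C),
  \qquad
  U_{\mathrm{CP}}(\phi) \;=\; \operatorname{diag}\!\left(1,\,1,\,1,\,e^{i\phi}\right),
\]
% where \Omega(C) is the solid angle enclosed by the loop C.
```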
Abstract:
Avoiding the loss of coherence of quantum mechanical states is an important prerequisite for quantum information processing. Dynamical decoupling (DD) is one of the most effective experimental methods for maintaining coherence, especially when one can access only the qubit system and not its environment (bath). It involves the application of pulses to the system whose net effect is a reversal of the system-environment interaction. In any real system, however, the environment is not static, and therefore the reversal of the system-environment interaction becomes imperfect if the spacing between refocusing pulses becomes comparable to or longer than the correlation time of the environment. The efficiency of the refocusing therefore improves if the spacing between the pulses is reduced. Here, we quantify the efficiency of different DD sequences in preserving different quantum states. We use C-13 nuclear spins as qubits and H-1 nuclear spins as the environment (bath), which couples to the qubits via magnetic dipole-dipole couplings. Strong dipole-dipole couplings between the proton spins result in a rapidly fluctuating environment with a correlation time of the order of 100 μs. Our experimental results show that delays between the pulses that are short compared with the bath correlation time yield better performance. However, as the pulse spacing becomes shorter than the bath correlation time, an optimum is reached; for even shorter delays, pulse imperfections dominate over the decoherence losses and cause the quantum state to decay.
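As a schematic of the timing consideration discussed above, in standard periodic-DD notation rather than the specific sequences of this paper (the ~100 μs correlation time is the value quoted in the abstract):

```latex
% One cycle of a periodic dynamical-decoupling sequence with interpulse delay
% \tau, repeated N times; refocusing is effective while the delay is short
% compared with the bath correlation time, until pulse imperfections dominate
% at very small \tau.
\[
  \left[\,\tfrac{\tau}{2} \;-\; \pi \;-\; \tfrac{\tau}{2}\,\right]^{N},
  \qquad
  \tau \;\lesssim\; \tau_{c} \;\approx\; 100~\mu\mathrm{s}.
\]
```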
Abstract:
We present a low-complexity algorithm for intrusion detection in the presence of clutter arising from wind-blown vegetation, using Passive Infra-Red (PIR) sensors in a Wireless Sensor Network (WSN). The algorithm is based on a combination of Haar Transform (HT) and Support Vector Machine (SVM) based training, and was field tested in a network setting comprising 15-20 sensing nodes. Also contained in this paper is a closed-form expression for the signal generated by an intruder moving at a constant velocity. It is shown how this expression can be exploited to determine the direction of motion and the velocity of the intruder from the signals of three well-positioned sensors.
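A minimal sketch of the detection pipeline named in the abstract: a Haar wavelet decomposition of a PIR signal window followed by an SVM that separates intruder events from wind-blown clutter. The training data here is a random placeholder; real labelled PIR traces are assumed.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def haar_features(window, level=4):
    """Concatenate Haar wavelet coefficients of one fixed-length PIR window."""
    coeffs = pywt.wavedec(window, "haar", level=level)
    return np.concatenate(coeffs)

rng = np.random.default_rng(1)
windows = rng.normal(size=(100, 256))    # placeholder PIR signal windows
labels = rng.integers(0, 2, size=100)    # 1 = intruder, 0 = clutter

clf = SVC(kernel="rbf").fit([haar_features(w) for w in windows], labels)
print(clf.predict([haar_features(rng.normal(size=256))]))
```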
Abstract:
We present a local algorithm (constant-time distributed algorithm) for finding a 3-approximate vertex cover in bounded-degree graphs. The algorithm is deterministic, and no auxiliary information besides port numbering is required.
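The paper's algorithm is distributed and deterministic; as a point of comparison only, this sketch shows the classical centralized 2-approximation for vertex cover (take both endpoints of a greedily built maximal matching), which illustrates what an approximate vertex cover is. The example graph is arbitrary.

```python
import networkx as nx

def matching_vertex_cover(G):
    """Return a vertex cover at most twice the optimum size."""
    cover, matched = set(), set()
    for u, v in G.edges():
        if u not in matched and v not in matched:
            matched.update((u, v))   # edge joins the maximal matching
            cover.update((u, v))     # both endpoints go into the cover
    return cover

G = nx.random_regular_graph(3, 20, seed=0)   # a bounded-degree example graph
cover = matching_vertex_cover(G)
assert all(u in cover or v in cover for u, v in G.edges())
print(len(cover), "of", G.number_of_nodes(), "vertices in the cover")
```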
Abstract:
Any pair of non-adjacent vertices forms a non-edge in a graph. Contraction of a non-edge merges two non-adjacent vertices into a single vertex such that the edges incident on the non-adjacent vertices become incident on the merged vertex. In this paper, we consider simple connected graphs, hence parallel edges are removed after contraction. The minimum number of vertices whose removal disconnects the graph is the connectivity of the graph. We say a graph is k-connected if its connectivity is k. A non-edge in a k-connected graph is contractible if its contraction does not result in a graph of lower connectivity; otherwise the non-edge is non-contractible. We focus our study on non-contractible non-edges in 2-connected graphs. We show that cycles are the only 2-connected graphs in which every non-edge is non-contractible.
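A minimal sketch of the operation studied here, using networkx: contract a non-edge (merge two non-adjacent vertices) of a simple connected graph and check whether the connectivity drops, i.e. whether the non-edge is contractible. The cycle example reflects the abstract's result that in a cycle every non-edge is non-contractible.

```python
import networkx as nx

def contract_non_edge(G, u, v):
    """Merge non-adjacent u and v into one vertex (simple graph kept)."""
    assert not G.has_edge(u, v), "u and v must form a non-edge"
    return nx.contracted_nodes(G, u, v, self_loops=False)

C6 = nx.cycle_graph(6)            # a cycle: 2-connected
H = contract_non_edge(C6, 0, 3)   # contract a non-edge of the cycle

print("connectivity:", nx.node_connectivity(C6), "->", nx.node_connectivity(H))
# The drop from 2 to 1 shows this non-edge of the cycle is non-contractible.
```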