210 results for Theoretical Computer Science
Abstract:
Key decisions at the collection, pre-processing, transformation, mining and interpretation phases of any knowledge discovery from databases (KDD) process depend heavily on assumptions and theoretical perspectives relating to the type of task to be performed and the characteristics of the data sourced. In this article, we compare and contrast the theoretical perspectives and assumptions taken in data mining exercises in the legal domain with those adopted in data mining in Traditional Chinese Medicine (TCM) and allopathic medicine. The juxtaposition yields insights for the application of KDD to Traditional Chinese Medicine.
Abstract:
Process bus networks are the next stage in the evolution of substation design, bringing digital technology to the high voltage switchyard. Benefits of process buses include facilitating the use of Non-Conventional Instrument Transformers, improved disturbance recording and phasor measurement, and the removal of costly, and potentially hazardous, copper cabling from substation switchyards and control rooms. This paper examines the role a process bus plays in an IEC 61850 based Substation Automation System. Measurements taken from a process bus substation are used to develop an understanding of the network characteristics of "whole of substation" process buses. The concept of "coherent transmission" is presented and its impact on Ethernet switches is examined. Experiments based on substation observations are used to investigate in detail the behavior of Ethernet switches with sampled value traffic. Test methods that can be used to assess the adequacy of a network are proposed, and examples of the application and interpretation of these tests are provided. Once sampled value frames are queued by an Ethernet switch, the additional delay incurred by subsequent switches is minimal; this allows switches to be used in switchyards to further reduce communications cabling without significantly affecting operation. The performance and reliability of a process bus network operating with close to the theoretical maximum number of digital sampling units (merging units or electronic instrument transformers) were investigated with networking equipment from several vendors and demonstrated to be acceptable.
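As a rough illustration of why a "whole of substation" process bus approaches the capacity of its switches, the sketch below estimates the aggregate load produced by a given number of merging units publishing sampled values. The sample rate, frame size and link speed are illustrative assumptions, not figures reported in the paper.

    # Rough estimate of process bus load from sampled value (SV) traffic.
    # The per-unit figures below are illustrative assumptions, not values
    # reported in the paper.

    SAMPLES_PER_SECOND = 4000   # assumed publishing rate per merging unit
    FRAME_BYTES = 130           # assumed Ethernet frame size per SV message
    LINK_CAPACITY_BPS = 100e6   # assumed 100 Mbit/s switch port

    def sv_load_bps(merging_units: int) -> float:
        """Aggregate bit rate generated by `merging_units` SV publishers."""
        return merging_units * SAMPLES_PER_SECOND * FRAME_BYTES * 8

    if __name__ == "__main__":
        for n in (1, 5, 10, 20):
            load = sv_load_bps(n)
            print(f"{n:2d} merging units: {load / 1e6:6.1f} Mbit/s "
                  f"({100 * load / LINK_CAPACITY_BPS:5.1f}% of link)")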
Abstract:
A fundamental problem faced by stereo matching algorithms is the matching, or correspondence, problem. A wide range of algorithms has been proposed for the correspondence problem. For all matching algorithms, it would be useful to be able to compute a measure of the probability of correctness, or reliability, of a match. This paper focuses in particular on one class of matching algorithms, those based on the rank transform. The interest in these algorithms for stereo matching stems from their invariance to radiometric distortion and their amenability to fast hardware implementation. This work differs from previous work in that it derives, from first principles, an expression for the probability of a correct match. The method is based on an enumeration of all possible symbols for matching. The theoretical results for disparity error prediction obtained using this method were found to agree well with experimental results. However, disadvantages of the technique developed here are that it is not easily applicable to real images and that it is too computationally expensive for practical window sizes. Nevertheless, the exercise provides an interesting and novel analysis of match reliability.
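The abstract does not reproduce the matching pipeline, so the following is a minimal sketch of the general approach it builds on: a rank transform (each pixel replaced by the count of neighbourhood pixels smaller than it) followed by a simple sum-of-absolute-differences disparity search. The window sizes and disparity range are arbitrary example values, not the paper's settings.

    import numpy as np

    def rank_transform(img: np.ndarray, radius: int = 2) -> np.ndarray:
        """Replace each pixel by the number of neighbours smaller than it
        (one common form of the rank transform)."""
        h, w = img.shape
        out = np.zeros((h, w), dtype=np.int32)
        for y in range(radius, h - radius):
            for x in range(radius, w - radius):
                win = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
                out[y, x] = np.count_nonzero(win < img[y, x])
        return out

    def match_row(left_rt, right_rt, y, x, max_disp=32, win=3):
        """Pick the disparity minimising SAD between rank-transformed windows."""
        best_d, best_cost = 0, np.inf
        ref = left_rt[y - win:y + win + 1, x - win:x + win + 1]
        for d in range(0, max_disp + 1):
            if x - d - win < 0:
                break
            cand = right_rt[y - win:y + win + 1, x - d - win:x - d + win + 1]
            cost = np.abs(ref - cand).sum()
            if cost < best_cost:
                best_cost, best_d = cost, d
        return best_d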
Abstract:
It is acknowledged around the world that many university students struggle with learning to program (McCracken et al., 2001; McGettrick et al., 2005). In this paper, we describe how we have developed a research programme to systematically study and incrementally improve our teaching. We have adopted a research programme with three elements: (1) a theory that provides an organising framework for defining the type of phenomena and data of interest, (2) data on how the class as a whole performs on formative assessment tasks that are framed from within the organising framework, and (3) data from one-on-one think aloud sessions, to establish why students struggle with some of those in-class formative assessment tasks. We teach introductory computer programming, but this three-element structure of our research is applicable to many areas of engineering education research.
Abstract:
This paper introduces PartSS, a new partition-based filtering scheme for tasks performing string comparisons under edit distance constraints. PartSS improves over the state-of-the-art method NGPP by implementing a new partitioning scheme, and further improves filtering ability by exploiting theoretical results on shifting and scaling ranges, thus accelerating the calculation of edit distance between strings. PartSS filtering has been implemented within two major data integration tasks: similarity join and approximate membership extraction under edit distance constraints. Evaluation on an extensive range of real-world datasets demonstrates a major gain in efficiency over the NGPP and QGrams approaches.
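The abstract does not describe PartSS's partitioning scheme, but the general principle behind partition-based filters is the pigeonhole argument: if ed(s, t) <= k, then splitting s into k+1 disjoint pieces guarantees at least one piece occurs unchanged in t, shifted by at most k positions. The sketch below illustrates that filter-then-verify idea with a naive partition filter and dynamic-programming verification; it is not the PartSS algorithm itself.

    def edit_distance(s: str, t: str) -> int:
        """Standard dynamic-programming Levenshtein distance."""
        prev = list(range(len(t) + 1))
        for i, cs in enumerate(s, 1):
            cur = [i]
            for j, ct in enumerate(t, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (cs != ct)))   # substitution
            prev = cur
        return prev[-1]

    def passes_partition_filter(s: str, t: str, k: int) -> bool:
        """Pigeonhole filter: split s into k+1 chunks; if ed(s, t) <= k,
        at least one chunk must appear in t near its original position."""
        n = len(s)
        size = max(1, n // (k + 1))
        for p in range(k + 1):
            start = p * size
            end = n if p == k else start + size
            chunk = s[start:end]
            lo, hi = max(0, start - k), min(len(t), end + k)
            if not chunk or chunk in t[lo:hi]:
                return True
        return False

    def similar(s: str, t: str, k: int) -> bool:
        """Filter-then-verify similarity test under edit distance <= k."""
        if abs(len(s) - len(t)) > k:
            return False
        if not passes_partition_filter(s, t, k):
            return False
        return edit_distance(s, t) <= k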
Abstract:
Citizen Science projects are initiatives in which members of the general public participate in scientific research and perform or manage research-related tasks such as data collection and data annotation. Citizen Science is technologically feasible and scientifically significant. However, although research teams can save time and money by recruiting citizens who volunteer their time and skills to help with data analysis, the reliability of the contributed data varies considerably. Data reliability issues are significant in Citizen Science because of the quantity and diversity of the people and devices involved: participants may submit low quality, misleading, inaccurate, or even malicious data. Finding ways to improve data reliability is therefore a pressing need. This study investigates techniques to enhance the reliability of data contributed by citizens in scientific research projects, particularly acoustic sensing projects. In particular, we propose a reputation framework to enhance data reliability and identify critical elements that should be considered when designing and developing new reputation systems.
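The reputation framework itself is not specified in the abstract; as a minimal illustration of the kind of building block such frameworks commonly use, the sketch below applies a beta-reputation score (expected accuracy under a uniform prior) to weight each volunteer's contributions. The class and field names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Contributor:
        """Hypothetical per-volunteer record for a citizen-science project."""
        accepted: int = 0   # submissions confirmed by experts or consensus
        rejected: int = 0   # submissions found to be inaccurate or malicious

        def record(self, was_accurate: bool) -> None:
            if was_accurate:
                self.accepted += 1
            else:
                self.rejected += 1

        @property
        def reputation(self) -> float:
            """Beta-reputation estimate: expected accuracy with a uniform prior."""
            return (self.accepted + 1) / (self.accepted + self.rejected + 2)

    def weight_annotation(contributor: Contributor, threshold: float = 0.6) -> float:
        """Down-weight (or ignore) data from low-reputation contributors."""
        rep = contributor.reputation
        return rep if rep >= threshold else 0.0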
Abstract:
We blend research from human-computer interface (HCI) design with computationally based cryptographic provable security. We explore the notion of practice-oriented provable security (POPS), moving the focus to a higher level of abstraction (POPS+) for use in providing provable security for security ceremonies involving humans. In doing so we highlight some challenges and paradigm shifts required to achieve meaningful provable security for a protocol which includes a human. We move the focus of security ceremonies from being protocols in their context of use to being cryptographic building blocks in a higher level protocol (the security ceremony), to which POPS can be applied. In order to illustrate the need for our approach, we analyse both a protocol proven secure in theory and a similar protocol implemented by a financial institution, from both HCI and cryptographic perspectives.
Abstract:
We introduce a lightweight biometric solution for user authentication over networks using online handwritten signatures. The algorithm proposed is based on a modified Hausdorff distance and has favorable characteristics such as low computational cost and minimal training requirements. Furthermore, we investigate an information theoretic model for capacity and performance analysis for biometric authentication which brings additional theoretical insights to the problem. A fully functional proof-of-concept prototype that relies on commonly available off-the-shelf hardware is developed as a client-server system that supports Web services. Initial experimental results show that the algorithm performs well despite its low computational requirements and is resilient against over-the-shoulder attacks.
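The abstract does not describe the modification used, so the sketch below shows the modified Hausdorff distance in its commonly cited form, the larger of the two mean nearest-neighbour distances between point sets, applied to resampled signature coordinates. The acceptance threshold is an assumed placeholder, not a value from the paper.

    import numpy as np

    def modified_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
        """Modified Hausdorff distance between two point sets of shape (n, 2),
        e.g. resampled pen coordinates of two online signatures."""
        # pairwise Euclidean distances, shape (len(a), len(b))
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        d_ab = d.min(axis=1).mean()   # mean distance from each point of a to b
        d_ba = d.min(axis=0).mean()   # mean distance from each point of b to a
        return max(d_ab, d_ba)

    def authenticate(query: np.ndarray, templates: list, threshold: float) -> bool:
        """Accept the claimed identity if the query signature is close enough
        to any enrolled template (threshold is application-specific)."""
        return min(modified_hausdorff(query, t) for t in templates) <= threshold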
Abstract:
The GameFlow model strives to be a general model of player enjoyment, applicable to all game genres and platforms. Derived from a general set of heuristics for creating enjoyable player experiences, the GameFlow model has been widely used in evaluating many types of games, as well as non-game applications. However, we recognize that more specific, low-level, and implementable criteria are potentially more useful for designing and evaluating video games. Consequently, the research reported in this paper aims to provide detailed heuristics for designing and evaluating one specific game genre, real-time strategy games. In order to develop these heuristics, we conducted a grounded theoretical analysis on a set of professional game reviews and structured the resulting heuristics using the GameFlow model. A selection of the resulting 165 heuristics is presented in this paper and discussed with respect to key evaluations of the GameFlow model.
Abstract:
The GameFlow model strives to be a general model of player enjoyment, applicable to all game genres and platforms. Derived from a general set of heuristics for creating enjoyable player experiences, the GameFlow model has been widely used in evaluating many types of games, as well as non-game applications. However, we recognize that more specific, low-level, and implementable criteria are potentially more useful for designing and evaluating video games. Consequently, the research reported in this paper aims to provide detailed heuristics for designing and evaluating one specific game genre, real-time strategy games. In order to develop these heuristics, we conducted a grounded theoretical analysis on a set of professional game reviews and structured the resulting heuristics using the GameFlow model. The resulting 165 heuristics for designing and evaluating real-time strategy games are presented and discussed in this paper.
Abstract:
Series reactors are used in distribution grids to reduce the short-circuit fault level. Among the disadvantages of these devices are the voltage drop produced across the reactor and the steep rate of rise of the transient recovery voltage (TRV), which generally exceeds the rating of the associated circuit breaker. Simulations were performed to compare the characteristics of a saturated core High-Temperature Superconducting Fault Current Limiter (HTS FCL) and a series reactor. The design of the HTS FCL was optimized using an evolutionary algorithm, and the resulting Pareto frontier of optimal solutions is presented in this paper. The results show that the steady-state impedance of an HTS FCL is significantly lower than that of a series reactor for the same level of fault current limiting. Tests performed on a prototype 11 kV HTS FCL confirm the theoretical results. The respective transient recovery voltages of the HTS FCL and an air core reactor of comparable fault current limiting capability are also determined. The results show that the saturated core HTS FCL has a significantly lower effect on the rate of rise of the circuit breaker TRV than the air core reactor. The simulation results are validated against short-circuit test results.
Abstract:
Theoretical foundations of higher order spectral analysis are revisited to examine the use of time-varying bicoherence on non-stationary signals using a classical short-time Fourier approach. A methodology is developed to apply this to evoked EEG responses where a stimulus-locked time reference is available. Short-time windowed ensembles of the response at the same offset from the reference are treated as ergodic cyclostationary processes within a non-stationary random process. Bicoherence can then be estimated reliably, with known levels at which it is significantly different from zero, and tracked as a function of offset from the stimulus. When this methodology is applied to multi-channel EEG, it is possible to obtain information about phase synchronization in different regions of the brain as the neural response develops. The methodology is applied to analyze evoked EEG responses to flash visual stimuli presented to the left and right eyes separately. The EEG electrode array is segmented based on bicoherence evolution over time, using the mean absolute difference as a measure of dissimilarity. Segment maps confirm the importance of the occipital region in visual processing and demonstrate a link between the frontal and occipital regions during the response. Maps are constructed using bicoherence at bifrequencies that include the alpha band frequency of 8 Hz as well as 4 and 20 Hz. Differences are observed between responses from the left eye and the right eye, and also between subjects. The methodology shows potential as a neurological functional imaging technique that can be further developed for diagnosis and monitoring using scalp EEG, which is less invasive and less expensive than magnetic resonance imaging.
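For reference, a minimal ensemble estimator of bicoherence from stimulus-locked, equal-length segments can be written as below; the windowing, normalisation details and significance thresholds used in the paper may differ.

    import numpy as np

    def bicoherence(segments: np.ndarray, f1: int, f2: int) -> float:
        """Estimate bicoherence b(f1, f2) from an ensemble of equal-length,
        stimulus-locked segments (shape: n_segments x n_samples).
        f1 and f2 are FFT bin indices."""
        X = np.fft.rfft(segments * np.hanning(segments.shape[1]), axis=1)
        triple = X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2])
        num = np.abs(triple.mean())
        den = np.sqrt((np.abs(X[:, f1] * X[:, f2]) ** 2).mean()
                      * (np.abs(X[:, f1 + f2]) ** 2).mean())
        return num / den

Sweeping the segment offset relative to the stimulus then yields the time-varying bicoherence described above.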
Abstract:
This paper reports on the implementation of a non-invasive electroencephalography-based brain-computer interface (BCI) to control functions of a car in a driving simulator. The system comprises a Cleveland Medical Devices BioRadio 150 physiological signal recorder, a MATLAB-based BCI and an OKTAL SCANeR advanced driving experience simulator. The system utilizes steady-state visual-evoked potentials for the BCI paradigm, elicited by frequency-modulated high-power LEDs and recorded with the electrode placement of Oz-Fz with Fz as ground. A three-class online brain-computer interface was developed and interfaced with the driving simulator to control functions of the car, including acceleration and steering. The findings are mainly exploratory but provide an indication of the feasibility and challenges of brain-controlled on-road cars for the future, in addition to providing a safe, simulated BCI driving environment to use as a foundation for research into overcoming these challenges.
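The decoding method is not detailed in the abstract, so the sketch below shows only a generic baseline SSVEP decoder: compare narrow-band power at each flicker frequency (plus its first harmonic) and select the strongest. The command names, flicker frequencies and sampling rate are hypothetical placeholders, not values from the paper.

    import numpy as np

    # Hypothetical flicker frequencies for a three-class BCI (Hz)
    STIMULUS_FREQS = {"accelerate": 13.0, "steer_left": 17.0, "steer_right": 21.0}

    def band_power(eeg: np.ndarray, fs: float, f0: float, width: float = 0.5) -> float:
        """Power of the EEG segment in a narrow band around f0."""
        spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        band = (freqs >= f0 - width) & (freqs <= f0 + width)
        return spectrum[band].sum()

    def decode_command(eeg: np.ndarray, fs: float = 256.0) -> str:
        """Pick the stimulus whose frequency (plus first harmonic) dominates."""
        scores = {cmd: band_power(eeg, fs, f) + band_power(eeg, fs, 2 * f)
                  for cmd, f in STIMULUS_FREQS.items()}
        return max(scores, key=scores.get)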
Abstract:
Trajectory basis Non-Rigid Structure From Motion (NRSFM) currently faces two problems: the limit of reconstructability and the need to tune the basis size for different sequences. This paper provides a novel theoretical bound on 3D reconstruction error, arguing that the existing definition of reconstructability is fundamentally flawed in that it fails to consider the conditioning of the system. This insight motivates a novel strategy whereby the trajectory's response to a set of high-pass filters is minimised. The new approach eliminates the need to tune the basis size and is more efficient for long sequences. Additionally, the truncated DCT basis is shown to have a dual interpretation as a high-pass filter. The success of trajectory filter reconstruction is demonstrated quantitatively on synthetic projections of real motion capture sequences and qualitatively on real image sequences.
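To make the dual interpretation concrete, a minimal sketch: projecting a per-frame trajectory onto a truncated (low-frequency) DCT basis and keeping the residual acts as a high-pass operation on that trajectory. The basis size k used here is an arbitrary example value, not a recommendation (the paper's point is precisely that tuning it can be avoided).

    import numpy as np

    def dct_basis(n_frames: int, k: int) -> np.ndarray:
        """First k DCT-II basis vectors for a length-n_frames trajectory,
        returned as an orthonormal n_frames x k matrix."""
        t = np.arange(n_frames)
        basis = np.cos(np.pi * (t[:, None] + 0.5) * np.arange(k)[None, :] / n_frames)
        basis[:, 0] *= 1.0 / np.sqrt(2.0)
        return basis * np.sqrt(2.0 / n_frames)

    def highpass_residual(trajectory: np.ndarray, k: int = 10) -> np.ndarray:
        """Component of a per-frame trajectory (n_frames x dims) not explained
        by the first k low-frequency DCT coefficients; projecting out the
        truncated basis is equivalent to high-pass filtering the trajectory."""
        B = dct_basis(len(trajectory), k)
        lowpass = B @ (B.T @ trajectory)   # orthogonal projection onto the basis
        return trajectory - lowpass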
Abstract:
The security of industrial control systems in critical infrastructure is a concern for the Australian government and other nations. There is a need to provide local Australian training and education for both control system engineers and information technology professionals. This paper proposes a postgraduate curriculum of four courses to provide the knowledge and skills needed to protect critical infrastructure industrial control systems. Our curriculum is unique in that it provides not only security awareness but also the advanced skills required of security specialists in this area. We are aware that, in the Australian context, there is a cultural gap between the thinking of control system engineers, who are responsible for designing and maintaining critical infrastructure, and information technology professionals, who are responsible for protecting these systems from cyber attacks. Our curriculum aims to bridge this gap by providing theoretical and practical exercises that will raise the awareness and preparedness of both groups of professionals.