869 results for Classification of cast net
Abstract:
Nucleosides in human urine and serum have frequently been studied as possible biomedical markers for cancer, acquired immune deficiency syndrome (AIDS) and the whole-body turnover of RNAs. Fifteen normal and modified nucleosides were determined in 69 urine and 42 serum samples using high-performance liquid chromatography (HPLC). Artificial neural networks were used as a powerful pattern recognition tool to distinguish cancer patients from healthy persons. The recognition rate for the training set reached 100%. In the validation set, 95.8% and 92.9% of subjects were correctly classified as cancer patients or healthy persons when urine and serum, respectively, were used as the sample for measuring the nucleosides. The results show that the artificial neural network technique is better than principal component analysis for the classification of healthy persons and cancer patients based on nucleoside data.
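A minimal sketch of the kind of pattern-recognition pipeline described above, assuming a 15-dimensional nucleoside profile per sample and using scikit-learn's MLPClassifier in place of the authors' original network; the data, layer size and other hyperparameters are illustrative placeholders.

```python
# Minimal sketch (not the authors' original network): a small feed-forward
# neural network that classifies 15-dimensional nucleoside profiles as
# "cancer" (1) or "healthy" (0). Data shapes and hyperparameters are assumed.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train = rng.normal(size=(69, 15))    # e.g. 69 urine samples x 15 nucleosides
y_train = rng.integers(0, 2, size=69)  # 1 = cancer patient, 0 = healthy (placeholder labels)

model = make_pipeline(
    StandardScaler(),                          # scale each nucleoside level
    MLPClassifier(hidden_layer_sizes=(10,),    # one small hidden layer (assumed size)
                  max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("training accuracy:", model.score(X_train, y_train))
```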
Abstract:
Spatial and temporal distribution of vegetation net primary production (NPP) in China was studied using three light-use efficiency models (CASA, GLOPEM and GEOLUE) and two mechanistic ecological process models (CEVSA, GEOPRO). Based on spatial and temporal analysis (monthly, seasonal and annual) of the simulated results from the CASA, GLOPEM and CEVSA models, the following conclusions could be made: (1) during the last 20 years, NPP change in China closely followed the seasonal change of climate affected by the monsoon, with an overall increasing trend; (2) simulated average seasonal NPP was 0.571 +/- 0.2 GtC in spring, 1.573 +/- 0.4 GtC in summer, 0.6 +/- 0.2 GtC in autumn, and 0.12 +/- 0.1 GtC in winter, while average annual NPP in China was 2.864 +/- 1 GtC. All five models were able to simulate seasonal and spatial features of biomass for different ecological types in China. This paper provides a baseline for China's total biomass production. It also offers a means of estimating the NPP change due to afforestation, reforestation, conservation and other human activities, and could aid in using the aforementioned carbon sinks to fulfill China's commitment to reducing greenhouse gases.
Abstract:
C.R. Bull, N.J.B. McFarlane, R. Zwiggelaar, C.J. Allen and T.T. Mottram, 'Inspection of teats by colour image analysis for automatic milking systems', Computers and Electronics in Agriculture 15 (1), 15-26 (1996)
Abstract:
Plakhov, A.Y.; Torres, D. (2005) 'Newton's aerodynamic problem in media of chaotically moving particles', Sbornik: Mathematics 196(6), pp. 885-933.
Abstract:
The current congestion-oriented design of TCP hinders its ability to perform well in hybrid wireless/wired networks. We propose a new improvement on TCP NewReno (NewReno-FF) using a new loss labeling technique to discriminate wireless losses from congestion losses. The proposed technique is based on estimating the average and variance of the round-trip time using a filter called the Flip Flop filter, augmented with history information. We show the comparative performance of TCP NewReno, NewReno-FF, and TCP Westwood through extensive simulations. We study the fundamental gains and limits using TCP NewReno with varying loss labeling accuracy (NewReno-LL) as a benchmark. Lastly, our investigation opens up important research directions. First, there is a need for a finer-grained classification of losses (even within congestion and wireless losses) for TCP in heterogeneous networks. Second, it is essential to develop an appropriate control strategy for recovery after the correct classification of a packet loss.
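A minimal sketch of the general loss-labeling idea, assuming EWMA-style estimates of the round-trip-time mean and deviation; the class name, gains and threshold are hypothetical, and this is not the paper's exact Flip Flop filter with history information.

```python
# Sketch of loss labeling from RTT statistics (not the paper's exact Flip Flop
# filter): keep smoothed estimates of the RTT mean and deviation and label a
# loss as congestion-induced only if the RTT around the loss is well above the
# smoothed estimate (queues building up), otherwise attribute it to the wireless link.
class RttLossLabeler:
    def __init__(self, alpha=0.125, beta=0.25, k=2.0):
        self.alpha, self.beta, self.k = alpha, beta, k  # assumed gains and threshold
        self.srtt = None    # smoothed RTT estimate
        self.rttvar = 0.0   # smoothed RTT deviation

    def update(self, rtt_sample):
        if self.srtt is None:
            self.srtt, self.rttvar = rtt_sample, rtt_sample / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(rtt_sample - self.srtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt_sample

    def label_loss(self, rtt_at_loss):
        if self.srtt is None:
            return "unknown"
        return "congestion" if rtt_at_loss > self.srtt + self.k * self.rttvar else "wireless"
```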
Abstract:
Object detection and recognition are important problems in computer vision. The challenges of these problems come from the presence of noise, background clutter, large within-class variations of the object class and limited training data. In addition, the computational complexity of the recognition process is also a concern in practice. In this thesis, we propose one approach to handle the problem of detecting an object class that exhibits large within-class variations, and a second approach to speed up the classification processes. In the first approach, we show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be jointly solved using a multiplicative form of two kernel functions. One kernel measures similarity for foreground-background classification. The other kernel accounts for latent factors that control within-class variation and implicitly enables feature sharing among foreground training samples. For applications where explicit parameterization of the within-class states is unavailable, a nonparametric formulation of the kernel can be constructed with a proper foreground distance/similarity measure. Detector training is accomplished via standard Support Vector Machine learning. The resulting detectors are tuned to specific variations in the foreground class. They also serve to evaluate hypotheses of the foreground state. When the image masks for foreground objects are provided in training, the detectors can also produce object segmentation. Methods for generating a representative sample set of detectors are proposed that can enable efficient detection and tracking. In addition, because individual detectors verify hypotheses of foreground state, they can also be incorporated in a tracking-by-detection framework to recover foreground state in image sequences. To run the detectors efficiently at the online stage, an input-sensitive speedup strategy is proposed to select the most relevant detectors quickly. The proposed approach is tested on data sets of human hands, vehicles and human faces. On all data sets, the proposed approach achieves improved detection accuracy over the best competing approaches. In the second part of the thesis, we formulate a filter-and-refine scheme to speed up recognition processes. The binary outputs of the weak classifiers in a boosted detector are used to identify a small number of candidate foreground state hypotheses quickly via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the Face Recognition Grand Challenge version 2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and view-angle estimation on a multi-pose vehicle data set. On all data sets, our approach is at least five times faster than simply evaluating all foreground state hypotheses, with virtually no loss in classification accuracy.
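A minimal sketch of the multiplicative two-kernel construction described above, assuming RBF kernels over appearance features and a scalar within-class state, and using scikit-learn's precomputed-kernel SVM; feature dimensions and bandwidths are illustrative assumptions, not the thesis' settings.

```python
# Sketch of the multiplicative two-kernel idea: the effective kernel is the
# product of an appearance kernel (foreground vs background similarity) and a
# kernel over the within-class state (e.g. pose). Shapes and bandwidths are assumed.
import numpy as np
from sklearn.svm import SVC

def rbf(a, b, gamma):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2) ** 2
    return np.exp(-gamma * d)

def product_kernel(Xa, Sa, Xb, Sb, gamma_x=0.1, gamma_s=1.0):
    # K((x, s), (x', s')) = K_appearance(x, x') * K_state(s, s')
    return rbf(Xa, Xb, gamma_x) * rbf(Sa, Sb, gamma_s)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 64))        # appearance features (assumed 64-D)
S = rng.uniform(0, 1, size=(40, 1))  # within-class state, e.g. normalised pose angle
y = rng.integers(0, 2, size=40)      # 1 = foreground, 0 = background (placeholder labels)

K_train = product_kernel(X, S, X, S)
clf = SVC(kernel="precomputed").fit(K_train, y)
print("training accuracy:", clf.score(K_train, y))
```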
Abstract:
A mechanism is proposed that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. It is assumed that the system observes multiple moving objects via a single, uncalibrated video camera. A novel extended Kalman filter formulation is used in estimating the relative 3D motion trajectories up to a scale factor. The recursive estimation process provides a prediction and error measure that is exploited in higher-level stages of action recognition. Conversely, higher-level mechanisms provide feedback that allows the system to reliably segment and maintain the tracking of moving objects before, during, and after occlusion. The 3D trajectory, occlusion, and segmentation information are utilized in extracting stabilized views of the moving object. Trajectory-guided recognition (TGR) is proposed as a new and efficient method for adaptive classification of action. The TGR approach is demonstrated using "motion history images" that are then recognized via a mixture-of-Gaussians classifier. The system was tested in recognizing various dynamic human outdoor activities; e.g., running, walking, roller blading, and cycling. Experiments with synthetic data sets are used to evaluate stability of the trajectory estimator with respect to noise.
Abstract:
A combined 2D, 3D approach is presented that allows for robust tracking of moving people and recognition of actions. It is assumed that the system observes multiple moving objects via a single, uncalibrated video camera. Low-level features are often insufficient for detection, segmentation, and tracking of non-rigid moving objects. Therefore, an improved mechanism is proposed that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. A novel extended Kalman filter formulation is used in estimating the relative 3D motion trajectories up to a scale factor. The recursive estimation process provides a prediction and error measure that is exploited in higher-level stages of action recognition. Conversely, higher-level mechanisms provide feedback that allows the system to reliably segment and maintain the tracking of moving objects before, during, and after occlusion. The 3D trajectory, occlusion, and segmentation information are utilized in extracting stabilized views of the moving object that are then used as input to action recognition modules. Trajectory-guided recognition (TGR) is proposed as a new and efficient method for adaptive classification of action. The TGR approach is demonstrated using "motion history images" that are then recognized via a mixture-of-Gaussians classifier. The system was tested in recognizing various dynamic human outdoor activities: running, walking, roller blading, and cycling. Experiments with real and synthetic data sets are used to evaluate stability of the trajectory estimator with respect to noise.
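A generic extended Kalman filter predict/update step, sketched under the assumption of user-supplied motion and measurement models; the thesis' specific formulation for 3D trajectories up to a scale factor is not reproduced, but the returned innovation illustrates the prediction-error measure passed to the higher-level recognition stage.

```python
# Generic extended Kalman filter predict/update skeleton. The functions f, h
# and their Jacobians F_jac, H_jac are placeholders to be supplied by the
# application (here they stand in for the trajectory and camera models).
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    # Predict with the (possibly nonlinear) motion model f.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update with the measurement model h; the innovation (prediction error)
    # is what the higher-level recognition stage can exploit.
    innovation = z - h(x_pred)
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, innovation
```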
Abstract:
The Internet and World Wide Web have had, and continue to have, an incredible impact on our civilization. These technologies have radically influenced the way that society is organised and the manner in which people around the world communicate and interact. The structure and function of individual, social, organisational, economic and political life begin to resemble the digital network architectures upon which they are increasingly reliant. It is increasingly difficult to imagine how our ‘offline’ world would look or function without the ‘online’ world; it is becoming less meaningful to distinguish between the ‘actual’ and the ‘virtual’. Thus, the major architectural project of the twenty-first century is to “imagine, build, and enhance an interactive and ever changing cyberspace” (Lévy, 1997, p. 10). Virtual worlds are at the forefront of this evolving digital landscape. Virtual worlds have “critical implications for business, education, social sciences, and our society at large” (Messinger et al., 2009, p. 204). This study focuses on the possibilities of virtual worlds in terms of communication, collaboration, innovation and creativity. The concept of knowledge creation is at the core of this research. The study shows that scholars increasingly recognise that knowledge creation, as a socially enacted process, goes to the very heart of innovation. However, efforts to build upon these insights have struggled to escape the influence of the information processing paradigm of old and have failed to move beyond the persistent but problematic conceptualisation of knowledge creation in terms of tacit and explicit knowledge. Based on these insights, the study leverages extant research to develop the conceptual apparatus necessary to carry out an investigation of innovation and knowledge creation in virtual worlds. The study derives and articulates a set of definitions (of virtual worlds, innovation, knowledge and knowledge creation) to guide research. The study also leverages a number of extant theories in order to develop a preliminary framework to model knowledge creation in virtual worlds. Using a combination of participant observation and six case studies of innovative educational projects in Second Life, the study yields a range of insights into the process of knowledge creation in virtual worlds and into the factors that affect it. The study’s contributions to theory are expressed as a series of propositions and findings and are represented as a revised and empirically grounded theoretical framework of knowledge creation in virtual worlds. These findings highlight the importance of prior related knowledge and intrinsic motivation in shaping and stimulating knowledge creation in virtual worlds. At the same time, they highlight the importance of meta-knowledge (knowledge about knowledge) in guiding the knowledge creation process, whilst revealing the diversity of behavioural approaches actually used to create knowledge in virtual worlds. This theoretical framework is itself one of the chief contributions of the study, and the analysis explores how it can be used to guide further research in virtual worlds and on knowledge creation. The study’s contributions to practice are presented as an actionable guide to stimulating knowledge creation in virtual worlds.
This guide utilises a theoretically based classification of four knowledge-creator archetypes (the sage, the lore master, the artisan, and the apprentice) and derives an actionable set of behavioural prescriptions for each archetype. The study concludes with a discussion of its implications for future research.
The psychology of immersion and development of a quantitative measure of immersive response in games
Abstract:
This study sets out to investigate the psychology of immersion and the immersive response of individuals in relation to video and computer games. Initially, an exhaustive review of literature is presented, including research into games, player demographics, personality and identity. Play in traditional psychology is also reviewed, as well as previous research into immersion and attempts to define and measure this construct. An online qualitative study was carried out (N=38), and data was analysed using content analysis. A definition of immersion emerged, as well as a classification of two separate types of immersion, namely, vicarious immersion and visceral immersion. A survey study (N=217) verified the discrete nature of these categories and rejected the null hypothesis that there was no difference between individuals' interpretations of vicarious and visceral immersion. The primary aim of this research was to create a quantitative instrument which measures the immersive response as experienced by the player in a single game session. The IMX Questionnaire was developed using data from the initial qualitative study and quantitative survey. Exploratory Factor Analysis was carried out on data from 300 participants for the IMX Version 1, and Confirmatory Factor Analysis was conducted on data from 380 participants on the IMX Version 2. IMX Version 3 was developed from the results of these analyses. This questionnaire was found to have high internal consistency reliability and validity.
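A minimal sketch of an internal-consistency check of the kind reported for the IMX Questionnaire, assuming a respondents-by-items score matrix; the simulated Likert-style data and the 20-item size are placeholders, not the actual IMX items or samples.

```python
# Sketch: Cronbach's alpha for internal-consistency reliability of a
# questionnaire such as the IMX (the actual IMX items and responses are not
# reproduced; the data below are simulated around a shared "immersion" factor).
import numpy as np

def cronbach_alpha(responses):
    # responses: (n_respondents, n_items) matrix of item scores
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(380, 1))                  # shared factor, 380 respondents as in the CFA sample
items = latent + 0.5 * rng.normal(size=(380, 20))   # 20 correlated items (assumed count)
print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
```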
Abstract:
Coastal lagoons are defined as shallow coastal water bodies partially separated from the adjacent sea by a restrictive barrier. Coastal lagoons are protected under Annex I of the European Habitats Directive (92/43/EEC). Lagoons are also considered to be “transitional water bodies” and are therefore included in the “register of protected areas” under the Water Framework Directive (2000/60/EC). Consequently, EU member states are required to establish monitoring plans and to regularly report on lagoon condition and conservation status. Irish lagoons are considered relatively rare and unusual because of their North Atlantic, macrotidal location on high-energy coastlines, and have received little attention. This work aimed to assess the physicochemical and ecological status of three lagoons, Cuskinny, Farranamanagh and Toormore, on the southwest coast of Ireland. Baseline salinity, nutrient and biological conditions were determined in order to provide reference conditions to detect perturbations, and to inform future maintenance of ecosystem health. Accumulation of organic matter is an increasing pressure in coastal lagoon habitats worldwide, often compounding existing eutrophication problems. This research also aimed to investigate the in situ decomposition process in a lagoon habitat and to explore the associated invertebrate assemblages. Re-classification of the lagoons, under the guidelines of the Venice system for the classification of marine waters according to salinity, was completed by taking spatial and temporal changes in salinity regimes into consideration. Based on the results of this study, Cuskinny, Farranamanagh and Toormore lagoons are now classified as mesohaline (5 ppt – 18 ppt), oligohaline (0.5 ppt – 5 ppt) and polyhaline (18 ppt – 30 ppt), respectively. Varying vertical, longitudinal and transverse salinity patterns were observed in the three lagoons. Strong correlations between salinity and cumulative rainfall highlighted the important role of precipitation in controlling the lagoon environment. The maximum effect of precipitation on lagoon salinity was observed between four and fourteen days after rainfall, depending on catchment area geology, indicating the uniqueness of each lagoon system. Seasonal nutrient patterns were evident in the lagoons. Nutrient concentrations were found to be reflective of the catchment area and the magnitude of the freshwater inflow. Assessment based on the Redfield molar ratio indicated a trend towards phosphorus, rather than nitrogen, limitation in Irish lagoons. Investigation of the decomposition process in Cuskinny Lagoon revealed that the greatest biomass loss occurred in the winter season. The lowest biomass loss occurred in spring, possibly due to the high density of invertebrates feeding on the thick microbial layer rather than the decomposing litter. It has been reported that the decomposition of plant biomass is highest in the preferential distribution area of the plant species; however, no similar trend was observed in this study, with the most active zones of decomposition varying spatially throughout the seasons. Macroinvertebrate analysis revealed low species diversity but high abundance, indicating the dominance of a small number of species. Invertebrate assemblages within the lagoon varied significantly from communities in the adjacent freshwater or marine environments.
Although carried out in coastal lagoons on the southwest coast of Ireland, it is envisaged that the overall findings of this study have relevance throughout the entire island of Ireland and possibly to many North Atlantic coastal lagoon ecosystems elsewhere.
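A minimal sketch of the Venice-system salinity banding quoted above, assuming salinity expressed in ppt; the lagoon examples in the comments simply restate the classifications reported in the abstract.

```python
# Sketch: classify a lagoon by its salinity regime following the Venice-system
# bands quoted in the abstract (salinity in ppt).
def venice_class(salinity_ppt):
    if salinity_ppt < 0.5:
        return "freshwater"
    if salinity_ppt < 5:
        return "oligohaline"   # e.g. Farranamanagh
    if salinity_ppt < 18:
        return "mesohaline"    # e.g. Cuskinny
    if salinity_ppt < 30:
        return "polyhaline"    # e.g. Toormore
    return "euhaline"

print([venice_class(s) for s in (2.0, 12.0, 25.0)])
```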
Abstract:
Future high speed communications networks will transmit data predominantly over optical fibres. As consumer and enterprise computing will remain the domain of electronics, the electro-optical conversion will be pushed further downstream towards the end user. Consequently, efficient tools are needed for this conversion and, due to many potential advantages, including low cost and high output powers, long-wavelength Vertical Cavity Surface Emitting Lasers (VCSELs) are a viable option. Drawbacks, such as broader linewidths than competing options, can be mitigated through the use of additional techniques such as Optical Injection Locking (OIL), which can require significant expertise and expensive equipment. This thesis addresses these issues by removing some of the experimental barriers to achieving performance increases via remote OIL. Firstly, numerical simulations of the phase and the photon and carrier numbers of an OIL semiconductor laser allowed the classification of the stable locking phase limits into three distinct groups. The frequency detuning of constant phase values (φ) was considered, in particular φ = 0, where the modulation response parameters were shown to be independent of the linewidth enhancement factor, α. A new method to estimate α and the coupling rate in a single experiment was formulated. Secondly, a novel technique to remotely determine the locked state of a VCSEL, based on voltage variations of 2 mV-30 mV during detuned injection, has been developed which can identify oscillatory and locked states. 2D and 3D maps of voltage, optical and electrical spectra illustrate the corresponding behaviours. Finally, the use of directly modulated VCSELs as light sources for passive optical networks was investigated by successful transmission of data at 10 Gbit/s over 40 km of single mode fibre (SMF) using cost-effective electronic dispersion compensation to mitigate errors due to wavelength chirp. A widely tuneable MEMS-VCSEL was established as a good candidate for an externally modulated colourless source after record error-free transmission at 10 Gbit/s over 50 km of SMF across a 30 nm single mode tuning range. The ability to remotely set the emission wavelength using the novel methods developed in this thesis was demonstrated.
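A minimal sketch of the standard injection-locking rate equations for the photon number, phase and carrier number of a slave laser, integrated numerically; the equations are the textbook form rather than the thesis' exact model, and every parameter value is an assumed order-of-magnitude figure for illustration only.

```python
# Sketch: textbook optical-injection-locking rate equations for photon number S,
# phase offset phi (slave relative to master) and carrier number N. All
# parameter values are assumed, illustrative figures, not measured ones.
import numpy as np
from scipy.integrate import solve_ivp

g, N_tr = 1e4, 1.0e8         # differential gain (1/s per carrier), transparency carrier number
tau_p, tau_n = 2e-12, 2e-9   # photon and carrier lifetimes (s)
alpha, k_c = 3.0, 1e11       # linewidth enhancement factor, coupling rate (1/s)
pump = 1e17                  # pump rate (carriers/s)
S_inj, d_omega = 1e3, 0.0    # injected photon number, angular frequency detuning (rad/s)

def rates(t, y):
    S, phi, N = y
    net_gain = g * (N - N_tr) - 1.0 / tau_p
    dS = net_gain * S + 2 * k_c * np.sqrt(S_inj * S) * np.cos(phi)
    dphi = 0.5 * alpha * net_gain - d_omega - k_c * np.sqrt(S_inj / S) * np.sin(phi)
    dN = pump - N / tau_n - g * (N - N_tr) * S
    return [dS, dphi, dN]

sol = solve_ivp(rates, (0.0, 5e-9), [5e4, 0.0, 1.5e8], method="LSODA", max_step=1e-12)
print("final photon number and phase:", sol.y[0, -1], sol.y[1, -1])
```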
Abstract:
The electroencephalogram (EEG) is an important noninvasive tool used in the neonatal intensive care unit (NICU) for the neurologic evaluation of the sick newborn infant. It provides an excellent assessment of at-risk newborns and formulates a prognosis for long-term neurologic outcome. The automated analysis of neonatal EEG data in the NICU can provide valuable information to the clinician, facilitating medical intervention. The aim of this thesis is to develop a system for automatic classification of neonatal EEG, which can be divided into two main parts: (1) classification of neonatal EEG seizure from nonseizure, and (2) classification of neonatal background EEG into several grades based on the severity of the injury using atomic decomposition. Atomic decomposition techniques use redundant time-frequency dictionaries for sparse signal representations or approximations. The first novel contribution of this thesis is the development of a novel time-frequency dictionary coherent with the neonatal EEG seizure states. This dictionary was able to track the time-varying nature of the EEG signal. It was shown that by using atomic decomposition and the proposed novel dictionary, the neonatal EEG transition from nonseizure to seizure states could be detected efficiently. The second novel contribution of this thesis is the development of a neonatal seizure detection algorithm using several time-frequency features from the proposed novel dictionary. It was shown that the time-frequency features obtained from the atoms in the novel dictionary improved the seizure detection accuracy when compared to that obtained from the raw EEG signal. With the assistance of a supervised multiclass SVM classifier and several time-frequency features, several methods to automatically grade EEG were explored. In summary, the novel techniques proposed in this thesis contribute to the application of advanced signal processing techniques for automatic assessment of neonatal EEG recordings.
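A minimal sketch of a time-frequency-features-plus-SVM pipeline of the kind described above, using an ordinary spectrogram instead of the thesis' atomic decomposition over a novel dictionary; the sampling rate, epoch length and simulated data are placeholders.

```python
# Sketch (not the thesis' atomic-decomposition dictionary): generic
# time-frequency features from spectrograms of short EEG epochs feeding an
# SVM, to illustrate the feature-extraction-plus-classifier pipeline.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

fs = 256  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)

def tf_features(epoch):
    f, t, Sxx = spectrogram(epoch, fs=fs, nperseg=64)
    band = Sxx[(f >= 0.5) & (f <= 30)]           # keep the clinically relevant band
    return np.log(band.mean(axis=1) + 1e-12)     # mean log power per frequency bin

# Simulated 8-second epochs; labels 0 = nonseizure, 1 = seizure (placeholders).
epochs = rng.normal(size=(60, 8 * fs))
labels = rng.integers(0, 2, size=60)
X = np.array([tf_features(e) for e in epochs])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```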
Abstract:
OBJECTIVE: To investigate the value of serum anti-tissue transglutaminase IgA antibodies (IgA-TTG) and IgA anti-endomysial antibodies (IgA-EMA) in the diagnosis of coeliac disease in cohorts from different geographical areas in Europe. The setting allowed a further comparison between the antibody results and the conventional small-intestinal histology. METHODS: A total of 144 cases with coeliac disease [median age 19.5 years (range 0.9-81.4)] and 127 disease controls [median age 29.2 years (range 0.5-79.0)] were recruited, on the basis of biopsy, from 13 centres in nine countries. All biopsy specimens were re-evaluated and classified blindly a second time by two investigators. IgA-TTG were determined by ELISA with human recombinant antigen and IgA-EMA by an immunofluorescence test with human umbilical cord as antigen. RESULTS: The quality of the biopsy specimens was not acceptable in 29 (10.7%) of 271 cases and a reliable judgement could not be made, mainly due to poor orientation of the samples. The primary clinical diagnosis and the second classification of the biopsy specimens were divergent in nine cases, and one patient was initially enrolled in the wrong group. Thus, 126 coeliac patients and 106 controls, verified by biopsy, remained for final analysis. The sensitivity of IgA-TTG was 94% and that of IgA-EMA 89%; the specificities were 99% and 98%, respectively. CONCLUSIONS: Serum IgA-TTG measurement is effective and at least as good as IgA-EMA in the identification of coeliac disease. Due to the high percentage of poor histological specimens, the diagnosis of coeliac disease should not depend on biopsy alone; the clinical picture and serology should also be considered.
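A minimal sketch of how the reported sensitivity and specificity follow from a 2x2 diagnostic table; the counts are illustrative values consistent with the 126 biopsy-verified coeliac patients and 106 controls, not the study's raw data.

```python
# Sketch: diagnostic sensitivity and specificity from a 2x2 table, as reported
# for IgA-TTG and IgA-EMA (counts below are illustrative, not the study's data).
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # proportion of biopsy-verified coeliac cases detected
    specificity = tn / (tn + fp)  # proportion of controls correctly ruled out
    return sensitivity, specificity

# Example roughly consistent with 126 patients, 106 controls and ~94%/99%:
print(sens_spec(tp=118, fn=8, tn=105, fp=1))
```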