838 results for Mesh generation from image data
Abstract:
This thesis investigates the problem of estimating the three-dimensional structure of a scene from a sequence of images. Structure information is recovered from images continuously using shading, motion or other visual mechanisms. A Kalman filter represents structure in a dense depth map. With each new image, the filter first updates the current depth map by a minimum variance estimate that best fits the new image data and the previous estimate. Then the structure estimate is predicted for the next time step by a transformation that accounts for relative camera motion. Experimental evaluation shows the significant improvement in quality and computation time that can be achieved using this technique.
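The update/predict cycle described in this abstract can be pictured per pixel. Below is a minimal sketch, assuming a scalar noise model at each pixel and an identity warp for the motion-compensating transformation; the function names and constants are illustrative, not the thesis's implementation:

```python
import numpy as np

def kalman_depth_update(depth, var, meas, meas_var):
    """Minimum-variance fusion of the current depth map with a new
    per-pixel depth measurement (a scalar Kalman update at each pixel)."""
    gain = var / (var + meas_var)              # per-pixel Kalman gain
    return depth + gain * (meas - depth), (1.0 - gain) * var

def kalman_depth_predict(depth, var, warp, process_var):
    """Predict for the next frame: warp the map according to the known
    relative camera motion, then inflate the variance."""
    return warp(depth), warp(var) + process_var

# Toy usage on a 4x4 depth map with an identity warp.
depth, var = np.full((4, 4), 2.0), np.full((4, 4), 1.0)
meas = depth + np.random.normal(0.0, 0.1, depth.shape)
depth, var = kalman_depth_update(depth, var, meas, meas_var=0.01)
depth, var = kalman_depth_predict(depth, var, warp=lambda m: m, process_var=0.001)
```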
Abstract:
M. Galea and Q. Shen. Fuzzy rules from ant-inspired computation. Proceedings of the 13th International Conference on Fuzzy Systems, pages 1691-1696, 2004.
Abstract:
A probabilistic, nonlinear supervised learning model is proposed: the Specialized Mappings Architecture (SMA). The SMA employs a set of several forward mapping functions that are estimated automatically from training data. Each specialized function maps certain domains of the input space (e.g., image features) onto the output space (e.g., articulated body parameters). The SMA can model ambiguous, one-to-many mappings that may yield multiple valid output hypotheses. Once learned, the mapping functions generate a set of output hypotheses for a given input via a statistical inference procedure. The SMA inference procedure incorporates an inverse mapping, or feedback function, in evaluating the likelihood of each hypothesis. Possible feedback functions include computer graphics rendering routines that can generate images for given hypotheses. The SMA employs a variant of the Expectation-Maximization algorithm for simultaneous learning of the specialized domains along with the mapping functions, together with approximate strategies for inference. The framework is demonstrated in a computer vision system that can estimate the articulated pose parameters of a human's body or hands, given silhouettes from a single image. The accuracy and stability of the SMA are also tested using synthetic images of human bodies and hands, where ground truth is known.
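As a rough illustration of the inference procedure, the sketch below assumes linear specialists, a fixed linear feedback function standing in for a renderer, and a squared-error log-likelihood; all names and dimensions are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_OUT, K = 8, 5, 3        # feature dim, pose dim, number of specialists

# Forward mapping functions: random linear maps standing in for the
# specialists that would be learned from training data.
W = [rng.normal(size=(D_OUT, D_IN)) for _ in range(K)]
# Hypothetical feedback function (e.g., a renderer) as a fixed linear map
# from the output space back to the input (feature) space.
B = rng.normal(size=(D_IN, D_OUT))

def sma_infer(x):
    """Generate one hypothesis per specialist, then score each by how well
    the feedback function reproduces the observed input features."""
    hyps = [Wk @ x for Wk in W]
    scores = [-np.sum((B @ h - x) ** 2) for h in hyps]  # log-likelihood up to a constant
    return hyps, scores

x = rng.normal(size=D_IN)
hyps, scores = sma_infer(x)
best = hyps[int(np.argmax(scores))]   # or keep the full ranked set of hypotheses
```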
Abstract:
In a recent paper, Structural Analysis of Network Traffic Flows, we analyzed the set of origin-destination (OD) traffic flows from the Sprint-Europe and Abilene backbone networks. This report presents the complete set of results from analyzing data from both networks. The results in this report are specific to the Sprint-1 and Abilene datasets studied in the above paper. The following results are presented here, for each of the Sprint-1 and Abilene datasets in turn: (1) the rows of the principal matrix (V); (2) the set of eigenflows; (3) the classification of the eigenflows.
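For context, the "principal matrix (V)" and "eigenflows" referenced above come from a PCA/SVD decomposition of the measured traffic matrix. A minimal sketch with invented dimensions, not the report's actual data or code:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((100, 20))               # 100 time bins x 20 OD flows (toy data)
Xc = X - X.mean(axis=0)                 # center each flow's time series
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt.T                                # rows of V: per-flow weights (the "principal matrix")
eigenflows = U * s                      # temporal patterns, one column per component
```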
Abstract:
Routing protocols in wireless sensor networks (WSNs) face two main challenges. First, the challenging environments in which WSNs are deployed negatively affect the quality of the routing process; therefore, routing protocols for WSNs should recognize and react to node failures and packet losses. Second, sensor nodes are battery-powered, which makes power a scarce resource, so routing protocols should optimize power consumption to prolong the lifetime of the WSN. In this paper, we present a new adaptive routing protocol for WSNs, which we call M^2RC. M^2RC has two phases: a mesh establishment phase and a data forwarding phase. In the first phase, M^2RC establishes the routing state needed to enable multipath data forwarding. In the second phase, M^2RC forwards data packets from the source to the sink. Targeting hop-by-hop reliability, an M^2RC forwarding node waits for an acknowledgement (ACK) that its packets were correctly received at the next-hop neighbor. Based on this feedback, an M^2RC node applies multiplicative-increase/additive-decrease (MIAD) control to the number of neighbors targeted by its packet broadcast. We simulated M^2RC in the ns-2 simulator and compared it to the GRAB, Max-power, and Min-power routing schemes. Our simulations show that M^2RC achieves the highest throughput while consuming 10-30% less power per delivered report in scenarios where a given number of nodes fail unexpectedly.
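The MIAD rule described above might look as follows in outline; the multiplicative factor, decrement, and cap are illustrative assumptions rather than M^2RC's published constants:

```python
def miad_update(n_targets, ack_received, max_neighbors, mult=2.0, dec=1):
    """MIAD control of broadcast width: widen multiplicatively after a
    missed ACK (add redundancy), narrow additively after a success."""
    if ack_received:
        return max(1, n_targets - dec)                     # additive decrease
    return min(max_neighbors, int(n_targets * mult))       # multiplicative increase

# Toy usage: a node reacting to one loss, then one successful delivery.
n = 3
n = miad_update(n, ack_received=False, max_neighbors=8)    # loss -> 6 targets
n = miad_update(n, ack_received=True, max_neighbors=8)     # ACK  -> 5 targets
```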
Abstract:
A fundamental task of vision systems is to infer the state of the world given some form of visual observations. From a computational perspective, this often involves facing an ill-posed problem; e.g., information is lost via projection of the 3D world into a 2D image. Solution of an ill-posed problem requires additional information, usually provided as a model of the underlying process. It is important that the model be both computationally feasible and theoretically well-founded. In this thesis, a probabilistic, nonlinear supervised computational learning model is proposed: the Specialized Mappings Architecture (SMA). The SMA framework is demonstrated in a computer vision system that can estimate the articulated pose parameters of a human body or human hands, given images obtained via one or more uncalibrated cameras. The SMA consists of several specialized forward mapping functions that are estimated automatically from training data, and a possibly known feedback function. Each specialized function maps certain domains of the input space (e.g., image features) onto the output space (e.g., articulated body parameters). A probabilistic model for the architecture is first formalized. Solutions to key algorithmic problems are then derived: simultaneous learning of the specialized domains along with the mapping functions, as well as performing inference given inputs and a feedback function. The SMA employs a variant of the Expectation-Maximization algorithm and approximate inference. The approach allows the use of alternative conditional independence assumptions for learning and inference, which are derived from a forward model and a feedback model. Experimental validation of the proposed approach is conducted in the task of estimating articulated body pose from image silhouettes. Accuracy and stability of the SMA framework are tested using artificial data sets, as well as synthetic and real video sequences of human bodies and hands.
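One way to picture the simultaneous learning of specialized domains and mapping functions is a hard-assignment EM-style loop; the linear least-squares specialists below are an illustrative simplification of the SMA's actual algorithm, with invented data and dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))           # input features
Y = rng.normal(size=(200, 5))           # output (pose) parameters
K = 3                                   # number of specialists
W = [rng.normal(size=(5, 8)) for _ in range(K)]

for _ in range(10):
    # E-step: assign each training pair to the specialist predicting it best.
    err = np.stack([np.sum((Y - X @ Wk.T) ** 2, axis=1) for Wk in W])
    assign = err.argmin(axis=0)
    # M-step: refit each specialist on its own domain by least squares.
    for k in range(K):
        idx = assign == k
        if idx.sum() >= 8:              # need enough samples for a stable fit
            W[k] = np.linalg.lstsq(X[idx], Y[idx], rcond=None)[0].T
```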
Abstract:
snBench is a platform on which novice users compose and deploy distributed Sense and Respond programs for simultaneous execution on a shared, distributed infrastructure. It is a natural imperative that we be able to (1) verify the safety/correctness of newly submitted tasks and (2) derive the resource requirements for these tasks such that correct allocation may occur. To achieve these goals we have established a multi-dimensional sized type system for our functional-style Domain Specific Language (DSL) called Sensor Task Execution Plan (STEP). In such a type system, data types are annotated with a vector of size attributes (e.g., upper and lower size bounds). Tracking multiple size aspects proves essential in a system in which Images are manipulated as a first-class data type, as image manipulation functions may have specific minimum and/or maximum resolution restrictions on the input they can correctly process. Through static analysis of STEP instances we not only verify basic type safety and establish upper computational resource bounds (i.e., time and space), but we also derive and solve data and resource sizing constraints (e.g., Image resolution, camera capabilities) from the implicit constraints embedded in program instances. In fact, the static methods presented here have benefits beyond their application to Image data, and may be extended to other data types that require tracking multiple dimensions (e.g., image "quality", video frame rate or aspect ratio, audio sampling rate). In this paper we present the syntax and semantics of our functional language, our type system that derives costs and resource/data constraints, and (through both formalism and specific details of our implementation) concrete examples of how the constraints and sizing information are used in practice.
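A toy rendition of the size-annotation idea: each Image type carries lower and upper bounds per dimension, and static checking intersects the bounds a producer guarantees with those a consumer requires. The names (ImageBounds, compose_check) are hypothetical, not STEP's syntax:

```python
from dataclasses import dataclass

@dataclass
class ImageBounds:
    min_w: int
    max_w: int
    min_h: int
    max_h: int

def compose_check(produced: ImageBounds, required: ImageBounds) -> ImageBounds:
    """Intersect producer guarantees with consumer requirements; raise if
    the constraint set is unsatisfiable (a static type error)."""
    lo_w, hi_w = max(produced.min_w, required.min_w), min(produced.max_w, required.max_w)
    lo_h, hi_h = max(produced.min_h, required.min_h), min(produced.max_h, required.max_h)
    if lo_w > hi_w or lo_h > hi_h:
        raise TypeError("image size constraints unsatisfiable")
    return ImageBounds(lo_w, hi_w, lo_h, hi_h)

# E.g., a camera producing 320-1280 px wide frames feeding a filter that
# needs at least 640 px width: the solved bound is 640-1280.
print(compose_check(ImageBounds(320, 1280, 240, 960), ImageBounds(640, 4096, 480, 4096)))
```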
Abstract:
A system for recovering 3D hand pose from monocular color sequences is proposed. The system employs a non-linear supervised learning framework, the specialized mappings architecture (SMA), to map image features to likely 3D hand poses. The SMA's fundamental components are a set of specialized forward mapping functions, and a single feedback matching function. The forward functions are estimated directly from training data, which in our case are examples of hand joint configurations and their corresponding visual features. The joint angle data in the training set is obtained via a CyberGlove, a glove with 22 sensors that monitor the angular motions of the palm and fingers. In training, the visual features are generated using a computer graphics module that renders the hand from arbitrary viewpoints given the 22 joint angles. We test our system both on synthetic sequences and on sequences taken with a color camera. The system automatically detects and tracks both hands of the user, calculates the appropriate features, and estimates the 3D hand joint angles from those features. Results are encouraging given the complexity of the task.
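The feedback matching step might be sketched as follows, with a trivial stand-in for the computer graphics module that renders a silhouette from the 22 joint angles; everything here is illustrative, not the paper's renderer or features:

```python
import numpy as np

def render_silhouette(joint_angles):
    """Hypothetical stand-in for the graphics module: maps 22 joint angles
    to a binary silhouette (here, a box whose size tracks the mean angle)."""
    size = int(8 + 16 * (np.mean(joint_angles) % 1.0))
    img = np.zeros((32, 32), dtype=bool)
    img[:size, :size] = True
    return img

def best_pose(candidates, observed):
    """Score each candidate joint configuration by silhouette disagreement
    (XOR pixel count) against the observed silhouette; return the best."""
    scores = [np.logical_xor(render_silhouette(c), observed).sum() for c in candidates]
    return candidates[int(np.argmin(scores))]

# Toy usage: two candidate 22-angle vectors scored against an observation.
cands = [np.zeros(22), np.ones(22) * 0.4]
obs = render_silhouette(np.ones(22) * 0.4)
pose = best_pose(cands, obs)            # selects the second candidate
```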
Abstract:
Spotting patterns of interest in an input signal is a very useful task in many different fields including medicine, bioinformatics, economics, speech recognition and computer vision. Example instances of this problem include spotting an object of interest in an image (e.g., a tumor), a pattern of interest in a time-varying signal (e.g., audio analysis), or an object of interest moving in a specific way (e.g., a human's body gesture). Traditional spotting methods, which are based on Dynamic Time Warping or hidden Markov models, use some variant of dynamic programming to register the pattern and the input while accounting for temporal variation between them. At the same time, those methods often suffer from several shortcomings: they may give meaningless solutions when input observations are unreliable or ambiguous, they require a high-complexity search across the whole input signal, and they may give incorrect solutions if some patterns appear as smaller parts within other patterns. In this thesis, we develop a framework that addresses these three problems, and evaluate the framework's performance in spotting and recognizing hand gestures in video. The first contribution is a spatiotemporal matching algorithm that extends the dynamic programming formulation to accommodate multiple candidate hand detections in every video frame. The algorithm finds the best alignment between the gesture model and the input, and simultaneously locates the best candidate hand detection in every frame. This allows a gesture to be recognized even when the hand location is highly ambiguous. The second contribution is a pruning method that uses model-specific classifiers to reject dynamic programming hypotheses with a poor match between the input and model. Pruning improves the efficiency of the spatiotemporal matching algorithm, and in some cases may improve the recognition accuracy. The pruning classifiers are learned from training data, and cross-validation is used to reduce the chance of overpruning. The third contribution is a subgesture reasoning process that models the fact that some gesture models can falsely match parts of other, longer gestures. By integrating subgesture reasoning, the spotting algorithm can avoid the premature detection of a subgesture when the longer gesture is actually being performed. Subgesture relations between pairs of gestures are automatically learned from training data. The performance of the approach is evaluated on two challenging video datasets: hand-signed digits gestured by users wearing short-sleeved shirts, in front of a cluttered background, and American Sign Language (ASL) utterances gestured by ASL native signers. The experiments demonstrate that the proposed method is more accurate and efficient than competing approaches. The proposed approach can be applied generally to alignment or search problems with multiple input observations that use dynamic programming to find a solution.
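The first contribution can be sketched as a DTW-style dynamic program whose state ranges over candidate detections as well as frames; the transition set and costs below are simplified relative to the thesis:

```python
import numpy as np

def spot_gesture(model, candidates, dist):
    """model: list of M model-frame feature vectors.
    candidates: list of T lists, one feature vector per candidate hand
    detection in each input frame. Returns the best matching cost."""
    M, T = len(model), len(candidates)
    INF = float("inf")
    # D[i][j][k]: best cost of aligning model[:i+1] with the input, ending
    # at input frame j using candidate detection k of that frame.
    D = [[[INF] * len(candidates[j]) for j in range(T)] for _ in range(M)]
    for j in range(T):                          # the gesture may start at any frame
        for k, c in enumerate(candidates[j]):
            D[0][j][k] = dist(model[0], c)
    for i in range(1, M):
        for j in range(1, T):
            for k, c in enumerate(candidates[j]):
                best_prev = min(min(D[i - 1][j - 1], default=INF),  # advance model state
                                min(D[i][j - 1], default=INF))      # dwell on model state
                D[i][j][k] = best_prev + dist(model[i], c)
    # A complete match ends at the last model state, in any frame/candidate.
    return min((min(row, default=INF) for row in D[M - 1]), default=INF)

# Toy usage: 1-D features, two candidate detections per frame (one is noise).
model = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
cands = [[np.array([0.1]), np.array([9.0])],
         [np.array([1.2]), np.array([8.0])],
         [np.array([1.9]), np.array([7.0])]]
print(spot_gesture(model, cands, lambda a, b: float(np.sum((a - b) ** 2))))
```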
Abstract:
Colour is everywhere in our daily lives and affects things like our mood, yet we rarely take notice of it. One method of capturing and analysing the predominant colours that we encounter is through visual lifelogging devices such as the SenseCam. However, an issue with these devices is the privacy concern raised by capturing image-level detail. In this work we therefore demonstrate a hardware prototype wearable camera that captures only one pixel, representing the dominant colour in front of the user, thus circumventing the privacy concerns raised in relation to lifelogging. To evaluate whether capturing only the dominant colour would be sufficient, we report on a simulation carried out on 1.2 million SenseCam images captured by a group of 20 individuals. We compare the dominant colours that different groups of people are exposed to and show that useful inferences can be made from this data. We believe our prototype may be valuable in future experiments that correlate colour exposure with an individual's mood.
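A plausible way to simulate the one-pixel capture on stored images is to quantise colours coarsely and take the mode; the 4-bit-per-channel histogram below is an illustrative choice, not necessarily the paper's method:

```python
import numpy as np

def dominant_colour(img):
    """img: HxWx3 uint8 array; returns the modal colour after quantising
    each channel to 16 levels (4 bits per channel)."""
    q = (img.reshape(-1, 3) >> 4).astype(int)
    codes = (q[:, 0] << 8) | (q[:, 1] << 4) | q[:, 2]   # 12-bit colour code
    mode = int(np.bincount(codes).argmax())
    return np.array([(mode >> 8) & 15, (mode >> 4) & 15, mode & 15]) << 4

# Toy usage: a mostly-green frame reduces to a green "pixel".
frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[..., 1] = 200
print(dominant_colour(frame))   # -> [  0 192   0]
```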
Abstract:
In the field of embedded systems design, coprocessors play an important role as a component to increase performance. Many embedded systems are built around a small General Purpose Processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design. The GPP can then offload the computationally intensive operation to the coprocessor, thus increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms that are found in many cryptographic protocols. Their performance is then analysed on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through the use of instruction set extension of a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of an ECC-based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. FPGA implementations of recent hash function designs from the SHA-3 competition are discussed, and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data, for keys or nonces. This requires a True Random Number Generator (TRNG) to be present in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed; a novel aspect of this design is the secure method in which private-key data is handled.
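As a generic example of TRNG post-processing of the kind mentioned above (the abstract does not say which scheme the thesis uses), the classic von Neumann corrector debiases a raw bitstream:

```python
def von_neumann(bits):
    """Debias a raw bitstream: examine non-overlapping pairs and map
    01 -> 0, 10 -> 1, discarding 00 and 11 pairs entirely."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

print(von_neumann([0, 1, 1, 1, 1, 0, 0, 0]))   # -> [0, 1]
```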
Abstract:
This thesis examines the experiences of the biological children of foster carers. In particular, it explores their experiences in relation to inclusion, consultation and decision-making. The study also examines the support and training needs of birth children in foster families. Using a qualitative methodology, in-depth semi-structured interviews were conducted with fifteen birth children of foster carers aged between 18 and 30 years. The research findings show that, for the majority of birth children, fostering was overall a positive experience which helped them develop into caring and non-judgemental individuals. However, from the data collected in this study, it is clear that fostering also brings a range of challenges for birth children in foster families, such as managing feelings of loss, grief, jealousy and guilt when foster children leave. Birth children were reluctant to discuss these issues with their parents and often did not approach fostering social workers, as they did not have a meaningful relationship within which to discuss their concerns. The findings also demonstrate that birth children undertake a great deal of emotional work in supporting their parents, birth siblings and foster siblings. Despite the important role played by birth children in the fostering process, this contribution often goes unrecognised and unacknowledged by fostering professionals and agencies, with birth children not included in or consulted about foster care decisions that affect them. It is argued here that birth children are viewed by foster care professionals and agencies from a deficit-based perspective. However, this study contends that it is not just foster parents who are involved in the foster care process, but the entire foster family. The findings of this study show that birth children are competent social actors, capable of making valuable contributions to foster care decisions that affect their lives and those of their families.
Abstract:
Background: Spirituality is fundamental to all human beings, existing within a person and developing until death. This research sought to operationalise spirituality in a sample of individuals with chronic illness. A review of the conceptual literature identified three dimensions of spirituality: connectedness, transcendence, and meaning in life. A review of the empirical literature identified one instrument that measures the three dimensions together. Yet, recent appraisals of this instrument highlighted issues with item formulation and limited evidence of reliability and validity. Aim: The aim of this research was to develop a theoretically grounded instrument to measure spirituality: the Spirituality Instrument-27 (SpI-27). A secondary aim was to psychometrically evaluate this instrument in a sample of individuals with chronic illness (n=249). Methods: A two-phase design was adopted. Phase one consisted of the development of the SpI-27, based on item generation from a concept analysis, a literature review and an instrument appraisal. The second phase established the psychometric properties of the instrument and included: a qualitative descriptive design to establish content validity; a pilot study to evaluate the mode of administration; and a descriptive correlational design to assess the instrument's reliability and validity. Data were analysed using SPSS (Version 18). Results: Exploratory factor analysis yielded a final five-factor solution with 27 items. These five factors were labelled: Connectedness with Others, Self-Transcendence, Self-Cognisance, Conservationism, and Connectedness with a Higher Power. Cronbach's alpha coefficients ranged from 0.823 to 0.911 for the five factors, and 0.904 for the overall scale, indicating high internal consistency. Paired-sample t-tests, intra-class correlations and weighted kappa values supported the temporal stability of the instrument over 2 weeks. A significant positive correlation was found between the SpI-27 and the Spirituality Index of Well-Being, providing evidence for convergent validity. Conclusion: This research addresses a call for a theoretically grounded instrument to measure spirituality.
Abstract:
An analysis is carried out, using the prolate spheroidal wave functions, of certain regularized iterative and noniterative methods previously proposed for the achievement of object restoration (or, equivalently, spectral extrapolation) from noisy image data. The ill-posedness inherent in the problem is treated by means of a regularization parameter, and the analysis shows explicitly how the deleterious effects of the noise are then contained. The error in the object estimate is also assessed, and it is shown that the optimal choice for the regularization parameter depends on the signal-to-noise ratio. Numerical examples are used to demonstrate the performance of both unregularized and regularized procedures and also to show how, in the unregularized case, artefacts can be generated from pure noise. Finally, the relative error in the estimate is calculated as a function of the degree of superresolution demanded for reconstruction problems characterized by low space–bandwidth products.
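In Tikhonov form, a regularized restoration of this kind can be written as follows (the paper's exact functional and operator may differ):

```latex
\hat{f}_{\mu}
  = \arg\min_{f}\,\bigl\{\,\lVert Af - g \rVert^{2} + \mu \lVert f \rVert^{2}\,\bigr\}
  = \bigl( A^{*}A + \mu I \bigr)^{-1} A^{*} g ,
```

where A is the band-limiting imaging operator, g the noisy image data, and μ > 0 the regularization parameter whose optimal value, as the abstract notes, depends on the signal-to-noise ratio.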
Abstract:
SMARTFIRE, an open-architecture integrated CFD code and knowledge-based system, attempts to make fire field modeling accessible to non-experts in Computational Fluid Dynamics (CFD), such as fire fighters, architects and fire safety engineers. This is achieved by embedding expert knowledge into CFD software. This enables the 'black art' associated with CFD analysis, such as the selection of solvers, relaxation parameters, convergence criteria, time steps, and grid and boundary condition specification, to be guided by expert advice from the software. The user is, however, given the option of overriding these decisions, thus retaining ultimate control. SMARTFIRE also makes use of recent developments in CFD technology, such as unstructured meshes and group solvers, in order to make the CFD analysis more efficient. This paper describes the incorporation within SMARTFIRE of the expert fire modeling knowledge required for automatic problem setup and mesh generation, as well as the concept and use of group solvers for automatic and manual dynamic control of the CFD code.
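The embedded-expert-knowledge idea can be caricatured as a small rule base that proposes solver controls which the user may override; the rules and values below are invented examples, not SMARTFIRE's actual knowledge base:

```python
def suggest_controls(problem):
    """Rule-based defaults for CFD control parameters (invented rules)."""
    controls = {"solver": "SIMPLE", "relaxation": 0.7, "time_step": 0.5}
    if problem.get("transient"):
        controls["time_step"] = min(controls["time_step"], 0.1)
    if problem.get("cells", 0) > 500_000:
        controls["solver"] = "group"       # e.g., group solvers for large meshes
        controls["relaxation"] = 0.5
    return controls

def setup(problem, overrides=None):
    """Apply expert suggestions, then any user overrides on top."""
    controls = suggest_controls(problem)
    controls.update(overrides or {})       # the user retains ultimate control
    return controls

print(setup({"transient": True, "cells": 750_000}, overrides={"relaxation": 0.6}))
```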