207 results for Subspace Filter Diagonalization
Abstract:
Social media platforms risk polarising public opinion by employing proprietary algorithms that produce filter bubbles and echo chambers. As a result, the ability of citizens and communities to engage in robust debate in the public sphere is diminished. In response, this paper highlights the capacity of urban interfaces, such as pervasive displays, to counteract this trend by exposing citizens to the socio-cultural diversity of the city. Engagement with different ideas, networks and communities is crucial to both innovation and the functioning of democracy. We discuss examples of urban interfaces designed to play a key role in fostering this engagement. Based on an analysis of works empirically grounded in field observations and design research, we call for a theoretical framework that positions pervasive displays and other urban interfaces as civic media. We argue that when designed for more than wayfinding, advertising or television broadcasts, urban screens as civic media can rectify some of the pitfalls of social media by allowing the polarised user to break out of their filter bubble and embrace the cultural diversity and richness of the city.
Abstract:
"Hardware in the Loop" (HIL) testing is widely used in the automotive industry. The sophisticated electronic control units used for vehicle control are usually tested and evaluated using HIL simulations. HIL increases the degree of realism in testing any system, and it helps in designing the structure and control of the system under test so that it works effectively in the situations it will encounter in service. Due to the size and complexity of interaction within a power network, most research is based on pure simulation. To validate the performance of a physical generator or protection system, most testing is constrained to very simple power networks. This research, however, examines a method to test power system hardware within a complex virtual environment using the HIL concept. HIL testing of electronic control units and power system protection devices can easily be performed at signal level, but the performance of power system equipment, such as distributed generation systems, cannot be evaluated at signal level using HIL testing. HIL testing of power system equipment is termed here 'Power Network in the Loop' (PNIL). PNIL testing can only be performed at power level and requires a power amplifier that can amplify the simulation signal to the power level. A power network is divided into two parts: one part represents the Power Network Under Test (PNUT) and the other represents the rest of the complex network. The complex network is simulated in a real-time simulator (RTS) while the PNUT is connected to a Voltage Source Converter (VSC) based power amplifier. Two-way interaction between the simulator and the amplifier is performed using analog-to-digital (A/D) and digital-to-analog (D/A) converters. The power amplifier amplifies the current or voltage signal of the simulator to the power level and establishes the power-level interaction between the RTS and the PNUT. The first part of this thesis presents the design and control of a VSC-based power amplifier that can amplify a broadband voltage signal. A new Hybrid Discontinuous Control method is proposed for the amplifier, which can be used for several power system applications; its use in DSTATCOM and UPS applications is presented. The later part of the thesis reports the solution of network-in-the-loop testing with the help of this amplifier. The experimental setup for PNIL testing was built in the laboratory of Queensland University of Technology and the feasibility of PNIL testing has been evaluated in experimental studies. In the last section of this thesis a universal load with power-regenerative capability is designed; this universal load is used to test DG systems using PNIL concepts. This thesis is composed of published/submitted papers that form the chapters of the dissertation. Each paper was published or submitted during the period of candidature. Chapter 1 integrates all the papers to provide a coherent view of the wide-bandwidth switching amplifier and its use in different power system applications, especially for the solution of power system testing using PNIL.
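To make the PNIL signal exchange concrete, here is a toy sketch of the closed loop between a simulated network and a device under test, assuming a purely resistive load; the gains and function names are illustrative inventions, not taken from the thesis.

```python
# Toy PNIL loop: RTS (signal level) <-> amplifier <-> PNUT (power level).
AMP_GAIN = 50.0          # D/A reference voltage -> amplified voltage at PNUT
SENSE_GAIN = 1.0 / 50.0  # measured PNUT current -> A/D signal back to the RTS

def rts_step(v_thevenin, r_network, i_feedback):
    """One step of the simulated 'rest of the network': a Thevenin source
    whose terminal voltage sags with the current drawn by the PNUT."""
    return v_thevenin - r_network * i_feedback

def pnut(v_applied, r_load=10.0):
    """Power Network Under Test: here just a resistor drawing I = V / R."""
    return v_applied / r_load

v_signal, i_signal = 1.0, 0.0
for _ in range(20):
    v_power = AMP_GAIN * v_signal    # VSC amplifier raises signal to power level
    i_power = pnut(v_power)          # physical response of the hardware
    i_signal = SENSE_GAIN * i_power  # sensed current scaled back to signal level
    v_signal = rts_step(1.0, 2.0, i_signal)
print(f"settled terminal voltage at signal level: {v_signal:.4f}")
```

The fixed-point the loop settles to is the operating point the coupled simulated/physical network would share; a real PNIL setup closes this loop in real time through the A/D and D/A converters.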
Abstract:
Perceptual aliasing makes topological navigation a difficult task. In this paper we present a general approach for topological SLAM (simultaneous localisation and mapping) which does not require motion or odometry information but only a sequence of noisy measurements from visited places. We propose a particle filtering technique for topological SLAM which relies on a method for disambiguating places which appear indistinguishable, using neighbourhood information extracted from the sequence of observations. The algorithm aims to induce a small topological map which is consistent with the observations and simultaneously estimate the location of the robot. The proposed approach is evaluated using a data set of sonar measurements from an indoor environment which contains several similar places. It is demonstrated that our approach is capable of dealing with severe ambiguities and that it infers a map which is small in terms of vertices and consistent with the sequence of observations.
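A minimal sketch of the localisation half of such a particle filter is given below, assuming a known toy topological map and per-place observation likelihoods; the paper's map-inference and neighbourhood-based disambiguation machinery is omitted, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # toy topological map
obs_lik = np.array([                               # p(measurement | place)
    [0.7, 0.1, 0.1, 0.1],    # place 0
    [0.1, 0.7, 0.1, 0.1],    # place 1
    [0.1, 0.1, 0.7, 0.1],    # place 2
    [0.1, 0.7, 0.1, 0.1],    # place 3 looks like place 1: perceptual aliasing
])

n = 500
particles = rng.integers(0, 4, size=n)             # hypothesised current place

def pf_step(particles, z):
    # Predict: each particle moves to a uniformly chosen neighbouring place.
    particles = np.array([rng.choice(adj[p]) for p in particles])
    # Weight by the likelihood of the noisy measurement z, then resample.
    w = obs_lik[particles, z]
    w /= w.sum()
    return particles[rng.choice(n, size=n, p=w)]

for z in [1, 2, 3, 0, 1]:                          # a sequence of observations
    particles = pf_step(particles, z)
print("posterior over places:", np.bincount(particles, minlength=4) / n)
```

Because places 1 and 3 emit near-identical measurements, a single observation cannot separate them; it is the sequence of neighbouring observations that concentrates the particle set, which is the intuition behind the paper's neighbourhood-based disambiguation.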
Abstract:
The validation of Computed Tomography (CT) based 3D models is an integral part of studies involving 3D models of bones. This is of particular importance when such models are used for Finite Element studies. The validation of 3D models typically involves the generation of a reference model representing the bone's outer surface. Several different devices have been utilised for digitising a bone's outer surface, such as mechanical 3D digitising arms, mechanical 3D contact scanners, electro-magnetic tracking devices and 3D laser scanners. However, none of these devices is capable of digitising a bone's internal surfaces, such as the medullary canal of a long bone. Therefore, this study investigated the use of a 3D contact scanner, in conjunction with a microCT scanner, for generating a reference standard for validating the internal and external surfaces of a CT based 3D model of an ovine femur. One fresh ovine limb was scanned using a clinical CT scanner (Philips Brilliance 64) with a pixel size of 0.4 mm² and slice spacing of 0.5 mm. The limb was then dissected to obtain the soft-tissue-free bone, while care was taken to protect the bone's surface. A desktop mechanical 3D contact scanner (Roland DG Corporation, MDX 20, Japan) was used to digitise the surface of the denuded bone. The scanner was used with a resolution of 0.3 × 0.3 × 0.025 mm. The digitised surfaces were reconstructed into a 3D model using reverse engineering techniques in Rapidform (Inus Technology, Korea). After digitisation, the distal and proximal parts of the bone were removed such that the shaft could be scanned with a microCT (µCT40, Scanco Medical, Switzerland) scanner. The shaft, with the bone marrow removed, was immersed in water and scanned with a voxel size of 0.03 mm³. The bone contours were extracted from the image data utilising the Canny edge filter in Matlab (The MathWorks). The extracted bone contours were reconstructed into 3D models using Amira 5.1 (Visage Imaging, Germany). The 3D models of the bone's outer surface reconstructed from CT and microCT data were compared against the 3D model generated using the contact scanner. The 3D model of the inner canal reconstructed from the microCT data was compared against the 3D models reconstructed from the clinical CT scanner data. The disparity between the surface geometries of two models was calculated in Rapidform and recorded as average distance with standard deviation. The comparison of the 3D model of the whole bone generated from the clinical CT data with the reference model gave a mean error of 0.19±0.16 mm, with the shaft more accurate (0.08±0.06 mm) than the proximal (0.26±0.18 mm) and distal (0.22±0.16 mm) parts. The comparison between the outer 3D model generated from the microCT data and the contact scanner model gave a mean error of 0.10±0.03 mm, indicating that the microCT-generated models are sufficiently accurate for validation of 3D models generated from other methods. The comparison of the inner models generated from microCT data with those from clinical CT data gave an error of 0.09±0.07 mm. Utilising a mechanical contact scanner in conjunction with a microCT scanner enabled validation of the outer surface of a CT based 3D model of an ovine femur as well as the surface of the model's medullary canal.
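The contour-extraction step can be sketched as follows. The study used Matlab's Canny filter; here scikit-image's Canny is shown on a synthetic slice (a bright ring standing in for cortical bone around the medullary canal), and all parameters are illustrative rather than taken from the study.

```python
import numpy as np
from skimage import feature, measure

# Synthetic CT slice: a ring-shaped "cortex" plus acquisition noise.
y, x = np.mgrid[0:200, 0:200]
r = np.hypot(y - 100, x - 100)
slice_img = np.exp(-((r - 40.0) / 8.0) ** 2)
slice_img += 0.05 * np.random.default_rng(0).standard_normal(slice_img.shape)

# Canny picks out the sharp inner (canal) and outer (periosteal) boundaries.
edges = feature.canny(slice_img, sigma=2.0)

# Trace edge pixels into ordered contour polylines; stacking such contours
# slice by slice is what gets reconstructed into the 3D surface models.
contours = measure.find_contours(edges.astype(float), level=0.5)
print(f"{len(contours)} contours found in the synthetic slice")
```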
Abstract:
This paper proposes a novel relative entropy rate (RER) based approach for multiple HMM (MHMM) approximation of a class of discrete-time uncertain processes. Under different uncertainty assumptions, the model design problem is posed either as a min-max optimisation problem or a stochastic minimisation problem on the RER between joint laws describing the state and output processes (rather than the more usual RER between output processes). A suitable filter is proposed for which performance results are established that bound conditional mean estimation performance and show that estimation performance improves as the RER is reduced. These filter consistency and convergence bounds are the first results characterising multiple HMM approximation performance, and they suggest that joint RER concepts provide a useful model selection criterion. The proposed model design process and MHMM filter are demonstrated on an important dim-target detection problem in image processing.
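For readers unfamiliar with the building block, below is a minimal sketch of the normalised HMM filter that a multiple-HMM scheme runs per candidate model; the RER-based model design itself is not shown, and the transition and emission matrices are illustrative.

```python
import numpy as np

def hmm_filter_step(pi, A, B, y):
    """One recursion of the normalised HMM filter.
    pi: current state posterior, A[i, j] = p(x'=j | x=i), B[j, y] = p(y | x'=j).
    Returns the updated posterior and the one-step predictive likelihood."""
    pred = pi @ A                  # time update
    unnorm = pred * B[:, y]        # measurement update
    lik = unnorm.sum()
    return unnorm / lik, lik

# Two candidate HMMs approximating the uncertain process.
models = [
    (np.array([[0.9, 0.1], [0.1, 0.9]]), np.array([[0.8, 0.2], [0.2, 0.8]])),
    (np.array([[0.5, 0.5], [0.5, 0.5]]), np.array([[0.6, 0.4], [0.4, 0.6]])),
]
posts = [np.array([0.5, 0.5]) for _ in models]
log_ev = [0.0, 0.0]
for y in [0, 0, 1, 0, 0]:          # observed symbol sequence
    for k, (A, B) in enumerate(models):
        posts[k], lik = hmm_filter_step(posts[k], A, B, y)
        log_ev[k] += np.log(lik)
print("log-evidence per model:", log_ev)
```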
Abstract:
In conventional fabrication of ceramic separation membranes, particulate sols are applied onto porous supports. Major structural deficiencies under this approach are pin-holes and cracks, and the dramatic losses of flux when pore sizes are reduced to enhance selectivity. We have overcome these structural deficiencies by constructing a hierarchically structured separation layer on a porous substrate using larger titanate nanofibers and smaller boehmite nanofibers. This yields a radical change in membrane texture. The resulting membranes effectively filter out species larger than 60 nm at flow rates orders of magnitude greater than those of conventional membranes. This reveals a new direction in membrane fabrication.
Abstract:
Purpose: In this research we examined, by means of case studies, the mechanisms by which relationships can be managed and by which communication and cooperation can be enhanced in sustainable supply chains. The research was predicated on the contention that the development of a sustainable supply chain depends, in part, on the transfer of knowledge and capabilities from the larger players in the supply chain. Design/Methodology/Approach: The research adopted a triangulated approach in which quantitative data were collected by questionnaire, interviews were conducted to explore and enrich the quantitative data, and case studies were undertaken in order to illustrate and validate the findings. Handy's (1985) view of organisational culture, Allen & Meyer's (1990) concepts of organisational commitment and Van de Ven & Ferry's (1980) measures of organisational structuring have been combined into a model to test and explain how collaborative mechanisms can affect supply chain sustainability. Findings: It has been shown that the degree of match and mismatch between organisational culture and structure has an impact on staff's commitment level. A sustainable supply chain depends on convergence, that is, the match between organisational structuring, organisational culture and organisational commitment. Research limitations/implications: The study is a proof of concept and three case studies have been used to illustrate the nature of the model developed. Further testing and refinement of the model in practice should be the next step in this research. Practical implications: The concept of relationship management needs to filter down to all levels in the supply chain if participants are to retain commitment and buy-in to the relationship. A sustainable supply chain requires proactive relationship management and the development of an appropriate organisational culture, and trust. By legitimising individuals' expectations of the type of culture which is appropriate to their company, and by empowering employees to address mismatches that may occur, a situation can be created whereby the collaborating organisations develop their competences symbiotically and so facilitate a sustainable supply chain. Originality/value: The culture/commitment/structure model developed from three separate strands of management thought has proved to be a powerful tool for analysing collaboration in supply chains and explaining how and why some supply chains are sustainable and others are not.
Abstract:
This paper describes a novel framework for facial expression recognition from still images that selects, optimizes and fuses 'salient' Gabor feature layers to recognize six universal facial expressions using a K-nearest-neighbor classifier. Recognition comparisons with the all-layer approach on the JAFFE and Cohn-Kanade (CK) databases confirm that using 'salient' Gabor feature layers with optimized sizes achieves better recognition performance and dramatically reduces computational time. Moreover, comparisons with state-of-the-art performances demonstrate the effectiveness of our approach.
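The shape of such a pipeline can be sketched as follows, with the paper's salient-layer selection and size optimisation replaced by a fixed, illustrative choice of Gabor layers and per-layer statistics; the data here is random stand-in data, not JAFFE or CK.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import gabor_kernel
from sklearn.neighbors import KNeighborsClassifier

def gabor_features(img, freqs=(0.1, 0.2), thetas=(0, np.pi / 4, np.pi / 2)):
    """Summarise each Gabor layer by two statistics of its response."""
    feats = []
    for f in freqs:
        for t in thetas:
            k = gabor_kernel(frequency=f, theta=t)
            # Response to the even-symmetric (real) part of the kernel.
            resp = ndimage.convolve(img, np.real(k), mode='wrap')
            feats.append(np.abs(resp).mean())
            feats.append(resp.var())
    return np.array(feats)

# Hypothetical data: 48x48 face crops with expression labels 0..5.
rng = np.random.default_rng(0)
X_imgs = rng.standard_normal((60, 48, 48))
y = rng.integers(0, 6, size=60)

X = np.array([gabor_features(im) for im in X_imgs])
clf = KNeighborsClassifier(n_neighbors=3).fit(X[:50], y[:50])
print("toy accuracy:", clf.score(X[50:], y[50:]))
```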
Abstract:
Traditional ceramic separation membranes, which are fabricated by applying colloidal suspensions of metal hydroxides to porous supports, tend to suffer from pinholes and cracks that seriously affect their quality. Other intrinsic problems for these membranes include dramatic losses of flux when the pore sizes are reduced to enhance selectivity and dead-end pores that make no contribution to filtration. In this work, we propose a new strategy for addressing these problems by constructing a hierarchically structured separation layer on a porous substrate using large titanate nanofibers and smaller boehmite nanofibers. The nanofibers are able to divide large voids into smaller ones without forming dead-end pores and with the minimum reduction of the total void volume. The separation layer of nanofibers has a porosity of over 70% of its volume, whereas the separation layer in conventional ceramic membranes has a porosity below 36% and inevitably includes dead-end pores that make no contribution to the flux. This radical change in membrane texture greatly enhances membrane performance. The resulting membranes were able to filter out 95.3% of 60-nm particles from a 0.01 wt % latex while maintaining a relatively high flux of between 800 and 1000 L/m²·h under a low driving pressure (20 kPa). Such flow rates are orders of magnitude greater than those of conventional membranes with equal selectivity. Moreover, the flux was stable at approximately 800 L/m²·h with a selectivity of more than 95%, even after six repeated runs of filtration and calcination. Use of different supports, either porous glass or porous alumina, had no substantial effect on the performance of the membranes; thus, it is possible to construct the membranes from a variety of supports without compromising functionality. The Darcy equation satisfactorily describes the correlation between the filtration flux and the structural parameters of the new membranes. The assembly of nanofiber meshes to combine high flux with excellent selectivity is an exciting new direction in membrane fabrication.
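For reference, the textbook form of the Darcy relation invoked in the final claim is stated below; the abstract does not give the paper's exact parameterisation, so this is the standard form only.

```latex
% Darcy equation for membrane flux:
%   J        volumetric filtration flux
%   \kappa   permeability of the separation layer
%   \Delta P driving pressure across the layer
%   \mu      dynamic viscosity of the feed
%   L        thickness of the layer
J = \frac{\kappa \, \Delta P}{\mu \, L}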
Abstract:
Surveillance networks are typically monitored by a few people viewing several monitors displaying the camera feeds, so it is very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies: to recognize an event, people and objects must be tracked, and tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, making it unnecessary for detection to be carried out separately except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, which in turn benefits from the improved performance under the uncertain conditions arising from occlusion and noise that the particle filter provides. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and a visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually.
Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore improve security in areas under surveillance.
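For context, the baseline kind of motion segmentation the thesis improves upon can be sketched as a running-average background model thresholded into a binary foreground mask. This toy version, on synthetic frames, is not the proposed hybrid segmentation/optical-flow algorithm; it only illustrates the basic idea (and is exactly the kind of model that noise and lighting changes defeat).

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, alpha, thresh = 64, 64, 0.05, 0.25

background = np.zeros((h, w))
mask = np.zeros((h, w), dtype=bool)
for t in range(100):
    frame = 0.1 * rng.standard_normal((h, w))   # static scene + sensor noise
    if t > 50:                                  # a moving bright object
        c = 20 + (t - 50) % 30
        frame[20:30, c:c + 10] += 1.0
    # Foreground = pixels far from the slowly adapting background estimate.
    mask = np.abs(frame - background) > thresh
    # Update the background only where no motion was detected.
    background = np.where(mask, background,
                          (1 - alpha) * background + alpha * frame)

print("foreground pixels in final frame:", int(mask.sum()))
```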
Abstract:
This study considers the solution of a class of linear systems related to the fractional Poisson equation (FPE) (−∇²)^(α/2) φ = g(x, y) with nonhomogeneous boundary conditions on a bounded domain. A numerical approximation to the FPE is derived using a matrix representation of the Laplacian to generate a linear system of equations with its matrix A raised to the fractional power α/2. The solution of the linear system then requires the action of the matrix function f(A) = A^(−α/2) on a vector b. For large, sparse, symmetric positive definite matrices, the Lanczos approximation generates f(A)b ≈ β₀ V_m f(T_m) e₁. This method works well when both the analytic grade of A with respect to b and the residual for the linear system are sufficiently small. Memory constraints often require restarting the Lanczos decomposition; however, this is not straightforward in the context of matrix function approximation. In this paper, we use the ideas of thick restart and adaptive preconditioning for solving linear systems to improve the convergence of the Lanczos approximation. We give an error bound for the new method and illustrate its role in solving the FPE. Numerical results are provided to gauge the performance of the proposed method relative to exact analytic solutions.
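The basic Lanczos approximation f(A)b ≈ β₀ V_m f(T_m) e₁ that the paper builds on can be sketched directly; the thick-restart and adaptive-preconditioning refinements are not shown, and the example problem below (a 1-D Laplacian with α = 1) is illustrative only.

```python
import numpy as np

def lanczos_fAb(A, b, m, f):
    """Approximate f(A) @ b for symmetric positive definite A via an
    m-step Lanczos decomposition: f(A) b ~= beta0 * V_m f(T_m) e1."""
    n = b.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)               # beta[j] couples v_j and v_{j+1}
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-14:      # invariant subspace found; stop early
                m = j + 1
                break
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    # Evaluate f on the small tridiagonal T via its eigendecomposition:
    # f(T) e1 = Q f(D) Q^T e1, and Q^T e1 is the first row of Q.
    evals, Q = np.linalg.eigh(T)
    fT_e1 = Q @ (f(evals) * Q[0, :])
    return beta0 * V[:, :m] @ fT_e1

# Example: apply f(A) = A^(-1/2) (i.e. alpha = 1) for a 1-D Laplacian.
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.random.default_rng(0).standard_normal(n)
x = lanczos_fAb(A, b, m=50, f=lambda lam: lam ** -0.5)
```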
Abstract:
Ceramic membranes are of particular interest in many industrial processes due to their ability to function under extreme conditions while maintaining their chemical and thermal stability. Major structural deficiencies of the conventional fabrication approach are pin-holes and cracks, and the dramatic losses of flux when pore sizes are reduced to enhance selectivity. We overcome these structural deficiencies by constructing a hierarchically structured separation layer on a porous substrate using larger titanate nanofibres and smaller boehmite nanofibres. This yields a radical change in membrane texture. Differences in the porous supports have no substantial influence on the texture of the resulting membranes: membranes with a nanofibre top layer spin-coated onto different porous supports have filtration pores of similar size, in the range of 10–100 nm. These membranes are able to effectively filter out species larger than 60 nm at flow rates orders of magnitude greater than those of conventional membranes. Retention can exceed 95% while a high flux of about 900 L m⁻² h⁻¹ is maintained. Calcination after spin-coating creates solid linkages between the fibres, and between the fibres and the substrate, in addition to converting the boehmite into γ-alumina nanofibres. This reveals a new direction in membrane fabrication.
Abstract:
Aims: This study investigated the effect of simulated visual impairment on the speed and accuracy of performance on a series of commonly used cognitive tests. Methods: Cognitive performance was assessed for 30 young, visually normal subjects (M = 22.0 ± 3.1 yrs) using the Digit Symbol Substitution Test (DSST), Trail Making Test (TMT) A and B, and the Stroop Colour Word Test under three visual conditions: normal vision and two levels of visually degrading filters (Vistech™) administered in random order. Distance visual acuity and contrast sensitivity were also assessed for each filter condition. Results: The visual filters, which degraded contrast sensitivity to a greater extent than visual acuity, significantly increased the time to complete (p < 0.05), but not the number of errors made on, the DSST and the TMT A and B, and affected only some components of the Stroop test. Conclusions: Reduced contrast sensitivity had a marked effect on the speed but not the accuracy of performance on commonly used cognitive tests, even in young individuals; the implications of these findings are discussed.
Abstract:
Corneal topography estimation based on the Placido disk principle relies on good precorneal tear film quality and a sufficiently wide eyelid (palpebral) aperture to avoid reflections from eyelashes. In practice, however, these conditions are not always fulfilled, resulting in missing regions, smaller corneal coverage and subsequently poorer estimates of corneal topography. Our aim was to enhance the standard operating range of a Placido disk videokeratoscope to obtain reliable corneal topography estimates in patients with poor tear film quality, such as those diagnosed with dry eye, and with narrower palpebral apertures, as in the case of Asian subjects. This was achieved by incorporating into the instrument's own topography estimation algorithm an image processing technique that comprises a polar-domain adaptive filter and a morphological closing operator. Experimental results from measurements of test surfaces and real corneas showed that incorporating the proposed technique yields better estimates of corneal topography and, in many cases, a significant increase in the estimated coverage area, making such an enhanced videokeratoscope a better tool for clinicians.
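A toy illustration of the polar-domain closing idea follows: a synthetic Placido-ring image with a missing wedge is resampled into polar coordinates, where each ring becomes a horizontal band and an angular closing bridges the gap. This is a sketch of the general technique, not the instrument's algorithm; the adaptive-filter stage is omitted and all parameters are illustrative.

```python
import numpy as np
from scipy import ndimage

# Synthetic ring image with a wedge of missing data (eyelash shadow).
h = w = 256
y, x = np.mgrid[0:h, 0:w]
r = np.hypot(y - h / 2, x - w / 2)
theta = np.arctan2(y - h / 2, x - w / 2)
rings = (np.sin(r / 4.0) > 0).astype(float)        # concentric bright rings
rings[(theta > 0.2) & (theta < 0.6)] = 0.0         # simulated occluded wedge

# Resample to polar coordinates: rows = radius, columns = angle.
n_r, n_t = 120, 360
rr = np.linspace(0, h / 2 - 1, n_r)
tt = np.linspace(-np.pi, np.pi, n_t, endpoint=False)
R, T = np.meshgrid(rr, tt, indexing="ij")
polar = ndimage.map_coordinates(
    rings, [h / 2 + R * np.sin(T), w / 2 + R * np.cos(T)], order=1)

# In the polar domain each ring is a horizontal band, so a closing with a
# wide horizontal structuring element fills the angular gap in every ring.
repaired = ndimage.grey_closing(polar, size=(1, 61))
print("dark pixels before/after closing:",
      int((polar < 0.5).sum()), int((repaired < 0.5).sum()))
```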
Abstract:
Dragon is a word-based stream cipher. It was submitted to the eSTREAM project in 2005 and has advanced to Phase 3 of the software profile. This paper discusses the Dragon cipher from three perspectives: design, security analysis and implementation. The design of the cipher incorporates a single word-based non-linear feedback shift register and a non-linear filter function with memory. This state is initialized with 128- or 256-bit key-IV pairs. Each clock of the stream cipher produces 64 bits of keystream, using simple operations on 32-bit words. This provides the cipher with a high degree of efficiency in a wide variety of environments, making it highly competitive relative to other symmetric ciphers. The components of Dragon were designed to resist all known attacks. Although the design has been open to public scrutiny for several years, the only published attacks to date are distinguishing attacks which require keystream lengths greatly exceeding the stated 2⁶⁴-bit maximum permitted keystream length for a single key-IV pair.
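To illustrate the structure named above (a word-based nonlinear feedback shift register driving a nonlinear filter with memory), here is a toy sketch. This is emphatically not Dragon: the feedback and filter functions are arbitrary stand-ins with none of Dragon's design rationale or security properties.

```python
MASK = 0xFFFFFFFF  # all arithmetic on 32-bit words

def rotl(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK

def keystream(state, n_words):
    """state: list of eight 32-bit words (standing in for a key/IV loading
    step); returns n_words words of keystream."""
    memory = 0  # the filter's memory word, carried across clocks
    out = []
    for _ in range(n_words):
        # Nonlinear feedback: mix three taps with rotation, XOR and addition.
        fb = ((rotl(state[0], 7) ^ state[3]) + rotl(state[6], 19)) & MASK
        # Nonlinear filter with memory: output depends on taps and `memory`.
        memory = (memory + (state[1] ^ rotl(state[4], 11))) & MASK
        out.append(fb ^ memory)
        state = state[1:] + [fb]  # shift the register, feed back the new word
    return out

# Example: a fixed toy state standing in for key/IV initialisation.
state = [(0x01234567 * (i + 1)) & MASK for i in range(8)]
print([hex(wd) for wd in keystream(state, 4)])
```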