615 results for Multi-features
Abstract:
During the 1980s, terms such as interagency or multi-agency cooperation, collaboration, coordination, and interaction became permanent features of both crime prevention rhetoric and government crime policy. The concept of having the government, local authorities, and the community working in partnership has characterized both left and right politics for over a decade. The U.S. National Advisory Commission on Criminal Justice Standards and Goals, Circulars 8/84 and 44/90 released by the U.K. Home Office, and the British Morgan Report, coupled with the launch of government strategies in France, the Netherlands, England and Wales, Australia, and, more recently, Belgium, New Zealand, and Canada, have all emphasized the importance of agencies working together to prevent or reduce crime. This paper draws upon recent Australian research to critically analyze multi-agency crime prevention. It suggests that agency conflicts and power struggles may be exacerbated by neo-liberal economic theory, by the politics of crime prevention management, and by policies that aim to combine situational and social prevention endeavors. Furthermore, it concludes that indigenous peoples are excluded by crime prevention strategies that fail to define and interpret crime and its prevention in culturally appropriate ways.
Abstract:
During an intensive design-led workshop, multidisciplinary design teams examined options for a sustainable multi-residential tower on an inner urban site in Brisbane (Australia). The main aim was to demonstrate the key principles of providing daylight to every habitable room and cross-ventilation to every apartment in the subtropical climate, while responding to acceptable yield and price points. The four conceptual design proposals demonstrated a wide range of outcomes, with buildings ranging from 15 to 30 storeys. Daylight Factor (DF), view to the outside, and the avoidance of direct sunlight were the only quantitative and qualitative performance metrics used to implement daylighting in the proposed buildings during the charrette. This paper further assesses the daylighting performance of the four conceptual designs using climate-based daylight modeling (CBDM), specifically Daylight Autonomy (DA) and Useful Daylight Illuminance (UDI). Results show that UDI 100-2000 lux calculations provide more useful information on the daylighting design than DF. The percentage of the space achieving UDI 100-2000 lux for more than 50% of occupied hours ranged from 77% to 86% for active occupant behaviour (occupancy from 6 am to 6 pm). The paper also highlights the architectural features that most affect daylighting design in subtropical climates.
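As an illustration of the UDI criterion used above, the sketch below computes the share of occupied hours in which a simulated illuminance series falls within the 100-2000 lux band; the data and the 50% pass criterion shown here are assumptions for demonstration, not the charrette's actual simulation output.

```python
import numpy as np

def useful_daylight_illuminance(illuminance, low=100.0, high=2000.0):
    """Fraction of occupied hours where illuminance falls within [low, high] lux."""
    illuminance = np.asarray(illuminance)
    return np.mean((illuminance >= low) & (illuminance <= high))

# Hypothetical hourly illuminance (lux) at one sensor point over occupied
# hours (6 am to 6 pm), e.g. from a climate-based daylight simulation.
rng = np.random.default_rng(0)
hourly_lux = rng.lognormal(mean=6.0, sigma=1.0, size=365 * 12)

udi = useful_daylight_illuminance(hourly_lux)
print(f"UDI 100-2000 lux achieved {udi:.0%} of occupied hours")

# A point "passes" under the criterion above if UDI exceeds 50% of occupied hours.
print("passes 50% criterion:", udi > 0.5)
```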
Abstract:
CubIT is a multi-user, large-scale presentation and collaboration framework installed at the Queensland University of Technology’s (QUT) Cube facility, an interactive facility made up of 48 multi-touch screens and very large projected display screens. CubIT was built to make the Cube facility accessible to QUT’s academic and student population. The system allows users to upload, interact with, and share media content on the Cube’s very large display surfaces. CubIT implements a unique combination of features including RFID authentication, content management through multiple interfaces, multi-user shared workspace support, drag-and-drop upload and sharing, dynamic state control between different parts of the system, and execution and synchronisation of the system across multiple computing nodes.
Abstract:
In the electricity market environment, coordinating the reliability and economics of a power system is of great significance in determining the available transfer capability (ATC). In addition, the risks associated with uncertainties should be properly addressed in the ATC determination process for risk-benefit maximization. Against this background, it is necessary that the ATC be optimally allocated and utilized within relevant security constraints. First, non-sequential Monte Carlo simulation is employed to derive the probability density distribution of the ATC of designated areas, incorporating uncertainty factors. Second, on that basis, a multi-objective optimization model is formulated to determine the multi-area ATC so as to maximize risk-benefits. Then, the developed model is solved by the fast non-dominated sorting genetic algorithm (NSGA-II), which can decrease the risk caused by uncertainties while coordinating the ATCs of different areas. Finally, the IEEE 118-bus test system is used to demonstrate the essential features of the developed model and the employed algorithm.
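A minimal sketch of the non-sequential Monte Carlo step, assuming a toy two-area system with made-up tie-line capacities and outage rates; the paper's network model, security constraints and NSGA-II optimization stage are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical tie-line capacities (MW) and forced-outage rates between two areas.
capacities = np.array([300.0, 250.0, 400.0])
outage_rates = np.array([0.02, 0.05, 0.03])

base_transfer = 450.0  # existing committed transfer, MW (assumed)

n_samples = 100_000
samples = np.empty(n_samples)
for i in range(n_samples):
    # Non-sequential sampling: draw each component state independently,
    # rather than simulating a chronological sequence of states.
    in_service = rng.random(capacities.size) > outage_rates
    total_capacity = capacities[in_service].sum()
    samples[i] = max(total_capacity - base_transfer, 0.0)  # crude ATC proxy

# Empirical distribution of ATC for the sampled area pair.
print("mean ATC estimate: %.1f MW" % samples.mean())
print("5th percentile ATC: %.1f MW" % np.percentile(samples, 5))
print("P(ATC = 0): %.4f" % np.mean(samples == 0.0))
```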
Abstract:
CubIT is a multi-user, large-scale presentation and collaboration framework installed at the Queensland University of Technology’s (QUT) Cube facility, an interactive facility made up of 48 multi-touch screens and very large projected display screens. The CubIT system allows users to upload, interact with, and share their own content on the Cube’s display surfaces. This paper outlines the collaborative features of CubIT, which are implemented via three user interfaces: a large-screen multi-touch interface, a mobile phone and tablet application, and a web-based content management system. Each of these applications plays a different role and supports different interaction mechanisms, together enabling a wide range of collaborative features including multi-user shared workspaces, drag-and-drop upload and sharing between users, session management, and dynamic state control between different parts of the system.
Abstract:
Automated crowd counting has become an active field of computer vision research in recent years. Existing approaches are scene-specific, as they are designed to operate in the single camera viewpoint that was used to train the system. Real-world camera networks often span multiple viewpoints within a facility, including many regions of overlap. This paper proposes a novel scene-invariant crowd counting algorithm that is designed to operate across multiple cameras. The approach uses camera calibration to normalise features between viewpoints and to compensate for regions of overlap. This compensation is performed by constructing an 'overlap map', which provides a measure of how much an object at one location is visible within other viewpoints. An investigation into the suitability of various feature types and regression models for scene-invariant crowd counting is also conducted. The features investigated include object size, shape, edges and keypoints. The regression models evaluated include neural networks, K-nearest neighbours, linear regression and Gaussian process regression. Our experiments demonstrate that accurate crowd counting was achieved across seven benchmark datasets, with optimal performance observed when all features were used and when Gaussian process regression was used. The combination of scene invariance and multi-camera crowd counting is evaluated by training the system on footage obtained from the QUT camera network and testing it on three cameras from the PETS 2009 database. Highly accurate crowd counting was observed, with a mean relative error of less than 10%. Our approach enables a pre-trained system to be deployed in a new environment without any additional training, bringing the field one step closer toward a 'plug and play' system.
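A minimal sketch of the regression stage, assuming synthetic calibration-normalised features (foreground area, edge count, keypoint count) and using scikit-learn's Gaussian process regressor; the overlap-map construction and real camera calibration are omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Hypothetical per-frame features after camera-calibration normalisation:
# [foreground area, edge pixel count, keypoint count], in world units.
n_frames = 200
true_counts = rng.integers(0, 40, size=n_frames)
features = np.column_stack([
    true_counts * 55.0 + rng.normal(0, 40, n_frames),  # area grows with count
    true_counts * 30.0 + rng.normal(0, 25, n_frames),  # edges grow with count
    true_counts * 12.0 + rng.normal(0, 10, n_frames),  # keypoints grow with count
])

# Train on one "viewpoint", test on held-out frames (a stand-in for the
# cross-camera deployment described above).
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(features[:150], true_counts[:150])

pred, std = gp.predict(features[150:], return_std=True)
rel_err = np.abs(pred - true_counts[150:]) / np.maximum(true_counts[150:], 1)
print(f"mean relative error: {rel_err.mean():.2%}")
print(f"mean predictive std: {std.mean():.2f} people")
```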
Abstract:
In this paper, we propose a new multi-class steganalysis method for binary images. The proposed method can identify the type of steganographic technique used by examining the given binary image. In addition, it is capable of differentiating an image containing a hidden message from one without. To do so, we extract a set of features from the binary image. The feature extraction method combines an extension of our previous work with several new methods proposed in this paper. Based on the extracted feature sets, we construct our multi-class steganalyser using an SVM classifier. We also present empirical results demonstrating that the proposed method can effectively identify five different types of steganography.
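A hedged sketch of the classification stage, using a generic SVM over made-up feature vectors; the paper's actual binary-image features are not reproduced. Class 0 stands for a clean image and classes 1-5 for five steganographic methods.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)

# Hypothetical feature vectors extracted from binary images; class 0 is
# "clean" (no hidden message), classes 1-5 are five steganographic methods.
n_per_class, n_features = 100, 32
X = np.vstack([rng.normal(loc=c * 0.4, scale=1.0, size=(n_per_class, n_features))
               for c in range(6)])
y = np.repeat(np.arange(6), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# SVC handles the multi-class case via one-vs-one voting internally.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```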
Abstract:
Traditional nearest points methods use all the samples in an image set to construct a single convex or affine hull model for classification. However, strong artificial features and noisy data may be generated from combinations of training samples when significant intra-class variations and/or noise occur in the image set. Existing multi-model approaches extract local models by clustering each image set individually only once, with fixed clusters used for matching with various image sets. This may not be optimal for discrimination, as undesirable environmental conditions (e.g., illumination and pose variations) may result in the two closest clusters representing different characteristics of an object (e.g., a frontal face being compared to a non-frontal face). To address this problem, we propose a novel approach that enhances nearest points based methods by integrating affine/convex hull classification with an adapted multi-model approach. We first extract multiple local convex hulls from a query image set via maximum margin clustering to diminish artificial variations and constrain the noise in the local convex hulls. We then propose adaptive reference clustering (ARC), which constrains the clustering of each gallery image set by forcing the clusters to resemble those in the query image set. By applying ARC, noisy clusters in the query set can be discarded. Experiments on the Honda, MoBo and ETH-80 datasets show that the proposed method outperforms single-model approaches and other recent techniques, such as Sparse Approximated Nearest Points, the Mutual Subspace Method and Manifold Discriminant Analysis.
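A minimal sketch of the nearest-points computation between two convex hulls, the core distance such methods rely on; the maximum margin clustering and ARC stages are omitted, and the data are random placeholders rather than real image-set features.

```python
import numpy as np
from scipy.optimize import minimize

def hull_distance(A, B):
    """Distance between the convex hulls of column sets A (d x m) and B (d x n).

    Finds convex combinations A @ a and B @ b minimising the Euclidean gap,
    i.e. the "nearest points" between the two hulls.
    """
    m, n = A.shape[1], B.shape[1]

    def objective(z):
        a, b = z[:m], z[m:]
        diff = A @ a - B @ b
        return diff @ diff

    cons = [
        {"type": "eq", "fun": lambda z: z[:m].sum() - 1.0},  # a on the simplex
        {"type": "eq", "fun": lambda z: z[m:].sum() - 1.0},  # b on the simplex
    ]
    z0 = np.concatenate([np.full(m, 1.0 / m), np.full(n, 1.0 / n)])
    res = minimize(objective, z0, bounds=[(0.0, 1.0)] * (m + n), constraints=cons)
    return np.sqrt(res.fun)

# Two hypothetical local clusters of image-set feature vectors (d = 5).
rng = np.random.default_rng(3)
query_cluster = rng.normal(0.0, 1.0, size=(5, 8))
gallery_cluster = rng.normal(2.5, 1.0, size=(5, 10))
print("hull-to-hull distance:", hull_distance(query_cluster, gallery_cluster))
```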
Abstract:
Accurate and detailed measurement of an individual's physical activity is a key requirement for helping researchers understand the relationship between physical activity and health. Accelerometers have become the method of choice for measuring physical activity due to their small size, low cost, convenience, and ability to provide objective information about physical activity. However, interpreting accelerometer data once it has been collected can be challenging. In this work, we applied machine learning algorithms to the task of physical activity recognition from triaxial accelerometer data. We employed a simple but effective approach: dividing the accelerometer data into short non-overlapping windows, converting each window into a feature vector, and treating each feature vector as an i.i.d. training instance for a supervised learning algorithm. In addition, we improved on this simple approach with a multi-scale ensemble method that did not need to commit to a single window size and was able to exploit the fact that physical activities produce time series with repetitive patterns whose discriminative features occur at different temporal scales.
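A minimal sketch of the windowing-and-features step, with simple per-window statistics standing in for the paper's features; a multi-scale ensemble would repeat this at several window sizes and combine the resulting classifiers.

```python
import numpy as np

def windowed_features(signal, window=128):
    """Split a triaxial accelerometer stream (N x 3) into non-overlapping
    windows and convert each window into a simple feature vector."""
    n_windows = len(signal) // window
    feats = []
    for i in range(n_windows):
        w = signal[i * window:(i + 1) * window]
        feats.append(np.concatenate([
            w.mean(axis=0),                            # per-axis mean
            w.std(axis=0),                             # per-axis std deviation
            np.abs(np.diff(w, axis=0)).mean(axis=0),   # mean absolute change
        ]))
    return np.array(feats)

# Hypothetical 30 Hz triaxial stream; each row of X then becomes one i.i.d.
# training instance for any supervised learner (e.g. a random forest).
rng = np.random.default_rng(5)
stream = rng.normal(size=(30 * 60, 3))
X = windowed_features(stream, window=128)
print(X.shape)  # (n_windows, 9)
```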
Abstract:
Person re-identification is particularly challenging due to significant appearance changes across separate camera views. In order to re-identify people, a representative human signature should effectively handle differences in illumination, pose and camera parameters. While general appearance-based methods are modelled in Euclidean spaces, it has been argued that some applications in image and video analysis are better modelled via non-Euclidean manifold geometry. To this end, recent approaches represent images as covariance matrices, and interpret such matrices as points on Riemannian manifolds. As direct classification on such manifolds can be difficult, in this paper we propose to represent each manifold point as a vector of similarities to class representers, via a recently introduced form of Bregman matrix divergence known as the Stein divergence. This is followed by using a discriminative mapping of similarity vectors for final classification. The use of similarity vectors is in contrast to the traditional approach of embedding manifolds into tangent spaces, which can suffer from representing the manifold structure inaccurately. Comparative evaluations on benchmark ETHZ and iLIDS datasets for the person re-identification task show that the proposed approach obtains better performance than recent techniques such as Histogram Plus Epitome, Partial Least Squares, and Symmetry-Driven Accumulation of Local Features.
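For reference, the Stein divergence between symmetric positive definite matrices X and Y is S(X, Y) = log det((X + Y)/2) - (1/2) log det(XY). Below is a small sketch of this computation, using random covariance descriptors as stand-ins for the paper's region covariances.

```python
import numpy as np

def stein_divergence(X, Y):
    """Stein (S-) divergence between symmetric positive definite matrices:
    S(X, Y) = log det((X + Y) / 2) - 0.5 * log det(X @ Y).
    Uses slogdet for numerical stability."""
    _, ld_mid = np.linalg.slogdet((X + Y) / 2.0)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

# Hypothetical covariance descriptors of two image regions (d x d SPD matrices).
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 40))
B = rng.normal(size=(6, 40))
X, Y = np.cov(A), np.cov(B)

# A similarity vector for a query point is built from its divergences to a
# set of class representers; here two representers stand in for the gallery.
print([stein_divergence(X, R) for R in (X, Y)])  # first entry is 0 (X vs X)
```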
Abstract:
Simple, rapid, catalyst-free synthesis of complex patterns of long, vertically aligned multi-walled carbon nanotubes, strictly confined within mechanically written features on a Si(1 0 0) surface, is reported. It is shown that dense arrays of the nanotubes can nucleate and fully fill the features when the low-temperature microwave plasma is in direct contact with the surface. This eliminates additional nanofabrication steps and the inevitable contact losses associated with carbon nanotube patterns in applications.

The use of metal catalysts has long been considered essential for the nucleation and growth of surface-supported carbon nanotubes (CNTs) [1] and [2]. Only very recently has the possibility of CNT growth using non-metallic (e.g., oxide [3] and SiC [4]) catalysts or artificially created carbon-enriched surface layers [5] been demonstrated. However, successful integration of carbon nanostructures into Si-based nanodevice platforms requires catalyst-free growth, as catalyst nanoparticles introduce contact losses and their catalytic activity is very difficult to control during growth [6]. Furthermore, in many applications in microfluidics, biological and molecular filters, and electronic, sensor, and energy-conversion nanodevices, the CNTs need to be arranged in specific complex patterns [7] and [8]. These patterns need to contain basic features (e.g., lines and dots) written using simple procedures and fully filled with dense arrays of high-quality, straight, yet separated nanotubes.

In this paper, we report a completely metal- and oxide-catalyst-free plasma-based approach for the direct and rapid growth of dense arrays of long, vertically aligned multi-walled carbon nanotubes arranged into complex patterns made of various combinations of basic features on a Si(1 0 0) surface, written using simple mechanical techniques. The process was conducted in a plasma environment [9] and [10] produced by a microwave discharge, which typically generates low-temperature plasmas at discharge powers below 1 kW [11].

Our process starts with the mechanical writing (scribing) of a pattern of arbitrary features on pre-treated Si(1 0 0) wafers. Before and after the mechanical feature writing, the Si(1 0 0) substrates were cleaned in an aqueous solution of hydrofluoric acid for 2 min to remove any possible contamination (such as oil traces, which could decompose to free carbon at elevated temperatures) from the substrate surface. A piece of another silicon wafer cleaned in the same way as the substrate, or a diamond scriber, was used to produce the growth patterns by simple arbitrary mechanical writing, i.e., by making linear scratches or dot punctures on the Si wafer surface. The results were the same in both cases, i.e., whether the surface was scratched with Si or with the diamond scriber. The substrate preparation procedure did not involve any possibility of external metallic contamination of the substrate surface. After preparation, the substrates were loaded into an ASTeX model 5200 chemical vapour deposition (CVD) reactor, which was very carefully conditioned to remove any residual contamination. The samples were heated to at least 800 °C to remove any oxide that could have formed during sample loading [12]. After loading the substrates into the reactor chamber, N2 gas was supplied at a pressure of 7 Torr to ignite and sustain the discharge at a total power of 200 W. Then, a mixture of CH4 and N2 (60%) was supplied at 20 Torr, and the discharge power was increased to 700 W (a power density of approximately 1.49 W/cm3). During the process, the microwave plasma was in direct contact with the substrate. During the plasma exposure, no external heating source was used, and the substrate temperature (∼850 °C) was maintained merely by plasma heating. The features were exposed to the microwave plasma for 3–5 min. A photograph of the reactor and the plasma discharge is shown in Fig. 1a and b.
Abstract:
The strain data acquired from structural health monitoring (SHM) systems play an important role in the state monitoring and damage identification of bridges. Due to the environmental complexity of civil structures, a better understanding of the actual strain data will help fill the gap between theoretical/laboratory results and practical application. In this study, the multi-scale features of strain response are first revealed through extensive investigation of actual data from two typical long-span bridges. Results show that strain components at the three typical temporal scales of 10^5, 10^2 and 10^0 sec are caused by temperature change, trains and heavy trucks respectively, and have cut-off frequencies on the order of 10^-2, 10^-1 and 10^0 Hz. Multi-resolution analysis and wavelet shrinkage are applied to separate and extract these strain components, and two methods for determining the thresholds are introduced. The ability of the wavelet transform to perform time-frequency analysis simultaneously leads to effective information extraction; after extraction, the strain data can be compressed at an attractive ratio. This research may contribute to a further understanding of the actual strain data of long-span bridges, and the proposed extraction methodology is applicable to actual SHM systems.
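A rough sketch of multi-resolution separation and wavelet shrinkage using the PyWavelets library; the signal, wavelet choice and threshold value are illustrative assumptions, not the paper's tuned settings or thresholding rules.

```python
import numpy as np
import pywt

# Hypothetical 1 Hz strain record: a slow temperature drift (~10^5 s scale),
# a train-induced component (~10^2 s) and measurement noise.
t = np.arange(0.0, 3600.0, 1.0)
strain = (20 * np.sin(2 * np.pi * t / 86400.0)   # temperature component
          + 5 * np.sin(2 * np.pi * t / 120.0)     # train-induced component
          + np.random.default_rng(4).normal(0, 0.5, t.size))

# Multi-resolution analysis: decompose, then reconstruct one band at a time.
wavelet, level = "db4", 8
coeffs = pywt.wavedec(strain, wavelet, level=level)

# The approximation at the coarsest level tracks the temperature trend.
approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
temperature_trend = pywt.waverec(approx_only, wavelet)[:t.size]

# Wavelet shrinkage: soft-threshold the detail coefficients to denoise.
denoised = [coeffs[0]] + [pywt.threshold(c, value=1.0, mode="soft")
                          for c in coeffs[1:]]
strain_denoised = pywt.waverec(denoised, wavelet)[:t.size]
print(temperature_trend.shape, strain_denoised.shape)
```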
Abstract:
Due to the popularity of security cameras in public places, it is of interest to design an intelligent system that can detect events automatically and efficiently. This paper proposes a novel algorithm for multi-person event detection. To ensure greater-than-real-time performance, features are extracted directly from compressed MPEG video. A novel histogram-based feature descriptor that captures the angles between extracted particle trajectories is proposed, which allows us to capture the motion patterns of multi-person events in the video. To alleviate the need for fine-grained annotation, we propose the use of Labelled Latent Dirichlet Allocation, a “weakly supervised” method that allows the use of coarse temporal annotations, which are much simpler to obtain. This novel system is able to run at approximately ten times real-time while preserving state-of-the-art detection performance for multi-person events on a 100-hour real-world surveillance dataset (TRECVid SED).
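A hedged sketch of a trajectory-angle histogram descriptor in the spirit of the one described above; the direction-vector construction and bin count here are assumptions, not the paper's exact design.

```python
import numpy as np

def trajectory_angle_histogram(trajectories, n_bins=9):
    """Histogram of pairwise angles between particle-trajectory direction
    vectors; each trajectory is an (n_points, 2) array of (x, y) positions."""
    directions = []
    for traj in trajectories:
        v = traj[-1] - traj[0]            # net displacement over the window
        norm = np.linalg.norm(v)
        if norm > 1e-6:
            directions.append(v / norm)
    directions = np.array(directions)

    angles = []
    for i in range(len(directions)):
        for j in range(i + 1, len(directions)):
            cos = np.clip(directions[i] @ directions[j], -1.0, 1.0)
            angles.append(np.arccos(cos))  # angle in [0, pi]

    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, np.pi))
    return hist / max(hist.sum(), 1)       # normalised descriptor

# Two hypothetical people walking towards each other, one walking away.
rng = np.random.default_rng(6)
trajs = [np.cumsum(rng.normal(d, 0.1, size=(20, 2)), axis=0)
         for d in ([1, 0], [-1, 0], [0, 1])]
print(trajectory_angle_histogram(trajs))
```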
Abstract:
In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. This approach models each action with a Gaussian mixture over robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods that use dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3%, outperforming a recent HMM-based approach which obtained 71.2%.
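A minimal sketch of the windowed GMM classification-and-merging idea, assuming synthetic features and two hypothetical action classes; window size, step and mixture size are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)

# Train one Gaussian mixture per action class on low-dimensional features.
train = {
    "walk": rng.normal(0.0, 1.0, size=(500, 10)),
    "box":  rng.normal(3.0, 1.0, size=(500, 10)),
}
models = {name: GaussianMixture(n_components=4, random_state=0).fit(feats)
          for name, feats in train.items()}

# A multi-action sequence: 200 frames of walking followed by 200 of boxing.
sequence = np.vstack([rng.normal(0.0, 1.0, size=(200, 10)),
                      rng.normal(3.0, 1.0, size=(200, 10))])

# Classify overlapping windows by average per-frame log-likelihood, then
# merge adjacent windows with the same label into segments.
win, step = 50, 10
labels = []
for start in range(0, len(sequence) - win + 1, step):
    w = sequence[start:start + win]
    scores = {name: m.score(w) for name, m in models.items()}  # mean log-lik
    labels.append(max(scores, key=scores.get))

segments = [(labels[0], 0)]
for i, lab in enumerate(labels[1:], 1):
    if lab != segments[-1][0]:
        segments.append((lab, i * step))
print(segments)  # e.g. [('walk', 0), ('box', ~180)]
```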
Abstract:
The tumour microenvironment greatly influences cancer development and metastasis. The development of three-dimensional (3D) culture models that mimic the in vivo environment can improve cancer biology studies and accelerate the screening of novel anticancer drugs. Inspired by a systems biology approach, we have formed 3D in vitro bioengineered tumour angiogenesis microenvironments within a glycosaminoglycan-based hydrogel culture system. This microenvironment model can routinely recreate breast and prostate tumour vascularisation. The multiple cell types cultured within this model were less sensitive to chemotherapy than two-dimensional (2D) cultures, and displayed tumour regression comparable to that observed in vivo. These features highlight the use of our in vitro culture model as a complementary testing platform in conjunction with animal models, addressing key reduction and replacement goals of the future. We anticipate that this biomimetic model will provide a platform for the in-depth analysis of cancer development and the discovery of novel therapeutic targets.