843 results for "spam email filtering"


Relevance: 10.00%

Abstract:

Before the Global Financial Crisis, many providers of finance had growth mandates and actively pursued development finance deals as a way of gaining higher returns on funds, with regular capital turnover and re-investment possible. This was achieved through high gearing and low presales in a strong market. As asset prices fell, loan covenants were breached and memories of the 1990s returned, banks rapidly adjusted their risk appetite by retracting gearing and expanding presale requirements. Early signs of loosening in bank credit policy are emerging; however, parties seeking development finance are faced with a severely reduced number of institutions from which to source funding. The few institutions that are lending are accepting only the best credit risks by way of constrictive credit conditions, including low loan-to-value ratios, the corresponding requirement to contribute high levels of equity, a lack of support in non-prime locations and a requirement for borrowers with well-established track records. In this risk-averse and capital-constrained environment, the ability of developers to proceed with real estate developments is still constrained by their inability to obtain project finance. This paper will examine the pre- and post-GFC development finance environment. It will identify the key lending criteria relevant to real estate development finance and will detail the related changes to credit policies over this period. The associated impact on real estate development projects will be presented, highlighting the significant constraint to supply that the inability to obtain finance poses.
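
The tightening described above is easy to quantify. The sketch below uses hypothetical figures (the abstract reports no specific ratios) to show how a lower loan-to-value cap translates directly into a larger equity contribution from the developer:

```python
def required_equity(total_cost: float, max_lvr: float) -> float:
    """Equity a borrower must contribute when the lender caps debt
    at max_lvr (loan-to-value ratio) of total project cost."""
    return total_cost * (1.0 - max_lvr)

project_cost = 10_000_000  # hypothetical $10m development

# Hypothetical gearing levels: looser pre-GFC, tighter post-GFC.
pre_gfc = required_equity(project_cost, max_lvr=0.80)
post_gfc = required_equity(project_cost, max_lvr=0.65)

print(f"Pre-GFC equity:  ${pre_gfc:,.0f}")   # $2,000,000
print(f"Post-GFC equity: ${post_gfc:,.0f}")  # $3,500,000
```

Cutting the permitted loan-to-value ratio from 80% to 65% here nearly doubles the equity hurdle, which is the mechanism the paper identifies as constraining supply.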

Relevance: 10.00%

Abstract:

This paper details the design of an autonomous helicopter control system using a low-cost sensor suite. Control is maintained using simple nested PID loops. Aircraft attitude, velocity and height are estimated using an in-house designed IMU and vision system. Information is combined using complementary filtering. The aircraft is shown to be stabilised and responding to high-level demands on all axes, including heading, height, lateral velocity and longitudinal velocity.
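
The abstract does not give the filter equations, but a complementary filter for a single attitude axis is typically a weighted blend of the integrated gyro rate (accurate short-term) and an accelerometer-derived angle (drift-free long-term). A minimal sketch with made-up sensor values:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the integrated gyro rate (trusted short-term) with the
    accelerometer angle (trusted long-term). alpha sets the crossover."""
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

# Hypothetical stream: constant 10 deg/s roll rate for one second.
angle, dt = 0.0, 0.01
for k in range(100):
    gyro = 10.0                    # deg/s from the rate gyro
    accel = 10.0 * (k + 1) * dt    # accelerometer-derived angle (clean here)
    angle = complementary_filter(angle, gyro, accel, dt)

print(round(angle, 2))  # 10.0 -- the estimate tracks the true roll angle
```

In practice the accelerometer term is noisy and the gyro term drifts; the single parameter `alpha` trades one error source against the other, which is what makes the approach attractive on a low-cost sensor suite.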

Relevance: 10.00%

Abstract:

This paper illustrates a method for finding useful visual landmarks for performing simultaneous localization and mapping (SLAM). The method is based loosely on biological principles, using layers of filtering and pooling to create learned templates that correspond to different views of the environment. Rather than using a set of landmarks and reporting range and bearing to each landmark, this system maps views to poses. The challenge is to produce a system that reports the same view for small changes in robot pose, but different views for larger changes in pose. The method has been developed to interface with the RatSLAM system, a biologically inspired method of SLAM. The paper describes the method of learning and recalling visual landmarks in detail, and shows the performance of the visual system in real robot tests.
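
The view-template idea can be caricatured with a toy template store: pooling makes the view signature tolerant of small pose changes, while a distance threshold decides when a genuinely new view has been seen. This sketch is illustrative only (made-up threshold and pooling size), not the RatSLAM implementation:

```python
import numpy as np

def pool(image, block=4):
    """Downsample by block averaging -- a crude 'pooling' layer that makes
    the view signature stable under small pose changes."""
    h, w = image.shape
    return image[:h // block * block, :w // block * block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def match_or_learn(view, templates, threshold=0.05):
    """Return the index of the best-matching stored template, learning a
    new one when the view differs from all known templates."""
    sig = pool(view).ravel()
    sig = sig / (np.linalg.norm(sig) + 1e-9)
    for i, t in enumerate(templates):
        if np.linalg.norm(sig - t) < threshold:
            return i
    templates.append(sig)
    return len(templates) - 1

rng = np.random.default_rng(0)
templates = []
scene_a = rng.random((32, 32))
scene_b = rng.random((32, 32))
print(match_or_learn(scene_a, templates))  # 0: first view becomes template 0
print(match_or_learn(scene_a + 0.01 * rng.random((32, 32)), templates))  # small change still matches 0
print(match_or_learn(scene_b, templates))  # a new scene spawns template 1
```

Tightening `threshold` makes the system report different views for smaller pose changes; loosening it aliases distinct views together, which is exactly the trade-off the paper describes.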

Relevance: 10.00%

Abstract:

Immersive environments are part of a recent media innovation that allows users to become so involved within a computer-based simulated environment that they feel part of that virtual world (Grigorovici, 2003). A specific example is Second Life, an internet-based, three-dimensional immersive virtual world in which users create an online representation of themselves (an avatar) to play games and interact socially with thousands of people simultaneously. This study focuses on Second Life as an example of an immersive environment, as it is the largest adult freeform virtual world, home to 12 million avatars (Iowa State University, 2008). Second Life already hosts more than 100 real-life brands from a range of industries, including automotive, professional services, consumer goods and travel, among others (KZero, 2007; New Business Horizons, 2009). Compared to traditional advertising media, this interactive medium can immerse users in the environment. As a result of this interactivity, users can become more involved with a virtual environment, resulting in prolonged usage over weeks, months and even years. It can also facilitate presence. Despite these developments, little is known about the effectiveness of marketing messages in a virtual world context. Marketers are incorporating products into Second Life using a strategy of online product placement. This study therefore explores the perceived effectiveness of online product placement in Second Life in terms of effects on product/brand recall, purchase intentions and trial. It examines the association between individuals' involvement with Second Life and online product placement effectiveness, and investigates the association between immersion and product placement involvement.
It also examines the impact of product placement involvement on online product placement effectiveness and the role of presence in affecting this relationship. An exploratory study was conducted using semi-structured in-depth interviews held face-to-face, by email and in-world. The sample comprised 24 active Second Life users. Results indicate that product placement effectiveness is not directly associated with Second Life involvement; rather, effectiveness is impacted through the effect of Second Life involvement on product placement involvement. A positive relationship was found between individuals' product placement involvement and online product placement effectiveness. Findings also indicate that online product placement effectiveness is not directly associated with immersion; rather, it appears that effectiveness is impacted through the effect of immersion on product placement involvement. Moreover, higher levels of presence appear to have a positive impact on the relationship between product placement involvement and product placement effectiveness. Finally, a model was developed from this qualitative study for future testing. In terms of theoretical contributions, this study provides a new model for testing the effectiveness of product placement within immersive environments. From a methodological perspective, in-world interviews were undertaken as a new research method. In terms of practical contribution, the findings identify useful information for marketers and advertising agencies that aim to promote their products in immersive virtual environments like Second Life.

Relevance: 10.00%

Abstract:

The article describes an open-source toolbox for machine vision called the Machine Vision Toolbox (MVT). MVT includes more than 60 functions, covering image file reading and writing, acquisition, display, filtering, blob, point and line feature extraction, mathematical morphology, homographies, visual Jacobians, camera calibration, and color space conversion. MVT can be used for research into machine vision but is also versatile enough to be usable for real-time work and even control. MVT, combined with MATLAB and a modern workstation computer, is a useful and convenient environment for the investigation of machine vision algorithms. The article illustrates the use of a subset of toolbox functions for some typical problems and describes MVT operations including the simulation of a complete image-based visual servo system.
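
To give a flavour of the mathematical morphology operations such a toolbox provides (this is plain Python/NumPy for illustration, not the MVT API itself), binary dilation and erosion over a square structuring element can be written as:

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation with a k-by-k square structuring element:
    a pixel is set if any neighbour under the element is set."""
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def erode(img, k=3):
    """Binary erosion via the duality erode(x) = NOT dilate(NOT x)."""
    return 1 - dilate(1 - img, k)

img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 1                 # a 3x3 blob
print(int(dilate(img).sum()))     # 25: the blob grows to 5x5
print(int(erode(img).sum()))      # 1: only the blob's centre survives
```

Chains of such operations (openings, closings) are the usual preprocessing step before the blob and feature extraction functions the article lists.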

Relevance: 10.00%

Abstract:

Since the formal recognition of practice-led research in the 1990s, many higher research degree candidates in art, design and media have submitted creative works along with an accompanying written document or ‘exegesis’ for examination. Various models for the exegesis have been proposed in university guidelines and academic texts during the past decade, and students and supervisors have experimented with its contents and structure. With a substantial number of exegeses submitted and archived, it has now become possible to move beyond proposition to empirical analysis. In this article we present the findings of a content analysis of a large, local sample of submitted exegeses. We identify the emergence of a persistent pattern in the types of content included as well as overall structure. Besides an introduction and conclusion, this pattern includes three main parts, which can be summarized as situating concepts (conceptual definitions and theories); precedents of practice (traditions and exemplars in the field); and researcher’s creative practice (the creative process, the artifacts produced and their value as research). We argue that this model combines earlier approaches to the exegesis, which oscillated between academic objectivity, by providing a contextual framework for the practice, and personal reflexivity, by providing commentary on the creative practice. But this model is more than simply a hybrid: it provides a dual orientation, which allows the researcher to both situate their creative practice within a trajectory of research and do justice to its personally invested poetics. By performing the important function of connecting the practice and creative work to a wider emergent field, the model helps to support claims for a research contribution to the field. We call it a connective model of exegesis.

Relevance: 10.00%

Abstract:

A membrane filtration plant using suitable micro- or ultra-filtration membranes has the potential to significantly increase pan stage capacity and improve sugar quality. Previous investigations by SRI and others have shown that membranes will remove polysaccharides, turbidity and colloidal impurities, resulting in lower-viscosity syrups and molasses. However, the conclusion from those investigations was that membrane filtration was not economically viable. A comprehensive assessment of current-generation membrane technology was undertaken by SRI. With the aid of two pilot plants provided by Applexion and Koch Membrane Systems, extensive trials were conducted at an Australian factory using clarified juice at 80–98°C as feed to each pilot plant. Conditions were varied during the trials to examine the effect of a range of operating parameters on the filtering characteristics of each of the membranes. These parameters included feed temperature and pressure, flow velocity, soluble solids and impurity concentrations. The data were then combined to develop models to predict the filtration rate (or flux) that could be expected for nominated operating conditions. The models demonstrated very good agreement with the data collected during the trials. The trials also identified those membranes that provided the highest flux levels per unit area of membrane surface for a nominated set of conditions. Cleaning procedures were developed that ensured the water flux level was recovered following a clean-in-place process. Bulk samples of clarified juice and membrane-filtered juice from each pilot plant were evaporated to syrup to quantify the gain in pan stage productivity that results from the removal of high molecular weight impurities by membrane filtration. The results are in general agreement with those published by other research groups.
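
The abstract does not disclose the form of SRI's flux models, but a standard starting point for membrane flux prediction is the resistance-in-series (Darcy) expression J = ΔP / (μ(Rm + Rf)). A sketch with entirely hypothetical operating values:

```python
def permeate_flux(tmp_kpa, viscosity_mpa_s, r_membrane, r_fouling):
    """Resistance-in-series (Darcy) estimate of permeate flux.

    J = TMP / (mu * (Rm + Rf)); inputs are transmembrane pressure in kPa,
    viscosity in mPa.s, and resistances in 1/m; output is L/m^2/h."""
    tmp = tmp_kpa * 1e3                                 # Pa
    mu = viscosity_mpa_s * 1e-3                         # Pa.s
    j_m_per_s = tmp / (mu * (r_membrane + r_fouling))   # m/s
    return j_m_per_s * 1000 * 3600                      # L/m^2/h

# Hypothetical operating point: 200 kPa TMP, hot juice at ~0.4 mPa.s,
# membrane resistance 5e12 1/m plus 1.5e13 1/m fouling resistance.
print(round(permeate_flux(200, 0.4, 5e12, 1.5e13), 1))  # 90.0 L/m^2/h
```

A fitted model of this kind makes the reported parameter sensitivities concrete: raising feed temperature lowers viscosity and raises flux, while higher soluble solids and impurity concentrations act through the fouling resistance term.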

Relevance: 10.00%

Abstract:

In a consumerist society obsessed with body image and thinness, obesity levels have reached an all-time high. This multi-faceted book, written by a range of experts, explores the social, cultural, clinical and psychological factors that lie behind the "obesity epidemic". It is required reading for the many healthcare professionals dealing with the effects of obesity, and for anyone who wants to know more about the causes of weight gain and the best ways of dealing with it. Fat Matters covers a range of issues from sociology through medicine to technology. This is not a book for the highly specialised expert. Rather, it is a book that shows the diversity of approaches to the phenomenon of obesity, tailored to the reader who wants to be up-to-date and well-informed on a subject that is possibly as frequently discussed, and as misunderstood, as the weather.

Relevance: 10.00%

Abstract:

This research study investigated the factors that influenced the development of teacher identity in a small cohort of mature-aged graduate pre-service teachers over the course of a one-year Graduate Diploma program (Middle Years). It sought to illuminate the social and relational dynamics of these pre-service teachers' experiences as they began new ways of being and learning during the newly introduced program. A relational-ontological perspective underpinned the relational-cultural framework that was applied in a workshop program as an integral part of this research. A relational-ontological perspective suggests that the development of teacher identity is to be construed more as an ontological process than an epistemological one: its focus is more on questions surrounding the person and their 'becoming' a teacher than on the knowledge they have or will come to have. Hence, drawing on work by researchers such as Alsup (2006), Gilligan (1982), Isaacs (2007), Miller (1976), Noddings (2005), Stout (2001) and Taylor (1989), teacher identity was defined as an individual pre-service teacher's unique sense of self as a teacher, including his or her beliefs about teaching and learning (Alsup, 2006; Stout, 2001; Walkington, 2005). Case study was the preferred methodology within which this research project was framed, and narrative research was used as a method to document the way teacher identity was shaped and negotiated in discursive environments such as teacher education programs, prior experiences, classroom settings and the practicum. The data collected included student narratives, student email written reflections and focus group dialogue. The narrative approach applied in this research context provided the depth of data needed to understand the nature of the mature-aged pre-service teachers' emerging teacher identities and experiences in the graduate diploma program.
Findings indicated that most of the mature-aged graduate pre-service teachers came into the one-year graduate diploma program with a strong sense of their personal and professional selves and well-established reasons for choosing to teach the Middle Years. Their choice of program involved an expectation of support and welcome into a middle-school community and culture. Two critical issues emerged from the pre-service teachers' narratives: the importance they placed on human support, and the affirmation of themselves and their emerging teacher identities. Evidence from this study suggests that the lack of recognition of pre-service teachers' personal and professional selves during the graduate diploma program inhibited the development of a positive middle-school teacher identity. However, a workshop program developed for the participants in this research, addressing a range of practical concerns of beginning teachers, offered them a space where they felt a sense of belonging to a community and where their thoughts and beliefs were recognized and valued. Thus, the workshops provided participants with the positive social and relational dynamics necessary to support their developing teacher identities. The overall findings of this research study strongly indicate the need for a relational support structure, based on a relational-ontological perspective, to be built into the overall course structure of graduate pre-service diplomas in education to support the development of teacher identity. Such a support structure acknowledges that the pre-service teacher's learning and formation is socially embedded, relational, and a continual, lifelong process.

Relevance: 10.00%

Abstract:

Uninhabited aerial vehicles (UAVs) are a cutting-edge technology at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications, routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) sense; 2) detect; and 3) avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm.
The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion. 
We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advanced warning ahead of impact that approaches the 12.5-second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations.
Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue, currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
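
The track-before-detect recursion at the heart of the approach can be sketched as a hidden Markov model (Bayes) filter over target position: predict with a motion model, then weight by each frame's likelihood. The toy below is a 1-D caricature with invented noise statistics, not the thesis's MHMM filter:

```python
import numpy as np

def hmm_filter_step(belief, trans, likelihood):
    """One recursion of an HMM (Bayes) filter, the core of
    track-before-detect: predict with the motion model, then weight
    by the current frame's likelihood and renormalise."""
    predicted = trans @ belief
    posterior = predicted * likelihood
    return posterior / posterior.sum()

n = 5  # toy 1-D strip of pixels; the real problem is over a 2-D image
# Symmetric random-walk transition: mostly stay, sometimes drift one pixel.
trans = np.zeros((n, n))
for i in range(n):
    trans[i, i] = 0.8
    if i > 0:
        trans[i, i - 1] = trans[i - 1, i] = 0.1
trans[0, 0] += 0.1
trans[-1, -1] += 0.1  # reflecting boundaries keep columns summing to 1

rng = np.random.default_rng(1)
belief = np.full(n, 1.0 / n)
for _ in range(20):  # integrate 20 dim frames
    frame = rng.normal(1.0, 0.2, n)  # background clutter
    frame[2] += 0.3                  # dim target in pixel 2, ~1.5 sigma
    likelihood = np.exp(-(frame - 1.3) ** 2 / (2 * 0.2 ** 2))
    belief = hmm_filter_step(belief, trans, likelihood)

print(int(np.argmax(belief)))  # 2: the target pixel dominates the belief
```

No single frame reliably separates the target from clutter here; it is the temporal integration across frames that makes the dim target emerge, which is the rationale for the two-stage morphology-plus-HMM design.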

Relevance: 10.00%

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying a single large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10–15 seconds. Sometimes equipment failures disturb the ideal operation of power systems and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified, and the power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13].
One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. 
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard.
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
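
The Energy Based Detector's premise, that a poorly damped mode carries far more disturbance energy than a well damped one, can be illustrated numerically. The decay rates and threshold multiplier below are invented for illustration and are not the thesis's fitted statistical model:

```python
import numpy as np

def ringdown_energy(amp, damping, freq_hz, dt=0.02, t_end=60.0):
    """Numerically integrate the energy of a decaying sinusoid
    x(t) = amp * exp(-damping * t) * cos(2 * pi * freq * t)."""
    t = np.arange(0.0, t_end, dt)
    x = amp * np.exp(-damping * t) * np.cos(2 * np.pi * freq_hz * t)
    return np.sum(x ** 2) * dt

# Same mode amplitude and frequency; only the damping differs.
healthy = ringdown_energy(1.0, damping=0.3, freq_hz=0.5)    # ~10 s ring-down
degraded = ringdown_energy(1.0, damping=0.05, freq_hz=0.5)  # slow decay

threshold = 3.0 * healthy  # e.g. some multiple of the healthy baseline
print(degraded > threshold)  # True: the poorly damped mode trips the alarm
```

Because the energy statistic needs no mode identification, a detector like this is cheap and fast, which matches the abstract's point that the EBD suits smaller systems with effectively a single dominant mode.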

Relevance: 10.00%

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of adaptive reflection coefficients for frequency modulated signals by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property of adaptive lattice filters: the polynomial-order reducing property. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios, in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e. impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite-variance input signals (like frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (due to its use of the minimum mean-square error criterion).
To deal with such problems, the concept of the minimum dispersion criterion, fractional lower-order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on the fractional lower-order moments. Simulation results show that, using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness, in comparison to many other algorithms. We also discuss the effect of the impulsiveness of stable processes in generating some misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
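
A single stage of the stochastic gradient lattice makes the reflection-coefficient behaviour concrete. For a real AR(1) input, the optimal first reflection coefficient is minus the lag-one correlation (sign conventions vary between texts); the step size, data length and update rule below are an illustrative sketch, not the thesis's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 20000
a = 0.7
# AR(1) input: x[n] = a*x[n-1] + w[n]; first reflection coefficient -> -a.
w = rng.normal(0, 1, n_samples)
x = np.zeros(n_samples)
for n in range(1, n_samples):
    x[n] = a * x[n - 1] + w[n]

k = 0.0       # reflection coefficient of the single lattice stage
mu = 1e-4     # gradient step size
b_prev = 0.0  # delayed backward error b0[n-1]
for n in range(n_samples):
    f0 = b0 = x[n]                     # stage-0 errors are the input itself
    f1 = f0 + k * b_prev               # forward prediction error
    b1 = b_prev + k * f0               # backward prediction error
    k -= mu * (f1 * b_prev + b1 * f0)  # stochastic gradient of E[f1^2 + b1^2]
    b_prev = b0

print(round(k, 2))  # converges near -0.7 for this AR(1) input
```

Each stage needs only its own reflection coefficient and the delayed backward error from the stage below, which is the modular structure the abstract cites as an advantage over the transversal least-mean square filter.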
