976 results for Filtering techniques


Relevance: 20.00%

Abstract:

This thesis employs the theoretical fusion of disciplinary knowledge, interlacing an analysis from both functional and interpretive frameworks, and applies these paradigms to three concepts—organisational identity, the balanced scorecard performance measurement system, and control. As an applied thesis, this study highlights how particular public sector organisations are using a range of multi-disciplinary forms of knowledge constructed for their needs to achieve practical outcomes. The practical evidence of this study is not bound by a single disciplinary field or the concerns raised by academics about the rigorous application of academic knowledge. The study’s value lies in its ability to explore how current communication and accounting knowledge is being used for practical purposes in organisational life. The main focus of this thesis is on identities in an organisational communication context. In exploring the theoretical and practical challenges, the research questions for this thesis were formulated as: 1. Is it possible to effectively control identities in organisations by the use of an integrated performance measurement system—the balanced scorecard—and if so, how? 2. What is the relationship between identities and an integrated performance measurement system—the balanced scorecard—in the identity construction process? Identities in the organisational context have been extensively discussed in the graphic design, corporate communication and marketing, strategic management, organisational behaviour, and social psychology literatures. Corporate identity is the self-presentation of the personality of an organisation (Van Riel, 1995; Van Riel & Balmer, 1997), and organisational identity is the statement of central characteristics described by members (Albert & Whetten, 2003). In this study, identity management is positioned as a strategically complex task, embracing not only logo and name, but also multiple dimensions, levels and facets of organisational life. Responding to the collaborative efforts of researchers and practitioners in identity conceptualisation and methodological approaches, this dissertation argues that analysis can be achieved through the use of an integrated framework of identity products, patternings and processes (Cornelissen, Haslam, & Balmer, 2007), transforming conceptualisations of corporate identity, organisational identity and identification studies. Likewise, the performance measurement literature from the accounting field now emphasises the importance of ‘soft’ non-financial measures in gauging performance—potentially allowing the monitoring and regulation of ‘collective’ identities (Cornelissen et al., 2007). The balanced scorecard (BSC) (Kaplan & Norton, 1996a), as the selected integrated performance measurement system, quantifies organisational performance under the four perspectives of finance, customer, internal process, and learning and growth. Broadening the traditional performance measurement boundary, the BSC transforms how organisations perceive themselves (Vaivio, 2007). The rhetorical and communicative value of the BSC has also been emphasised in organisational self-understanding (Malina, Nørreklit, & Selto, 2007; Malmi, 2001; Nørreklit, 2000, 2003). Thus, this study establishes a theoretical connection between the controlling effects of the BSC and organisational identity construction. Common to both literatures, the aspects of control became the focus of this dissertation, control being ‘the exercise or act of achieving a goal’ (Tompkins & Cheney, 1985, p. 180).
This study explores not only traditional technical and bureaucratic control (Edwards, 1981), but also concertive control (Tompkins & Cheney, 1985), shifting the locus of control to employees who make their own decisions towards desired organisational premises (Simon, 1976). The controlling effects on collective identities are explored through the lens of the rhetorical frames mobilised through the power of organisational enthymemes (Tompkins & Cheney, 1985) and identification processes (Ashforth, Harrison, & Corley, 2008). In operationalising the concept of control, two guiding questions were developed to support the research questions: 1.1 How does the use of the balanced scorecard monitor identities in public sector organisations? 1.2 How does the use of the balanced scorecard regulate identities in public sector organisations? This study adopts qualitative multiple case studies using ethnographic techniques. Data were gathered from interviews with 41 managers, organisational documents, and participant observation from 2003 to 2008, to inform an understanding of organisational practices and members’ perceptions in the five cases of two public sector organisations in Australia. Drawing on the functional and interpretive paradigms, the effective design and use of the systems, as well as the understanding of shared meanings of identities and identifications, are simultaneously recognised. The analytical structure, guided by the ‘bracketing’ (Lewis & Grimes, 1999) and ‘interplay’ (Schultz & Hatch, 1996) strategies, preserved, connected and contrasted the unique findings from the multiple paradigms. The ‘temporal bracketing’ strategy (Langley, 1999) from the process view supported the comparative exploration of the analysis over the periods under study. The findings suggest that the effective use of the BSC can monitor and regulate identity products, patternings and processes. In monitoring identities, the flexible BSC framework allowed the case study organisations to monitor various aspects of finance, customer, improvement and organisational capability that included identity dimensions. Such inclusion legitimises identity management as organisational performance. In regulating identities, the use of the BSC created a mechanism to form collective identities by articulating various perspectives and causal linkages, and through the cascading and alignment of multiple scorecards. The BSC—directly reflecting organisationally valued premises and legitimised symbols—acted as an identity product of communication, visual symbols and behavioural guidance. The selective promotion of the BSC measures filtered organisational focus to shape unique identity multiplicity and characteristics within the cases. Further, the use of the BSC facilitated the assimilation of multiple identities by controlling the direction and strength of identifications, engaging different groups of members. More specifically, the tight authority of the BSC framework and systems is explained by both technical and bureaucratic controls, while subtle communication of organisational premises and information filtering is achieved through concertive control. This study confirms that these macro top-down controls mediated the sensebreaking and sensegiving process of organisational identification, supporting research by Ashforth, Harrison and Corley (2008).
This study pays attention to members’ power of self-regulation, with members filling in the minor premises of their organisation’s derived logic through the playing out of organisational enthymemes (Tompkins & Cheney, 1985). Members are then encouraged to make their own decisions towards the organisational premises embedded in the BSC, through micro bottom-up identification processes including: enacting organisationally valued identities; sensemaking; and the construction of identity narratives aligned with those organisationally valued premises. Within this process, the self-referential effect of communication encouraged members to accept the organisational messages embedded in the BSC, transforming collective and individual identities. Therefore, communication through the use of the BSC sustained the self-production of normative performance mechanisms, established meanings of identities, and enabled members’ self-regulation in identity construction. Further, this research establishes the relationship between identity and the use of the BSC in terms of identity multiplicity and attributes. The BSC framework constrained and enabled case study organisations and members to monitor and regulate identity multiplicity across a number of dimensions, levels and facets. The use of the BSC constantly heightened the identity attributes of distinctiveness, relativity, visibility, fluidity and manageability in identity construction over time. Overall, this research explains the reciprocal controlling relationships of multiple structures in organisations to achieve a goal. It bridges the gap between corporate and organisational identity theories by adopting Cornelissen, Haslam and Balmer’s (2007) integrated identity framework, and reduces the gap in understanding between identity and performance measurement studies. A parallel review of the process of monitoring and regulating identities from both literatures synthesised the theoretical strengths of both to conceptualise and operationalise identities. This study extends the discussion on positioning identity, culture, commitment, and image and reputation measures in integrated performance measurement systems as organisational capital. Further, this study applies understanding of the multiple forms of control (Edwards, 1979; Tompkins & Cheney, 1985), emphasising the power of organisational members in identification processes, using the notion of rhetorical organisational enthymemes. This highlights the value of the collaborative theoretical power of identity, communication and performance measurement frameworks. These case studies provide practical insights into the public sector, where existing bureaucracy and desired organisational identity directions are competing within a large organisational setting. Further research on personal identity and simple control in organisations that fully cascade the BSC down to individual members would provide enriched data. The extended application of the conceptual framework to other public and private sector organisations with a longitudinal view will also contribute to further theory building.

Relevance: 20.00%

Abstract:

Information overload and mismatch are two fundamental problems affecting the effectiveness of information filtering systems. Even though both term-based and pattern-based approaches have been proposed to address these problems, neither approach alone provides a satisfactory solution. This paper presents a novel two-stage information filtering model which combines the merits of term-based and pattern-based approaches to effectively filter large volumes of information. In particular, the first filtering stage is supported by a novel rough analysis model which efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model which effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. Experimental results based on the RCV1 corpus show that the proposed two-stage filtering model significantly outperforms both the term-based and pattern-based information filtering models.
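The abstract gives no implementation detail, so the sketch below only illustrates the two-stage shape of the model: a cheap term-based score prunes clearly irrelevant documents (overload), then a costlier pattern match ranks the survivors against the user's information need (mismatch). The profile terms, weights and patterns are invented for illustration; the paper's rough analysis and pattern taxonomy mining models are considerably richer.

```python
# Illustrative two-stage filter: stage 1 discards documents with low term
# overlap against a topic profile; stage 2 ranks survivors by matching
# multi-word patterns assumed to have been mined from relevant documents.
from collections import Counter

PROFILE_TERMS = {"stereo": 2.0, "vision": 1.5, "matching": 1.0}  # toy profile
PATTERNS = [("stereo", "vision"), ("image", "matching")]         # toy patterns

def stage1_score(tokens):
    """Cheap term-based score: weighted count of profile terms."""
    counts = Counter(tokens)
    return sum(w * counts[t] for t, w in PROFILE_TERMS.items())

def stage2_score(tokens):
    """Pattern-based score: count adjacent token pairs matching mined patterns."""
    bigrams = set(zip(tokens, tokens[1:]))
    return sum(1 for p in PATTERNS if p in bigrams)

def two_stage_filter(docs, threshold=1.0):
    survivors = [d for d in docs if stage1_score(d.split()) >= threshold]   # overload
    return sorted(survivors, key=lambda d: stage2_score(d.split()), reverse=True)  # mismatch

docs = ["stereo vision systems use image matching", "cooking recipes for pasta"]
print(two_stage_filter(docs))
```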

Relevance: 20.00%

Abstract:

The Guardian’s reportage of the 2009 United Kingdom Member of Parliament (MP) expenses scandal used crowdsourcing and computational journalism techniques. Computational journalism can be broadly defined as the application of computer science techniques to the activities of journalism. Its foundation lies in computer-assisted reporting techniques, and its importance is increasing due to: (a) the increasing availability of large-scale government datasets for scrutiny; (b) the declining cost, increasing power and ease of use of data mining and filtering software, and Web 2.0; and (c) the explosion of online public engagement and opinion. This paper provides a case study of the Guardian’s MP expenses scandal reportage and reveals some key challenges and opportunities for digital journalism. It finds that journalists may increasingly take an active role in understanding, interpreting, verifying and reporting clues or conclusions that arise from the interrogation of datasets (computational journalism). Secondly, a distinction should be made between information reportage and computational journalism in the digital realm, just as a distinction might be made between citizen reporting and citizen journalism. Thirdly, an opportunity exists for online news providers to take a ‘curatorial’ role, selecting and making easily available the best data sources for readers to use (information reportage). These activities have always been fundamental to journalism; however, the way in which they are undertaken may change. The findings suggest opportunities and challenges for the implementation of computational journalism techniques in practice by digital Australian media providers, and further areas of research.

Relevance: 20.00%

Abstract:

The following report considers a number of key challenges the Australian Federal Government faces in designing the regulatory framework and the reach of its planned mandatory internet filter. Previous reports on the mandatory filtering scheme have concentrated on the filtering technologies, their efficacy, their cost and their likely impact on the broadband environment. This report focuses on the scope and the nature of content that is likely to be caught by the proposed filter and on identifying associated public policy implications.

Relevance: 20.00%

Abstract:

Due to changing attitudes and lifestyles, people now expect to find new partners and friends in a variety of ways. Online dating networks allow people to meet and make contact with the objective of developing a personal, romantic or sexual relationship. To meet users’ rising expectations, online matching companies are adopting recommender systems. However, existing recommendation techniques such as content-based, collaborative filtering or hybrid techniques focus on users’ explicit contact behaviours but ignore the implicit relationships among users in the network. This paper proposes a social matching system that uses past relations and user similarities in finding potential matches. The proposed system is evaluated on a dataset collected from an online dating network. Empirical analysis shows that the recommendation success rate increased to 31%, compared to the baseline success rate of 19%.
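As a rough illustration of "past relations plus user similarity" (the abstract does not specify the model), the sketch below blends direct profile similarity with evidence from a candidate's past successful contacts. The function names, weights and blending rule are assumptions, not the paper's method.

```python
# Hypothetical social-matching score: combine profile similarity with
# "is user a similar to people b successfully contacted before?" evidence.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def match_score(a, b, profiles, past_partners, alpha=0.5):
    """Blend direct similarity with implicit past-relation evidence."""
    direct = cosine(profiles[a], profiles[b])
    implicit = max((cosine(profiles[a], profiles[p])
                    for p in past_partners.get(b, [])), default=0.0)
    return alpha * direct + (1 - alpha) * implicit

profiles = {1: np.array([1.0, 0.0, 1.0]), 2: np.array([0.9, 0.1, 0.8]),
            3: np.array([0.0, 1.0, 0.0])}
past_partners = {2: [1]}          # user 2 previously had success with user 1
print(match_score(3, 2, profiles, past_partners))
```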

Relevance: 20.00%

Abstract:

Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
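The matching metrics named in the report are standard and easy to sketch. Below is a minimal, illustrative implementation of the Sum of Absolute Differences and a census-based disparity search; the window size, disparity range and synthetic test data are arbitrary choices, not the report's.

```python
# SAD is cheap but sensitive to radiometric distortion; the census transform
# compares each pixel against the window centre, making matching robust to
# monotonic intensity changes at low computational cost.
import numpy as np

def sad(left_block, right_block):
    """Sum of Absolute Differences matching cost."""
    return np.abs(left_block.astype(int) - right_block.astype(int)).sum()

def census(window):
    """Census transform: bit-string of comparisons against the centre pixel."""
    centre = window[window.shape[0] // 2, window.shape[1] // 2]
    return (window < centre).flatten()

def best_disparity(left, right, row, col, half=2, max_disp=16):
    """Scan candidate disparities along the scanline, minimising census Hamming cost."""
    ref = census(left[row-half:row+half+1, col-half:col+half+1])
    costs = []
    for d in range(max_disp):
        c = col - d
        if c - half < 0:
            break
        cand = census(right[row-half:row+half+1, c-half:c+half+1])
        costs.append(int(np.count_nonzero(ref != cand)))  # Hamming distance
    return int(np.argmin(costs))

rng = np.random.default_rng(0)
right = rng.integers(0, 255, (32, 64)).astype(np.uint8)
left = np.roll(right, 5, axis=1)              # synthetic 5-pixel disparity
print(best_disparity(left, right, 16, 40))    # expect 5
```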

Relevance: 20.00%

Abstract:

Vector field visualisation is one of the classic sub-fields of scientific data visualisation. The need for effective visualisation of flow data arises in many scientific domains, ranging from the medical sciences to aerodynamics. Though there has been much research on the topic, the question of how to communicate flow information effectively in real, practical situations is still largely an unsolved problem. This is particularly true for complex 3D flows. In this presentation we give a brief introduction and background to vector field visualisation and comment on the effectiveness of the most common solutions. We then give some examples of current developments in texture-based techniques, and practical examples of their use in CFD research and hydrodynamic applications.
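Texture-based flow visualisation is typified by line integral convolution (LIC); since the presentation does not name a specific algorithm, the toy LIC below is only indicative: white noise is smeared along streamlines so that coherent streaks trace the flow.

```python
# Toy line integral convolution: average a noise texture along short
# streamline segments traced through the vector field.
import numpy as np

def lic(vx, vy, noise, length=10):
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for sign in (1.0, -1.0):            # integrate forward and backward
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(py), int(px)
                    if not (0 <= i < h and 0 <= j < w):
                        break
                    acc += noise[i, j]; n += 1
                    mag = np.hypot(vx[i, j], vy[i, j]) + 1e-9
                    px += sign * vx[i, j] / mag  # unit step along the field
                    py += sign * vy[i, j] / mag
            out[y, x] = acc / max(n, 1)
    return out

h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
vx, vy = -(yy - h / 2.0), (xx - w / 2.0)         # circular test flow
img = lic(vx, vy, np.random.default_rng(1).random((h, w)))
print(img.shape, round(img.min(), 3), round(img.max(), 3))
```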

Relevance: 20.00%

Abstract:

This paper presents a framework for performing real-time recursive estimation of landmarks’ visual appearance. Imaging data in its original high-dimensional space is probabilistically mapped to a compressed low-dimensional space through the definition of likelihood functions. The likelihoods are subsequently fused with prior information using a Bayesian update. This process produces a probabilistic estimate of the low-dimensional representation of the landmark’s visual appearance. The overall filtering provides information complementary to the conventional position estimates, which is used to enhance data association. In addition to robotic observations, the filter integrates human observations into the appearance estimates. The appearance tracks computed by the filter allow landmark classification. The set of labels involved in the classification task is thought of as an observation space in which human observations are made by selecting a label. The low-dimensional appearance estimates returned by the filter allow for low-cost communication in low-bandwidth sensor networks. Deployment of the filter in such a network is demonstrated in an outdoor mapping application involving a human operator, a ground vehicle and an air vehicle.
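The core recursive step is a standard Bayesian update. The toy below shows that update over a discrete label space, including a simulated human observation; the labels and likelihood values are invented, and the paper's filter operates on a learned low-dimensional appearance representation rather than raw labels.

```python
# Minimal discrete Bayesian update: posterior ∝ likelihood × prior,
# renormalised over the set of appearance classes.
import numpy as np

def bayes_update(prior, likelihood):
    post = prior * likelihood
    return post / post.sum()

labels = ["tree", "rock", "car"]
belief = np.array([1/3, 1/3, 1/3])                          # uninformative prior
belief = bayes_update(belief, np.array([0.7, 0.2, 0.1]))    # robot image likelihood
belief = bayes_update(belief, np.array([0.9, 0.05, 0.05]))  # human selects "tree"
print(dict(zip(labels, belief.round(3))))
```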

Relevance: 20.00%

Abstract:

Road surface macro-texture is an indicator used to determine skid resistance levels in pavements. Existing methods of quantifying macro-texture include the sand patch test and the laser profilometer. These methods utilise the 3D information of the pavement surface to extract the average texture depth. Recently, interest has arisen in image processing techniques as a quantifier of macro-texture, mainly using the Fast Fourier Transform (FFT). This paper reviews the FFT method, and then proposes two new methods, one using the autocorrelation function and the other using wavelets. The methods are tested on images obtained from a pavement surface extending more than 2 km. About 200 images were acquired from the surface at approximately 10 m intervals, from a height of 80 cm above the ground. The results obtained from the image analysis methods using the FFT, the autocorrelation function and wavelets are compared with sensor-measured texture depth (SMTD) data obtained from the same paved surface. The results indicate that coefficients of determination (R2) exceeding 0.8 are obtained when up to 10% of outliers are removed.
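As an illustration of the FFT route (the paper's exact indicator is not given in the abstract), one simple proxy for macro-texture is the share of spectral energy at high spatial frequencies; a rougher surface pushes energy towards higher frequencies. The cutoff and test images below are assumptions.

```python
# Toy FFT texture indicator: fraction of non-DC spectral energy above a
# normalised spatial-frequency cutoff.
import numpy as np

def high_freq_energy_ratio(img, cutoff=0.25):
    spectrum = np.abs(np.fft.fft2(img)) ** 2
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    radius = np.hypot(fy, fx)
    total = spectrum[radius > 0].sum()          # exclude the DC term
    return spectrum[radius > cutoff].sum() / total

rng = np.random.default_rng(2)
smooth = rng.random((64, 64)).cumsum(axis=1)    # low-frequency dominated surface
rough = rng.random((64, 64))                    # noise-like rough surface
print(high_freq_energy_ratio(smooth), high_freq_energy_ratio(rough))
```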

Relevance: 20.00%

Abstract:

Eigen-based techniques and other monolithic approaches to face recognition have long been a cornerstone of the face recognition community due to the high dimensionality of face images. Eigen-face techniques provide minimal reconstruction error and limit high-frequency content, while linear discriminant-based techniques (Fisher-faces) allow the construction of subspaces which preserve discriminatory information. This paper presents a frequency decomposition approach for improved face recognition performance utilising three well-known techniques: wavelets; Gabor / Log-Gabor; and the Discrete Cosine Transform. Experimentation illustrates that frequency domain partitioning prior to dimensionality reduction increases the information available for classification and greatly improves face recognition performance for both eigen-face and Fisher-face approaches.
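A minimal sketch of the general idea, under assumed details: decompose each face into frequency bands with a 2D DCT, then build an eigen-subspace per band, so dimensionality reduction acts on frequency-separated features. The block sizes and subspace dimensions are illustrative, not the paper's.

```python
# Frequency partitioning (DCT bands) before PCA ("eigenface") projection.
import numpy as np
from scipy.fft import dctn

def frequency_partition(face, k=8):
    """Split a face's 2D DCT into low- and higher-frequency feature vectors."""
    coeffs = dctn(face, norm="ortho")
    low = coeffs[:k, :k].flatten()                       # coarse appearance
    high = np.concatenate([coeffs[k:2*k, :k].flatten(),
                           coeffs[:k, k:2*k].flatten()])  # finer detail bands
    return low, high

def eigenspace(X, dims=5):
    """PCA projection matrix from stacked feature vectors."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:dims]

rng = np.random.default_rng(3)
faces = rng.random((20, 32, 32))                 # stand-in for aligned face images
lows = np.stack([frequency_partition(f)[0] for f in faces])
print(eigenspace(lows).shape)                    # (5, 64) low-band projection
```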

Relevance: 20.00%

Abstract:

In order to achieve meaningful reductions in individual ecological footprints, individuals must dramatically alter their day-to-day behaviours. Effective interventions will need to be evidence based, and there is a need for the rapid transfer and communication of information from the point of research into policy and practice. A number of health disciplines, including psychology and public health, share a common mission to promote health and well-being, and it is becoming clear that the most practical pathway to achieving this mission is through interdisciplinary collaboration. This paper argues that an interdisciplinary collaborative approach will facilitate research that results in the rapid transfer of findings into policy and practice. The application of this approach is described in relation to the Green Living project, which explored the psycho-social predictors of environmentally friendly behaviour. Following a qualitative pilot study, and in consultation with an expert panel comprising academics, industry professionals and government representatives, a self-administered mail survey was distributed to a random sample of 3000 residents of Brisbane and Moreton Bay (Queensland, Australia). The Green Living survey explored specific beliefs, including attitudes, norms, perceived control, intention and behaviour, as well as a number of other constructs such as environmental concern and altruism. This research has two beneficial outcomes. First, it will inform a practical model for predicting sustainable living behaviours, and a number of local councils have already expressed an interest in making use of the results as part of their ongoing community engagement programs. Second, it provides an example of how a collaborative interdisciplinary project can provide a more comprehensive approach to research than can be accomplished by a single-discipline project.

Relevance: 20.00%

Abstract:

Recommender systems are one of the recent inventions to deal with ever-growing information overload. Collaborative filtering is perhaps the most popular technique in recommender systems; with sufficient background information on item ratings, its performance is promising. However, research shows that it performs very poorly in cold-start situations, where previous rating data are sparse. As an alternative, trust can be used for neighbour formation to generate automated recommendations. Explicit user-assigned trust ratings, such as how much users trust each other, are used for this purpose. However, reliable explicit trust data is not always available. In this paper we propose a new method of developing trust networks based on users’ interest similarity in the absence of explicit trust data. To identify interest similarity, we use users’ personalised tagging information. This trust network can be used to find neighbours for making automated recommendations. Our experimental results show that the proposed trust-based method outperforms the traditional collaborative filtering approach, which uses user rating data. Its performance improves even further when we utilise trust propagation techniques to broaden the neighbourhood.
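A minimal sketch of tag-based trust formation, assuming Jaccard overlap of tag sets as the interest-similarity measure (the paper's measure and neighbourhood size may differ):

```python
# Implicit trust links from tag overlap: rank other users by Jaccard
# similarity of their tag sets and keep the top-k as trusted neighbours.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def trust_neighbours(user, tags, k=2):
    scores = [(other, jaccard(tags[user], tags[other]))
              for other in tags if other != user]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

tags = {
    "alice": {"python", "ml", "hiking"},
    "bob":   {"python", "ml", "chess"},
    "carol": {"knitting", "chess"},
}
print(trust_neighbours("alice", tags))  # bob first: most shared interests
```

These neighbours can then stand in for rating-based neighbours in an otherwise standard collaborative filtering step, which is what makes the approach usable in a cold-start setting.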

Relevance: 20.00%

Abstract:

Understanding the motion characteristics of on-site objects is desirable for the analysis of construction work zones, especially in problems related to safety and productivity studies. This article presents a methodology for rapid object identification and tracking. The proposed methodology contains algorithms for spatial modeling and image matching. A high-frame-rate range sensor was utilized for spatial data acquisition. The experimental results indicated that an occupancy grid spatial modeling algorithm could quickly build a suitable work zone model from the acquired data. The results also showed that an image matching algorithm is able to find the most similar object from a model database and from spatial models obtained from previous scans. It is then possible to use the matched information to successfully identify and track objects.
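Occupancy grid modelling is straightforward to sketch; the toy below bins range returns into cells, which is the flavour of model the article builds at high frame rates. The cell size, extent and input format are assumptions, not the article's parameters.

```python
# Toy occupancy grid: cells containing at least one range return are marked
# occupied, giving a coarse spatial model of the work zone.
import numpy as np

def occupancy_grid(points, cell=0.5, extent=10.0):
    n = int(extent / cell)
    grid = np.zeros((n, n), dtype=bool)
    for x, y in points:
        i, j = int(y / cell), int(x / cell)
        if 0 <= i < n and 0 <= j < n:
            grid[i, j] = True
    return grid

scan = [(1.2, 3.4), (1.4, 3.5), (7.9, 2.1)]   # synthetic (x, y) returns in metres
print(int(occupancy_grid(scan).sum()), "occupied cells")
```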

Relevance: 20.00%

Abstract:

A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources, the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into a 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
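The abstract's description of Rank Sum is brief, so the following is one hedged reading rather than the thesis's actual algorithm: per feature, class-conditional bin densities are computed from training data; at prediction time each class is ranked by its density in the observed bin, and the ranks are summed across features.

```python
# One possible reading of a rank-sum classifier over per-feature bin densities.
import numpy as np

class RankSum:
    def fit(self, X, y, bins=5):
        self.classes = np.unique(y)
        self.edges, self.dens = [], []
        for f in range(X.shape[1]):
            edges = np.histogram_bin_edges(X[:, f], bins=bins)
            self.edges.append(edges)
            # per-class normalised bin densities for this feature
            self.dens.append(np.stack([
                np.histogram(X[y == c, f], bins=edges, density=True)[0]
                for c in self.classes]))
        return self

    def predict(self, X):
        out = []
        for x in X:
            total = np.zeros(len(self.classes))
            for f, v in enumerate(x):
                b = np.clip(np.searchsorted(self.edges[f], v) - 1,
                            0, self.dens[f].shape[1] - 1)
                total += self.dens[f][:, b].argsort().argsort()  # rank classes by density
            out.append(self.classes[total.argmax()])
        return np.array(out)

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(2, 1, (50, 3))])  # toy metrics
y = np.array([0] * 50 + [1] * 50)                                      # 1 = fault-prone
print((RankSum().fit(X, y).predict(X) == y).mean())  # training accuracy
```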