222 results for Misalignment
Abstract:
Managing through projects has become important for generating new knowledge to cope with technological and market discontinuities. This paper examines how the fit between the creation of technological and market knowledge and important project management characteristics, i.e. project autonomy and completion criteria, influences the success of new business development (NBD) projects. In-depth longitudinal case research on NBD projects commercialised from 1993 to 2003 in the consumer electronics industry highlights that project management characteristics focusing only on the creation of technological knowledge contributed to the failure of those NBD projects that required new market knowledge as well. The findings indicate that senior management support and engaging in an alliance with partners possessing complementary market knowledge can offset this misalignment of the organisation of NBD projects.
Abstract:
Visual localization systems that are practical for autonomous vehicles in outdoor industrial applications must perform reliably in a wide range of conditions. Changing outdoor conditions cause difficulty by drastically altering the information available in the camera images. To confront the problem, we have developed a visual localization system that uses a surveyed three-dimensional (3D) edge map of permanent structures in the environment. The map has the invariant properties necessary to achieve long-term robust operation. Previous 3D edge-map localization systems usually maintain a single pose hypothesis, making it difficult to initialize without an accurate prior pose estimate and also making them susceptible to misalignment with unmapped edges detected in the camera image. A multihypothesis particle filter is employed here to perform the initialization procedure under significant uncertainty in the vehicle's initial pose. A novel observation function for the particle filter is developed and evaluated against two existing functions. The new function is shown to further improve the ability of the particle filter to converge given a very coarse estimate of the vehicle's initial pose. An intelligent exposure control algorithm is also developed that improves the quality of the pertinent information in the image. Results gathered over an entire sunny day and also during rainy weather illustrate that the localization system can operate in a wide range of outdoor conditions. The conclusion is that an invariant map, a robust multihypothesis localization algorithm, and an intelligent exposure control algorithm combine to enable reliable visual localization through challenging outdoor conditions.
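The abstract above does not give the localization algorithm in detail, so the following is only a minimal sketch of a multihypothesis particle-filter step of the kind described: predict with motion noise, weight each pose hypothesis with an edge-based observation function, and resample. The `edge_residual` helper, the Gaussian weighting and all parameter values are hypothetical stand-ins, not the paper's observation function or map.

```python
# Minimal particle-filter localization sketch (illustrative only).
import numpy as np

def edge_residual(pose, image_edges):
    # Hypothetical stand-in: the real system would compare edges predicted
    # from the surveyed 3D-edge map at `pose` with edges detected in the
    # camera image. Here we use a toy point-to-pose distance misfit.
    return np.min(np.linalg.norm(image_edges - pose[:2], axis=1))

def particle_filter_step(particles, weights, motion, image_edges,
                         motion_noise=(0.1, 0.1, 0.02), sigma=2.0):
    rng = np.random.default_rng()
    # Predict: apply odometry and diffuse particles to cover pose uncertainty.
    particles = particles + motion + rng.normal(0.0, motion_noise, particles.shape)
    # Update: weight each pose hypothesis by how well map edges explain the image.
    residuals = np.array([edge_residual(p, image_edges) for p in particles])
    weights = weights * np.exp(-0.5 * (residuals / sigma) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```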
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and straightforward quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task because of the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we show a new property, the polynomial-order-reducing property of adaptive lattice filters, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also shown empirically that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signal, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which gives desirable results for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes, because it relies on the minimum mean-square error criterion. To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation for autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and their true values. Because stable processes have infinite variance, the performance of the proposed algorithms is investigated only through extensive computer simulations.
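The exact least-mean P-norm lattice recursions are not given in the abstract; the sketch below is only one plausible reading, combining a standard gradient adaptive lattice stage with a fractional lower-order-moment (l_p) gradient in place of the usual mean-square gradient. The function names, step size and value of p are assumptions.

```python
# Sketch of a single gradient adaptive lattice stage with an l_p-flavoured
# coefficient update for heavy-tailed (alpha-stable) inputs. Illustrative only;
# not the thesis's exact least-mean P-norm lattice algorithm.
import numpy as np

def lp_grad(e, p):
    """Gradient of |e|^p with respect to e (real-valued case)."""
    return p * np.sign(e) * np.abs(e) ** (p - 1)

def gal_stage_lp(f_prev, b_prev_delayed, k, mu=0.01, p=1.2):
    """One lattice stage: propagate forward/backward prediction errors and
    update the reflection coefficient k with an l_p gradient step."""
    f = f_prev + k * b_prev_delayed          # forward prediction error
    b = b_prev_delayed + k * f_prev          # backward prediction error
    k = k - mu * (lp_grad(f, p) * b_prev_delayed + lp_grad(b, p) * f_prev)
    return f, b, k
```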
Abstract:
Inspection of solder joints has been a critical process in the electronic manufacturing industry to reduce manufacturing cost, improve yield, and ensure product quality and reliability. The solder joint inspection problem is more challenging than many other visual inspections because of the variability in the appearance of solder joints. Although much research and many techniques have been developed to classify defects in solder joints, these methods rely on complex illumination systems for image acquisition and complicated classification algorithms. An important stage of the analysis is to select the right classification method. Better inspection technologies are needed to fill the gap between available inspection capabilities and industry requirements. This dissertation aims to provide a solution that can overcome some of the limitations of current inspection techniques. This research proposes two inspection stages for an automatic solder joint classification system. The “front-end” inspection system includes illumination normalisation, localisation and segmentation. The illumination normalisation approach can effectively and efficiently eliminate the effect of uneven illumination while preserving the properties of the processed image. The “back-end” inspection involves the classification of solder joints using the Log Gabor filter and classifier fusion. Five different levels of solder quality with respect to the amount of solder paste have been defined. The Log Gabor filter has been demonstrated to achieve high recognition rates and is resistant to misalignment. Further testing demonstrates the advantage of the Log Gabor filter over both the Discrete Wavelet Transform and the Discrete Cosine Transform. Classifier score fusion is analysed for improving the recognition rate. Experimental results demonstrate that the proposed system improves performance and robustness in terms of classification rates. The proposed system does not need any special illumination system, and the images are acquired by an ordinary digital camera; in fact, the choice of suitable features overcomes the limitations imposed by using a simple illumination setup. The new system proposed in this research can be incorporated in the development of an automated, non-contact, non-destructive and low-cost solder joint quality inspection system.
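As an illustration of the kind of feature extraction the abstract refers to, the snippet below builds the standard radial Log Gabor transfer function in the frequency domain and filters an image with it. The dissertation's actual filter-bank parameters, orientation components and classifier fusion stage are not reproduced, and `f0` and `sigma_ratio` are assumed values.

```python
# Illustrative Log Gabor filtering sketch (radial component only).
import numpy as np

def log_gabor_response(image, f0=0.1, sigma_ratio=0.55):
    rows, cols = image.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term
    log_gabor = np.exp(-(np.log(radius / f0) ** 2) /
                       (2 * np.log(sigma_ratio) ** 2))
    log_gabor[0, 0] = 0.0                   # Log Gabor has no DC component
    # Complex response; the magnitude is often used as a feature that is
    # relatively tolerant to small misalignments.
    response = np.fft.ifft2(np.fft.fft2(image) * log_gabor)
    return np.abs(response)
```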
Abstract:
The task addressed in this thesis is the automatic alignment of an ensemble of misaligned images in an unsupervised manner. This is especially useful in computer vision applications where annotations of the shape of an object of interest in a collection of images are required. Performing this task manually is a slow, tedious, expensive and error-prone process, which hinders the progress of research laboratories and businesses. Most recently, the unsupervised removal of geometric variation present in a collection of images has been referred to as congealing, based on the seminal work of Learned-Miller [21]. The only assumptions made in congealing are that the parametric nature of the misalignment is known a priori (e.g. translation, similarity, affine, etc.) and that the object of interest is guaranteed to be present in each image. The capability to congeal an ensemble of misaligned images stemming from the same object class has numerous applications in object recognition, detection and tracking. This thesis concerns itself with the construction of a congealing algorithm, titled least-squares congealing, which is inspired by the well-known image-to-image alignment algorithm developed by Lucas and Kanade [24]. The algorithm is shown to have superior performance characteristics when compared to previously established methods: canonical congealing by Learned-Miller [21] and stochastic congealing by Zöllei [39].
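A rough flavour of congealing, under strong simplifying assumptions (translation-only warps and a brute-force integer search rather than a Lucas-Kanade style linearisation), is sketched below. It is not the thesis's least-squares congealing algorithm, only an illustration of repeatedly re-aligning each image to the mean of the others in a least-squares sense.

```python
# Minimal translation-only congealing sketch (illustrative only).
import numpy as np
from scipy.ndimage import shift as nd_shift

def congeal_translations(images, n_iters=10, max_step=1):
    images = [img.astype(float) for img in images]
    offsets = [np.zeros(2) for _ in images]
    for _ in range(n_iters):
        for i, img in enumerate(images):
            # Leave-one-out template: mean of all other (currently warped) images.
            template = np.mean([nd_shift(images[j], offsets[j])
                                for j in range(len(images)) if j != i], axis=0)
            # Brute-force search over small integer shifts for the best SSD fit.
            best = min(((dy, dx) for dy in range(-max_step, max_step + 1)
                                 for dx in range(-max_step, max_step + 1)),
                       key=lambda d: np.sum((nd_shift(img, offsets[i] + np.array(d))
                                             - template) ** 2))
            offsets[i] = offsets[i] + np.array(best)
    return offsets
```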
Abstract:
The use of appropriate financial incentives within construction projects can contribute to strong alignment of project stakeholder motivation with project goals. However, effective incentive system design can be a challenging task and takes skillful planning by client managers in the early stages of a project. In response to a lack of information currently available to construction clients in this area, this paper explores the features of a successful incentive system and identifies key learnings for client managers to consider when designing incentives. Our findings, based on data from a large Australian case study, suggest that key stakeholders place greater emphasis on the project management processes that support incentives than on the incentive itself. Further, contractors need adequate time and information to accurately estimate construction costs prior to their tender price submission to ensure cost-focused incentive goals remain achievable. Thus, client managers should be designing incentives as part of a supportive procurement strategy to maximize project stakeholder motivation and prevent goal misalignment.
Abstract:
The pervasiveness of technology in the 21st Century has meant that adults and children live in a society where digital devices are integral to their everyday lives and participation in society. How we communicate, learn, work, entertain ourselves, and even shop is influenced by technology. Therefore, before children begin school they are potentially exposed to a range of learning opportunities mediated by digital devices. These devices include microwaves, mobile phones, computers, and console games such as Playstations® and iPods®. In Queensland preparatory classrooms and in the homes of these children, teachers and parents support and scaffold young children’s experiences, providing them with access to a range of tools that promote learning and provide entertainment. This paper examines teachers’ and parents’ perspectives and considers whether they are techno-optimists, who advocate for and promote the inclusion of digital technology, or techno-pessimists, who prefer to exclude digital devices from young children’s everyday experiences. An exploratory, single case study design was utilised to gather data from three teachers and ten parents of children in the preparatory year. Teacher data were collected through interviews and email correspondence. Parent data were collected from questionnaires and focus groups. All parents who responded to the research invitation were mothers. The results of the data analysis identified a misalignment between adults’ perspectives: teachers were identified as techno-optimists and parents as techno-pessimists, with further emergent themes particular to each category being established. This is concerning because both teachers and mothers influence young children’s experiences and numeracy knowledge; thus, a shared understanding and a common commitment to supporting young children’s use of technology would be beneficial. Further research must investigate fathers’ perspectives on digital devices and the beneficial and detrimental roles that a range of digital devices, tools, and entertainment gadgets play in 21st Century children’s lives.
Abstract:
Inspection of solder joints has been a critical process in the electronic manufacturing industry to reduce manufacturing cost, improve yield, and ensure product quality and reliability. This paper proposes two inspection modules for an automatic solder joint classification system. The “front-end” inspection system includes illumination normalisation, localisation and segmentation. The “back-end” inspection involves the classification of solder joints using the Log Gabor filter and classifier fusion. Five different levels of solder quality with respect to the amount of solder paste have been defined. The Log Gabor filter has been demonstrated to achieve high recognition rates and is resistant to misalignment. This proposed system does not need any special illumination system, and the images are acquired by an ordinary digital camera. This system could contribute to the development of automated non-contact, non-destructive and low cost solder joint quality inspection systems.
Abstract:
Product innovation is an important contributor to the performance of infrastructure projects in the construction industry. Maximizing the potential for innovative product adoption is a challenging task due to the complexities of the construction innovation system. A qualitative methodology involving interviews with major construction project stakeholders is employed to address the research question: ‘What are the main obstacles to the adoption of innovative products in the road industry?’ The characteristics of six key product innovation obstacles in Australian road projects are described. The six key obstacles are: project goal misalignment, client pressures, weak contractual relations, lack of product trialling, inflexible product specifications and product liability concerns. A snapshot of the dynamics underlying these obstacles is provided. There are few such assessments in the literature, despite the imperative to improve construction innovation rates globally in order to deliver road infrastructure projects of increasing size and complexity. Key obstacles are interpreted through an open innovation construct, providing direction for policy to enhance the uptake of innovation across the construction product supply network. Early evidence suggests the usefulness of an open innovation construct that integrates three conceptual lenses: network governance, absorptive capacity and knowledge intermediation, in order to interpret product adoption obstacles in the context of Australian road infrastructure projects. The paper also provides practical advice and direction for government and industry organizations that wish to promote the flow of innovative product knowledge across the construction supply network.
In the pursuit of effective affective computing: the relationship between features and registration
Abstract:
For facial expression recognition systems to be applicable in the real world, they need to be able to detect and track a previously unseen person's face and its facial movements accurately in realistic environments. A highly plausible solution involves performing a "dense" form of alignment, where 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment had so far been impossible to achieve in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a "coarse" form of face alignment, followed by an application of a biologically inspired appearance descriptor such as the histogram of oriented gradients or Gabor magnitudes. Encouragingly, recent advances in a number of dense alignment algorithms have demonstrated both high reliability and accuracy for unseen subjects [e.g., constrained local models (CLMs)]. This begs the question: aside from countering illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close to perfect alignment is obtained, there is no real benefit in employing these different appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors do work well by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment, subject-dependent active appearance models and subject-independent CLMs, on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available data sets (i.e., CK+, Pain, M3, and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis Challenge for the subject-independent task.
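A toy way to probe the paper's question, namely what an appearance descriptor buys over raw pixels under misalignment, is to measure how much each representation changes under small synthetic shifts. The sketch below does this with the HOG descriptor from scikit-image; the shift range and the use of HOG (rather than the paper's exact descriptors and data) are assumptions for illustration.

```python
# Compare sensitivity of raw pixels vs a HOG descriptor to small misalignments.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.feature import hog

def misalignment_sensitivity(face, max_shift=3):
    """Return the mean relative change of pixel and HOG representations
    under integer translations up to max_shift pixels."""
    def rel_change(a, b):
        return np.linalg.norm(a - b) / (np.linalg.norm(a) + 1e-12)
    pix_ref, hog_ref = face.ravel(), hog(face)
    pix_deltas, hog_deltas = [], []
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = nd_shift(face, (dy, dx))
            pix_deltas.append(rel_change(pix_ref, shifted.ravel()))
            hog_deltas.append(rel_change(hog_ref, hog(shifted)))
    return np.mean(pix_deltas), np.mean(hog_deltas)
```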
Abstract:
Organizations adopt a Supply Chain Management System (SCMS) expecting benefits to the organization and its functions. However, organizations are facing mounting challenges to realizing benefits through SCMS. Studies suggest a growing dissatisfaction among client organizations due to an increasing gap between expected and realized SCMS benefits. Further, reflecting Enterprise System studies such as Seddon et al. (2010), SCMS benefits are also expected to flow to the organization throughout its lifecycle rather than being realized all at once. This research therefore proposes to derive a lifecycle-wide understanding of SCMS benefits and their realization, and from it a benefit expectation management framework for attaining the full potential of an SCMS. The primary research question of this study is: How can client organizations better manage their benefit expectations of SCM systems? The specific research goals of the current study are: (1) to better understand the misalignment between received and expected benefits of SCM systems; (2) to identify the key factors influencing SCM system expectations and to develop a framework to manage SCMS benefits; (3) to explore how organizational satisfaction is influenced by the lack of SCMS benefit confirmation; and (4) to explore how to improve the realization of SCM system benefits. Expectation-Confirmation Theory (ECT) provides the theoretical underpinning for this study. ECT has been widely used in the consumer behavior literature to study customer satisfaction, post-purchase behavior and service marketing in general. Recently, ECT has been extended into Information Systems (IS) research, focusing on individual user satisfaction and IS continuance. However, only a handful of studies have employed ECT to study organizational satisfaction with large-scale IS. The current study enriches this research stream by extending ECT to organizational-level analysis and verifying the preliminary findings of relevant works by Staples et al. (2002), Nevo and Chan (2007) and Nevo and Wade (2007). Moreover, this study goes further by operationalizing the constructs of ECT in the context of SCMS. The empirical findings commence with a content analysis, through which 41 vendor and academic reports are analyzed, yielding sixty expected benefits of SCMS. These expected benefits are then compared with the benefits realized at a case organization in the Fast Moving Consumer Goods industry sector that had implemented an SAP Supply Chain Management System seven years earlier. The study develops an SCMS Benefit Expectation Management (SCMS-BEM) Framework. The comparison of benefit expectations and confirmations highlights that, while certain benefits are realized earlier in the lifecycle, other benefits can take almost a decade to realize. Further analysis and discussion of how the developed SCMS-BEM Framework influences ECT when applied to SCMS are also provided. It is recommended that, when establishing their expectations of the SCMS, clients remember that confirmation of these expectations has a long lifecycle, as shown in the different time periods of the SCMS-BEM Framework. Moreover, the SCMS-BEM Framework allows organizations to maintain high levels of satisfaction through careful mitigation and confirmation of expectations based on the lifecycle phase. In addition, the study reveals that different stakeholder groups have different expectations of the same SCMS.
The perspective of multiple stakeholders has significant implications for the application of ECT in the SCMS context. When forming expectations of the SCMS, the collection of organizational benefits of SCMS should represent the perceptions of all stakeholder groups, and the same mechanism should be employed in measuring received SCMS benefits. Moreover, for SCMS, the satisfaction of the various stakeholders is interdependent: the satisfaction of decision-makers or authorized staff is not only driven by their own expectation confirmation level, it is also influenced by the confirmation level of other stakeholders' expectations in the organization. Satisfaction from any one particular stakeholder group cannot reflect the true satisfaction of the client organization. Furthermore, it is inferred from the SCMS-BEM Framework that organizations should place emphasis on the viewpoints of operational and management staff when evaluating the benefits of SCMS in the short and medium term, and pay more attention to the perspectives of strategic staff when evaluating the performance of the SCMS in the long term.
Abstract:
Many methods currently exist for deformable face fitting. A drawback of nearly all these approaches is that (i) their landmark positions are noisy, and (ii) the noise is biased across frames (i.e. the misalignment is toward common directions across all frames). In this paper we propose a grouped $\mathcal{L}_1$-norm anchored method for simultaneously aligning an ensemble of deformable face images stemming from the same subject, given noisy heterogeneous landmark estimates. Impressive improvement and refinement in alignment performance are obtained using very weak initializations as "anchors".
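The paper's grouped $\mathcal{L}_1$ formulation is not reproduced here, but the robustness argument can be illustrated with a small sketch: estimating a per-frame translation from several noisy landmark sets by minimising a sum of per-landmark Euclidean residual norms (a grouped L1-type criterion) via iteratively reweighted least squares. All names and array shapes below are assumptions for illustration.

```python
# Robust (grouped L1-style) translation estimate from noisy landmark sets.
import numpy as np

def l1_translation(landmark_sets, reference, n_iters=20, eps=1e-6):
    """landmark_sets: (n_estimates, n_points, 2) noisy estimates for one frame;
    reference: (n_points, 2) anchor shape. Returns a robust 2D translation."""
    residual_src = landmark_sets - reference        # translation implied per landmark
    t = np.zeros(2)
    for _ in range(n_iters):
        r = residual_src - t                        # residuals at the current estimate
        # Reweighting by inverse residual norm approximates the L1 (sum-of-norms)
        # objective, so outlying or biased landmark estimates are down-weighted.
        w = 1.0 / np.maximum(np.linalg.norm(r, axis=-1, keepdims=True), eps)
        t = (w * residual_src).sum(axis=(0, 1)) / w.sum()
    return t
```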
Abstract:
Different types of HTS joints between Bi-2212/Ag tapes and laminates, fabricated by dip-coating and partial-melt processes, have been investigated. All joints are prepared using green single and laminated tapes according to the scheme coating-joining-processing. The heat-treated tapes have critical currents (Ic) between 7 and 27 A, depending on tape thickness and the number of Bi-2212 ceramic layers in the laminated tapes. It is found that the current transport properties of the joints depend on the type of laminate, the joint configuration and the joint treatment. Ic losses in joints of Bi-2212 tapes and laminates are attributed to defects in their structure, such as pores, secondary phases and misalignment of Bi-2212 grains near the Ag edges. By optimizing the joint configuration, current transmission of up to 100% is achieved for both single and laminated tapes.
Abstract:
In the field of face recognition, Sparse Representation (SR) has received considerable attention during the past few years. Most of the relevant literature focuses on holistic descriptors in closed-set identification applications. The underlying assumption in SR-based methods is that each class in the gallery has sufficient samples and that the query lies in the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the more challenging face verification scenario, where an algorithm is required to determine whether two faces (where one or both have not been seen before) belong to the same person. In this paper, we first discuss why previous attempts with SR might not be applicable to verification problems. We then propose an alternative approach to face verification via SR. Specifically, we propose to use explicit SR encoding on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which are then concatenated to form an overall face descriptor. Due to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, we evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN), and an implicit probabilistic technique based on Gaussian Mixture Models. Thorough experiments on the AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the proposed local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, in both verification and closed-set identification problems. The experiments also show that l1-minimisation based encoding has a considerably higher computational cost than the other techniques, but leads to higher recognition rates.
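A hedged sketch of the local SR descriptor pipeline described above, patch-wise sparse encoding followed by average pooling per region and concatenation, is given below using scikit-learn's `sparse_encode`. The dictionary (rows are atoms of length patch_size[0]*patch_size[1]), patch size, region grid and OMP sparsity level are assumptions, not the paper's settings.

```python
# Local sparse-representation face descriptor sketch (illustrative only).
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import sparse_encode

def local_sr_descriptor(face, dictionary, patch_size=(8, 8), grid=(2, 2)):
    h, w = face.shape
    gh, gw = grid
    region_descs = []
    for i in range(gh):
        for j in range(gw):
            region = face[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            patches = extract_patches_2d(region, patch_size)
            patches = patches.reshape(len(patches), -1).astype(float)
            patches -= patches.mean(axis=1, keepdims=True)   # remove DC per patch
            codes = sparse_encode(patches, dictionary, algorithm='omp',
                                  n_nonzero_coefs=5)
            # Average pooling discards spatial layout inside the region,
            # which is what gives robustness to misalignment.
            region_descs.append(np.abs(codes).mean(axis=0))
    return np.concatenate(region_descs)
```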