205 results for Task Complexity
Abstract:
In this paper we extend the concept of speaker annotation within a single recording, or speaker diarization, to a collection-wide approach we call speaker attribution. Accordingly, speaker attribution is the task of clustering expectedly homogeneous inter-session clusters obtained using diarization according to common cross-recording identities. The result of attribution is a collection of spoken audio across multiple recordings attributed to speaker identities. In this paper, an attribution system is proposed using mean-only MAP adaptation of a combined-gender UBM to model clusters from a perfect diarization system, as well as a JFA-based system with session variability compensation. The normalized cross-likelihood ratio is calculated for each pair of clusters to construct an attribution matrix, and the complete-linkage algorithm is employed to cluster the inter-session clusters. A matched cluster purity and coverage of 87.1% was obtained on the NIST 2008 SRE corpus.
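To make the clustering step concrete, the sketch below applies complete-linkage clustering to an illustrative pairwise NCLR matrix; the matrix values, the threshold, and the similarity-to-distance conversion are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: complete-linkage clustering of diarization clusters using a
# precomputed pairwise normalized cross-likelihood ratio (NCLR) matrix.
# Assumes `nclr` is symmetric, with larger values indicating a higher chance
# that two clusters belong to the same speaker; the threshold is illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def attribute_speakers(nclr: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Group per-recording clusters into cross-recording speaker identities."""
    # Convert the similarity (NCLR) matrix into a dissimilarity matrix.
    dist = nclr.max() - nclr
    np.fill_diagonal(dist, 0.0)
    # Complete linkage operates on a condensed distance vector.
    condensed = squareform(dist, checks=False)
    tree = linkage(condensed, method="complete")
    # Cut the dendrogram at the distance corresponding to the NCLR threshold.
    cut = nclr.max() - threshold
    return fcluster(tree, t=cut, criterion="distance")

# Example: 4 diarization clusters, where clusters 0/1 and 2/3 share a speaker.
nclr = np.array([[ 0.0,  3.1, -2.0, -1.5],
                 [ 3.1,  0.0, -1.8, -2.2],
                 [-2.0, -1.8,  0.0,  2.7],
                 [-1.5, -2.2,  2.7,  0.0]])
print(attribute_speakers(nclr, threshold=0.0))  # e.g. [1 1 2 2]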
Abstract:
The literature abounds with descriptions of failures in high-profile projects, and a range of initiatives has been generated to enhance project management practice (e.g., Morris, 2006). Estimating from our own research, there are scores of other project failures that go unrecorded. Many of these failures can be explained using existing project management theory: poor risk management, inaccurate estimating, cultures of optimism dominating decision making, stakeholder mismanagement, inadequate timeframes, and so on. Nevertheless, in spite of extensive discussion and analysis of failures and attention to the presumed causes of failure, projects continue to fail in unexpected ways. In the 1990s, three U.S. state departments of motor vehicles (DMVs) cancelled major projects due to time and cost overruns and an inability to meet project goals (IT-Cortex, 2010). The California DMV failed to revitalize its drivers’ license and registration application process after spending $45 million. The Oregon DMV cancelled its five-year, $50 million project to automate its manual, paper-based operation after three years, when the estimates had grown to $123 million, its duration had stretched to eight years or more, and the prototype was a complete failure. In 1997, the Washington State DMV cancelled its license application mitigation project because it would have been too big and obsolete by the time it was estimated to be finished. There are countless similar examples of projects that have been abandoned or that have not delivered their requirements.
Abstract:
A forced landing is an unscheduled event in flight requiring an emergency landing, and is most commonly attributed to engine failure, failure of avionics or adverse weather. Since the ability to conduct a successful forced landing is the primary indicator for safety in the aviation industry, automating this capability for unmanned aerial vehicles (UAVs) will help facilitate their integration into, and subsequent routine operations over, civilian airspace. Currently, there is no commercial system available to perform this task; however, a team at the Australian Research Centre for Aerospace Automation (ARCAA) is working towards developing such an automated forced landing system. This system, codenamed Flight Guardian, will operate onboard the aircraft and use machine vision for site identification, artificial intelligence for data assessment and evaluation, and path planning, guidance and control techniques to actualize the landing. This thesis focuses on research specific to the third category, and presents the design, testing and evaluation of a Trajectory Generation and Guidance System (TGGS) that navigates the aircraft to land at a chosen site following an engine failure. Firstly, two algorithms are developed that adapt manned aircraft forced landing techniques to suit the UAV planning problem. Algorithm 1 allows the UAV to select a route (from a library) based on a fixed glide range and the ambient wind conditions, while Algorithm 2 uses a series of adjustable waypoints to cater for changing winds. A comparison of both algorithms in over 200 simulated forced landings found that using Algorithm 2, twice as many landings were within the designated area, with an average lateral miss distance of 200 m at the aimpoint. These results present a baseline for further refinements to the planning algorithms. A significant contribution is seen in the design of the 3-D Dubins Curves planning algorithm, which extends the elementary concepts underlying 2-D Dubins paths to account for powerless flight in three dimensions. This has also resulted in the development of new methods for testing path traversability, for losing excess altitude, and for the actual path formation to ensure aircraft stability. Simulations using this algorithm have demonstrated lateral and vertical miss distances of under 20 m at the approach point, in wind speeds of up to 9 m/s. This is greater than a tenfold improvement on Algorithm 2 and emulates the performance of manned, powered aircraft. The lateral guidance algorithm originally developed by Park, Deyst, and How (2007) is enhanced to include wind information in the guidance logic. A simple assumption is also made that reduces the complexity of the algorithm in following a circular path, yet without sacrificing performance. Finally, a specific method of supplying the correct turning direction is also used. Simulations have shown that this new algorithm, named the Enhanced Nonlinear Guidance (ENG) algorithm, performs much better in changing winds, with cross-track errors at the approach point within 2 m, compared to over 10 m using Park's algorithm. A fourth contribution is made in designing the Flight Path Following Guidance (FPFG) algorithm, which uses path angle calculations and the MacCready theory to determine the optimal speed to fly in winds. This algorithm also uses proportional-integral-derivative (PID) gain schedules to finely tune the tracking accuracies, and has demonstrated in simulation vertical miss distances of under 2 m in changing winds.
A fifth contribution is made in designing the Modified Proportional Navigation (MPN) algorithm, which uses principles from proportional navigation and the ENG algorithm, as well as methods of its own, to calculate the required pitch to fly. This algorithm is robust to wind changes, and is easily adaptable to any aircraft type. Tracking accuracies obtained with this algorithm are also comparable to those obtained using the FPFG algorithm. For all three preceding guidance algorithms, a novel method utilising the geometric and time relationship between aircraft and path is also employed to ensure that the aircraft is still able to track the desired path to completion in strong winds, while remaining stabilised. Finally, a derived contribution is made in modifying the 3-D Dubins Curves algorithm to suit helicopter flight dynamics. This modification allows a helicopter to autonomously track both stationary and moving targets in flight, and is highly advantageous for applications such as traffic surveillance, police pursuit, security or payload delivery. Each of these achievements serves to enhance the onboard autonomy and safety of a UAV, which in turn will help facilitate the integration of UAVs into civilian airspace for a wider appreciation of the good that they can provide. The automated UAV forced landing planning and guidance strategies presented in this thesis will allow the progression of this technology from the design and developmental stages, through to a prototype system that can demonstrate its effectiveness to the UAV research and operations community.
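As context for the ENG discussion above, the sketch below implements only the underlying nonlinear lateral guidance law of Park, Deyst, and How (a_cmd = 2V²/L1 · sin η); the thesis's wind and turn-direction enhancements are not included, and the choice of the reference point on the path is left to the caller as an assumption.

```python
# Minimal sketch of the nonlinear lateral guidance law of Park, Deyst, and How,
# which the thesis extends with wind information (the ENG algorithm). The wind
# and turn-direction enhancements described above are NOT included here, and
# the reference-point selection is an assumption left to the caller.
import numpy as np

def lateral_accel_command(position, velocity, ref_point):
    """Return the commanded lateral acceleration a_cmd = 2*V^2/L1 * sin(eta).

    position, velocity, ref_point: 2-D numpy arrays in a local level frame.
    ref_point is a point on the desired path roughly L1 ahead of the aircraft.
    """
    l1_vec = ref_point - position          # line of sight to the reference point
    l1 = np.linalg.norm(l1_vec)
    v = np.linalg.norm(velocity)
    # eta: signed angle from the velocity vector to the line-of-sight vector.
    cross = velocity[0] * l1_vec[1] - velocity[1] * l1_vec[0]
    dot = float(np.dot(velocity, l1_vec))
    eta = np.arctan2(cross, dot)
    return 2.0 * v**2 / l1 * np.sin(eta)

# Example: aircraft at the origin flying along +x at 20 m/s, path point offset in +y.
a_cmd = lateral_accel_command(np.array([0.0, 0.0]),
                              np.array([20.0, 0.0]),
                              np.array([80.0, 10.0]))
print(f"lateral acceleration command: {a_cmd:.2f} m/s^2")
```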
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
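Written out, the stated bound takes roughly the following form, where \widehat{\mathrm{err}}_m stands for the training-set error estimate mentioned in the abstract and logarithmic factors in A and m are suppressed, as in the original statement:

```latex
% Schematic form of the stated generalization bound; \widehat{\mathrm{err}}_m
% is the training-set error estimate mentioned in the abstract, and
% logarithmic factors in A and m are suppressed.
\Pr\left[\text{misclassification}\right]
  \;\le\;
  \widehat{\mathrm{err}}_m
  \;+\;
  O\!\left( A^{3} \sqrt{\frac{\log n}{m}} \right)
```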
Abstract:
In fault detection and diagnostics, limitations arising from the sensor network architecture are one of the main challenges in evaluating a system’s health status. Usually the design of the sensor network architecture is not based solely on diagnostic purposes; other factors such as controls, financial constraints, and practical limitations are also involved. As a result, it is quite common to have one sensor (or one set of sensors) monitoring the behaviour of two or more components. This can significantly increase the complexity of diagnostic problems. In this paper a systematic approach is presented to deal with such complexities. It is shown how the problem can be formulated as a Bayesian network based diagnostic mechanism with latent variables. The developed approach is also applied to the problem of fault diagnosis in HVAC systems, an application area with considerable modeling and measurement constraints.
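As a toy illustration of the shared-sensor setting (not the paper's HVAC model), the sketch below treats the health of two components as latent variables observed only through one noisy-OR sensor and computes posterior fault probabilities by exact enumeration; all priors and parameters are assumed values chosen for illustration.

```python
# Minimal sketch (not the paper's HVAC model): two components whose health
# states are latent, monitored by a single shared sensor. Posterior fault
# probabilities are obtained by exact enumeration over the latent states.
# All prior and noisy-OR parameters below are illustrative assumptions.
from itertools import product

P_FAULT = {"C1": 0.05, "C2": 0.10}     # prior fault probabilities
LEAK = 0.02                             # sensor may alarm even if both are healthy
DETECT = {"C1": 0.9, "C2": 0.8}         # P(alarm caused | component faulty)

def p_alarm_given(states):
    """Noisy-OR likelihood of the shared sensor alarming given latent states."""
    p_no_alarm = 1.0 - LEAK
    for comp, faulty in states.items():
        if faulty:
            p_no_alarm *= 1.0 - DETECT[comp]
    return 1.0 - p_no_alarm

def posterior_fault(observed_alarm=True):
    """P(component faulty | sensor reading), marginalising the other component."""
    joint = {}
    for s1, s2 in product([False, True], repeat=2):
        states = {"C1": s1, "C2": s2}
        prior = 1.0
        for comp, faulty in states.items():
            prior *= P_FAULT[comp] if faulty else 1.0 - P_FAULT[comp]
        like = p_alarm_given(states) if observed_alarm else 1.0 - p_alarm_given(states)
        joint[(s1, s2)] = prior * like
    evidence = sum(joint.values())
    return {
        "C1": sum(p for (s1, _), p in joint.items() if s1) / evidence,
        "C2": sum(p for (_, s2), p in joint.items() if s2) / evidence,
    }

print(posterior_fault(observed_alarm=True))
```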
Abstract:
Project management in the construction industry involves coordination of many tasks and individuals, and is affected by complexity and uncertainty, which increases the need for efficient cooperation. Procurement is crucial since it sets the basis for cooperation between clients and contractors. This is true whether the project is local, regional or global in scope. Traditionally, procurement procedures are competitive, resulting in conflicts, adversarial relationships and less desirable project results. The purpose of this paper is to propose and empirically test an alternative procurement model, based on cooperative procurement procedures, that facilitates cooperation between clients and contractors in construction projects. The model is based on four multi-item constructs – incentive-based compensation, limited bidding options, partner selection and cooperation. Based on a sample of 87 client organisations, the model was empirically tested and received strong support, exhibiting content, nomological, convergent and discriminant validity, as well as reliability. Our findings indicate that partner selection based on task-related attributes mediates the relationship between two important pre-selection processes (incentive-based compensation and limited bid invitation) and the preferred outcome of cooperation. The contribution of the paper lies in identifying valid and reliable measurement constructs and confirming a unique sequential order for achieving cooperation. Moreover, the findings are applicable to many types of construction projects because of the similarities in the construction industry worldwide.
Abstract:
In the multi-view approach to semi-supervised learning, we choose one predictor from each of multiple hypothesis classes, and we co-regularize our choices by penalizing disagreement among the predictors on the unlabeled data. We examine the co-regularization method used in the co-regularized least squares (CoRLS) algorithm, in which the views are reproducing kernel Hilbert spaces (RKHSs) and the disagreement penalty is the average squared difference in predictions. The final predictor is the pointwise average of the predictors from each view. We call the set of predictors that can result from this procedure the co-regularized hypothesis class. Our main result is a tight bound on the Rademacher complexity of the co-regularized hypothesis class in terms of the kernel matrices of each RKHS. We find that co-regularization reduces the Rademacher complexity by an amount that depends on the distance between the two views, as measured by a data-dependent metric. We then use standard techniques to bound the gap between training error and test error for the CoRLS algorithm. Experimentally, we find that the amount of reduction in complexity introduced by co-regularization correlates with the amount of improvement that co-regularization gives in the CoRLS algorithm.
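A compact numpy sketch of the CoRLS procedure described above follows; the RBF kernels, the regularization weights, and the closed-form block linear system are illustrative assumptions rather than the authors' code.

```python
# Compact sketch of co-regularized least squares (CoRLS) with two RKHS views.
# The RBF kernels, regularization weights, and the closed-form block system
# below are illustrative assumptions, not the authors' implementation.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def corls_fit(X1, X2, y, n_labeled, g1=0.1, g2=0.1, lam=1.0):
    """Solve for the kernel expansion coefficients (a1, a2) of both views.

    X1, X2: features of the same points under view 1 and view 2 (labeled
    points first, unlabeled after); y: labels of the first n_labeled points.
    """
    K1, K2 = rbf_kernel(X1, X1), rbf_kernel(X2, X2)
    L1, U1 = K1[:n_labeled], K1[n_labeled:]   # labeled / unlabeled rows
    L2, U2 = K2[:n_labeled], K2[n_labeled:]
    # Quadratic objective: squared loss on each view + per-view RKHS norms
    # + lam * squared disagreement on the unlabeled rows. Setting its gradient
    # to zero gives one symmetric block linear system in (a1, a2).
    A11 = L1.T @ L1 + g1 * K1 + lam * U1.T @ U1
    A22 = L2.T @ L2 + g2 * K2 + lam * U2.T @ U2
    A12 = -lam * U1.T @ U2
    A = np.block([[A11, A12], [A12.T, A22]])
    b = np.concatenate([L1.T @ y, L2.T @ y])
    a = np.linalg.solve(A, b)
    return np.split(a, [K1.shape[0]])

def corls_predict(a1, a2, X1_train, X2_train, X1_new, X2_new):
    """Final predictor: pointwise average of the two views' predictions."""
    f1 = rbf_kernel(X1_new, X1_train) @ a1
    f2 = rbf_kernel(X2_new, X2_train) @ a2
    return 0.5 * (f1 + f2)
```

The disagreement term couples the two views only through the unlabeled rows, which is why the off-diagonal blocks of the system vanish when lam is zero and the problem separates into two ordinary kernel ridge regressions.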
Abstract:
This study aimed to explore resilience and wellbeing among a group of eight refugee women originating from several countries (mainly African) and living in Brisbane, most of whom were single mothers. To challenge the mostly quantitative and gender-blind explorations of mental health concepts among refugee groups, the project sought an emic and contextual understanding of resilience and wellbeing. Established perspectives, while useful, tend to overlook the complexities of refugee mental health experiences and can neglect the dense nature of individual stories. The purpose of my study was to contest the relatively simplistic narratives of mental health constructs that tend to dominate migrant and refugee studies and influence practice paradigms in the human services field. In this ethnographic exploration of mental health constructs conducted in 2008 and 2009, the use of in-depth interviews, participant observations, and visual ethnographic elements provided an opportunity for refugee women to tell their own stories. The participants’ unique narratives of pre- and post-migration experiences, shaped by specific gender, age, social, cultural and political aspects prevailing in their lives, yielded ‘thick’ ethnographic description (Geertz, 1973) of their social worlds. The findings explored in this study, namely language issues, the impact of community dynamics, and the single status of refugee women, clearly demonstrate that mental health constructs are fluid, multifaceted and complex in reality. In fact, language, community dynamics, and being a single mother represented both opportunities and barriers in the lives of participants. In some contexts, these factors were conducive to resilience and wellbeing, while in other circumstances, these three elements acted as a hindrance to positive mental health outcomes. There are multiple dimensions to the findings, signifying that the social worlds of refugee women cannot be simplified using set definitions and neat notions of resilience and wellbeing. Instead, the intricacies and complexities embedded in the mundane of the everyday highlight novel conceptualisations of resilience and wellbeing. Based on the particular circumstances of single refugee mothers, whose experiences differ from those of married women, this thesis presents novel articulations of mental health constructs as an alternative view to existing trends in the literature on refugee issues. Rich and multi-dimensional meanings associated with the socio-cultural determinants of mental health emerged in the process. This thesis’ findings highlight a significant gap in diasporic studies, as well as simplistic assumptions about refugee women’s resettlement experiences. Single refugee women’s distinct issues are so complex and dense that a contextual approach is critical to yield accurate depictions of their circumstances. It is therefore essential to understand refugee lived experiences within broader socio-political contexts to truly appreciate the depth of these narratives. In this manner, critical aspects salient to refugee journeys can inform different understandings of resilience, wellbeing and mental health, and shape contemporary policy and human service practice paradigms.
Abstract:
Most learning paradigms impose a particular syntax on the class of concepts to be learned; the chosen syntax can dramatically affect whether the class is learnable or not. For classification paradigms, where the task is to determine whether the underlying world does or does not have a particular property, how that property is represented has no implication for the power of a classifier that just outputs 1’s or 0’s. But is it possible to give a canonical syntactic representation of the class of concepts that are classifiable according to the particular criteria of a given paradigm? We provide a positive answer to this question for classification-in-the-limit paradigms in a logical setting, with ordinal mind change bounds as a measure of complexity. The syntactic characterization that emerges makes it possible to derive that if a possibly noncomputable classifier can perform the task assigned to it by the paradigm, then a computable classifier can also perform the same task. The syntactic characterization is strongly related to the difference hierarchy over the class of open sets of some topological space; this space is naturally defined from the class of possible worlds and possible data of the learning paradigm.
Abstract:
Schools in Australia today are expected to improve student outcomes in an increasingly constrained context focused on accountability. At the same time as teachers work within these constraints, they are also working to ensure that all students are able to achieve quality outcomes from schooling while dealing with the increasing diversity of the student population. So what is a teacher’s role in providing equitable possibilities for all of the students in their care? Teachers are left with the difficult task of balancing the many mandatory requirements placed on them, and on their students, such as gaining improvements on high-stakes test scores, with the important work of dealing with individual students and student cohorts in equitable and socially just ways. This impacts on the work of all teachers; however, for those teachers working in schools in contexts of complexity and duress, these pressures are never greater.
Abstract:
Measuring the business value that Internet technologies deliver for organisations has proven to be a difficult and elusive task, given their complexity and increased embeddedness within the value chain. Yet, despite the lack of empirical evidence linking the adoption of Information Technology (IT) with increased financial performance, many organisations continue to adopt new technologies at a rapid rate. This is evident in the widespread adoption of Web 2.0 online Social Networking Services (SNSs) such as Facebook, Twitter and YouTube. These new Internet-based technologies, widely used for social purposes, are being employed by organisations to enhance their business communication processes. However, their use is yet to be correlated with an increase in business performance. Owing to the conflicting empirical evidence that links prior IT applications with increased business performance, IT, Information Systems (IS), and E-Business Model (EBM) research has increasingly looked to broader social and environmental factors as a means for examining and understanding the broader influences shaping IT, IS and E-Business (EB) adoption behaviour. Findings from these studies suggest that organisations adopt new technologies as a result of strong external pressures, rather than a clear measure of enhanced business value. In order to ascertain if this is the case with the adoption of SNSs, this study explores how organisations are creating value (and measuring that value) with the use of SNSs for business purposes, and the external pressures influencing their adoption. In doing so, it seeks to address two research questions: 1. What are the external pressures influencing organisations to adopt SNSs for business communication purposes? 2. Are SNSs providing increased business value for organisations, and if so, how is that value being captured and measured? Informed by the background literature fields of IT, IS, EBM, and Web 2.0, a three-tiered theoretical framework is developed that combines macro-societal, social and technological perspectives as possible causal mechanisms influencing the SNS adoption event. The macro-societal view draws on Castells’ (1996) concept of the network society and the behaviour of crowds, herds and swarms, to formulate a new explanatory concept of the network vortex. The social perspective draws on key components of institutional theory (DiMaggio & Powell, 1983, 1991), and the technical view draws from the organising vision concept developed by Swanson and Ramiller (1997). The study takes a critical realist approach, and conducts four stages of data collection and one stage of data coding and analysis. Stage 1 consisted of content analysis of the websites and SNSs of many organisations, to identify the types of business purposes SNSs are being used for. Stage 2 also involved content analysis of organisational websites, in order to identify suitable sample organisations in which to conduct telephone interviews. Stage 3 consisted of conducting 18 in-depth, semi-structured telephone interviews within eight Australian organisations from the Media/Publishing and Galleries, Libraries, Archives and Museums (GLAM) industries. These sample organisations were considered leaders in the use of SNS technologies. Stage 4 involved an SNS activity count of the organisations interviewed in Stage 3, in order to rate them as either Advanced Innovator (AI) organisations or Learning Focussed (LF) organisations.
A fifth stage of data coding and analysis of all four data collection stages was conducted, based on the theoretical framework developed for the study, and using QSR NVivo 8 software. The findings from this study reveal that SNSs have been adopted by organisations for the purpose of increasing business value, and as a result of strong social and macro-societal pressures. SNSs offer organisations a wide range of value-enhancing opportunities that have broader benefits for customers and society. However, measuring the increased business value is difficult with traditional Return On Investment (ROI) mechanisms, highlighting the need for new value capture and measurement rationales to support the accountability of SNS adoption practices. The study also identified the presence of technical, social and macro-societal pressures, all of which influenced SNS adoption by organisations. These findings contribute important theoretical insight into the increased complexity of pressures influencing technology adoption rationales by organisations, and have important implications for practice, reflecting the expanded global online networks in which organisations now operate. The limitations of the study include the small number of sample organisations in which interviews were conducted, its limited generalisability, and the small range of SNSs selected for the study. However, these were compensated for in part by the expertise of the interviewees and the global significance of the SNSs that were chosen. Future research could replicate the study with a larger sample from different industries, sectors and countries. It could also explore the life cycle of SNSs in a longitudinal study, and map how the technical, social and macro-societal pressures are emphasised through stages of the life cycle. The theoretical framework could also be applied to other social fad technology adoption studies.
Abstract:
We address the problem of constructing randomized online algorithms for the Metrical Task Systems (MTS) problem on a metric δ against an oblivious adversary. Restricting our attention to the class of “work-based” algorithms, we provide a framework for designing algorithms that uses the technique of regularization. For the case when δ is a uniform metric, we exhibit two algorithms that arise from this framework, and we prove a bound on the competitive ratio of each. We show that the second of these algorithms is ln n + O(log log n) competitive, which is the current state of the art for the uniform MTS problem.
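For orientation only, the sketch below shows a generic entropic-regularization (multiplicative-weights style) strategy for MTS on the uniform metric; it is not one of the two algorithms analysed in the paper, and the learning rate and cost-accounting convention are illustrative assumptions.

```python
# Illustrative sketch only (not the algorithms analysed in the paper): an
# entropically regularised, multiplicative-weights style strategy for MTS on
# the uniform metric. The online distribution over the n states is derived
# from the accumulated work; service cost is the expected task cost and
# movement cost is the total-variation change of the distribution. The
# serve-then-move convention and the learning rate eta are assumptions.
import numpy as np

def run_uniform_mts(cost_vectors, eta=0.5):
    n = len(cost_vectors[0])
    work = np.zeros(n)                 # accumulated per-state cost ("work")
    p = np.full(n, 1.0 / n)            # current distribution over states
    service, movement = 0.0, 0.0
    for c in cost_vectors:
        c = np.asarray(c, dtype=float)
        service += float(p @ c)        # expected cost of serving the task
        work += c
        # Entropic regularisation of the accumulated work: softmin(work / eta).
        new_p = np.exp(-work / eta)
        new_p /= new_p.sum()
        # Movement cost on the uniform metric = total-variation distance.
        movement += 0.5 * np.abs(new_p - p).sum()
        p = new_p
    return service + movement

costs = [[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 2]]
print(run_uniform_mts(costs))
```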