919 results for Operational constraints
Abstract:
A constraints-based framework for understanding processes of movement coordination and control is predicated on a range of theoretical ideas, including the work of Bernstein (1967), Gibson (1979), Newell (1986) and Kugler, Kelso & Turvey (1982). Contrary to a normative perspective that focuses on the production of idealized movement patterns to be acquired by children during development and learning (see Alain & Brisson, 1986), this approach formulates the emergence of movement coordination as a function of the constraints imposed upon each individual. In this framework, cognitive, perceptual and movement difficulties and disorders are considered to be constraints on the perceptual-motor system, and children’s movements are viewed as emergent functional adaptations to these constraints (Davids et al., 2008; Rosengren, Savelsbergh & van der Kamp, 2003). From this perspective, variability of movement behaviour is not viewed as noise or error to be eradicated during development, but rather as essentially functional, enabling the child to satisfy the unique constraints which impinge on his/her developing perceptual-motor and cognitive systems in everyday life (Davids et al., 2008). Recently, it has been reported that functional neurobiological variability is predicated on system degeneracy, an inherent feature of neurobiological systems which facilitates the achievement of task performance goals in a variety of different ways (Glazier & Davids, 2009). Degeneracy refers to the capacity of structurally different components of complex movement systems to achieve the same performance outcomes in varying contexts (Tononi et al., 1999; Edelman & Gally, 2001). System degeneracy allows individuals with and without movement disorders to achieve their movement goals by harnessing movement variability during performance.
Based on this idea, perceptual-motor disorders can simply be viewed as unique structural and functional system constraints which individuals have to satisfy in interactions with their environments. The aim of this chapter is to elucidate how the interaction of structural and functional organismic and environmental constraints can be harnessed in a nonlinear pedagogy by individuals with movement disorders.
Abstract:
Organisations are increasingly investing in complex technological innovations, such as enterprise information systems, with the aim of improving the operation of the business and in this way gaining competitive advantage. However, the implementation of technological innovations tends to have an excessive focus on either technology innovation effectiveness or the resulting operational effectiveness. Focusing on either one of them alone is detrimental to long-term performance. Cross-functional teams have been used by many organisations as a way of involving expertise from different functional areas in the implementation of technologies. The role of boundary-spanning actors is discussed, as they bring a common language to the cross-functional teams. Multiple regression analysis has been used to identify the structural relationships and provide an explanation for the influence of cross-functional teams, technology innovation effectiveness and operational effectiveness on the continuous improvement of operational performance. The findings indicate that cross-functional teams have an indirect influence on continuous improvement of operational performance through the alignment between technology innovation effectiveness and operational effectiveness.
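The indirect-influence claim above is a mediation structure, which multiple regression can expose by estimating each path separately. The sketch below illustrates the idea on synthetic data; the variable names, effect sizes and the simple two-regression approach are invented for illustration and are not the study's actual model.

```python
import numpy as np

# Illustrative only: synthetic data for the hypothesised mediation chain
# cross-functional teams (cft) -> alignment -> operational performance.
rng = np.random.default_rng(0)
n = 500
cft = rng.normal(size=n)                          # team usage (standardised)
alignment = 0.6 * cft + rng.normal(scale=0.5, size=n)
performance = 0.8 * alignment + rng.normal(scale=0.5, size=n)

def ols_slope(x, y):
    """Slope from an OLS regression of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

a = ols_slope(cft, alignment)          # path: teams -> alignment
b = ols_slope(alignment, performance)  # path: alignment -> performance
indirect = a * b                       # indirect effect, ~0.6 * 0.8 = 0.48
print(f"indirect effect ~ {indirect:.2f}")
```

A full mediation analysis would also regress performance on both predictors jointly to separate direct from indirect paths; the product-of-paths estimate shown is the simplest version of the logic.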
Postural stability and hand preference as constraints on one-handed catching performance in children
Abstract:
CCTV and surveillance networks are increasingly being used for operational as well as security tasks. One emerging area of technology that lends itself to operational analytics is soft biometrics. Soft biometrics can be used to describe a person and detect them throughout a sparse multi-camera network. This enables them to be used to perform tasks such as determining the time taken to get from point to point, and the paths taken through an environment, by detecting and matching people across disjoint views. However, in a busy environment such as an airport, where there are hundreds if not thousands of people, attempting to monitor everyone is highly unrealistic. In this paper we propose an average soft biometric that can be used to identify people who look distinct, and who are thus suitable for monitoring through a large, sparse camera network. We demonstrate how an average soft biometric can be used to identify unique people and to calculate operational measures such as the time taken to travel from point to point.
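The core idea can be sketched simply: compute the crowd's average soft-biometric vector and rank people by their distance from it, so that only the most distinct individuals are selected for network-wide matching. The feature vectors and names below are invented placeholders, not the paper's actual feature set.

```python
import math

# Hypothetical soft-biometric feature vectors (e.g. height plus a few
# clothing-colour/build components), one per observed person.
people = {
    "p1": [1.70, 0.20, 0.30, 0.50],
    "p2": [1.72, 0.25, 0.30, 0.50],
    "p3": [1.71, 0.20, 0.35, 0.50],
    "p4": [1.95, 0.90, 0.90, 0.10],   # visually distinct from the crowd
}

def average_vector(vectors):
    """Component-wise mean: the 'average soft biometric' of the crowd."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distinctiveness(person, avg):
    """Euclidean distance from the crowd's average soft biometric."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(person, avg)))

avg = average_vector(list(people.values()))
ranked = sorted(people, key=lambda k: distinctiveness(people[k], avg),
                reverse=True)
print(ranked[0])  # most distinct person, best suited to cross-camera matching
```

In practice the features would be extracted per detection and distances computed per camera view, but the selection principle is the same.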
Abstract:
Between 2001 and 2005, the US airline industry faced financial turmoil. At the same time, the European airline industry entered a period of substantive deregulation. This period witnessed opportunities for low-cost carriers to become more competitive in the market as a result of these combined events. To help assess airline performance in the aftermath of these events, this paper provides new evidence of technical efficiency for 42 national and international airlines in 2006 using the data envelopment analysis (DEA) bootstrap approach first proposed by Simar and Wilson (J Econ, 136:31-64, 2007). In the first stage, technical efficiency scores are estimated using a bootstrap DEA model. In the second stage, a truncated regression is employed to quantify the economic drivers underlying measured technical efficiency. The results highlight the key role played by non-discretionary inputs in measures of airline technical efficiency.
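The two-stage logic can be illustrated on a toy frontier. The sketch below uses the one-input, one-output special case of DEA (efficiency as each unit's output/input ratio against the best ratio) and a plain resampling bootstrap for the bias correction; note that Simar and Wilson show naive resampling is inconsistent for DEA and use a smoothed bootstrap instead, so this is only an illustration of the bias-correction arithmetic. All numbers are invented.

```python
import random

# Toy single-input, single-output airline data (invented):
# x = an input (e.g. fuel burn), y = an output (e.g. passenger-km).
x = [100, 120, 150, 200, 90]
y = [80, 110, 120, 140, 85]

def dea_scores(x, y, ref_ratio=None):
    """Output/input ratio efficiency relative to the best ratio in the
    reference set (one-input, one-output CCR special case)."""
    ratios = [yi / xi for xi, yi in zip(x, y)]
    best = ref_ratio if ref_ratio is not None else max(ratios)
    return [r / best for r in ratios]

eff = dea_scores(x, y)   # stage-1 point estimates; the best DMU scores 1.0

# Bootstrap the frontier by resampling DMUs and re-scoring everyone
# against each resampled frontier (illustration only; see note above).
random.seed(1)
B = 200
boot = [[] for _ in x]
for _ in range(B):
    idx = [random.randrange(len(x)) for _ in x]
    ref = max(y[i] / x[i] for i in idx)
    for j, s in enumerate(dea_scores(x, y, ref_ratio=ref)):
        boot[j].append(s)

# Bias-corrected estimate: 2 * estimate - bootstrap mean.
bias_corrected = [2 * e - sum(b) / B for e, b in zip(eff, boot)]
print([round(e, 3) for e in eff])
```

The second stage would then regress these bias-corrected scores on explanatory variables using a truncated regression, which is where the economic drivers are quantified.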
Abstract:
Knowledge base is one of the emerging concepts in the Knowledge Management area. As there exists no agreed-upon standard definition of a knowledge base, this paper defines a knowledge base in terms of our research on Enterprise Systems (ES). The knowledge base is defined with reference to Learning Network Theory. Using this theoretical framework, we investigate the roles of management and operational staff in organisations and how their interactions can create a better ES knowledge base that contributes to ES success. We focus on the post-implementation phase of ES as part of the ES lifecycle. Our findings will facilitate future research directions and contribute to a better understanding of how the knowledge base can be integrated and how this integration leads to Enterprise System success.
Abstract:
There are many applications in aeronautical/aerospace engineering where some values of the design parameters cannot be provided or determined accurately. These values can be related to the geometry (wingspan, length, angles) and/or to operational flight conditions that vary due to the presence of uncertainty parameters (Mach number, angle of attack, air density and temperature, etc.). These uncertain design parameters cannot be ignored in engineering design and must be taken into account in the optimisation task to produce more realistic and reliable solutions. In this paper, a robust/uncertainty design method with statistical constraints is introduced to produce a set of reliable solutions which have high performance and low sensitivity. The robust design concept is coupled with Multi-Objective Evolutionary Algorithms (MOEAs) by applying two statistical sampling formulas, mean and variance/standard deviation, associated with the optimisation fitness/objective functions. The methodology is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. It is implemented for two practical Unmanned Aerial System (UAS) design problems: the first case considers robust multi-objective (single-disciplinary: aerodynamics) design optimisation and the second considers robust multidisciplinary (aero-structures) design optimisation. Numerical results show that the solutions obtained by the robust design method with statistical constraints have more reliable performance and sensitivity in both aerodynamics and structures when compared to the baseline design.
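The mean/variance idea above amounts to scoring each candidate design by the statistics of its objective under sampled flight conditions, then handing both statistics to the MOEA as objectives. The sketch below shows that evaluation step only; the `drag` function, the Mach range and all values are invented placeholders, not the chapter's actual UAS model.

```python
import random
import statistics

def drag(design_var, mach):
    """Toy objective: grows with Mach and with deviation from an optimum."""
    return (design_var - 0.5) ** 2 + 0.1 * mach ** 2

def robust_fitness(design_var, n_samples=200, seed=0):
    """Evaluate a design under sampled uncertain conditions and return
    (mean, standard deviation) of the objective as two fitness values."""
    rng = random.Random(seed)
    samples = [drag(design_var, rng.uniform(0.6, 0.8))
               for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

mean_f, std_f = robust_fitness(0.5)
# A MOEA would minimise (mean_f, std_f) jointly, preferring designs that
# are both high-performing on average and insensitive to the uncertainty.
```

Minimising the pair rather than a single weighted sum is what lets the evolutionary search expose the trade-off between performance and sensitivity.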
Abstract:
We consider a robust filtering problem for uncertain discrete-time, homogeneous, first-order, finite-state hidden Markov models (HMMs). The class of uncertain HMMs considered is described by a conditional relative entropy constraint on measures perturbed from a nominal regular conditional probability distribution, given the previous posterior state distribution and the latest measurement. Under this class of perturbations, a robust infinite-horizon filtering problem is first formulated as a constrained optimization problem before being transformed, via variational results, into an unconstrained optimization problem; the latter can be elegantly solved using risk-sensitive, information-state based filtering.
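For context, the nominal (non-robust) HMM filter that the relative-entropy constraint perturbs is a short recursion: predict through the transition matrix, weight by the observation likelihood, and normalise. The sketch below shows only this standard update with invented two-state matrices, not the paper's robust variant.

```python
# Nominal HMM filter recursion for a two-state chain (invented numbers).
A = [[0.9, 0.1],        # A[i][j] = P(next state j | current state i)
     [0.2, 0.8]]
B = [[0.7, 0.3],        # B[i][k] = P(observation k | state i)
     [0.1, 0.9]]

def filter_step(prior, obs):
    """One prediction + measurement-update step; returns the new posterior."""
    predicted = [sum(prior[i] * A[i][j] for i in range(len(prior)))
                 for j in range(len(A[0]))]
    unnorm = [predicted[j] * B[j][obs] for j in range(len(predicted))]
    z = sum(unnorm)                     # normalising constant
    return [u / z for u in unnorm]

posterior = [0.5, 0.5]                  # uniform initial belief
for obs in [0, 0, 1]:
    posterior = filter_step(posterior, obs)
print(posterior)
```

The robust filter replaces the nominal conditional distribution in this update with the worst-case member of the relative-entropy ball, which is what connects it to risk-sensitive filtering.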
Abstract:
The management of models over time in many domains requires different constraints to apply to some parts of the model as it evolves. Using EMF and its meta-language Ecore, the development of model management code and tools usually relies on the metamodel having some constraints, such as attribute and reference cardinalities and changeability, set in the least constrained way that any model user will require. Stronger versions of these constraints can then be enforced in code, or by attaching additional constraint expressions, and their evaluation engines, to the generated model code. We propose a mechanism that allows for variations to the constraining meta-attributes of metamodels, to allow enforcement of different constraints at different lifecycle stages of a model. We then discuss the implementation choices within EMF to support the validation of a state-specific metamodel on model graphs when changing states, as well as the enforcement of state-specific constraints when executing model change operations.
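The lifecycle-state idea can be illustrated independently of EMF: the same feature carries different multiplicity bounds depending on the model's current state, and validation consults the bounds for that state. The states, feature names and bounds below are invented for illustration; in EMF these would be expressed through the metamodel's cardinality and changeability meta-attributes.

```python
# Multiplicity bounds (lower, upper) per lifecycle state; None = unbounded.
# A 'tasks' reference may be empty while drafting but not once released.
CONSTRAINTS = {
    "draft":    {"tasks": (0, None)},
    "released": {"tasks": (1, None)},
}

def validate(model, state):
    """Check each feature's multiplicity against the bounds for `state`."""
    errors = []
    for feature, (lo, hi) in CONSTRAINTS[state].items():
        n = len(model.get(feature, []))
        if n < lo or (hi is not None and n > hi):
            errors.append(f"{feature}: {n} not in [{lo}, {hi or '*'}]")
    return errors

model = {"tasks": []}
assert validate(model, "draft") == []       # valid while drafting
assert validate(model, "released") != []    # stricter bound now violated
```

A state transition would then be permitted only if the model validates against the target state's bounds, mirroring the paper's validation-on-state-change.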
Abstract:
Research on expertise, talent identification and development has tended to be mono-disciplinary, typically adopting geno-centric or environmentalist positions, with an overriding focus on operational issues. In this thesis, the validity of dualist positions on sport expertise is evaluated. It is argued that, to advance understanding of expertise and talent development, a shift towards a multidisciplinary and integrative science focus is necessary, along with the development of a comprehensive multidisciplinary theoretical rationale. Dynamical systems theory is utilised as a multidisciplinary theoretical rationale for the succession of studies, capturing how multiple interacting constraints can shape the development of expert performers. Phase I of the research examines experiential knowledge of coaches and players on the development of fast bowling talent utilising qualitative research methodology. It provides insights into the developmental histories of expert fast bowlers, as well as coaching philosophies on the constraints of fast bowling expertise. Results suggest talent development programmes should eschew the notion of common optimal performance models and emphasize the individual nature of pathways to expertise. Coaching and talent development programmes should identify the range of interacting constraints that impinge on the performance potential of individual athletes, rather than evaluating current performance on physical tests referenced to group norms. Phase II of this research comprises three further studies that investigate several of the key components identified as important for fast bowling expertise, talent identification and development extrapolated from Phase I of this research. This multidisciplinary programme of work involves a comprehensive analysis of fast bowling performance in a cross-section of the Cricket Australia high performance pathways, from the junior, emerging and national elite fast bowling squads. 
Briefly, differences were found in trunk kinematics associated with the generation of ball speed across the three groups. These differences in release mechanics indicated functional adaptations in movement patterns as bowlers’ physical and anatomical characteristics changed during maturation. Second in importance to the generation of ball speed, the ability to produce a range of delivery types was highlighted as a key component of expertise in the qualitative phase. The ability of athletes to produce consistent results on different surfaces and in different environments has drawn attention to the challenge of measuring consistency and flexibility in skill assessments. Examination of fast bowlers in Phase II demonstrated that national bowlers can make adjustments to the accuracy of subsequent deliveries during performance of a cricket bowling skills test, and perform a range of delivery types with increased accuracy and consistency. Finally, variability in selected delivery-stride ground reaction force components in fast bowling revealed the degenerate nature of this complex multi-articular skill, where the same performance outcome can be achieved with unique movement strategies. Utilising qualitative and quantitative methodologies to examine fast bowling expertise, this research has highlighted the importance of degeneracy and adaptability in fast bowling, alongside learning design that promotes dynamic learning environments.
Abstract:
This chapter investigates the place of new media in Queensland in the light of the Australian curriculum. ‘Multimodal texts’ in English are being defined as largely electronically ‘created’, and yet restricted access to digital resources at the chalkface may preclude this work from happening. The myth of the ‘digital native’ (Prensky, 2007), the reality of the ‘digital divide’, and technophobia amongst some quite experienced teachers responsible for implementing the curriculum together paint a picture of constraints. These constraints are due in part to protective state bans in Queensland on social networking sites and school bans on mobile phone use. Some of ‘Generation Next’ will have access to digital platforms for the purpose of designing texts at home and school, and others will not. Yet without adequate professional development for teachers and substantially increased ICT infrastructure funding for all schools, the way new media and multimodal opportunities are interpreted at state level in the curriculum may leave much to be desired in schools. This chapter draws on research that I recently conducted on the professional development needs of beginning teachers, as well as a critical reading of the ACARA policy documents.
Abstract:
Mandatory data breach notification laws are a novel and potentially important legal instrument regarding organisational protection of personal information. These laws require organisations that have suffered a data breach involving personal information to notify those persons that may be affected, and potentially government authorities, about the breach. The Australian Law Reform Commission (ALRC) has proposed the creation of a mandatory data breach notification scheme, implemented via amendments to the Privacy Act 1988 (Cth). However, the conceptual differences between data breach notification law and information privacy law are such that it is questionable whether a data breach notification scheme can be solely implemented via an information privacy law. Accordingly, this thesis by publications investigated, through six journal articles, the extent to which data breach notification law was conceptually and operationally compatible with information privacy law. The assessment of compatibility began with the identification of key issues related to data breach notification law. The first article, Stakeholder Perspectives Regarding the Mandatory Notification of Australian Data Breaches started this stage of the research which concluded in the second article, The Mandatory Notification of Data Breaches: Issues Arising for Australian and EU Legal Developments (‘Mandatory Notification‘). A key issue that emerged was whether data breach notification was itself an information privacy issue. This notion guided the remaining research and focused attention towards the next stage of research, an examination of the conceptual and operational foundations of both laws. The second article, Mandatory Notification and the third article, Encryption Safe Harbours and Data Breach Notification Laws did so from the perspective of data breach notification law. 
The fourth article, The Conceptual Basis of Personal Information in Australian Privacy Law and the fifth article, Privacy Invasive Geo-Mashups: Privacy 2.0 and the Limits of First Generation Information Privacy Laws did so for information privacy law. The final article, Contextualizing the Tensions and Weaknesses of Information Privacy and Data Breach Notification Laws synthesised previous research findings within the framework of contextualisation, principally developed by Nissenbaum. The examination of conceptual and operational foundations revealed tensions between both laws and shared weaknesses within both laws. First, the distinction between sectoral and comprehensive information privacy legal regimes was important as it shaped the development of US data breach notification laws and their subsequent implementable scope in other jurisdictions. Second, the sectoral versus comprehensive distinction produced different emphases in relation to data breach notification thus leading to different forms of remedy. The prime example is the distinction between market-based initiatives found in US data breach notification laws compared to rights-based protections found in the EU and Australia. Third, both laws are predicated on the regulation of personal information exchange processes even though both laws regulate this process from different perspectives, namely, a context independent or context dependent approach. Fourth, both laws have limited notions of harm that is further constrained by restrictive accountability frameworks. The findings of the research suggest that data breach notification is more compatible with information privacy law in some respects than others. Apparent compatibilities clearly exist as both laws have an interest in the protection of personal information. However, this thesis revealed that ostensible similarities are founded on some significant differences. 
Data breach notification law is either a comprehensive facet of a sectoral approach or a sectoral adjunct to a comprehensive regime. However, whilst there are fundamental differences between both laws, they are not so great as to make them incompatible with each other. The similarities between both laws are sufficient to forge compatibilities, but it is likely that the distinctions between them will produce anomalies, particularly if both laws are applied from a perspective that negates contextualisation.
Abstract:
Many modern business environments employ software to automate the delivery of workflows; however, workflow design and generation remains a laborious technical task for domain specialists. Several different approaches have been proposed for deriving workflow models. Some rely on process data mining, whereas others derive workflow models from operational structures, domain-specific knowledge, or workflow model compositions from knowledge bases. Many approaches draw on principles from automatic planning, but are conceptual in nature and lack mathematical justification. In this paper we present a mathematical framework for deducing tasks in workflow models from plans in mechanistic or strongly controlled work environments, with a focus on automatic plan generation. In addition, we introduce an associative composition operator that permits crisp hierarchical task compositions for workflow models through a set of mathematical deduction rules. The result is a logical framework that can be used to prove tasks in workflow hierarchies from operational information about work processes and machine configurations in controlled or mechanistic work environments.
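Why associativity matters for hierarchical composition can be seen with a minimal analogue: if tasks are modelled as functions from state to state and composition is "run one, then the other", regrouping the compositions never changes the overall workflow's effect, so sub-workflows can be nested freely. The task names below are invented; the paper's operator acts on workflow tasks via deduction rules rather than plain functions.

```python
# Tasks modelled as functions from a work state to a work state.
def compose(t1, t2):
    """Sequential composition t1 ; t2: run t1, then t2."""
    return lambda state: t2(t1(state))

cut = lambda s: s + ["cut"]
drill = lambda s: s + ["drill"]
paint = lambda s: s + ["paint"]

# Associativity: grouping does not change the composed workflow's effect,
# which is what licenses crisp hierarchical task composition.
left = compose(compose(cut, drill), paint)
right = compose(cut, compose(drill, paint))
assert left([]) == right([]) == ["cut", "drill", "paint"]
```

Because either grouping yields the same behaviour, a planner can emit flat plans while the workflow model organises the same tasks into whatever hierarchy suits the domain.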