102 results for bounded rationality


Relevance:

10.00%

Publisher:

Abstract:

The focus of this study is the phenomenon of teams and teamwork. Currently, the Professional Standards for Queensland's teachers state that teams are critical to teachers' work. This study uses a phenomenographic approach to investigate science teachers' conceptions of teams and teamwork in the science departments of fifteen Queensland State secondary schools. The research identifies eight conceptions of teams and teamwork. The research findings suggest that the team represents a collective of science teachers bounded by the Science Department and their current timetabled subject. Collaboration was found to be an activity that occurred between teachers in the same social space. The research recognises a new category of relationship between teachers, designated as ‘ask-and-receive’. The research identifies a lack of teamwork within the science department and the school. There appears to be no teaming with other subject departments. The research findings highlight the non-supportive team and teamwork policies, procedures and structures in the schools and identify the lack of recognition of the specialised skills of science teachers. The implications for the schools and science teachers are considerable, as the current Professional Standards of Education Queensland and the Queensland College of Teachers provide benchmarks of knowledge and practice of teams and teamwork for teachers. The research suggests that the professional standards relating to teams and teamwork cannot be achieved in the present school environment.

Relevance:

10.00%

Publisher:

Abstract:

Frontline employee behaviours are recognised as vital for achieving a competitive advantage for service organisations. The services marketing literature has comprehensively examined ways to improve frontline employee behaviours in service delivery and recovery. However, limited attention has been paid to frontline employee behaviours that favour customers in ways that go against organisational norms or rules. This study examines these behaviours by introducing a behavioural concept of Customer-Oriented Deviance (COD). COD is defined as “frontline employees exhibiting extra-role behaviours that they perceive to defy existing expectations or prescribed rules of higher authority through service adaptation, communication and use of resources to benefit customers during interpersonal service encounters.” This thesis develops a COD measure and examines the key determinants of these behaviours from a frontline employee perspective. Existing research on similar behaviours, which originated in the positive deviance and pro-social behaviour domains, has limitations and is considered inadequate for examining COD in the services context. The absence of a well-developed body of knowledge on non-conforming service behaviours has implications for both theory and practice. The provision of ‘special favours’ increases customer satisfaction, but the over-servicing of customers is also counterproductive for service delivery and costly for the organisation. Despite these implications of non-conforming service behaviours, there is little understanding of the nature of these behaviours and their key drivers. This research addresses inadequacies in prior research in the positive deviance, pro-social and pro-customer literature to develop the theoretical foundation of COD. The concept of positive deviance, which has predominantly been used to study organisational behaviours, is applied within a services marketing setting. Further, the research addresses previous limitations of the pro-social and pro-customer behavioural literature, which has examined limited forms of behaviours with no clear understanding of their nature. Building upon these literature streams, this research adopts a holistic approach towards the conceptualisation of COD. It addresses previous shortcomings in the literature by providing a well-bounded definition, a psychometrically sound measure of COD and a conceptually well-founded model of COD. The concept of COD was examined across three separate studies, drawing on the theoretical foundations of role theory and social identity theory. Study 1 was exploratory and based on in-depth interviews using the Critical Incident Technique (CIT). The aim of Study 1 was to understand the nature of COD and qualitatively identify its key drivers. Thematic analysis of the data revealed two potential dimensions of COD behaviours, Deviant Service Adaptation (DSA) and Deviant Service Communication (DSC). In addition, themes representing the potential influences of COD were broadly classified as individual factors, situational factors, and organisational factors. Study 2 was a scale development procedure that involved the generation and purification of items for the measure based on two student samples working in customer service roles (pilot sample, N=278; initial validation sample, N=231). The reliability results and Exploratory Factor Analyses (EFA) on the pilot sample suggested the scale had poor psychometric properties.
As a result, major revisions were made to item wordings and new items were developed from the literature to reflect a new dimension, Deviant Use of Resources (DUR). The revised items were tested on the initial validation sample, with the EFA suggesting a four-factor structure of COD. The aim of Study 3 was to further purify the COD measure and test for nomological validity based on its theoretical relationships with key antecedents and similar constructs (key correlates). The theoretical model of COD, consisting of nine hypotheses, was tested on retail and hospitality samples of frontline employees (retail N=311; hospitality N=305) drawn from a market research panel using an online survey. The data were analysed using Structural Equation Modelling (SEM). The results provided support for a re-specified second-order three-factor model of COD consisting of 11 items. Overall, the COD measure was found to be reliable and valid, demonstrating convergent validity, discriminant validity and marginal partial invariance for the factor loadings. The results showed support for nomological validity, although the antecedents had differing impacts on COD across samples. Specifically, empathy and perspective-taking, role conflict, and job autonomy significantly influenced COD in the retail sample, whereas empathy and perspective-taking, risk-taking propensity and role conflict were significant predictors in the hospitality sample. In addition, customer orientation-selling orientation, the altruistic dimension of organisational citizenship behaviours, workplace deviance, and social desirability responding were found to correlate with COD. This research makes several contributions to theory. First, the findings of this thesis extend the literature on positive deviance, pro-social and pro-customer behaviours. Second, the research provides an empirically tested model describing the antecedents of COD. Third, this research contributes a reliable and valid measure of COD. Finally, the research investigates the differential effects of the key antecedents on COD in different service sectors. The research findings also contribute to services marketing practice. Based on the findings, service practitioners can better understand the phenomenon of COD and use the measurement tool to calibrate COD levels within their organisations. Knowledge of the key determinants of COD will help improve recruitment and training programs and drive internal initiatives within the firm.
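
The scale purification described for Study 2 (reliability analysis followed by EFA) can be sketched in code. This is a minimal illustration only: the item matrix is synthetic, the four-factor target and the item-dropping rule are assumptions, and scikit-learn is used here rather than whatever software the thesis employed.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical item responses: rows = respondents, columns = COD scale items (7-point Likert).
rng = np.random.default_rng(1)
responses = rng.integers(1, 8, size=(231, 16)).astype(float)

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability of a set of items."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

print("Cronbach's alpha:", round(cronbach_alpha(responses), 3))

# Exploratory factor analysis with a four-factor solution, mirroring the
# four-factor structure suggested for the revised item pool.
fa = FactorAnalysis(n_components=4, rotation="varimax").fit(responses)
loadings = fa.components_.T                      # items x factors
weak_items = np.where(np.abs(loadings).max(axis=1) < 0.40)[0]
print("Candidate items to drop (max loading < .40):", weak_items)
```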

Relevance:

10.00%

Publisher:

Abstract:

An experimental investigation has been made of a round, non-buoyant plume of nitric oxide, NO, in a turbulent grid flow of ozone, O3, using the Turbulent Smog Chamber at the University of Sydney. The measurements have been made at a resolution not previously reported in the literature. The reaction is conducted at non-equilibrium, so there is significant interaction between turbulent mixing and chemical reaction. The plume has been characterized by a set of constant initial reactant concentration measurements consisting of radial profiles at various axial locations. Whole-plume behaviour can thus be characterized, and parameters are selected for a second set of fixed physical location measurements in which the effects of varying the initial reactant concentrations are investigated. Careful experiment design and specially developed chemiluminescent analysers, which measure fluctuating concentrations of reactive scalars, ensure that spatial and temporal resolutions are adequate to measure the quantities of interest. Conserved scalar theory is used to define a conserved scalar from the measured reactive scalars and to define frozen, equilibrium and reaction-dominated cases for the reactive scalars. Reactive scalar means and the mean reaction rate are bounded by the frozen and equilibrium limits, but this is not always the case for the reactant variances and covariances. The plume reactant statistics are closer to the equilibrium limit than those for the ambient reactant. The covariance term in the mean reaction rate is found to be negative and significant for all measurements made. The Toor closure was found to overestimate the mean reaction rate by 15 to 65%. Gradient-model turbulent diffusivities had significant scatter and were not observed to be affected by reaction. The ratio of the turbulent diffusivity for the conserved scalar mean to that for the r.m.s. was found to be approximately 1. Estimates of the ratio of the dissipation timescales of around 2 were found downstream. Estimates of the correlation coefficient between the conserved scalar and its dissipation (parallel to the mean flow) were found to be between 0.25 and the significant value of 0.5. Scalar dissipations for non-reactive and reactive scalars were found to be significantly different. Conditional statistics are found to be a useful way of investigating the reactive behaviour of the plume, effectively decoupling the interaction of chemical reaction and turbulent mixing. It is found that conditional reactive scalar means lack significant transverse dependence, as has previously been found theoretically by Klimenko (1995). It is also found that the conditional variance around the conditional reactive scalar means is relatively small, simplifying the closure for the conditional reaction rate. These properties are important for the Conditional Moment Closure (CMC) model for turbulent reacting flows recently proposed by Klimenko (1990) and Bilger (1993). Preliminary CMC model calculations are carried out for this flow using a simple model for the conditional scalar dissipation. Model predictions and measured conditional reactive scalar means compare favorably. The reaction-dominated limit is found to indicate the maximum reactedness of a reactive scalar and is a limiting case of the CMC model. Conventional (unconditional) reactive scalar means obtained from the preliminary CMC predictions using the conserved scalar p.d.f. compare favorably with those found from experiment, except where the measuring position is relatively far upstream of the stoichiometric distance. Recommendations include applying a full CMC model to the flow and investigating both the less significant terms in the conditional mean species equation and the small variation of the conditional mean with radius. Forms for the p.d.f.s, in addition to those found from experiments, could be useful for extending the CMC model to reactive flows in the atmosphere.
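
For orientation, the quantities discussed above can be written for a generic single-step isothermal reaction A + B → products with rate constant k; these are the standard textbook forms, given here only as a sketch and not necessarily the exact definitions used in the thesis:
\[
\overline{w} \;=\; k\,\overline{c_A c_B} \;=\; k\left(\overline{c}_A\,\overline{c}_B + \overline{c_A' c_B'}\right),
\]
where the covariance \(\overline{c_A' c_B'}\) carries the effect of imperfect mixing (negative in this flow, so the mean rate lies below its "well-mixed" value); the Toor closure estimates this covariance from conserved-scalar statistics alone. The first-order CMC closure instead works with means conditioned on the conserved scalar \(\xi\),
\[
Q_i(\eta) \;\equiv\; \overline{c_i \mid \xi = \eta}, \qquad
\overline{w \mid \xi = \eta} \;\approx\; k\,Q_A(\eta)\,Q_B(\eta),
\]
with unconditional means recovered by weighting with the conserved-scalar p.d.f., \(\overline{c_i} = \int Q_i(\eta)\,p(\eta)\,d\eta\).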

Relevance:

10.00%

Publisher:

Abstract:

While it is commonly accepted that computability on a Turing machine in polynomial time represents a correct formalization of the notion of a feasibly computable function, there is no similar agreement on how to extend this notion to functionals, that is, on which functionals should be considered feasible. One possible paradigm was introduced by Mehlhorn, who extended Cobham's definition of feasible functions to type 2 functionals. Subsequently, this class of functionals (with inessential changes to the definition) was studied by Townsend, who calls this class POLY, and by Kapron and Cook, who call the same class the basic feasible functionals. Kapron and Cook gave an oracle Turing machine model characterisation of this class. In this article, we demonstrate that the class of basic feasible functionals has recursion-theoretic properties which naturally generalise the corresponding properties of the class of feasible functions, thus giving further evidence that the notion of feasibility of functionals mentioned above is correctly chosen. We also improve the Kapron and Cook result on machine representation. Our proofs are based on essential applications of logic. We introduce a weak fragment of second-order arithmetic with second-order variables ranging over functions from N to N which suitably characterises the basic feasible functionals, and show that it is a useful tool for investigating their properties. In particular, we provide an example of how one can extract feasible programs from mathematical proofs that use nonfeasible functions.
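
For context, the Kapron-Cook characterisation referred to above is usually stated via second-order polynomials; the following is the standard formulation from the literature and may differ in inessential details from the article's own presentation. Define the length of a function argument by
\[
|f|(n) \;=\; \max_{|y| \le n} |f(y)|,
\]
and let second-order polynomials be generated by
\[
P ::= c \;\mid\; n \;\mid\; L(P) \;\mid\; P + P \;\mid\; P \cdot P,
\]
where \(n\) ranges over first-order (number) variables and \(L\) over second-order (length-function) variables. A type 2 functional \(F\) is then basic feasible iff some oracle Turing machine computes \(F(f,x)\) within \(P(|f|,|x|)\) steps for such a polynomial \(P\).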

Relevance:

10.00%

Publisher:

Abstract:

In the context of learning paradigms of identification in the limit, we address the question: why is uncertainty sometimes desirable? We use mind change bounds on the output hypotheses as a measure of uncertainty, and interpret ‘desirable’ as reduction in data memorization, also defined in terms of mind change bounds. The resulting model is closely related to iterative learning with bounded mind change complexity, but the dual use of mind change bounds, for hypotheses and for data, is a key distinctive feature of our approach. We show that situations exist where the more mind changes the learner is willing to accept, the less data it needs to remember in order to converge to the correct hypothesis. We also investigate relationships between our model and learning from good examples, set-driven, monotonic and strong-monotonic learners, as well as class-comprising versus class-preserving learnability.
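
For readers unfamiliar with the terminology, the standard notions can be sketched as follows (generic Gold-style definitions, not the paper's specific model). A learner \(M\) identifies a language \(L\) in the limit from a text \(T\) (an enumeration of \(L\)) if the sequence of conjectures \(M(T[0]), M(T[1]), \ldots\) converges to a single correct hypothesis for \(L\). A mind change occurs at stage \(n\) when
\[
{?} \;\neq\; M(T[n]) \;\neq\; M(T[n+1]),
\]
where \({?}\) denotes "no conjecture yet", and a mind change bound of \(c\) requires at most \(c\) such changes on any text for any language in the class.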

Relevance:

10.00%

Publisher:

Abstract:

The present paper motivates the study of mind change complexity for learning minimal models of length-bounded logic programs. It establishes ordinal mind change complexity bounds for learnability of these classes both from positive facts and from positive and negative facts. Building on Angluin’s notion of finite thickness and Wright’s work on finite elasticity, Shinohara defined the property of bounded finite thickness to give a sufficient condition for learnability of indexed families of computable languages from positive data. This paper shows that an effective version of Shinohara’s notion of bounded finite thickness gives sufficient conditions for learnability with an ordinal mind change bound, both in the context of learnability from positive data and for learnability from complete (both positive and negative) data. Let Omega be a notation for the first limit ordinal. Then it is shown that if a language defining framework yields a uniformly decidable family of languages and has effective bounded finite thickness, then for each natural number m > 0, the class of languages defined by formal systems of length <= m:
• is identifiable in the limit from positive data with an ordinal mind change bound of Omega^m;
• is identifiable in the limit from both positive and negative data with an ordinal mind change bound of Omega × m.
The above sufficient conditions are employed to give an ordinal mind change bound for learnability of minimal models of various classes of length-bounded Prolog programs, including Shapiro’s linear programs, Arimura and Shinohara’s depth-bounded linearly covering programs, and Krishna Rao’s depth-bounded linearly moded programs. It is also noted that the bound for learning from positive data is tight for the example classes considered.
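
The ordinal bounds can be read as follows (a standard reading of the ordinal mind change model, sketched here for orientation). The learner maintains a counter holding an ordinal notation, initialised to the announced bound, and must strictly decrease it each time it changes its hypothesis,
\[
\alpha_0 \;\succ\; \alpha_1 \;\succ\; \alpha_2 \;\succ\; \cdots,
\]
so well-foundedness of the ordinals forces convergence. A bound of \(\Omega \times m\) lets the learner postpone, on \(m\) occasions, the decision of how many further mind changes it will allow itself (each \(\Omega\) is instantiated to a natural number when it is first decremented), while \(\Omega^m\) allows such postponements to be nested up to depth \(m\).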

Relevance:

10.00%

Publisher:

Abstract:

To understand human behavior, it is important to know under what conditions people deviate from selfish rationality. This study explores the interaction of natural survival instincts and internalized social norms using data on the sinking of the Titanic and the Lusitania. We show that time pressure appears to be crucial when explaining behavior under extreme conditions of life and death. Even though the two vessels and the composition of their passengers were quite similar, the behavior of the individuals on board was dramatically different. On the Lusitania, selfish behavior dominated (which corresponds to the classical homo oeconomicus); on the Titanic, social norms and social status (class) dominated, which contradicts standard economics. This difference could be attributed to the fact that the Lusitania sank in 18 minutes, creating a situation in which the short-run flight impulse dominated behavior. On the slowly sinking Titanic (2 hours, 40 minutes), there was time for socially determined behavioral patterns to re-emerge. To our knowledge, this is the first time that these shipping disasters have been analyzed in a comparative manner with advanced statistical (econometric) techniques using individual data on the passengers and crew. Knowing how people behave under extreme conditions allows us to gain insight into how varied human behavior can be, depending on differing external conditions.
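
The kind of individual-level comparative analysis described here can be illustrated with a binary-outcome regression of survival on passenger characteristics and a ship indicator. The data frame, variable names and logit specification below are purely illustrative assumptions; they are not the study's data or its actual econometric model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical individual-level records: one row per passenger or crew member.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "survived": rng.integers(0, 2, n),        # 1 = survived
    "female": rng.integers(0, 2, n),          # 1 = female
    "age": rng.uniform(1, 70, n),
    "first_class": rng.integers(0, 2, n),     # 1 = first-class ticket
    "lusitania": rng.integers(0, 2, n),       # 1 = Lusitania, 0 = Titanic
})

# Logit of survival on characteristics, with an interaction testing whether
# the effect of a social variable differs between the two disasters.
df["female_x_lusitania"] = df["female"] * df["lusitania"]
X = sm.add_constant(df[["female", "age", "first_class", "lusitania", "female_x_lusitania"]])
model = sm.Logit(df["survived"], X).fit(disp=False)
print(model.summary())
```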

Relevance:

10.00%

Publisher:

Abstract:

Optimal decision-making requires us to accurately pinpoint the basis of our thoughts, e.g. whether they originate from our memory or our imagination. This paper argues that the phenomenal qualities of our subjective experience provide permissible evidence to revise beliefs, particularly as it pertains to memory. I look to the source monitoring literature to reconcile circumstances where mnemic beliefs and mnemic qualia conflict. Once the experience of remembering is separated from the biological facts of memory, unusual cases make sense, such as memory qualia without memory (e.g. déjà vu, false memories) or a failure to have memory qualia despite memory (e.g. functional amnesia, unintentional plagiarism). I argue that a pragmatic, probabilistic approach to belief revision is a way to rationally incorporate information from conscious experience, whilst acknowledging its inherent difficulties as an epistemic source. I conclude with a Bayesian defense of source monitoring based on C.I. Lewis’ coherence argument for memorial knowledge.
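
The probabilistic approach can be illustrated with a simple Bayesian update (an illustrative sketch, not the paper's own formalism). Let M be the hypothesis that an apparent memory corresponds to a genuinely remembered event and Q the presence of vivid memory qualia; then
\[
P(M \mid Q) \;=\; \frac{P(Q \mid M)\,P(M)}{P(Q \mid M)\,P(M) + P(Q \mid \neg M)\,P(\neg M)},
\]
so memory qualia raise one's credence in genuine remembering only to the extent that such qualia are more likely given a real memory than given imagination or suggestion; cases such as déjà vu and false memories correspond to a non-negligible \(P(Q \mid \neg M)\).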

Relevance:

10.00%

Publisher:

Abstract:

This correspondence paper addresses the problem of output feedback stabilization of control systems in networked environments with quality-of-service (QoS) constraints. The problem is investigated in discrete-time state space using Lyapunov’s stability theory and the linear matrix inequality (LMI) technique. A new discrete-time modeling approach is developed to describe a networked control system (NCS) with parameter uncertainties and nonideal network QoS. It integrates network-induced delay, packet dropout, and other network behaviors into a unified framework. With this modeling, an improved stability condition, which is dependent on the lower and upper bounds of the equivalent network-induced delay, is established for the NCS with norm-bounded parameter uncertainties. It is further extended to the output feedback stabilization of the NCS with nonideal QoS. Numerical examples are given to demonstrate the main results of the theoretical development.
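
A generic form of the kind of uncertain, delay-dependent discrete-time NCS model described above is sketched below; the exact model in the paper may differ in detail:
\[
x(k+1) \;=\; \bigl(A + \Delta A(k)\bigr)x(k) \;+\; \bigl(B + \Delta B(k)\bigr)\,u\bigl(k - \tau(k)\bigr),
\qquad \tau_{\min} \le \tau(k) \le \tau_{\max},
\]
where the time-varying delay \(\tau(k)\) lumps together the network-induced delay and the effect of packet dropouts, \(\Delta A(k)\) and \(\Delta B(k)\) are norm-bounded parameter uncertainties, and delay-dependent stability conditions in terms of \(\tau_{\min}\) and \(\tau_{\max}\) are typically derived from a Lyapunov-Krasovskii functional and expressed as LMIs.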

Relevance:

10.00%

Publisher:

Abstract:

Recent developments in technology, globalization, and consumer activism have challenged the "broadcasting model" of nationally bounded, vertically integrated, monopolistic, expert-paradigm media industries, dedicated to supplying leisure entertainment to more or less passive consumers. Instead, attention has turned to globally traded formats, social network markets, consumer-created content, multiplatform "publication," and a semiotic long tail where niche representations can be as valuable as blockbusters. Such changes are just as much a challenge to education as they are to business models. And education, both formal and informal, is a dynamic agent in these processes; participation and creative content require a rethink of "studies" just as much as of "media."

Relevance:

10.00%

Publisher:

Abstract:

Social infrastructure and sustainable development represent two distinct but interlinked concepts bounded by a geographic location. For those involved in the planning of a residential development, the notion of social infrastructure is crucial to the building of a healthy community and a sustainable environment. This is because social infrastructure is provided in response to the basic needs of communities and to enhance quality of life, equity, stability and social well-being. It also acts as a building block for the enhancement of human and social capital. While acknowledging the different levels of social infrastructure provision, from neighbourhood and local through district and sub-regional levels, past evidence has shown that provision at the neighbourhood and local levels affects the well-being of residents and the sustainability of the community. With intense physical development taking place in Australia's South East Queensland (SEQ) region, local councils are under immense pressure to provide adequate social and community facilities for their residents. This paper shows how a participation-oriented, need-sensitive Integrated Social Infrastructure Planning Guideline is used to offer a solution for the efficient planning and provision of multi-level social infrastructure for the SEQ region. The paper points to the successful implementation of the guideline for social infrastructure planning across multiple levels of spatial jurisdiction in Australia's fastest-growing region.

Relevance:

10.00%

Publisher:

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A^3 √((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
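
Written as a display (following the abstract; the constant factors and the exact form of the error estimate are left abstract here), the bound reads
\[
\Pr[\text{misclassification}] \;\le\; \widehat{E} \;+\; O\!\left(A^{3}\sqrt{\frac{\log n}{m}}\right)
\quad \text{(ignoring } \log A \text{ and } \log m \text{ factors)},
\]
where \(\widehat{E}\) is the error estimate computed from squared error on the training set, \(A\) bounds the sum of the weight magnitudes at each unit, \(n\) is the input dimension, and \(m\) is the number of training patterns.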

Relevance:

10.00%

Publisher:

Abstract:

We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and Gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes. We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines.
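
For reference, one common form of the empirical versions of these complexity measures (conventions vary over the factor of 2 and the use of an absolute value) is
\[
\hat{R}_m(F) \;=\; \mathbb{E}_{\sigma}\!\left[\sup_{f \in F}\; \frac{2}{m}\sum_{i=1}^{m} \sigma_i\, f(x_i)\right],
\qquad
\hat{G}_m(F) \;=\; \mathbb{E}_{g}\!\left[\sup_{f \in F}\; \frac{2}{m}\sum_{i=1}^{m} g_i\, f(x_i)\right],
\]
where \(\sigma_1,\ldots,\sigma_m\) are independent uniform \(\{-1,+1\}\) (Rademacher) variables, \(g_1,\ldots,g_m\) are independent standard Gaussian variables, and \(x_1,\ldots,x_m\) is the sample. Typical risk bounds then control the gap between expected and empirical risk by the complexity of the function class plus a deviation term of order \(\sqrt{\ln(1/\delta)/m}\).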