634 results for Gibbs phenomenon
Abstract:
The measurement error model is a well-established statistical method for regression problems in the medical sciences, although it is rarely used in ecological studies. While the situations in which it is appropriate may be less common in ecology, there are instances in which there may be benefits in its use for prediction and for the estimation of parameters of interest. We have chosen to explore this topic using a conditional independence model in a Bayesian framework, fitted with a Gibbs sampler, as this gives a great deal of flexibility, allowing us to analyse a number of different models without losing generality. Using simulations and two examples, we show how the conditional independence model can be used in ecology, and when it is appropriate.
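A minimal sketch of how a conditional independence measurement error model of this kind can be fitted with a Gibbs sampler (an illustration only, not the authors' code; the model form, priors, variable names and the assumption of a known measurement error variance are all assumptions made here for brevity):

```python
# Sketch: Gibbs sampler for a simple Bayesian measurement error model.
# Assumed (illustrative) structure:
#   true covariate:    x_i ~ N(mu_x, tau_x^2)                      (exposure model)
#   noisy observation: w_i = x_i + u_i,  u_i ~ N(0, sigma_u^2)     (sigma_u known)
#   response:          y_i = alpha + beta*x_i + e_i, e_i ~ N(0, sigma_e^2)
import numpy as np

def gibbs_measurement_error(y, w, sigma_u, mu_x, tau_x,
                            n_iter=5000, a0=0.01, b0=0.01, seed=1):
    rng = np.random.default_rng(seed)
    n = len(y)
    x = w.copy()                      # initialise latent covariates at observed values
    sigma_e2 = 1.0
    draws = np.empty((n_iter, 3))     # store (alpha, beta, sigma_e)
    for it in range(n_iter):
        # 1. Block update of (alpha, beta): conjugate normal under a flat prior
        X = np.column_stack([np.ones(n), x])
        XtX_inv = np.linalg.inv(X.T @ X)
        coef_hat = XtX_inv @ X.T @ y
        alpha, beta = rng.multivariate_normal(coef_hat, sigma_e2 * XtX_inv)
        # 2. Residual variance: inverse-gamma conjugacy
        resid = y - alpha - beta * x
        sigma_e2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * resid @ resid))
        # 3. Latent covariates: normal full conditional combining the prior,
        #    the observation w_i and the response y_i (conditional independence)
        prec = 1 / tau_x**2 + 1 / sigma_u**2 + beta**2 / sigma_e2
        mean = (mu_x / tau_x**2 + w / sigma_u**2 + beta * (y - alpha) / sigma_e2) / prec
        x = rng.normal(mean, np.sqrt(1 / prec))
        draws[it] = alpha, beta, np.sqrt(sigma_e2)
    return draws

# Synthetic check with data generated from the assumed model
rng = np.random.default_rng(0)
x_true = rng.normal(0.0, 1.0, 200)
w = x_true + rng.normal(0.0, 0.5, 200)              # error-prone covariate
y = 1.0 + 2.0 * x_true + rng.normal(0.0, 0.3, 200)
samples = gibbs_measurement_error(y, w, sigma_u=0.5, mu_x=0.0, tau_x=1.0)
print(samples[1000:].mean(axis=0))                  # posterior means after burn-in
```

Each full conditional here is available in closed form, which is what keeps the Gibbs sampler simple; richer model variants would modify these conditionals rather than the overall sampling scheme.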
Abstract:
Driving is a vigilance task, requiring sustained attention to maintain performance and avoid crashes. Hypovigilance (i.e., a marked reduction in vigilance) while driving manifests as poor driving performance and is commonly attributed to fatigue (Dinges, 1995). However, poor driving performance has been found to be more frequent when driving in monotonous road environments, suggesting that monotony plays a role in generating hypovigilance (Thiffault & Bergeron, 2003b). Research to date has tended to conceptualise monotony as a uni-dimensional task characteristic, typically used over a prolonged period of time to facilitate other factors under investigation, most notably fatigue. However, more often than not, more than one exogenous factor relating to the task or operating environment is manipulated to vary or generate monotony (Mascord & Heath, 1992). Here we aimed to explore whether monotony is a multi-dimensional construct that is determined by characteristics of both the task proper and the task environment. The general assumption that monotony is a task characteristic used solely to elicit hypovigilance or poor performance related to fatigue appears to have led to little rigorous investigation into the exact nature of the relationship. While the two concepts are undoubtedly linked, the independent effect of monotony on hypovigilance remains largely ignored. Notwithstanding, there is evidence that monotony effects can emerge very early in vigilance tasks and are not necessarily accompanied by fatigue (see Meuter, Rakotonirainy, Johns, & Wagner, 2005). This phenomenon raises a largely untested empirical question, explored in two studies: can hypovigilance emerge as a consequence of task and/or environmental monotony, independent of time on task and fatigue? In Study 1, using a short computerised vigilance task requiring responses to be withheld to infrequent targets, we explored the differential impacts of stimulus and task demand manipulations on the development of a monotonous context and the associated effects on vigilance performance (as indexed by response errors and response times), independent of fatigue and time on task. The role of individual differences (sensation seeking, extroversion and cognitive failures) in moderating monotony effects was also considered. The results indicate that monotony affects sustained attention, with hypovigilance and poorer performance in monotonous than in non-monotonous contexts. Critically, performance decrements emerged early in the task (within 4.3 minutes) and remained consistent over the course of the experiment (21.5 minutes), suggesting that monotony effects can operate independent of time on task and fatigue. A combination of low task demands and low stimulus variability forms a monotonous context characterised by hypovigilance and poor task performance. Variations to task demand and stimulus variability were also found to independently affect performance, suggesting that monotony is a multi-dimensional construct relating to both task monotony (associated with the task itself) and environmental monotony (related to characteristics of the stimulus). Consequently, it can be concluded that monotony is multi-dimensional and is characterised by low variability in stimuli and/or task demands. The proposition that individual differences emerge under conditions of varying monotony, with high sensation seekers and/or extroverts performing worse in monotonous contexts, was only partially supported.
Using a driving simulator, the findings of Study 1 were extended to a driving context to identify the behavioural and psychophysiological indices of monotony-related hypovigilance associated with variations to road design and roadside scenery (Study 2). Supporting the proposition that monotony is a multi-dimensional construct, road design variability emerged as a key moderating characteristic of environmental monotony, resulting in poor driving performance indexed by decrements in steering wheel measures (mean lateral position). Sensation seeking also emerged as a moderating factor, with participants high in sensation seeking tendencies displaying worse driving behaviour in monotonous conditions. Importantly, impaired driving performance was observed within 8 minutes of commencing the driving task characterised by environmental monotony (low variability in road design) and was not accompanied by a decline in psychophysiological arousal. In addition, no subjective declines in alertness were reported. Given that fatigue effects are associated with prolonged driving (van der Hulst, Meijman, & Rothengatter, 2001) and indexed by drowsiness, this pattern of results indicates that monotony can affect driver vigilance independent of time on task and fatigue. Perceptual load theory (Lavie, 1995, 2005) and mindlessness theory (Robertson, Manly, Andrade, Baddeley, & Yiend, 1997) provide useful theoretical frameworks for explaining and predicting monotony effects by positing that the low load (of task and/or stimuli) associated with a monotonous task results in spare attentional capacity which spills over involuntarily, resulting in the processing of task-irrelevant stimuli or task-unrelated thoughts. That is, individuals, even when not fatigued, become easily distracted when performing a highly monotonous task, resulting in hypovigilance and impaired performance. The implications for road safety, including the likely effectiveness of fatigue countermeasures in mitigating monotony-related driver hypovigilance, are discussed.
Abstract:
Entrepreneurial Orientation (EO) has a 30-year history as one of the most used concepts in entrepreneurship research. “Recent attention in formal sessions at the Academy of Management conference programs confirm Entrepreneurial Orientation as a primary construct with a majority of Entrepreneurship Division sponsored sessions devoted to studies using EO related measures”, as reported by the 2010 division program chair, Per Davidsson (Roberts, 2010: 9). However, questions continue to be raised concerning over-dependence on parts of one strategic scale, possible inappropriate or under-theorized adaptations, and the lack of theoretical development on application and performance variance in emergent, organizational, and socioeconomic settings. One recent area of investigation in analysis, methods, theory and application concerns an “EO gestalt”, focusing on the family of EO-related measures and theory, rather than on one or more dimensions, in order to explore the theory and process of the Entrepreneurial Orientation phenomenon. The goals of the 4th Annual EO3 PDW are to enlighten researchers on the development of Entrepreneurial Orientation theory and related scales, balance the use of current Entrepreneurial Orientation knowledge with the new research frontiers suggested by EO3 scholars’ questions, and transcend boundaries in the discoveries undertaken in the shared interdisciplinary and cross-cultural research agenda currently developing for Entrepreneurial Orientation concepts. Going into its fourth year, the EO3 PDW has been pivotal in formalizing discussion, pushing research forward, and gaining insights from experienced and cutting-edge scholars, as it provides a point of reference for coalescing research questions and findings surrounding this important concept.
Abstract:
In this research we examined, by means of case studies, the mechanisms by which relationships can be managed and by which communication and cooperation can be enhanced in developing sustainable supply chains. The research was predicated on the contention that the development of a sustainable supply chain depends, in part, on the transfer of knowledge and capabilities from the larger players in the supply chain. A sustainable supply chain requires proactive relationship management and the development of an appropriate organisational culture, and trust. By legitimising individuals’ expectations of the type of culture which is appropriate to their company and empowering employees to address mismatches that may occur, a situation can be created whereby the collaborating organisations develop their competences symbiotically and so facilitate a sustainable supply chain. Effective supply chain management enhances organisation performance and competitiveness through the management of operations across organisational boundaries. Relational contracting approaches facilitate the exchange of information and knowledge and build capacity in the supply chain, thus enhancing its sustainability. Relationship management also provides the conditions necessary for the development of collaborative and cooperative relationships. However, subcontractors and suppliers are often not empowered to attend project meetings or to have direct communication with project-based staff. With this being a common phenomenon in the construction industry, one might ask: what are the barriers to the implementation of relationship management through the supply chain? In other words, the problem addressed in this research is the engagement of the supply chain through relationship management.
Abstract:
Alliances, along with other inter-organisational forms, have become a strategy of choice and necessity for both the private and public sectors. From initial formation, alliances develop and change in different ways, with research suggesting that many alliances will be terminated without their potential value being realised. Alliance process theorists address this phenomenon, seeking explanations as to why alliances unfold the way they do. However, these explanations have generally focussed on economic and structural determinants: empirically, little is known about how and why the agency of alliance actors shapes the alliance path. Theorists have suggested that current alliance process theory has provided valuable but partial accounts of alliance development, which could be usefully extended by considering social and individual factors. The purpose of this research therefore was to extend alliance process theory by exploring individual agency as an explanation of alliance events and, in doing so, to reveal the potential of a multi-frame approach for understanding alliance process. Through an historical study of a single, rich case of alliance process, this thesis provided three explanations for the sequence of alliance events, each informed by a different theoretical perspective. The explanatory contribution of the Individual Agency (IA) perspective was distilled through juxtaposition with the perspectives of Environmental Determinism (ED) and Indeterminacy/Chance (I/C). The research produced a number of findings. First, it provided empirical support for the tentative proposition that the choices and practices of alliance actors are partially explanatory of alliance change and that these practices are particular to the alliance context. Second, the study found that examining the case through three theoretical frames provided a more complete explanation. Two propositions were put forward as to how individual agency can be theorised within this three-perspective framework. Finally, the study explained which alliance actors were required to shape alliance decision making in this case and why.
Abstract:
This paper presents the results of a study of information behaviors in the context of people's everyday lives, conducted as part of a larger study of information behaviors (IB). Thirty-four participants from across six countries maintained a daily information journal or diary – mainly through a secure web log – for two weeks, yielding an aggregate of 468 participant days over five months. The text-rich diary data was analyzed using Grounded Theory analysis. The findings indicate that information avoidance is a common phenomenon in everyday life, consisting of both passive avoidance and active avoidance. This has implications for several aspects of people's lives, including health, finance, and personal relationships.
Abstract:
The calibration process in micro-simulation is an extremely complicated phenomenon. The difficulties are more prevalent if the process encompasses fitting both aggregate and disaggregate parameters, e.g. travel time and headway. Current calibration practice operates mostly at the aggregate level, for example travel time comparison. Such practices are popular for assessing network performance. Though these applications are significant, there is another stream of micro-simulation calibration, at the disaggregate level. This study will focus on such a micro-calibration exercise, which is key to better comprehending motorway traffic risk levels and the management of variable speed limit (VSL) and ramp metering (RM) techniques. A selected section of the Pacific Motorway in Brisbane will be used as a case study. The discussion will primarily incorporate the critical issues encountered during the parameter adjustment exercise (e.g. vehicular and driving behaviour parameters) with reference to key traffic performance indicators such as speed, lane distribution and headway at specific motorway points. The endeavour is to highlight the utility and implications of such disaggregate-level simulation for improved traffic prediction studies. The aspects of calibrating for points, in comparison to calibrating for the whole network, will also be briefly addressed to examine critical issues such as the suitability of local calibration at a global scale. The paper will be of interest to transport professionals in Australia/New Zealand, where micro-simulation, particularly at the point level, is still comparatively unexplored territory in motorway management.
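As a concrete illustration of what point-level checks might look like, the sketch below (not from the paper) compares hypothetical observed and simulated detector data at a single motorway point: the GEH statistic is a common aggregate check on flows, and a two-sample test on headway distributions is one possible disaggregate check.

```python
# Sketch of aggregate vs disaggregate calibration checks at one detector point.
# All data below are hypothetical placeholders.
import numpy as np
from scipy import stats

def geh(simulated_flow, observed_flow):
    """GEH statistic for comparing simulated and observed hourly flows."""
    m = np.asarray(simulated_flow, dtype=float)
    c = np.asarray(observed_flow, dtype=float)
    return np.sqrt(2.0 * (m - c) ** 2 / (m + c))

def headway_check(simulated_headways, observed_headways):
    """Two-sample Kolmogorov-Smirnov test on headway distributions."""
    return stats.ks_2samp(simulated_headways, observed_headways)

obs_flow = np.array([1850, 1920, 2010])     # veh/h over three intervals (placeholder)
sim_flow = np.array([1790, 1980, 2100])
obs_headway = np.random.default_rng(0).exponential(1.8, 500)   # seconds (placeholder)
sim_headway = np.random.default_rng(1).exponential(2.0, 500)

print("GEH per interval:", np.round(geh(sim_flow, obs_flow), 2))   # GEH < 5 is a common target
print("Headway KS test:", headway_check(sim_headway, obs_headway))
```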
Abstract:
Cancer-related fatigue (CRF) is a distressing symptom frequently experienced by patients with advanced cancer. While there have been some advances in the understanding of the management of fatigue associated with cancer treatment, CRF associated with advanced cancer remains a phenomenon that is not well-managed. The aetiologic factors associated with CRF, the impacts of CRF and the current management of CRF are discussed in this review article in relation to patients with advanced cancer. The paper concludes that while further research is required in the area, there are several potentially effective strategies currently available that can reduce the severity of CRF in patients with advanced cancer.
Abstract:
This special issue came about following an international symposium on bullying held in December 2008 at the Department of Child Studies, Linköping University, Sweden, led by Jakob Cromdal and Paul Horton. The articles represent a diverse body of theoretical and empirical work that emphasises children and young people’s views of and participation in everyday experiences. As a collection, the articles aim to be provocative, challenging some existing dominant understandings of bullying in order to propose alternative ways of understanding this phenomenon.
Abstract:
Corrosion is a common phenomenon and a critical aspect of structural steel applications. It affects daily design, inspection and maintenance in structural engineering, especially for heavy and complex industrial applications, where steel structures are subjected to harsh corrosive environments in combination with high working stress conditions, often in open fields and/or high-temperature production environments. This paper presents a practical engineering application of advanced finite element methods to the prediction of structural integrity and robustness over the design service life of alumina production furnaces, which operate in high-temperature, corrosive environments while rotating under high working stress conditions.
Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology, in particular to risk assessment methodology and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: firstly, risk assessment analyses for wastewater, and secondly, a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset in a way which satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, with two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with increasing variances with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, this approach does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as being a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
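A minimal sketch of the block-updating idea for a layered CAR effect is given below. It assumes a proper CAR prior with neighbours restricted to the same depth layer and a regular grid of sites per layer; the layout, priors and parameter values are illustrative assumptions, and this is not the pyMCMC implementation used in the thesis.

```python
# Sketch: one block Gibbs update of a layered CAR spatial effect.
# Model assumed here: y = mu + phi + eps, eps ~ N(0, sigma2*I),
# phi ~ proper CAR with precision (D - rho*W)/tau2, neighbours within a layer only.
import numpy as np
from scipy.linalg import cho_factor, cho_solve, solve_triangular

def layered_adjacency(n_rows, n_cols, n_layers):
    """Rook adjacency on an n_rows x n_cols grid, repeated independently per depth layer."""
    n = n_rows * n_cols
    W_layer = np.zeros((n, n))
    for r in range(n_rows):
        for c in range(n_cols):
            i = r * n_cols + c
            for dr, dc in ((1, 0), (0, 1)):
                rr, cc = r + dr, c + dc
                if rr < n_rows and cc < n_cols:
                    j = rr * n_cols + cc
                    W_layer[i, j] = W_layer[j, i] = 1.0
    return np.kron(np.eye(n_layers), W_layer)   # block diagonal: no cross-layer neighbours

def block_update_phi(y, mu, W, tau2, sigma2, rng, rho=0.95):
    """Draw the whole CAR effect vector from its multivariate normal full
    conditional in a single block, instead of term-by-term updating."""
    D = np.diag(W.sum(axis=1))
    Q = (D - rho * W) / tau2 + np.eye(len(y)) / sigma2      # full-conditional precision
    L, lower = cho_factor(Q, lower=True)
    mean = cho_solve((L, lower), (y - mu) / sigma2)
    z = rng.standard_normal(len(y))
    return mean + solve_triangular(L, z, lower=True, trans='T')   # draw from N(mean, Q^{-1})

# Tiny example: 4 x 4 sites per layer, 3 depth layers
W = layered_adjacency(4, 4, 3)
rng = np.random.default_rng(42)
y = rng.normal(0.0, 1.0, W.shape[0])
phi = block_update_phi(y, mu=0.0, W=W, tau2=1.0, sigma2=0.5, rng=rng)
print(phi.shape)    # one joint draw of all 48 spatial effects
```

Sampling the whole vector at once via a Cholesky factorisation of the full-conditional precision is what avoids the highly correlated term-by-term updating that makes WinBUGS slow on large datasets.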
Abstract:
Crowdsourcing harnesses the potential of large and open networks of people. It is a relatively new phenomenon and has attracted substantial interest in practice. Related research, however, lacks a theoretical foundation. We propose a system-theoretical perspective on crowdsourcing systems to address this gap and illustrate its applicability by using it to classify crowdsourcing systems. By deriving two principal dimensions from theory, we identify four fundamental types of crowdsourcing systems that help to distinguish important features of such systems. We analyse their respective characteristics and discuss implications and requirements for various aspects related to the design of such systems. Our results demonstrate that systems theory can inform the study of crowdsourcing systems. The identified system types and the implications for their design may prove useful for researchers to frame future studies and for practitioners to identify the right crowdsourcing systems for a particular purpose.
Abstract:
The effect of sample geometry on the melting rates of burning iron rods was assessed. Promoted-ignition tests were conducted with rods having cylindrical, rectangular, and triangular cross-sectional shapes over a range of cross-sectional areas. The regression rate of the melting interface (RRMI) was assessed using a statistical approach which enabled the quantification of confidence levels for the observed differences in RRMI. Statistically significant differences in RRMI were observed for rods with the same cross-sectional area but different cross-sectional shape. The magnitude of the proportional difference in RRMI increased with the cross-sectional area. Triangular rods had the highest RRMI, followed by rectangular rods, and then cylindrical rods. The dependence of RRMI on rod shape is shown to relate to the action of molten metal at corners. The corners of the rectangular and triangular rods melted faster than the faces due to their locally higher surface area to volume ratios. This phenomenon altered the attachment geometry between liquid and solid phases, increasing the surface area available for heat transfer, causing faster melting. Findings relating to the application of standard flammability test results in industrial situations are also presented.
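As an illustration of how confidence levels for such differences might be quantified (a sketch only, not the paper's actual statistical approach, and with placeholder values rather than the reported data), Welch's t-test and the corresponding interval for the difference in mean RRMI between two shapes could be computed as follows:

```python
# Sketch: confidence level for a difference in mean RRMI between two rod shapes.
# All values are hypothetical placeholders, not data from the study.
import numpy as np
from scipy import stats

rrmi_triangular = np.array([11.2, 10.8, 11.9, 11.5, 10.6])    # mm/s (placeholder)
rrmi_cylindrical = np.array([9.1, 9.6, 8.8, 9.4, 9.0])        # mm/s (placeholder)

# Welch's t-test: no assumption of equal variances between shape groups
t_stat, p_value = stats.ttest_ind(rrmi_triangular, rrmi_cylindrical, equal_var=False)

# 95% confidence interval for the difference of means (Welch-Satterthwaite dof)
m1, m2 = rrmi_triangular.mean(), rrmi_cylindrical.mean()
v1, v2 = rrmi_triangular.var(ddof=1), rrmi_cylindrical.var(ddof=1)
n1, n2 = len(rrmi_triangular), len(rrmi_cylindrical)
se = np.sqrt(v1 / n1 + v2 / n2)
dof = (v1 / n1 + v2 / n2) ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
ci = (m1 - m2) + np.array([-1.0, 1.0]) * stats.t.ppf(0.975, dof) * se

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, 95% CI for the difference = {np.round(ci, 2)}")
```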