48 results for THEORETICAL BASIS
at Queensland University of Technology - ePrints Archive
Abstract:
Communication processes are vital in the lifecycle of BPM projects. With this in mind, much research has been performed into facilitating this key component between stakeholders. Amongst the methods used to support this process are personalised process visualisations. In this paper, we review the development of this visualisation trend and then propose a theoretical analysis framework based upon communication theory. We use this framework to provide theoretical support to the conjecture that 3D virtual worlds are powerful tools for communicating personalised visualisations of processes within a workplace. Meta-requirements are then derived and applied, via 3D virtual world functionalities, to generate example visualisations containing personalised aspects, which we believe enhance the process of communication between analysts and stakeholders in BPM process (re)design activities.
Abstract:
In this paper, two ideal formation models of serrated chips, the symmetric formation model and the unilateral right-angle formation model, have been established for the first time. Based on the ideal models and the related adiabatic shear theory of serrated chip formation, the theoretical relationships among average tooth pitch, average tooth height and chip thickness are obtained. Further, the theoretical relation between the passivation coefficient of the chip's sawtooth and the chip thickness compression ratio is deduced as well. The comparison between these theoretical prediction curves and experimental data shows good agreement, which validates the robustness of the ideal chip formation models and the correctness of the theoretical derivation. The proposed ideal models provide a simple but effective theoretical basis for subsequent research on serrated chip morphology. Finally, the influences of the principal cutting factors on serrated chip formation are discussed on the basis of a series of finite element simulation results, yielding practical advice for controlling serrated chips in engineering applications.
Abstract:
AIM: This paper analyses and illustrates the application of Bandura's self-efficacy construct to an innovative self-management programme for patients with both type 2 diabetes and coronary heart disease. BACKGROUND: Using theory as a framework for any health intervention provides a solid and valid foundation for aspects of planning and delivering such an intervention; however, it is reported that many health behaviour intervention programmes are not based upon theory and are consequently limited in their applicability to different populations. The cardiac-diabetes self-management programme has been specifically developed for patients with dual conditions, with the strategies for delivering the programme based upon Bandura's self-efficacy theory. This patient group is at greater risk of negative health outcomes than those with a single chronic condition and therefore requires appropriate intervention programmes with solid theoretical foundations that can address the complexity of care required. SOURCES OF EVIDENCE: The cardiac-diabetes self-management programme has been developed incorporating theory, evidence and practical strategies. DISCUSSION: This paper provides explicit knowledge of the theoretical basis and components of a cardiac-diabetes self-management programme. Such detail enhances the ability to replicate or adopt the intervention in similar or differing populations and/or cultural contexts as it provides in-depth understanding of each element within the intervention. CONCLUSION: Knowledge of the concepts alone is not sufficient to deliver a successful health programme. Supporting patients to master skills of self-care is essential in order for patients to successfully manage two complex, chronic illnesses. IMPLICATIONS FOR NURSING PRACTICE OR HEALTH POLICY: Valuable information has been provided to close the theory-practice gap for more consistent health outcomes, engaging with patients to promote holistic care within organizational and cultural contexts.
Abstract:
While organizations strive to leverage the vast information generated daily from social media platforms, and decision makers are keen to identify and exploit its value, the quality of this information remains uncertain. Past research on information quality criteria and evaluation issues in social media is largely disparate, incomparable and lacking any common theoretical basis. To address this gap, this study adapts existing guidelines and exemplars of construct conceptualization in information systems research to deductively define information quality and related criteria in the social media context. Building on a notion of information derived from semiotic theory, this paper suggests a general conceptualization of information quality in the social media context that can be used in future research to develop more context-specific conceptual models.
Abstract:
By way of response to Professor Duncan's article,[1] this article examines the theoretical basis for the implication of contractual terms, particularly the implication of a term at law. In this regard the recent decision of Barrett J in Overlook v Foxtel [2002] NSWSC 17 is considered, to the extent that it provides guidance concerning the implication of an obligation of good faith in the context of a commercial contract. A number of observations are made which may be considered likely to have application to the relationship of commercial landlord and tenant. The conclusion reached is that although the commercial landlord and tenant contractual relationship is highly regulated, this may not deny a remedy to a tenant who is the victim of a landlord's 'bad faith'. Finally, the article concludes by considering the extent to which it may be possible to contractually exclude the implied obligation of good faith.
Abstract:
Sets out a system of corporate governance regulation, aimed at combining legal and social methods of governing director behaviour and at creating a framework flexible enough to accommodate different business and ethical cultures. Outlines the theoretical basis of corporate governance and the broad responsibilities of directors, and discusses the extent to which they can and should be regulated. Discusses the constitution of a regulatory framework encompassing law, soft law and best practice, and ethics.
Abstract:
The worldwide organ shortage occurs despite people's positive organ donation attitudes. The discrepancy between attitudes and behaviour is evident in Australia particularly, with widespread public support for organ donation but low donation and communication rates. This problem is compounded further by the paucity of theoretically based research to improve our understanding of people's organ donation decisions. This program of research contributes to our knowledge of individual decision making processes for three aspects of organ donation: (1) posthumous (upon death) donation, (2) living donation (to a known and unknown recipient), and (3) providing consent for donation by communicating donation wishes on an organ donor consent register (registering) and discussing the donation decision with significant others (discussing). The research program used extended versions of the Theory of Planned Behaviour (TPB) and the Prototype/Willingness Model (PWM), incorporating additional influences (moral norm, self-identity, organ recipient prototypes), to explicate the relationship between people's positive attitudes and low rates of organ donation behaviours. Adopting the TPB and PWM (and their extensions) as a theoretical basis overcomes several key limitations of the extant organ donation literature including the often atheoretical nature of organ donation research, the focus on individual difference factors to construct organ donor profiles and the omission of important psychosocial influences (e.g., control perceptions, moral values) that may impact on people's decision-making in this context. In addition, the use of the TPB and PWM adds further to our understanding of the decision making process for communicating organ donation wishes. Specifically, the extent to which people's registering and discussing decisions may be explained by a reasoned and/or a reactive decision making pathway is examined (Stage 3) with the novel application of the TPB augmented with the social reaction pathway in the PWM. This program of research was conducted in three discrete stages: a qualitative stage (Stage 1), a quantitative stage with extended models (Stage 2), and a quantitative stage with augmented models (Stage 3). The findings of the research program are reported in nine papers which are presented according to the three aspects of organ donation examined (posthumous donation, living donation, and providing consent for donation by registering or discussing the donation preference). Stage One of the research program comprised qualitative focus groups/interviews with university students and community members (N = 54) (Papers 1 and 2). Drawing broadly on the TPB framework (Paper 1), content analysed responses revealed people's commonly held beliefs about the advantages and disadvantages (e.g., prolonging/saving life), important people or groups (e.g., family), and barriers and motivators (e.g., a family's objection to donation), related to living and posthumous organ donation. Guided by a PWM perspective, Paper Two identified people's commonly held perceptions of organ donors (e.g., altruistic and giving), non-donors (e.g., self-absorbed and unaware), and transplant recipients (e.g., unfortunate, and in some cases responsible/blameworthy for their predicament). Stage Two encompassed quantitative examinations of people's decision making for living (Papers 3 and 4) and posthumous (Paper 5) organ donation, and for registering and discussing donation wishes (Papers 6 to 8) to test extensions to both the TPB and PWM.
Comparisons of health students’ (N = 487) motivations and willingness for living related and anonymous donation (Paper 3) revealed that a person’s donor identity, attitude, past blood donation, and knowing a posthumous donor were four common determinants of willingness, with the results highlighting students’ identification as a living donor as an important motive. An extended PWM is presented in Papers Four and Five. University students’ (N = 284) willingness for living related and anonymous donation was tested in Paper Four with attitude, subjective norm, donor prototype similarity, and moral norm (but not donor prototype favourability) predicting students’ willingness to donate organs in both living situations. Students’ and community members’ (N = 471) posthumous organ donation willingness was assessed in Paper Five with attitude, subjective norm, past behaviour, moral norm, self-identity, and prior blood donation all significantly directly predicting posthumous donation willingness, with only an indirect role for organ donor prototype evaluations. The results of two studies examining people’s decisions to register and/or discuss their organ donation wishes are reported in Paper Six. People’s (N = 24) commonly held beliefs about communicating their organ donation wishes were explored initially in a TPB based qualitative elicitation study. The TPB belief determinants of intentions to register and discuss the donation preference were then assessed for people who had not previously communicated their donation wishes (N = 123). Behavioural and normative beliefs were important determinants of registering and discussing intentions; however, control beliefs influenced people’s registering intentions only. Paper Seven represented the first empirical test of the role of organ transplant recipient prototypes (i.e., perceptions of organ transplant recipients) in people’s (N = 465) decisions to register consent for organ donation. Two factors, Substance Use and Responsibility, were identified and Responsibility predicted people’s organ donor registration status. Results demonstrated that unregistered respondents were the most likely to evaluate transplant recipients negatively. Paper Eight established the role of organ donor prototype evaluations, within an extended TPB model, in predicting students’ and community members’ registering (n = 359) and discussing (n = 282) decisions. Results supported the utility of an extended TPB and suggested a role for donor prototype evaluations in predicting people’s discussing intentions only. Strong intentions to discuss donation wishes increased the likelihood that respondents reported discussing their decision 1-month later. Stage Three of the research program comprised an examination of augmented models (Paper 9). A test of the TPB augmented with elements from the social reaction pathway in the PWM, and extensions to these models was conducted to explore whether people’s registering (N = 339) and discussing (N = 315) decisions are explained via a reasoned (intention) and/or social reaction (willingness) pathway. Results suggested that people’s decisions to communicate their organ donation wishes may be better explained via the reasoned pathway, particularly for registering consent; however, discussing also involves reactive elements. Overall, the current research program represents an important step toward clarifying the relationship between people’s positive organ donation attitudes but low rates of organ donation and communication behaviours. 
Support has been demonstrated for the use of extensions to two complementary theories, the TPB and PWM, which can inform future research aiming to explicate further the organ donation attitude-behaviour relationship. The focus on a range of organ donation behaviours enables the identification of key targets for future interventions encouraging people’s posthumous and living donation decisions, and communication of their organ donation preference.
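The Stage Two and Three model tests described above are regression-style analyses of extended TPB/PWM predictors. The sketch below is purely illustrative (simulated data, invented coefficients, and assumed variable names rather than anything from these studies) of how extended TPB predictors such as attitude, subjective norm, perceived behavioural control, moral norm and self-identity might be regressed on donation willingness.

```python
import numpy as np
import statsmodels.api as sm

# Simulated survey responses standing in for real data (illustrative only).
rng = np.random.default_rng(0)
n = 300
predictors = {
    "attitude": rng.normal(5, 1, n),
    "subjective_norm": rng.normal(4, 1, n),
    "perceived_control": rng.normal(4, 1, n),
    "moral_norm": rng.normal(5, 1, n),        # extension beyond the standard TPB
    "self_identity": rng.normal(3, 1, n),     # extension beyond the standard TPB
}
X = sm.add_constant(np.column_stack(list(predictors.values())))
# Willingness is generated from the predictors plus noise, for demonstration only.
willingness = X @ np.array([0.5, 0.4, 0.3, 0.2, 0.3, 0.25]) + rng.normal(0, 1, n)

model = sm.OLS(willingness, X).fit()
print(model.summary(xname=["const"] + list(predictors.keys())))
```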
Abstract:
With increasing pressure to provide environmentally responsible infrastructure products and services, stakeholders are putting significant focus on the early identification of the financial viability and outcomes of infrastructure projects. Traditionally, there has been an imbalance between sustainability measures and project budget. On one hand, the industry tends to employ a first-cost mentality and approach to developing infrastructure projects. On the other, environmental experts and technology innovators often push for the greenest products and systems without much concern for cost. This situation is changing quickly as the industry comes under pressure to continue to return a profit while better adapting to current and emerging global issues of sustainability. For the infrastructure sector to contribute to sustainable development, it will need to increase value and efficiency. Thus, there is a great need for tools that will enable decision makers to evaluate competing initiatives and identify the most sustainable approaches to procuring infrastructure projects. To ensure that these objectives are achieved, life-cycle costing analysis (LCCA) plays a significant role in the economics of an infrastructure project. Recently, a few research initiatives have applied LCCA models to road infrastructure, but these have focused on the traditional economics of a project. There is little coverage of life-cycle costing as a method to evaluate the criteria and assess the economic implications of pursuing sustainability in road infrastructure projects. To rectify this problem, this paper reviews the theoretical basis of previous LCCA models before discussing their inability to capture sustainability indicators in road infrastructure projects. It then introduces on-going research aimed at developing a new model that integrates new cost elements, based on the sustainability indicators, with the traditional and proven LCCA approach. It is expected that the research will generate a working model for sustainability-based life-cycle cost analysis.
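As a rough illustration of the life-cycle costing idea being extended here, the sketch below discounts both traditional agency costs and a hypothetical sustainability-related cost element to present value. All item names, amounts and the discount rate are invented for demonstration and are not drawn from the paper's model.

```python
from dataclasses import dataclass

@dataclass
class CostItem:
    """One cost element in a road-project life-cycle cost analysis (LCCA)."""
    name: str
    amount: float   # cost in today's dollars
    year: int       # analysis year in which the cost is incurred

def life_cycle_cost(items, discount_rate=0.05):
    """Discount every cost element and sum them: LCC = sum(C_t / (1 + r)^t)."""
    return sum(item.amount / (1.0 + discount_rate) ** item.year for item in items)

# Illustrative option over a 30-year analysis period, mixing traditional and
# sustainability-related cost elements (all figures hypothetical).
option_a = [
    CostItem("construction", 10_000_000, 0),
    CostItem("resurfacing", 2_000_000, 15),
    CostItem("carbon cost (sustainability element)", 300_000, 0),
]
print(f"Option A life-cycle cost: ${life_cycle_cost(option_a):,.0f}")
```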
Abstract:
Uninhabited aerial vehicles (UAVs) are a cutting-edge technology that is at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion.
We prove that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operation conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400m to 900m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5 second response time recommended for human pilots. Furthermore, readily available graphic processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
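To make the two-stage processing paradigm concrete, here is a minimal sketch, not the thesis's calibrated MHMM design, of a close-minus-open morphological pre-processing step followed by a per-pixel two-state hidden Markov model forward filter over a frame sequence. The transition probabilities, likelihood models and the `gain` parameter are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def preprocess(frame, win=5):
    """Close-minus-open (CMO) morphological filter: one common way to
    emphasise small dim targets against slowly varying background clutter."""
    return grey_closing(frame, size=win) - grey_opening(frame, size=win)

def hmm_forward_filter(frames, p_birth=0.01, p_stay=0.95, gain=3.0):
    """Per-pixel two-state HMM forward filter (state 0 = background,
    state 1 = target present) over a (T, H, W) stack of pre-processed frames."""
    T, H, W = frames.shape
    A = np.array([[1.0 - p_birth, p_birth],
                  [1.0 - p_stay, p_stay]])
    post = np.full((H, W, 2), [0.99, 0.01])           # initial state probabilities
    for t in range(T):
        pred = post @ A                                # predict step
        x = frames[t]
        lik = np.stack([np.exp(-x), np.exp(-x / gain)], axis=-1)  # update step
        post = pred * lik
        post /= post.sum(axis=-1, keepdims=True) + 1e-12
    return post[..., 1]                                # P(target present | frames)

# Usage sketch: pre-process each raw frame, then threshold the posterior map.
# frames = np.stack([preprocess(f) for f in raw_frames])
# detections = hmm_forward_filter(frames) > 0.5
```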
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem. This problem involves locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census, were also compared. Both the rank and the census transforms were found to result in improved reliability of matching in the presence of radiometric distortion - significant since radiometric distortion is a problem which commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, it was decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process resulted in one constraint which must be satisfied for a correct match. This was termed the rank constraint. The theoretical derivation of this constraint is in contrast to the existing matching constraints which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs. In all cases, the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas. These included the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that the algorithm is able to remove a large proportion of invalid matches, and improve match accuracy.
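The rank and census transforms referred to above are standard non-parametric local transforms. The sketch below gives a straightforward, unoptimised NumPy implementation of both; the window size is an illustrative parameter rather than the value used in the thesis.

```python
import numpy as np

def rank_transform(img, win=5):
    """Rank transform: each pixel becomes the count of neighbours in a
    win x win window with intensity less than the centre pixel.
    Matching is then typically done with SAD on the transformed images."""
    r = win // 2
    pad = np.pad(img, r, mode='edge')
    h, w = img.shape
    out = np.zeros(img.shape, dtype=np.int32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            nb = pad[r + dy:r + dy + h, r + dx:r + dx + w]
            out += (nb < img)
    return out

def census_transform(img, win=5):
    """Census transform: each pixel becomes a bit string recording which
    neighbours are less than the centre; matching uses Hamming distance."""
    r = win // 2
    pad = np.pad(img, r, mode='edge')
    h, w = img.shape
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            nb = pad[r + dy:r + dy + h, r + dx:r + dx + w]
            out = (out << np.uint64(1)) | (nb < img).astype(np.uint64)
    return out
```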
Abstract:
Bioelectrical impedance analysis (BIA) is a method of body composition analysis first investigated in 1962 which has recently received much attention by a number of research groups. The reasons for this recent interest are its advantages (viz. inexpensive, non-invasive and portable) and also the increasing interest in the diagnostic value of body composition analysis. The concept utilised by BIA to predict body water volumes is the proportional relationship for a simple cylindrical conductor (volume ∝ length²/resistance), which allows the volume to be predicted from the measured resistance and length. Most of the research to date has measured the body's resistance to the passage of a 50 kHz AC current to predict total body water (TBW). Several research groups have investigated the application of AC currents at lower frequencies (e.g. 5 kHz) to predict extracellular water (ECW). However all research to date using BIA to predict body water volumes has used the impedance measured at a discrete frequency or frequencies. This thesis investigates the variation of impedance and phase of biological systems over a range of frequencies and describes the development of a swept frequency bioimpedance meter which measures impedance and phase at 496 frequencies ranging from 4 kHz to 1 MHz. The impedance of any biological system varies with the frequency of the applied current. The graph of reactance vs resistance yields a circular arc with the resistance decreasing with increasing frequency and reactance increasing from zero to a maximum then decreasing to zero. Computer programs were written to analyse the measured impedance spectrum and determine the impedance, Zc, at the characteristic frequency (the frequency at which the reactance is a maximum). The fitted locus of the measured data was extrapolated to determine the resistance, Ro, at zero frequency; a value that cannot be measured directly using surface electrodes. The explanation of the theoretical basis for selecting these impedance values (Zc and Ro) to predict TBW and ECW is presented. Studies were conducted on a group of normal healthy animals (n = 42), in which TBW and ECW were determined by the gold standard of isotope dilution. The prediction quotients L²/Zc and L²/Ro (L = length) yielded standard errors of 4.2% and 3.2% respectively, and were found to be significantly better than previously reported, empirically determined prediction quotients derived from measurements at a single frequency. The prediction equations established in this group of normal healthy animals were applied to a group of animals with abnormally low fluid levels (n = 20), and also to a group with an abnormal balance of extra-cellular to intracellular fluids (n = 20). In both cases the equations using L²/Zc and L²/Ro accurately and precisely predicted TBW and ECW. This demonstrated that the technique developed using multiple frequency bioelectrical impedance analysis (MFBIA) can accurately predict both TBW and ECW in both normal and abnormal animals (with standard errors of the estimate of 6% and 3% for TBW and ECW respectively). Isotope dilution techniques were used to determine TBW and ECW in a group of 60 healthy human subjects (male and female, aged between 18 and 45). Whole body impedance measurements were recorded on each subject using the MFBIA technique and the correlations between body water volumes (TBW and ECW) and height²/impedance (for all measured frequencies) were compared.
The prediction quotients H²/Zc and H²/Ro (H = height) again yielded the highest correlation with TBW and ECW respectively, with corresponding standard errors of 5.2% and 10%. The values of the correlation coefficients obtained in this study were very similar to those recently reported by others. It was also observed that in healthy human subjects the impedance measured at virtually any frequency yielded correlations not significantly different from those obtained from the MFBIA quotients. This phenomenon has been reported by other research groups and emphasises the need to validate the technique by investigating its application in one or more groups with abnormalities in fluid levels. The clinical application of MFBIA was trialled and its capability of detecting lymphoedema (an excess of extracellular fluid) was investigated. The MFBIA technique was demonstrated to be significantly more sensitive (P < .05) in detecting lymphoedema than the current technique of circumferential measurements. MFBIA was also shown to provide valuable information describing the changes in the quantity of muscle mass of the patient during the course of the treatment. The determination of body composition (viz. TBW and ECW) by MFBIA has been shown to be a significant improvement on previous bioelectrical impedance techniques. The merit of the MFBIA technique is evidenced in its accurate, precise and valid application in animal groups with a wide variation in body fluid volumes and balances. The multiple frequency bioelectrical impedance analysis technique developed in this study provides accurate and precise estimates of body composition (viz. TBW and ECW), regardless of the individual's state of health.
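As an illustration of the swept-frequency analysis described above, the sketch below fits a circular arc to measured (resistance, reactance) points by algebraic least squares and recovers the extrapolated zero-frequency resistance Ro and the characteristic impedance Zc. This is a generic circle-fitting approach, not necessarily the exact procedure implemented in the thesis, and the commented prediction coefficients are hypothetical.

```python
import numpy as np

def cole_arc_fit(resistance, reactance):
    """Fit a circular arc (Cole plot) to (R, X) points and recover
    Ro (zero-frequency resistance, extrapolated), Rinf, and Zc (impedance
    at the characteristic frequency, where reactance is maximal)."""
    R = np.asarray(resistance, dtype=float)
    X = np.asarray(reactance, dtype=float)
    # Circle: R^2 + X^2 + D*R + E*X + F = 0, solved for D, E, F by least squares.
    A = np.column_stack([R, X, np.ones_like(R)])
    b = -(R**2 + X**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cr, cx = -D / 2.0, -E / 2.0                      # circle centre
    radius = np.sqrt(cr**2 + cx**2 - F)
    # Intersections of the circle with the X = 0 axis give Rinf and Ro.
    half_chord = np.sqrt(max(radius**2 - cx**2, 0.0))
    r_inf, r_0 = cr - half_chord, cr + half_chord
    # Characteristic point: top of the arc, where reactance is maximal.
    z_c = np.hypot(cr, cx + radius)
    return r_0, r_inf, z_c

# Prediction quotient usage (coefficients hypothetical, for illustration only):
# tbw_estimate = a * length_cm**2 / r_0 + b
```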
Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration. The models needed to be calibrated using data acquired at these locations, and their outputs validated with data acquired at these sites, so that the outputs would be truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than the empiricism of the macroscopic models currently used. The models also needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible in this single study to produce a stand-alone model applicable to all facilities and locations; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models to a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models. Different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled. Some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations, merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams. On-ramp and total upstream flow are required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections.
Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor stream delay and corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows. Pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delay, which reaches infinity at capacity. Minor stream delays were shown to be less when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and less still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration. From these, practical capacities can be estimated. Further calibration is required of traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing. A general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and provide further insight into the nature of operations.
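For readers unfamiliar with the gap-acceptance calculation underlying the merge model, the sketch below evaluates the standard potential-capacity expression for a minor stream facing Cowan's M3 major-stream headways (after Troutbeck). The default parameter values are illustrative rather than the calibrated values from this study.

```python
import math

def m3_merge_capacity(q_major, alpha, delta=1.0, t_c=3.0, t_f=1.1):
    """Potential merge (minor-stream) capacity, in veh/s, under Cowan's M3
    major-stream headway model.
      q_major : major-stream flow (veh/s); only valid while delta * q_major < 1
      alpha   : proportion of free headways (headways greater than delta)
      delta   : minimum headway (s)
      t_c     : critical gap (s)
      t_f     : follow-on time (s)
    """
    lam = alpha * q_major / (1.0 - delta * q_major)   # decay rate of free headways
    return (q_major * alpha * math.exp(-lam * (t_c - delta))
            / (1.0 - math.exp(-lam * t_f)))

# e.g. a kerb-lane flow of 1200 veh/h (0.333 veh/s) with 70% free headways:
# capacity_veh_per_hr = 3600 * m3_merge_capacity(1200 / 3600, alpha=0.7)
```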
Abstract:
Historically, cities as urban forms have been critical to human development. In 1950, 30% of the world's population lived in major cities. By the year 2000 this had increased to 47%, with further growth to 50% expected by the end of 2007. Projections suggest that city-based densities will edge towards 60% of the global total by 2030. Such rapidly increasing urbanisation, in both developed and developing economies, challenges options for governance and planning, as well as crisis and disaster management. A common issue for the livability of cities as urban forms through time has been access to a clean and reliable water supply. This issue is particularly important in countries with arid ecosystems, such as Australia. This paper examines the preliminary aspects, and theoretical basis, of a study into the resilience of the (potable) water supply system in Southeast Queensland (SEQ), an area with one of the most significant urban growth rates in Australia. The first stage will be to assess needs and requirements for gauging the resilience characteristics of a generic water supply system, consisting of supply catchment, storage reservoir/s and treatment plant/s. The second stage will extend the analysis to examine the resilience of the SEQ water supply system, incorporating specific characteristics of the SEQ water grid made increasingly vulnerable by climate variability and its projected impacts on rainfall characteristics, and compounded by increasing demands due to population growth. Longer-term findings will inform decision making based on the application of the concept of resilience to designing and operating stand-alone and networked water supply infrastructure systems, as well as its application to water resource systems more generally.
Abstract:
There are several ways that the Commissioner of Taxation may indirectly obtain priority over unsecured creditors. This is contrary to the principle of pari passu, a principle endorsed by the 1988 Harmer Report as a fundamental objective of the law of insolvency. As the law and practice of Australia's taxation regime evolve, the law is being drafted in a manner that is inconsistent with the principle of pari passu. The natural consequence of this development is that it places at risk the capacity of corporate and bankruptcy laws to coexist and cooperate with taxation laws. This article posits that undermining the consistency of Commonwealth legislative objectives is undesirable. The authors suggest that one means of addressing the inconsistency is to examine whether there is a clearly aligned theoretical basis for the development of these areas of law and the extent to which that alignment addresses these inconsistencies. This forms the basis for the recommendations made to address such inconsistencies, using statutory priorities as an exemplar.
Abstract:
Boundaries are an important field of study because they mediate almost every aspect of organizational life. They are becoming increasingly more important as organizations change more frequently and yet, despite the endemic use of the boundary metaphor in common organizational parlance, they are poorly understood. Organizational boundaries are under-theorized and researchers in related fields often simply assume their existence, without defining them. The literature on organizational boundaries is fragmented with no unifying theoretical basis. As a result, when it is recognized that an organizational boundary is "dysfunctional", there is little recourse to models on which to base remediating action. This research sets out to develop just such a theoretical model and is guided by the general question: "What is the nature of organizational boundaries?" It is argued that organizational boundaries can be conceptualised through elements of both social structure and of social process. Elements of structure include objects, coupling, properties and identity. Social processes include objectification, identification, interaction and emergence. All of these elements are integrated by a core category, or basic social process, called boundary weaving. An organizational boundary is a complex system of objects and emergent properties that are woven together by people as they interact together, objectifying the world around them, identifying with these objects and creating couplings of varying strength and polarity as well as their own fragmented identity. Organizational boundaries are characterised by the multiplicity of interconnections, a particular domain of objects, varying levels of embodiment and patterns of interaction. The theory developed in this research emerged from an exploratory, qualitative research design employing grounded theory methodology. The field data was collected from the training headquarters of the New Zealand Army using semi-structured interviews and follow up observations. The unit of analysis is an organizational boundary. Only one research context was used because of the richness and multiplicity of organizational boundaries that were present. The model arose, grounded in the data collected, through a process of theoretical memoing and constant comparative analysis. Academic literature was used as a source of data to aid theory development and the saturation of some central categories. The final theory is classified as middle range, being substantive rather than formal, and is generalizable across medium to large organizations in low-context societies. The main limitation of the research arose from the breadth of the research with multiple lines of inquiry spanning several academic disciplines, with some relevant areas such as the role of identity and complexity being addressed at a necessarily high level. The organizational boundary theory developed by this research replaces the typology approaches typical of previous theory on organizational boundaries and reconceptualises the nature of groups in organizations as well as the role of "boundary spanners". It also has implications for any theory that relies on the concept of boundaries, such as general systems theory. The main contribution of this research is the development of a holistic model of organizational boundaries, including an explanation of the multiplicity of boundaries: no organization has a single definable boundary.
A significant aspect of this contribution is the integration of aspects of complexity theory and identity theory to explain the emergence of higher-order properties of organizational boundaries and of organizational identity. The core category of "boundary weaving" is a powerful new metaphor that significantly reconceptualises the way organizational boundaries may be understood in organizations. It invokes secondary metaphors such as the weaving of an organization's "boundary fabric", and provides managers with other metaphorical perspectives, such as the management of boundary friction, boundary tension, boundary permeability and boundary stability. Opportunities for future research reside in formalising and testing the theory as well as developing analytical tools that would enable managers in organizations to apply the theory in practice.