939 results for Operational speed
Abstract:
Variable Speed Limits (VSL) are an Intelligent Transportation Systems (ITS) control tool that can enhance traffic safety and has the potential to contribute to traffic efficiency. Queensland's motorways experience a large volume of commuter traffic in peak periods, leading to heavy recurrent congestion and a high frequency of incidents. Consequently, Queensland's Department of Transport and Main Roads has considered deploying VSL to improve safety and efficiency. This paper identifies three types of VSL and three conditions for activating VSL on Queensland motorways: high flow, queuing and adverse weather. The design objectives and methodology for each condition are analysed, and micro-simulation results are presented to demonstrate the effectiveness of VSL.
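The abstract does not publish the activation logic itself; as a minimal sketch, the rule below assumes hypothetical flow and speed thresholds for the three activation conditions named above (high flow, queuing, adverse weather) and is illustrative only:

```python
# Hypothetical VSL activation rule for one motorway segment; thresholds and
# displayed limits are invented for illustration, not taken from the paper.

def select_speed_limit(flow_veh_h: float, mean_speed_kmh: float,
                       adverse_weather: bool, base_limit: int = 100) -> int:
    """Return the speed limit (km/h) to display on the variable sign."""
    if adverse_weather:                 # adverse-weather condition
        return min(base_limit, 80)
    if mean_speed_kmh < 40:             # queuing condition: protect the queue tail
        return 40
    if flow_veh_h > 1800:               # high-flow condition: stabilise traffic
        return 80
    return base_limit                   # no activation


print(select_speed_limit(flow_veh_h=2000, mean_speed_kmh=75, adverse_weather=False))  # 80
```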
Abstract:
CCTV and surveillance networks are increasingly being used for operational as well as security tasks. One emerging area of technology that lends itself to operational analytics is soft biometrics. Soft biometrics can be used to describe a person and detect them throughout a sparse multi-camera network. This enables tasks such as determining the time taken to get from point to point, and the paths taken through an environment, by detecting and matching people across disjoint views. However, in a busy environment where there are hundreds if not thousands of people, such as an airport, attempting to monitor everyone is highly unrealistic. In this paper we propose an average soft biometric that can be used to identify people who look distinct, and who are thus suitable for monitoring through a large, sparse camera network. We demonstrate how an average soft biometric can be used to identify unique people and calculate operational measures such as the time taken to travel from point to point.
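As a rough illustration of the idea, the sketch below treats each person as a vector of soft-biometric traits and scores distinctiveness as distance from the population mean; the feature layout and selection rule are assumptions, not the paper's published method:

```python
# Toy "average soft biometric": people whose descriptors sit far from the
# population mean are flagged as distinct, hence easier to re-identify
# across a sparse camera network. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
descriptors = rng.random((500, 12))     # 500 people x 12 soft-biometric traits

mean_descriptor = descriptors.mean(axis=0)                # the "average" person
distinctiveness = np.linalg.norm(descriptors - mean_descriptor, axis=1)

top_k = 10
distinct_ids = np.argsort(distinctiveness)[-top_k:]       # most distinct people
print(distinct_ids)                     # candidates for point-to-point timing
```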
Abstract:
Between 2001 and 2005, the US airline industry faced financial turmoil. At the same time, the European airline industry entered a period of substantive deregulation. Together, these events created opportunities for low-cost carriers to become more competitive in the market. To help assess airline performance in their aftermath, this paper provides new evidence of technical efficiency for 42 national and international airlines in 2006 using the data envelopment analysis (DEA) bootstrap approach first proposed by Simar and Wilson (J Econ, 136:31-64, 2007). In the first stage, technical efficiency scores are estimated using a bootstrap DEA model. In the second stage, a truncated regression is employed to quantify the economic drivers underlying measured technical efficiency. The results highlight the key role played by non-discretionary inputs in measures of airline technical efficiency.
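For readers unfamiliar with the first stage, the sketch below solves the standard input-oriented, constant-returns DEA programme for each firm; the Simar and Wilson (2007) procedure additionally wraps these scores in a smoothed bootstrap and a second-stage truncated regression, both omitted here, and the data are invented:

```python
# First-stage DEA only: input-oriented CRS efficiency via linear programming.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    """Efficiency of firm o: minimise theta subject to peer domination."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # decision vector [theta, lambdas]
    A_in = np.c_[-X[o], X.T]                    # sum_j lam_j*x_ij - theta*x_io <= 0
    A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]   # -sum_j lam_j*y_rj <= -y_ro
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])   # 3 airlines x 2 inputs
Y = np.array([[1.0], [1.0], [1.0]])                  # 1 output each
print([round(dea_efficiency(X, Y, o), 3) for o in range(3)])  # e.g. [1.0, 1.0, 0.667]
```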
Abstract:
The knowledge base is one of the emerging concepts in the Knowledge Management area. As there exists no agreed-upon standard definition of a knowledge base, this paper defines one in terms of our research on Enterprise Systems (ES). The knowledge base is defined with reference to Learning Network Theory. Using this theoretical framework, we investigate the roles of management and operational staff in organisations and how their interactions can create a better ES knowledge base that contributes to ES success. We focus on the post-implementation phase of ES as part of the ES lifecycle. Our findings will facilitate future research directions and contribute to a better understanding of how the knowledge base can be integrated and how this integration leads to Enterprise System success.
Abstract:
The presence of High Speed Rail (HSR) systems influences the market shares of road and air transport, and the development of the cities and regions they serve. With the deployment of HSR infrastructure, changes in accessibility have occurred, and these changes have led researchers to investigate their effects on derived economic and spatial variables. Contention exists when managing the trade-off between efficiency and the placement of access points, which are usually hundreds of kilometres apart. In short, it is argued that intermediate cities bypassed by HSR services suffer a decline in their accessibility and developmental opportunities. The present chapter will analyse possible impacts of the presence of HSR infrastructure, considering in particular small and medium agglomerations in the vicinity of HSR corridors that are not always served by HSR stations. A methodology is thus developed to quantify accessibility benefits and their distribution. These benefits are investigated in relation to different rail transit strategies for integrating HSR infrastructure where an HSR station cannot be positioned. These strategies are selected principally by the type of service offered: (i) cadenced, (ii) express, (iii) frequent or (iv) non-stopping. Furthermore, to ground the theoretical approach linking accessibility and competitiveness, a case study in the North-Eastern Italian regions is used to apply the accessibility distribution patterns between the HSR infrastructure and the selected strategies. Results indicate that benefits derive from well-informed decisions on HSR station positioning and an appropriate blend of complementary services across the whole region to interface with the HSR infrastructure. The results are significant for countries in Europe and worldwide, not only for investing in HSR infrastructure, but above all for building territorial cohesion while seeking international recognition for developing successful new technologies and systems.
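One common way to quantify such benefits is a potential-accessibility index with distance decay; the sketch below uses invented opportunities, travel times and decay parameter purely to illustrate how two feeder strategies to an HSR corridor could be compared:

```python
# Potential accessibility A_i = sum_j O_j * exp(-beta * t_ij): an intermediate
# city scores higher when large destinations are reachable quickly. All numbers
# here are hypothetical, not the chapter's case-study data.
import math

def potential_accessibility(opportunities, travel_times_min, beta=0.02):
    return sum(o * math.exp(-beta * t)
               for o, t in zip(opportunities, travel_times_min))

jobs = [50_000, 120_000, 30_000]        # opportunities at three destinations

# The same bypassed city under two rail strategies to reach the HSR corridor:
print(potential_accessibility(jobs, [35, 60, 90]))   # frequent stopping service
print(potential_accessibility(jobs, [25, 45, 80]))   # express connection
```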
Abstract:
Continuous monitoring of diesel engine performance is critical for early detection of fault developments in the engine before they materialize into a functional failure. Instantaneous crank angular speed (IAS) analysis is one of the few non-intrusive condition monitoring techniques that can be utilized for such tasks. In this experimental study, IAS analysis was employed to estimate the loading condition of a 4-stroke 4-cylinder diesel engine under laboratory conditions. It was shown that IAS analysis can provide useful information about engine speed variation caused by the changing piston momentum and crankshaft acceleration during the engine combustion process. It was also found that the major order component of the IAS spectrum, directly associated with the engine firing frequency (at twice the mean shaft rotation speed), can be utilized to estimate the engine loading condition regardless of whether the engine is operating under normal running conditions or with a simulated faulty injector. The amplitude of this order component follows a clear exponential curve as the loading condition changes. A mathematical relationship was established for estimating the engine power output from the amplitude of the major order component of the measured IAS spectrum.
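The abstract does not give the fitted relationship, but the pipeline it implies can be sketched: take the spectrum of the IAS signal, read off the order component at twice the shaft speed, and map its amplitude through an exponential calibration. Everything below (signal, sample rate, fit constants) is synthetic:

```python
# Extract the firing-frequency order (2x shaft speed for a 4-stroke 4-cylinder
# engine) from a synthetic IAS signal, then apply a hypothetical exponential
# amplitude-to-load calibration.
import numpy as np

fs = 5000.0                              # sample rate, Hz
shaft_hz = 25.0                          # mean shaft speed (1500 rpm)
t = np.arange(0, 2.0, 1 / fs)

# Synthetic IAS (rad/s): mean speed plus a fluctuation at the firing order.
ias = 2 * np.pi * shaft_hz + 3.0 * np.sin(2 * np.pi * (2 * shaft_hz) * t)

amps = np.abs(np.fft.rfft(ias)) * 2 / len(ias)
freqs = np.fft.rfftfreq(len(ias), 1 / fs)
order_amp = amps[np.argmin(np.abs(freqs - 2 * shaft_hz))]

a, b = 3.5, 0.04                         # invented calibration constants
load_pct = -np.log(order_amp / a) / b    # inverse of amp = a * exp(-b * load)
print(f"order amplitude: {order_amp:.2f} rad/s, estimated load: {load_pct:.1f}%")
```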
Abstract:
Research on expertise, talent identification and development has tended to be mono-disciplinary, typically adopting geno-centric or environmentalist positions, with an overriding focus on operational issues. In this thesis, the validity of dualist positions on sport expertise is evaluated. It is argued that, to advance understanding of expertise and talent development, a shift towards a multidisciplinary and integrative science focus is necessary, along with the development of a comprehensive multidisciplinary theoretical rationale. Dynamical systems theory is utilised as the multidisciplinary theoretical rationale for the succession of studies, capturing how multiple interacting constraints can shape the development of expert performers. Phase I of the research examines the experiential knowledge of coaches and players on the development of fast bowling talent using a qualitative research methodology. It provides insights into the developmental histories of expert fast bowlers, as well as coaching philosophies on the constraints of fast bowling expertise. Results suggest talent development programmes should eschew the notion of common optimal performance models and emphasise the individual nature of pathways to expertise. Coaching and talent development programmes should identify the range of interacting constraints that impinge on the performance potential of individual athletes, rather than evaluating current performance on physical tests referenced to group norms. Phase II of this research comprises three further studies that investigate several of the key components of fast bowling expertise, talent identification and development identified in Phase I. This multidisciplinary programme of work involves a comprehensive analysis of fast bowling performance in a cross-section of the Cricket Australia high-performance pathways, spanning the junior, emerging and national elite fast bowling squads. Briefly, differences were found in the trunk kinematics associated with the generation of ball speed across the three groups. These differences in release mechanics indicated functional adaptations in movement patterns as bowlers’ physical and anatomical characteristics changed during maturation. After the generation of ball speed, the ability to produce a range of delivery types was highlighted in the qualitative phase as the second key component of expertise. The ability of athletes to produce consistent results on different surfaces and in different environments has drawn attention to the challenge of measuring consistency and flexibility in skill assessments. Examination of fast bowlers in Phase II demonstrated that national bowlers can adjust the accuracy of subsequent deliveries during performance of a cricket bowling skills test, and can perform a range of delivery types with increased accuracy and consistency. Finally, variability in selected delivery-stride ground reaction force components in fast bowling revealed the degenerate nature of this complex multi-articular skill, where the same performance outcome can be achieved with unique movement strategies. By utilising qualitative and quantitative methodologies to examine fast bowling expertise, the importance of degeneracy and adaptability in fast bowling has been highlighted, alongside learning design that promotes dynamic learning environments.
Abstract:
There is continuing debate regarding the psychometric properties of self-report measures of behaviour, particularly in road safety research. Practical considerations often preclude the use of objective assessments, leading to reliance on self-report measures. Acknowledging that such measures are likely to remain commonly used, this pilot project sought not to argue whether self-report measures should continue to be used, but to explore factors associated with how individuals respond to self-reported speeding measures. This paper reports preliminary findings from a qualitative study (focus groups and in-depth interviews) conducted with licensed drivers to explore the operational utility of self-reported speeding behaviour measures. Drawing upon concepts from the Theory of Planned Behaviour (TPB; Ajzen, 1991) and Agency Theory (Bandura, 2001), we identified four dimensions of self-reported speeding: timeframe, speed zone, degree over the speed limit, and overall frequency of the behaviour, and examined participants’ perceptions of the operational utility of these factors. Issues related to comprehensibility, perceived accuracy, response format and layout were also explored. Results indicated that: heterogeneity in the timeframe of behavioural reflections suggests a need to provide a set timeframe for participants to consider when thinking about their previous speeding behaviour; response categories and formats should be carefully considered to ensure the most accurate representations of the frequency and degree of speeding are captured; “low-level” speeding needs to be clearly articulated on self-report measures; and self-reports of speeding behaviour are typically context-free unless a context is stipulated in the question. Limitations and directions for further research are discussed.
Abstract:
Introduction: An observer looking sideways from a moving vehicle while wearing a neutral density filter over one eye can have a distorted perception of speed, known as the Enright phenomenon. The purpose of this study was to determine how the Enright phenomenon influences driving behaviour. Methods: A geometric model of the Enright phenomenon was developed. Ten young, visually normal participants (mean age = 25.4 years) were tested on a straight section of a closed driving circuit and instructed to look out of the right side of the vehicle and drive at either 40 km/h or 60 km/h under the following binocular viewing conditions: a 0.9 ND filter over the left (leading) eye; a 0.9 ND filter over the right (trailing) eye; 0.9 ND filters over both eyes; and no filters over either eye. The order of filter conditions was randomised and the speed driven was recorded for each condition. Results: Speed judgments did not differ significantly between the two baseline conditions (no filters and both eyes filtered) for either speed tested. For the baseline conditions, when subjects were asked to drive at 60 km/h they matched this speed well (61 ± 10.2 km/h), but when asked to drive at 40 km/h they drove significantly faster than requested (51.6 ± 9.4 km/h). Subjects significantly exceeded baseline speeds by 8.7 ± 5.0 km/h when the trailing eye was filtered, and travelled slower than baseline speeds by 3.7 ± 4.6 km/h when the leading eye was filtered. Conclusions: This is the first quantitative study demonstrating how the Enright effect can influence perceptions of driving speed; it shows that monocular filtering of an eye can significantly affect driving speeds, albeit to a lesser extent than predicted by geometric models of the phenomenon.
Abstract:
Mandatory data breach notification laws are a novel and potentially important legal instrument for the organisational protection of personal information. These laws require organisations that have suffered a data breach involving personal information to notify the persons who may be affected, and potentially government authorities, about the breach. The Australian Law Reform Commission (ALRC) has proposed the creation of a mandatory data breach notification scheme, implemented via amendments to the Privacy Act 1988 (Cth). However, the conceptual differences between data breach notification law and information privacy law are such that it is questionable whether a data breach notification scheme can be implemented solely via an information privacy law. Accordingly, this thesis by publications investigated, through six journal articles, the extent to which data breach notification law is conceptually and operationally compatible with information privacy law. The assessment of compatibility began with the identification of key issues related to data breach notification law. The first article, Stakeholder Perspectives Regarding the Mandatory Notification of Australian Data Breaches, began this stage of the research, which concluded in the second article, The Mandatory Notification of Data Breaches: Issues Arising for Australian and EU Legal Developments (‘Mandatory Notification’). A key issue that emerged was whether data breach notification is itself an information privacy issue. This notion guided the remaining research and focused attention on the next stage: an examination of the conceptual and operational foundations of both laws. The second article, Mandatory Notification, and the third article, Encryption Safe Harbours and Data Breach Notification Laws, did so from the perspective of data breach notification law. The fourth article, The Conceptual Basis of Personal Information in Australian Privacy Law, and the fifth article, Privacy Invasive Geo-Mashups: Privacy 2.0 and the Limits of First Generation Information Privacy Laws, did so for information privacy law. The final article, Contextualizing the Tensions and Weaknesses of Information Privacy and Data Breach Notification Laws, synthesised the previous research findings within the framework of contextualisation principally developed by Nissenbaum. The examination of conceptual and operational foundations revealed tensions between the two laws and weaknesses shared by both. First, the distinction between sectoral and comprehensive information privacy legal regimes is important, as it shaped the development of US data breach notification laws and their subsequently implementable scope in other jurisdictions. Second, the sectoral versus comprehensive distinction produced different emphases in relation to data breach notification, leading to different forms of remedy; the prime example is the distinction between the market-based initiatives found in US data breach notification laws and the rights-based protections found in the EU and Australia. Third, both laws are predicated on the regulation of personal information exchange processes, even though they regulate this process from different perspectives, namely a context-independent or context-dependent approach. Fourth, both laws have limited notions of harm that are further constrained by restrictive accountability frameworks.
The findings of the research suggest that data breach notification is more compatible with information privacy law in some respects than in others. Apparent compatibilities clearly exist, as both laws have an interest in the protection of personal information. However, this thesis revealed that the ostensible similarities are founded on some significant differences. Data breach notification law is either a comprehensive facet of a sectoral approach or a sectoral adjunct to a comprehensive regime. Yet whilst there are fundamental differences between the two laws, they are not so great as to make them incompatible with each other. The similarities between both laws are sufficient to forge compatibilities, but it is likely that the distinctions between them will produce anomalies, particularly if both laws are applied from a perspective that negates contextualisation.
Abstract:
Many modern business environments employ software to automate the delivery of workflows, whereas workflow design and generation remains a laborious technical task for domain specialists. Several different approaches have been proposed for deriving workflow models. Some rely on process data mining, whereas others derive workflow models from operational structures, domain-specific knowledge or workflow model compositions from knowledge bases. Many approaches draw on principles from automatic planning, but are conceptual in nature and lack mathematical justification. In this paper we present a mathematical framework for deducing tasks in workflow models from plans in mechanistic or strongly controlled work environments, with a focus on automatic plan generation. In addition, we prove that a composition operator is associative, permitting crisp hierarchical task compositions for workflow models through a set of mathematical deduction rules. The result is a logical framework that can be used to prove tasks in workflow hierarchies from operational information about work processes and machine configurations in controlled or mechanistic work environments.
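As a toy illustration of why associativity matters here, sequential composition of tasks has the same effect however it is bracketed, which is what licenses unambiguous hierarchical grouping; the Task representation below is invented, and the paper's operator is defined over plans rather than Python functions:

```python
# Sequential task composition: (t1 ; t2) ; t3 and t1 ; (t2 ; t3) have the
# same effect on any state, so hierarchical groupings are interchangeable.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Task:
    name: str
    apply: Callable[[dict], dict]

def compose(t1: Task, t2: Task) -> Task:
    """Run t1, then t2 on the resulting state."""
    return Task(f"({t1.name};{t2.name})", lambda s: t2.apply(t1.apply(s)))

load = Task("load", lambda s: {**s, "loaded": True})
cut = Task("cut", lambda s: {**s, "cut": True})
pack = Task("pack", lambda s: {**s, "packed": True})

left = compose(compose(load, cut), pack)
right = compose(load, compose(cut, pack))
assert left.apply({}) == right.apply({})    # associativity of the effect
print(left.name, "==", right.name, "up to bracketing")
```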
Abstract:
As an international norm, the Responsibility to Protect (R2P) has gained substantial influence and institutional presence—and created no small controversy—in the ten years since its first conceptualisation. By contrast, the Protection of Civilians in Armed Conflict (PoC) has a longer pedigree and enjoys a less contested reputation. Yet UN Security Council action in Libya in 2011 has thrown into sharp relief the relationship between the two. UN Security Council Resolutions 1970 and 1973 follow exactly the process envisaged by R2P in response to imminent atrocity crimes, yet the operative paragraphs of the resolutions themselves invoke only PoC. This article argues that, while the agendas of PoC and R2P converge with respect to Security Council action in cases like Libya, outside this narrow context it is important to keep the two norms distinct. Peacekeepers, humanitarian actors, international lawyers, individual states and regional organisations are required to act differently with respect to the separate agendas and contexts covered by R2P and PoC. While overlap between the two does occur in highly visible cases like Libya, neither R2P nor PoC collapses normatively, institutionally or operationally into the other.
Abstract:
The design-build (DB) system has been demonstrated to be an effective delivery method and has gained popularity worldwide. However, it is observed that a number of operational variations of the DB system have emerged over the last decade to cater for different clients’ requirements. After clients decide to procure their projects through the DB system, they still have to choose an appropriate configuration to deliver those projects optimally. However, there is little research on the selection of DB operational variations, one of the main reasons being the lack of evaluation criteria for determining the appropriateness of each operational variation. To obtain such criteria, a three-round Delphi survey was conducted with 20 construction experts in the People’s Republic of China (PRC). The seven top selection criteria identified are: (1) availability of competent design-builders; (2) client’s capabilities; (3) project complexity; (4) client’s control of the project; (5) early commencement and short duration; (6) reduced responsibility or involvement; and (7) clearly defined end user’s requirements. Agreement on these selection criteria was found to be statistically significant. These findings may furnish various stakeholders, DB clients in particular, with better insight to understand and compare the different operational variations of the DB system.
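The abstract does not name the agreement statistic; Kendall's coefficient of concordance (W) is the conventional test for Delphi rankings, so the sketch below, with simulated expert rankings, should be read as an assumption about the method:

```python
# Kendall's W for m experts ranking n criteria: W = 12*S / (m^2 * (n^3 - n)),
# where S is the squared deviation of rank sums from their mean. W near 1
# means strong agreement. Rankings here are simulated, not the survey data.
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

rng = np.random.default_rng(1)
base = np.arange(1, 8, dtype=float)               # a shared "true" ordering
noisy = base + rng.normal(0, 1.0, size=(20, 7))   # 20 experts, 7 criteria
ranks = noisy.argsort(axis=1).argsort(axis=1) + 1
print(round(kendalls_w(ranks), 2))                # high W => strong agreement
```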
Abstract:
Many academic researchers have conducted studies on the selection of the design-build (DB) delivery method; however, there are few studies on the selection of DB operational variations, which poses challenges to many clients. The selection of a DB operational variation is a multi-criteria decision-making process that requires clients to objectively evaluate the performance of each DB operational variation with reference to the selection criteria. This evaluation process is often characterised by subjectivity and uncertainty. To resolve this deficiency, the current investigation aimed to establish a fuzzy multi-criteria decision-making (FMCDM) model for selecting the most suitable DB operational variation. A three-round Delphi questionnaire survey was conducted to identify the selection criteria and their relative importance. A fuzzy set theory approach, namely the modified horizontal approach with the bisector error method, was applied to establish the fuzzy membership functions, which enable clients to perform quantitative calculations on the performance of each DB operational variation. The FMCDM model was developed using the weighted mean method to aggregate the overall performance of DB operational variations with regard to the selection criteria. The proposed model enables clients to perform quantitative calculations in a fuzzy decision-making environment and provides a useful tool for coping with different project attributes.
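A hedged sketch of the aggregation step: represent each criterion score as a triangular fuzzy number (TFN), take the weighted mean, and defuzzify by centroid. The weights, scores and TFN representation are illustrative; the paper builds its membership functions with the modified horizontal approach and bisector error method rather than assuming triangles:

```python
# Fuzzy weighted-mean aggregation over TFNs [low, mode, high], then centroid
# defuzzification to rank DB operational variations. All numbers are invented.
import numpy as np

def weighted_mean(tfns: np.ndarray, weights: np.ndarray) -> np.ndarray:
    w = weights / weights.sum()                 # normalise criterion weights
    return (tfns * w[:, None]).sum(axis=0)      # TFN arithmetic is linear here

def centroid(tfn: np.ndarray) -> float:
    return float(tfn.sum() / 3)                 # centroid of a triangular number

# One DB variation scored on three criteria (rows are TFNs on a 0-10 scale).
scores = np.array([[5.0, 7.0, 9.0],    # availability of competent design-builders
                   [3.0, 5.0, 7.0],    # client's capabilities
                   [7.0, 9.0, 10.0]])  # fit to project complexity
weights = np.array([0.5, 0.3, 0.2])

overall = weighted_mean(scores, weights)
print(overall, round(centroid(overall), 2))     # compare variations on this score
```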