924 results for specification


Relevance:

10.00%

Publisher:

Abstract:

Motorcycles are overrepresented in road traffic crashes and are particularly vulnerable at signalized intersections. The objective of this study is to identify causal factors affecting motorcycle crashes at both four-legged and T signalized intersections. Treating the data as time-series cross-section panels, this study explores different hierarchical Poisson models and finds that the model allowing an autoregressive lag-1 (AR(1)) dependence specification in the error term is the most suitable. Results show that the number of lanes at four-legged signalized intersections significantly increases motorcycle crashes, largely because of the higher exposure resulting from greater motorcycle accumulation at the stop line. Furthermore, the presence of a wide median and an uncontrolled left-turn lane at the major roadways of four-legged intersections exacerbates this potential hazard. For T signalized intersections, the presence of an exclusive right-turn lane at both major and minor roadways and an uncontrolled left-turn lane at major roadways increases motorcycle crashes. Motorcycle crashes also increase on high-speed roadways, because motorcyclists are more vulnerable and less likely to react in time during conflicts. The presence of red light cameras reduces motorcycle crashes significantly at both four-legged and T intersections: with a red-light camera present, motorcycles are less exposed to conflicts because they are observed to be more disciplined in queuing at the stop line and less likely to jump-start at the onset of green.
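
The preferred specification is named only in words; as a minimal sketch (assuming a log link and normally distributed AR(1) errors — the abstract does not give the exact hierarchy, covariates or priors), such a model is typically written:

```latex
% Illustrative hierarchical Poisson panel model with an AR(1) error term.
% y_{it}: crash count at intersection i in period t; x_{it}: covariates.
\begin{aligned}
  y_{it} &\sim \operatorname{Poisson}(\lambda_{it}) \\
  \log \lambda_{it} &= \beta_0 + \mathbf{x}_{it}^{\top}\boldsymbol{\beta} + \varepsilon_{it} \\
  \varepsilon_{it} &= \rho\,\varepsilon_{i,t-1} + u_{it}, \qquad u_{it} \sim \mathcal{N}(0, \sigma_u^{2})
\end{aligned}
```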

Relevance:

10.00%

Publisher:

Abstract:

Singapore crash statistics show that motorcycles are involved in about 54% of crashes at intersections. Moreover, about 46% of fatal and 67% of injury motorcycle crashes occur at signalized intersections. The objective of this study is to identify causal factors affecting motorcycle crashes at both four-legged and three-legged signalized intersections. Treating the data as time-series cross-section panels, this study explores different hierarchical Poisson models and finds that the model allowing an autoregressive lag-1 dependence specification in the error term is the most suitable. Analysis of the results shows that the number of lanes at the intersections significantly increases motorcycle crashes, largely because of the higher exposure resulting from greater motorcycle accumulation at the stop line. Furthermore, the presence of a wide median at four-legged intersections, and of an exclusive right-turn lane and an uncontrolled left-turn lane at three-legged intersections, exacerbates this potential hazard. Motorcycle crashes also increase on high-speed roadways because of the vulnerability of motorcyclists. The presence of red light cameras reduces motorcycle crashes significantly on the intersection roadways of both four-legged and three-legged intersections: with a red-light camera present, motorcycles are less exposed to conflicts because they are observed to be more disciplined in queuing at the stop line and less likely to jump-start at the onset of green.

Relevance:

10.00%

Publisher:

Abstract:

This study proposes a full Bayes (FB) hierarchical modeling approach to traffic crash hotspot identification. The FB approach is able to account for all uncertainties associated with crash risk and various risk factors by estimating a posterior distribution of site safety, on which various ranking criteria can be based. Moreover, through hierarchical model specification, the FB approach can flexibly account for various heterogeneities in crash occurrence due to spatiotemporal effects on traffic safety. Using Singapore intersection crash data (1997–2006), an empirical evaluation was conducted to compare the proposed FB approach to state-of-the-art approaches. Results show that the Bayesian hierarchical models accommodating site-specific effects and serial correlation have better goodness-of-fit than non-hierarchical models. Furthermore, all model-based approaches perform significantly better in safety ranking than the naive approach using raw crash counts. The FB hierarchical models were found to significantly outperform the standard empirical Bayes (EB) approach in correctly identifying hotspots.
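
As a rough illustration of how posterior-based ranking differs from ranking on raw counts, the sketch below ranks sites by the posterior probability that their risk exceeds a threshold, given MCMC draws of site-level risk. The sampler, threshold and data are assumptions for illustration, not the study's setup:

```python
import numpy as np

def rank_hotspots(posterior_risk, threshold):
    """Rank sites by P(risk > threshold) estimated from posterior samples.

    posterior_risk: array of shape (n_samples, n_sites) holding MCMC draws
    of each site's expected crash frequency.
    """
    exceed_prob = (posterior_risk > threshold).mean(axis=0)  # per-site probability
    return np.argsort(-exceed_prob), exceed_prob             # worst sites first

# Toy example: 4 sites, 1000 posterior draws each.
rng = np.random.default_rng(0)
draws = rng.gamma(shape=[2.0, 5.0, 3.0, 8.0], scale=1.0, size=(1000, 4))
order, probs = rank_hotspots(draws, threshold=4.0)
print(order, probs.round(2))  # e.g. site 3 ranked most hazardous
```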

Relevance:

10.00%

Publisher:

Abstract:

Reducing complexity in Information Systems is a main concern in both research and industry. One strategy for reducing complexity is separation of concerns, which advocates separating various concerns, such as security and privacy, from the main concern. It results in less complex, more easily maintainable, and more reusable Information Systems. Separation of concerns is addressed through the Aspect-Oriented paradigm, which has been well researched and implemented in programming, where languages such as AspectJ have been developed. However, research on aspect orientation for Business Process Management is still in its early stages. While some efforts have been made to propose aspect-oriented business process modelling, it has not yet been investigated how to enact such process models in a Workflow Management System. In this paper, we define a set of requirements specifying the execution of aspect-oriented business process models. We create a Coloured Petri Net specification for the semantics of a so-called Aspect Service that fulfils these requirements. Such a service extends a Workflow Management System with support for executing aspect-oriented business process models. The design specification of the Aspect Service is also inspected through state space analysis.
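
The paper's semantics are given as a Coloured Petri Net, which is not reproduced here; the sketch below only illustrates the general idea of an aspect service weaving cross-cutting advice (e.g., a security check) around a core workflow task. All names and structure are hypothetical, not the paper's Aspect Service:

```python
# Minimal illustration of weaving before/after advice around workflow tasks.
# This mimics an aspect service intercepting task execution; it is not the
# Coloured Petri Net semantics defined in the paper.

def security_check(task, data):
    print(f"[before {task}] checking permissions for {data['user']}")

def audit_log(task, data):
    print(f"[after {task}] logging completion")

ASPECTS = {"approve_invoice": {"before": [security_check], "after": [audit_log]}}

def execute_task(task, data, core_logic):
    for advice in ASPECTS.get(task, {}).get("before", []):
        advice(task, data)                      # cross-cutting concern, pre-task
    result = core_logic(data)                   # the main concern
    for advice in ASPECTS.get(task, {}).get("after", []):
        advice(task, data)                      # cross-cutting concern, post-task
    return result

execute_task("approve_invoice", {"user": "alice"}, lambda d: "approved")
```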

Relevance:

10.00%

Publisher:

Abstract:

This paper engages with debates about whether comprehensive prior specification of criteria and standards is sufficient for informed professional judgement. A preoccupation has emerged with the specificity and explication of criteria intended to regulate judgement. This has resulted in criteria compliance in the use of defined standards to validate judgements and improve reliability and consistency. Compliance has become a priority, the consequence being the prominence of explicit criteria and a lack of acknowledgement of the latent and meta-criteria that also operate within judgement practice. This paper examines judgement as a process involving three categories of assessment criteria in the context of standards-referenced systems: explicit, latent and meta-criteria. These are understood to be wholly interrelated and interdependent. A conceptualisation of judgement involving the interplay of the three criteria types is presented, with an exploration of how they function to focus or alter assessments of quality in judgements of achievement in English and Mathematics.

Relevance:

10.00%

Publisher:

Abstract:

Although the relationship between socioeconomic status (SES) and health is well documented for developed countries, less evidence has been presented for developing countries. The aim of this paper is to analyse this relationship at the household level for Fiji, a developing country in the South Pacific, using original household survey data. To allow for the endogeneity of SES in the household health production function, we utilize a simultaneous equation approach estimated by full information maximum likelihood. By restricting our sample to one relatively small island, and including area and district hospital effects, physical geography effects are disentangled from income effects. We measure SES as permanent income, which is constructed using principal components analysis; an alternative specification considers transitory household income. We find that a 1% increase in wealth (our measure of permanent income) would lead to a 15% decrease in the probability of an incapacitating illness occurring intra-household. Although the presence of a strong relationship indicates that relatively small improvements in SES can significantly improve health at the household level, it is argued that the design of appropriate policy would also require an understanding of the various mechanisms through which the relationship operates.
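
Constructing a permanent-income (wealth) index from asset indicators via principal components is a standard technique; a minimal sketch is below. The asset variables and data are invented for illustration — the paper's actual indicators are not listed in the abstract:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical household asset indicators (rows = households, columns = assets,
# e.g. electricity, car ownership, radio, a rooms-per-person proxy).
assets = np.array([
    [1, 0, 1, 2],
    [0, 0, 0, 1],
    [1, 1, 1, 3],
    [1, 0, 0, 1],
], dtype=float)

# The first principal component of the standardised asset matrix serves as
# the wealth index used in place of (unobserved) permanent income.
scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(assets))
wealth_index = scores.ravel()
print(wealth_index)  # higher score = wealthier household (sign may need flipping)
```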

Relevance:

10.00%

Publisher:

Abstract:

A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment by the use of non-deterministic, readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the units from performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also to timestamp events that occur on General Purpose Input/Output (GPIO) pins, referencing a submillisecond-accurate, wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised, distributed image acquisition system over a large geographic area; a real-world application of this functionality is a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A 'best-practice' design methodology is adopted within the project to ensure the final system operates according to its requirements. Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is then verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse system comprises a single Grand Master unit, which is responsible for maintaining the absolute value of the "global" clock, and Slave nodes, which determine their local clock offset from that of the Grand Master via synchronisation events that occur multiple times per second. The mechanism used for wirelessly synchronising the clocks between the boards makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results.
Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement; the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of Slaves to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond. The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below the target of 1 ms.
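
For readers unfamiliar with PTP-style synchronisation, the standard offset/delay calculation from a two-way timestamp exchange is sketched below. BabelFuse's firmware protocol is only "based on elements of" IEEE-1588, so treat this as the textbook formula rather than the implemented one:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Classic two-way time-transfer estimate (IEEE-1588 style).

    t1: master sends Sync          (master clock)
    t2: slave receives Sync        (slave clock)
    t3: slave sends Delay_Req      (slave clock)
    t4: master receives Delay_Req  (master clock)
    Assumes a symmetric network path.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way path delay
    return offset, delay

# Example: slave clock 150 microseconds ahead of master, 50 us path delay.
print(ptp_offset_and_delay(t1=0.0, t2=0.000200, t3=0.000300, t4=0.000200))
# -> (0.00015, 5e-05)
```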

Relevance:

10.00%

Publisher:

Abstract:

The suitability of Role-Based Access Control (RBAC) is being challenged in dynamic environments like healthcare. In an RBAC system, a user's legitimate access may be denied if their need was not anticipated by the security administrator at the time of policy specification. Alternatively, even when the policy is correctly specified, an authorised user may accidentally or intentionally misuse a granted permission. The heart of the challenge is the intrinsic unpredictability of users' operational needs, as well as their incentives to misuse permissions. In this paper we propose a novel Budget-aware Role-Based Access Control (B-RBAC) model that extends RBAC with explicit notions of budget and cost, where users are assigned a limited budget through which they pay for the cost of the permissions they need. We propose a model in which the values of resources are explicitly defined and the RBAC policy is used as a reference point to discriminate the prices of access permissions, as opposed to representing hard-and-fast rules for making access decisions. This approach has several desirable properties. It enables users to acquire unassigned permissions if they deem them necessary; however, users' capability for misuse is always bounded by their allocated budget and is further adjustable through the discrimination of permission prices. Finally, it provides a uniform mechanism for the detection and prevention of misuse.
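
A minimal sketch of the budget-and-price access decision the abstract describes might look like the following. The price function, budgets and names are assumptions, since the paper defines these formally in its model:

```python
# Illustrative B-RBAC decision: the RBAC policy discounts prices rather than
# issuing hard allow/deny. Prices, budgets and the discount rule are assumed.

PERMISSION_VALUE = {"read_record": 1.0, "modify_record": 5.0}

def permission_price(role, permission, role_permissions):
    base = PERMISSION_VALUE[permission]
    # Permissions already assigned to the role are cheap; others cost a premium.
    return base if permission in role_permissions.get(role, set()) else base * 10

def request_access(user_budget, role, permission, role_permissions):
    price = permission_price(role, permission, role_permissions)
    if price <= user_budget[0]:
        user_budget[0] -= price        # pay for the permission
        return True                    # access granted (and auditable)
    return False                       # budget exhausted: misuse is bounded

budget = [20.0]
roles = {"nurse": {"read_record"}}
print(request_access(budget, "nurse", "read_record", roles))    # True (cheap)
print(request_access(budget, "nurse", "modify_record", roles))  # False: 50 > 19
```

The point the sketch tries to capture is that an unanticipated permission is still obtainable, just expensive, so legitimate exceptional access succeeds while sustained misuse drains the budget.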

Relevance:

10.00%

Publisher:

Abstract:

The Implementation Guide for the Hospital Surveillance of Staphylococcus aureus Bacteraemia (SAB) has been produced by the Healthcare Associated Infection (HAI) Technical Working Group of the Australian Commission on Safety and Quality in Health Care (ACSQHC), and endorsed by the HAI Advisory Group. The Technical Working Group is made up of invited representatives from surveillance units and the ACSQHC, who have had input into the preparation of this Guide. The Guide has been developed to ensure consistency in the reporting of SAB across public and private hospitals, to enable accurate national reporting and benchmarking. It is intended to be used by Australian hospitals and organisations to support the implementation of healthcare-associated SAB surveillance using the endorsed case definition and the further detail in the Data Set Specification.

Relevance:

10.00%

Publisher:

Abstract:

There has been a recent surge of interest in cooking skills in a diverse range of fields, such as health, education and public policy. There appears to be an assumption that cooking skills are in decline and that this is having an adverse impact on individual health and well-being, and on family wholesomeness. The problematisation of cooking skills is not new, and can be seen in a number of historical developments that have specified particular pedagogies about food and eating. The purpose of this paper is to examine pedagogies on cooking skills and the importance accorded to them. The paper draws on Foucault's work on governmentality. Using examples from the USA, UK and Australia, the paper demonstrates the ways that authoritative discourses on the know-how and the know-what of food and cooking – called here 'savoir fare' – are developed and promulgated. These discourses, and the moral panics in which they are embedded, require individuals to make choices about what to cook and how to cook it, and in doing so establish moral pedagogies concerning good and bad cooking. The development of food literacy programmes, which see cooking skills as life skills, further extends the obligation to 'cook properly' to wider populations. The emphasis on cooking knowledge and skills has ushered in new forms of government: firstly, through a relationship between expertise and politics, readily visible in the authority that underpins the need to develop skills in food provisioning and preparation; secondly, through a new pluralisation of 'social' technologies that invites a range of private-public interest through, for example, television cooking programmes featuring cooking skills, albeit set in a particular milieu of entertainment; and lastly, through a new specification of the subject, seen in the formation of a choosing subject, one which has to problematise food choice in relation to expert advice and guidance. A governmentality focus shows that as discourses develop about the correct level of 'savoir fare', new discursive subject positions are opened up. Armed with an understanding of what is considered expert-endorsed, acceptable food knowledge, subjects judge themselves through self-surveillance. The result is a powerful food and family morality that is both disciplined and disciplinary.

Relevance:

10.00%

Publisher:

Abstract:

Educators face many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students and the top-performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps in answering some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades of 632 students across the four subjects and compare demonstrated student performance against the expected learning for each. The key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
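
The abstract's mapping of exam items to syllabus topics, outcomes and mastery levels suggests a simple data structure like the sketch below; the field names and scales are assumptions, not the paper's web implementation:

```python
from dataclasses import dataclass

# Bloom's Taxonomy used as an ordinal mastery scale (as in the paper's validation).
BLOOM = ["remember", "understand", "apply", "analyse", "evaluate", "create"]

@dataclass
class ExamItemMapping:
    item_id: str
    topic: str          # e.g. an ACM/IEEE CS2013 knowledge unit
    outcome: str
    mastery: str        # one of BLOOM (the intended level)
    confidence: float   # how sure the mapper is about this mapping (0-1)

mappings = [
    ExamItemMapping("Q1a", "SDF/Fundamental Programming Concepts",
                    "Write programs using conditionals", "apply", 0.9),
    ExamItemMapping("Q2", "SDF/Algorithms and Design",
                    "Trace a given algorithm", "understand", 0.7),
]

# Compare intended vs. demonstrated mastery for one student's results.
demonstrated = {"Q1a": "apply", "Q2": "remember"}
for m in mappings:
    gap = BLOOM.index(demonstrated[m.item_id]) - BLOOM.index(m.mastery)
    print(m.item_id, "gap:", gap)  # negative = performance below intended level
```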

Relevance:

10.00%

Publisher:

Abstract:

The well-known difficulties students exhibit when learning to program are often characterised as either difficulties in understanding the problem to be solved or difficulties in devising and coding a computational solution. It would therefore be helpful to understand which of these gives students the greatest trouble. Unit testing is a mainstay of large-scale software development and maintenance. A unit test suite serves not only for acceptance testing but is also a form of requirements specification, as exemplified by agile programming methodologies in which the tests are developed before the corresponding program code. In order to better understand students' conceptual difficulties with programming, we conducted a series of experiments in which students were required to write both unit tests and program code for non-trivial problems. Their code and tests were then assessed separately for correctness and 'coverage', respectively. The results allowed us to directly compare students' abilities to characterise a computational problem, as a unit test suite, and to develop a corresponding solution, as executable code. Since understanding a problem is a prerequisite to solving it, we expected students' unit testing skills to be a strong predictor of their ability to successfully implement the corresponding program. Instead, however, we found that students' testing abilities lag well behind their coding skills.
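
As a concrete illustration of a unit test suite acting as a requirements specification, consider the toy example below: the tests characterise the problem before (and independently of) any implementation. The problem and names are invented, not taken from the study's experiments:

```python
import unittest

def median(xs):
    """Implementation under test (written after the tests, agile-style)."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

class MedianSpec(unittest.TestCase):
    # Each test encodes one requirement; together they specify the problem.
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length_averages_middle_pair(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)

    def test_single_element(self):
        self.assertEqual(median([7]), 7)

if __name__ == "__main__":
    unittest.main()
```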

Relevance:

10.00%

Publisher:

Abstract:

This note examines the productive efficiency of 62 starting guards during the 2011/12 National Basketball Association (NBA) season. This period coincides with the phenomenal and largely unanticipated performance of the New York Knicks' starting point guard Jeremy Lin and the attendant public and media hype known as 'Linsanity'. We employ a data envelopment analysis (DEA) approach that includes allowance for an undesirable output, here turnovers per game, alongside the desirable outputs of points, rebounds, assists, steals and blocks per game and an input of minutes per game. The results indicate that, depending upon the specification, between 29% and 42% of NBA guards are fully efficient, including Jeremy Lin, with mean inefficiencies of 3.7% and 19.2%, respectively. However, while Jeremy Lin is technically efficient, he seldom serves as a benchmark for inefficient players, at least when compared with established players such as Chris Paul and Dwyane Wade. This suggests the uniqueness of Jeremy Lin's productive solution and may explain why his unique style of play, encompassing individual brilliance, unselfish play and team leadership, is of such broad public appeal.
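
DEA efficiency scores come from solving a linear program per decision-making unit (here, per player); a minimal input-oriented sketch is below, with turnovers treated as an input, which is one common way to handle an undesirable output, though the note may use a different formulation. The data are invented:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.

    X: (m_inputs, n_dmus), Y: (s_outputs, n_dmus).
    min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0,
    with decision vector z = [theta, lam_1 .. lam_n].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                            # minimise theta
    A_ub = np.zeros((m + s, n + 1))
    A_ub[:m, 0] = -X[:, o]
    A_ub[:m, 1:] = X                                      # X lam - theta x_o <= 0
    A_ub[m:, 1:] = -Y                                     # -Y lam <= -y_o
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]                                       # 1.0 = fully efficient

# Toy data: inputs = minutes and turnovers per game; outputs = points, assists.
X = np.array([[36.0, 34.0, 30.0], [3.5, 2.0, 1.5]])
Y = np.array([[18.0, 22.0, 12.0], [7.0, 4.0, 6.0]])
for o in range(3):
    print(f"player {o}: efficiency = {dea_ccr_input(X, Y, o):.3f}")
```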

Relevance:

10.00%

Publisher:

Abstract:

Immigration has played an important role in the historical development of Australia. Thus, it is no surprise that a large body of empirical work has developed which focuses upon how migrants fare in the land of opportunity. Much of the literature is comparatively recent, i.e. from the last ten years or so, encouraged by the advent of publicly available Australian cross-section micro data. Several different aspects of migrant welfare have been addressed, with major emphasis placed upon earnings and unemployment experience; for recent examples see Haig (1980), Stromback (1984), Chiswick and Miller (1985), Tran-Nam and Nevile (1988) and Beggs and Chapman (1988). The present paper contributes to the literature by providing additional empirical evidence on the native/migrant earnings differential. The data utilised are from the rather neglected Australian Bureau of Statistics (ABS) Special Supplementary Survey No. 4, 1982, otherwise known as the Family Survey. The paper also examines the importance of distinguishing between the wage and salary sector and the self-employment sector when discussing native/migrant differentials. Separate earnings equations for the two labour market groups are estimated, and the native/migrant earnings differential is broken down by employment status. This is a novel application in the Australian context and provides some insight into the earnings of the self-employed, a group that, despite its size (around 20 per cent of the labour force), is frequently ignored by economic research. Most previous empirical research fails to examine the effect of employment status on earnings. Stromback (1984) includes a dummy variable representing self-employment status in an earnings equation estimated over a pooled sample of paid and self-employed workers. The variable is found to be highly significant, which leads Stromback to question the efficacy of including the self-employed in the estimation sample. The suggestion is that part of self-employed earnings represents a return to non-human capital investment, i.e. investments in machinery, buildings, etc., so the structural determinants of earnings differ significantly from those for paid employees. Tran-Nam and Nevile (1988) deal with differences between paid employees and the self-employed by deleting the latter from their sample. However, deleting the self-employed from the estimation sample may lead to bias in the OLS estimation method (see Heckman 1979): the desirable properties of OLS depend upon estimation on a random sample, so the Tran-Nam and Nevile results are likely to suffer from bias unless individuals are randomly allocated between self-employment and paid employment. The current analysis extends Tran-Nam and Nevile (1988) by explicitly treating the choice of paid employment versus self-employment as endogenously determined. This allows an explicit test of the appropriateness of deleting self-employed workers from the sample. Earnings equations corrected for sample selection are estimated for both natives and migrants in the paid employee sector, using the Heckman (1979) two-step estimator. The paper is divided into five major sections. The next section presents the econometric model, incorporating the specification of the earnings generating process together with an explicit model determining an individual's employment status. In Section III the data are described. Section IV draws together the main econometric results of the paper.
First, the probit estimates of the labour market status equation are documented. This is followed by presentation and discussion of the Heckman two-step estimates of the earnings specification for both native and migrant Australians. Separate earnings equations are estimated for paid employees and the self-employed. Section V documents estimates of the native/migrant earnings differential for both categories of employees. To aid comparison with earlier work, the Oaxaca decomposition of the earnings differential for paid employees is carried out for both the simple OLS regression results and the parameter estimates corrected for sample selection effects. These differentials are interpreted and compared with previous Australian findings. A short section concludes the paper.
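
Since the paper leans on the Heckman (1979) two-step estimator, the textbook form of the correction is worth recalling; the variable names here are generic, not the paper's:

```latex
% Step 1: probit selection into paid employment (selection covariates z_i).
% Step 2: OLS earnings equation augmented with the inverse Mills ratio.
\begin{aligned}
  \Pr(\text{paid}_i = 1 \mid \mathbf{z}_i)
    &= \Phi(\mathbf{z}_i^{\top}\boldsymbol{\gamma}) \\
  \hat{\lambda}_i
    &= \frac{\phi(\mathbf{z}_i^{\top}\hat{\boldsymbol{\gamma}})}
            {\Phi(\mathbf{z}_i^{\top}\hat{\boldsymbol{\gamma}})} \\
  \ln w_i
    &= \mathbf{x}_i^{\top}\boldsymbol{\beta} + \beta_{\lambda}\hat{\lambda}_i + u_i
\end{aligned}
```

A significant coefficient on the inverse Mills ratio term is the test, mentioned in the passage above, of whether simply deleting the self-employed would have biased the OLS estimates.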

Relevance:

10.00%

Publisher:

Abstract:

The success of contemporary organizations depends on their ability to make appropriate decisions. Making appropriate decisions is inevitably bound to the availability and provision of relevant information, and information systems should be able to provide that information in an efficient way. Thus, within information systems development, a detailed analysis of information supply and information demands has to prevail. Based on Szyperski's information set and subset model, we give an epistemological foundation of information modeling in general and show why conceptual modeling in particular is capable of specifying effective and efficient information systems. Furthermore, we derive conceptual modeling requirements based on our findings. A short example illustrates the usefulness of a conceptual data modeling technique for the specification of information systems.
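
The paper's own short example is not reproduced in this record; as a stand-in, a minimal illustration of specifying an information demand against a conceptual schema, with invented entities and attributes, might look like this:

```python
from dataclasses import dataclass
from datetime import date

# A tiny conceptual schema: the information objects a decision needs,
# specified independently of any physical database design.

@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer: Customer      # relationship: each Order belongs to one Customer
    placed_on: date
    total: float

# An "information demand" (e.g. for reorder planning) can then be stated
# against the schema rather than against particular tables or files.
def open_order_value(orders: list[Order], since: date) -> float:
    return sum(o.total for o in orders if o.placed_on >= since)
```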