230 results for Log conformance
Abstract:
The purpose of this study was to contrast the role of parental and non-parental (sibling, other family and non-family) supervisors in the supervision of learner drivers in graduated driver licensing systems. The sample consisted of 522 supervisors from the Australian states of Queensland (n = 204, 39%) and New South Wales (n = 318, 61%). The learner licence requirements in these two states are similar, although learners in Queensland are required to accrue 100 h of supervision in a log book while those in New South Wales are required to accrue 120 h. Approximately 50 per cent of the sample (n = 255) were parents of the learner driver, while the remainder were either siblings (n = 72, 13.8%), other family members (n = 153, 29.3%) or non-family (n = 114, 21.8%). Parents were more likely than siblings, other family or non-family members to be the primary supervisor of the learner driver. Siblings provided fewer hours of practice than other supervisor types, while the median and mode suggest that parents provided the most hours of practice to learner drivers. This study demonstrates that non-parental supervisors, such as siblings, other family members and non-family, are important in helping learner drivers accumulate sufficient supervised driving practice, at least in jurisdictions that require 100 or 120 h of practice.
Abstract:
Using Media Access Control (MAC) addresses for data collection and tracking is a capable and cost-effective approach, as traditional methods such as surveys and video surveillance have numerous drawbacks and limitations. Positioning cell phones via the Global System for Mobile communication (GSM) has been considered an attack on people's privacy. A MAC address, by contrast, is merely the unique identifier a WiFi- or Bluetooth-enabled device uses to connect to another device, and so carries no comparable potential for privacy infringement. This paper presents the use of MAC address data collection for analysing the spatio-temporal dynamics of humans in terms of shared space utilisation. The paper first discusses the critical challenges and key benefits of MAC address data as a tracking technology for monitoring human movement. Proximity-based MAC address tracking is then postulated as an effective methodology for analysing the complex spatio-temporal dynamics of human movements in shared zones such as lounge and office areas. A case study of a university staff lounge area is described in detail, and the results indicate significant added value of the methodology for human movement tracking. By analysing the MAC address data from the study area, clear statistics are obtained, such as staff utilisation frequency, utilisation peak periods, and staff time spent. The analyses also reveal staff socialising profiles in terms of group and solo gathering. The paper concludes with a discussion of why MAC address tracking offers significant advantages for tracking human behaviour in terms of shared space utilisation relative to other, more prominent technologies, and outlines some of its remaining deficiencies.
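As an illustration of the kind of statistics the case study reports, the hedged sketch below derives utilisation frequency and time spent per device from raw (MAC, timestamp) sightings. The record layout and the five-minute gap that separates two visits are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the paper's implementation): deriving utilisation
# statistics from proximity-based MAC address sightings. The record layout
# and the 5-minute visit-gap threshold are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

GAP = timedelta(minutes=5)  # assumed gap that separates two distinct visits

def visits_per_device(sightings):
    """sightings: iterable of (mac, datetime) tuples from a WiFi/BT scanner."""
    by_mac = defaultdict(list)
    for mac, ts in sightings:
        by_mac[mac].append(ts)
    stats = {}
    for mac, times in by_mac.items():
        times.sort()
        visits, start, last = [], times[0], times[0]
        for ts in times[1:]:
            if ts - last > GAP:          # a long gap ends the current visit
                visits.append((start, last))
                start = ts
            last = ts
        visits.append((start, last))
        stats[mac] = {
            "visit_count": len(visits),  # utilisation frequency
            "time_spent": sum((e - s for s, e in visits), timedelta()),
        }
    return stats
```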
Abstract:
The growing dominance of project planning cycles and results-based management in development over the past 20 years has significant implications for the effective evaluation of communication for development and social change and for the sustainability of these processes. These approaches to development and evaluation usually give priority to the linear, logical framework (or log frame) approach promoted by many development institutions. This tends to emphasize upward accountability approaches to development and its evaluation, so that development is driven by exogenous rather than endogenous models of development and social change. Such approaches are underpinned by ideas of preplanning and the predetermination of what successful outcomes look like. In this way, outcomes of complex interventions tend to be reduced to simple cause-effect processes and the categorization of things, including people (Chambers and Pettit 2004; Eyben 2011). This runs counter to communication for development approaches, which prioritize engagement, relationships, empowerment and dialogue as important components of positive social change.
Abstract:
Automated process discovery techniques aim at extracting process models from information system logs. Existing techniques in this space are effective when applied to relatively small or regular logs, but generate spaghetti-like and sometimes inaccurate models when confronted with logs of high variability. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. This leads to a collection of process models, each one representing a variant of the business process, as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity and low fitness. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically using subprocess extraction. Splitting is performed in a controlled manner in order to achieve user-defined complexity or fitness thresholds. Experiments on real-life logs show that the technique produces collections of models substantially smaller than those extracted by applying existing trace clustering techniques, while allowing the user to control the fitness of the resulting models.
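To make the controlled-splitting idea concrete, here is a minimal runnable sketch, not the paper's algorithm: traces are grouped by variant, a directly-follows graph stands in for the discovered model, and its edge count stands in for the complexity metric (the fitness threshold and subprocess extraction are omitted). Groups are split while the threshold is exceeded.

```python
# Hedged sketch of controlled divide-and-conquer discovery. discover() is
# approximated by a directly-follows graph and complexity by its edge count;
# both are assumed stand-ins, not the technique evaluated in the paper.
from collections import Counter

def dfg(traces):
    """Directly-follows graph: counts of consecutive activity pairs."""
    return Counter((t[i], t[i + 1]) for t in traces for i in range(len(t) - 1))

def discover_collection(traces, max_edges=10):
    groups, models = [list(traces)], []
    while groups:
        group = groups.pop()
        model = dfg(group)
        variants = set(map(tuple, group))
        if len(model) <= max_edges or len(variants) == 1:
            models.append(model)            # within threshold: keep this variant model
        else:                               # split: most frequent variant vs. the rest
            top = Counter(map(tuple, group)).most_common(1)[0][0]
            groups.append([t for t in group if tuple(t) == top])
            groups.append([t for t in group if tuple(t) != top])
    return models

log = [["a", "b", "c"], ["a", "b", "c"], ["a", "c", "b"], ["a", "d", "e", "c"]]
print(len(discover_collection(log, max_edges=3)), "variant models")
```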
Abstract:
Analysis of behavioural consistency is an important aspect of software engineering. In process and service management, consistency verification of behavioural models has manifold applications. For instance, a business process model used as a system specification and a corresponding workflow model used as its implementation have to be consistent. Another example is the analysis of the degree to which a process log of executed business operations is consistent with the corresponding normative process model. Typically, existing notions of behaviour equivalence, such as bisimulation and trace equivalence, are applied as consistency notions. However, checking these notions takes exponential time and yields only a Boolean result. In many cases, a quantification of behavioural deviation is needed, along with concepts to isolate the source of deviation. In this article, we propose causal behavioural profiles as the basis for a consistency notion. These profiles capture essential behavioural information, such as order, exclusiveness, and causality between pairs of activities of a process model. Consistency based on these profiles is weaker than trace equivalence, but can be computed efficiently for a broad class of models. We introduce techniques for the computation of causal behavioural profiles using structural decomposition techniques for sound free-choice workflow systems, provided that unstructured net fragments are acyclic or can be traced back to S- or T-nets. We also elaborate on the findings of applying our technique to three industry model collections.
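For intuition about the relations such a profile captures, the sketch below derives profile-style relations (strict order, interleaving, exclusiveness) between activity pairs from observed traces. Note that this log-based approximation is only an illustrative stand-in: the article computes the profiles structurally from the model, not from a log.

```python
# Illustrative log-based approximation of profile relations between
# activity pairs; the structural computation in the article is not shown.
from itertools import combinations

def behavioural_profile(traces):
    follows = set()                # (a, b) if a occurs before b in some trace
    activities = set()
    for t in traces:
        activities.update(t)
        for i, a in enumerate(t):
            for b in t[i + 1:]:
                follows.add((a, b))
    profile = {}
    for a, b in combinations(sorted(activities), 2):
        ab, ba = (a, b) in follows, (b, a) in follows
        if ab and ba:
            profile[(a, b)] = "interleaving"   # both orders observed
        elif ab:
            profile[(a, b)] = "strict order"   # a always before b
        elif ba:
            profile[(a, b)] = "reverse order"
        else:
            profile[(a, b)] = "exclusive"      # never observed together
    return profile

print(behavioural_profile([["a", "b", "c"], ["a", "c"], ["a", "d"]]))
```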
Abstract:
This paper gives an overview of the INEX 2008 Ad Hoc Track. The main goals of the Ad Hoc Track were two-fold. The first goal was to investigate the value of the internal document structure (as provided by the XML mark-up) for retrieving relevant information. This is a continuation of INEX 2007 and, for this reason, the retrieval results are liberalized to arbitrary passages and measures were chosen to fairly compare systems retrieving elements, ranges of elements, and arbitrary passages. The second goal was to compare focused retrieval to article retrieval more directly than in earlier years. For this reason, standard document retrieval rankings have been derived from all runs and evaluated with standard measures. In addition, a set of queries targeting Wikipedia was derived from a proxy log, and the runs were also evaluated against the clicked Wikipedia pages. The INEX 2008 Ad Hoc Track featured three tasks. For the Focused Task, a ranked list of non-overlapping results (elements or passages) was required. For the Relevant in Context Task, non-overlapping results (elements or passages) were returned grouped by the article from which they came. For the Best in Context Task, a single starting point (element start tag or passage start) for each article was required. We discuss the results for the three tasks and examine the relative effectiveness of element and passage retrieval. This is examined in the context of content-only (CO, or keyword) search as well as content-and-structure (CAS, or structured) search. Finally, we look at the ability of focused retrieval techniques to rank articles, using standard document retrieval techniques, both against the judged topics and against queries and clicks from a proxy log.
Abstract:
Accurate prediction of incident duration is not only important information for a Traffic Incident Management System, but also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data were obtained from the Queensland Department of Transport and Main Roads' STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best-fitting distributions are identified for both clearance and arrival time for three types of incident: crash, stationary vehicle, and hazard. The results show that the Gamma, Log-logistic, and Weibull distributions are the best fits for crash, stationary vehicle, and hazard incidents, respectively. The significant influencing factors are identified for crash clearance time and arrival time, the quantitative influences for crash and hazard incidents are presented for both clearance and arrival time, and the model accuracy is analysed at the end.
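A minimal sketch of the distribution-comparison step, run on synthetic durations rather than the SIMS data: each candidate distribution is fitted by maximum likelihood and the fits are compared by AIC. Note that scipy's fisk distribution is the log-logistic; the sample size and parameter values are invented.

```python
# Illustrative sketch (synthetic data, not the SIMS dataset): fitting
# candidate duration distributions and comparing them by AIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
clearance = rng.gamma(shape=2.0, scale=20.0, size=500)  # stand-in durations (min)

candidates = {"Gamma": stats.gamma, "Weibull": stats.weibull_min,
              "Log-logistic": stats.fisk}
for name, dist in candidates.items():
    params = dist.fit(clearance, floc=0)       # fix the location at zero
    ll = dist.logpdf(clearance, *params).sum() # maximised log-likelihood
    aic = 2 * len(params) - 2 * ll             # lower AIC = better fit
    print(f"{name:13s} AIC = {aic:.1f}")
```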
Abstract:
Whilst native French speakers often collapse accountability into account giving, this paper outlines the shape of an accountability à la française. Reading Tocqueville's (1835) work highlights that accountability as practised in Anglo-Saxon countries is an offspring of American democracy. An accountability à la française would be characterised by conformance to a set of universal values, the submission of minorities to choices made by the majority, the absence of discrimination, a means obligation, as well as the rejection of transparency.
Abstract:
We consider the problem of combining opinions from different experts in an explicitly model-based way to construct a valid subjective prior in a Bayesian statistical approach. We propose a generic approach based on a hierarchical model that accounts for various sources of variation as well as for potential dependence between experts. We apply this approach to two problems. The first deals with a food risk assessment problem involving dose-response modelling for Listeria monocytogenes contamination in mice. Two hierarchical levels of variation are considered (between and within experts), with a mathematically complex situation due to the use of an indirect probit regression. The second concerns the time taken by PhD students in a particular school to submit their theses. It illustrates a complex situation in which three hierarchical levels of variation are modelled, but with a simpler underlying probability distribution (log-Normal).
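As a toy illustration of the hierarchical structure (not the paper's elicitation model), the sketch below simulates between-expert and within-expert variation around a log-Normal quantity such as thesis submission time; every parameter value is invented.

```python
# Toy two-level hierarchy: a population log-mean, expert-specific deviations
# (between-expert variation), and repeated statements per expert
# (within-expert variation). All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
mu, tau, sigma = np.log(4.0), 0.30, 0.15   # population log-mean, between-/within-expert sd
n_experts, n_statements = 5, 3

theta = rng.normal(mu, tau, size=n_experts)            # each expert's own log-mean
elicited = rng.normal(theta[:, None], sigma,           # repeated statements per expert
                      size=(n_experts, n_statements))

# Crude pooled estimate: the grand mean of all statements on the log scale
pooled_log_mean = elicited.mean()
print(f"pooled estimate of median submission time: {np.exp(pooled_log_mean):.2f} years")
```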
Abstract:
Universal One-Way Hash Functions (UOWHFs) may be used in place of collision-resistant functions in many public-key cryptographic applications. At Asiacrypt 2004, Hong, Preneel and Lee introduced the stronger security notion of higher order UOWHFs to allow construction of long-input UOWHFs using the Merkle-Damgård domain extender. However, they did not provide any provably secure constructions for higher order UOWHFs. We show that the subset sum hash function is a kth order Universal One-Way Hash Function (hashing n bits to m < n bits) under the Subset Sum assumption for k = O(log m). Therefore we strengthen a previous result of Impagliazzo and Naor, who showed that the subset sum hash function is a UOWHF under the Subset Sum assumption. We believe our result is of theoretical interest; as far as we are aware, it is the first example of a natural and computationally efficient UOWHF which is also a provably secure higher order UOWHF under the same well-known cryptographic assumption, whereas this assumption does not seem sufficient to prove its collision-resistance. A consequence of our result is that one can apply the Merkle-Damgård extender to the subset sum compression function with ‘extension factor’ k+1, while losing (at most) about k bits of UOWHF security relative to the UOWHF security of the compression function. The method also leads to a saving of up to m log(k+1) bits in key length relative to the Shoup XOR-Mask domain extender applied to the subset sum compression function.
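For readers unfamiliar with the underlying primitive, here is a minimal sketch of the subset sum compression function itself (hashing n bits to m < n bits): the key is n random m-bit integers, and the digest is the sum, modulo 2^m, of the integers selected by the input bits. The parameter sizes are illustrative only, and the higher order UOWHF analysis above is not reproduced here.

```python
# Minimal sketch of the subset sum compression function. The sizes n and m
# are toy values for illustration, far below cryptographic parameters.
import secrets

def keygen(n, m):
    """Key: n uniformly random m-bit integers."""
    return [secrets.randbelow(2 ** m) for _ in range(n)]

def subset_sum_hash(key, x_bits, m):
    """x_bits: iterable of n input bits; digest is an m-bit integer."""
    return sum(a for a, bit in zip(key, x_bits) if bit) % (2 ** m)

n, m = 16, 8
key = keygen(n, m)
msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
print(f"digest = {subset_sum_hash(key, msg, m):#04x}")
```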
Abstract:
Objectives: To evaluate the quality of care delivered to patients presenting to the emergency department (ED) with pain and managed by emergency nurse practitioners by measuring: 1) time to analgesia from initial presentation, 2) time from being seen to next analgesia, and 3) pain score documentation. Background: The delivery of quality care in the ED is emerging as one of the most important service indicators measured by health services. Emergency nurse practitioner services are designed to improve timely, quality care for patients. One of the goals of quality emergency care is the timely and effective delivery of analgesia for patients, and timely analgesia is an important indicator of ED service performance. Methods: A retrospective explicit chart review of 128 consecutive patients with pain and managed by emergency nurse practitioners was conducted. Data collected included demographics, presenting complaint, pain scores, and time to first dose of analgesia. Patients were identified from the ED Patient Information System (Cerner log) and data were extracted from electronic medical records. Results: Pain scores were documented for 67 (52.3%; 95% CI: 43.3-61.2) patients. The median time to analgesia from presentation was 60.5 (IQR 30-87) minutes, with 34 (26.6%; 95% CI: 19.1-35.1) patients receiving analgesia within 30 minutes of presentation to hospital. There were 22 (17.2%; 95% CI: 11.1-24.9) patients who received analgesia prior to assessment by a nurse practitioner. Among patients who received analgesia after assessment by a nurse practitioner, the median time to analgesia after assessment was 25 (IQR 12-50) minutes, with 65 (61.3%; 95% CI: 51.4-70.6) patients receiving analgesia within 30 minutes of assessment. Conclusions: The majority of patients assessed by nurse practitioners received analgesia within 30 minutes of assessment. However, opportunities for substantial improvement in these times, along with documentation of pain scores, were identified and will be targeted in future research.
Abstract:
An anonymous membership broadcast scheme is a method by which a sender broadcasts the secret identity of one out of a set of n receivers, in such a way that only the right receiver knows that he is the intended receiver, while the others cannot determine any information about this identity (except that they know they are not the intended ones). In a w-anonymous membership broadcast scheme, no coalition of up to w receivers, not containing the selected receiver, is able to determine any information about the identity of the selected receiver. We present two new constructions of w-anonymous membership broadcast schemes. The first construction is based on error-correcting codes, and we show that there exist schemes that allow a flexible choice of w while keeping the complexities for broadcast communication, user storage and required randomness polynomial in log n. The second construction is based on the concept of collision-free arrays, which is introduced in this paper. This construction results in more flexible schemes, allowing trade-offs between the different complexities.
Abstract:
Increased interest in the area of process improvement persuaded Rabobank Group ICT to examine its own Change process in order to improve its competitiveness. The group is looking for answers about the effectiveness of changes applied as part of this process, with particular interest in the presence of predictive patterns and their parameters. We conducted an analysis of the process log using well-established process mining techniques (i.e., the Fuzzy Miner). The results of the analysis show that a visible impact is missing.
Abstract:
Social network technologies, as we know them today, have become a popular feature of everyday life for many people. As their name suggests, their underlying premise is to enable people to connect with each other for a variety of purposes. These purposes, however, are generally thought of in a positive fashion. Based on a multi-method study of two online environments, Habbo Hotel and Second Life, which incorporate social networking functionality, we shed light on forms of what can be conceptualized as antisocial behaviours and the rationales for these. Such behaviours included scamming, racist/homophobic attacks, sim attacks, avatar attacks, non-conformance to contextual norms, counterfeiting and unneighbourly behaviour. The rationales for such behaviours included profit, fun, status building, network disruption, accidental acts and prejudice. Through our analysis we are able to comment upon the difficulties of defining antisocial behaviour in such environments, particularly when such environments are subject to interpretation vis-à-vis their use and expected norms. We also point to the problems we face in conducting our public and private lives given the role ICTs are playing in the convergence of these two spaces, as well as the convergence of ICTs themselves.
Abstract:
Spatial data are now prevalent in a wide range of fields, including environmental and health science. This has led to the development of a range of approaches for analysing patterns in these data. In this paper, we compare several Bayesian hierarchical models for analysing point-based data based on the discretization of the study region, resulting in grid-based spatial data. The approaches considered include two parametric models and a semiparametric model. We highlight the methodology and computation for each approach. Two simulation studies are undertaken to compare the performance of these models for various structures of simulated point-based data that resemble environmental data. A case study of a real dataset is also conducted to demonstrate a practical application of the modelling approaches. Goodness-of-fit statistics are computed to compare estimates of the intensity functions, and the deviance information criterion is also considered as an alternative model evaluation criterion. The results suggest that the adaptive Gaussian Markov random field model performs well for highly sparse point-based data with large variations or clustering across the space, whereas the discretized log Gaussian Cox process produces a good fit for dense and clustered point-based data. One should generally consider the nature and structure of the point-based data in order to choose the appropriate method for modelling discretized spatial point-based data.
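To illustrate the discretization step shared by all the compared models, the sketch below bins synthetic point locations into a regular grid and computes crude cell-level log-intensities; the grid size and extent are illustrative assumptions, and no hierarchical model is fitted here.

```python
# Sketch of discretizing point-based data into grid-based spatial data:
# events are binned into cells, and each compared model would then be
# fitted to the resulting cell counts. All values are synthetic.
import numpy as np

rng = np.random.default_rng(2)
points = rng.uniform(0, 10, size=(200, 2))     # stand-in (x, y) event locations

counts, xedges, yedges = np.histogram2d(points[:, 0], points[:, 1],
                                        bins=8, range=[[0, 10], [0, 10]])
log_intensity = np.log(counts + 0.5)           # crude log-intensity per cell
print("cells with zero events:", int((counts == 0).sum()))
```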