970 results for Stratigraphic log
Abstract:
In this paper we present a truncated differential analysis of reduced-round LBlock by computing the differential distribution of every nibble of the state. The log-likelihood ratio (LLR) statistical test is used as the tool for applying the distinguishing and key-recovery attacks. To build the distinguisher, all possible differences are traced through the cipher and the truncated differential probability distribution is determined for every output nibble. We concatenate additional rounds to the beginning and end of the truncated differential distribution to apply the key-recovery attack. By exploiting properties of the key schedule, we obtain a large overlap of key bits used in the beginning and final rounds. This allows us to significantly increase the differential probabilities and hence reduce the attack complexity. We validate the analysis by implementing the attack on LBlock reduced to 12 rounds. Finally, we apply single-key and related-key attacks on 18- and 21-round LBlock, respectively.
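As a hedged illustration of the statistical core (the distributions below are hypothetical, not the attack's actual ones), an LLR distinguisher compares the likelihood of observed nibble-difference counts under the cipher's truncated differential distribution against the uniform distribution of an ideal cipher:

```python
import math

def llr_statistic(counts, p_cipher, p_random):
    """Log-likelihood ratio of observed difference counts under the
    truncated differential distribution vs. the uniform distribution."""
    return sum(n * math.log(p_cipher[d] / p_random[d])
               for d, n in counts.items() if n > 0)

# Hypothetical distribution of one output nibble difference (16 values),
# slightly biased towards the zero difference.
p_random = {d: 1 / 16 for d in range(16)}
p_cipher = {0: 2 / 16, **{d: 14 / (16 * 15) for d in range(1, 16)}}

counts = {0: 140, **{d: 60 for d in range(1, 16)}}  # observed over 1040 samples
# A positive LLR favours the cipher hypothesis; the decision threshold
# trades false positives against false negatives.
print(llr_statistic(counts, p_cipher, p_random))
```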
Abstract:
The purpose of this study was to contrast the role of parental and non-parental (sibling, other family and non-family) supervisors in the supervision of learner drivers in graduated driver licensing systems. The sample consisted of 522 supervisors from the Australian states of Queensland (n = 204, 39%) and New South Wales (n = 318, 61%). The learner licence requirements in these two states are similar, although learners in Queensland are required to accrue 100 h of supervision in a log book while those in New South Wales are required to accrue 120 h. Approximately 50% of the sample (n = 255) were parents of the learner driver, while the remainder were either siblings (n = 72, 13.8%), other family members (n = 153, 29.3%) or non-family (n = 114, 21.8%). Parents were more likely than siblings, other family or non-family members to be the primary supervisor of the learner driver. Siblings provided fewer hours of practice than other supervisor types, while the median and mode suggest that parents provided the most hours of practice to learner drivers. This study demonstrates that non-parental supervisors, such as siblings, other family members and non-family, are important in helping learner drivers accumulate sufficient supervised driving practice, at least in jurisdictions that require 100 or 120 h of practice.
Abstract:
Using Media Access Control (MAC) addresses for data collection and tracking is a capable and cost-effective approach, as traditional methods such as surveys and video surveillance have numerous drawbacks and limitations. Positioning cell phones via the Global System for Mobile Communications (GSM) has been considered an attack on people's privacy. A MAC address, by contrast, is merely the unique identifier that a WiFi- or Bluetooth-enabled device logs when connecting to another device, and carries no such potential for privacy infringement. This paper presents the use of MAC address data collection for analysing the spatio-temporal dynamics of humans in terms of shared-space utilisation. The paper first discusses the critical challenges and key benefits of MAC address data as a tracking technology for monitoring human movement. Proximity-based MAC address tracking is then postulated as an effective methodology for analysing the complex spatio-temporal dynamics of human movement in shared zones such as lounge and office areas. A case study of a university staff lounge area is described in detail, and the results indicate a significant added value of the methodology for human movement tracking. By analysing the MAC address data from the study area, clear statistics such as staff utilisation frequency, utilisation peak periods, and staff time spent are obtained. The analyses also reveal staff socialising profiles in terms of group and solo gathering. The paper concludes with a discussion of why MAC address tracking offers significant advantages for tracking human behaviour in terms of shared-space utilisation relative to other, more prominent technologies, and outlines some of its remaining deficiencies.
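As an illustrative sketch only (the log format and the visit-gap threshold below are hypothetical, not the paper's pipeline), utilisation frequency and time spent can be derived from timestamped MAC sightings by merging consecutive detections into visits:

```python
from datetime import datetime, timedelta

# Hypothetical sightings: (anonymised MAC, timestamp) pairs from a proximity sensor.
sightings = [
    ("mac_a", datetime(2024, 5, 1, 9, 0)),
    ("mac_a", datetime(2024, 5, 1, 9, 10)),
    ("mac_a", datetime(2024, 5, 1, 13, 0)),   # new visit after a long gap
    ("mac_b", datetime(2024, 5, 1, 9, 5)),
]

GAP = timedelta(minutes=30)  # gap that separates two distinct visits

def visits_per_device(sightings):
    """Group consecutive sightings of each device into visits and
    return {device: [(start, end), ...]}."""
    by_dev = {}
    for dev, ts in sorted(sightings, key=lambda s: (s[0], s[1])):
        spans = by_dev.setdefault(dev, [])
        if spans and ts - spans[-1][1] <= GAP:
            spans[-1] = (spans[-1][0], ts)      # extend the current visit
        else:
            spans.append((ts, ts))              # start a new visit
    return by_dev

for dev, spans in visits_per_device(sightings).items():
    total = sum((end - start for start, end in spans), timedelta())
    print(dev, "visits:", len(spans), "time spent:", total)
```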
Abstract:
The growing dominance of project planning cycles and results-based management in development over the past 20 years has significant implications for the effective evaluation of communication for development and social change, and for the sustainability of these processes. These approaches to development and evaluation usually give priority to the linear, logical framework (or log frame) approach promoted by many development institutions. This tends to emphasize upward-accountability approaches to development and its evaluation, so that development is driven by exogenous rather than endogenous models of development and social change. Such approaches are underpinned by ideas of pre-planning and the predetermination of what successful outcomes look like. In this way, the outcomes of complex interventions tend to be reduced to simple cause-effect processes and the categorization of things, including people (Chambers and Pettit 2004; Eyben 2011). This runs counter to communication for development approaches, which prioritize engagement, relationships, empowerment and dialogue as important components of positive social change.
Abstract:
Automated process discovery techniques aim at extracting process models from information system logs. Existing techniques in this space are effective when applied to relatively small or regular logs, but generate spaghetti-like and sometimes inaccurate models when confronted with logs of high variability. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. This leads to a collection of process models – each one representing a variant of the business process – as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity and low fitness. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically by means of subprocess extraction. Splitting is performed in a controlled manner in order to achieve user-defined complexity or fitness thresholds. Experiments on real-life logs show that the technique produces collections of models substantially smaller than those extracted by applying existing trace clustering techniques, while allowing the user to control the fitness of the resulting models.
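A toy sketch of the variant-splitting idea (a simplified stand-in, not the paper's controlled splitting procedure): collapse the log into trace variants, then greedily cluster similar variants so that one model can be discovered per cluster:

```python
from collections import Counter

# Hypothetical event log: each trace is a tuple of activity labels.
log = [("a", "b", "c"), ("a", "b", "c"), ("a", "c", "b"),
       ("a", "d", "e"), ("a", "d", "e", "e")]

variants = Counter(log)  # identical traces collapse into one variant

def jaccard(t1, t2):
    """Similarity of two variants based on their activity sets."""
    s1, s2 = set(t1), set(t2)
    return len(s1 & s2) / len(s1 | s2)

# Greedy clustering: a variant joins the first cluster it is similar enough to.
clusters = []
for variant in variants:
    for cluster in clusters:
        if jaccard(variant, cluster[0]) >= 0.5:
            cluster.append(variant)
            break
    else:
        clusters.append([variant])

# One process model would then be discovered per cluster (e.g. with a
# discovery algorithm such as those in pm4py).
for i, cluster in enumerate(clusters):
    print(f"cluster {i}: {cluster}")
```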
Abstract:
Process compliance measurement is receiving increasing attention in companies due to stricter legal requirements and market pressure for operational excellence. In order to judge the compliance of business processing, the degree of behavioural deviation of a case, i.e., an observed execution sequence, is quantified with respect to a process model (referred to as fitness, or recall). Recently, different compliance measures have been proposed. Still, nearly all of them are grounded in state-based techniques, and the trace equivalence criterion in particular. As a consequence, these approaches have to deal with the state explosion problem. In this paper, we argue that a behavioural abstraction may be leveraged to measure the compliance of a process log – a collection of cases. To this end, we utilise causal behavioural profiles that capture the behavioural characteristics of process models and cases, and can be computed efficiently. We propose different compliance measures based on these profiles, discuss the impact of noise in process logs on our measures, and show how diagnostic information on non-compliance is derived. As a validation, we report on the findings of applying our approach in a case study with an international service provider.
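A minimal sketch, under strong simplifications (real causal behavioural profiles capture more relations than plain ordering), of how a profile precomputed from a model can quantify the compliance of a single case:

```python
# Hypothetical profile of the model: one relation per activity pair.
# "order" means the first activity must precede the second; "any" allows both.
model_profile = {("a", "b"): "order", ("a", "c"): "order", ("b", "c"): "any"}

def compliance(case, profile):
    """Fraction of constrained activity pairs in the case that
    respect the model's profile relations."""
    pos = {}
    for i, act in enumerate(case):
        pos.setdefault(act, i)            # first-occurrence index
    ok = total = 0
    for (x, y), rel in profile.items():
        if x in pos and y in pos:
            total += 1
            if rel == "any" or pos[x] < pos[y]:
                ok += 1
    return ok / total if total else 1.0

print(compliance(("a", "b", "c"), model_profile))  # 1.0: fully compliant
print(compliance(("b", "a", "c"), model_profile))  # 0.67: a-before-b violated
```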
Abstract:
Analysis of behavioural consistency is an important aspect of software engineering. In process and service management, consistency verification of behavioural models has manifold applications. For instance, a business process model used as a system specification and a corresponding workflow model used as an implementation have to be consistent. Another example is the analysis of the degree to which a process log of executed business operations is consistent with the corresponding normative process model. Typically, existing notions of behaviour equivalence, such as bisimulation and trace equivalence, are applied as consistency notions. Still, these notions are exponential to compute and yield only a Boolean result. In many cases, however, a quantification of behavioural deviation is needed, along with concepts to isolate the source of deviation. In this article, we propose causal behavioural profiles as the basis for a consistency notion. These profiles capture essential behavioural information, such as order, exclusiveness, and causality between pairs of activities of a process model. Consistency based on these profiles is weaker than trace equivalence, but can be computed efficiently for a broad class of models. We introduce techniques for the computation of causal behavioural profiles using structural decomposition techniques for sound free-choice workflow systems, provided that unstructured net fragments are acyclic or can be traced back to S- or T-nets. We also elaborate on the findings of applying our technique to three industry model collections.
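To make the profile idea concrete, here is a hedged sketch that classifies activity pairs as ordered, exclusive, or interleaving from example traces; the article itself computes profiles from the net structure via decomposition, which this does not reproduce:

```python
from itertools import product

# Hypothetical set of traces admitted by a process model.
traces = [("a", "b", "d"), ("a", "c", "d")]
acts = sorted({a for t in traces for a in t})

def relation(x, y, traces):
    """Classify a pair as strict order, exclusive, or interleaving,
    based on co-occurrence and ordering across all traces."""
    xy = yx = together = 0
    for t in traces:
        if x in t and y in t:
            together += 1
            if t.index(x) < t.index(y):
                xy += 1
            else:
                yx += 1
    if together == 0:
        return "exclusive"      # never co-occur
    if yx == 0:
        return "strict order"   # x always before y
    if xy == 0:
        return "reverse order"
    return "interleaving"

for x, y in product(acts, repeat=2):
    if x < y:
        print(x, y, "->", relation(x, y, traces))  # b,c come out exclusive
```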
Abstract:
This paper gives an overview of the INEX 2008 Ad Hoc Track. The main goals of the Ad Hoc Track were two-fold. The first goal was to investigate the value of the internal document structure (as provided by the XML mark-up) for retrieving relevant information. This is a continuation of INEX 2007 and, for this reason, the retrieval results are liberalized to arbitrary passages, and measures were chosen to fairly compare systems retrieving elements, ranges of elements, and arbitrary passages. The second goal was to compare focused retrieval to article retrieval more directly than in earlier years. For this reason, standard document retrieval rankings have been derived from all runs and evaluated with standard measures. In addition, a set of queries targeting Wikipedia has been derived from a proxy log, and the runs are also evaluated against the clicked Wikipedia pages. The INEX 2008 Ad Hoc Track featured three tasks: for the Focused Task, a ranked list of non-overlapping results (elements or passages) was required. For the Relevant in Context Task, non-overlapping results (elements or passages) were returned grouped by the article from which they came. For the Best in Context Task, a single starting point (element start tag or passage start) for each article was required. We discuss the results for the three tasks and examine the relative effectiveness of element and passage retrieval. This is examined in the context of content-only (CO, or keyword) search as well as content-and-structure (CAS, or structured) search. Finally, we look at the ability of focused retrieval techniques to rank articles, using standard document retrieval techniques, both against the judged topics as well as against queries and clicks from a proxy log.
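One simple way to derive an article ranking from a focused run (a hypothetical mapping for illustration; participating systems used various strategies) is to score each article by its best-scoring element or passage:

```python
# Hypothetical focused run: (article_id, element_path, retrieval_score).
run = [
    ("art1", "/article[1]/sec[2]", 3.2),
    ("art2", "/article[1]/sec[1]", 2.9),
    ("art1", "/article[1]/sec[5]", 2.1),
    ("art3", "/article[1]", 1.7),
]

# Each article inherits the score of its best element.
best = {}
for art, _elem, score in run:
    best[art] = max(best.get(art, float("-inf")), score)

# Rank articles by that score; this ranking can then be evaluated
# with standard document retrieval measures.
ranking = sorted(best, key=best.get, reverse=True)
print(ranking)  # ['art1', 'art2', 'art3']
```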
Abstract:
Accurate prediction of incident duration is not only important information for a Traffic Incident Management System, but also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data are obtained from the Queensland Department of Transport and Main Roads' STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best-fitting distributions are drawn for both clearance and arrival time for three types of incident: crash, stationary vehicle, and hazard. The results show that the Gamma, Log-logistic, and Weibull distributions are the best fit for crash, stationary vehicle, and hazard incidents, respectively. The significant impact factors are given for crash clearance time and arrival time, and the quantitative influences for crash and hazard incidents are presented for both clearance and arrival times. Finally, the model accuracy is analysed.
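A small sketch (on synthetic data, not the SIMS dataset) of fitting the three candidate distributions with scipy and comparing them by AIC, mirroring the best-fit selection described above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic clearance times in minutes, standing in for real durations.
durations = rng.gamma(shape=2.0, scale=15.0, size=500)

candidates = {
    "gamma": stats.gamma,
    "log-logistic": stats.fisk,   # scipy's name for the log-logistic
    "weibull": stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(durations, floc=0)       # fix location at zero
    loglik = np.sum(dist.logpdf(durations, *params))
    k = len(params) - 1                        # loc was fixed, not estimated
    aic = 2 * k - 2 * loglik                   # lower AIC = better fit
    print(f"{name:13s} AIC = {aic:.1f}")
```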
Abstract:
We consider the problem of combining opinions from different experts in an explicitly model-based way, so as to construct a valid subjective prior in a Bayesian statistical approach. We propose a generic approach based on a hierarchical model that accounts for various sources of variation as well as for potential dependence between experts. We apply this approach to two problems. The first is a food risk assessment problem involving dose-response modelling for Listeria monocytogenes contamination of mice. Two hierarchical levels of variation are considered (between and within experts), with a complex mathematical situation due to the use of an indirect probit regression. The second concerns the time taken by PhD students in a particular school to submit their theses. It illustrates a complex situation in which three hierarchical levels of variation are modelled, but with a simpler underlying probability distribution (log-Normal).
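A minimal simulation sketch of a two-level hierarchy on the log scale (hypothetical numbers, and far simpler than the paper's probit setting): elicited values vary within experts, and experts vary around an uncertain consensus:

```python
import numpy as np

rng = np.random.default_rng(0)

# Level 1: each expert's 'true' opinion varies around the consensus log-mean.
consensus_mu, between_sd, within_sd = np.log(4.0), 0.3, 0.2  # e.g. years to thesis
n_experts, n_elicitations = 5, 3

expert_mu = rng.normal(consensus_mu, between_sd, size=n_experts)

# Level 2: repeated elicitations from the same expert vary around expert_mu.
elicited = rng.normal(expert_mu[:, None], within_sd,
                      size=(n_experts, n_elicitations))

# Pooled estimate of the consensus on the log scale, back-transformed
# (a log-Normal median), standing in for a full Bayesian analysis.
pooled_mu = elicited.mean()
print("pooled median submission time:", np.exp(pooled_mu).round(2), "years")
```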
Abstract:
Universal One-Way Hash Functions (UOWHFs) may be used in place of collision-resistant functions in many public-key cryptographic applications. At Asiacrypt 2004, Hong, Preneel and Lee introduced the stronger security notion of higher order UOWHFs to allow construction of long-input UOWHFs using the Merkle-Damgård domain extender. However, they did not provide any provably secure constructions for higher order UOWHFs. We show that the subset sum hash function is a kth order Universal One-Way Hash Function (hashing n bits to m < n bits) under the Subset Sum assumption for k = O(log m). Therefore we strengthen a previous result of Impagliazzo and Naor, who showed that the subset sum hash function is a UOWHF under the Subset Sum assumption. We believe our result is of theoretical interest; as far as we are aware, it is the first example of a natural and computationally efficient UOWHF which is also a provably secure higher order UOWHF under the same well-known cryptographic assumption, whereas this assumption does not seem sufficient to prove its collision-resistance. A consequence of our result is that one can apply the Merkle-Damgård extender to the subset sum compression function with ‘extension factor’ k+1, while losing (at most) about k bits of UOWHF security relative to the UOWHF security of the compression function. The method also leads to a saving of up to m log(k+1) bits in key length relative to the Shoup XOR-Mask domain extender applied to the subset sum compression function.
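A toy sketch of the subset sum compression function itself (parameter sizes far too small to be secure, and the key generation is a simplified stand-in):

```python
import secrets

def keygen(n, m):
    """Key: one random m-bit value per input bit."""
    return [secrets.randbits(m) for _ in range(n)]

def subset_sum_hash(key, x_bits, m):
    """Hash n input bits to m bits: sum the key words selected
    by the 1-bits of the input, modulo 2**m."""
    total = sum(a for a, b in zip(key, x_bits) if b)
    return total % (2 ** m)

n, m = 16, 8                       # toy sizes; real use needs large m with m < n
key = keygen(n, m)
x = [secrets.randbits(1) for _ in range(n)]
print(subset_sum_hash(key, x, m))
```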
Abstract:
A catchment-scale multivariate statistical analysis of hydrochemistry enabled assessment of interactions between alluvial groundwater and Cressbrook Creek, an intermittent drainage system in southeast Queensland, Australia. Hierarchical cluster analyses and principal component analysis were applied to time-series data to evaluate the hydrochemical evolution of groundwater during periods of extreme drought and severe flooding. A simple three-dimensional geological model was developed to conceptualise the catchment morphology and the stratigraphic framework of the alluvium. The alluvium forms a two-layer system with a basal coarse-grained layer overlain by a clay-rich low-permeability unit. In the upper and middle catchment, alluvial groundwater is chemically similar to streamwater, particularly near the creek (reflected by high HCO3/Cl and K/Na ratios and low salinities), indicating a high degree of connectivity. In the lower catchment, groundwater is more saline with lower HCO3/Cl and K/Na ratios, notably during dry periods. Groundwater salinity substantially decreased following severe flooding in 2011, notably in the lower catchment, confirming that flooding is an important mechanism for both recharge and maintaining groundwater quality. The integrated approach used in this study enabled effective interpretation of hydrological processes and can be applied to a variety of hydrological settings to synthesise and evaluate large hydrochemical datasets.
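An illustrative sketch (synthetic values and hypothetical analyte columns) of the two multivariate steps named above, hierarchical cluster analysis and PCA, applied to a standardised hydrochemical matrix:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical samples-by-analytes matrix (e.g. HCO3, Cl, K, Na, EC),
# one row per groundwater or streamwater sample.
rng = np.random.default_rng(1)
X = rng.lognormal(mean=2.0, sigma=0.5, size=(30, 5))

Xs = StandardScaler().fit_transform(X)       # standardise before HCA/PCA

# Hierarchical cluster analysis (Ward linkage), cut into 3 hydrochemical groups.
groups = fcluster(linkage(Xs, method="ward"), t=3, criterion="maxclust")

# PCA to identify the dominant sources of hydrochemical variation.
pca = PCA(n_components=2).fit(Xs)
print("cluster sizes:", np.bincount(groups)[1:])
print("explained variance:", pca.explained_variance_ratio_.round(2))
```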
Abstract:
Objectives: To evaluate the quality of care delivered to patients presenting to the emergency department (ED) with pain and managed by emergency nurse practitioners, by measuring: 1) time to analgesia from initial presentation; 2) time from being seen to analgesia; and 3) documentation of pain scores. Background: The delivery of quality care in the ED is emerging as one of the most important service indicators measured by health services. Emergency nurse practitioner services are designed to improve timely, quality care for patients. One of the goals of quality emergency care is the timely and effective delivery of analgesia for patients, and timely analgesia is an important indicator of ED service performance. Methods: A retrospective explicit chart review of 128 consecutive patients with pain and managed by emergency nurse practitioners was conducted. Data collected included demographics, presenting complaint, pain scores, and time to first dose of analgesia. Patients were identified from the ED Patient Information System (Cerner log) and data were extracted from electronic medical records. Results: Pain scores were documented for 67 (52.3%; 95% CI: 43.3-61.2) patients. The median time to analgesia from presentation was 60.5 (IQR 30-87) minutes, with 34 (26.6%; 95% CI: 19.1-35.1) patients receiving analgesia within 30 minutes of presentation to hospital. There were 22 (17.2%; 95% CI: 11.1-24.9) patients who received analgesia prior to assessment by a nurse practitioner. Among patients who received analgesia after assessment by a nurse practitioner, the median time to analgesia after assessment was 25 (IQR 12-50) minutes, with 65 (61.3%; 95% CI: 51.4-70.6) patients receiving analgesia within 30 minutes of assessment. Conclusions: The majority of patients assessed by nurse practitioners received analgesia within 30 minutes of assessment. However, opportunities for substantial improvement in these times, along with documentation of pain scores, were identified and will be targeted in future research.
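For the interval estimates quoted above, a quick sketch of a Wilson score confidence interval for a proportion (e.g. 67 of 128 patients with a documented pain score); the paper's exact CI method is not stated, so the figures may differ slightly:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(67, 128)
print(f"52.3% documented, 95% CI {lo:.1%} to {hi:.1%}")  # ~43.7% to 60.8%
```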
Abstract:
An anonymous membership broadcast scheme is a method in which a sender broadcasts the secret identity of one out of a set of n receivers, in such a way that only the right receiver knows that he is the intended receiver, while the others cannot determine any information about this identity (except that they know that they are not the intended ones). In a w-anonymous membership broadcast scheme, no coalition of up to w receivers, not containing the selected receiver, is able to determine any information about the identity of the selected receiver. We present two new constructions of w-anonymous membership broadcast schemes. The first construction is based on error-correcting codes, and we show that there exist schemes that allow a flexible choice of w while keeping the complexities for broadcast communication, user storage and required randomness polynomial in log n. The second construction is based on the concept of collision-free arrays, which is introduced in this paper. This construction results in more flexible schemes, allowing trade-offs between the different complexities.
Abstract:
Marine sediments around volcanic islands contain an archive of volcaniclastic deposits, which can be used to reconstruct the volcanic history of an area. Such records hold many advantages over often incomplete terrestrial datasets. This includes the potential for precise and continuous dating of intervening sediment packages, which allows a correlatable and temporally constrained stratigraphic framework to be constructed across multiple marine sediment cores. Here, we discuss a marine record of eruptive and mass-wasting events spanning ~250 ka offshore of Montserrat, using new data from IODP Expedition 340 as well as previously collected cores. By using a combination of high-resolution oxygen isotope stratigraphy, AMS radiocarbon dating, biostratigraphy of foraminifera and calcareous nannofossils, and clast componentry, we identify five major events at Soufriere Hills volcano since 250 ka. Lateral correlation of these events across sediment cores collected offshore of the south and south-west of Montserrat has improved our understanding of the timing, extent and associations between events in this area. The correlations reveal that powerful and potentially erosive density currents travelled at least 33 km offshore, and demonstrate that marine deposits produced by eruption-fed and mass-wasting events on volcanic islands are heterogeneous in their spatial distribution. Thus, multiple drilling/coring sites are needed to reconstruct the full chronostratigraphy of volcanic islands. This multidisciplinary study will be vital to interpreting the chaotic records of submarine landslides at other sites drilled during Expedition 340, and provides a framework that can be applied to the stratigraphic analysis of sediments surrounding other volcanic islands.