95 results for Response time (computer systems)
Abstract:
Composite web services comprise several component web services. When a composite web service is executed centrally, a single web service engine is responsible for coordinating the execution of the components, which may create a bottleneck and degrade the overall throughput of the composite service when there are a large number of service requests. This problem can potentially be handled by decentralising execution of the composite web service, but this raises the issue of how to partition a composite service into groups of component services such that each group can be orchestrated by its own execution engine while ensuring acceptable overall throughput of the composite service. Here we present a novel penalty-based genetic algorithm to solve the composite web service partitioning problem. Empirical results show that our new algorithm outperforms existing heuristic-based solutions.
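As a hedged illustration of the approach this abstract describes (the abstract gives no implementation details), the sketch below shows how a penalty-based GA for partitioning might be structured: a chromosome assigns each component service to an execution engine, the objective counts cross-engine communication, and overloaded engines incur a penalty rather than being discarded. All graph weights, capacities and GA parameters are invented for illustration.

```python
import random

# Hypothetical component-interaction graph: edge weights approximate the
# message volume exchanged between pairs of component services.
EDGES = {(0, 1): 5, (1, 2): 3, (2, 3): 8, (0, 3): 2, (1, 4): 4, (3, 4): 6}
N_COMPONENTS, N_ENGINES, MAX_PER_ENGINE = 5, 2, 3

def fitness(assign):
    # Communication cost: traffic on edges that cross engine boundaries.
    cut = sum(w for (i, j), w in EDGES.items() if assign[i] != assign[j])
    # Penalty term: engines loaded beyond capacity (the "penalty-based" idea).
    overload = sum(max(0, assign.count(e) - MAX_PER_ENGINE)
                   for e in range(N_ENGINES))
    return cut + 100 * overload          # lower is better

def evolve(pop_size=40, generations=200, p_mut=0.2):
    pop = [[random.randrange(N_ENGINES) for _ in range(N_COMPONENTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]                     # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            if random.random() < p_mut:                          # point mutation
                child[random.randrange(N_COMPONENTS)] = random.randrange(N_ENGINES)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print("partition:", best, "cost:", fitness(best))
```

The penalty keeps infeasible partitions in the population so the search can pass through them toward feasible, low-cut assignments, which is the usual motivation for penalty-based constraint handling.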
Abstract:
This research used the Queensland Police Service, Australia, as a major case study. Information is reported on the principles, techniques and processes used, and on the reasons for recording, storing and releasing audit information for evidentiary purposes. It is shown that law enforcement agencies have a two-fold interest in, and legal obligation pertaining to, audit trails: the first relates to situations where audit trails are actually used by criminals in the commission of crime, and the second to audit trails generated by the information systems used by the police themselves in support of the recording and investigation of crime.

Eleven court cases involving Queensland Police Service audit trails used in evidence in Queensland courts were selected for further analysis. Of the cases studied, none of the evidence presented was rejected or seriously challenged from a technical perspective. These results were further analysed and related to normal requirements for trusted maintenance of audit trail information in sensitive environments, with discussion of the ability and/or willingness of courts to fully challenge, assess or value the audit evidence presented. Managerial and technical frameworks are proposed for, firstly, what may be considered an environment in which a computer system is operating "properly" and, secondly, what education, training, qualifications, expertise and the like may be considered appropriate for the persons responsible within that environment.

Analysis was undertaken to determine whether audit and control of information in a high-security environment, such as law enforcement, could be judged to have improved in the transition from manual to electronic processes. Information collection, control of processing and audit in the manual processes used by the Queensland Police Service, Australia, in the period 1940 to 1980 were assessed against the current electronic systems introduced to policing in the 1980s and 1990s. Results show that electronic systems do provide faster communications, with centrally controlled and updated information readily available to large numbers of users connected across significant geographical distances. However, it is clearly evident that the price paid for this is a lack of ability and/or reluctance to provide improved audit and control processes.

To compare the information systems audit and control arrangements of the Queensland Police Service with those of other government departments and agencies, an Australia-wide survey was conducted. Its results were contrasted with those of a survey conducted four years previously by the Australian Commonwealth Privacy Commission, which had shown that security in relation to the recording of activity against access to information held on Australian government computer systems was poor and a cause for concern. Within this four-year period, however, there is evidence to suggest that government organisations have become increasingly inclined to generate audit trails.

An attack on the overall security of audit trails in computer operating systems was initiated to further investigate the findings of the government systems survey, which showed that information systems audit trails in Microsoft Corporation's "Windows" operating system environments are relied on quite heavily.
An audit of the security of audit trails generated, stored and managed in the Microsoft "Windows 2000" operating system environment was undertaken and compared and contrasted with similar audit trail schemes in the "UNIX" and "Linux" operating systems. Strength of passwords and exploitation of any security problems in access control were targeted using software tools freely available in the public domain. Results showed that such security for the "Windows 2000" system is seriously flawed and that the integrity of audit trails stored within these environments cannot be relied upon.

A framework and set of guidelines for use by expert witnesses in the information technology (IT) profession are also proposed. This is achieved by examining the current rules and guidelines related to the provision of expert evidence in a court environment, by analysing the rationale for the separation of distinct disciplines and corresponding bodies of knowledge in the medical profession and forensic science, and then by analysing the bodies of knowledge within the discipline of IT itself. It is demonstrated that the accepted processes and procedures relevant to expert witnessing in a court environment are transferable to the IT sector. Unlike some discipline areas, however, this analysis clearly identified two distinct aspects that appear particularly relevant to IT: expertise gained through the application of IT to information needs in a particular public or private enterprise, and expertise gained through accepted and verifiable education, training and experience in fundamental IT products and systems.
Abstract:
Hydrocarbon spills on roads are a major safety concern for the driving public and can have severe cost impacts both on pavement maintenance and on the economy through disruption to services. The time taken to clean up spills and re-open roads in a safe driving condition is an issue of increasing concern given traffic levels on major urban arterials. Thus, the primary aim of the research was to develop a sorbent material that facilitates rapid clean-up of road spills. The methodology involved extensive research into a range of materials (organic, inorganic and synthetic sorbents), comprehensive testing at laboratory, scale-up and field levels, and product design (i.e. concept to prototype). The study also applied chemometrics to provide consistent, comparative methods of evaluating sorbent performance. In addition, sorbent materials at every stage were compared against a commercial benchmark.

For the first time, the impact of diesel on asphalt pavement has been quantified and assessed in a systematic way. Contrary to conventional thinking and anecdotal observations, the study determined that the action of diesel on asphalt was quite rapid (i.e. hours rather than weeks or months). This significant finding demonstrates the need to minimise the impact of hydrocarbon spills and the potential application of the sorbent option. To better understand the adsorption phenomenon, surface characterisation techniques were applied to selected sorbent materials (i.e. sand, organo-clay and cotton fibre). Brunauer-Emmett-Teller (BET) and thermal analysis indicated that the main adsorption mechanism for the sorbents occurred on the external surface of the material in the diffusion region (sand and organo-clay) and/or capillaries (cotton fibre). Using environmental scanning electron microscopy (ESEM), it was observed that adsorption by the interfibre capillaries contributed to the high uptake of hydrocarbons by the cotton fibre. Understanding the adsorption mechanism for these sorbents provided some guidance and a scientific basis for the selection of materials.

The study determined that non-woven cotton mats were ideal sorbent materials for clean-up of hydrocarbon spills. The prototype sorbent was found to perform significantly better than the commercial benchmark, displaying the following key properties:

• superior hydrocarbon pick-up from the road pavement;
• high hydrocarbon retention capacity under an applied load;
• adequate field skid resistance post treatment;
• functional and easy to use in the field (e.g. routine handling, transportation, application and recovery);
• relatively inexpensive to produce due to the use of raw cotton fibre and a simple production process;
• environmentally friendly (e.g. renewable materials, non-toxic to environment and operators, and biodegradable); and
• rapid response time (e.g. two minutes total clean-up time compared with thirty minutes for reference sorbents).

The major outcomes of the research project include: a) development of a specifically designed sorbent material suitable for cleaning up hydrocarbon spills on roads; b) submission of a patent application (serial number AU2005905850) for the prototype product; and c) preparation of a Commercialisation Strategy to advance the sorbent product to the next phase (i.e. R&D to product commercialisation).
Abstract:
Music making affects relationships with self and others by generating a sense of belonging to a culture or ideology (Bamford, 2006; Barovick, 2001; Dillon & Stewart, 2006; Fiske, 2000; Hallam, 2001). Whilst studies from arts education research present compelling examples of these relationships, others argue that they do not present sufficiently validated evidence of a causal link between music making experiences and cognitive or social change (Winner & Cooper, 2000; Winner & Hetland, 2000a, 2000b, 2001). I have suggested elsewhere that this disconnection between compelling evidence and observations of the effects of music making is in part due to the lack of rigor in research and the incapacity of many methods to capture these experiences in meaningful ways (Dillon, 2006). Part of the answer to these questions about rigor and causality lies in the creative use of new media technologies that capture the results of relationships in music artefacts. Crucially, it is the effective management of these artefacts within computer systems that allows researchers and practitioners to collect, organise, analyse and then theorise such music making experiences.
Abstract:
Hazard perception in driving is one of the few driving-specific skills associated with crash involvement. However, this relationship has only been examined in studies where the majority of individuals were younger than 65. We present the first data revealing an association between hazard perception and self-reported crash involvement in drivers aged 65 and over. In a sample of 271 drivers, we found that individuals whose mean response time to traffic hazards was slower than 6.68 seconds (the ROC-curve-derived pass mark for the test) were 2.32 times (95% CI 1.46, 3.22) more likely to have been involved in a self-reported crash within the previous five years than those with faster response times. This ratio became 2.37 (95% CI 1.49, 3.28) when driving exposure was controlled for. As a comparison, individuals who failed a test of useful field of view were 2.70 (95% CI 1.44, 4.44) times more likely to crash than those who passed. The hazard perception test and the useful field of view measure accounted for separate variance in crash involvement. These findings indicate that hazard perception testing and training could potentially be useful in road safety interventions for this age group.
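The ratios reported above are standard epidemiological effect sizes. For readers unfamiliar with them, the minimal sketch below shows how an odds ratio and its Wald 95% confidence interval are computed from a 2×2 table; the counts are hypothetical and are not the study's data.

```python
import math

# Hypothetical 2x2 table (the study's raw counts are not given in the abstract):
# rows = slower / faster than the 6.68 s hazard-perception pass mark,
# cols = crash / no crash in the previous five years.
a, b = 40, 60     # slow responders: crash, no crash
c, d = 30, 141    # fast responders: crash, no crash

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)            # Wald standard error of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)  # lower 95% bound
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)  # upper 95% bound
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```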
Abstract:
Most research on numerical development in children is behavioural, focusing on accuracy and response time in different problem formats. However, Temple and Posner (1998) used ERPs and the numerical distance task with 5-year-olds to show that the development of numerical representations is difficult to disentangle from the development of the executive components of response organization and execution. Here we use the numerical Stroop paradigm (NSP) and ERPs to study possible executive interference in numerical processing tasks in 6–8-year-old children. In the NSP, the numerical magnitude of the digits is task-relevant and the physical size of the digits is task-irrelevant. We show that younger children are highly susceptible to interference from irrelevant physical information such as digit size, but that access to the numerical representation is almost as fast in young children as in adults. We argue that the developmental trajectories for executive function and numerical processing may act together to determine numerical development in young children.
Abstract:
IEC Technical Committee 57 (TC57) published a series of standards and technical reports for "Communication networks and systems for power utility automation" as the IEC 61850 series. Sampled value (SV) process buses allow the removal of potentially lethal voltages and damaging currents from inside substation control rooms and marshalling kiosks, reduce the amount of cabling required in substations, and facilitate the adoption of non-conventional instrument transformers. IEC 61850-9-2 provides an inter-operable solution to support multi-vendor process bus solutions. A time synchronisation system is required for an SV process bus; however, the details are not defined in IEC 61850-9-2. IEEE Std 1588-2008, Precision Time Protocol version 2 (PTPv2), provides the greatest accuracy of network-based time transfer systems, with timing errors of less than 100 ns achievable. PTPv2 has been proposed by the IEC Smart Grid Strategy Group to synchronise IEC 61850 based substation automation systems. IEC 61850-9-2, PTPv2 and Ethernet are three complementary protocols that together define the future of sampled value digital process connections in substations. The suitability of PTPv2 for use with SV is evaluated, with preliminary results indicating that steady-state performance is acceptable (jitter < 300 ns), and that extremely stable grandmaster oscillators are required to ensure SV timing requirements are met when recovering from loss of external synchronisation (such as GPS).
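PTPv2's sub-microsecond accuracy rests on the symmetric timestamp exchange defined in IEEE Std 1588-2008. The sketch below works through the standard offset and path-delay calculation for one Sync/Delay_Req exchange; the timestamp values are illustrative (slave clock 100 µs ahead, 50 µs one-way path delay), not measurements from the evaluation described above.

```python
# One Sync / Delay_Req exchange, timestamps in nanoseconds (illustrative values).
t1 = 0          # master clock: Sync transmitted
t2 = 150_000    # slave clock:  Sync received
t3 = 200_000    # slave clock:  Delay_Req transmitted
t4 = 150_000    # master clock: Delay_Req received

# IEEE 1588 delay request-response mechanism (assumes a symmetric path).
offset = ((t2 - t1) - (t4 - t3)) / 2    # slave time minus master time
delay  = ((t2 - t1) + (t4 - t3)) / 2    # mean one-way path delay
print(f"offset = {offset:.0f} ns, delay = {delay:.0f} ns")   # 100000 ns, 50000 ns
```

The slave then steers its clock by the computed offset; asymmetric network paths violate the symmetry assumption and appear directly as timing error, which is one reason process-bus network design matters.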
Abstract:
Scientists need to transfer semantically similar queries across multiple heterogeneous linked datasets. These queries may require data from different locations and the results are not simple to combine due to differences between datasets. A query model was developed to make it simple to distribute queries across different datasets using RDF as the result format. The query model, based on the concept of publicly recognised namespaces for parts of each scientific dataset, was implemented with a configuration that includes a large number of current biological and chemical datasets. The configuration is flexible, providing the ability to transparently use both private and public datasets in any query. A prototype implementation of the model was used to resolve queries for the Bio2RDF website, including both Bio2RDF datasets and other datasets that do not follow the Bio2RDF URI conventions.
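One reason RDF works well as the common result format is that partial results from different datasets merge by simple graph union. The following sketch (not the Bio2RDF implementation; the dataset URLs are placeholders) shows the basic pattern with the rdflib library: fetch RDF from several sources, merge the graphs, and run one query over the combined data.

```python
from rdflib import Graph

# Placeholder URLs standing in for two namespaced dataset documents.
SOURCES = [
    "http://example.org/datasets/geneid.ttl",
    "http://example.org/datasets/chebi.ttl",
]

merged = Graph()
for url in SOURCES:
    part = Graph()
    part.parse(url, format="turtle")   # fetch one dataset's RDF
    merged += part                     # RDF graphs merge by simple union

# One query now spans both datasets, since all triples share the RDF model.
rows = merged.query("""
    SELECT ?s ?label WHERE {
        ?s <http://www.w3.org/2000/01/rdf-schema#label> ?label .
    } LIMIT 10
""")
for s, label in rows:
    print(s, label)
```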
Abstract:
In this paper we identify the origins of stop-and-go (or slow-and-go) driving and measure microscopic features of its propagation by analyzing vehicle trajectories via the Wavelet Transform. Based on 53 oscillation cases analyzed, we find that oscillations can originate from either lane-changing maneuvers (LCMs) or car-following (CF) behavior. LCMs were predominantly responsible for oscillation formation in the absence of considerable horizontal or vertical curves, whereas oscillations formed spontaneously near roadside work on an uphill segment. Regardless of the trigger, the features of oscillation propagation were similar in terms of propagation speed, oscillation duration, and amplitude. All observed cases initially exhibited a precursor phase, in which slow-and-go motions were localized. Some of them eventually transitioned into a well-developed phase, in which oscillations propagated upstream in the queue. LCMs were primarily responsible for the transition, although some transitions occurred without LCMs. Our findings also suggest that an oscillation has a regressive effect on car-following behavior: the deceleration wave of an oscillation causes a timid driver (with larger response time and minimum spacing) to become less timid and an aggressive driver to become less aggressive, although this change may be short-lived. An extended framework of Newell's CF model is able to describe these regressive effects with two additional parameters with reasonable accuracy, as verified using vehicle trajectory data.
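The abstract does not give the wavelet procedure in detail, but the general technique of locating oscillations in a speed series via wavelet energy can be sketched as follows, using a Mexican-hat continuous wavelet transform on synthetic data. All signal values, scales and parameters are illustrative.

```python
import numpy as np
import pywt

# Synthetic speed series: free flow, then a stop-and-go wave (illustrative).
t = np.linspace(0, 300, 600)                  # time (s)
speed = np.full_like(t, 25.0)                 # speed (m/s)
mask = t > 150
speed[mask] = 15.0 + 10.0 * np.sin(0.2 * (t[mask] - 150.0))

# Continuous wavelet transform with the Mexican-hat mother wavelet.
scales = np.arange(1, 64)
coefs, _ = pywt.cwt(speed - speed.mean(), scales, "mexh")
energy = (coefs ** 2).sum(axis=0)             # wavelet energy per time step

# The energy peak marks where oscillatory (stop-and-go) activity is strongest.
print(f"strongest oscillation near t = {t[np.argmax(energy)]:.0f} s")
```

Applied to each vehicle trajectory in turn, this kind of energy profile is broadly how a wavelet analysis can flag when and where a disturbance first appears.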
Abstract:
Web service technology is increasingly being used to build various e-Applications, in domains such as e-Business and e-Science. Characteristic benefits of web service technology are its inter-operability, decoupling and just-in-time integration. Using web service technology, an e-Application can be implemented by web service composition, that is, by composing existing individual web services in accordance with the business process of the application. This means the application is provided to customers in the form of a value-added composite web service. An important and challenging issue of web service composition is how to meet Quality-of-Service (QoS) requirements, which include customer-focused elements such as response time, price, throughput and reliability, and how best to provide QoS results for the composites, thereby fulfilling customers' expectations and achieving their satisfaction. Fulfilling these QoS requirements, i.e. addressing the QoS-aware web service composition problem, is the focus of this project.

From a computational point of view, QoS-aware web service composition can be transformed into diverse optimisation problems, which are complex, large-scale, highly constrained and multi-objective. We therefore use genetic algorithms (GAs) to address QoS-based service composition problems. More precisely, this study addresses three important subproblems of QoS-aware web service composition: QoS-based web service selection for a composite web service accommodating constraints on inter-service dependence and conflict; QoS-based resource allocation and scheduling for multiple composite services on hybrid clouds; and performance-driven composite service partitioning for decentralised execution. Based on operations research theory, we model the three problems as a constrained optimisation problem, a resource allocation and scheduling problem, and a graph partitioning problem, respectively. We then present novel GAs to address these problems, conduct experiments to evaluate their performance, and perform verification experiments to show their correctness.

The major outcomes from the first problem are three novel GAs: a penalty-based GA, a min-conflict hill-climbing repairing GA, and a hybrid GA. These GAs adopt different strategies to handle constraints on inter-service dependence and conflict, an important factor that has been largely ignored by existing algorithms and whose neglect can lead to the generation of infeasible composite services. Experimental results demonstrate the effectiveness of our GAs for the QoS-based web service selection problem with constraints on inter-service dependence and conflict, as well as their better scalability than the existing integer programming-based method for large-scale web service selection problems.

The major outcomes from the second problem are two GAs: a random-key GA and a cooperative coevolutionary GA (CCGA). Experiments demonstrate the good scalability of the two algorithms. In particular, the CCGA scales well as the number of composite services involved in a problem increases, while no other algorithm demonstrates this ability.

The findings from the third problem result in a novel GA for composite service partitioning for decentralised execution. Compared with existing heuristic algorithms, the new GA is more suitable for large-scale composite web service partitioning problems.
In addition, the GA outperforms existing heuristic algorithms, generating a better deployment topology for a composite web service for decentralised execution. These effective and scalable GAs can be integrated into QoS-based management tools to facilitate the delivery of feasible, reliable and high-quality composite web services.
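As a hedged illustration of the first subproblem (QoS-based selection with dependence and conflict constraints), the sketch below shows one way a penalty-based fitness function might aggregate QoS across selected services and penalise constraint violations. The service names, QoS values and penalty weight are all invented for illustration; this is not the thesis's implementation.

```python
# Candidate services per abstract task: (name, response_time_s, price, reliability).
CANDIDATES = {
    "pay":  [("payA", 0.8, 2.0, 0.99), ("payB", 0.5, 3.5, 0.97)],
    "ship": [("shipA", 1.2, 1.0, 0.95), ("shipB", 0.9, 1.8, 0.98)],
}
DEPENDS   = [("payB", "shipB")]    # payB works only together with shipB
CONFLICTS = [("payA", "shipB")]    # payA cannot be combined with shipB
PENALTY = 1_000.0

def cost(selection):               # selection: {task: service_name}
    chosen = set(selection.values())
    rt = price = 0.0
    rel = 1.0
    for task, name in selection.items():
        _, t, p, r = next(c for c in CANDIDATES[task] if c[0] == name)
        rt, price, rel = rt + t, price + p, rel * r   # aggregate QoS
    violations  = sum(1 for a, b in DEPENDS if a in chosen and b not in chosen)
    violations += sum(1 for a, b in CONFLICTS if a in chosen and b in chosen)
    # Weighted sum of objectives plus a large constraint penalty.
    return rt + price + (1 - rel) + PENALTY * violations

print(cost({"pay": "payB", "ship": "shipB"}))   # feasible selection
print(cost({"pay": "payB", "ship": "shipA"}))   # violates the dependence constraint
```

A GA would then minimise this cost over candidate selections, with the penalty term steering the population away from dependence and conflict violations.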
Abstract:
Many modern business environments employ software to automate the delivery of workflows, whereas workflow design and generation remain a laborious technical task for domain specialists. Several different approaches have been proposed for deriving workflow models. Some rely on process data mining, whereas others derive workflow models from operational structures, domain-specific knowledge, or workflow model compositions from knowledge bases. Many approaches draw on principles from automatic planning, but are conceptual in nature and lack mathematical justification. In this paper we present a mathematical framework for deducing tasks in workflow models from plans in mechanistic or strongly controlled work environments, with a focus on automatic plan generation. In addition, we prove that a composition operator is associative, permitting crisp hierarchical task compositions for workflow models through a set of mathematical deduction rules. The result is a logical framework that can be used to prove tasks in workflow hierarchies from operational information about work processes and machine configurations in controlled or mechanistic work environments.
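To see why associativity matters for hierarchical composition, consider the toy model below, in which a task is identified with its ordered sequence of primitive steps and composition is concatenation; concatenation is associative, so nested groupings of subtasks yield the same overall workflow. This only illustrates the algebraic property, not the paper's deduction rules, and all task names are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    # A task is identified by the ordered primitive steps it performs.
    steps: tuple

def compose(a: Task, b: Task) -> Task:
    """Sequence two tasks; concatenating step sequences is associative."""
    return Task(a.steps + b.steps)

drill  = Task(("position", "drill"))
deburr = Task(("deburr",))
check  = Task(("measure", "log"))

left  = compose(compose(drill, deburr), check)
right = compose(drill, compose(deburr, check))
assert left == right    # (a . b) . c == a . (b . c)
print(left.steps)
```

Because the grouping does not matter, a hierarchy of subtasks can be flattened or re-nested freely, which is what makes hierarchical workflow composition well-defined.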
Abstract:
Objective: We explore how accurately and quickly nurses can identify melodic medical equipment alarms when no mnemonics are used, when alarms may overlap, and when concurrent tasks are performed. Background: The international standard IEC 60601-1-8 (International Electrotechnical Commission, 2005) has proposed simple melodies to distinguish seven alarm sources. Previous studies with nonmedical participants reveal poor learning of melodic alarms and persistent confusions between some of them. The effects of domain expertise, concurrent tasks, and alarm overlaps are unknown. Method: Fourteen intensive care and general medical unit nurses learned the melodic alarms without mnemonics in two sessions on separate days. In the second half of Day 2, the nurses identified single alarms or pairs of alarms played in sequential, partially overlapping, or nearly completely overlapping configurations. For half the experimental blocks, nurses performed a concurrent mental arithmetic task. Results: Nurses' learning was poor and no better than that of nonnurses in a previous study. Nurses showed the previously noted confusions between alarms. Overlapping alarms were exceptionally difficult to identify. The concurrent task affected response time but not accuracy. Conclusion: Because of a failure of auditory stream segregation, the melodic alarms cannot be discriminated when they overlap. Directives to sequence the sounding of alarms in medical electrical equipment must be strictly adhered to, or the alarms must be redesigned to support better auditory streaming. Application: Actual or potential uses of this research include the implementation of IEC 60601-1-8 alarms in medical electrical equipment.
Abstract:
Monitoring environmental health is becoming increasingly important as human activity and climate change place greater pressure on global biodiversity. Acoustic sensors provide the ability to collect data passively, objectively and continuously across large areas for extended periods. While these factors make acoustic sensors attractive as autonomous data collectors, there are significant issues associated with large-scale data manipulation and analysis. We present our current research into techniques for analysing large volumes of acoustic data efficiently. We provide an overview of a novel online acoustic environmental workbench and discuss a number of approaches to scaling the analysis of acoustic data: online collaboration, manual, automatic and human-in-the-loop analysis.
Abstract:
This paper investigates the effects of lane-changing on driver behavior by measuring (i) the induced transient behavior and (ii) the change in driver characteristics, i.e., changes in driver response time and minimum spacing. We find that the transition largely consists of a pre-insertion transition and a relaxation process. These two processes are different but can be reasonably captured with a single model. The findings also suggest that lane-changing induces a regressive effect on driver characteristics: a timid driver (characterized by larger response time and minimum spacing) tends to become less timid and an aggressive driver less aggressive. We offer an extension to Newell's car-following model to describe this regressive effect and verify it using vehicle trajectory data.
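The abstract does not specify the functional form of the extension, but Newell's base model and one plausible relaxation scheme can be sketched as follows: the follower's trajectory is the leader's trajectory shifted by a response time tau and a minimum spacing d, and after a lane change (tau, d) relax linearly toward new values. All parameter values and the linear relaxation form are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

dt = 0.1
t = np.arange(0.0, 60.0, dt)
x_lead = 20.0 * t                       # leader at a constant 20 m/s (illustrative)

# Newell parameters: response time tau (s) and minimum spacing d (m).
tau0, d0 = 2.0, 12.0                    # timid driver before the lane change
tau1, d1 = 1.2, 8.0                     # asymptotic values after the disturbance
t_lc, horizon = 20.0, 15.0              # lane change at 20 s, relaxation horizon (s)

def params(ti):
    """Linearly relax (tau, d) after the lane change (one plausible form)."""
    if ti < t_lc:
        return tau0, d0
    f = min(1.0, (ti - t_lc) / horizon)
    return tau0 + f * (tau1 - tau0), d0 + f * (d1 - d0)

x_follow = np.empty_like(t)
for k, ti in enumerate(t):
    tau, d = params(ti)
    # Newell: follower copies the leader's path, shifted in time and space.
    x_follow[k] = np.interp(ti - tau, t, x_lead) - d

print(f"final spacing: {x_lead[-1] - x_follow[-1]:.1f} m")   # ~ d1 + tau1 * 20
```

Under this sketch the spacing settles at roughly d + tau·v once relaxation completes, which is the Newell equilibrium spacing for the new, less timid parameters.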