843 results for routing protocols
Abstract:
As Web searching becomes a more prevalent means of information access worldwide, we need to better understand users' Web searching behaviour and develop better models of their interaction with Web search systems. Web search modelling is a significant and important area of Web research. Searching on the Web is an integral element of information behaviour and human–computer interaction. Web searching includes multitasking processes, the allocation of cognitive resources among several tasks, and shifts in cognitive, problem and knowledge states. In addition to multitasking, cognitive coordination and cognitive shifts are also important, but under-explored, aspects of Web searching. During the Web searching process, beyond physical actions, users experience various cognitive activities. Interactive Web searching involves many cognitive shifts at different information behaviour levels. Cognitive coordination allows users to trade off the dependencies among multiple information tasks and the resources available. Much research has been conducted into Web searching. However, few studies have modelled the nature of, and relationship between, multitasking, cognitive coordination and cognitive shifts in the Web search context. Modelling how Web users interact with Web search systems is vital for the development of more effective Web IR systems. This study aims to model the relationship between multitasking, cognitive coordination and cognitive shifts during Web searching. A preliminary theoretical model is presented based on previous studies, and the research is designed to validate this preliminary model. Forty-two study participants were involved in the empirical study. A combination of data collection instruments, including pre- and post-questionnaires, think-aloud protocols, search logs, observations and interviews, was employed to obtain comprehensive data on users during Web search interactions. Based on the grounded theory approach, qualitative analysis methods including content analysis and verbal protocol analysis were used to analyse the data. The findings were inferred through an analysis of the questionnaires, transcriptions of the think-aloud protocols, the Web search logs, and notes on observations and interviews. Five key findings emerged. (1) Multitasking during Web searching was demonstrated to be a two-dimensional behaviour. The first dimension was searching across multiple information problems by task switching: users' Web searching behaviour was a process of switching between multiple tasks, that is, from searching on one information problem to searching on another. The second dimension was searching on a single information problem across multiple Web search sessions: users usually conducted Web searching on a complex information problem by submitting multiple queries, using several Web search systems and opening multiple windows/tabs. (2) Cognitive shifts were the brain's internal responses to external stimuli, and were found to be an essential element of searching interactions and users' Web searching behaviour. The study revealed two kinds of cognitive shifts. The first kind, the holistic shift, included users' perception of the information problem and overall information evaluation before and after Web searching. The second kind, the state shift, reflected users' changes in focus between different cognitive states during the course of Web searching. Cognitive states included users' focus on the states of topic, strategy, evaluation, view and overview.
(3) Three levels of cognitive coordination behaviour were identified: the information task coordination level, the coordination mechanism level, and the strategy coordination level. The three levels of cognitive coordination behaviour interplayed to support switching between multiple information tasks. (4) An important relationship existed between multitasking, cognitive coordination and cognitive shifts during Web searching. Cognitive coordination, as a management mechanism, bound together the other cognitive processes, including multitasking and cognitive shifts, in order to move users' Web searching process forward. (5) Web search interaction was shown to be a multitasking process which included the ordering of information problems, task switching, and task and mental coordination; at a deeper level, cognitive shifts also took place. Cognitive coordination was the hinge behaviour linking multitasking and cognitive shifts. Without cognitive coordination, neither multitasking Web searching behaviour nor the complicated mental process of cognitive shifting could occur. The preliminary model was revisited in light of these empirical findings, and a revised theoretical model (the MCC Model) was built to illustrate the relationship between multitasking, cognitive coordination and cognitive shifts during Web searching. Implications and limitations of the study are also discussed, along with future research work.
Abstract:
In this research I have examined how ePortfolios can be designed for Music postgraduate study through a practice-led research enquiry. This process involved designing two Web 2.0 ePortfolio systems for a group of five postgraduate music research students. The design process revolved around the application of an iterative methodology called Software Development as Research (SoDaR) that seeks to simultaneously develop design and pedagogy. The approach to designing these ePortfolio systems applied four theoretical protocols to examine the use of digitised artefacts in ePortfolio systems, to enable a dynamic and inclusive dialogue around representations of the students' work. The research and design process involved an analysis of existing software and literature, with a focus upon identifying the affordances of available Web 2.0 software and the applications of these ideas within 21st Century life. The five postgraduate music students each posed different needs in relation to the management of digitised artefacts and the communication of their work amongst peers, supervisors and public display. An ePortfolio was developed for each of them that was flexible enough to address their needs within the university setting. However, in this first SoDaR iteration data-gathering phase, I identified aspects of the university context that presented a negative case, which impacted upon the design and usage of the ePortfolios and prevented uptake. Whilst the portfolio itself functioned effectively, the university policies and technical requirements prevented serious use. The negative case analysis of the case study revealed that the Access and Control and the Implementation, Technical and Policy Constraints protocols were limiting user uptake. Participant feedback from the semi-structured interviews carried out as part of this study revealed that, whilst the participants did not use the ePortfolio system I designed, each student was employing Web 2.0 social networking and storage processes in their lives and research. In the subsequent iterations I then designed a more 'ideal' system, applicable outside of the university context, that draws upon the employment of these resources. In conclusion, I suggest recommendations about ePortfolio design that consider what the application of the theoretical protocols reveals about creative arts settings. The transferability of these recommendations is of course dependent upon the reapplication of the theoretical protocols in a new context. To address the mobility of ePortfolio design between institutions and wider settings, I have also designed a prototype for a business-card-sized USB portal for the artist's ePortfolio. This research project is not a static one; it stands as an evolving design for a Web 2.0 ePortfolio that seeks to respond to users' needs, institutional and professional contexts, and the development of software that can be incorporated within the design. What it potentially provides to creative artists is an opportunity to have a dialogue about art, with artefacts of the artists' products and processes included in that discussion.
Abstract:
The material presented in this thesis may be viewed as comprising two key parts: the first part concerns batch cryptography specifically, whilst the second deals with how this form of cryptography may be applied to security-related applications, such as electronic cash, to improve the efficiency of the protocols. The objective of batch cryptography is to devise more efficient primitive cryptographic protocols. In general, these primitives make use of some property, such as homomorphism, to perform a computationally expensive operation on a collective input set. The idea is to amortise an expensive operation, such as modular exponentiation, over the input. Most of the research work in this field has concentrated on its employment as a batch verifier of digital signatures. It is shown that several new attacks may be launched against these published schemes, exposing some weaknesses. Another common use of batch cryptography is the simultaneous generation of digital signatures. There is significantly less previous work in this area, and the present schemes have only limited use in practical applications. Several new batch signature schemes are introduced that improve upon the existing techniques, and some practical uses are illustrated. Electronic cash is a technology that demands complex protocols in order to furnish several security properties. These typically include anonymity, traceability of a double spender, and off-line payment features. Presently, the most efficient schemes make use of coin divisibility to withdraw one large financial amount that may be progressively spent with one or more merchants. Several new cash schemes are introduced here that make use of batch cryptography to improve the withdrawal, payment, and deposit of electronic coins. The devised schemes apply both the batch signature and the batch verification techniques introduced, demonstrating improved performance over the contemporary divisibility-based structures. The solutions also provide an alternative paradigm for the construction of electronic cash systems. Whilst electronic cash is used as the vehicle for demonstrating the relevance of batch cryptography to security-related applications, the applicability of the techniques introduced extends well beyond this.
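As a concrete illustration of the amortisation idea described above, the following is a minimal Python sketch of textbook batch verification for RSA-style signatures. The function names and the deliberately naive test are illustrative inventions, not the thesis's schemes.

```python
# Toy RSA-style setting: verify_one checks sig^e == digest (mod n).
def verify_one(sig: int, digest: int, e: int, n: int) -> bool:
    """Standard verification: one modular exponentiation per signature."""
    return pow(sig, e, n) == digest % n

def verify_batch(sigs: list[int], digests: list[int], e: int, n: int) -> bool:
    """Batch verification via the multiplicative homomorphism
    (s1 * s2 * ... * sk)^e == d1 * d2 * ... * dk (mod n),
    amortising a single exponentiation over the whole batch."""
    prod_sigs, prod_digests = 1, 1
    for s, d in zip(sigs, digests):
        prod_sigs = (prod_sigs * s) % n
        prod_digests = (prod_digests * d) % n
    return pow(prod_sigs, e, n) == prod_digests
```

Note that a pair of invalid signatures whose individual errors cancel multiplicatively will pass this naive batch test even though each fails `verify_one`; weaknesses of exactly this flavour are what attacks on published batch verifiers exploit, and countermeasures such as the small-exponents test exist to guard against them.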
Abstract:
Many large coal mining operations in Australia rely heavily on the rail network to transport coal from mines to coal terminals at ports for shipment. Over the last few years, due to fast-growing demand, the coal rail network has become one of the worst industrial bottlenecks in Australia. This provides a great incentive to pursue better optimisation and control strategies for the operation of the whole rail transportation system under network and terminal capacity constraints. This PhD research aims to achieve a significant efficiency improvement in a coal rail network on the basis of the development of standard modelling approaches and generic solution techniques. Generally, the train scheduling problem can be modelled as a Blocking Parallel-Machine Job-Shop Scheduling (BPMJSS) problem. In a BPMJSS model for train scheduling, trains and sections are synonymous with jobs and machines respectively, and an operation is regarded as the movement/traversal of a train across a section. To begin, an improved shifting bottleneck procedure algorithm combined with metaheuristics has been developed to efficiently solve Parallel-Machine Job-Shop Scheduling (PMJSS) problems without the blocking conditions. Due to the lack of buffer space, real-life train scheduling must consider blocking or hold-while-wait constraints, which means that a track section cannot release a train, and must hold it, until the next section on the route becomes available. As a consequence, the problem has been considered as BPMJSS, with the blocking conditions. To develop efficient solution techniques for BPMJSS, extensive studies have been conducted on non-classical scheduling problems with various buffer conditions (i.e. blocking, no-wait, limited-buffer, unlimited-buffer and combined-buffer). In this procedure, an alternative graph, as an extension of the classical disjunctive graph, is developed and specially designed for non-classical scheduling problems such as the blocking flow-shop scheduling (BFSS), no-wait flow-shop scheduling (NWFSS) and blocking job-shop scheduling (BJSS) problems. By exploring the blocking characteristics based on the alternative graph, a new algorithm called the topological-sequence algorithm is developed for solving the non-classical scheduling problems. To demonstrate the pre-eminence of the proposed algorithm, we compare it with two known algorithms from the literature (i.e. the Recursive Procedure and the Directed Graph algorithms). Moreover, we define a new type of non-classical scheduling problem, called combined-buffer flow-shop scheduling (CBFSS), which covers four extreme cases: the classical FSS with infinite buffer, the blocking FSS (BFSS) with no buffer, the no-wait FSS (NWFSS) and the limited-buffer FSS (LBFSS). After exploring the structural properties of CBFSS, we propose an innovative constructive algorithm, named the LK algorithm, to construct feasible CBFSS schedules. Detailed numerical illustrations for the various cases are presented and analysed. Because it requires adjusting only the attributes in the data input, the proposed LK algorithm is generic and enables the construction of feasible schedules for many types of non-classical scheduling problems with different buffer constraints.
Inspired by the shifting bottleneck procedure algorithm for PMJSS and the characteristic analysis based on the alternative graph for non-classical scheduling problems, a new constructive algorithm called the Feasibility Satisfaction Procedure (FSP) is proposed to obtain feasible BPMJSS solutions. A real-world train scheduling case is used to illustrate and compare the PMJSS and BPMJSS models. Some real-life applications, including consideration of train length, upgrading of track sections, acceleration of a tardy train and changes to bottleneck sections, are discussed. Furthermore, the BPMJSS model is generalised to a No-Wait Blocking Parallel-Machine Job-Shop Scheduling (NWBPMJSS) problem for scheduling trains with priorities, in which prioritised trains, such as express passenger trains, are considered simultaneously with non-prioritised trains, such as freight trains. In this case, no-wait conditions, which are more restrictive than blocking constraints, arise when considering the prioritised trains, which should traverse continuously, without any interruption or unplanned pauses, because of the high cost of waiting during travel. In comparison, non-prioritised trains are allowed to enter the next section immediately if possible, or to remain in a section until the next section on the route becomes available. Based on the FSP algorithm, a more generic algorithm called the SE algorithm is developed to solve a class of train scheduling problems under the different conditions found in train scheduling environments. To construct a feasible train schedule, the proposed SE algorithm comprises several individual modules: the feasibility-satisfaction, time-determination, tune-up and conflict-resolve procedures. To find a good train schedule, a two-stage hybrid heuristic algorithm called the SE-BIH algorithm is developed by combining the constructive heuristic (i.e. the SE algorithm) with a local-search heuristic (i.e. the Best-Insertion-Heuristic algorithm). To optimise the train schedule, a three-stage algorithm called the SE-BIH-TS algorithm is developed by combining the tabu search (TS) metaheuristic with the SE-BIH algorithm. Finally, a case study is performed for a complex real-world coal rail network under network and terminal capacity constraints. The computational results validate that the proposed methodology is very promising, because it can be applied as a fundamental tool for modelling and solving many real-world scheduling problems.
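To make the blocking (hold-while-wait) rule above concrete, here is a hedged Python sketch of a toy dispatcher in which a train keeps occupying its current section until the next section on its route is free. The route data and the train-by-train greedy dispatch order are invented for illustration; the thesis's FSP/SE algorithms are far more sophisticated.

```python
def schedule_with_blocking(routes, run_time):
    """routes: {train: [section, ...]}; run_time: {(train, section): duration}.
    Returns {(train, section): entry_time} under a naive greedy dispatch."""
    free_at = {}   # section -> time at which it becomes free
    entry = {}     # (train, section) -> time the train enters the section
    for train, route in routes.items():   # trains dispatched one by one
        t = 0
        for i, sec in enumerate(route):
            start = max(t, free_at.get(sec, 0))
            entry[(train, sec)] = start
            done = start + run_time[(train, sec)]
            if i + 1 < len(route):
                # Blocking: the train must hold `sec` until the next
                # section on its route is free (hold-while-wait).
                release = max(done, free_at.get(route[i + 1], 0))
            else:
                release = done
            free_at[sec] = release
            t = release
    return entry

# e.g. two trains sharing section "B": T2 cannot enter B until T1 releases it.
print(schedule_with_blocking(
    {"T1": ["A", "B"], "T2": ["B", "C"]},
    {("T1", "A"): 3, ("T1", "B"): 2, ("T2", "B"): 4, ("T2", "C"): 1}))
```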
Abstract:
Defining the precise promoter DNA sequence motifs where nuclear receptors and other transcription factors bind is an essential prerequisite for understanding how these proteins modulate the expression of their specific target genes. The purpose of this chapter is to provide the reader with a detailed guide to the materials and the key methods required to perform this type of DNA-binding analysis. Irrespective of whether one starts with purified DNA-binding proteins or somewhat crude cellular extracts, the tried-and-true procedures described here will enable one to accurately assess the capacity of specific proteins to bind to DNA, as well as to determine the exact sequences and DNA contact nucleotides involved. For illustrative purposes, we primarily have used the interaction of the androgen receptor with the rat probasin proximal promoter as our model system.
Abstract:
Component software has many benefits, most notably increased software re-use; however, the component software process places heavy burdens on programming language technology, which modern object-oriented programming languages do not address. In particular, software components require specifications that are both sufficiently expressive and sufficiently abstract, and, where possible, these specifications should be checked formally by the programming language. This dissertation presents a programming language called Mentok that provides two novel programming language features enabling improved specification of stateful component roles. Negotiable interfaces are interface types extended with protocols, and allow specification of changing method availability, including some patterns of out-calls and re-entrance. Type layers are extensions to module signatures that allow specification of abstract control flow constraints through the interfaces of a component-based application. Development of Mentok's unique language features included creation of MentokC, the Mentok compiler, and formalization of key properties of Mentok in mini-languages called MentokP and MentokL.
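Mentok's concrete syntax is not given in the abstract, so the following is a hedged Python sketch of the idea behind negotiable interfaces: an interface whose methods are only available in certain protocol states. The `FileRole` class, its state names and the runtime check are all illustrative inventions; Mentok checks such protocols statically via the MentokC compiler rather than at runtime.

```python
class ProtocolError(Exception):
    """Raised when a method is called outside its allowed state."""

class FileRole:
    """Role protocol: closed -> open -> (read)* -> closed."""
    def __init__(self):
        self.state = "closed"

    def _expect(self, *allowed):
        if self.state not in allowed:
            raise ProtocolError(f"call not allowed in state {self.state!r}")

    def open(self):
        self._expect("closed")
        self.state = "open"

    def read(self) -> bytes:
        self._expect("open")   # read is only available while open
        return b""

    def close(self):
        self._expect("open")
        self.state = "closed"

f = FileRole()
f.open(); f.read(); f.close()
# f.read() here would raise ProtocolError: the method has become unavailable.
```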
Abstract:
Establishing a nationwide Electronic Health Record system has become a primary objective for many countries around the world, including Australia, in order to improve the quality of healthcare while at the same time decreasing its cost. Doing so will require federating the large number of patient data repositories currently in use throughout the country. However, implementation of EHR systems is being hindered by several obstacles, among them concerns about data privacy and trustworthiness. Current IT solutions fail to satisfy patients’ privacy desires and do not provide a trustworthiness measure for medical data. This thesis starts with the observation that existing EHR system proposals suffer from six serious shortcomings that affect patients’ privacy and safety, and medical practitioners’ trust in EHR data: accuracy and privacy concerns over linking patients’ existing medical records; the inability of patients to have control over who accesses their private data; the inability to protect against inferences about patients’ sensitive data; the lack of a mechanism for evaluating the trustworthiness of medical data; and the failure of current healthcare workflow processes to capture and enforce patients’ privacy desires. Following an action research method, this thesis addresses the above shortcomings by firstly proposing an architecture for linking electronic medical records in an accurate and private way where patients are given control over what information can be revealed about them. This is accomplished by extending the structure and protocols introduced in federated identity management to link a patient’s EHR to his existing medical records by using pseudonym identifiers. Secondly, a privacy-aware access control model is developed to satisfy patients’ privacy requirements. The model is developed by integrating three standard access control models in a way that gives patients access control over their private data and ensures that legitimate uses of EHRs are not hindered. Thirdly, a probabilistic approach for detecting and restricting inference channels resulting from publicly-available medical data is developed to guard against indirect accesses to a patient’s private data. This approach is based upon a Bayesian network and the causal probabilistic relations that exist between medical data fields. The resulting definitions and algorithms show how an inference channel can be detected and restricted to satisfy patients’ expressed privacy goals. Fourthly, a medical data trustworthiness assessment model is developed to evaluate the quality of medical data by assessing the trustworthiness of its sources (e.g. a healthcare provider or medical practitioner). In this model, Beta and Dirichlet reputation systems are used to collect reputation scores about medical data sources and these are used to compute the trustworthiness of medical data via subjective logic. Finally, an extension is made to healthcare workflow management processes to capture and enforce patients’ privacy policies. This is accomplished by developing a conceptual model that introduces new workflow notions to make the workflow management system aware of a patient’s privacy requirements. These extensions are then implemented in the YAWL workflow management system.
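For the trustworthiness model, the abstract names Beta and Dirichlet reputation systems combined through subjective logic. The snippet below is a hedged Python sketch of the binomial (Beta) case only, using the standard formulas from Jøsang's Beta reputation system; the thesis's exact parameterisation and the Dirichlet generalisation may differ.

```python
# r = count of positive ratings about a source, s = count of negative ratings.
def beta_expectation(r: float, s: float) -> float:
    """Expected trustworthiness of a source: E = (r + 1) / (r + s + 2)."""
    return (r + 1) / (r + s + 2)

def opinion(r: float, s: float) -> tuple[float, float, float]:
    """Subjective-logic opinion (belief, disbelief, uncertainty)
    derived from the same rating counts."""
    k = r + s + 2
    return r / k, s / k, 2 / k

# e.g. a source with 9 positive and 1 negative report:
# beta_expectation(9, 1) == 10 / 12 ~= 0.83, with uncertainty 2/12.
```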
Abstract:
As the acceptance and popularity of wireless networking technologies has proliferated, the security of the IEEE 802.11 wireless local area network (WLAN) has advanced in leaps and bounds. From tenuous beginnings, where the only safe way to deploy a WLAN was to assume it was hostile and employ higher-layer information security controls, to the current state of the art, all manner of improvements have been conceived and many implemented. This work investigates some of the remaining issues surrounding IEEE 802.11 WLAN operation. While the inherent issues in WLAN deployments and the problems of the original Wired Equivalent Privacy (WEP) provisions are well known and widely documented, there still exist a number of unresolved security issues. These include the security of management and control frames and the data link layer protocols themselves. This research introduces a novel proposal to enhance security at the link layer of IEEE 802.11 WLANs and then conducts a detailed theoretical and empirical investigation and analysis of the effects of such proposals. This thesis first defines the state of the art in WLAN technology and deployment, including an overview of the current and emerging standards, the various threats, numerous vulnerabilities and current exploits. The IEEE 802.11i MAC security enhancements are discussed in detail, along with the likely outcomes of IEEE 802.11 Task Group w, looking into protected management frames. The problems of the remaining unprotected management frames, the unprotected control frames and the unprotected link layer headers are reviewed, and a solution is hypothesised: to encrypt the entire MAC Protocol Data Unit (MPDU), including the MAC headers, not just the MAC Service Data Unit (MSDU), as existing protocols commonly do. The proposal is not just to encrypt a copy of the headers while still using cleartext addresses to deliver the frame, as used by some existing protocols to support the integrity and authenticity of the headers, but to pass the entire MPDU only as ciphertext, to also support the confidentiality of the frame header information. This necessitates the decryption of every received frame using every available key before a station can determine whether it is the intended recipient. As such, it raises serious concerns as to the viability of any such proposal, due to the likely impact on throughput and scalability. The bulk of the research investigates the impacts of such proposals on the current WLAN protocols. Some possible variations to the proposal are also provided, to enhance both utility and speed. The viability of this proposal with respect to its effect on network throughput is then tested using a well-known and respected network simulation tool, along with a number of analysis tools developed specifically for the data generated here. The simulator's operation is first validated against recognised test outputs before a comprehensive set of control data is established, and then the proposal is tested and compared against the controls. This detailed analysis of the various simulations should be of benefit to other researchers who need to validate simulation results. The analysis of these tests indicates areas of immediate improvement, so the protocols are adjusted and a further series of experiments conducted. These final results are again analysed in detail and final appraisals provided.
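The receiver-side cost the proposal implies can be sketched in a few lines. The following hedged Python sketch (using the third-party `cryptography` package) shows trial decryption of a fully encrypted frame under every held key; AES-CCM is chosen only because 802.11i's CCMP uses it, and the frame layout and nonce handling are simplified inventions (in real CCMP the nonce is derived from header fields, which this proposal would itself encrypt).

```python
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

def trial_decrypt(frame: bytes, nonce: bytes, keys: list[bytes]):
    """Return (key_index, plaintext MPDU), or None if no key verifies.
    With fully encrypted headers, every frame costs up to len(keys) AEAD
    operations at every listening station -- the throughput and
    scalability concern examined in the simulations."""
    for i, key in enumerate(keys):
        try:
            return i, AESCCM(key).decrypt(nonce, frame, None)
        except InvalidTag:
            continue   # wrong key, or frame not addressed to us
    return None
```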
Abstract:
Introduction: The purpose of this study was to assess the capacity of a written intervention, in this case a patient information brochure, to improve patient satisfaction during an Emergency Department (ED) visit. For the purpose of measuring the effect of the intervention, the ED journey was conceptualised as a series of distinct areas of service, comprising waiting time, service by the triage nurse, care from doctors and nurses, and information giving. Background of study: Research into patient satisfaction has become a widespread activity endorsed by both governments and hospital administrations. The literature on ED patient satisfaction has consistently indicated three primary areas of patient dissatisfaction: waiting time, nursing care and communication. Recent developments in the literature on patient satisfaction studies, however, have highlighted the relationship between patients' expectations of a service encounter and their consequent assessment of the experience as dissatisfying or satisfying. Disconfirmation theory posits that the degree to which expectations are confirmed will affect subsequent levels of satisfaction. The conceptual framework utilised in this study is Coye's (2004) model of disconfirmation. Coye, while reiterating that satisfaction is a consequence of the degree to which expectations are either confirmed or disconfirmed, also posits that expectations can be modified by interventions. Coye's work conceptualises these interventions as intra-encounter experiences (cues) which function to adjust expectations. Coye suggests some cues are unintended and may have a negative impact, which also reinforces the value of planned cues intended to meet or exceed consumer expectations. Consequently the brochure can be characterised as a potentially positive cue, encouraging patients to understand processes and orienting them in what can be a confronting environment. Only a limited number of studies have examined the effect of written interventions within an ED. No studies could be located which have tested the effect of ED interventions using a conceptual framework that relates satisfaction with services to the degree to which expectations are confirmed or disconfirmed. Method: Two studies were conducted. Study One used qualitative methods to explore patients' expectations of the ED from the perspective of both patients and health care professionals. Study One was used in part to direct the development of the intervention (brochure) in Study Two. The brochure was an intervention designed to modify patients' expectations, thus increasing their satisfaction with the provision of ED service. As there were no existing tools to measure ED patients' expectations and satisfaction, a new tool was also developed, based on the findings of Study One and the literature. Study Two used a non-randomised, quasi-experimental approach with a non-equivalent, post-test-only comparison group design to investigate the effect of the patient education brochure (Stommel and Wills, 2004). The brochure was disseminated to one of two study groups (the intervention group). The effect of the brochure was assessed by comparing the data obtained from the intervention and control groups, which consisted of 150 participants each. It was expected that any differences in the relevant domains selected for examination would indicate the effect of the brochure on expectations and, potentially, satisfaction.
Results: Study One revealed several areas of common ground between patients and nurses in terms of relevant content for the written intervention, including the need for information on the triage system and waiting times. Areas of difference were also found, with patients emphasising communication issues, whereas focus group members expressed concern that patients were often unable to assimilate verbal information. The findings suggested the potential utility of written material to reinforce verbal communication, particularly in terms of the triage process and other ED protocols. This material was synthesised within the final version of the written intervention. Overall, the results of Study Two indicated no significant differences between the two groups. However, a significant number of participants in the intervention group viewed the brochure as having changed their expectations. The effect of the brochure may have been obscured by a lack of parity between the two groups, as the control group presented with statistically significantly higher levels of acuity and experienced significantly shorter waiting times. In terms of disconfirmation theory, this would suggest expectations that had been met or exceeded. The results confirmed the correlation of expectations with satisfaction. Several domains also indicated age as a significant predictor, with older patients tending to score higher satisfaction results. Other significant predictors of satisfaction established were waiting time and care from nurses, reinforcing the combination of efficient service and positive interpersonal experiences as being valued by patients. Conclusions: Information presented in written form appears to benefit a significant number of ED users in terms of orientation and explaining systems and procedures. The degree to which these effects may interact with other dimensions of satisfaction, however, is likely to be limited. Waiting time and interpersonal behaviours from staff also provide influential cues in determining satisfaction. Written material is likely to be one element in a series of coordinated strategies to improve patient satisfaction during periods of peak demand.
Abstract:
This paper provides a review of the state-of-the-art work relevant to the use of public mobile data networks for aircraft telemetry and control purposes. Moreover, it describes the characterisation, for airborne use, of the public mobile data communication systems known broadly as 3G. The motivation for this study was to explore how these mature public communication systems could be used for aviation purposes. An experimental system was fitted to a light aircraft to record communication latency, line speed, RF level, packet loss and cell tower identifier. Communications were established using internet protocols, and a connection was made to a local server. The aircraft was flown, in both remote and populous areas, at altitudes up to 8500 ft, in a region located in South East Queensland, Australia. Results show that the average airborne RF levels are better than those on the ground by 21%, in the order of −77 dBm. Latencies were in the order of 500 ms (half the latency of Iridium), with an average download speed of 0.48 Mb/s, an average uplink speed of 0.85 Mb/s and a packet loss of 6.5%. The maximum communication range observed was 70 km from a single cell station. The paper also describes possible limitations and the utility of using such a communications architecture for both manned and unmanned aircraft systems.
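As context for the latency figures, a probe of this kind can be reproduced in a few lines. The sketch below is a hedged Python illustration of round-trip timing against an echo server over TCP/IP; the host, port and payload are placeholders, not the study's actual instrumentation.

```python
import socket
import time

def measure_rtt(host: str, port: int, n: int = 10) -> list[float]:
    """Send n small probes to an echo server and record RTTs in seconds."""
    rtts = []
    with socket.create_connection((host, port), timeout=5) as sock:
        for seq in range(n):
            payload = f"probe-{seq}".encode()
            t0 = time.monotonic()
            sock.sendall(payload)
            sock.recv(len(payload))   # assumes the server echoes the probe
            rtts.append(time.monotonic() - t0)
    return rtts

# e.g. rtts = measure_rtt("groundserver.example", 7)  # hypothetical server
```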
Abstract:
The QUT-NOISE-TIMIT corpus consists of 600 hours of noisy speech sequences designed to enable a thorough evaluation of voice activity detection (VAD) algorithms across a wide variety of common background noise scenarios. To construct the final mixed-speech database, over 10 hours of background noise was first collected across 10 unique locations covering 5 common noise scenarios, creating the QUT-NOISE corpus. This background noise corpus was then mixed with speech events chosen from the TIMIT clean speech corpus, over a wide variety of noise lengths, signal-to-noise ratios (SNRs) and active speech proportions, to form the mixed-speech QUT-NOISE-TIMIT corpus. An evaluation of five baseline VAD systems on the QUT-NOISE-TIMIT corpus is conducted to validate the data, and shows that the variety of noise available allows for better evaluation of VAD systems than existing approaches in the literature.
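The core mixing step can be stated compactly: scale a noise segment so that the speech-to-noise power ratio hits a target SNR in dB, then add it to the speech. The following is a hedged Python sketch of that step only; the corpus's actual pipeline (segment selection, active-speech weighting) is more involved.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so that speech power / noise power = 10^(snr_db / 10),
    then add it to `speech` (noise must be at least as long as speech)."""
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + gain * noise
```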