982 results for Advanced Encryption Standard
Abstract:
Due to the development of XML and other data models such as OWL and RDF, sharing data is an increasingly common task, since these data models allow simple syntactic translation of data between applications. However, for data to be shared semantically, there must be a way to ensure that concepts are the same. One approach is to employ commonly used schemas, called standard schemas, which help guarantee that syntactically identical objects have semantically similar meanings. As a result of the spread of data sharing, there has been widespread adoption of standard schemas in a broad range of disciplines and for a wide variety of applications within a very short period of time. However, standard schemas are still in their infancy and have not yet matured or been thoroughly evaluated. It is imperative that the data management research community take a closer look at how well these standard schemas have fared in real-world applications, to identify not only their advantages but also the operational challenges that real users face. In this paper, we both examine the usability of standard schemas in a comparison that spans multiple disciplines, and describe our first step toward resolving some of these issues in our Semantic Modeling System. We evaluate our Semantic Modeling System through a careful case study of the use of standard schemas in architecture, engineering, and construction, conducted with domain experts. We discuss how our Semantic Modeling System can help with the broader problem, and also discuss a number of challenges that still remain.
Abstract:
Good daylighting design in buildings not only provides a comfortable luminous environment, but also delivers energy savings and comfortable, healthy environments for building occupants. Yet there is still no consensus on how to assess what constitutes good daylighting design. Among current building performance guidelines, daylight factors (DF) or minimum illuminance values are the standard; however, previous research has shown the shortcomings of these metrics. New computer software for daylighting analysis includes more advanced climate-based daylight metrics (CBDM). Yet these tools (new metrics and simulation tools) are not currently understood by architects and are not used within architectural firms in Australia. A survey of architectural firms in Brisbane identified the tools most relevant to industry. The purpose of this paper is to assess and compare these computer simulation tools and the new tools available to architects and designers for daylighting. The tools are assessed in terms of their ease of use (e.g. previous knowledge required, complexity of geometry input), efficiency (e.g. speed, render capabilities) and outcomes (e.g. presentation of results). The study shows that the tools most accessible to architects are those that import a wide variety of file types, or that can be integrated into current 3D modelling software or packages. These tools need to be able to calculate both point-in-time simulations and annual analyses. There is a current need for an open-source program able to read raw data (in the form of spreadsheets) and display it graphically within a 3D medium. Development of plug-in based software is attempting to meet this need through third-party analysis, although some of these packages are heavily reliant on their host program. Programs that allow dynamic daylighting simulation will make it easier to calculate accurate daylighting regardless of which modelling platform the designer uses, while producing more tangible analysis without the need to process raw data.
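As a rough illustration of the two metric families compared in this abstract, the sketch below computes a static daylight factor and a simple climate-based metric (daylight autonomy) from raw illuminance data; the CSV layout, column name, and 300 lx threshold are illustrative assumptions, not the paper's tooling.

```python
# A minimal sketch (not the paper's tooling) contrasting the static
# daylight factor (DF) with a simple climate-based metric, daylight
# autonomy (DA), computed from raw spreadsheet-style illuminance data.
import csv

def daylight_factor(e_indoor_lux, e_outdoor_lux):
    """DF (%) = indoor illuminance / simultaneous unobstructed outdoor
    illuminance under an overcast sky, times 100."""
    return 100.0 * e_indoor_lux / e_outdoor_lux

def daylight_autonomy(hourly_lux, threshold_lux=300.0):
    """Fraction of occupied hours in which daylight alone meets the
    target illuminance (a common CBDM formulation)."""
    hours = list(hourly_lux)
    return sum(1 for e in hours if e >= threshold_lux) / len(hours)

def load_sensor_column(path, column="illuminance_lux"):
    """Read one sensor's annual hourly illuminance from a raw CSV export
    (hypothetical file layout)."""
    with open(path, newline="") as f:
        return [float(row[column]) for row in csv.DictReader(f)]

print(daylight_factor(210.0, 10000.0))         # -> 2.1 (%)
print(daylight_autonomy([150, 320, 480, 90]))  # -> 0.5
```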
Abstract:
Aim: This study reports the use of exploratory factor analysis to determine the construct validity of a modified advanced practice role delineation tool. Background: Little research exists on the specific activities and domains of practice within advanced practice nursing roles, making it difficult to define the service parameters of this level of nursing practice. A valid and reliable tool would assist those responsible for employing or deploying advanced practice nurses by identifying and defining their service profile. This is the third paper from a multi-phase Australian study aimed at assigning advanced practice roles. Methods: A postal survey was conducted of a random sample of state government-employed registered nurses and midwives, across various levels and grades of practice in the state of Queensland, Australia, using the modified Advanced Practice Role Delineation tool. Exploratory factor analysis using principal axis factoring was undertaken to examine factors in the modified tool. Cronbach's alpha coefficient determined the reliability of the overall scale and of the identified factors. Results: There were 658 responses (42% response rate). The five factors found, with loadings of ≥0.400 for 40 of the 41 APN activities, were similar to the five domains of the Strong model. Cronbach's alpha coefficient was 0.94 overall and ranged from 0.83 to 0.95 for the factors. Conclusion: Exploratory factor analysis of the modified tool supports the validity of the five domains of the original tool. Further investigation will identify use of the tool in a broader healthcare environment.
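For readers unfamiliar with the reliability statistic reported above, the sketch below shows how Cronbach's alpha is computed from a respondents-by-items score matrix; the simulated data are placeholders, not the study's survey responses, and the factor extraction itself would typically be run with a dedicated statistics package.

```python
# A minimal numpy sketch (ours, not the authors' analysis) of Cronbach's
# alpha for a k-item scale: alpha = k/(k-1) * (1 - sum(item variances) /
# variance of the total score).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(658, 1))                 # one shared construct
scores = latent + 0.8 * rng.normal(size=(658, 5))  # five correlated items
print(round(float(cronbach_alpha(scores)), 2))     # high alpha expected
```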
Abstract:
This paper investigates advanced channel compensation techniques for improving i-vector speaker verification performance in the presence of high intersession variability, using the NIST 2008 and 2010 SRE corpora. The performance of four channel compensation techniques is investigated: (a) weighted maximum margin criterion (WMMC), (b) source-normalized WMMC (SN-WMMC), (c) weighted linear discriminant analysis (WLDA), and (d) source-normalized WLDA (SN-WLDA). We show that, by extracting the discriminatory information between pairs of speakers as well as capturing the source variation information in the development i-vector space, the SN-WLDA-based cosine similarity scoring (CSS) i-vector system provides over 20% improvement in EER for NIST 2008 interview and microphone verification and over 10% improvement in EER for NIST 2008 telephone verification, compared with the SN-LDA-based CSS i-vector system. Further, score-level fusion techniques are analyzed to combine the best channel compensation approaches, providing over 8% improvement in DCF over the best single approach (SN-WLDA) for the NIST 2008 interview/telephone enrolment-verification condition. Finally, we demonstrate that the improvements found in the context of CSS also generalize to state-of-the-art GPLDA, with up to 14% relative improvement in EER for NIST SRE 2010 interview and microphone verification and over 7% relative improvement in EER for NIST SRE 2010 telephone verification.
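For illustration, cosine similarity scoring of an enrolment and a test i-vector after a channel-compensating projection can be sketched as below; the projection matrix stands in for whichever trained transform is used (LDA, WLDA, SN-WLDA, ...), and all values are random placeholders rather than trained parameters.

```python
# A minimal sketch of cosine similarity scoring (CSS) between i-vectors
# after projection by a channel-compensation matrix W. W is a random
# placeholder for a trained LDA/WLDA/SN-WLDA transform; the dimensions
# are typical but assumed.
import numpy as np

def css_score(w_enrol, w_test, W):
    """Project both i-vectors, then score with the cosine kernel; the
    score is compared against a calibrated threshold to accept/reject."""
    a, b = W.T @ w_enrol, W.T @ w_test
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
dim, proj_dim = 400, 150
W = rng.normal(size=(dim, proj_dim))
enrol = rng.normal(size=dim)
print(css_score(enrol, enrol + 0.1 * rng.normal(size=dim), W))  # near 1
print(css_score(enrol, rng.normal(size=dim), W))                # near 0
```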
Abstract:
Bioceramics play an important role in repairing and regenerating bone defects. Annually, more than 500,000 bone graft procedures are performed in the United States and approximately 2.2 million are conducted worldwide; the estimated cost of these procedures approaches $2.5 billion per year. Around 60% of the bone graft substitutes available on the market involve bioceramics, and the world bioceramics market is reported to grow by 9% per year. For this reason, bioceramics research has been one of the most active areas during the past several years. Considering the significant importance of bioceramics, our goal was to compile this book to review the latest research advances in the field. The text also summarizes our work during the past 10 years in an effort to share innovative concepts, designs of bioceramics, and methods for material synthesis and drug delivery. We anticipate that this text will provide useful information and guidance in the bioceramics field for biomedical engineering researchers and materials scientists. Information on novel mesoporous bioactive glasses and silicate-based ceramics for bone regeneration and drug delivery is presented. Mesoporous bioactive glasses have shown multifunctional characteristics of bone regeneration and drug delivery due to their special mesopore structures, whereas silicate-based bioceramics, as typical third-generation biomaterials, possess significant osteostimulation properties. Silica nanospheres with a core-shell structure and specific properties for controllable drug delivery have been carefully reviewed: a variety of advanced synthetic strategies have been developed to construct functional mesoporous silica nanoparticles with a core-shell structure, including hollow, magnetic, luminescent, and other multifunctional core-shell mesoporous silica nanoparticles. In addition, multifunctional drug delivery systems based on these nanoparticles have been designed and optimized to deliver drugs into targeted organs or cells, with controllable release achieved by virtue of various internal and external triggers. The novel 3D-printing technique for preparing advanced bioceramic scaffolds for bone tissue engineering applications is highlighted, including the preparation, mechanical strength, and biological properties of 3D-printed porous scaffolds of calcium phosphate cement and silicate bioceramics; three-dimensional printing techniques offer improved large-pore structure and mechanical strength. In addition, biomimetic preparation, controllable crystal growth, and biomineralization of bioceramics are summarized, showing the latest research progress in this area. Finally, inorganic and organic composite materials are reviewed for bone regeneration and gene delivery. Bioactive inorganic and organic composite materials offer unique biological, electrical, and mechanical properties for designing excellent bone regeneration or gene delivery systems. It is our sincere hope that this book will update the reader on the research progress of bioceramics and their applications in bone repair and regeneration. It will be the best reward to all the contributors of this book if their efforts herein in some way help readers in any part of their study, research, and career development.
Abstract:
The Australian Government and most Australian road authorities have set ambitious greenhouse gas emission (GHGe) reduction targets for the near future, many of which have been translated into action plans. However, previous research has shown that the various Australian state road authorities are at different stages of implementing 'green' initiatives in construction planning and development, with considerable gaps in their monitoring, tendering, and contracting. This study illustrates the differences between procurement standards and project-specific practices that aim to reduce GHGe from road construction projects across three of the largest Australian road construction clients, with a focus on the tools used, contract types and incentives for better performance.
Abstract:
Obesity and type 2 diabetes are recognised risk factors for the development of some cancers and, increasingly, predict more aggressive disease, treatment failure, and cancer-specific mortality. Many factors may contribute to this clinical observation. Hyperinsulinaemia, dyslipidaemia, hypoxia, endoplasmic reticulum (ER) stress, and inflammation associated with expanded adipose tissue are thought to be among the main culprits driving malignant growth and cancer advancement. This observation has led to the proposal that 'old players' used to treat type 2 diabetes and metabolic syndrome may have utility as new cancer adjuvant therapeutics. Androgen-regulated pathways drive proliferation, differentiation, and survival of benign and malignant prostate tissue. Androgen deprivation therapy (ADT) exploits this dependence to systemically treat advanced prostate cancer, resulting in anticancer response and improvement of cancer symptoms. However, the initial therapeutic response to ADT eventually progresses to castration-resistant prostate cancer (CRPC), which is currently incurable. ADT rapidly induces hyperinsulinaemia, which is associated with more rapid treatment failure. We discuss current observations of cancer in the context of obesity, diabetes, and insulin-lowering medication. We provide an update on current treatments for advanced prostate cancer and discuss whether metabolic dysfunction developed during ADT provides a unique therapeutic window for rapid translation of insulin-sensitising medication as combination therapy with antiandrogen targeting agents for the management of advanced prostate cancer.
Abstract:
Background: Fatigue is a distressing symptom experienced by approximately 74-88% of patients with advanced cancer. Although there have been advances in managing fatigue with a range of pharmacologic and non-pharmacologic strategies, fatigue remains poorly managed in patients with advanced cancer. Objectives: For patients with advanced cancer, the aims of the study were to examine the self-management (SM) behaviours associated with fatigue, the perceived effectiveness of these SM behaviours, and the socio-demographic and clinical factors influencing the effectiveness of these SM behaviours. Methodology: A prospective longitudinal study was undertaken with 152 patients with metastatic breast, lung, colorectal or prostate cancer experiencing fatigue (>3/10) over a two-month period. SM behaviours associated with fatigue, medical/demographic characteristics, social support, depression, anxiety, self-efficacy and other symptoms were assessed. Results: Findings indicate that on most fatigue severity measures, levels of fatigue increased slightly over time. On average, participants used nine fatigue SM behaviours at each time point. Participants reported that the most effective SM behaviours were 'pacing their activities during the day', 'planning activities to make the most of energy', 'taking short sleeps', 'doing things that distract them from their fatigue', and 'doing things to improve sleep at night'. Factors associated with increased effectiveness of fatigue SM behaviours included higher self-efficacy, higher education level, lower levels of depressive symptoms, and lower functional status. These results can be used to inform the design of future interventions to support the use of effective fatigue SM behaviours in this population.
Abstract:
Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than a two-step process that provides confidentiality by encrypting the message and, in a separate pass, provides integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided either by stream ciphers with built-in authentication mechanisms or by block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that considers these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms, analysing the mechanisms for providing confidentiality and integrity in AE algorithms based on stream ciphers. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal state of the cipher to generate a MAC. The third classification is based on whether the sequence used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure against this type of algebraic attack. We conclude that using a key-dependent S-box in the NLF twice, and using two different S-boxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation in which the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers, namely SSS, NLSv2 and SOBER-128, can be considered as instances of this model. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated, and it is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions. The last contribution is a new general matrix-based model for MAC generation in which the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers, namely ZUC, Grain-128a and Sfinks, can be considered as instances of this model. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
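As a toy illustration of the "direct injection" model described above (ours, not code from the thesis), the sketch below XORs message bits straight into the state of a linear shift register and reads the tag from the accumulated state. Being purely linear, it is exactly the kind of accumulator that is forgeable via collisions unless a nonlinear filter is added, which is the point the thesis makes.

```python
# Toy GF(2) direct-injection MAC accumulator: at each step the register
# state advances by a linear map and the message bit is XORed directly
# into the internal state; the tag is the final accumulated state.
# Deliberately linear and insecure; it only illustrates the model.
def lfsr_step(state, taps=(0, 2)):
    """One step of a toy 8-bit LFSR; feedback = XOR of the tapped bits."""
    fb = 0
    for t in taps:
        fb ^= state[t]
    return state[1:] + [fb]

def direct_injection_tag(message_bits, key_state):
    state = list(key_state)          # secret initial state acts as the key
    for m in message_bits:
        state = lfsr_step(state)
        state[-1] ^= m               # inject the message bit directly
    for _ in range(16):              # blank rounds to diffuse the message
        state = lfsr_step(state)
    return state

key = [1, 0, 1, 1, 0, 0, 1, 0]
print(direct_injection_tag([1, 0, 1, 1], key))
print(direct_injection_tag([1, 1, 0, 1], key))   # different message, tag
```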
Abstract:
Availability has become a primary goal of information security and is as significant as other goals, in particular confidentiality and integrity. Maintaining the availability of essential services on the public Internet is an increasingly difficult task in the presence of sophisticated attackers. Attackers may abuse the limited computational resources of a service provider, and thus managing computational costs is a key strategy for achieving the goal of availability. In this thesis we focus on cryptographic approaches for managing computational costs, in particular computational effort. We focus on two cryptographic techniques: computational puzzles in cryptographic protocols and secure outsourcing of cryptographic computations. This thesis contributes to the area of cryptographic protocols in the following ways. First, we propose the most efficient puzzle scheme based on modular exponentiations which, unlike previous schemes of the same type, involves only a few modular multiplications for solution verification; our scheme is provably secure. We then introduce a new efficient gradual authentication protocol by integrating a puzzle into a specific signature scheme. Our software implementation results for the new authentication protocol show that our approach is more efficient and effective than the traditional RSA signature-based one and improves the DoS resilience of the Secure Socket Layer (SSL) protocol, the most widely used security protocol on the Internet. Our next contributions relate to capturing a specific property that enables secure outsourcing of cryptographic tasks, in particular partial decryption. We formally define the property of (non-trivial) public verifiability for general encryption schemes, key encapsulation mechanisms (KEMs), and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours. We show that some generic transformations and concrete constructions enjoy this property, and then present a new public-key encryption (PKE) scheme having this property together with a proof of security under standard assumptions. Finally, we combine puzzles with PKE schemes to enable delayed decryption in applications such as e-auctions and e-voting. For this we first introduce the notion of effort-release PKE (ER-PKE), encompassing the well-known timed-release encryption and encapsulated key escrow techniques. We then present a security model for ER-PKE and a generic construction of ER-PKE complying with our security notion.
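The thesis's puzzle scheme is not reproduced here, but the flavour of modular-exponentiation puzzles with cheap verification can be sketched with the classic Rivest-Shamir-Wagner time-lock construction: the solver must perform t sequential squarings, while the issuer, who knows phi(n), verifies with a single short modular exponentiation. The primes below are toy placeholders.

```python
# Sketch of a modular-exponentiation puzzle in the Rivest-Shamir-Wagner
# time-lock style (illustrative only; not the thesis's provably secure
# scheme, which further reduces verification cost).
p, q = 10007, 10009        # toy primes; real use needs an RSA-size modulus
n, phi = p * q, (p - 1) * (q - 1)
x, t = 3, 20000            # puzzle base and difficulty parameter

def solve(x, t, n):
    """Solver side: t sequential modular squarings, no known shortcut."""
    y = x % n
    for _ in range(t):
        y = (y * y) % n
    return y

def verify(x, t, y, n, phi):
    """Issuer side: reduce the exponent 2^t modulo phi(n) first, so
    verification costs one short modexp instead of t squarings."""
    return y == pow(x, pow(2, t, phi), n)

print(verify(x, t, solve(x, t, n), n, phi))   # True
```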
Abstract:
This paper examines charity regulatory systems, including accounting standard setting, across five jurisdictions in varying stages of adoption of International Financial Reporting Standards (IFRS), and identifies the challenges of this process. Design/methodology/approach: Using a regulatory space approach, we rely on publicly available archival evidence from charity regulators and accounting standard setters in five common-law jurisdictions in advanced capitalist economies, all with vibrant charity sectors: the United Kingdom, the United States of America, Canada, Australia and New Zealand. Findings: The study reveals the importance of co-operative interdependence and dialogue between charity regulators and accounting standard setters, indicating that jurisdictions with such inter-relationships will better manage the transition to IFRS. It also highlights the need for those jurisdictions with not-for-profit or charity-specific accounting standards to reconfigure those provisions as IFRSs are adopted. Research limitations/implications: The study is limited to five jurisdictions, concentrating specifically on key charity regulators and accounting standard setters. Future research could widen the scope to other jurisdictions, or track changes in the jurisdictions longitudinally. Practical implications: We provide a timely international perspective on charity regulation and accounting developments for regulators, accounting standard setters and charities, specifically of regulatory responses to IFRS adoption. Originality/value: The paper contributes fresh insights into the dynamics of charity accounting regulation in an international context by using regulatory space as an organising framework. While the accounting regulation literature provides a rich interpretation of regulatory issues within the accounting arena, little attention has been paid to charity accounting regulation.
Abstract:
The use of intelligent transport systems is proliferating across the Australian road network, particularly on major freeways. New technology allows a greater range of signs and messages to be displayed to drivers. While there has been a long history of human factors analyses of signage, no evaluation has been conducted of this novel, sometimes dynamic, signage or of potential interactions when signs are co-located. The purpose of this driving simulator study was to investigate drivers' behavioural changes and comprehension resulting from the co-location of Lane Use Management Systems with static signs and (Enhanced) Variable Message Signs on Queensland motorways. A section of motorway was simulated, and nine scenarios were developed which presented a combination of signage cases across levels of driving task complexity. Two higher-risk road user groups were targeted for this research on an advanced driving simulator: older (65+ years, N=21) and younger (18-22 years, N=20) drivers. Changes in sign co-location and task complexity had a small effect on driver comprehension of the signs and on vehicle dynamics variables, including deviation from the posted speed limit, headway, standard deviation of lane keeping, and brake jerks. However, increasing the amount of information provided to drivers at a given location (by co-locating several signs) increased participants' gaze duration on the signs. With co-location of signs and without added task complexity, a single gaze exceeded 2 s for more than half of the population tested in both groups, and reached up to 6 s for some individuals.
Abstract:
Predicate encryption is a new primitive that supports flexible control over access to encrypted data. We study predicate encryption systems that evaluate a wide class of predicates. Our systems are more expressive than existing attribute-hiding systems, in the sense that the proposed constructions support not only all existing predicate evaluations but also arbitrary conjunctions and disjunctions of comparison and subset queries. Toward this goal, we propose encryption schemes supporting multi-inner-product predicates and provide a formal security analysis. We show how to apply the proposed schemes to achieve all of these predicate evaluations.
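To make the connection between such queries and inner products concrete, the sketch below shows the standard plaintext encoding (in the style of Katz-Sahai-Waters constructions, not necessarily this paper's scheme) in which a predicate holds exactly when an attribute vector and a predicate vector are orthogonal. No encryption is involved; in the actual schemes these vectors sit hidden inside ciphertexts and keys.

```python
# Plaintext illustration of reducing equality and subset predicates to
# inner-product predicates: the predicate holds iff <attr_vec, pred_vec> = 0.
def attr_vec(x, degree):
    """Encode attribute x as its powers (1, x, x^2, ..., x^degree)."""
    return [x ** i for i in range(degree + 1)]

def equality_pred(a):
    """x == a  <=>  <(1, x), (-a, 1)> = x - a = 0."""
    return [-a, 1]

def subset_pred(allowed):
    """x in allowed  <=>  p(x) = prod(x - a_i) = 0; return p's coefficients."""
    coeffs = [1]                       # the constant polynomial 1
    for a in allowed:
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] += c            # multiply by x
            new[i] -= a * c            # multiply by -a
        coeffs = new
    return coeffs

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

x = 7
print(inner(attr_vec(x, 1), equality_pred(7)) == 0)     # True
print(inner(attr_vec(x, 2), subset_pred([3, 7])) == 0)  # True: 7 in {3, 7}
print(inner(attr_vec(x, 2), subset_pred([3, 5])) == 0)  # False
```

Conjunctions and disjunctions are obtained by combining such vectors, e.g. random linear combinations for AND and polynomial products for OR.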
Abstract:
TO THE EDITOR: It was with great interest that I read two recent articles by de Raaf et al. [1] and Bruera et al. [2]. These authors are to be congratulated for completing two of the very few high-quality randomized trials that evaluate complex interventions for managing fatigue in patients with advanced cancer. de Raaf et al. conducted a non-blinded RCT with 152 patients with advanced cancer and reported a significant reduction of fatigue in patients who received nurse-led monitoring and protocol-guided treatment of physical symptoms compared with those who received usual care [1]. Patients who received this intervention experienced a significant improvement over time in general fatigue at one-month and two-month follow-up. Another recent RCT, conducted with 141 patients with advanced cancer by Bruera et al. [2], did not find any benefit of a nursing telephone intervention involving systematic symptom assessment/management, medication review, psychosocial support and patient education in reducing fatigue, compared with a control telephone intervention conducted by a non-professional...
Abstract:
We present a method for optical encryption of information based on the time-dependent dynamics of writing and erasing refractive index changes in a bulk lithium niobate medium. Information is written into the photorefractive crystal with a spatially amplitude-modulated laser beam which, when overexposed, significantly degrades the stored data, making it unrecognizable. We show that the degradation can be reversed and that a one-to-one relationship exists between the degradation and recovery rates. It is shown that this simple relationship can be used to determine the erasure time required for decrypting the scrambled index patterns. In addition, this method could be used as a straightforward general technique for determining characteristic writing and erasure rates in photorefractive media.
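As an illustrative model (ours, not the paper's exact equations), first-order write/erase dynamics of the photorefractive index change already capture how a measured exposure level maps to a required erasure time; all constants below are placeholders, not measured values.

```python
# First-order photorefractive dynamics sketch: writing saturates as
# 1 - exp(-t/tau_w); erasure decays as exp(-t/tau_e). Knowing both rates
# links a given (over)exposure to the erasure time needed to reach a
# target index change.
import math

def written_index(t, dn_sat=1.0, tau_w=10.0):
    """Normalized index change after writing for time t."""
    return dn_sat * (1.0 - math.exp(-t / tau_w))

def erase_time(dn_now, dn_target, tau_e=25.0):
    """Time to erase from dn_now down to dn_target:
    dn_target = dn_now * exp(-t/tau_e)  =>  t = tau_e * ln(dn_now/dn_target)."""
    return tau_e * math.log(dn_now / dn_target)

dn = written_index(40.0)              # heavily exposed grating, ~0.98
print(round(erase_time(dn, 0.5), 2))  # erasure time needed to reach 0.5
```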