902 results for GENERIC SIMPLICITY
Abstract:
The generic alliance game considers players in an alliance who fight against an external enemy. After victory, the alliance may break up, and its members may fight against each other over the spoils of the victory. Our experimental analysis of this game shows: In-group solidarity vanishes after the break-up of the alliance. Former 'brothers in arms' fight even more vigorously against each other than strangers do. Furthermore, this vigorous internal fighting is anticipated and reduces the ability of the alliance to mobilize the joint fighting effort, compared to a situation in which victorious alliance members share the spoils of victory equally and peacefully.
Abstract:
A key derivation function is used to generate one or more cryptographic keys from a private (secret) input value. This paper proposes a new method for constructing a generic stream-cipher-based key derivation function. We show that our proposed key derivation function based on stream ciphers is secure if the underlying stream cipher is secure. We simulate instances of this stream-cipher-based key derivation function using three eSTREAM finalists: Trivium, Sosemanuk and Rabbit. The simulation results show these stream-cipher-based key derivation functions offer efficiency advantages over the more commonly used key derivation functions based on block ciphers and hash functions.
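For illustration, a minimal sketch of the general idea, keying a stream cipher with the secret input and reading off keystream bytes as derived key material. ChaCha20 stands in for the eSTREAM ciphers named above (none of which ship in common libraries), and the SHA-256 pre-processing step is an assumption, not the paper's exact scheme:

```python
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

def stream_kdf(secret: bytes, salt: bytes, n_bytes: int) -> bytes:
    # Condense the secret input into a 256-bit cipher key (an assumed
    # pre-processing step, not specified by the paper).
    key = hashlib.sha256(salt + secret).digest()
    nonce = b"\x00" * 16  # a fixed nonce is acceptable here: each key is single-use
    cipher = Cipher(algorithms.ChaCha20(key, nonce), mode=None)
    # Encrypting zero bytes returns the raw keystream, used as key material.
    return cipher.encryptor().update(b"\x00" * n_bytes)

okm = stream_kdf(b"shared secret value", b"application salt", 64)
enc_key, mac_key = okm[:32], okm[32:]  # split into two independent 256-bit keys
```

The efficiency argument in the paper rests on the fact that, once keyed, a stream cipher emits keystream faster than repeated block cipher or hash invocations.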
Abstract:
A switch-mode assisted linear amplifier (SMALA) combining a linear (Class B) and a switch-mode (Class D) amplifier is presented. The usual single hysteretic-controlled half-bridge current dumping stage is replaced by two parallel buck converter stages, in a parallel voltage controlled topology. These operate independently: one buck converter sources current to assist the upper Class B output device, and a complementary converter sinks current to assist the lower device. This topology lends itself to a novel control approach of a dead-band at low power levels where neither Class D stage assists, allowing the Class B amplifier to supply the load without interference, ensuring high fidelity. A 20 W implementation demonstrates 85% efficiency, with distortion below 0.08% measured across the full audio bandwidth at 15 W. The Class D amplifier begins assisting at 2 W, and below this value the distortion was below 0.03%. Complete circuitry is given, showing the simplicity of the additional Class D amplifier and its corresponding control circuitry.
Abstract:
The Hepatitis C virus (HCV) affects some 150 million people worldwide. However, unlike hepatitis A and B, there is no vaccine for HCV, and approximately 75% of people exposed to HCV develop chronic hepatitis. In Australia, around 226,700 people live with chronic HCV infection, costing the government approximately $252 million per year. Historically, the standard approved/licensed treatment for HCV has been pegylated interferon with ribavirin. There are major drawbacks with interferon-based therapy, including side effects, long duration of therapy, and limited access and affordability. Our previous survey of an at-risk population reported HCV treatment coverage of only 5%. Since April 2013, a new class of treatments for chronic HCV has been subsidised under the Pharmaceutical Benefits Scheme: boceprevir and telaprevir, estimated to cost the Australian Government in excess of $220 million over five years. Other biologic interferon-free therapeutic agents are scheduled to enter the Australian market. Use of small molecule generic pharmaceuticals has been advocated as a means of public cost savings. However, with the new biologic agents, generics (biosimilars) may not be feasible or straightforward, due to long patent life, marketing exclusivity, and regulatory complexity for these newer products.
Abstract:
Generating nano-sized materials of a controlled size and chemical composition is essential for the manufacturing of materials with enhanced properties on an industrial scale, as well as for research purposes, such as toxicological studies. Among the generation methods for airborne nanoparticles (also known as aerosolisation methods), liquid-phase techniques have been widely applied due to the simplicity of their use and their high particle production rate. The use of a Collison nebulizer is one such technique, in which atomisation takes place as the liquid is drawn into the air stream and injected toward the inner walls of the nebulizer reservoir via nozzles, before the solution is dispersed. Despite the above-mentioned benefits, this method is also subject to various sources of impurities (Knight and Petrucci 2003; LaFranchi, Knight et al. 2003). Since these impurities can affect the characterization of the generated nanoparticles, it is crucial to understand and minimize their effect.
Abstract:
Bioreactors are defined as devices in which biological and/or biochemical processes develop under closely monitored and tightly controlled environmental and operating conditions (e.g. pH, temperature, mechanical conditions, nutrient supply and waste removal). In functional tissue engineering of musculoskeletal tissues, a bioreactor capable of controlling dynamic loading plays a determining role. It has been shown that mechanical stretching promotes the expression of type I and III collagens, fibronectin, and tenascin-C in cultured ligament fibroblasts (J.C.-H. Goh et al., Tissue Eng. 9 (2003), S31) and that human bone marrow mesenchymal stem cells (hBMMSC) – even in the absence of biochemical regulators – can be induced to differentiate into ligament-like fibroblasts by the application of physiologically relevant cyclic strains (G. Vunjak-Novakovic et al., Ann. Rev. Biomed. Eng. 6 (2004), 131; H.A. Awad et al., Tissue Eng. 5 (1999), 267; R.G. Young et al., J. Orthop. Res. 16 (1998), 406). Different bioreactors are commercially available, but they are too generic to be used for a given tissue, since each tissue exhibits specific mechanical loading properties. In the case of ligament tissue engineering, the design of a bioreactor is still an open question. Our group proposes a bioreactor allowing cyclic traction–torsion on a scaffold seeded with stem cells.
Abstract:
Stream ciphers are common cryptographic algorithms used to protect the confidentiality of frame-based communications such as mobile phone conversations and Internet traffic. Stream ciphers are ideal cryptographic algorithms for encrypting these types of traffic, as they can encrypt them quickly and securely and have low error propagation. The main objective of this thesis is to determine whether structural features of keystream generators affect the security provided by stream ciphers. These structural features pertain to the state-update and output functions used in keystream generators. Using linear sequences as keystream to encrypt messages is known to be insecure, so modern keystream generators use nonlinear sequences as keystream. The nonlinearity can be introduced through a keystream generator's state-update function, its output function, or both. The first contribution of this thesis relates to nonlinear sequences produced by the well-known Trivium stream cipher. Trivium is one of the stream ciphers selected for the final portfolio of eSTREAM, a multi-year European project run within the ECRYPT network. Trivium's structural simplicity makes it a popular cipher to cryptanalyse, but to date there are no attacks in the public literature which are faster than exhaustive keysearch. Algebraic analyses are performed on the Trivium stream cipher, which uses a nonlinear state-update function and a linear output function to produce keystream. Two algebraic investigations are performed: an examination of the sliding property in the initialisation process, and algebraic analyses of Trivium-like stream ciphers using a combination of the algebraic techniques previously applied separately by Berbain et al. and Raddum. For certain iterations of Trivium's state-update function, we examine the sets of slid pairs, looking particularly to form chains of slid pairs. No chains exist for a small number of iterations; this has implications for the period of keystreams produced by Trivium. Secondly, using our combination of the methods of Berbain et al. and Raddum, we analysed Trivium-like ciphers and improved on previous analysis with regard to forming systems of equations for these ciphers. Using these new systems of equations, we were able to successfully recover the initial state of Bivium-A. The attack complexities for Bivium-B and Trivium were, however, worse than exhaustive keysearch. We also show that the selection of stages used as input to the output function and the size of the registers used in the construction of the system of equations affect the success of the attack. The second contribution of this thesis is the examination of state convergence. State convergence is an undesirable characteristic in keystream generators for stream ciphers, as it implies that the effective session key size of the stream cipher is smaller than the designers intended. We identify methods which can be used to detect state convergence. As a case study, the Mixer stream cipher, which uses nonlinear state-update and output functions to produce keystream, is analysed. Mixer is found to suffer from state convergence, as the state-update function used in its initialisation process is not one-to-one. A discussion of several other stream ciphers which are known to suffer from state convergence is given. From our analysis of these stream ciphers, three mechanisms which can cause state convergence are identified. The effect state convergence can have on stream cipher cryptanalysis is examined.
We show that state convergence can have a positive effect if the goal of the attacker is to recover the initial state of the keystream generator. The third contribution of this thesis is the examination of the distributions of bit patterns in the sequences produced by nonlinear filter generators (NLFGs) and linearly filtered nonlinear feedback shift registers. We show that the selection of stages used as input to a keystream generator's output function can affect the distribution of bit patterns in the sequences produced by these keystream generators, and that the effect differs for nonlinear filter generators and linearly filtered nonlinear feedback shift registers. In the case of NLFGs, the keystream sequences produced when the output functions take inputs from consecutive register stages are less uniform than sequences produced by NLFGs whose output functions take inputs from unevenly spaced register stages. The opposite is true for keystream sequences produced by linearly filtered nonlinear feedback shift registers.
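Since Trivium's structure (nonlinear state-update, linear output function) is central to the analysis above, a bit-level sketch of its keystream generator may help. This is a straightforward transcription of the published Trivium specification into Python, operating on bit lists and glossing over the byte-ordering conventions used for official test vectors:

```python
from collections import deque

def trivium(key_bits, iv_bits, blank_rounds=4 * 288):
    """Trivium keystream generator over bit lists (80-bit key and IV)."""
    # 288-bit state as three shift registers A, B, C of 93, 84 and 111 bits.
    a = deque(key_bits + [0] * 13, maxlen=93)      # s1..s93
    b = deque(iv_bits + [0] * 4, maxlen=84)        # s94..s177
    c = deque([0] * 108 + [1, 1, 1], maxlen=111)   # s178..s288
    while True:
        t1 = a[65] ^ a[92]                         # s66 + s93
        t2 = b[68] ^ b[83]                         # s162 + s177
        t3 = c[65] ^ c[110]                        # s243 + s288
        z = t1 ^ t2 ^ t3                           # linear output function
        t1 ^= (a[90] & a[91]) ^ b[77]              # s91*s92 + s171 (nonlinear)
        t2 ^= (b[81] & b[82]) ^ c[86]              # s175*s176 + s264
        t3 ^= (c[108] & c[109]) ^ a[68]            # s286*s287 + s69
        a.appendleft(t3); b.appendleft(t1); c.appendleft(t2)
        if blank_rounds:                           # 1152 initialisation rounds
            blank_rounds -= 1
        else:
            yield z

ks = trivium([1] + [0] * 79, [0] * 80)             # toy key/IV for illustration
first_bits = [next(ks) for _ in range(64)]
```

The AND terms in the state-update are the cipher's only source of nonlinearity, which is exactly why its initialisation and Trivium-like variants are attractive targets for the algebraic analyses described above.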
Abstract:
This paper aims to address the knowledge gap regarding the potential intermediary role tertiary institutions can play in developing generic design thinking/design led innovation capabilities in non-designers. Specifically, it investigates the value derived from the contribution of postgraduate design students as facilitators/educators for undergraduate non-design student cohorts. It examines a design immersion workshop designed to encourage the use of design thinking capabilities for project brief development for undergraduate multi-disciplinary student teams involved in a community service learning project for a social enterprise. The workshop was facilitated by design led innovation masters students embedded in industry organisations to research the integration of design led innovation capabilities in business. Data was collected from participating non-design students and postgraduate facilitators in the form of reflective journals and semi-structured interviews. The thematic analysis provided insight into the value of design thinking/design led innovation immersion programs for both the postgraduate facilitators and the undergraduate non-design students. The research results will inform a tentative foundation prototype framework to allow for ongoing program developments and research in design thinking/design led innovation integration in higher education, facilitating the development of generic capabilities required to empower future generations for business innovation and active citizenship in the 21st century knowledge economy.
Abstract:
Bioacoustic data can provide an important base for environmental monitoring. To explore large collections of field recordings, an automated similarity search algorithm is presented in this paper. A user specifies a region of a recording defined by frequency and time bounds; the content of the region is used to construct a query. During retrieval, our algorithm automatically scans recordings to search for similar regions. In detail, we present a feature extraction approach based on the visual content of vocalisations – in this case, ridges – and develop a generic regional representation of vocalisations for indexing. Our feature extraction method works best for bird vocalisations showing ridge characteristics. The regional representation method allows the content of an arbitrary region of a continuous recording to be described in a compressed format.
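A hypothetical sketch of the kind of region query described above: compute a spectrogram, cut out a user-specified time/frequency rectangle, and reduce it to a crude ridge-like feature (the dominant frequency per frame). The function and parameter choices are illustrative assumptions; the paper's actual ridge detector and regional indexing scheme are more elaborate:

```python
import numpy as np
from scipy.signal import spectrogram

def region_ridge_feature(samples, sr, t0, t1, f0, f1):
    # Spectrogram of the recording: rows are frequencies, columns are frames.
    f, t, sxx = spectrogram(samples, fs=sr, nperseg=512, noverlap=256)
    # Restrict to the query rectangle [t0, t1] x [f0, f1].
    ti = (t >= t0) & (t <= t1)
    fi = (f >= f0) & (f <= f1)
    patch = sxx[np.ix_(fi, ti)]
    # One ridge point per frame: the dominant frequency bin in the region.
    return f[fi][np.argmax(patch, axis=0)]

# Example: 2 s of a 2->4 kHz chirp at 22.05 kHz, queried over its full extent.
sr = 22050
tt = np.linspace(0, 2, 2 * sr, endpoint=False)
chirp = np.sin(2 * np.pi * (2000 + 500 * tt) * tt)
ridge = region_ridge_feature(chirp, sr, 0.0, 2.0, 1000.0, 6000.0)
```

A fixed-length vector like this can then be compared between the query region and candidate regions in other recordings, which is the essence of the similarity search.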
Abstract:
Over the last decade, the majority of existing search techniques has been either keyword-based or category-based, resulting in unsatisfactory effectiveness. Meanwhile, studies have illustrated that more than 80% of users prefer personalized search results. As a result, many studies have devoted a great deal of effort (referred to as collaborative filtering) to investigating personalized notions for enhancing retrieval performance. One of the fundamental yet most challenging steps is to capture precise user information needs. Most Web users are inexperienced or lack the capability to express their needs properly, whereas existing retrieval systems are highly sensitive to vocabulary. Researchers have increasingly proposed the utilization of ontology-based techniques to improve current mining approaches. These techniques are not only able to refine search intentions within specific generic domains, but also to access new knowledge by tracking semantic relations. In recent years, some researchers have attempted to build ontological user profiles from discovered user background knowledge. The knowledge is drawn from both global and local analyses, which aim to produce tailored ontologies from a group of concepts. However, a key problem that has not been addressed is how to accurately match diverse local information to universal global knowledge. This research conducts a theoretical study on the use of personalized ontologies to enhance text mining performance. The objective is to understand user information needs through a "bag of concepts" rather than a "bag of words". The concepts are gathered from a general world knowledge base, the Library of Congress Subject Headings. To return desirable search results, a novel ontology-based mining approach is introduced to discover accurate search intentions and learn personalized ontologies as user profiles. The approach can not only pinpoint users' individual intentions in a rough hierarchical structure, but can also interpret their needs through a set of acknowledged concepts. Alongside the global and local analyses, a solid concept matching approach is proposed to address the mismatch between local information and world knowledge. Relevance features produced by the Relevance Feature Discovery model are taken as representatives of local information. These features have been shown to be the best alternative to user queries for avoiding ambiguity, and they consistently outperform the features extracted by other filtering models. The two proposed approaches are evaluated in a scientific evaluation with the standard Reuters Corpus Volume 1 testing set. A comprehensive comparison is made with a number of state-of-the-art baseline models, including TF-IDF, Rocchio, Okapi BM25, the deployed Pattern Taxonomy Model, and an ontology-based model. The results indicate that precision at top ranks can be improved remarkably with the proposed ontology mining approach, and that the matching approach is successful, achieving significant improvements on most information filtering measures. This research contributes to the fields of ontological filtering, user profiling, and knowledge representation. The related outputs are critical when systems are expected to return proper mining results and provide personalized services. The scientific findings have the potential to facilitate the design of advanced preference mining models that impact people's daily lives.
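As a point of reference for the baselines named above, a minimal TF-IDF filtering baseline might look as follows: build a profile vector from relevant training documents and rank incoming documents by cosine similarity. The documents and the centroid-profile construction are invented for illustration, not taken from the thesis:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

relevant_docs = ["ontology based user profiling", "personalized ontology mining"]
incoming_docs = ["ontology mining for profiles", "weather report", "stock prices"]

vec = TfidfVectorizer()
matrix = vec.fit_transform(relevant_docs + incoming_docs)
# User profile: centroid of the relevant documents' TF-IDF vectors.
profile = np.asarray(matrix[:len(relevant_docs)].mean(axis=0))
scores = cosine_similarity(profile, matrix[len(relevant_docs):]).ravel()
ranked = sorted(zip(scores, incoming_docs), reverse=True)  # best match first
```

The thesis's point is that such term-level profiles are vocabulary-sensitive, whereas concept-level ("bag of concepts") profiles built from a world knowledge base are not.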
Abstract:
With approximately half of Australian university teaching now performed by sessional academics, there has been growing recognition of the contribution they make to student learning. At the same time, sector-wide research and institutional audits continue to raise concerns about academic development, quality assurance, recognition and belonging. In response, universities have increasingly begun to offer academic development programs for sessional academics. However, such programs may be centrally delivered, generic in nature, and contained within the moment of delivery, while the Faculty contexts and cultures that sessional academics work within are diverse, and the need for support unfolds in ad-hoc and often unpredictable ways. In this paper we present the Sessional Academic Success (SAS) program: a new framework that complements and extends the central academic development program for sessional academic staff at Queensland University of Technology. This program recognises that experienced sessional academics have much to contribute to the advancement of learning and teaching, and harnesses their expertise to provide school-based academic development opportunities, peer-to-peer support, and locally contextualized community building. We describe the program's implementation and explain how Sessional Academic Success Advisors (SASAs) are employed, trained and supported to provide advice and mentorship and, through a co-design methodology, to develop local development opportunities and communities of teaching practice within their schools. Besides anticipated benefits to new sessional academics in terms of timely and contextual support and improved sense of belonging, we explain how SAS provides a pathway for building leadership capacity and academic advancement for experienced sessional academics. We take a collaborative, dialogic and reflective practice approach to this paper, interlacing insights from the Associate Director, Academic: Sessional Development who designed the program, and two Sessional Academic Success Advisors who have piloted it within their schools.
Abstract:
A generic method for the synthesis of metal-7,7,8,8-tetracyanoquinodimethane (TCNQ) charge-transfer complexes on both conducting and nonconducting substrates is achieved by photoexcitation of TCNQ in acetonitrile in the presence of a sacrificial electron donor and the relevant metal cation. The photochemical reaction leads to reduction of TCNQ to the TCNQ^- monoanion. In the presence of M^x+(MeCN), reaction with TCNQ^-(MeCN) leads to deposition of M^x+[TCNQ]_x crystals onto a solid substrate with morphologies that are dependent on the metal cation. Thus, CuTCNQ phase I photocrystallizes as uniform microrods, KTCNQ as microrods with a random size distribution, AgTCNQ as very long nanowires up to 30 μm in length and with diameters of less than 180 nm, and Co[TCNQ]2(H2O)2 as nanorods and wires. The described charge-transfer complexes have been characterized by optical and scanning electron microscopy and IR and Raman spectroscopy. The CuTCNQ and AgTCNQ complexes are of particular interest for use in memory storage and switching devices. In principle, this simple technique can be employed to generate all classes of metal-TCNQ complexes and opens up the possibility to pattern them in a controlled manner on any type of substrate.
Abstract:
One of the defences within Part 3-5 of the Australian Consumer Law is the state of the art, or development risk, defence. This defence, although significant, has often been neglected in Australian jurisprudential analysis and has attracted only generic academic analysis. However, with the rise of pharmaceutical and medical device litigation in Australia, it could become a vital weapon for Australian manufacturers against product liability claims. This paper will firstly review the two ways this defence could operate. It will also discuss the three types of defects to which the defence could apply. This paper aims to determine exactly when and how this defence should apply in Australia, in the context of pharmaceutical product liability claims.
Abstract:
Today, the majority of semiconductor fabrication plants (fabs) conduct equipment preventive maintenance based on statistically derived time- or wafer-count-based intervals. While these practices have had relative success in managing equipment availability and product yield, the cost, both in time and materials, remains high. Condition-based maintenance has been successfully adopted in several industries, where costs associated with equipment downtime range from potential loss of life to unacceptable effects on companies' bottom lines. In this paper, we present a method for the monitoring of complex systems in the presence of multiple operating regimes. In addition, the new representation of degradation processes will be used to define an optimization procedure that facilitates concurrent maintenance and operational decision-making in a manufacturing system. This decision-making procedure metaheuristically maximizes a customizable cost function that reflects the benefits of production uptime and the losses incurred due to deficient quality and downtime. The new degradation monitoring method is illustrated through the monitoring of a deposition tool operating over a prolonged period of time in a major fab, while the operational decision-making is demonstrated using simulated operation of a generic cluster tool.
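A toy rendering of such a cost-driven decision procedure: choose maintenance epochs that maximise a customisable profit function trading uptime revenue against downtime and failure losses. The cost model, the parameters, and the random-search metaheuristic are all invented for illustration; the paper's degradation model and optimiser are more sophisticated:

```python
import random

HORIZON = 100          # decision epochs in the planning window
FAIL_RISK = 0.002      # per-epoch failure-risk growth since last maintenance

def profit(maint_epochs):
    total, since_maint = 0.0, 0
    for t in range(HORIZON):
        if t in maint_epochs:
            total -= 50.0                  # maintenance downtime cost
            since_maint = 0
        else:
            p_fail = FAIL_RISK * since_maint
            # Expected uptime revenue minus expected failure/quality loss.
            total += 10.0 * (1 - p_fail) - 500.0 * p_fail
            since_maint += 1
    return total

# Random-search metaheuristic over candidate maintenance schedules.
best_schedule = max(
    (set(random.sample(range(HORIZON), k=random.randint(0, 6)))
     for _ in range(2000)),
    key=profit)
```

The key property this toy shares with the paper's procedure is that maintenance timing and operational payoff are evaluated jointly through a single customizable objective, rather than by fixed time- or count-based intervals.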
Abstract:
In many applications, where encrypted traffic flows from an open (public) domain to a protected (private) domain, there exists a gateway that bridges the two domains and faithfully forwards the incoming traffic to the receiver. We observe that indistinguishability against (adaptive) chosen-ciphertext attacks (IND-CCA), which is a mandatory goal in face of active attacks in a public domain, can be essentially relaxed to indistinguishability against chosen-plaintext attacks (IND-CPA) for ciphertexts once they pass the gateway that acts as an IND-CCA/CPA filter by first checking the validity of an incoming IND-CCA ciphertext, then transforming it (if valid) into an IND-CPA ciphertext, and forwarding the latter to the recipient in the private domain. "Non-trivial filtering" can result in reduced decryption costs on the receivers' side. We identify a class of encryption schemes with publicly verifiable ciphertexts that admit generic constructions of (non-trivial) IND-CCA/CPA filters. These schemes are characterized by the existence of public algorithms that can distinguish between valid and invalid ciphertexts. To this end, we formally define (non-trivial) public verifiability of ciphertexts for general encryption schemes, key encapsulation mechanisms, and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours. We further analyze the security impact of public verifiability and discuss generic transformations and concrete constructions that enjoy this property.
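As a toy illustration of the filter idea (not one of the paper's constructions): validity is made publicly checkable by signing the inner ciphertext, so the gateway can verify and strip the tag without holding any decryption key and forward only the smaller inner ciphertext. Fernet stands in here for a generic inner encryption scheme:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

recipient_key = Fernet.generate_key()           # receiver's symmetric key
sender_sig_key = Ed25519PrivateKey.generate()   # sender's signing key
verify_key = sender_sig_key.public_key()        # public: held by the gateway

def encrypt_public_domain(msg: bytes) -> bytes:
    inner = Fernet(recipient_key).encrypt(msg)  # inner ciphertext for the receiver
    return sender_sig_key.sign(inner) + inner   # 64-byte validity tag || ciphertext

def gateway_filter(wire: bytes) -> bytes:
    tag, inner = wire[:64], wire[64:]
    verify_key.verify(tag, inner)               # public check; raises if invalid
    return inner                                # forward only the inner ciphertext

forwarded = gateway_filter(encrypt_public_domain(b"hello"))
assert Fernet(recipient_key).decrypt(forwarded) == b"hello"
```

The receiver's saving in the paper's setting comes from decrypting the weaker (already-filtered) ciphertext instead of re-running the full IND-CCA validity check itself.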