358 results for LACTAMASE-PRODUCING ENTEROBACTERIACEAE
Abstract:
The process of compiling a studio vocal performance from many takes can often result in the performer producing a new complete performance once this new "best of" assemblage is heard back. This paper investigates the ways in which the physical process of recording can alter vocal performance techniques, in particular the establishment of a definitive melodic and rhythmic structure. Drawing on his many years of experience as a commercially successful producer, including the attainment of a Grammy award, the author will analyse in depth the process of producing a “credible” vocal performance, with specific case studies and examples. The question of authenticity in rock and pop will also be discussed and, in this context, the uniqueness of the producer’s role as critical arbiter: what gives the producer the authority to make such performance evaluations? Techniques for creating studio conditions conducive to vocal performance (the studio being, in many ways, a very unnatural performance environment) will be discussed, touching on areas such as the psycho-acoustic properties of headphone mixes, the avoidance of intimidatory practices, and a methodology for inducing the perception of a “familiar” acoustic environment.
Abstract:
Fermentation feedstocks in the sugar industry are based on cane juice, B molasses or final molasses. Brazil has been producing ethanol either by directing sugarcane juice straight to fermentation or by using lower-quality juice as a diluent with B molasses to prepare the fermentation broth. One issue that has received only limited interest, particularly from outside Brazil, is the most appropriate conditions for clarification of the juice going to fermentation. Irrespective of whether the juice supply is the total flow from the milling tandem or a diffuser station, or a part of the total flow, removal of the insoluble solids is essential. However, the standard defecation process used by sugar factories around the world to clarify juice can introduce unwanted calcium ions and remove other nutrients, such as phosphorus and nitrogen, that are considered essential for the fermentation process. An investigation was undertaken by SRI to assess the effects of the typical clarification process in an Australian factory on the constituents of cane juice, and to determine what conditions would be needed to provide a clarified juice suitable for fermentation. Typical juices from one factory were clarified in laboratory trials under a range of pH conditions and the resulting clarified juices analysed. The results indicated that pH had a major effect on the residual concentrations of key constituents in the clarified juice, and that the selected clarification conditions are determined by the nominated quality criteria for the clarified juice feedstock for fermentation. Further trials were conducted in overseas factories to confirm the results obtained in Australia. It became apparent that the preferred specifications for clarified juice going to fermentation varied from country to country. Each supplier of fermentation technology had criteria applying to clarified juice feedstock that would have a major impact on the standard of clarification required to achieve compliance with the criteria.
Abstract:
A major challenge of the 21st century will be to generate transportation fuels using feedstocks such as lignocellulosic waste materials as a substitute for existing fossil and nuclear fuels. The advantages of lignocellulosics as a feedstock material are that they are abundant, sustainable and carbon-neutral. To improve the economics of producing liquid transportation fuels from lignocellulosic biomass, the development of value-added products from lignin, a major component of lignocellulosics, is necessary. Lignins produced from black liquor through the fractionation of sugarcane bagasse with soda and organic solvents have been characterised by physical, chemical and thermal means. The soda lignin fractions have different physico-chemical and thermal properties from one another. Some of these properties have been compared to bagasse lignin extracted with aqueous ethanol.
Abstract:
This thesis is devoted to the study of linear relationships in symmetric block ciphers. A block cipher is designed so that the ciphertext is produced as a nonlinear function of the plaintext and secret master key. However, linear relationships within the cipher can still exist if the texts and components of the cipher are manipulated in a number of ways, as shown in this thesis. There are four main contributions of this thesis. The first contribution is the extension of the applicability of integral attacks from word-based to bit-based block ciphers. Integral attacks exploit the linear relationship between texts at intermediate stages of encryption. This relationship can be used to recover subkey bits in a key recovery attack. In principle, integral attacks can be applied to bit-based block ciphers. However, specific tools to define the attack on these ciphers are not available. This problem is addressed in this thesis by introducing a refined set of notations to describe the attack. The bit pattern-based integral attack is successfully demonstrated on reduced-round variants of the block ciphers Noekeon, Present and Serpent. The second contribution is the discovery of a very small system of equations that describes the LEX-AES stream cipher. LEX-AES is based heavily on the 128-bit-key (16-byte) Advanced Encryption Standard (AES) block cipher. In one instance, the system contains 21 equations and 17 unknown bytes. This is very close to the upper limit for an exhaustive key search, which is 16 bytes. One only needs to acquire 36 bytes of keystream to generate the equations. Therefore, the security of this cipher depends on the difficulty of solving this small system of equations. The third contribution is the proposal of an alternative method to measure diffusion in the linear transformation of Substitution-Permutation-Network (SPN) block ciphers. Currently, the branch number is widely used for this purpose. It is useful for estimating the possible success of differential and linear attacks on a particular SPN cipher. However, the measure does not give information on the number of input bits that are left unchanged by the transformation when producing the output bits. The new measure introduced in this thesis is intended to complement the current branch number technique. The measure is based on fixed points and simple linear relationships between the input and output words of the linear transformation. The measure represents the average fraction of input words to a linear diffusion transformation that are not effectively changed by the transformation (a toy illustration of this measure is sketched after this abstract). This measure is applied to the block ciphers AES, ARIA, Serpent and Present. It is shown that except for Serpent, the linear transformations used in the block ciphers examined do not behave as expected for a random linear transformation. The fourth contribution is the identification of linear paths in the nonlinear round function of the SMS4 block cipher. The SMS4 block cipher is used as a standard in the Chinese Wireless LAN Wired Authentication and Privacy Infrastructure (WAPI) and hence, the round function should exhibit a high level of nonlinearity. However, the findings in this thesis on the existence of linear relationships show that this is not the case. It is shown that in some exceptional cases, the first four rounds of SMS4 are effectively linear. In these cases, the effective number of rounds for SMS4 is reduced by four, from 32 to 28.
The findings raise questions about the security provided by SMS4, and might provide clues on the existence of a flaw in the design of the cipher.
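A minimal Python sketch of the fixed-point-style diffusion measure described above: it estimates, by random sampling, the fraction of input words that a linear transformation over GF(2) leaves unchanged. The 8-bit word size, the four-word block, and the randomly generated invertible bit matrix are illustrative assumptions standing in for a real cipher's diffusion layer; this is not the thesis's actual construction or its exact measure.

```python
import random

WORD_BITS = 8   # toy word size (assumption)
NUM_WORDS = 4   # toy block of four words (assumption)
N = WORD_BITS * NUM_WORDS

def is_invertible(rows, n):
    """Gaussian elimination over GF(2); rows are n-bit ints."""
    m = rows[:]
    for col in range(n):
        pivot = next((r for r in range(col, n) if (m[r] >> col) & 1), None)
        if pivot is None:
            return False
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col and (m[r] >> col) & 1:
                m[r] ^= m[col]
    return True

def random_linear_map(n, seed=1):
    """Random invertible n x n bit matrix, a stand-in diffusion layer."""
    rng = random.Random(seed)
    while True:
        rows = [rng.getrandbits(n) for _ in range(n)]
        if is_invertible(rows, n):
            return rows

def apply_map(rows, x):
    """Output bit i is the GF(2) inner product of row i with x."""
    y = 0
    for i, row in enumerate(rows):
        y |= (bin(row & x).count("1") & 1) << i
    return y

def unchanged_word_fraction(rows, trials=10000, seed=2):
    """Fraction of word positions where the output word equals the input word."""
    rng = random.Random(seed)
    mask = (1 << WORD_BITS) - 1
    total = 0
    for _ in range(trials):
        x = rng.getrandbits(N)
        y = apply_map(rows, x)
        for w in range(NUM_WORDS):
            shift = w * WORD_BITS
            if (x >> shift) & mask == (y >> shift) & mask:
                total += 1
    return total / (trials * NUM_WORDS)

rows = random_linear_map(N)
# For a random linear map one expects roughly 2^-WORD_BITS per word,
# i.e. about 1/256 ~ 0.004 here; a cipher's layer may deviate from this.
print(f"fraction of words fixed: {unchanged_word_fraction(rows):.4f}")
```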
Abstract:
Experts in injection molding often refer to previous solutions to find a mold design similar to the current mold, and use previous successful molding process parameters, with intuitive adjustment and modification, as a start for the new molding application. This approach saves a substantial amount of time and cost in the experiment-based corrective actions that would otherwise be required to reach optimum molding conditions. A Case-Based Reasoning (CBR) system can perform the same task by retrieving a similar case from the case library, applying it to the new case, and using modification rules to adapt the solution to the new case. A CBR system can therefore simulate human expertise in injection molding process design. This research is aimed at developing an interactive Hybrid Expert System to reduce the expert dependency needed on the production floor. The Hybrid Expert System (HES) comprises CBR, flow analysis, post-processor and troubleshooting systems. The HES can provide the first set of operating parameters needed to achieve moldability and to produce moldings free of stress cracks and warpage. In this work the C++ programming language is used to implement the expert system. The Case-Based Reasoning sub-system is constructed to derive the optimum magnitude of the process parameters in the cavity. Toward this end, the Flow Analysis sub-system is employed to calculate the pressure drop and temperature difference in the feed system, to determine the required magnitude of the parameters at the nozzle. The Post-Processor is implemented to convert the molding parameters to machine setting parameters. The parameters designed by the HES are implemented using the injection molding machine. In the presence of any molding defect, a troubleshooting sub-system can determine which combination of process parameters must be changed during the process to deal with possible variations. Constraints on the application of this HES are as follows: flow length (L): 40 mm < L < 100 mm; flow thickness (Th): 1 mm < Th < 4 mm; flow type: unidirectional flow; material types: High Impact Polystyrene (HIPS) and Acrylic. In order to test the HES, experiments were conducted and satisfactory results were obtained.
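The retrieve-and-adapt step of a CBR system like the one above can be sketched as a weighted nearest-neighbour lookup over a small case library followed by a naive adaptation rule. The attributes, weights, stored parameters and adaptation formula below are hypothetical placeholders chosen to sit within the stated HES constraints; they are not the thesis's actual similarity metric or modification rules.

```python
from dataclasses import dataclass

@dataclass
class MoldingCase:
    flow_length_mm: float      # within the HES constraint 40-100 mm
    thickness_mm: float        # within the HES constraint 1-4 mm
    material: str              # "HIPS" or "Acrylic"
    melt_temp_c: float         # stored solution: process parameters
    inject_pressure_mpa: float

# hypothetical case library
LIBRARY = [
    MoldingCase(60, 2.0, "HIPS", 220, 90),
    MoldingCase(90, 3.0, "HIPS", 230, 110),
    MoldingCase(50, 1.5, "Acrylic", 250, 100),
]

def similarity(query, case, w_len=0.4, w_th=0.4, w_mat=0.2):
    """Weighted similarity: normalised numeric distances plus a binary
    material match. Weights are illustrative, not from the thesis."""
    s_len = 1 - abs(query["flow_length_mm"] - case.flow_length_mm) / 60
    s_th = 1 - abs(query["thickness_mm"] - case.thickness_mm) / 3
    s_mat = 1.0 if query["material"] == case.material else 0.0
    return w_len * s_len + w_th * s_th + w_mat * s_mat

def retrieve_and_adapt(query):
    best = max(LIBRARY, key=lambda c: similarity(query, c))
    # naive adaptation rule (placeholder for the HES modification rules):
    # scale injection pressure with the relative flow length
    adapted_pressure = best.inject_pressure_mpa * (
        query["flow_length_mm"] / best.flow_length_mm)
    return best, adapted_pressure

case, p = retrieve_and_adapt(
    {"flow_length_mm": 80, "thickness_mm": 2.5, "material": "HIPS"})
print(f"retrieved case at {case.flow_length_mm} mm, adapted pressure {p:.0f} MPa")
```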
Abstract:
The concept of radar was developed for the estimation of the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal caused by the rate of change of the time-delay from the target. As researchers further developed conventional radar systems it became apparent that additional information was contained in the backscattered signal, and that this information could in fact be used to describe the shape of the target itself. This is due to the fact that a target can be considered to be a collection of individual point scatterers, each of which has its own velocity and time-delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, thus producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present there are two main categories into which radar imaging falls. The first relates to the case where the backscattered signal is considered to be deterministic. The second relates to the case where the backscattered signal is of a stochastic nature. In both cases the information which describes the target's scattering function is extracted by the use of the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations, it is often necessary to have the transmitter and the receiver of the radar system sited at different locations. The problem in these situations is that a reference signal must then be present in order to calculate the ambiguity function. This causes an additional problem in that detailed phase information about the transmitted signal is then required at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As will be shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development, and subsequent discussion, of the theory of radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e. where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e. where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be able to be implemented in a timely manner. For this reason an optical architecture is proposed. This architecture is specifically designed to obtain the required time and frequency resolution when using laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer simulated using an optical compiler.
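The ambiguity function mentioned above correlates the received signal against delayed, Doppler-shifted copies of the transmitted signal, so a point scatterer shows up as a peak at its delay-Doppler coordinates. The following numpy sketch computes a discrete narrowband cross-ambiguity surface for one simulated point target; the circular delay, random-phase probe signal and grid resolution are simplifying assumptions, and the sketch does not implement the thesis's time-frequency-distribution approach.

```python
import numpy as np

def ambiguity(tx, rx, max_lag, n_freqs):
    """|A(tau, f)|: correlate rx against delayed, Doppler-shifted tx."""
    n = np.arange(len(tx))
    freqs = np.linspace(-0.5, 0.5, n_freqs)   # normalised Doppler (cycles/sample)
    lags = np.arange(-max_lag, max_lag + 1)
    surf = np.zeros((len(lags), n_freqs))
    for i, lag in enumerate(lags):
        shifted = np.roll(tx, lag)            # circular delay for simplicity
        for j, f in enumerate(freqs):
            surf[i, j] = abs(np.sum(rx * np.conj(shifted)
                                    * np.exp(-2j * np.pi * f * n)))
    return lags, freqs, surf

# toy point scatterer: delay 20 samples, Doppler 0.05 cycles/sample
rng = np.random.default_rng(0)
tx = np.exp(2j * np.pi * rng.random(256))     # random-phase probe signal
n = np.arange(256)
rx = np.roll(tx, 20) * np.exp(2j * np.pi * 0.05 * n)

lags, freqs, surf = ambiguity(tx, rx, max_lag=32, n_freqs=101)
i, j = np.unravel_index(np.argmax(surf), surf.shape)
print(f"peak at lag={lags[i]}, doppler={freqs[j]:.3f}")  # expect ~20, ~0.05
```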
Abstract:
This study is conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable et al., 2006). IS-Impact is defined as "a measure at a point in time, of the stream of net benefits from the IS [Information System], to date and anticipated, as perceived by all key-user-groups" (Gable, Sedera and Chan, 2008). Track efforts have yielded the bicameral IS-Impact measurement model; the "impact" half includes the Organizational-Impact and Individual-Impact dimensions; the "quality" half includes the System-Quality and Information-Quality dimensions. The IS-Impact model, by design, is intended to be robust, simple and generalisable, to yield results that are comparable across time, stakeholders, different systems and system contexts. The model and measurement approach employ perceptual measures and an instrument that is relevant to key stakeholder groups, thereby enabling the combination or comparison of stakeholder perspectives. Such a validated and widely accepted IS-Impact measurement model has both academic and practical value. It facilitates systematic operationalisation of a main dependent variable in research (IS-Impact), which can also serve as an important independent variable. For IS management practice it provides a means to benchmark and track the performance of information systems in use. From examination of the literature, the study proposes that IS-Impact is an Analytic Theory. Gregor (2006) defines Analytic Theory simply as theory that ‘says what is’, base theory that is foundational to all other types of theory. The overarching research question thus is "Does IS-Impact positively manifest the attributes of Analytic Theory?" In order to address this question, we must first answer the question "What are the attributes of Analytic Theory?" The study identifies the main attributes of analytic theory as: (1) Completeness, (2) Mutual Exclusivity, (3) Parsimony, (4) Appropriate Hierarchy, (5) Utility, and (6) Intuitiveness. The value of empirical research in Information Systems is often assessed along two main dimensions: rigor and relevance. Those Analytic Theory attributes associated with the ‘rigor’ of the IS-Impact model, namely completeness, mutual exclusivity, parsimony and appropriate hierarchy, have been addressed in prior research (e.g. Gable et al., 2008). Though common tests of rigor are widely accepted and relatively uniformly applied (particularly in relation to positivist, quantitative research), relevance has seldom received the same systematic attention. This study assumes a mainly practice perspective, and emphasises the methodical evaluation of the Analytic Theory ‘relevance’ attributes represented by the Utility and Intuitiveness of the IS-Impact model. Thus, related research questions are: "Is the IS-Impact model intuitive to practitioners?" and "Is the IS-Impact model useful to practitioners?" March and Smith (1995) identify four outputs of Design Science: constructs, models, methods and instantiations (Design Science research may involve one or more of these). IS-Impact can be viewed as a Design Science model, composed of Design Science constructs (the four IS-Impact dimensions and the two model halves), and instantiations in the form of management information (IS-Impact data organised and presented for management decision making).
In addition to methodically evaluating the Utility and Intuitiveness of the IS-Impact model and its constituent constructs, the study aims also to evaluate the derived management information. Thus, further research questions are: "Is the IS-Impact derived management information intuitive to practitioners?" and "Is the IS-Impact derived management information useful to practitioners?" The study employs a longitudinal design entailing three surveys over four years (the 1st involving secondary data) of the Oracle-Financials application at QUT, interspersed with focus groups involving senior financial managers. The study also entails a survey of Financials at four other Australian universities. The three focus groups respectively emphasise: (1) the IS-Impact model, (2) the 2nd survey at QUT (descriptive), and (3) comparison across surveys within QUT, and between QUT and the group of universities. Aligned with the track goal of producing IS-Impact scores that are highly comparable, the study also addresses the more specific utility-related questions: "Is IS-Impact derived management information a useful comparator across time?" and "Is IS-Impact derived management information a useful comparator across universities?" The main contribution of the study is evidence of the utility and intuitiveness of IS-Impact to practice, thereby further substantiating the practical value of the IS-Impact approach, and also thereby motivating continuing and further research on the validity of IS-Impact, and research employing the IS-Impact constructs in descriptive, predictive and explanatory studies. The study also has value methodologically as an example of relatively rigorous attention to relevance. A further key contribution is the clarification and instantiation of the full set of analytic theory attributes.
Abstract:
The Mount Isa Basin is a new concept used to describe the area of Palaeo- to Mesoproterozoic rocks south of the Murphy Inlier, presently and inappropriately described as the Mount Isa Inlier. The new basin concept presented in this thesis allows for the characterisation of basin-wide structural deformation, correlation of mineralisation with particular lithostratigraphic and seismic stratigraphic packages, and the recognition of areas with petroleum exploration potential. The northern depositional margin of the Mount Isa Basin is the metamorphic, intrusive and volcanic complex here referred to as the Murphy Inlier (not the "Murphy Tectonic Ridge"). The eastern, southern and western boundaries of the basin are obscured by younger basins (the Carpentaria, Eromanga and Georgina Basins). The Murphy Inlier rocks comprise the seismic basement to the Mount Isa Basin sequence. Evidence for the continuity of the Mount Isa Basin with the McArthur Basin to the northwest and the Willyama Block (Basin) at Broken Hill to the south is presented. These areas, combined with several other areas of similar age, are believed to have comprised the Carpentarian Superbasin (new term). The application of seismic exploration within Authority to Prospect (ATP) 423P at the northern margin of the basin was critical to the recognition and definition of the Mount Isa Basin. The Mount Isa Basin is structurally analogous to the Palaeozoic Arkoma Basin of Oklahoma and Arkansas in the southern USA but, as with all basins, it contains unique characteristics, a function of its individual development history. The Mount Isa Basin evolved in a manner similar to many well described, Phanerozoic plate-tectonic-driven basins. A full Wilson Cycle is recognised and a plate tectonic model proposed. The northern Mount Isa Basin is defined as the Proterozoic basin area northwest of the Mount Gordon Fault. Deposition in the northern Mount Isa Basin began with a rift sequence of volcaniclastic sediments followed by a passive margin drift phase comprising mostly carbonate rocks. Following the rift and drift phases, major north-south compression produced east-west thrusting in the south of the basin, inverting the older sequences. This compression produced an asymmetric epi- or intra-cratonic, clastic-dominated peripheral foreland basin, provenanced in the south and thinning markedly to a stable platform area (the Murphy Inlier) in the north. The final major deformation comprised east-west compression producing north-south aligned faults that are particularly prominent at Mount Isa. Potential field studies of the northern Mount Isa Basin, principally using magnetic data (and to a lesser extent gravity data, satellite images and aerial photographs), exhibit remarkable correlation with the reflection seismic data. The potential field data contributed significantly to the unravelling of the northern Mount Isa Basin architecture and deformation. Structurally, the Mount Isa Basin consists of three distinct regions. From north to south they are the Bowthorn Block, the Riversleigh Fold Zone and the Cloncurry Orogen (new names). The Bowthorn Block, which is located between the Elizabeth Creek Thrust Zone and the Murphy Inlier, consists of an asymmetric wedge of volcanic, carbonate and clastic rocks. It ranges from over 10 000 m stratigraphic thickness in the south to less than 2000 m in the north.
The Bowthorn Block is relatively undeformed; however, it contains a series of reverse faults trending east-west that are interpreted from seismic data to be down-to-the-north normal faults that have been reactivated as thrusts. The Riversleigh Fold Zone is a folded and faulted region south of the Bowthorn Block, comprising much of the area formerly referred to as the Lawn Hill Platform. The Cloncurry Orogen consists of the area and sequences equivalent to the former Mount Isa Orogen. The name Cloncurry Orogen clearly distinguishes this area from the wider concept of the Mount Isa Basin. The South Nicholson Group and its probable correlatives, the Pilpah Sandstone and Quamby Conglomerate, comprise a later phase of now largely eroded deposits within the Mount Isa Basin. The name South Nicholson Basin is now outmoded, as this terminology applied only to the South Nicholson Group, unlike the original broader definition in Brown et al. (1968). Cored slimhole stratigraphic and mineral wells drilled by Amoco, Esso, Elf Aquitaine and Carpentaria Exploration prior to 1986 penetrated much of the stratigraphy and intersected minor oil and gas shows as well as excellent potential source rocks. The raw data were reinterpreted and augmented with seismic stratigraphy and source rock data from resampled mineral and petroleum stratigraphic exploration wells for this study. Since 1986, Comalco Aluminium Limited, as operator of a joint venture with Monument Resources Australia Limited and Bridge Oil Limited, recorded approximately 1000 km of reflection seismic data within the basin and drilled one conventional stratigraphic petroleum well, Beamesbrook-1. This work was the first reflection seismic survey and the first conventional petroleum test of the northern Mount Isa Basin. When incorporated into the newly developed foreland basin and maturity models, a grass-roots petroleum exploration play was recognised, and this led to the present thesis. The Mount Isa Basin was seen to contain excellent source rocks coupled with potential reservoirs and all of the other essential aspects of a conventional petroleum exploration play. This play, although high risk, was commensurate with the enormous and totally untested petroleum potential of the basin. The basin was assessed for hydrocarbons in 1992 with three conventional exploration wells, Desert Creek-1, Argyle Creek-1 and Egilabria-1. These wells also tested and confirmed the proposed basin model. No commercially viable oil or gas was encountered, although evidence of its former existence was found. In addition to the petroleum exploration, indeed as a consequence of it, the association of the extensive base metal and other mineralisation in the Mount Isa Basin with hydrocarbons could not be overlooked. A comprehensive analysis of the available data suggests a link between the migration, and possible generation or destruction, of hydrocarbons and metal-bearing fluids. Consequently, base metal exploration based on hydrocarbon exploration concepts is probably the most effective technique in such basins. The metal-hydrocarbon-sedimentary basin-plate tectonic association (analogous to Phanerozoic models) is a compelling outcome of this work on the Palaeo- to Mesoproterozoic Mount Isa Basin. Petroleum within the Bowthorn Block was apparently destroyed by hot brines that produced many ore deposits elsewhere in the basin.
Abstract:
Automatic spoken Language Identification (LID) is the process of identifying the language spoken within an utterance. The challenge that this task presents is that no prior information is available indicating the content of the utterance or the identity of the speaker. The trend of globalization and the pervasive popularity of the Internet will amplify the need for the capabilities spoken language identification systems provide. A prominent application arises in call centers dealing with speakers speaking different languages. Another important application is to index or search huge speech data archives and corpora that contain multiple languages. The aim of this research is to develop techniques targeted at producing a faster and more accurate automatic spoken LID system compared to the previous National Institute of Standards and Technology (NIST) Language Recognition Evaluation. Acoustic and phonetic speech information are targeted as the most suitable features for representing the characteristics of a language. To model the acoustic speech features a Gaussian Mixture Model based approach is employed. Phonetic speech information is extracted using existing speech recognition technology. Various techniques to improve LID accuracy are also studied. One approach examined is the employment of Vocal Tract Length Normalization to reduce the speech variation caused by different speakers. A linear data fusion technique is adopted to combine the various aspects of information extracted from speech. As a result of this research, a LID system was implemented and presented for evaluation in the 2003 Language Recognition Evaluation conducted by NIST.
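A minimal sketch of the GMM-based acoustic modelling mentioned above: one Gaussian mixture is fitted per language, and a test utterance is assigned to the language whose model yields the highest average per-frame log-likelihood. The synthetic Gaussian "features" stand in for real MFCC frames, and the languages, component count and dimensionality are illustrative assumptions, not the system described in the abstract.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# stand-in "acoustic features"; in practice these would be MFCC frames
train = {
    "english": rng.normal(0.0, 1.0, size=(500, 13)),
    "mandarin": rng.normal(0.8, 1.2, size=(500, 13)),
}

# one GMM per language
models = {
    lang: GaussianMixture(n_components=8, covariance_type="diag",
                          random_state=0).fit(feats)
    for lang, feats in train.items()
}

def identify(utterance_frames):
    """Score the utterance under each language model; pick the best."""
    scores = {lang: gmm.score(utterance_frames)  # mean log-likelihood per frame
              for lang, gmm in models.items()}
    return max(scores, key=scores.get), scores

test = rng.normal(0.8, 1.2, size=(200, 13))      # frames drawn like "mandarin"
lang, scores = identify(test)
print(lang, {k: round(v, 2) for k, v in scores.items()})
```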
Abstract:
Assessment of the condition of connectors in the overhead electricity network has traditionally relied on the heat dissipation or voltage drop arising from the existing load current (50 Hz) as a measurable parameter to differentiate between satisfactory and failing connectors. This research has developed a technique which does not rely on the 50 Hz current, and a prototype connector tester has been developed. In this system a high-frequency signal is injected into the section of line under test, and the resistive voltage drop and the current at the test frequency are measured to yield the resistance in micro-ohms. From the value of resistance, a decision can be made as to whether a connector is satisfactory or approaching failure. Determining the resistive voltage drop in the presence of a large induced voltage was achieved by the innovative approach of using a representative sample of the magnetic flux producing the induced voltage as the phase-angle reference for the signal processing, rather than the phase angle of the current, which can be affected by the presence of nearby metal objects. Laboratory evaluation of the connector tester has validated the measurement technique. The magnitude of the load current (50 Hz) has minimal effect on the measurement accuracy. The remaining development steps to a production instrument are the addition of a suitable battery-based power supply system, isolated communications (probably radio), and refinement of the printed circuit board design and software.
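The measurement principle described above can be sketched as synchronous (lock-in) detection: the measured voltage is demodulated against in-phase and quadrature references at the injected test frequency, separating the tiny resistive drop from the much larger induced voltage and 50 Hz pickup. In the sketch below the reference is generated synthetically; in the actual instrument its phase comes from a sample of the magnetic flux. All signal levels and frequencies are illustrative assumptions.

```python
import numpy as np

fs = 100_000            # sample rate, Hz (illustrative)
f_test = 1_000          # injected test frequency, Hz (illustrative)
t = np.arange(0, 0.1, 1 / fs)

i_test = 5.0            # injected current amplitude, A
r_conn = 50e-6          # 50 micro-ohm connector: the unknown to recover

# measured voltage = tiny resistive drop in phase with the current,
# plus a much larger induced (quadrature) component and mains pickup
v = (i_test * r_conn * np.sin(2 * np.pi * f_test * t)
     + 5e-3 * np.cos(2 * np.pi * f_test * t)          # inductive component
     + 0.2 * np.sin(2 * np.pi * 50 * t))              # 50 Hz interference

# phase references at the test frequency (flux-derived in the instrument)
ref_i = np.sin(2 * np.pi * f_test * t)
ref_q = np.cos(2 * np.pi * f_test * t)

v_inphase = 2 * np.mean(v * ref_i)      # resistive voltage drop
v_quad = 2 * np.mean(v * ref_q)         # induced (reactive) drop, rejected
print(f"R ~ {v_inphase / i_test * 1e6:.1f} micro-ohm "
      f"(reactive drop {v_quad * 1e3:.2f} mV rejected)")   # expect ~50.0
```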
Abstract:
In this study, the feasibility of difference imaging for improving the contrast of electronic portal imaging device (EPID) images is investigated. The difference imaging technique consists of the acquisition of two EPID images (with and without the placement of an additional layer of attenuating medium on the surface of the EPID) and the subtraction of one of these images from the other. The resulting difference image shows improved contrast, compared to a standard EPID image, since it is generated by lower-energy photons. Results of this study show that, firstly, this method can produce images exhibiting greater contrast than is seen in standard megavoltage EPID images and that, secondly, the optimal thickness of attenuating material for producing a maximum contrast enhancement may vary with phantom thickness and composition. Further studies of the possibilities and limitations of the difference imaging technique, and the physics behind it, are therefore recommended.
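A minimal numpy sketch of the difference-imaging idea using synthetic images: subtracting the image acquired through the added attenuating layer from the standard image leaves a signal dominated by the component the layer preferentially removes, and a simple region-of-interest metric shows the contrast gain. The pixel values, noise level and insert geometry are invented for illustration and do not model EPID physics.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64)

# synthetic phantom with a square insert
insert = np.zeros(shape)
insert[24:40, 24:40] = 1.0

# standard EPID image, and the image with the added attenuating layer;
# the layer is assumed to change the signal more strongly over the insert
base = 1000 - 40 * insert + rng.normal(0, 5, shape)
attenuated = 800 - 60 * insert + rng.normal(0, 5, shape)

difference = base - attenuated   # the difference image

def contrast(img):
    """|ROI - background| / |background| for fixed illustrative regions."""
    roi, bg = img[24:40, 24:40].mean(), img[:16, :16].mean()
    return abs(roi - bg) / abs(bg)

for name, img in [("standard", base), ("difference", difference)]:
    print(f"{name:10s} contrast: {contrast(img):.3f}")
# with these invented numbers the difference image roughly doubles contrast
```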
Abstract:
The integration of computer technologies into everyday classroom life continues to provide pedagogical challenges for school systems, teachers and administrators. Data from an exploratory case study of one teacher and a multiage class of children in the first years of schooling in Australia show that when young children use computers for set tasks in small groups, they require ongoing support from teachers and opportunities to engage in peer interactions that are meaningful and productive. Classroom organization and the nature of teacher-child talk are key factors in engaging children in set tasks and producing desirable learning and teaching outcomes.
Abstract:
Poor patient compliance with peritoneal dialysis (PD) has significant adverse effects on morbidity and mortality rates in individuals with chronic kidney disease (CKD). It also adds to the resource burdens of healthcare services and providers. This paper explores the notion of PD compliance in patients with CKD with reference to the relevant published literature. The analysis of the literature reveals that ‘PD compliance’ is a complex and challenging construct for both patients and health professionals. There is no universal definition of compliance that is widely adopted in practice and research, and therefore there is a lack of consensus on how to determine ‘compliant’ patient outcomes. There are also multiple and interconnected determinants of PD compliance that are context-bound, of which healthcare professionals must be aware, and which make reaching consensus on measuring PD compliance difficult. The complexity of the interventions required to produce even a modest improvement in PD compliance, which are described in this paper, is significant. Compliance with PD and other treatments for CKD is a multidimensional, context-bound concept that, to date, has tended to efface the role and needs of the renal patient. We conclude the paper with the implications for contemporary practice.
Abstract:
Our paper, “HCI & Sustainable Food Culture: A Design Framework for Engagement,” presented at the 2010 NordiCHI conference, introduced a design framework for understanding engagement between people and sustainable food cultures (Choi and Blevis, 2010). Our goal for this chapter, “Advancing Design for Sustainable Food Cultures,” is to expand our notion of this design framework and the programme of research it implies. This chapter presents the three elements of the design framework for sustainability: (i) engagement across disciplines; (ii) engagement with and amongst users/non-users; and (iii) engagement for sustained usability. The chapter uses a corresponding sample of photographic records of experiences that reflect three key issues in the current sustainable food domain: respectively, (i) the context of food cultures, (ii) farmers’ markets, and (iii) producing food.