836 results for Forward osmosis


Relevance: 10.00%

Publisher:

Abstract:

Discusses the contentious issues surrounding computer software patents and patenting in connection with the Peer-to-Patent Australia project, a joint initiative of Queensland University of Technology (QUT) and New York Law School (NYLS) that operates with the support and endorsement of IP Australia, the government body housing Australia's patent office. Explains that the project is based on the successful Peer-to-Patent pilots recently run in the USA and Japan, which are designed to improve the quality of issued patents and the patent examination process by facilitating community participation in that process. Describes how members of the public are allowed to put forward prior art references that will be considered by IP Australia's patent examiners when determining whether participating applications are novel and inventive, and therefore deserving of a patent. Concludes that, while Peer-to-Patent Australia is not a complete solution to the problems besetting patent law, the model has considerable advantages over the traditional model of patent examination.

Abstract:

Uninhabited aerial vehicles (UAVs) are a cutting-edge technology that is at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. 
The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion. 
We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400m to 900m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5 second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. 
Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
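The two-stage processing paradigm described above (morphological spatial pre-processing followed by temporal track-before-detect filtering) can be illustrated with a minimal sketch. This is not the thesis's MHMM filter: the temporal stage here is a simple exponentially weighted evidence accumulator standing in for the HMM-based stage, the scene is synthetic, and the example assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy import ndimage

def morphological_preprocess(frame, size=5):
    """Grey-scale top-hat filter: subtracting a grey opening suppresses
    large-scale background structure while preserving small bright
    features, the spatial signature of a distant aircraft."""
    return frame - ndimage.grey_opening(frame, size=(size, size))

def temporal_accumulate(frames, decay=0.9):
    """Stand-in for the track-before-detect stage: exponentially
    accumulate per-pixel evidence across frames before thresholding,
    so a dim but persistent target rises above single-frame noise."""
    score = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        score = decay * score + morphological_preprocess(f)
    return score

# Synthetic demo: a dim stationary point target in Gaussian noise.
rng = np.random.default_rng(0)
frames = []
for _ in range(30):
    img = rng.normal(0.0, 1.0, (64, 64))
    img[32, 32] += 3.0  # comparable to single-frame noise peaks
    frames.append(img)

score = temporal_accumulate(frames)
row, col = np.unravel_index(score.argmax(), score.shape)
print(row, col)
```

A 3-sigma spike is below the maximum noise excursion expected in any single 64x64 frame, so thresholding one frame would yield false alarms; accumulating the top-hat output over 30 frames makes the target pixel the clear global maximum.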

Abstract:

Human error, its causes and consequences, and the ways in which it can be prevented, remain of great interest to road safety practitioners. This paper presents the findings derived from an on-road study of driver errors in which 25 participants drove a pre-determined route using MUARC's On-Road Test Vehicle (ORTeV). In-vehicle observers recorded the different errors made, and a range of other data was collected, including driver verbal protocols, forward, cockpit and driver video, and vehicle data (speed, braking, steering wheel angle, lane tracking etc.). Participants also completed a post-trial cognitive task analysis interview. The drivers tested made a range of different errors, with speeding violations, both intentional and unintentional, being the most common. Further, more detailed analysis of a subset of specific error types indicates that driver errors have various causes, including failures in the wider road 'system' such as poor roadway design, infrastructure failures and unclear road rules. In closing, a range of potential error prevention strategies, including intelligent speed adaptation and road infrastructure design, are discussed.

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy compression and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20-millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. 
The motivation for this aspect of the research arose from a need to simultaneously preserve the speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model when based on the use of compressed speech are put forward.
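The coupling of lossy VQ with lossless coding of the codebook indices can be sketched in a toy example. This is illustrative only and not the thesis's product-code scheme: a plain Lloyd-trained codebook quantizes a synthetic skewed source, and the zeroth-order empirical entropy of the resulting index stream estimates the rate a lossless coder could approach, compared with the fixed log2(k) bits per index of plain enumeration.

```python
import numpy as np

def train_codebook(data, k, iters=20, seed=0):
    """Plain Lloyd (k-means) training: alternate nearest-neighbour
    assignment and centroid update. Illustrative only; the thesis
    uses product-code structured codebooks."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        dists = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = dists.argmin(1)
        for j in range(k):
            members = data[idx == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook

def quantize(data, codebook):
    """Map each vector to the index of its nearest codeword."""
    dists = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(1)

def index_entropy_bits(indices, k):
    """Empirical zeroth-order entropy of the index stream: the rate a
    lossless coder could approach, versus a fixed log2(k) bits/index."""
    p = np.bincount(indices, minlength=k) / len(indices)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Skewed synthetic source: two unequal clusters make the index
# distribution non-uniform, so entropy coding can undercut log2(k).
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (900, 4)), rng.normal(3, 0.1, (100, 4))])
cb = train_codebook(data, k=8)
idx = quantize(data, cb)
print(index_entropy_bits(idx, 8))
```

The statistical-model approach in the thesis goes further than this zeroth-order estimate by modelling the index stream itself, but the principle is the same: any non-uniformity or memory in the indices is rate that a lossless back end can reclaim.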

Abstract:

Obesity represents a major health, social and economic burden to many developing and Westernized communities, with the prevalence increasing at a rate exceeding almost all other medical conditions. Despite major recent advances in our understanding of adipose tissue metabolism and dynamics, we still have limited insight into the regulation of adipose tissue mass in humans. Any significant increase in adipose tissue mass requires proliferation and differentiation of precursor cells (preadipocytes) present in the stromal-vascular compartment of adipose tissue. These processes are very complex and an increasing number of growth factors and hormones have been shown to modulate the expression of genes involved in preadipocyte proliferation and differentiation. A number of transcription factors, including the C/EBP family and PPARγ, have been identified as integral to adipose tissue development and preadipocyte differentiation. Together PPARγ and C/EBPα regulate important events in the activation and maintenance of the terminally differentiated phenotype. The ability of PPARγ to increase transcription through its DNA recognition site is dependent on the binding of ligands. This suggests that an endogenous PPARγ ligand may be an important regulator of adipogenesis. Adipose tissue functions as both the major site of energy storage in the body and as an endocrine organ synthesizing and secreting a number of important molecules involved in regulation of energy balance. For optimum functioning therefore, adipose tissue requires extensive vascularization and previous studies have shown that growth of adipose tissue is preceded by development of a microvascular network. This suggests that paracrine interactions between constituent cells in adipose tissue may be involved in both new capillary formation and fat cell growth. 
To address this hypothesis the work in this project was aimed at (a) further developing a method for inducing preadipocyte differentiation in subcultured human cells; (b) establishing a method for simultaneous isolation and separate culture of both preadipocytes and microvascular endothelial cells from the same adipose tissue biopsies; (c) determining, using conditioned medium and co-culture techniques, whether endothelial cell-derived factors influence the proliferation and/or differentiation of human preadipocytes; and (d) commencing characterization of factors that may be responsible for any observed paracrine effects on aspects of human adipogenesis. Major findings of these studies were as follows: (A) Inclusion of either linoleic acid (a long-chain fatty acid reported to be a naturally occurring ligand for PPARγ) or Rosiglitazone (a member of the thiazolidinedione class of insulin-sensitizing drugs and a synthetic PPARγ ligand) in differentiation medium had markedly different effects on preadipocyte differentiation. These studies showed that human preadipocytes have the potential to accumulate triacylglycerol irrespective of their stage of biochemical differentiation, and that thiazolidinediones and fatty acids may exert their adipogenic and lipogenic effects via different biochemical pathways. It was concluded that Rosiglitazone is a more potent inducer of human preadipocyte differentiation than linoleic acid. (B) A method for isolation and culture of both endothelial cells and preadipocytes from the same adipose tissue biopsy was developed. Adipose-derived microvascular endothelial cells were found to produce factor/s, which enhance both proliferation and differentiation of human preadipocytes. (C) The adipogenic effects of microvascular endothelial cells can be mimicked by exposure of preadipocytes to members of the Fibroblast Growth Factor family, specifically β-ECGF and FGF-1. 
(D) Co-culture of human preadipocytes with endothelial cells or exposure of preadipocytes to either β-ECGF or FGF-1 were found to 'prime' human preadipocytes, during their proliferative phase of growth, for thiazolidinedione-induced differentiation. (E) FGF-1 was not found to be acting as a ligand for PPARγ in this system. Findings from this project represent a significant step forward in our understanding of factors involved in growth of human adipose tissue and may lead to the development of therapeutic strategies aimed at modifying the process. Such strategies would have potential clinical utility in the treatment of obesity and obesity-related disorders such as Type II diabetes.

Abstract:

William Gibson’s The Miracle Worker was staged at the Brisbane Powerhouse in June 2009 by Crossbow Productions. In this adaptation, people with hearing impairment were privileged through the use of shadow-signing, unscripted signing and the appropriation of signing as a theatrical language in itself. 250 people living with hearing impairment attended the production; 70 had never attended a theatrical event before. During the post-performance discussions, hearing audience members expressed feelings of displacement through experiencing the culture of the deaf community and not grasping some of the ideas. This paper argues that this inversion enhanced meaning making for all and illustrates a way forward to encourage the signing of more theatrical events.

Abstract:

The overarching aim of this study is to create new knowledge about how playful interactions (re)create the city via ubiquitous technologies, with an outlook to apply the knowledge for pragmatic innovations in relevant fields such as urban planning and technology development in the future. The study looks at the case of transyouth, the in-between demographic bridging youth and adulthood, in Seoul, one of the most connected, densely populated, and quickly transforming metropolises in the world. To unravel the elusiveness of ‘play’ as a subject and the complexity of urban networks, this study takes a three-tier transdisciplinary approach comprising an extensive literature review, Shared Visual Ethnography (SVE), and interviews with leading industry representatives who design and develop the playscape for Seoul transyouth. Through these methodological tools, the study responds to the following four research aims:

1. Examine the sociocultural, technological, and architectural context of Seoul
2. Investigate Seoul transyouth’s perception of the self and their technosocial environment
3. Identify the pattern of their playful interaction through which meanings of the self and the city are recreated
4. Develop an analytical framework for enactment of play

This thesis argues that the city is a contested space that continuously changes through multiple interactions among its constituents on the seam of control and freedom. At the core of this interactive (re)creation process is play. Play is a phenomenon that is enacted at the centre of three inter-related elements: pressure, possibility, and pleasure. This triad is the analytical framework the thesis puts forward as a conceptual apparatus for studying play across disciplines. The thesis concludes by illustrating possible trajectories for pragmatic application of the framework for envisioning and building the creative, sustainable, and seductive city.

Abstract:

This paper puts forward a proposal for reviewing the role and purpose of standards in the context of national curriculum and assessment reform more generally. It seeks to commence the much-needed conversation about standards in the work of teachers as distinct from large-scale testing companies and the policy personnel responsible for reporting. Four key conditions that relate to the effective use of standards to measure improvement and support learning are analysed: clarity about purpose and function; understanding of the representation of standards; moderation practice; and the assessment community. The Queensland experience of the use of standards, teacher judgement and moderation is offered to identify what is educationally preferable in terms of their use and their relationships to curriculum, improvement and accountability. The article illustrates how these practices have recently been challenged by emerging political constraints related to the Australian Government’s implementation of national testing and national partnership funding arrangements tied to the performance of students at or below minimum standards.

Abstract:

A group key exchange (GKE) protocol allows a set of parties to agree upon a common secret session key over a public network. In this thesis, we focus on designing efficient GKE protocols using public key techniques and appropriately revising security models for GKE protocols. For the purpose of modelling and analysing the security of GKE protocols we apply the widely accepted computational complexity approach. The contributions of the thesis to the area of GKE protocols are manifold. We propose the first GKE protocol that requires only one round of communication and is proven secure in the standard model. Our protocol is generically constructed from a key encapsulation mechanism (KEM). We also suggest an efficient KEM from the literature, which satisfies the underlying security notion, to instantiate the generic protocol. We then concentrate on enhancing the security of one-round GKE protocols. A new model of security for forward secure GKE protocols is introduced and a generic one-round GKE protocol with forward security is then presented. The security of this protocol is also proven in the standard model. We also propose an efficient forward secure encryption scheme that can be used to instantiate the generic GKE protocol. Our next contributions are to the security models of GKE protocols. We observe that the analysis of GKE protocols has not been as extensive as that of two-party key exchange protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for GKE protocols. We model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure against KCI attacks. A new proof of security for an existing GKE protocol is given under the revised model assuming random oracles. Subsequently, we treat the security of GKE protocols in the universal composability (UC) framework. 
We present a new UC ideal functionality for GKE protocols capturing the security attribute of contributiveness. An existing protocol with minor revisions is then shown to realize our functionality in the random oracle model. Finally, we explore the possibility of constructing GKE protocols in the attribute-based setting. We introduce the concept of attribute-based group key exchange (AB-GKE). A security model for AB-GKE and a one-round AB-GKE protocol satisfying our security notion are presented. The protocol is generically constructed from a new cryptographic primitive called encapsulation policy attribute-based KEM (EP-AB-KEM), which we introduce in this thesis. We also present a new EP-AB-KEM with a proof of security assuming generic groups and random oracles. The EP-AB-KEM can be used to instantiate our generic AB-GKE protocol.
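The key encapsulation mechanism (KEM) interface on which these generic constructions build can be illustrated with a toy ElGamal-style KEM. This is purely illustrative and is not the thesis's scheme: the 127-bit Mersenne prime group is far too small for real security, and a production KEM would also need chosen-ciphertext protections.

```python
import hashlib
import secrets

# Toy ElGamal-style KEM over Z_p^* with a 127-bit Mersenne prime.
# Illustrative only: the group is far too small for real security,
# and no chosen-ciphertext protections are included.
P = (1 << 127) - 1  # 2^127 - 1, a Mersenne prime
G = 3

def keygen():
    """Receiver: secret exponent sk, public key g^sk mod p."""
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def encapsulate(pk):
    """Sender: pick ephemeral r; ciphertext is g^r, key is H(pk^r)."""
    r = secrets.randbelow(P - 2) + 1
    ct = pow(G, r, P)
    key = hashlib.sha256(pow(pk, r, P).to_bytes(16, "big")).digest()
    return ct, key

def decapsulate(sk, ct):
    """Receiver recovers the same key from H(ct^sk)."""
    return hashlib.sha256(pow(ct, sk, P).to_bytes(16, "big")).digest()

sk, pk = keygen()
ct, k_sender = encapsulate(pk)
k_receiver = decapsulate(sk, ct)
print(k_sender == k_receiver)
```

Correctness follows from (g^r)^sk = (g^sk)^r mod p. In the generic one-round GKE construction, each party would run encapsulations against appropriate public keys and derive the group session key from the encapsulated values; the exact combination is part of the thesis's protocol and is not reproduced here.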

Abstract:

This paper presents a critical review of past research in the work-related driving field in light vehicle fleets (e.g., vehicles < 4.5 tonnes) and an intervention framework that provides future direction for practitioners and researchers. Although work-related driving crashes have become the most common cause of death, injury, and absence from work in Australia and overseas, very limited research progress has been made in establishing effective strategies to improve safety outcomes. In particular, the majority of past research has been data-driven, and limited attention has therefore been given to theoretical development in establishing the behavioural mechanisms underlying driving behaviour. As such, this paper argues that to move forward in the field of work-related driving safety, practitioners and researchers need to gain a better understanding of the individual and organisational factors influencing safety through adopting relevant theoretical frameworks, which in turn will inform the development of specifically targeted theory-driven interventions. This paper presents an intervention framework that is based on relevant theoretical frameworks and sound methodological design, incorporating interventions that can be directed at the appropriate level for each individual and driving target group.

Abstract:

We advance the proposition that dynamic stochastic general equilibrium (DSGE) models should not only be estimated and evaluated with full information methods. These require that the complete system of equations be specified properly. Some limited information analysis, which focuses upon specific equations, is therefore likely to be a useful complement to full system analysis. Two major problems occur when implementing limited information methods. These are the presence of forward-looking expectations in the system as well as unobservable non-stationary variables. We present methods for dealing with both of these difficulties, and illustrate the interaction between full and limited information methods using a well-known model.
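The difficulty that forward-looking expectations create for single-equation estimation can be made concrete with a toy example (an assumed data-generating process, not the authors' model). Replacing an unobserved expectation with its realised value pushes the expectational error into the regression residual, biasing OLS; instrumenting with a variable known at time t restores consistency.

```python
import numpy as np

rng = np.random.default_rng(0)
T, rho, beta = 20_000, 0.8, 1.0

# Assumed DGP: x follows an AR(1), and y responds to the *expected*
# next-period x, i.e. y_t = beta * E_t[x_{t+1}] + eps_t
#                        = beta * rho * x_t + eps_t.
u = rng.normal(size=T + 1)
x = np.zeros(T + 1)
for t in range(1, T + 1):
    x[t] = rho * x[t - 1] + u[t]
eps = rng.normal(size=T)
y = beta * rho * x[:T] + eps

# Replacing the unobserved expectation with the realised x_{t+1}
# puts the expectational error -beta*u_{t+1} into the residual,
# so OLS on x_{t+1} is inconsistent (plim beta*rho**2 here)...
x_next = x[1:T + 1]
b_ols = (x_next @ y) / (x_next @ x_next)

# ...while instrumenting with x_t, which is in the time-t information
# set and hence orthogonal to u_{t+1} and eps_t, is consistent.
z = x[:T]
b_iv = (z @ y) / (z @ x_next)
print(b_ols, b_iv)
```

With these parameters the OLS estimate converges to beta * rho^2 = 0.64 rather than beta = 1, while the simple IV estimate recovers beta; in practice lagged observables play the role of the instrument z, which is the essence of limited information estimation of a single forward-looking equation.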