Abstract:
This report presents findings from a project that considered a) the current capacity of Adult and Community Education (ACE) providers to offer non-accredited courses and single modules of accredited learning that provide pathways into full-scale accredited VET programs, and b) the factors that aid and inhibit this from occurring. Based on the findings, suggestions are made as to what needs to be done to extend this capacity and thereby to achieve the goals outlined in the 2008 Ministerial Declaration on Adult Community Education.
Abstract:
Schizophrenia may not be a single disease, but the result of a diverse set of related conditions. Modern neuroscience is beginning to reveal some of the genetic and environmental underpinnings of schizophrenia; however, an approach less well travelled is to examine the medical disorders that produce symptoms resembling schizophrenia. This book is the first major attempt to bring together the diseases that produce what has been termed 'secondary schizophrenia'. International experts from diverse backgrounds ask the questions: does this medical disorder, or drug, or condition cause psychosis? If yes, does it resemble schizophrenia? What mechanisms form the basis of this relationship? What implications does this understanding have for aetiology and treatment? The answers are a feast for clinicians and researchers of psychosis and schizophrenia. They mark the next step in trying to meet the most important challenge to modern neuroscience – understanding and conquering this most mysterious of human diseases.
Abstract:
A recent advance in biosecurity surveillance design aims to benefit island conservation through early and improved detection of incursions by non-indigenous species. The novel aspects of the design are that it achieves a specified power of detection in a cost-managed system, while acknowledging heterogeneity of risk in the study area and stratifying the area to target surveillance deployment. The design also utilises a variety of surveillance system components, such as formal scientific surveys, trapping methods and incidental sightings by non-biologist observers. These advances in design were applied to black rats (Rattus rattus), representing the group of invasive rats including R. norvegicus and R. exulans, which are potential threats to Barrow Island, Australia, a high-value conservation nature reserve where a proposed liquefied natural gas development is a potential source of incursions. Rats are important to consider as they are prevalent invaders worldwide, difficult to detect early when present in low numbers, and able to spread and establish relatively quickly after arrival. The ‘exemplar’ design for the black rat is then applied in a manner that enables the detection of a range of non-indigenous rat species that could potentially be introduced. Many of the design decisions were based on expert opinion because of gaps in the empirical data. The surveillance system was able to take into account factors such as collateral effects on native species, the limited resources available on an offshore island, financial costs, demands on expertise and other logistical constraints. We demonstrate the flexibility and robustness of the surveillance system and discuss how it could be updated as empirical data are collected to supplement expert opinion and provide a basis for adaptive management. Overall, the surveillance system promotes an efficient use of resources while providing a defined power to detect early rat incursions, translating to reduced environmental, resourcing and financial costs.
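A minimal sketch of the kind of detection-power arithmetic such a design rests on, assuming independent surveillance components; the component names and sensitivities below are hypothetical, not values from the study:

```python
import numpy as np

# Hypothetical per-period detection sensitivities for each surveillance
# component (probability the component detects an incursion, given one
# is present in its stratum). Values are illustrative only.
components = {
    "formal scientific survey": 0.40,
    "trapping grid":            0.55,
    "incidental sightings":     0.15,
}

def system_sensitivity(sensitivities):
    """Combined probability of detection, assuming independent components."""
    miss = np.prod([1.0 - s for s in sensitivities])
    return 1.0 - miss

def periods_to_power(sse, target_power):
    """Number of surveillance periods needed to reach a target power."""
    return int(np.ceil(np.log(1.0 - target_power) / np.log(1.0 - sse)))

sse = system_sensitivity(components.values())
print(f"single-period system sensitivity: {sse:.3f}")
print(f"periods for 0.95 power: {periods_to_power(sse, 0.95)}")
```

In a stratified design, the same calculation would be carried out per risk stratum, with deployment adjusted until each stratum meets its share of the specified power at acceptable cost.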
Abstract:
Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases as equipment ages and can have serious consequences for continuity of electricity supply. Because the equipment used in high-voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and these can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to model in detail the operation of substation and sub-transmission equipment using network flow evaluation and to consider multiple levels of component failures. In this thesis a new model for aging equipment is developed that combines standard models of random failures with a specific model for aging failures. This technique is applied to examine the impact of aging equipment on the reliability of bulk supply loads and distribution-network consumers over a defined range of planning years. The power system risk indices depend on many factors, such as the physical network configuration and operation, the aging condition of the equipment, and the relevant constraints. The impact of individual equipment reliability on power system risk indices in a network with aging facilities contains valuable information that helps utilities better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach is also developed that supports early planning decisions on replacement activities for non-repairable aging components, in order to maintain a level of system reliability that is economically acceptable. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering equipment entering a period of increased risk of non-repairable failure.
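As a hedged illustration of combining a random-failure model with an aging-failure model, the sketch below uses a constant hazard for random failures and a Weibull hazard for end-of-life failures, a common textbook pairing; all parameters are assumptions for illustration, not values from the thesis:

```python
import numpy as np

# Hypothetical parameters -- illustrative only, not taken from the thesis.
LAMBDA_RANDOM = 0.01   # constant random-failure rate (failures/year)
WEIBULL_SHAPE = 4.0    # shape > 1 gives an increasing (aging) hazard
WEIBULL_SCALE = 40.0   # characteristic life in years

def total_hazard(age):
    """Competing risks: constant random hazard plus a Weibull aging hazard."""
    aging = (WEIBULL_SHAPE / WEIBULL_SCALE) * (age / WEIBULL_SCALE) ** (WEIBULL_SHAPE - 1)
    return LAMBDA_RANDOM + aging

def failure_prob(age, horizon, steps=1000):
    """P(failure within `horizon` years | survival to `age`),
    from the integrated hazard over [age, age + horizon]."""
    t = np.linspace(age, age + horizon, steps)
    h = total_hazard(t)
    cumulative = np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(t))  # trapezoid rule
    return 1.0 - np.exp(-cumulative)

for age in (10, 25, 40):
    print(f"age {age:2d} y: P(fail within 5 y) = {failure_prob(age, 5):.3f}")
```

Feeding such age-dependent failure probabilities into contingency enumeration, in place of a flat failure rate, is what makes risk indices sensitive to equipment entering the wear-out region.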
Abstract:
Research has established a close relationship between learning environments and learning outcomes (Department of Education and Early Childhood Development, Victoria, 2008; Woolner, Hall, Higgins, McCaughey & Wall, 2007), yet little is known about how students in Australian schools imagine the ways that their learning environments could be improved to enhance their engagement with the processes and content of education, and children are rarely consulted on the issue of school design (Rudduck & Flutter, 2004). Currently, school and classroom designers give attention to operational matters of efficiency and economy, so that architecture for children’s education is largely conceived in terms of adult and professional needs (Halpin, 2007). This results in the construction of educational spaces that impose traditional teaching and learning methods, reducing the possibilities for imaginative pedagogical relationships. Education authorities may encourage new, student-centred pedagogical styles, such as collaborative learning, team-teaching and peer tutoring, but the spaces where such innovations are occurring do not always provide the features necessary to implement these styles. Heeding the views of children could result in the creation of spaces where more imaginative pedagogical relationships and student-centred pedagogical styles can be implemented. In this article, a research project conducted with children in nine Queensland primary schools to investigate their ideas of the ideal ‘school’ is discussed. Overwhelmingly, the students’ work emphasised that learning should be fun and that learning environments should be eco-friendly places where their imaginations can be engaged and where they learn from, and in touch with, reality. The children’s imagined schools echo ideas that have been promoted over many decades by progressive educators such as John Dewey (1897, in Provenzo, 2006) (“experiential learning”), A. S. Neill (in Cassebaum, 2003) (Summerhill school) and Ivan Illich (1970) (“deschooling”), with a vast majority of students suggesting that, wherever possible, learning should take place away from classrooms and in environments that support direct, hands-on learning.
Abstract:
IEC 61850 Process Bus technology has the potential to improve the cost, performance and reliability of substation designs. Substantial costs associated with copper wiring (design, documentation, construction, commissioning and troubleshooting) can be reduced with the application of digital Process Bus technology, especially technology based upon international standards. An IEC 61850-9-2 based sampled-value Process Bus is an enabling technology for the application of Non-Conventional Instrument Transformers (NCIT). Retaining the output of the NCIT in its native digital form, rather than converting it to an analogue output, allows for improved transient performance, dynamic range, safety and reliability, and reduced cost. In this paper we report on a pilot installation using NCITs communicating across a switched Ethernet network using the UCAIug Implementation Guideline for IEC 61850-9-2 (9-2 Light Edition, or 9-2LE). This system was commissioned in a 275 kV line reactor bay at Powerlink Queensland’s Braemar substation in 2009, with sampled-value protection IEDs 'shadowing' the existing protection system. The results of commissioning tests and twelve months of service experience using a Fibre Optic Current Transformer (FOCT) from Smart Digital Optics (SDO) are presented, including the response of the system to fault conditions. A number of remaining issues to be resolved to enable wide-scale deployment of NCITs and IEC 61850-9-2 Process Bus technology are also discussed.
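For context, a minimal sketch of the sampling arithmetic behind a 9-2LE stream: the guideline fixes 80 samples per nominal power-frequency cycle for protection applications (4,000 samples/s on a 50 Hz system). The waveform values below are illustrative, not measurements from the pilot installation:

```python
import numpy as np

# IEC 61850-9-2LE: 80 samples per nominal cycle for protection.
NOMINAL_HZ = 50
SAMPLES_PER_CYCLE = 80
FS = NOMINAL_HZ * SAMPLES_PER_CYCLE  # 4000 samples/s

# Simulate one cycle of a 1000 A (RMS) current as a merging unit might
# sample it. Amplitude and phase are arbitrary illustration values.
n = np.arange(SAMPLES_PER_CYCLE)
i_samples = 1000.0 * np.sqrt(2) * np.sin(2 * np.pi * n / SAMPLES_PER_CYCLE)

# A subscribing protection IED can estimate RMS over a one-cycle window.
rms = np.sqrt(np.mean(i_samples ** 2))
print(f"sampling rate: {FS} Hz, one-cycle RMS: {rms:.1f} A")  # ~1000 A
```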
Abstract:
This paper explores the genealogies of bio-power that cut across punitive state interventions aimed at regulating or normalising several distinctive ‘problem’ or ‘suspect’ deviant populations, such as state wards, unlawful non-citizens and Indigenous youth. It begins by making some general comments about the theoretical approach to bio-power taken in this paper. It then outlines the distinctive features of bio-power in Australia and how these intersected with the emergence of penal welfarism to govern the unruly, the unchaste, the unlawful and the primitive. The paper draws on three examples to illustrate the argument: the gargantuan criminalisation rates of Aboriginal youth, the history of incarcerating state wards in state institutions, and the mandatory detention of unlawful non-citizens and their children. The construction of Indigenous people as a dangerous presence, alongside the construction of the unruly, neglected children of the colony (the larrikin descendants of convicts) as necessitating special regimes of internal controls and institutions, found a counterpart in the racial and other exclusionary criteria operating through immigration controls for much of the twentieth century. In each case the problem child or population was expelled from the social body through forms of bio-power, rationalised as strengthening, protecting or purifying the Australian population.
Abstract:
Similarity solutions for flow over an impermeable, non-linearly (quadratically) stretching sheet were studied recently by Raptis and Perdikis (Int. J. Non-linear Mech. 41 (2006) 527–529) using a stream function of the form ψ = αx f(η) + βx²g(η). A fundamental error in their problem formulation is pointed out. On correction, it is shown that similarity solutions do not exist for this choice of ψ.
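One standard way to see the obstruction, sketched under the usual boundary-layer assumptions; this is a reconstruction, not a quotation from the comment:

```latex
% Requires amsmath. With \eta proportional to y (so \eta_x = 0), the
% velocity components implied by the assumed stream function are
%   u = \partial\psi/\partial y,  v = -\partial\psi/\partial x:
\begin{align*}
  u &\propto \alpha x f'(\eta) + \beta x^{2} g'(\eta), \\
  v &= -\alpha f(\eta) - 2\beta x\, g(\eta).
\end{align*}
% Substituting into the momentum equation u\,u_x + v\,u_y = \nu\,u_{yy}
% produces terms proportional to x, x^2 and x^3 whose coefficients must
% vanish separately: three ODEs for only two unknown functions f and g.
% The system is generically over-determined, so no similarity reduction
% follows for this two-term choice of \psi.
```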
Abstract:
There has been much conjecture of late as to whether the patentable subject matter standard contains a physicality requirement. The issue came to a head when the Federal Circuit introduced the machine-or-transformation test in In re Bilski and declared it to be the sole test for determining subject matter eligibility. Many commentators criticized the test, arguing that it is inconsistent with Supreme Court precedent and with the need for the patent system to respond appropriately to all new and useful innovation in whatever form it arises. Those criticisms were vindicated when, on appeal, the Supreme Court in Bilski v. Kappos dispensed with any suggestion that the patentable subject matter test involves a physicality requirement. In this article, the issue is addressed from a normative perspective: it asks whether the patentable subject matter test should contain a physicality requirement. The conclusion reached is that it should not, because such a limitation is not an appropriate means of encouraging much of the valuable innovation we are likely to witness during the Information Age. It is contended that not only are traditionally recognized mechanical, chemical and industrial manufacturing processes patent eligible, but that patent eligibility extends to include non-machine-implemented and non-physical methods that do not have any connection with a physical device and do not cause a physical transformation of matter. Concerns that there is a trend of overreaching commoditization or propertization, in which the boundaries of patent law have been expanded too far, are unfounded, since the strictures of novelty, nonobviousness and sufficiency of description will exclude undeserving subject matter from patentability. The argument made is that introducing a physicality requirement would have unintended adverse effects in various fields of technology, particularly those emerging technologies that are likely to have a profound social effect in the future.
Abstract:
Mesoporous bioactive glass (MBG) is a new class of biomaterials with a well-ordered nanochannel structure, whose in vitro bioactivity is far superior to that of non-mesoporous bioactive glass (BG); the material's in vivo osteogenic properties have, however, yet to be assessed. Porous silk scaffolds have been used for bone tissue engineering, but this material's osteoconductivity is far from optimal. The aims of this study were to incorporate MBG into silk scaffolds in order to improve their osteoconductivity, and then to compare the effects of MBG and BG on the in vivo osteogenesis of silk scaffolds. MBG/silk and BG/silk scaffolds with a highly porous structure were prepared by a freeze-drying method. The mechanical strength, in vitro apatite mineralization, silicon ion release and pH stability of the composite scaffolds were assessed. The scaffolds were implanted into calvarial defects in SCID mice and the degree of in vivo osteogenesis was evaluated by microcomputed tomography (μCT), hematoxylin and eosin (H&E) and immunohistochemistry (type I collagen) analyses. The results showed that MBG/silk scaffolds have better physicochemical properties (mechanical strength, in vitro apatite mineralization, Si ion release and pH stability) than BG/silk scaffolds. Both MBG and BG improved the in vivo osteogenesis of silk scaffolds. μCT and H&E analyses showed that MBG/silk scaffolds induced a slightly higher rate of new bone formation in the defects than did BG/silk scaffolds, and immunohistochemical analysis showed greater synthesis of type I collagen in MBG/silk than in BG/silk scaffolds.
Abstract:
Vehicular traffic in urban areas may adversely affect urban water quality through the build-up of traffic-generated semi- and non-volatile organic compounds (SVOCs and NVOCs) on road surfaces. Characterising the build-up processes is the key to developing mitigation measures for the removal of such pollutants from urban stormwater. An in-depth analysis of the build-up of SVOCs and NVOCs was undertaken in the Gold Coast region of Australia. Principal Component Analysis (PCA) and multicriteria decision tools such as PROMETHEE and GAIA were employed to understand SVOC and NVOC build-up under combined traffic scenarios of low, moderate and high traffic in different land uses. It was found that congestion in commercial areas and the use of lubricants and motor oils in industrial areas were the main sources of SVOCs and NVOCs on urban roads, respectively. The contribution from residential areas to the build-up of such pollutants was hardly noticeable. The investigation also revealed that the target SVOCs and NVOCs were mainly attached to particulate fractions of 75 to 300 µm, whilst the redistribution of coarse fractions due to vehicle activity occurred mainly in the >300 µm size range. Lastly, under the combined traffic scenario, moderate traffic with average daily traffic ranging from 2300 to 5900 and an average congestion of 0.47 was found to dominate SVOC and NVOC build-up on roads.
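A minimal sketch of the PCA step (the ordination typically run before PROMETHEE/GAIA ranking), applied to a hypothetical build-up matrix; the data, dimensions and site labels are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical build-up matrix: rows = road sites, columns = measured
# SVOC/NVOC loads per particle-size fraction (values are illustrative).
rng = np.random.default_rng(42)
site_labels = ["residential", "commercial", "industrial"] * 4
X = rng.lognormal(mean=1.0, sigma=0.5, size=(12, 6))

# Standardise, then project onto the first two principal components to
# reveal groupings of sites by land use and traffic scenario.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
for label, (pc1, pc2) in zip(site_labels, scores):
    print(f"{label:11s}  PC1={pc1:+.2f}  PC2={pc2:+.2f}")
```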
Abstract:
This paper develops a general theory of validation gating for non-linear, non-Gaussian models. Validation gates are used in target tracking to cull very unlikely measurement-to-track associations, before remaining association ambiguities are handled by a more comprehensive (and expensive) data association scheme. The essential property of a gate is to accept a high percentage of correct associations, thus maximising track accuracy, but provide a sufficiently tight bound to minimise the number of ambiguous associations. For linear Gaussian systems, the ellipsoidal validation gate is standard, and possesses the statistical property whereby a given threshold will accept a certain percentage of true associations. This property does not hold for non-linear, non-Gaussian models. As a system departs from linear-Gaussian, the ellipsoidal gate tends to reject a higher than expected proportion of correct associations and permit an excess of false ones. In this paper, the concept of the ellipsoidal gate is extended to provide correct statistics for the non-linear, non-Gaussian case. The new gate is demonstrated by a bearings-only tracking example.
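For reference, a minimal sketch of the standard ellipsoidal gate in the linear-Gaussian case that the paper generalises; the innovation covariance and measurement values are illustrative:

```python
import numpy as np
from scipy.stats import chi2

def gate(innovation, S, prob=0.99):
    """Accept a measurement if its normalised innovation squared falls
    inside the chi-square gate that accepts `prob` of true associations
    (exact only when innovations are Gaussian)."""
    d2 = innovation @ np.linalg.solve(S, innovation)  # squared Mahalanobis distance
    threshold = chi2.ppf(prob, df=innovation.size)
    return d2 <= threshold, d2, threshold

# Illustrative innovation covariance and a candidate measurement residual
# (measurement minus predicted measurement).
S = np.array([[2.0, 0.3],
              [0.3, 1.0]])
nu = np.array([1.5, -0.8])

accepted, d2, thr = gate(nu, S)
print(f"d^2 = {d2:.2f}, threshold = {thr:.2f}, accepted = {accepted}")
```

It is exactly this chi-square acceptance guarantee that breaks down away from the linear-Gaussian case, motivating the corrected gate the paper proposes.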
Abstract:
Estimating and predicting the degradation processes of engineering assets is crucial for reducing cost and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided while an asset is still in a decent health state. A practical difficulty in condition-based maintenance (CBM) is that, in most situations, degradation indicators extracted from CM data can only partially reveal asset health states. Underestimating this uncertainty in the relationship between degradation indicators and health states can cause excessive false alarms, or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using indicators that only partially reveal health states. However, existing state space models of asset degradation largely depend on assumptions such as discrete time, discrete states, linearity and Gaussianity. The discrete-time assumption requires that failures and inspections happen only at fixed intervals. The discrete-state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This research proposes a Gamma-based state space model, free of the discrete-time, discrete-state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research develops a continuous-state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model under various maintenance strategies. Optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies were performed in MATLAB; case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry were also conducted. The results show that the proposed Monte Carlo-based EM algorithm estimates model parameters accurately. They also show that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the gearbox accelerated life test better than linear Gaussian state space models. Furthermore, both the simulation and case studies show that the prediction algorithm based on the Gamma-based state space model can accurately identify the mean value and confidence interval of asset remaining useful lives. In addition, the simulation study shows that the proposed POSMDP-based maintenance strategy optimisation method is more flexible than one that assumes a predetermined strategy structure and uses renewal theory. Moreover, the simulation study also shows that the proposed method obtains more cost-effective strategies than a recently published maintenance strategy optimisation method, by simultaneously optimising the next maintenance activity and the waiting time until it.
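A hedged sketch of the gamma-process idea underlying such a model: Gamma-distributed increments give monotonic, non-Gaussian degradation paths, and Monte Carlo simulation from the current degradation level yields a remaining-useful-life (RUL) estimate. All parameters and the failure threshold are assumptions for illustration, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative gamma-process parameters and failure threshold.
SHAPE_RATE = 0.8    # shape accumulated per unit time
SCALE = 0.5         # scale of each increment
THRESHOLD = 10.0    # degradation level treated as failure

def rul_monte_carlo(current_level, n_paths=10_000, dt=1.0, t_max=100.0):
    """Monte Carlo estimate of RUL from `current_level`: simulate
    monotonically increasing paths until they cross the threshold."""
    ruls = []
    for _ in range(n_paths):
        level, t = current_level, 0.0
        while level < THRESHOLD and t < t_max:
            level += rng.gamma(SHAPE_RATE * dt, SCALE)  # Gamma increment
            t += dt
        ruls.append(t)
    ruls = np.array(ruls)
    return ruls.mean(), np.percentile(ruls, [5, 95])

mean_rul, ci = rul_monte_carlo(current_level=6.0)
print(f"mean RUL ~ {mean_rul:.1f}, 90% interval ~ [{ci[0]:.1f}, {ci[1]:.1f}]")
```

The thesis's model additionally treats the degradation level as only partially observed through CM indicators; the sketch above omits that observation layer for brevity.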
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability, and the range of non-invasive methods that has been investigated also presents limitations. Hence no “gold standard” test is currently available to assess tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and its reflection from the ocular surface is imaged on a charge-coupled device (CCD). The light is reflected from the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film; the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify the changes in the reflected pattern and to extract a time-series estimate of TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, within which a metric of TFSQ is calculated. Initially, two metrics based on Gabor filtering and Gaussian gradient-based techniques were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics helped demonstrate the applicability of HSV to assessing the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear, and to show a clear difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation,
while LSI appeared to be the most sensitive method for analyzing tear break-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-ring pattern into an image of quasi-straight lines from which a block statistic is extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations helped provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase offered some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing); unfortunately, existing techniques for modeling tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to selecting the appropriate model order so that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens wearers, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality.
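A minimal sketch of the polar-transform block metric described above, under assumed parameters: the synthetic ring image, block size and the particular block statistic are illustrative choices, not the thesis's exact implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, n_radii=128, n_angles=256):
    """Resample `img` around its centre onto a (radius, angle) grid, so
    concentric rings become quasi-straight horizontal lines."""
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    radii = np.linspace(0, min(cy, cx), n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    coords = np.stack([cy + r * np.sin(a), cx + r * np.cos(a)])
    return map_coordinates(img, coords, order=1)

def block_metric(polar_img, block=16):
    """One plausible block statistic: mean of per-block standard
    deviations across the polar image (the abstract does not specify
    the exact statistic used in the thesis)."""
    h, w = polar_img.shape
    cropped = polar_img[: h - h % block, : w - w % block]
    blocks = cropped.reshape(h // block, block, w // block, block)
    return blocks.std(axis=(1, 3)).mean()

# Synthetic Placido-like pattern: concentric rings with mild noise.
y, x = np.mgrid[-128:128, -128:128]
rings = 0.5 + 0.5 * np.cos(np.hypot(y, x) * 0.5)
rings += np.random.default_rng(0).normal(0, 0.05, rings.shape)

print(f"TFSQ block metric: {block_metric(to_polar(rings)):.4f}")
```

Tracking such a statistic frame by frame yields the TFSQ time series to which the fitting guidelines discussed above would then be applied.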