Abstract:
Conventional clinical therapies are unable to resolve osteochondral defects adequately; hence, tissue engineering solutions are sought to address the challenge. A biphasic implant seeded with mesenchymal stem cells (MSCs) and coupled with an electrospun membrane was evaluated as an alternative. This dual-phase construct comprised a polycaprolactone (PCL) cartilage scaffold and a polycaprolactone-tricalcium phosphate (PCL-TCP) osseous matrix. Autologous MSCs were seeded throughout the implant via fibrin, and the construct was inserted into critical-sized osteochondral defects located at the medial condyle and patellar groove of pigs. The defect was resurfaced with a PCL-collagen electrospun mesh that served as a substitute for a periosteal flap in preventing cell leakage. Controls without either the implanted MSCs or the resurfacing membrane were included. After 6 months, cartilaginous repair was observed with a low occurrence of fibrocartilage at the medial condyle. Osteochondral repair was promoted and host cartilage degeneration was arrested, as shown by superior glycosaminoglycan (GAG) maintenance. This positive morphological outcome was supported by a higher relative Young's modulus, which indicated functional cartilage restoration. Bone ingrowth and remodeling occurred in all groups, with a higher degree of mineralization in the experimental group. Tissue repair was compromised in the absence of the implanted cells or the resurfacing membrane. Moreover, healing was inferior at the patellar groove compared to the medial condyle, and this was attributed to the native biomechanical features.
Abstract:
The purpose of this article is to highlight the conflict in the policy objectives of subs 46(1) and subs 46(1AA) of the Trade Practices Act 1974 (Cth) (TPA). The policy objective of subs 46(1) is to promote competition and efficient markets for the benefit of consumers (the consumer welfare standard). It does not prohibit corporations with substantial market power from using cost savings arising from efficiencies, such as economies of scale or scope, to undercut small business competitors. The policy objective of subs 46(1AA), on the other hand, is to protect small business operators from price discounting by their larger competitors. Unlike subs 46(1), it does not contain a ‘taking advantage’ element. It is argued that subs 46(1AA) may harm consumer welfare by chilling price competition in cases where such competition would harm small business competitors.
Abstract:
While a number of factors have been highlighted in the innovation adoption literature, little is known about the key triggers of innovation adoption across differently sized firms. This study compares case studies of small, medium and large manufacturing firms that recently decided to adopt a process innovation. We also employ organizational surveys from 134 firms to investigate the factors that influence innovation adoption. The quantitative results support the qualitative findings that external pressures trigger adoption among small to medium firms, while adoption by larger firms was associated with internal causes.
Abstract:
In order to develop scientific literacy, students need the cognitive tools that enable them to read and evaluate science texts. One cognitive tool that has been widely used in science education to aid the development of conceptual understanding is concept mapping. However, it has been found that some students experience difficulty with concept map construction. This study reports on the development and evaluation of an instructional sequence that was used to scaffold the concept-mapping process when middle school students who were experiencing difficulty with science learning used concept mapping to summarise a chapter of a science text. In this study, individual differences in working memory functioning are suggested as one reason that students experience difficulty with concept map construction. The study was conducted using a design-based research methodology in the school’s learning support centre. The analysis of student work samples collected during the two-year study identified some of the difficulties and benefits associated with the use of scaffolded concept mapping with these students. The observations made during this study highlight the difficulty that some students experience with the use of concept mapping as a means of developing an understanding of science concepts, and the amount of instructional support that is required for such understanding to develop. Specifically, the findings of the study support the use of multi-component, multi-modal instructional techniques to facilitate the development of conceptual understanding in students who experience difficulty with science learning. In addition, the important roles of interactive dialogue and metacognition in the development of conceptual understanding are identified.
Abstract:
Understanding the complexities that are involved in the genetics of multifactorial diseases is still a monumental task. In addition to environmental factors that can influence the risk of disease, there are also a number of other complicating factors. Genetic variants associated with age of disease onset may be different from those variants associated with overall risk of disease, and variants may be located in positions that are not consistent with the traditional protein-coding genetic paradigm. Latent variable models are well suited for the analysis of genetic data. A latent variable is one that we do not directly observe, but which is believed to exist or is included for computational or analytic convenience in a model. This thesis presents a mixture of methodological developments utilising latent variables, and results from case studies in genetic epidemiology and comparative genomics. Epidemiological studies have identified a number of environmental risk factors for appendicitis, but the disease aetiology of this organ, often thought a useless vestige, remains largely a mystery. The effects of smoking on other gastrointestinal disorders are well documented, and in light of this, the thesis investigates the association between smoking and appendicitis through the use of latent variables. By utilising data from a large Australian twin study questionnaire as both a cohort and a case-control study, evidence is found for an association between tobacco smoking and appendicitis. Twin and family studies have also found evidence for the role of heredity in the risk of appendicitis. Results from previous studies are extended here to estimate the heritability of age at onset and account for the effect of smoking. This thesis presents a novel approach for performing a genome-wide variance components linkage analysis on transformed residuals from a Cox regression. This method finds evidence for a different subset of genes responsible for variation in age at onset than those associated with overall risk of appendicitis. Motivated by increasing evidence of functional activity in regions of the genome once thought of as evolutionary graveyards, this thesis develops a generalisation of the Bayesian multiple changepoint model on aligned DNA sequences for more than two species. This sensitive technique is applied to evaluating the distributions of evolutionary rates, with the finding that they are much more complex than previously apparent. We show strong evidence for at least 9 well-resolved evolutionary rate classes in an alignment of four Drosophila species and at least 7 classes in an alignment of four mammals, including human. A pattern of enrichment and depletion of genic regions in the profiled segments suggests they are functionally significant and most likely consist of various functional classes. Furthermore, a method of incorporating alignment characteristics representative of function, such as GC content and type of mutation, into the segmentation model is developed within this thesis. Evidence of fine-structured segmental variation is presented.
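As a rough illustration of the residual-transformation step described above, a minimal Python sketch is given below. It assumes the lifelines package and a hypothetical data file and column names; the subsequent variance-components linkage analysis is not shown.

# Minimal sketch: Cox regression on age at onset, then martingale residuals as a
# transformed phenotype for a downstream variance-components linkage analysis.
# Assumes the `lifelines` package; file and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

cols = ["age_at_onset", "had_appendicitis", "smoker", "sex"]
df = pd.read_csv("appendicitis_twins.csv")[cols]    # hypothetical data

cph = CoxPHFitter()
cph.fit(df, duration_col="age_at_onset", event_col="had_appendicitis")
cph.print_summary()                                 # effect of smoking on the onset hazard

# One residual per individual, usable as a quantitative trait in linkage analysis.
residuals = cph.compute_residuals(df, kind="martingale")
print(residuals.head())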
Abstract:
Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier. Further, the domains for the modelling of session variation were contrasted, revealing a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques due to the similarities in how they achieved their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further benefits in performance over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcomings of the traditional heuristic-based approach to dataset selection.
Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, whilst being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of impostor cohorts required in alternative techniques for speaker verification.
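A minimal sketch of the hybrid GMM mean supervector SVM idea mentioned above is given below, using scikit-learn and synthetic features; the data, dimensions and relevance factor are illustrative assumptions, and session-variability compensation and the full ASV pipeline are omitted.

# Minimal sketch of a GMM mean-supervector SVM speaker classifier.
# Assumes scikit-learn; features, dimensions and speakers are synthetic stand-ins.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
DIM, N_MIX = 12, 8                                  # feature dimension, GMM components

# 1) Train a background GMM (stand-in for a universal background model).
background = rng.normal(size=(5000, DIM))
ubm = GaussianMixture(n_components=N_MIX, covariance_type="diag",
                      random_state=0).fit(background)

def supervector(features, ubm, relevance=16.0):
    """Relevance-MAP adapt the UBM means to one utterance and stack them."""
    resp = ubm.predict_proba(features)              # (frames, N_MIX) responsibilities
    n_k = resp.sum(axis=0)                          # soft counts per component
    f_k = resp.T @ features                         # first-order statistics
    alpha = (n_k / (n_k + relevance))[:, None]
    adapted = alpha * (f_k / np.maximum(n_k[:, None], 1e-8)) + (1 - alpha) * ubm.means_
    return adapted.ravel()                          # (N_MIX * DIM,) mean supervector

# 2) Supervectors for target-speaker and impostor (background) utterances.
target_utts   = [rng.normal(loc=0.5, size=(300, DIM)) for _ in range(10)]
impostor_utts = [rng.normal(loc=0.0, size=(300, DIM)) for _ in range(50)]
X = np.vstack([supervector(u, ubm) for u in target_utts + impostor_utts])
y = np.array([1] * len(target_utts) + [0] * len(impostor_utts))

# 3) Linear SVM on supervectors; score a test utterance with the decision function.
svm = SVC(kernel="linear").fit(X, y)
test_sv = supervector(rng.normal(loc=0.5, size=(300, DIM)), ubm)
print("verification score:", float(svm.decision_function(test_sv[None, :])))

In a real system the impostor list would be drawn from a large candidate set, which is exactly the set that the background dataset selection technique above aims to refine.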
Abstract:
Purpose: The Hong Kong Special Administrative Region (referred to as Hong Kong from here onwards) is a leading international commercial hub, particularly in Asia. In order to maintain this reputation, a number of large public works projects have been considered. Public Private Partnership (PPP) has increasingly been suggested for these projects, but the suitability of using this procurement method in Hong Kong is yet to be studied empirically. The findings presented in this paper specifically consider whether PPPs should be used to procure public works projects in Hong Kong by studying the attractive and negative factors for adopting PPP. Design/methodology/approach: As part of this study, a questionnaire survey was conducted with industrial practitioners. The respondents were requested to rank the importance of fifteen attractive factors and thirteen negative factors for adopting PPP. Findings: The results showed that, in general, the top attractive factors ranked by respondents from Hong Kong were efficiency related; these included (1) ‘Provide an integrated solution (for public infrastructure / services)’; (2) ‘Facilitate creative and innovative approaches’; and (3) ‘Solve the problem of public sector budget restraint’. It was found that Australian respondents shared similar findings to those in Hong Kong, but the United Kingdom respondents gave a higher priority to economically driven attractive factors. Also, the ranking of the attractive and negative factors for adopting PPP showed that, on average, the attractive factors were scored higher than the negative factors. Originality/value: The results of this research have enabled a comparison of the attractive and negative factors for adopting PPP across three administrative systems. These findings confirm that PPP is a suitable means of procuring large public projects, which is believed to be useful and interesting to PPP researchers and practitioners.
Abstract:
In this paper, the authors propose a new structure for the decoupling of circulant symmetric arrays of more than four elements. In this case, network element values are again obtained through a process of repeated eigenmode decoupling, here by solving sets of nonlinear equations. However, the resulting circuit is much simpler and can be implemented on a single layer. The corresponding circuit topology for the 6-element array is shown in circuit diagrams. The procedure is illustrated by considering several examples.
Abstract:
Typical quadrotor aerial robots used in research weigh only a few kilograms and carry payloads measured in hundreds of grams. Several obstacles in design and control must be overcome to cater for expected industry demands that push the boundaries of existing quadrotor performance. The X-4 Flyer, a 4 kg quadrotor with a 1 kg payload, is intended to be prototypical of useful commercial quadrotors. The custom-built craft uses tuned plant dynamics with an onboard embedded attitude controller to stabilise flight. Independent linear SISO controllers were designed to regulate flyer attitude. The performance of the system is demonstrated in indoor and outdoor flight.
Abstract:
This paper describes technologies we have developed to perform autonomous large-scale off-world excavation. A scale dragline excavator, of a size similar to that required for lunar excavation, was made capable of autonomous control. Systems have been put in place to allow remote operation of the machine from anywhere in the world. Algorithms have been developed for complete autonomous digging and dumping of material, taking into account machine and terrain constraints and regolith variability. Experimental results are presented showing the ability to autonomously excavate and move large amounts of regolith and accurately place it at a specified location.
Abstract:
In this paper we describe the development of a three-dimensional (3D) imaging system for a 3500 tonne mining machine (dragline). Draglines are large walking cranes used for removing the dirt that covers a coal seam. Our group has been developing a dragline swing automation system since 1994. The system so far has been 'blind' to its external environment. The work presented in this paper attempts to give the dragline an ability to sense its surroundings. A 3D digital terrain map (DTM) is created from data obtained from a two-dimensional laser scanner while the dragline swings. Experimental data from an operational dragline are presented.
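A minimal geometric sketch of the idea, turning one 2D scan line plus the current swing angle into 3D terrain points and gridding them into a DTM, is given below; the coordinate conventions, mounting height and offsets are illustrative assumptions, not the actual dragline configuration.

# Minimal sketch: project 2D laser ranges into 3D using the swing angle, then grid a DTM.
# Frame conventions, scanner height and boom offset are illustrative assumptions.
import numpy as np

def scan_to_points(ranges, scan_angles, swing_angle,
                   scanner_height=40.0, boom_offset=5.0):
    """ranges, scan_angles: one 2D scan (metres, radians in the vertical scan plane);
    swing_angle: machine swing angle (radians). Returns an N x 3 array of points."""
    reach = ranges * np.cos(scan_angles) + boom_offset   # horizontal distance from swing axis
    drop  = ranges * np.sin(scan_angles)                 # vertical drop below the scanner
    x = reach * np.cos(swing_angle)                      # rotate the scan plane about the
    y = reach * np.sin(swing_angle)                      # vertical swing axis
    z = scanner_height - drop
    return np.column_stack([x, y, z])

def grid_dtm(points, cell=1.0):
    """Accumulate points from many scans into a simple gridded terrain map."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    dtm = {}
    for (i, j), z in zip(map(tuple, ij), points[:, 2]):
        dtm[(i, j)] = max(z, dtm.get((i, j), -np.inf))   # keep the highest return per cell
    return dtm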
Abstract:
Large deformation analysis is one of the major challenges in the numerical modelling and simulation of metal forming. Because no mesh is used, meshfree methods show good potential for large deformation analysis. In this paper, a local meshfree formulation, based on local weak forms and the updated Lagrangian (UL) approach, is developed for large deformation analysis. To fully exploit the advantages of meshfree methods, a simple and effective adaptive technique is proposed; this procedure is much easier than re-meshing in FEM. Numerical examples of large deformation analysis are presented to demonstrate the effectiveness of the newly developed nonlinear meshfree approach. It has been found that the developed meshfree technique provides superior performance to conventional FEM in dealing with large deformation problems in metal forming.
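For context, the sketch below shows a one-dimensional moving least squares (MLS) approximation, the kind of mesh-independent shape-function construction that meshfree methods build on; it is a generic illustration, not the paper's local weak-form formulation, and the node layout, weight function and support size are assumptions.

# Minimal sketch of 1D moving least squares (MLS) shape functions with a linear basis.
# A generic meshfree building block; node layout, weight and support size are assumptions.
import numpy as np

def mls_shape_functions(x, nodes, support=0.3):
    """Return MLS shape function values phi_i(x) at point x for all nodes."""
    p_x = np.array([1.0, x])                    # linear basis evaluated at x
    A = np.zeros((2, 2))
    B = np.zeros((2, len(nodes)))
    for i, xi in enumerate(nodes):
        r = abs(x - xi) / support
        w = np.exp(-(r / 0.4) ** 2) if r <= 1.0 else 0.0   # Gaussian weight, compact support
        p_i = np.array([1.0, xi])
        A += w * np.outer(p_i, p_i)
        B[:, i] = w * p_i
    return p_x @ np.linalg.solve(A, B)          # phi(x), one value per node

nodes = np.linspace(0.0, 1.0, 11)
u_nodes = np.sin(np.pi * nodes)                 # nodal values of a test field
phi = mls_shape_functions(0.37, nodes)
print("partition of unity:", phi.sum())         # ~1.0
print("MLS value:", phi @ u_nodes, "exact:", np.sin(np.pi * 0.37))

Because the approximation is rebuilt node-by-node at each evaluation point, nodes can be added or removed adaptively without any remeshing, which is why adaptivity is much easier here than in FEM, as noted above.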
Abstract:
In this paper we describe the Large Margin Vector Quantization algorithm (LMVQ), which uses gradient ascent to maximise the margin of a radial basis function classifier. We present a derivation of the algorithm, which proceeds from an estimate of the class-conditional probability densities. We show that the key behaviour of Kohonen's well-known LVQ2 and LVQ3 algorithms emerges as a natural consequence of our formulation. We compare the performance of LMVQ with that of Kohonen's LVQ algorithms on an artificial classification problem and several well-known benchmark classification tasks. We find that the classifiers produced by LMVQ attain a level of accuracy that compares well with those obtained via LVQ1, LVQ2 and LVQ3, with reduced storage complexity. We indicate future directions of enquiry based on the large margin approach to Learning Vector Quantization.
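As a rough, simplified illustration of margin-driven prototype updates (a GLVQ-style rule in the same spirit, not the paper's exact LMVQ derivation), the following sketch trains one prototype per class on synthetic 2D data; the data and hyperparameters are illustrative.

# Simplified margin-based prototype update (GLVQ-style), illustrating the idea of
# moving prototypes to improve a relative margin; not the paper's exact LMVQ algorithm.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.5, (200, 2)), rng.normal(+1, 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

protos = np.array([X[y == c].mean(axis=0) for c in (0, 1)])   # one prototype per class
proto_labels = np.array([0, 1])
lr = 0.05

for epoch in range(20):
    for xi, yi in zip(X, y):
        d = ((protos - xi) ** 2).sum(axis=1)                  # squared distances
        same, diff = proto_labels == yi, proto_labels != yi
        i_same = np.flatnonzero(same)[d[same].argmin()]       # nearest correct prototype
        i_diff = np.flatnonzero(diff)[d[diff].argmin()]       # nearest incorrect prototype
        d_same, d_diff = d[i_same], d[i_diff]
        denom = (d_same + d_diff) ** 2
        # Gradient of the relative margin (d_same - d_diff) / (d_same + d_diff):
        # pull the correct prototype towards the sample, push the incorrect one away.
        protos[i_same] += lr * (4.0 * d_diff / denom) * (xi - protos[i_same])
        protos[i_diff] -= lr * (4.0 * d_same / denom) * (xi - protos[i_diff])

pred = proto_labels[((protos[None, :, :] - X[:, None, :]) ** 2).sum(axis=2).argmin(axis=1)]
print("training accuracy:", (pred == y).mean())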
Abstract:
World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies.

In power system monitoring there exist two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model.
The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
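As a rough illustration of the KID's whiteness test described above, the sketch below runs a scalar Kalman filter with a nominal random-walk model, injects an oscillation partway through the record, and flags spectral bins of the normalised innovation that exceed a chi-squared threshold; the model, signal and threshold level are illustrative assumptions, not the thesis's actual detectors.

# Minimal sketch of a Kalman-innovation spectral test: when the nominal model holds,
# the normalised innovation is white and its periodogram bins are ~chi-squared(2);
# a deteriorated mode shows up as bins exceeding a chi-squared threshold.
# The scalar random-walk model and the injected oscillation are illustrative assumptions.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n, fs = 1200, 10.0                                  # samples, sample rate (Hz)
t = np.arange(n) / fs

# Measurements: a slowly drifting level plus noise; after t = 60 s a poorly damped
# 1.5 Hz oscillation appears (a stand-in for a deteriorated mode).
y = np.cumsum(rng.normal(0, 0.01, n)) + rng.normal(0, 0.1, n)
y[t > 60] += 0.3 * np.sin(2 * np.pi * 1.5 * t[t > 60])

# Scalar Kalman filter matched to the nominal (no-oscillation) random-walk model.
q, r = 0.01 ** 2, 0.1 ** 2
x_hat, p = 0.0, 1.0
innov = np.empty(n)
for k in range(n):
    p += q                                          # predict
    e = y[k] - x_hat                                # innovation
    s = p + r                                       # innovation variance
    g = p / s                                       # Kalman gain
    x_hat += g * e
    p *= (1 - g)
    innov[k] = e / np.sqrt(s)                       # unit variance when the model is valid

# Spectral whiteness test on roughly the last minute of data.
window = innov[-600:]
psd = 2 * np.abs(np.fft.rfft(window)) ** 2 / len(window)   # ~chi2(2) per bin when white
freqs = np.fft.rfftfreq(len(window), 1 / fs)
alarm = freqs[psd > chi2.ppf(0.999, df=2)]
print("alarm frequencies (Hz):", alarm[:5])         # expect a flag near 1.5 Hz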