419 results for Application specific instruction-set processor
Abstract:
My journey with Peer Assisted Study Sessions, or Supplemental Instruction (SI), began in 1993 when I took over a first-year, first-semester unit in QUT's Bachelor of Engineering program. The unit had 500 enrolments, with students from all 10 engineering majors at QUT. The 500 students received a two-hour lecture and a one-hour tutorial per week, usually run by academic staff or postgraduate students. The unit covered basic mechanics, which comprises a challenging set of topics on how forces interact with various bodies. One normally expects first-year students to find it difficult to come to grips with the material. However, when I took on that unit in 1993, the failure rate had typically been around 50%.
Abstract:
Currently, the Bachelor of Design is the generic degree offered to the four disciplines of Architecture, Landscape Architecture, Industrial Design and Interior Design within the School of Design at the Queensland University of Technology. Regardless of discipline, Digital Communication is a core unit taken by the 600 first-year students entering the Bachelor of Design degree. Within the design disciplines, the communication of the designer's intentions is achieved primarily through the use of graphic images, with written information considered supportive or secondary. As such, Digital Communication attempts to educate learners in the fundamentals of this graphic design communication, using a generic digital or software tool. Past iterations of the unit did not acknowledge the subtle differences in design communication between the design disciplines involved, and used a single generic software tool. Following a review of the unit in 2008, it was decided that a single generic software tool was no longer entirely sufficient. This decision was based on the recognition of the increasing emergence of discipline-specific digital tools, and of an expressed student desire, and apparent aptitude, to learn these discipline-specific tools. As a result, the unit was restructured in 2009 to offer both discipline-specific and generic software instruction, as elected by the student. This paper, apart from offering the general context and pedagogy of the existing and restructured units, will more importantly offer research data that validates the changes made to the unit. Most significant among these new data are the results of surveys that test actual student aptitude against desire in learning discipline-specific tools. This is done by examining student self-efficacy in problem resolution and technological prowess, both generally and specifically within the unit. More traditional means of validation are also presented, including the results of the generic university-wide Learning Experience Survey for the unit, as well as a comparison of the assessment results of the restructured unit against those of the previous year.
Abstract:
Campylobacter jejuni, followed by Campylobacter coli, contribute substantially to the economic and public health burden attributed to food-borne infections in Australia. Genotypic characterisation of isolates has provided new insights into the epidemiology and pathogenesis of C. jejuni and C. coli. However, currently available methods are not conducive to the large-scale epidemiological investigations that are necessary to elucidate the global epidemiology of these common food-borne pathogens. This research aims to develop high-resolution C. jejuni and C. coli genotyping schemes that are convenient for high-throughput applications. Real-time PCR and High Resolution Melt (HRM) analysis are fundamental to the genotyping schemes developed in this study and enable rapid, cost-effective interrogation of a range of different polymorphic sites within the Campylobacter genome. While the sources and routes of transmission of campylobacters are unclear, handling and consumption of poultry meat is frequently associated with human campylobacteriosis in Australia. Therefore, chicken-derived C. jejuni and C. coli isolates were used to develop and verify the methods described in this study. The first aim of this study describes the application of MLST-SNP (Multilocus Sequence Typing Single Nucleotide Polymorphism) + binary typing to 87 chicken C. jejuni isolates using real-time PCR analysis. These typing schemes were developed previously by our research group using isolates from campylobacteriosis patients. The present study showed that SNP and binary typing, alone or in combination, are effective at detecting epidemiological linkage between chicken-derived Campylobacter isolates and enable data comparisons with other MLST-based investigations. SNP + binary types obtained from chicken isolates in this study were compared with a set of human isolates previously typed by SNP + binary typing and MLST. Common genotypes between the two collections of isolates were identified, and ST-524 represented a clone that could be worth monitoring in the chicken meat industry. In contrast, ST-48, mainly associated with bovine hosts, was abundant in the human isolates. This genotype was, however, absent from the chicken isolates, indicating the role of non-poultry sources in causing human Campylobacter infections. This demonstrates the potential application of SNP + binary typing for epidemiological investigations and source tracing. While MLST SNPs and binary genes comprise the more stable backbone of the Campylobacter genome and are indicative of long-term epidemiological linkage of the isolates, the development of a High Resolution Melt (HRM) based curve analysis method to interrogate the hypervariable Campylobacter flagellin-encoding gene (flaA) is described in Aim 2 of this study. The flaA gene product appears to be an important pathogenicity determinant of campylobacters and is therefore a popular target for genotyping, especially for short-term epidemiological studies such as outbreak investigations. HRM curve analysis based flaA interrogation is a single-step, closed-tube method that provides portable data that can be easily shared and accessed. Critical to the development of flaA HRM was the use of flaA-specific primers that did not amplify the flaB gene. HRM curve analysis of flaA was successful at discriminating the 47 sequence variants identified within the 87 C. jejuni and 15 C. coli isolates and correlated with the epidemiological background of the isolates.
In the combinatorial format, the resolving power of flaA was additive to that of SNP + binary typing and CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) HRM, and fits the PHRANA (Progressive Hierarchical Resolving Assays using Nucleic Acids) approach to genotyping. The use of statistical methods to analyse the HRM data enhanced the sophistication of the method. Therefore, flaA HRM is a rapid and cost-effective alternative to gel- or sequence-based flaA typing schemes. Aim 3 of this study describes the development of a novel bioinformatics-driven method to interrogate Campylobacter MLST gene fragments using HRM, called 'SNP Nucleated Minim MLST' or 'Minim typing'. The method involves HRM interrogation of MLST fragments that encompass highly informative 'Nucleating SNPs' to ensure high resolution. Selection of fragments potentially suited to HRM analysis was conducted in silico using i) the 'Minimum SNPs' and ii) the new 'HRMtype' software packages. Species-specific sets of six 'Nucleating SNPs' and six HRM fragments were identified for both C. jejuni and C. coli to ensure high typeability and resolution relevant to the MLST database. 'Minim typing' was tested empirically by typing 15 C. jejuni and five C. coli isolates. The association of clonal complexes (CC) with each isolate by 'Minim typing' and by SNP + binary typing was used to compare the two MLST interrogation schemes. The CCs linked with each C. jejuni isolate were consistent between the two methods. Thus, 'Minim typing' is an efficient and cost-effective method to interrogate MLST genes. However, it is not expected to be independent of, or to match the resolution of, sequence-based MLST gene interrogation. 'Minim typing' in combination with flaA HRM is envisaged to comprise a highly resolving combinatorial typing scheme developed around the HRM platform that is amenable to automation and multiplexing. The genotyping techniques described in this thesis involve the combinatorial interrogation of differentially evolving genetic markers on the unified real-time PCR and HRM platform. They provide high resolution and are simple, cost-effective and ideally suited to rapid, high-throughput genotyping of these common food-borne pathogens.
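As a minimal illustration of the kind of curve comparison that underlies HRM-based genotype assignment, the sketch below matches a normalised melt curve against reference profiles using the mean absolute difference and picks the closest reference. The curve shapes, type names and the nearest-curve rule are illustrative assumptions only; they are not the statistical analysis used in this study.

```python
# Hedged sketch: assigning an HRM genotype by nearest normalised melt curve.
# Synthetic sigmoid "melt curves" stand in for real fluorescence data.
import numpy as np

def normalise(fluorescence: np.ndarray) -> np.ndarray:
    """Scale a raw melt curve to the 0-1 range so curves are comparable."""
    lo, hi = fluorescence.min(), fluorescence.max()
    return (fluorescence - lo) / (hi - lo)

def assign_genotype(unknown: np.ndarray, references: dict) -> str:
    """Return the reference type whose normalised curve is closest
    (mean absolute difference) to the unknown isolate's curve."""
    unknown = normalise(unknown)
    distances = {name: float(np.mean(np.abs(unknown - normalise(curve))))
                 for name, curve in references.items()}
    return min(distances, key=distances.get)

# Synthetic curves sampled over the same temperature grid (degrees C).
temps = np.linspace(75.0, 90.0, 151)
references = {
    "flaA-type-1": 1.0 / (1.0 + np.exp(temps - 82.0)),   # melts around 82 C
    "flaA-type-2": 1.0 / (1.0 + np.exp(temps - 84.5)),   # melts around 84.5 C
}
unknown_curve = 1.0 / (1.0 + np.exp(temps - 82.2))
print(assign_genotype(unknown_curve, references))          # expected: flaA-type-1
```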
Abstract:
The Queensland University of Technology (QUT) allows the presentation of theses for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of ten published/submitted papers and book chapters, of which nine have been published and one is under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of investigating multilevel topologies for high-quality and high-power applications, with specific emphasis on renewable energy systems. The rapid evolution of renewable energy within the last several years has resulted in the design of efficient power converters suitable for medium- and high-power applications such as wind turbine and photovoltaic (PV) systems. Today, the industrial trend is moving away from heavy and bulky passive components towards power converter systems that use more and more semiconductor elements controlled by powerful processor systems. However, it is hard to connect traditional converters to high- and medium-voltage grids, as a single power switch cannot withstand such high voltages. For these reasons, a new family of multilevel inverters has emerged as a solution for working with higher voltage levels. Besides this important feature, multilevel converters have the capability to generate stepped waveforms. Consequently, in comparison with conventional two-level inverters, they present lower switching losses, lower voltage stress across loads, lower electromagnetic interference (EMI) and higher-quality output waveforms. These properties enable the connection of renewable energy sources directly to the grid without using expensive, bulky, heavy line transformers. Additionally, they minimise the size of the passive filter and increase the durability of electrical devices. However, multilevel converters have only been utilised in very particular applications, mainly due to the structural limitations, high cost and complexity of the multilevel converter system and its control. New developments in the fields of power semiconductor switches and processors will favour multilevel converters in many other fields of application. The main application for the multilevel converter presented in this work is the front-end power converter in renewable energy systems. Diode-clamped and cascade converters are the most common types of multilevel converters, widely used in different renewable energy system applications. However, some drawbacks, such as capacitor voltage imbalance, component count and complexity of the control system, still exist, and these are investigated in the framework of this thesis. Various simulations using software simulation tools are undertaken and used to study different cases. The feasibility of the developments is underlined with a series of experimental results. This thesis is divided into two main sections. The first section focuses on solving the capacitor voltage imbalance for a wide range of applications, and on decreasing the complexity of the control strategy on the inverter side. The idea of using sharing switches at the output structure of the DC-DC front-end converters is proposed to balance the series DC link capacitors. A new family of multi-output DC-DC converters is proposed for renewable energy systems connected to the DC link voltage of diode-clamped converters.
The main objective of this type of converter is the sharing of the total output voltage into several series voltage levels using sharing switches. This solves the problems associated with capacitor voltage imbalance in diode-clamped multilevel converters. These converters adjust the variable and unregulated DC voltage generated by renewable energy systems (such as PV) to the desired series multiple voltage levels at the inverter DC side. A multi-output boost (MOB) converter, with one inductor and series output voltages, is presented. This converter is suitable for renewable energy systems based on diode-clamped converters because it boosts the low output voltage and provides the series capacitors at the output side. A simple control strategy using cross-voltage control with an internal current loop is presented to obtain the desired voltage levels at the output. The proposed topology and control strategy are validated by simulation and hardware results. Using the idea of voltage-sharing switches, the circuit structures of different topologies of multi-output DC-DC converters, or multi-output voltage-sharing (MOVS) converters, have been proposed. In order to verify the feasibility of these topologies and their application, steady-state and dynamic analyses have been carried out. Simulation and experiments using the proposed control strategy have verified the mathematical analysis. The second part of this thesis addresses the second problem of multilevel converters: the need to improve their quality with minimum cost and complexity. This relates to utilising asymmetrical multilevel topologies instead of conventional multilevel converters; these can increase the quality of output waveforms with a minimum number of components. They also allow for a reduction in the cost and complexity of systems while maintaining the same output quality, or for an increase in quality while maintaining the same cost and complexity. Therefore, the asymmetrical configuration for two common types of multilevel converters, diode-clamped and cascade converters, is investigated. In addition to maximising the output voltage resolution, some technical issues, such as adjacent switching vectors, should be taken into account in asymmetrical multilevel configurations to keep the total harmonic distortion (THD) and switching losses to a minimum. Thus, the asymmetrical diode-clamped converter is proposed. An appropriate asymmetrical DC link arrangement is presented for four-level diode-clamped converters that keeps adjacent switching vectors. In this way, five-level inverter performance is achieved with the same level of complexity as the four-level inverter. Dealing with the capacitor voltage imbalance problem in asymmetrical diode-clamped converters has inspired the proposal of two different DC-DC topologies with a suitable control strategy. A Triple-Output Boost (TOB) converter and a Boost 3-Output Voltage Sharing (Boost-3OVS) converter connected to the four-level diode-clamped converter are proposed to arrange the proposed asymmetrical DC link for high modulation indices and unity power factor. Cascade converters have shown their abilities and strengths in medium- and high-power applications. Using asymmetrical H-bridge inverters, more voltage levels can be generated in the output voltage with the same number of components as symmetrical converters.
The concept of cascading multilevel H-bridge cells is used to propose a fifteen-level cascade inverter using a four-level H-bridge symmetrical diode-clamped converter cascaded with classical two-level H-bridge inverters. A DC voltage ratio of the cells is presented to obtain the maximum number of voltage levels in the output voltage, with adjacent switching vectors between all possible voltage levels; this can minimise the switching losses. This structure can save five isolated DC sources and twelve switches in comparison with conventional cascade converters built from series two-level H-bridge inverters. To increase the quality of the presented hybrid topology with a minimum number of components, a new cascade inverter is verified by cascading an asymmetrical four-level H-bridge diode-clamped inverter. An inverter with nineteen-level performance was achieved. This synthesises more voltage levels with lower voltage and current THD than a symmetrical diode-clamped inverter with the same configuration and an equivalent number of power components. Two different predictive current control methods for switching state selection are proposed to minimise either the losses or the voltage THD of the hybrid converters. High voltage spikes at switching time in the experimental results, and investigation of the diode-clamped inverter structure, raised another problem associated with high-level, high-voltage multilevel converters. Power switching components with fast switching, combined with hard-switched converters, produce high di/dt during turn-off time. Thus, the stray inductance of interconnections becomes an important issue and raises overvoltage and EMI issues correlated with the number of components. A planar busbar is a good candidate to reduce interconnection inductance in high-power inverters compared with cables. The effect of different transient current loops on the busbar physical structure of high-voltage, high-level diode-clamped converters is highlighted. Design considerations for a proper planar busbar are also presented to optimise the overall design of diode-clamped converters.
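To illustrate why asymmetrical DC-link ratios increase the number of distinct output voltage levels in cascaded H-bridge topologies, the sketch below enumerates the levels produced by two cells under textbook symmetrical, binary and trinary ratios. These ratios are generic examples, not the DC voltage ratios proposed in the thesis.

```python
# Hedged sketch: counting output levels of a cascaded H-bridge inverter.
# Each cell contributes -V, 0 or +V; the output is the sum over cells.
from itertools import product

def output_levels(cell_voltages):
    """Return the sorted set of distinct output levels (in units of the
    smallest cell voltage) reachable by the cascade."""
    levels = {sum(v * s for v, s in zip(cell_voltages, states))
              for states in product((-1, 0, 1), repeat=len(cell_voltages))}
    return sorted(levels)

for name, ratios in [("symmetrical 1:1", (1, 1)),
                     ("binary 1:2", (1, 2)),
                     ("trinary 1:3", (1, 3))]:
    lv = output_levels(ratios)
    print(f"{name}: {len(lv)} levels -> {lv}")
# symmetrical 1:1 -> 5 levels; binary 1:2 -> 7 levels; trinary 1:3 -> 9 levels,
# i.e. the same hardware yields a finer stepped waveform (lower THD) when the
# DC sources are asymmetrical.
```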
Abstract:
Dentists have the privilege of possessing, administering and prescribing drugs, including highly addictive medications, to their patients. But because drugs can be abused by any member of society, including dentists and their patients, and because drugs can be dangerous, they are tightly regulated in Canada by the federal and provincial/territorial governments. Regulatory and professional dental bodies also provide guidance for their members about how best to administer and prescribe drugs. This chapter outlines the regulation by federal and provincial/territorial governments in this area, and examines the professional practice requirements set out by regulatory/professional bodies and the issue of drug abuse by dental professionals and patients. It is important to note from the outset that governmental and professional regulations, policies and practices differ from province to province and territory to territory. This chapter aims to alert dentists to possible legal and professional issues surrounding the possession, administration and prescription of drugs. For detailed, specific information about regulation, policies, ethical standards and professional practice standards in Canada or their province/territory, dentists should contact their insurer or professional association.
Abstract:
Introduction: Ovine models are widely used in orthopaedic research. To better understand the impact of orthopaedic procedures, computer simulations are necessary. 3D finite element (FE) models of bones allow implant designs to be investigated mechanically, thereby reducing mechanical testing. Hypothesis: We present the development and validation of an ovine tibia FE model for use in the analysis of tibia fracture fixation plates. Material & Methods: Mechanical testing of the tibia consisted of an offset three-point bend test with three repetitions of loading to 350 N and return to 50 N. Tri-axial stacked strain gauges were applied to the anterior and posterior surfaces of the bone, and two rigid bodies, consisting of eight infrared active markers, were attached to the ends of the tibia. Positional measurements were taken with a FARO arm 3D digitiser. The FE model was constructed with both geometry and material properties derived from CT images of the bone. The elasticity-density relationship used for material property determination was validated separately using mechanical testing. This model was then transformed to the same coordinate system as the in vitro mechanical test and loads were applied. Results: Comparison between the mechanical testing and the FE model showed good correlation in surface strains (difference: anterior 2.3%, posterior 3.2%). Discussion & Conclusion: This method of model creation provides a simple way to generate subject-specific FE models from CT scans. The use of the CT data set for both the geometry and the material properties ensures a more accurate representation of the specific bone. This is reflected in the similarity of the surface strain results.
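A minimal sketch of CT-based material property assignment of the kind described above is given below: Hounsfield units are converted to apparent density and then to an elastic modulus via a power law. The calibration constants and power-law coefficients are illustrative placeholders; the thesis derived and separately validated its own elasticity-density relationship.

```python
# Hedged sketch: mapping CT intensities to bone material properties.
# All constants below are illustrative assumptions, not the validated values.
import numpy as np

def hu_to_density(hu: np.ndarray, a: float = 0.0008, b: float = 0.1) -> np.ndarray:
    """Linear calibration from Hounsfield units to apparent density (g/cm^3)."""
    return a * hu + b

def density_to_modulus(rho: np.ndarray, c: float = 6850.0, p: float = 1.49) -> np.ndarray:
    """Power-law density-elasticity relationship, E = c * rho**p (MPa)."""
    return c * np.power(rho, p)

hu_samples = np.array([200.0, 800.0, 1500.0])   # element-averaged HU values
rho = hu_to_density(hu_samples)
E = density_to_modulus(rho)
for h, r, e in zip(hu_samples, rho, E):
    print(f"HU {h:6.0f} -> rho {r:.2f} g/cm^3 -> E {e:7.0f} MPa")
```

In practice each FE element would receive its own modulus from the CT voxels it encloses, which is what ties the model's stiffness distribution to the specific bone being simulated.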
Abstract:
Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is also a significant issue for the Port of Brisbane, as it is located in an area of high environmental value. Therefore, it is imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement, where necessary. To this end, the Port of Brisbane Corporation aimed to develop a port-specific stormwater model for the Fisherman Islands facility. This need has to be considered in the context of the proposed future developments of the Port area. ----------------- The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is that it seeks to undertake research that assists the Port in strengthening its environmental custodianship of the Port area through 'cutting edge' research and its translation into practical application. ------------------ The project was separated into two stages. The first stage developed a quantitative understanding of the pollutant load generation potential of the existing land uses. This knowledge was then used as input for the stormwater quality model developed in the subsequent stage. The aim is to expand this model across the yet-to-be-developed port expansion area, in order to predict pollutant loads associated with stormwater flows from this area, with the longer-term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios. ----------------- Study approach: Stage 1 of the overall study confirmed that Port land uses are unique in terms of the anthropogenic activities occurring on them. This uniqueness in land use results in distinctive stormwater quality characteristics different from those of conventional urban land uses. Therefore, it was not scientifically valid to consider the Port as belonging to a single land use category or to consider it as being similar to any typical urban land use. The approach adopted in this study was very different from conventional modelling studies, where modelling parameters are developed using calibration. The field investigations undertaken in Stage 1 of the overall study helped to create fundamental knowledge on pollutant build-up and wash-off in different Port land uses. This knowledge was then used in computer modelling so that the specific characteristics of pollutant build-up and wash-off could be replicated. This meant that no calibration processes were involved, due to the use of measured parameters for build-up and wash-off. ---------------- Conclusions: Stage 2 of the study was primarily undertaken using the SWMM stormwater quality model. It is a physically based model which replicates natural processes as closely as possible. The time step used and the catchment variability considered were adequate to accommodate the temporal and spatial variability of input parameters, and the parameters used in the modelling reflect the true nature of rainfall-runoff and pollutant processes to the best of currently available knowledge. In this study, the initial loss values adopted for the impervious surfaces are relatively high compared to values noted in the research literature.
However, given the scientifically valid approach used for the field investigations, it is appropriate to adopt the initial losses derived from this study for future modelling of Port land uses. The relatively high initial losses will significantly reduce the runoff volume generated as well as the frequency of runoff events. Apart from initial losses, most of the other parameters used in the SWMM modelling are generic to most modelling studies. Development of parameters for MUSIC model source nodes was one of the primary objectives of this study. MUSIC uses the mean and standard deviation of pollutant parameters based on a normal distribution. However, based on the values generated in this study, the variation of Event Mean Concentrations (EMCs) for Port land uses within the given investigation period does not fit a normal distribution. This is possibly because only one specific location, namely the Port of Brisbane, was considered, unlike in the case of the MUSIC model, where a range of areas with different geographic and climatic conditions was investigated. Consequently, the assumptions used in MUSIC are not totally applicable to the analysis of water quality in Port land uses. Therefore, in using the parameters included in this report for MUSIC modelling, it is important to note that this may result in under- or over-estimation of annual pollutant loads. It is recommended that the annual pollutant load values given in the report be used as a guide to assess the accuracy of the modelling outcomes. A step-by-step guide for using the knowledge generated from this study for MUSIC modelling is given in Table 4.6. ------------------ Recommendations: The following recommendations are provided to further strengthen the cutting-edge nature of the work undertaken: * It is important to further validate the approach recommended for stormwater quality modelling at the Port. Validation will require data collection in relation to rainfall, runoff and water quality from the selected Port land uses. Additionally, the recommended modelling approach could be applied to a soon-to-be-developed area to assess 'before' and 'after' scenarios. * In the modelling study, TSS was adopted as the surrogate parameter for other pollutants. This approach was based on other urban water quality research undertaken at QUT. The validity of this approach should be further assessed for Port land uses. * The adoption of TSS as a surrogate parameter for other pollutants, and the confirmation that the <150 µm particle size range was predominant in suspended solids in pollutant wash-off, give rise to a number of important considerations. The ability of the existing structural stormwater mitigation measures to remove the <150 µm particle size range needs to be assessed. The feasibility of introducing source control measures, as opposed to end-of-pipe measures, for stormwater quality improvement may also need to be considered.
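For readers unfamiliar with the build-up/wash-off formulation referred to above, the sketch below shows the generic exponential build-up and wash-off equations used in SWMM-style stormwater quality models. The parameter values are illustrative only; the study used build-up and wash-off parameters measured in the field for each Port land use rather than calibrated values.

```python
# Hedged sketch: exponential pollutant build-up and wash-off (SWMM-style).
# b_max, k_b and k_w below are placeholder values, not the measured Port parameters.
import math

def buildup(days_dry: float, b_max: float, k_b: float) -> float:
    """Exponential pollutant build-up (kg/ha) after an antecedent dry period."""
    return b_max * (1.0 - math.exp(-k_b * days_dry))

def washoff(initial_load: float, runoff_mm: float, k_w: float) -> float:
    """Exponential wash-off: load removed (kg/ha) for a given runoff depth."""
    return initial_load * (1.0 - math.exp(-k_w * runoff_mm))

load = buildup(days_dry=7, b_max=60.0, k_b=0.4)      # surface load before the storm
removed = washoff(load, runoff_mm=25.0, k_w=0.18)    # load washed off by the storm
print(f"build-up {load:.1f} kg/ha, washed off {removed:.1f} kg/ha")
```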
Abstract:
Endoscopic anterior correction of idiopathic scoliosis is a relatively new surgical technique. This paper describes the development of patient-specific finite element modelling techniques to investigate the biomechanics of single-rod anterior scoliosis correction. Spinal geometry is obtained from pre-operative CT scans, and material properties for osteo-ligamentous spinal tissues are based on existing literature. The techniques being developed will allow pre-surgical prediction of stresses, forces and deformations in spinal tissues, rods and screws under post-operative physiological loads.
Abstract:
Human error, its causes and consequences, and the ways in which it can be prevented remain of great interest to road safety practitioners. This paper presents the findings derived from an on-road study of driver errors in which 25 participants drove a pre-determined route using MUARC's On-Road Test Vehicle (ORTeV). In-vehicle observers recorded the different errors made, and a range of other data was collected, including driver verbal protocols; forward, cockpit and driver video; and vehicle data (speed, braking, steering wheel angle, lane tracking etc.). Participants also completed a post-trial cognitive task analysis interview. The drivers tested made a range of different errors, with speeding violations, both intentional and unintentional, being the most common. Further, more detailed analysis of a sub-set of specific error types indicates that driver errors have various causes, including failures in the wider road 'system', such as poor roadway design, infrastructure failures and unclear road rules. In closing, a range of potential error prevention strategies, including intelligent speed adaptation and road infrastructure design, is discussed.
Abstract:
Traditional speech enhancement methods optimise signal-level criteria such as signal-to-noise ratio, but such approaches are sub-optimal for noise-robust speech recognition. Likelihood-maximising (LIMA) frameworks, on the other hand, optimise the parameters of speech enhancement algorithms based on state sequences generated by a speech recogniser for utterances of known transcription. Previous applications of LIMA frameworks have generated a set of global enhancement parameters for all model states without taking into account the distribution of model occurrence, making the optimisation susceptible to favouring frequently occurring models, in particular silence. In this paper, we demonstrate the existence of highly disproportionate phonetic distributions on two corpora with distinct speech tasks, and propose to normalise the influence of each phone based on a priori occurrence probabilities. Likelihood analysis and speech recognition experiments verify this approach for improving ASR performance in noisy environments.
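A minimal sketch of the normalisation idea is given below: each phone's contribution is weighted inversely to its a priori occurrence frequency so that frequent models, notably silence, do not dominate the optimisation. The weighting rule and the toy counts are assumptions for illustration, not the exact scheme used in the paper.

```python
# Hedged sketch: down-weighting frequent phones by their prior occurrence.
from collections import Counter

def phone_weights(phone_counts: Counter) -> dict:
    """Weight each phone inversely to its relative frequency, normalised so the
    weights average to 1 across phones."""
    total = sum(phone_counts.values())
    raw = {ph: total / count for ph, count in phone_counts.items()}
    mean_raw = sum(raw.values()) / len(raw)
    return {ph: w / mean_raw for ph, w in raw.items()}

# Toy occurrence counts from a forced alignment; silence dominates.
counts = Counter({"sil": 5000, "ah": 900, "t": 700, "zh": 40})
weights = phone_weights(counts)
# Each per-phone likelihood term would then be scaled by weights[phone] before
# being accumulated in the enhancement-parameter optimisation.
for ph, w in sorted(weights.items(), key=lambda kv: kv[1]):
    print(f"{ph:>3}: weight {w:.2f}")
```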
Abstract:
Although many different materials, techniques and methods, including artificial or engineered bone substitutes, have been used to repair various bone defects, the restoration of critical-sized bone defects caused by trauma, surgery or congenital malformation is still a great challenge to orthopaedic surgeons. One important fact that has been neglected in the pursuit of solutions for large bone defect healing is that most physiological bone defect healing needs the periosteum, and stripping off the periosteum may result in non-union or non-healed bone defects. The periosteum plays very important roles not only in bone development but also in bone defect healing. The purpose of this project was to construct a functional periosteum in vitro using a single stem cell source and then test its ability to aid the repair of critical-sized bone defects in animal models. This project was designed with three separate but closely linked parts, which in the end led to four independent papers. The first part of this study investigated the structural and cellular features of periostea from diaphyseal and metaphyseal bone surfaces in rats of different ages or with osteoporosis. Histological and immunohistological methods were used in this part of the study. Results revealed that the structure and cell populations of the periosteum are both age-related and site-specific. The diaphyseal periosteum showed age-related degeneration, whereas the metaphyseal periosteum showed more destructive change in older rats. The periosteum from osteoporotic bones differs from that of normal bones in terms of both structure and cell populations. This is especially evident in the cambial layer of the metaphyseal area. Bone resorption appears to be more active in the periosteum from osteoporotic bones, whereas bone formation activity is comparable between osteoporotic and normal bone. The dysregulation of bone resorption and formation in the periosteum may also be the effect of the interaction between various neural pathways and the cell populations residing within it. One of the most important aspects of periosteum engineering is how to introduce new blood vessels into the engineered periosteum to help form vascularized bone tissue in bone defect areas. The second part of this study was designed to investigate the possibility of differentiating bone marrow stromal cells (BMSCs) into endothelial cells and using them to construct a vascularized periosteum. The endothelial differentiation of BMSCs was induced in pro-angiogenic media under both normoxia and CoCl2 (hypoxia-mimicking agent)-induced hypoxia conditions. The VEGF/PEDF expression pattern, endothelial cell-specific marker expression, and in vitro and in vivo vascularization ability of BMSCs cultured under different conditions were assessed. Results revealed that BMSCs most likely cannot be differentiated into endothelial cells through the application of pro-angiogenic growth factors or by culturing under CoCl2-induced hypoxic conditions. However, they may be involved in angiogenesis as regulators under both normoxia and hypoxia conditions. Two major angiogenesis-related growth factors, VEGF (pro-angiogenic) and PEDF (anti-angiogenic), were found to alter their expression in accordance with the extracellular environment. BMSCs treated with the hypoxia-mimicking agent CoCl2 expressed more VEGF and less PEDF and enhanced the vascularization of subcutaneous implants in vivo.
Based on the findings of the second part, the CoCl2 pre-treated BMSCs were used to construct periosteum, and the in vivo vascularization and osteogenesis of the constructed periosteum were assessed in the third part of this project. The findings of the third part revealed that BMSCs pre-treated with CoCl2 could enhance both ectopic and orthotopic osteogenesis of BMSC-derived osteoblasts, as well as vascularization, at the early osteogenic stage, whereas the endothelial cells (HUVECs) used as a positive control were only capable of promoting osteogenesis after four weeks. The subcutaneous area of the mouse is most likely inappropriate for assessing new bone formation on collagen scaffolds. This study demonstrated the potential application of CoCl2 pre-treated BMSCs in tissue engineering, not only for periosteum but also for bone and other vascularized tissues. In summary, the structure and cell populations of the periosteum are age-related, site-specific and closely linked with bone health status. BMSCs as a stem cell source for periosteum engineering are not endothelial progenitors but regulators, and CoCl2-treated BMSCs expressed more VEGF and less PEDF. These CoCl2-treated BMSCs enhanced both vascularization and osteogenesis in the constructed periosteum transplanted in vivo.
Abstract:
This thesis is devoted to the study of linear relationships in symmetric block ciphers. A block cipher is designed so that the ciphertext is produced as a nonlinear function of the plaintext and secret master key. However, linear relationships within the cipher can still exist if the texts and components of the cipher are manipulated in a number of ways, as shown in this thesis. There are four main contributions of this thesis. The first contribution is the extension of the applicability of integral attacks from word-based to bit-based block ciphers. Integral attacks exploit the linear relationship between texts at intermediate stages of encryption. This relationship can be used to recover subkey bits in a key recovery attack. In principle, integral attacks can be applied to bit-based block ciphers. However, specific tools to define the attack on these ciphers were not available. This problem is addressed in this thesis by introducing a refined set of notations to describe the attack. The bit-pattern-based integral attack is successfully demonstrated on reduced-round variants of the block ciphers Noekeon, Present and Serpent. The second contribution is the discovery of a very small system of equations that describes the LEX-AES stream cipher. LEX-AES is based heavily on the 128-bit-key (16-byte) Advanced Encryption Standard (AES) block cipher. In one instance, the system contains 21 equations and 17 unknown bytes. This is very close to the upper limit for an exhaustive key search, which is 16 bytes. Only 36 bytes of keystream need to be acquired to generate the equations. Therefore, the security of this cipher depends on the difficulty of solving this small system of equations. The third contribution is the proposal of an alternative method to measure diffusion in the linear transformation of Substitution-Permutation-Network (SPN) block ciphers. Currently, the branch number is widely used for this purpose. It is useful for estimating the possible success of differential and linear attacks on a particular SPN cipher. However, the measure does not give information on the number of input bits that are left unchanged by the transformation when producing the output bits. The new measure introduced in this thesis is intended to complement the current branch number technique. The measure is based on fixed points and simple linear relationships between the input and output words of the linear transformation. It represents the average fraction of input words to a linear diffusion transformation that are not effectively changed by the transformation. This measure is applied to the block ciphers AES, ARIA, Serpent and Present. It is shown that, except for Serpent, the linear transformations used in the block ciphers examined do not behave as expected for a random linear transformation. The fourth contribution is the identification of linear paths in the nonlinear round function of the SMS4 block cipher. The SMS4 block cipher is used as a standard in the Chinese WLAN Authentication and Privacy Infrastructure (WAPI), and hence the round function should exhibit a high level of nonlinearity. However, the findings in this thesis on the existence of linear relationships show that this is not the case. It is shown that, in some exceptional cases, the first four rounds of SMS4 are effectively linear. In these cases, the effective number of rounds for SMS4 is reduced by four, from 32 to 28.
The findings raise questions about the security provided by SMS4, and might provide clues on the existence of a flaw in the design of the cipher.
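The sketch below illustrates a fixed-point-style diffusion measure of the kind described in the third contribution: the average fraction of input words that a word-oriented linear layer leaves unchanged in the output. The toy linear layer and the Monte Carlo estimate are illustrative assumptions, not the exact measure or the cipher transformations analysed in the thesis.

```python
# Hedged sketch: estimating how often a word-based linear layer passes an
# input word through unchanged, averaged over random inputs.
import random

def unchanged_fraction(linear_layer, n_words: int, word_bits: int,
                       trials: int = 2000) -> float:
    """Estimate the mean fraction of input words left unchanged in the
    corresponding output position by the linear layer."""
    mask = (1 << word_bits) - 1
    total = 0
    for _ in range(trials):
        x = [random.randrange(1 << word_bits) for _ in range(n_words)]
        y = linear_layer(x)
        total += sum(1 for a, b in zip(x, y) if (a & mask) == (b & mask))
    return total / (trials * n_words)

def toy_linear_layer(words):
    """Toy word-oriented linear transformation: XOR each word with its right
    neighbour (cyclically). Word i is unchanged only when that neighbour is 0."""
    n = len(words)
    return [words[i] ^ words[(i + 1) % n] for i in range(n)]

print(f"unchanged fraction: "
      f"{unchanged_fraction(toy_linear_layer, n_words=4, word_bits=8):.4f}")
# For a random linear layer over 8-bit words this should be close to 1/256.
```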
Abstract:
An Asset Management (AM) life-cycle constitutes a set of processes that align with the development, operation and maintenance of assets, in order to meet the desired requirements and objectives of the stakeholders of the business. The scope of AM is often broad within an organisation due to the interactions between its internal elements, such as human resources, finance, technology, engineering operation, information technology and management, as well as external elements such as governance and environment. Due to the complexity of AM processes, it has been proposed that, in order to optimise asset management activities, process modelling initiatives should be adopted. Although organisations adopt AM principles and carry out AM initiatives, most do not document or model their AM processes, let alone enact their processes (semi-) automatically using a computer-supported system. There is currently a lack of knowledge describing how to model AM processes in a methodical and suitable manner so that the processes are streamlined and optimised and are ready for deployment in a computerised way. This research aims to overcome this deficiency by developing an approach that will aid organisations in constructing AM process models quickly and systematically whilst using the most appropriate techniques, such as workflow technology. Currently, there is a wealth of information within the individual domains of AM and workflow. Both fields are gaining significant popularity in many industries, thus fuelling the need for research exploring the possible benefits of their cross-disciplinary application. This research is thus inspired to investigate these two domains and to exploit the application of workflow to the modelling and execution of AM processes. Specifically, it investigates appropriate methodologies for applying workflow techniques to AM frameworks. One of the benefits of applying workflow models to AM processes is the ability to adapt to and enable both ad-hoc and evolutionary changes over time. In addition, this can automate an AM process as well as support the coordination and collaboration of the people involved in carrying out the process. A workflow management system (WFMS) can be used to support the design and enactment (i.e. execution) of processes and to cope with changes that occur to the process during enactment. So far, little literature documents a systematic approach to modelling the characteristics of AM processes. In order to obtain a workflow model for AM processes, commonalities and differences between different AM processes need to be identified. This is the fundamental step in developing a conscientious workflow model for AM processes. Therefore, the first stage of this research focuses on identifying the characteristics of AM processes, especially AM decision-making processes. The second stage is to review a number of contemporary workflow techniques and choose a suitable technique for application to AM decision-making processes. The third stage is to develop an intermediate, ameliorated AM decision process definition that improves the current process description and is ready for modelling using the workflow language selected in the previous stage. All these lead to the fourth stage, where a workflow model for an AM decision-making process is developed. The process model is then deployed (semi-) automatically in a state-of-the-art WFMS, demonstrating the benefits of applying workflow technology to the domain of AM.
Given that the information in the AM decision-making process is captured at an abstract level within the scope of this work, the deployed process model can be used as an executable guideline for carrying out an AM decision process in practice. Moreover, it can be used as a vanilla system that, once incorporated with rich information from a specific AM decision-making process (e.g. in the case of building construction or power plant maintenance), is able to support the automation of such a process in a more elaborate way.
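As a minimal sketch of what deploying an AM decision process as an executable workflow might look like, the example below represents tasks and routing decisions as data and walks a single case through them. The task names, branching condition and the tiny engine are illustrative placeholders, not the thesis's process model or a real WFMS.

```python
# Hedged sketch: a toy workflow of tasks and transitions for an AM decision case.
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class Task:
    name: str
    action: Callable[[dict], None]               # work performed by the task
    next_task: Callable[[dict], Optional[str]]   # routing decision

def run_workflow(tasks: Dict[str, Task], start: str, case_data: dict) -> None:
    """Walk the process from the start task until no next task is returned."""
    current = start
    while current is not None:
        task = tasks[current]
        task.action(case_data)
        current = task.next_task(case_data)

# Illustrative AM decision process: assess condition, then either plan
# maintenance or go straight to recording the outcome.
tasks = {
    "assess": Task("assess",
                   lambda d: d.update(condition_score=0.35),
                   lambda d: "plan" if d["condition_score"] < 0.5 else "record"),
    "plan":   Task("plan",
                   lambda d: d.update(decision="schedule maintenance"),
                   lambda d: "record"),
    "record": Task("record",
                   lambda d: print(f"case closed: {d}"),
                   lambda d: None),
}
run_workflow(tasks, "assess", {"asset_id": "PUMP-01"})
```

A real WFMS would add worklists, roles and change handling on top of such a definition; the point here is only that an AM decision process, once captured explicitly, becomes directly executable.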
Abstract:
To date, studies have focused on the acquisition of alphabetic second languages (L2s) by alphabetic first language (L1) users, demonstrating significant transfer effects. The present study examined the process from the reverse perspective, comparing logographic (Mandarin Chinese) and alphabetic (English) L1 users in the acquisition of an artificial logographic script, in order to determine whether similar language-specific advantageous transfer effects occurred. English monolinguals, English-French bilinguals and Chinese-English bilinguals learned a small set of symbols in an artificial logographic script and were subsequently tested on their ability to process this script from three main perspectives: L2 reading, L2 working memory (WM), and inner processing strategies. In terms of L2 reading, a lexical decision task on the artificial symbols revealed markedly faster response times in the Chinese-English bilinguals, indicating a logographic transfer effect suggestive of a visual processing advantage. A syntactic decision task evaluated the degree to which the new language was mastered beyond the single-word level. No L1-specific transfer effects were found for artificial language strings. In order to investigate visual processing of the artificial logographs further, a series of WM experiments was conducted. Artificial logographs were recalled under concurrent auditory and visuo-spatial suppression conditions to disrupt phonological and visual processing, respectively. No L1-specific transfer effects were found, indicating no visual processing advantage for the Chinese-English bilinguals. However, a bilingual processing advantage was found, indicative of a superior ability to control executive functions. In terms of L1 WM, the Chinese-English bilinguals outperformed the alphabetic L1 users when processing L1 words, indicating a language experience-specific advantage. Questionnaire data on the cognitive strategies deployed during the acquisition and processing of the artificial logographic script revealed that the Chinese-English bilinguals rated their inner speech lower than the alphabetic L1 users did, suggesting that the alphabetic L1 users were transferring their phonological processing skill set to the acquisition and use of the artificial script. Overall, evidence was found to indicate that language learners transfer specific L1 orthographic processing skills to L2 logographic processing. Additionally, evidence was also found indicating that a bilingual history enhances cognitive performance in L2.
Abstract:
In recent times, the improved levels of accuracy obtained by Automatic Speech Recognition (ASR) technology have made it viable for use in a number of commercial products. Unfortunately, these types of applications are limited to only a few of the world's languages, primarily because ASR development relies on the availability of large amounts of language-specific resources. This motivates the need for techniques which reduce this language-specific resource dependency. Ideally, these approaches should generalise across languages, thereby providing scope for the rapid creation of ASR capabilities for resource-poor languages. Cross-lingual ASR emerges as a means of addressing this need. Underpinning this approach is the observation that sound production is largely influenced by the physiological construction of the vocal tract, and accordingly is human, rather than language, specific. As a result, a common inventory of sounds exists across languages; a property which is exploitable, as sounds from a resource-poor target language can be recognised using models trained on resource-rich source languages. One of the initial impediments to the commercial uptake of ASR technology was its fragility in more challenging environments, such as conversational telephone speech. Subsequent improvements in these environments have gained consumer confidence. Pragmatically, if cross-lingual techniques are to be considered a viable alternative when resources are limited, they need to perform under the same types of conditions. Accordingly, this thesis evaluates cross-lingual techniques using two speech environments: clean read speech and conversational telephone speech. The languages used in the evaluations are German, Mandarin, Japanese and Spanish. Results highlight that previously proposed approaches provide respectable results for simpler environments such as read speech, but degrade significantly in the more taxing conversational environment. Two separate approaches for addressing this degradation are proposed. The first is based on deriving a better target-language lexical representation in terms of the source-language model set. The second, and ultimately more successful, approach focuses on improving the classification accuracy of context-dependent (CD) models by catering for the adverse influence of language-specific phonotactic properties. Whilst the primary research goal of this thesis is directed towards improving cross-lingual techniques, the catalyst for investigating their use was expressed interest from several organisations in an Indonesian ASR capability. The fact that, in Indonesia alone, there are over 200 million speakers of some Malay variant provides further impetus and commercial justification for speech-related research on this language. Unfortunately, at the beginning of the candidature, limited research had been conducted on the Indonesian language in the field of speech science, and virtually no resources existed. This thesis details the investigative and development work dedicated towards obtaining an ASR system with a 10,000-word recognition vocabulary for the Indonesian language.
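A minimal sketch of the knowledge-based phone mapping that underpins cross-lingual ASR is given below: target-language phones are recognised with acoustic models borrowed from resource-rich source languages. The inventories and the mapping table are small illustrative fragments (the target phones and substitutions are hypothetical), not the mapping developed in the thesis.

```python
# Hedged sketch: borrowing source-language acoustic models for a target language.
# Inventories and mappings below are hypothetical fragments for illustration.
SOURCE_MODELS = {
    "es": {"a", "e", "i", "o", "u", "p", "t", "k", "s", "m", "n", "l", "r"},
    "de": {"ʃ", "ŋ", "b", "d", "g"},
}

# Hypothetical target-language phones mapped to (source language, source phone).
TARGET_TO_SOURCE = {
    "a": ("es", "a"), "i": ("es", "i"), "u": ("es", "u"), "s": ("es", "s"),
    "ŋ": ("de", "ŋ"),
    "tʃ": ("de", "ʃ"),   # no exact match: nearest available source phone
}

def models_for_word(pron):
    """Return the borrowed (language, phone) model sequence for a target-language
    pronunciation; phones with no mapping are skipped (they would need new models)."""
    seq = []
    for phone in pron:
        if phone in TARGET_TO_SOURCE:
            lang, src_phone = TARGET_TO_SOURCE[phone]
            assert src_phone in SOURCE_MODELS[lang], "mapping must use an existing model"
            seq.append((lang, src_phone))
    return seq

print(models_for_word(["s", "a", "tʃ", "u"]))   # hypothetical target word /satʃu/
```

Context-dependent models complicate this picture, because the phonotactics of the source languages determine which contexts have well-trained models, which is precisely the issue the second proposed approach addresses.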