826 results for Regulation-based classification system


Relevance: 100.00%
Publisher:
Abstract:

ACM Computing Classification System (1998): I.4.9, I.4.10.

Relevance: 100.00%
Publisher:
Abstract:

Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can yield significant performance improvements for computation-intensive or frequently repeated functions; SoC performance can therefore be improved if hardware acceleration is applied to the elements that incur performance overheads. The concepts presented in this study can readily be applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform, and two types of hardware acceleration methods, a central bus design and a co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed; the trade-off among these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated against system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves a 7.9X performance improvement and saves 75.85% of energy consumption.
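To make figures of this kind concrete, the overall gain from accelerating only a profiled hotspot follows Amdahl's law. The sketch below is illustrative only; the hotspot fraction, accelerator speedup, and accelerator power ratio are assumed numbers, not measurements from this work.

```python
def overall_speedup(hotspot_fraction: float, accel_speedup: float) -> float:
    """Amdahl's law: only the hotspot fraction benefits from the accelerator."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_speedup)

def energy_saving(hotspot_fraction: float, accel_speedup: float,
                  accel_power_ratio: float = 0.5) -> float:
    """Rough energy estimate: the accelerated part runs faster and (assumed)
    at a fraction of the CPU power; the rest of the run is unchanged."""
    accelerated = (1.0 - hotspot_fraction) + \
                  (hotspot_fraction / accel_speedup) * accel_power_ratio
    return 1.0 - accelerated  # energy normalised to the pure-software run

# Illustrative only: a hotspot taking 90% of runtime, offloaded to an
# FPGA block assumed to be 20x faster than the software implementation.
print(f"speedup ~ {overall_speedup(0.9, 20):.1f}x")
print(f"energy  ~ {energy_saving(0.9, 20):.0%} saved")
```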

Relevance: 100.00%
Publisher:
Abstract:

Coral reef maps at various spatial scales and extents are needed for mapping, monitoring, modelling, and management of these environments. High spatial resolution satellite imagery, with pixel sizes <10 m, integrated with field survey data and processed with various mapping approaches, can provide these maps. These approaches have been accurately applied to single reefs (10-100 km²), covering one high spatial resolution scene from which a single thematic layer (e.g. benthic community) is mapped. This article demonstrates how a hierarchical mapping approach can be applied to coral reefs from individual-reef to reef-system scales (10-1000 km²) using object-based image classification of high spatial resolution images guided by ecological and geomorphological principles. The approach is demonstrated for three individual reefs (10-35 km²) in Australia, Fiji, and Palau, and for three complex reef systems (300-600 km²): one in the Solomon Islands and two in Fiji. Archived high spatial resolution images were pre-processed, and mosaics were created for the reef systems. Georeferenced benthic photo transect surveys were used to acquire cover information. Field and image data were integrated using an object-based image analysis approach that resulted in a hierarchically structured classification. Objects were assigned class labels based on the dominant benthic cover type, on location-relevant ecological and geomorphological principles, or on a combination thereof. This generated a hierarchical sequence of reef maps with increasing complexity in benthic thematic information: 'reef', 'reef type', 'geomorphic zone', and 'benthic community'. The overall accuracy of the 'geomorphic zone' classification for each of the six study sites was 76-82% using 6-10 mapping categories. For the 'benthic community' classification, the overall accuracy was 52-75%, with individual reefs having 14-17 categories and reef systems 20-30 categories. We show that an object-based classification of high spatial resolution imagery, guided by field data and ecological and geomorphological principles, can produce consistent, accurate benthic maps at four hierarchical spatial scales for coral reefs of various sizes and complexities.
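The hierarchical labelling step can be pictured with a small sketch. The class names, depth threshold, and cover fractions below are hypothetical placeholders, not values from the article; the sketch only illustrates assigning a geomorphic zone and a dominant-cover benthic community label to an image object.

```python
# Hypothetical sketch of hierarchical labelling: image objects (segments)
# carry field-derived cover fractions, and each level of the hierarchy is
# assigned from the dominant cover plus simple geomorphic rules.
from dataclasses import dataclass

@dataclass
class ReefObject:
    depth_m: float   # mean depth of the segment (illustrative attribute)
    cover: dict      # benthic cover fractions from photo transects

def geomorphic_zone(obj: ReefObject) -> str:
    # illustrative rules only -- not the article's classification scheme
    if obj.depth_m < 2:
        return "reef flat"
    return "reef slope" if obj.cover.get("coral", 0) > 0.2 else "lagoon"

def benthic_community(obj: ReefObject) -> str:
    # dominant-cover rule: label the object by its most abundant cover type
    return max(obj.cover, key=obj.cover.get)

obj = ReefObject(depth_m=4.0, cover={"coral": 0.45, "algae": 0.30, "sand": 0.25})
print(geomorphic_zone(obj), "/", benthic_community(obj))
```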

Relevance: 100.00%
Publisher:
Abstract:

Immunity is broadly defined as a mechanism of protection against non-self entities, a process that must be robust enough both to eliminate the initial foreign body and to be maintained over the life of the host. Life-long immunity is impossible without the development of immunological memory, a central component of which is the cellular immune system, or T cells. Cellular immunity hinges upon a naïve T cell pool of sufficient size and breadth to enable Darwinian selection of clones responsive to foreign antigens during an initial encounter. Further, the generation and maintenance of memory T cells is required for rapid clearance responses against repeated insult, and so this small memory pool must be actively maintained by pro-survival cytokine signals over the life of the host.

T cell development, function, and maintenance are regulated at a number of molecular levels through complex regulatory networks. Recently, small non-coding RNAs (miRNAs) have been observed to have profound impacts on diverse aspects of T cell biology by impeding the translation of RNA transcripts into protein. While many miRNAs have been described that alter T cell development or functional differentiation, little is known regarding the role that miRNAs play in T cell maintenance in the periphery at homeostasis.

In Chapter 3 of this dissertation, tools to study miRNA biology and function were developed. First, to understand the effect of miRNA overexpression on T cell responses, a novel overexpression system was developed that enhances the processing efficiency and ultimate expression of a given miRNA by placing it within an alternative miRNA backbone. Next, a conditional knockout mouse system was devised to specifically delete miR-191 in cell populations expressing a recombinase. This strategy was expanded to permit the selective deletion of single miRNAs from within a cluster, to discern the effects of specific miRNAs that were previously inaccessible in isolation. Last, to enable the identification of potentially therapeutically viable modulators of miRNA function and/or expression, a high-throughput, flow cytometry-based screening system utilizing miRNA activity reporters was tested and validated. Thus, several novel and useful tools were developed to assist in the studies described in Chapter 4 and in future miRNA studies.

In Chapter 4 of this dissertation, the role of miR-191 in T cell biology was evaluated. Using tools developed in Chapter 3, miR-191 was observed to be critical for T cell survival following activation-induced cell death, while proliferation was unaffected by alterations in miR-191 expression. Loss of miR-191 led to significant decreases in the numbers of CD4+ and CD8+ T cells in peripheral lymph nodes, but this loss had no impact on the homeostatic activation of either CD4+ or CD8+ cells. These peripheral changes were not caused by gross defects in thymic development, but rather by impaired STAT5 phosphorylation downstream of pro-survival cytokine signals. miR-191 does not specifically inhibit STAT5, but rather directly targets the scaffolding protein IRS1, which in turn alters cytokine-dependent signaling. The defect in peripheral T cell maintenance was exacerbated by the presence of a Bcl-2YFP transgene, which led to even greater peripheral T cell losses in addition to developmental defects. These studies collectively demonstrate that miR-191 controls peripheral T cell maintenance by modulating homeostatic cytokine signaling through the regulation of IRS1 expression and downstream STAT5 phosphorylation.

The studies described in this dissertation collectively demonstrate that miR-191 has a profound role in the maintenance of T cells at homeostasis in the periphery. Importantly, the manipulation of miR-191 altered immune homeostasis without leading to severe immunodeficiency or autoimmunity. While much data exists on the agents that disrupt active immune responses and the formation of immunological memory, the basic processes underlying the continued maintenance of a functioning immune system must be fully characterized to facilitate the development of methods for promoting healthy immune function throughout the life of the individual. These findings also have powerful implications for the ability of patients with modest perturbations in T cell homeostasis to effectively fight disease and respond to vaccination, and may provide valuable targets for therapeutic intervention.

Relevance: 100.00%
Publisher:
Abstract:

Part 8: Business Strategies Alignment

Relevance: 100.00%
Publisher:
Abstract:

Marine protection has been emphasized through global and European conventions, which highlight the need to establish special areas of conservation. Classification and habitat mapping have been developed to enhance the assessment of the marine environment, to improve spatial and strategic planning of human activities, and to help implement ecosystem-based management. The European Nature Information System (EUNIS) is a comprehensive habitat classification system, developed by the European Environment Agency (EEA) in collaboration with experts from institutions throughout Europe, that facilitates the harmonised description and collection of data on habitats and biotopes.
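As an illustration of how such a hierarchical, code-based classification can be handled programmatically, the sketch below walks a tiny EUNIS-style subset from the top level down to a given code. Only a few example entries are included here; the EEA's official EUNIS classification should be consulted for authoritative codes and names.

```python
# Minimal sketch of navigating a hierarchical habitat classification where
# each level refines its parent code. Illustrative subset only.
EUNIS_SUBSET = {
    "A":  "Marine habitats",
    "A1": "Littoral rock and other hard substrata",
    "A2": "Littoral sediment",
}

def lineage(code: str) -> list:
    """Return the habitat names from the top level down to the given code."""
    return [f"{code[:i]}: {EUNIS_SUBSET[code[:i]]}"
            for i in range(1, len(code) + 1)
            if code[:i] in EUNIS_SUBSET]

print(lineage("A2"))   # ['A: Marine habitats', 'A2: Littoral sediment']
```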

Relevance: 100.00%
Publisher:
Abstract:

Our aim was to determine the normative reference values of cardiorespiratory fitness (CRF) and to establish the proportion of subjects with low CRF suggestive of future cardio-metabolic risk.

Relevance: 100.00%
Publisher:
Abstract:

The project has further developed two programs for the industry partners, related to service life prediction and salt deposition. The program for the Queensland Department of Main Roads, which predicts salt deposition on different bridge structures at any point in Queensland, has been further refined by considering additional variables. It was found that the height of the bridge significantly affects salt deposition levels only very close to the coast; however, the effect of natural cleaning of salt by rainfall was incorporated into the program. The user interface allows selection of a location in Queensland, followed by a bridge component; the program then predicts the annual salt deposition rate and rates the likely severity of the environment. The service life prediction program for the Queensland Department of Public Works has been expanded to include 10 common building components in a variety of environments. Data mining procedures have been used to develop the program and increase the usefulness of the application, and a Query Based Learning System (QBLS) has been developed, based on a data-centric model with extensions to support user interaction. The program draws on a number of sources of information about the service life of building components, including the Delphi survey, the CSIRO Holistic model, and a school survey. During the project, the Holistic model was modified for each building component and databases were generated for the locations of all Queensland schools. Experiments were carried out to verify and provide parameters for the modelling; these included instrumentation of a downpipe, measurements of pH and chloride levels in leaf litter, EIS measurements, chromate leaching tests on Colorbond materials, and dose tests to measure corrosion rates of new materials. A further database was generated for inclusion in the program through a large school survey: over 30 schools, in environments ranging from tropical coastal to temperate inland, were visited and the condition of their building components rated on a scale of 0-5. The data were analysed and used to calculate an average service life for each component/material combination in each environment where sufficient examples were available.
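The kind of relationship the deposition program encodes can be sketched roughly as follows. The functional form (exponential decay of airborne salinity with distance from the coast, a linear reduction for rain washing) and all parameter values are assumptions for illustration, not the project's calibrated model.

```python
import math

def salt_deposition(distance_km: float, rainfall_mm: float,
                    coastal_rate: float = 300.0, decay_km: float = 1.0,
                    wash_factor: float = 0.001) -> float:
    """Illustrative annual salt deposition estimate.

    Purely a sketch of the relationships described above: deposition decays
    with distance from the coast and is reduced by rainfall washing. The
    parameters and functional form are assumptions, not calibrated values."""
    airborne = coastal_rate * math.exp(-distance_km / decay_km)
    retained = airborne * max(0.0, 1.0 - wash_factor * rainfall_mm)
    return retained

# e.g. a bridge component 0.5 km inland with 1200 mm annual rainfall
print(round(salt_deposition(0.5, 1200.0), 1))
```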

Relevance: 100.00%
Publisher:
Abstract:

As part of the decision-making process, the controlling process in construction companies can be supported by a computer application that provides faster and more reliable decisions. This paper discusses the development of a knowledge-based decision support system for controlling construction companies' business performance. The knowledge base was developed using a questionnaire survey and case studies. The questionnaire survey was conducted to identify potential problems that can occur in construction companies, as well as the sources of the problems and their impact on company performance; the case studies were used to identify and analyse various corrective actions. The results of the study show that a decision support system built on a knowledge-based management system improves the effectiveness and efficiency of the decision-making process for selecting the most appropriate corrective action to improve a construction company's performance. The application developed in this research was designed to support the process of controlling construction companies' business performance and to assist young managers in selecting the most suitable corrective actions for problems related to achieving company objectives. The application can also be used as a learning tool for identifying potential problems that a construction company may face and the corresponding corrective actions.
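A minimal sketch of the knowledge-base lookup idea is shown below; the problems, corrective actions, and effectiveness scores are invented placeholders, not items from the survey or case studies.

```python
# Hypothetical knowledge base: each identified problem maps to candidate
# corrective actions, ranked here by an assumed effectiveness score.
KNOWLEDGE_BASE = {
    "cost overrun": [
        ("renegotiate supplier contracts", 0.7),
        ("tighten site material control", 0.6),
    ],
    "schedule delay": [
        ("add a second work shift", 0.8),
        ("re-sequence critical-path activities", 0.65),
    ],
}

def recommend(problem: str, top_n: int = 1) -> list:
    """Return the top-ranked corrective actions for a given problem."""
    actions = sorted(KNOWLEDGE_BASE.get(problem, []),
                     key=lambda a: a[1], reverse=True)
    return [name for name, _ in actions[:top_n]]

print(recommend("schedule delay"))   # ['add a second work shift']
```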

Relevance: 100.00%
Publisher:
Abstract:

Social tags in Web 2.0 are becoming another important information source for profiling users' interests and preferences in order to make personalized recommendations. However, the uncontrolled vocabulary causes many problems for profiling users accurately, such as ambiguity, synonyms, misspellings, and low information sharing. To address these problems, this paper proposes using popular tags to represent the actual topics of tags, the content of items, and the topic interests of users. A novel user profiling approach is proposed that first identifies popular tags, then represents users' original tags using the popular tags, and finally generates users' topic interests based on the popular tags. A collaborative filtering based recommender system has been developed that builds the user profile using the proposed approach. The user profile generated with the proposed approach represents user interests more accurately, and information sharing among users in the profiles is also increased. Consequently, the neighborhood of a user, which plays a crucial role in collaborative filtering based recommenders, can be determined much more accurately. Experimental results based on real-world data obtained from Amazon.com show that the proposed approach outperforms other approaches.
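A minimal sketch of the profiling idea, under simplifying assumptions: tags used by at least a minimum number of users form the "popular" vocabulary, each user's raw tags are projected onto it, and neighborhoods are found by cosine similarity between the resulting profiles. The threshold and toy data are illustrative only.

```python
from collections import Counter
import math

def popular_tags(user_tags: dict, min_users: int = 2) -> set:
    """Tags used by at least `min_users` distinct users (assumed threshold)."""
    counts = Counter(tag for tags in user_tags.values() for tag in set(tags))
    return {tag for tag, n in counts.items() if n >= min_users}

def profile(tags: list, vocab: set) -> dict:
    """Project a user's raw tags onto the popular-tag vocabulary."""
    kept = Counter(t for t in tags if t in vocab)
    total = sum(kept.values()) or 1
    return {t: c / total for t, c in kept.items()}

def cosine(p: dict, q: dict) -> float:
    dot = sum(p[t] * q.get(t, 0.0) for t in p)
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

users = {"u1": ["sci-fi", "scifi", "space"], "u2": ["space", "sci-fi"],
         "u3": ["cooking", "space"]}
vocab = popular_tags(users)
profiles = {u: profile(t, vocab) for u, t in users.items()}
# rank candidate neighbours of u1 by profile similarity
print(sorted(((cosine(profiles["u1"], profiles[v]), v) for v in ("u2", "u3")),
             reverse=True))
```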

Relevance: 100.00%
Publisher:
Abstract:

Information Retrieval is an important albeit imperfect component of information technologies. Insufficient diversity of retrieved documents is one of the primary issues studied in this research; this study shows that the problem leads to a decrease in precision and recall, the traditional measures of information retrieval effectiveness. This thesis presents an adaptive IR system based on the theory of adaptive dual control. The aim of the approach is to optimize retrieval precision after all feedback has been issued, which is done by increasing the diversity of retrieved documents; this study shows that the value of recall reflects this diversity. The Probability Ranking Principle is viewed in the literature as the “bedrock” of current probabilistic Information Retrieval theory. Neither the proposed approach nor other diversification methods from the literature conform to this principle. This study shows by counterexample that the Probability Ranking Principle does not in general lead to optimal precision in a search session with feedback (a setting for which it may not have been designed but in which it is actively used). To accomplish the aim, retrieval precision of the search session should be optimized with a multistage stochastic programming model; however, such models are computationally intractable. Therefore, approximate linear multistage stochastic programming models are derived in this study, where the multistage improvement of the probability distribution is modelled using the proposed feedback correctness method. The proposed optimization models rest on several assumptions, starting with the assumption that Information Retrieval is conducted in units of topics. The use of clusters is the primary reason why a new method of probability estimation is proposed. The adaptive dual-control, topic-based IR system was evaluated in a series of experiments conducted on the Reuters, Wikipedia and TREC document collections. The Wikipedia experiment revealed that the dual-control feedback mechanism improves precision and S-recall when all the underlying assumptions are satisfied. In the TREC experiment, this feedback mechanism was compared to a state-of-the-art adaptive IR system based on BM-25 term weighting and the Rocchio relevance feedback algorithm. The baseline system exhibited better effectiveness than the cluster-based optimization model of ADTIR; the main reason was the insufficient quality of the clusters generated for the TREC collection, which violated the underlying assumption.
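For contrast with the precision-optimizing dual-control approach described above (which is not reproduced here), one widely used diversification heuristic is maximal marginal relevance (MMR). The sketch below assumes relevance and similarity functions supplied by the retrieval system; the toy scores are illustrative.

```python
# Sketch of maximal marginal relevance (MMR), a standard diversification
# heuristic -- shown only as a reference point, NOT the adaptive
# dual-control model proposed in the thesis.
def mmr(candidates, relevance, similarity, k=10, lam=0.7):
    """Greedily pick k documents, trading off relevance against redundancy."""
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def score(d):
            redundancy = max((similarity(d, s) for s in selected), default=0.0)
            return lam * relevance(d) - (1 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

docs = {"d1": 0.9, "d2": 0.85, "d3": 0.4}          # toy relevance scores
sim = {("d1", "d2"): 0.95}                          # d2 near-duplicates d1
ranking = mmr(docs, relevance=lambda d: docs[d],
              similarity=lambda a, b: sim.get((a, b), sim.get((b, a), 0.0)),
              k=3, lam=0.5)
print(ranking)   # the near-duplicate d2 drops below the novel but less relevant d3
```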

Relevance: 100.00%
Publisher:
Abstract:

Automatic recognition of people is an active field of research with important forensic and security applications. In these applications it is not always possible for the subject to be in close proximity to the system; voice represents a human behavioural trait that can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier. Further, the domains for the modelling of session variation were contrasted to find a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques due to the similarities in how they achieve their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all test utterances encountered during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further performance benefits over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background by exploiting the SVM training process. The selection was performed on a per-observation basis so as to overcome the shortcomings of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, while being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of impostor cohorts required in alternative techniques for speaker verification.
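For readers unfamiliar with the GMM baseline that the SVM techniques build on, the classic GMM-UBM log-likelihood-ratio score can be sketched as below. The random arrays merely stand in for real acoustic features (e.g. MFCCs), and retraining a per-speaker GMM is a simplification of the usual MAP adaptation; this is not the thesis's SVM-based system.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
background = rng.normal(size=(2000, 20))        # pooled impostor features (stand-in)
target = rng.normal(loc=0.5, size=(400, 20))    # enrolment features (stand-in)
test = rng.normal(loc=0.5, size=(300, 20))      # test utterance features (stand-in)

# Universal background model and a target-speaker model.
ubm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(background)
spk = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(target)   # simplification: retraining
                                                    # instead of MAP adaptation

# Average log-likelihood ratio of the test utterance; threshold at 0 here.
llr = spk.score(test) - ubm.score(test)
print("accept" if llr > 0.0 else "reject", round(llr, 2))
```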

Relevance: 100.00%
Publisher:
Abstract:

RatSLAM is a vision-based SLAM system based on extended models of the rodent hippocampus. RatSLAM creates environment representations that can be processed by the experience mapping algorithm to produce maps suitable for goal recall. The experience mapping algorithm also allows RatSLAM to map environments many times larger than could be achieved with a one-to-one correspondence between the map and the environment, by reusing the RatSLAM maps to represent multiple sections of the environment. This paper describes experiments investigating the effects of the environment-representation size ratio and visual ambiguity on mapping and goal navigation performance. The experiments demonstrate that system performance is weakly dependent on either parameter in isolation, but strongly dependent on their joint values.

Relevance: 100.00%
Publisher:
Abstract:

Artificial neural network (ANN) learning methods provide a robust and non-linear approach to approximating the target function for many classification, regression and clustering problems. ANNs have demonstrated good predictive performance in a wide variety of practical problems. However, there are strong arguments as to why ANNs are not sufficient for the general representation of knowledge: the poor comprehensibility of the learned ANN and its inability to represent explanation structures. The overall objective of this thesis is to address these issues by: (1) explaining the decision process in ANNs in the form of symbolic rules (predicate rules with variables); and (2) providing explanatory capability by mapping the general conceptual knowledge learned by the neural networks into a knowledge base to be used in a rule-based reasoning system. A multi-stage methodology, GYAN, is developed and evaluated for the task of extracting knowledge from trained ANNs. The extracted knowledge is represented in the form of restricted first-order logic rules and subsequently allows user interaction by interfacing with a knowledge-based reasoner. The performance of GYAN is demonstrated using a number of real-world and artificial data sets. The empirical results demonstrate that: (1) an equivalent symbolic interpretation is derived describing the overall behaviour of the ANN with high accuracy and fidelity, and (2) a concise explanation is given (in terms of the rules, facts and predicates activated in a reasoning episode) as to why a particular instance is classified into a certain category.
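GYAN itself is not reproduced here, but the general idea of reading rules off a trained network can be illustrated with a common surrogate-model baseline: fit an interpretable decision tree to the network's predictions and report its rules together with their fidelity to the network. The dataset and model settings below are illustrative assumptions.

```python
# Sketch of a common baseline for ANN rule extraction (NOT the GYAN
# methodology): train a network, fit a decision-tree surrogate to its
# predictions, and read approximate rules off the tree.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

# The surrogate is trained on the ANN's *predictions*, so its rules
# approximate the network's behaviour rather than the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, ann.predict(X))

print(export_text(surrogate, feature_names=load_iris().feature_names))
print("fidelity to the ANN:", (surrogate.predict(X) == ann.predict(X)).mean())
```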