931 results for Rule-based techniques


Relevance:

90.00%

Publisher:

Abstract:

Reliability and dependability modeling can be employed during many stages of analysis of a computing system to gain insights into its critical behaviors. To provide useful results, realistic models of systems are often necessarily large and complex. Numerical analysis of these models presents a formidable challenge because the sizes of their state-space descriptions grow exponentially with the sizes of the models. On the other hand, simulation of the models requires analysis of many trajectories in order to compute statistically correct solutions. This dissertation presents a novel framework for performing both numerical analysis and simulation. The new numerical approach computes bounds on the solutions of transient measures in large continuous-time Markov chains (CTMCs). It extends existing path-based and uniformization-based methods by identifying sets of paths that are equivalent with respect to a reward measure and related to one another via a simple structural relationship. This relationship makes it possible for the approach to explore multiple paths at the same time, thus significantly increasing the number of paths that can be explored in a given amount of time. Furthermore, the use of a structured representation for the state space and the direct computation of the desired reward measure (without ever storing the solution vector) allow it to analyze very large models using a very small amount of storage. Often, path-based techniques must compute many paths to obtain tight bounds. In addition to presenting the basic path-based approach, we also present algorithms for computing more paths and tighter bounds quickly. One resulting approach is based on the concept of path composition, whereby precomputed subpaths are composed to compute whole paths efficiently. Another approach is based on selecting important paths (among a set of many paths) for evaluation. Many path-based techniques suffer from having to evaluate many (unimportant) paths. Evaluating the important ones helps to compute tight bounds efficiently and quickly.
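
To make the uniformization step concrete, the following sketch computes a transient reward measure for a small CTMC via uniformization. The 3-state generator, reward vector, and truncation tolerance are invented for illustration and are not taken from the dissertation.

```python
# A minimal sketch of uniformization for a transient reward measure on a small
# CTMC. Generator, rewards, and tolerance are illustrative assumptions.
import numpy as np
from scipy.stats import poisson

Q = np.array([[-0.3,  0.2,  0.1],
              [ 0.5, -0.6,  0.1],
              [ 0.0,  0.4, -0.4]])   # hypothetical CTMC generator
reward = np.array([1.0, 0.5, 0.0])   # hypothetical state rewards
pi0 = np.array([1.0, 0.0, 0.0])      # initial distribution
t = 2.0

Lam = max(-Q.diagonal()) * 1.05      # uniformization rate
P = np.eye(3) + Q / Lam              # uniformized DTMC transition matrix

# Truncate the Poisson sum once its tail mass is negligible.
K = int(poisson.ppf(1 - 1e-10, Lam * t))
pi_t = np.zeros(3)
v = pi0.copy()
for k in range(K + 1):
    pi_t += poisson.pmf(k, Lam * t) * v
    v = v @ P                        # advance one uniformized step

print("transient distribution:", pi_t)
print("expected reward at t:", pi_t @ reward)
```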

Relevance:

90.00%

Publisher:

Abstract:

Terahertz (THz) technology has been generating a lot of interest because of the potential applications for systems working in this frequency range. However, to fully achieve this potential, effective and efficient ways of generating controlled signals in the terahertz range are required. Devices that exhibit negative differential resistance (NDR) in a region of their current-voltage (I-V) characteristics have been used in circuits for the generation of radio frequency signals. Of all these NDR devices, resonant tunneling diode (RTD) oscillators, with their ability to oscillate in the THz range, are considered one of the most promising solid-state sources for terahertz signal generation at room temperature. There are, however, limitations and challenges with these devices, notably the inherently low output power of RTD oscillators, usually in the range of microwatts (uW) when milliwatts (mW) are desired. At the device level, parasitic oscillations caused by the biasing line inductance when the device is biased in the NDR region prevent accurate device characterisation, which in turn prevents device modelling for computer simulations. This thesis describes work on the I-V characterisation of tunnel diode (TD) and RTD devices (fabricated by Dr. Jue Wang), and the radio frequency (RF) characterisation and small-signal modelling of RTDs. The thesis also describes the design and measurement of hybrid TD oscillators for higher output power and the design and measurement of a planar Yagi antenna (fabricated by Khalid Alharbi) for THz applications. To enable oscillation-free current-voltage characterisation of tunnel diodes, a commonly employed method is the use of a suitable resistor connected across the device to make the total differential resistance in the NDR region positive. However, this approach is not without problems, as the value of the resistor has to satisfy certain conditions or else bias oscillations will still be present in the NDR region of the measured I-V characteristics. The method is also difficult to use for RTDs fabricated on wafer, owing to discrepancies between the designed and actual resistance values of resistors fabricated using thin-film technology. In this work, pulsed DC rather than static DC measurements during device characterisation were shown to give accurate characteristics in the NDR region without the need for a stabilisation resistor. This approach allows direct, oscillation-free characterisation of devices. Experimental results show that I-V characterisation of tunnel diodes and RTD devices free of bias oscillations in the NDR region can be obtained. In this work, a new power-combining topology to address the limitations of low output power of TD and RTD oscillators is presented. The design employs two oscillators biased separately, but with the combined output power from both collected at a single load. Compared to previous approaches, this method keeps the frequency of oscillation of the combined oscillators the same as that of a single oscillator. Experimental results with a hybrid circuit using two tunnel diode oscillators, compared with a single-oscillator design with similar values, show that the coupled oscillators produce double the RF output power of the single oscillator. This topology can be scaled to higher (up to terahertz) frequencies in the future by using RTD oscillators. Finally, a broadband Yagi antenna suitable for wireless communication at terahertz frequencies is presented in this thesis. The return loss of the antenna showed that the bandwidth is larger than the measured range (140-220 GHz). A new method was used to characterise the radiation pattern of the antenna in the E-plane. This was carried out on-wafer, and the measured radiation pattern showed good agreement with the simulated pattern. In summary, this work makes important contributions to the accurate characterisation and modelling of TDs and RTDs, to circuit-based techniques for power combining of high-frequency TD or RTD oscillators, and to antennas suitable for on-chip integration with high-frequency oscillators.
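
As a rough illustration of the characterisation step, the sketch below locates the NDR region of a tunnel-diode-like I-V curve by numerical differentiation; the I-V data are synthetic placeholders, not measurements from the thesis.

```python
# A minimal sketch of locating the NDR region in I-V data by numerical
# differentiation. The sample curve is synthetic and purely illustrative.
import numpy as np

v = np.linspace(0.0, 0.6, 121)                                    # assumed bias sweep (V)
i = 0.02 * v * np.exp(-v / 0.1) + 1e-6 * (np.exp(v / 0.06) - 1)   # toy TD-like current (A)

di_dv = np.gradient(i, v)                 # differential conductance
ndr = di_dv < 0                           # negative differential resistance region
if ndr.any():
    v_ndr = v[ndr]
    print(f"NDR region roughly from {v_ndr.min():.3f} V to {v_ndr.max():.3f} V")
    # For DC stabilisation with a shunt resistor, R must be smaller than the
    # minimum |1/(dI/dV)| over the NDR region (one of the "certain conditions").
    print("min |R_NDR| =", 1.0 / np.abs(di_dv[ndr]).max(), "ohm")
```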

Relevance:

90.00%

Publisher:

Abstract:

In Part 1 of this thesis, we propose that biochemical cooperativity is a fundamentally non-ideal process. We show quantal effects underlying biochemical cooperativity and highlight apparent ergodic breaking at small volumes. The apparent ergodic breaking manifests itself in a divergence of deterministic and stochastic models. We further predict that this divergence of deterministic and stochastic results is a failure of the deterministic methods rather than an issue of stochastic simulations.

Ergodic breaking at small volumes may allow these molecular complexes to function as switches to a greater degree than has previously been shown. We propose that this ergodic breaking is a phenomenon that the synapse might exploit to differentiate Ca²⁺ signaling that would lead to either the strengthening or weakening of a synapse. Techniques such as lattice-based statistics and rule-based modeling are tools that allow us to directly confront this non-ideality. A natural next step toward understanding the chemical physics that underlies these processes is to consider in silico, and specifically atomistic, simulation methods that might augment our modeling efforts.
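
To illustrate the kind of deterministic/stochastic comparison referred to above, the following sketch simulates an assumed two-site cooperative binding scheme with Gillespie's stochastic simulation algorithm and compares it with a mass-action ODE integrated by forward Euler. The rate constants and copy numbers are invented; this is not one of the thesis models.

```python
# A minimal sketch comparing a Gillespie SSA with a deterministic mass-action
# ODE for a toy two-site cooperative binding scheme at small copy numbers.
import numpy as np

rng = np.random.default_rng(0)
k1, d1, k2, d2 = 0.005, 1.0, 0.05, 1.0     # assumed rates (cooperative: k2 >> k1)
x0 = np.array([40, 10, 0, 0])              # [Ca, P0, P1, P2] copy numbers (assumed)
stoich = np.array([[-1, -1,  1, 0],        # Ca + P0 -> P1
                   [ 1,  1, -1, 0],        # P1 -> Ca + P0
                   [-1,  0, -1, 1],        # Ca + P1 -> P2
                   [ 1,  0,  1, -1]])      # P2 -> Ca + P1

def propensities(x):
    ca, p0, p1, p2 = x
    return np.array([k1 * ca * p0, d1 * p1, k2 * ca * p1, d2 * p2])

def ssa_mean_p2(t_end=50.0, runs=200):
    total = 0.0
    for _ in range(runs):
        t, x = 0.0, x0.copy()
        while t < t_end:
            a = propensities(x)
            a0 = a.sum()
            if a0 == 0:
                break
            t += rng.exponential(1.0 / a0)
            x = x + stoich[rng.choice(4, p=a / a0)]
        total += x[3]
    return total / runs

def ode_p2(t_end=50.0, dt=1e-3):
    x = x0.astype(float)
    for _ in range(int(t_end / dt)):
        x = x + dt * (propensities(x) @ stoich)   # forward Euler
    return x[3]

print("SSA mean P2 :", ssa_mean_p2())
print("ODE P2      :", ode_p2())
```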

In the second part of this thesis, we use evolutionary algorithms to optimize in silico methods that might be used to describe biochemical processes at the subcellular and molecular levels. While we have applied evolutionary algorithms to several methods, this thesis will focus on the optimization of charge equilibration methods. Accurate charges are essential to understanding the electrostatic interactions that are involved in ligand binding, as frequently discussed in the first part of this thesis.
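
As a hedged illustration of this idea, the sketch below uses a simple evolutionary algorithm to fit per-element electronegativity and hardness parameters of a basic charge-equilibration (QEq) model to reference charges. The molecule, couplings, reference charges, and GA settings are all invented placeholders rather than the methods optimized in the thesis.

```python
# A minimal sketch: an evolutionary algorithm fitting QEq-style parameters to
# hypothetical reference charges. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
elements = ["O", "H", "H"]                     # hypothetical water-like molecule
coulomb = np.array([[0.0, 1.0, 1.0],           # assumed off-diagonal couplings J_ij
                    [1.0, 0.0, 0.6],
                    [1.0, 0.6, 0.0]])
q_ref = np.array([-0.8, 0.4, 0.4])             # hypothetical reference charges

def qeq_charges(chi_by_elem, eta_by_elem, total_charge=0.0):
    """Solve the constrained quadratic QEq problem with a Lagrange multiplier."""
    chi = np.array([chi_by_elem[e] for e in elements])
    eta = np.array([eta_by_elem[e] for e in elements])
    n = len(elements)
    A = coulomb + np.diag(eta)
    M = np.block([[A, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    b = np.concatenate([-chi, [total_charge]])
    return np.linalg.solve(M, b)[:n]

def fitness(params):                            # params = (chi_O, eta_O, chi_H, eta_H)
    chi = {"O": params[0], "H": params[2]}
    eta = {"O": params[1], "H": params[3]}
    return -np.sum((qeq_charges(chi, eta) - q_ref) ** 2)

low, high = np.array([0, 1, 0, 1]), np.array([10, 20, 10, 20])
pop = rng.uniform(low, high, size=(40, 4))
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]     # keep the 10 best
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.2, (30, 4))
    pop = np.vstack([parents, np.clip(children, low, high)])

best = pop[np.argmax([fitness(p) for p in pop])]
print("best (chi_O, eta_O, chi_H, eta_H):", best, " error:", -fitness(best))
```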

Relevance:

90.00%

Publisher:

Abstract:

In the last decades the automotive sector has seen a technological revolution, driven mainly by more restrictive regulations, newly introduced technologies and, lastly, the limited fossil-fuel resources remaining on Earth. Promising solutions for vehicle propulsion are represented by alternative architectures and energy sources, for example fuel-cell and pure electric vehicles. The automotive transition to new, green vehicles passes through the development of hybrid vehicles, which usually combine the positive aspects of each technology. To fully exploit the potential of hybrid vehicles, however, it is important to manage the powertrain's degrees of freedom in the smartest way possible, otherwise hybridization would be worthless. To this aim, this dissertation is focused on the development of energy management strategies and predictive control functions. Such algorithms have the goal of increasing the overall powertrain efficiency and, at the same time, driver safety. These control algorithms have been applied to an axle-split Plug-in Hybrid Electric Vehicle with a complex architecture that allows more than one driving mode, including a pure electric one. Three main energy management strategies are investigated: the vehicle's baseline heuristic controller, referred to in the following as the rule-based controller; a sub-optimal controller that can also include predictive functionalities, referred to as the Equivalent Consumption Minimization Strategy; and a global-optimum control technique, Dynamic Programming, which also includes high-voltage battery thermal management. During this project, different modelling approaches have been applied to the powertrain, including Hardware-in-the-Loop, and several powertrain high-level controllers have been developed and implemented, increasing their complexity at each step. The potential of sophisticated powertrain control techniques has been demonstrated, and the achievable benefits in terms of fuel economy are shown to be largely influenced by the chosen energy management strategy, even for the powerful vehicle investigated.
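
To make the ECMS idea concrete, the following sketch chooses, at a single instant, the engine/battery power split that minimizes fuel power plus an equivalence factor times battery power. The fuel-rate curve, power limits, and equivalence factor are invented placeholders, not the calibration of the vehicle studied in the dissertation.

```python
# A minimal sketch of the Equivalent Consumption Minimization Strategy idea.
# The fuel curve, limits, and equivalence factor are illustrative assumptions.
import numpy as np

def fuel_power(p_engine_kw):
    """Hypothetical convex engine fuel-power curve (kW of fuel energy)."""
    return 0.0 if p_engine_kw <= 0 else 5.0 + 2.2 * p_engine_kw + 0.015 * p_engine_kw**2

def ecms_split(p_request_kw, equivalence_factor=2.5,
               p_batt_min=-30.0, p_batt_max=30.0):
    """Return (engine power, battery power) minimizing equivalent consumption."""
    candidates = np.linspace(p_batt_min, p_batt_max, 121)   # battery power grid
    best, best_cost = None, np.inf
    for p_batt in candidates:
        p_eng = max(p_request_kw - p_batt, 0.0)             # engine covers the rest
        cost = fuel_power(p_eng) + equivalence_factor * p_batt
        if cost < best_cost:
            best, best_cost = (p_eng, p_batt), cost
    return best

print(ecms_split(40.0))   # e.g. a 40 kW traction request
```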

Relevance:

90.00%

Publisher:

Abstract:

A High-Performance Computing (HPC) job dispatcher is a critical piece of software that assigns the finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. Because the problem is on-line, solutions must be computed in real time, and the time required to compute them cannot exceed a given threshold without affecting normal system functioning. In addition, a job dispatcher must deal with considerable uncertainty: submission times, the number of requested resources, and the duration of jobs. Heuristic-based techniques have been broadly used in HPC systems, obtaining (sub-)optimal solutions in a short time. However, their scheduling and resource allocation components are separated, which produces decoupled decisions that may cause a performance loss. Optimization-based techniques are less used for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems are being used for modern applications, such as big data analytics and predictive model building, that in general employ many short jobs. This information is unknown at dispatching time, however, and job dispatchers need to process large numbers of such jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach to tackle job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to satisfy the challenges of on-line dispatching, such as generating dispatching decisions within a brief period and integrating current and past information about the hosting system. For these reasons, we propose CP-based dispatchers that are more suitable for HPC systems running modern applications: they generate on-line dispatching decisions in an appropriate time and are able to make effective use of job-duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
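
As a minimal illustration of why job-duration information matters, the sketch below contrasts a first-come-first-served heuristic with a shortest-predicted-duration ordering on an invented job queue; it is not the CP model proposed in the thesis.

```python
# A minimal sketch of greedy dispatching heuristics on an invented job queue.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int          # requested nodes
    pred_duration: int  # predicted duration (s), e.g. from a workload model

TOTAL_NODES = 8
queue = [Job("A", 6, 3600), Job("B", 2, 60), Job("C", 2, 120), Job("D", 4, 7200)]

def dispatch(jobs, order_key=None):
    """Greedy dispatch in the given order; returns (started now, left waiting)."""
    free = TOTAL_NODES
    started, waiting = [], []
    ordered = sorted(jobs, key=order_key) if order_key else list(jobs)
    for job in ordered:
        if job.nodes <= free:
            free -= job.nodes
            started.append(job.name)
        else:
            waiting.append(job.name)
    return started, waiting

print("FCFS          :", dispatch(queue))
print("shortest first:", dispatch(queue, order_key=lambda j: j.pred_duration))
```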

Relevance:

90.00%

Publisher:

Abstract:

Most existing open-source search engines utilize keyword- or tf-idf-based techniques to find documents and web pages relevant to an input query. Although these methods, with the help of PageRank or knowledge graphs, have proved effective in some cases, they often fail to retrieve relevant results for more complex queries that require semantic understanding. In this thesis, a self-supervised information retrieval system based on transformers is employed to build a semantic search engine over the library of the Gruppo Maggioli company. Semantic search, or search with meaning, refers to understanding the query instead of simply finding word matches and, in general, it represents knowledge in a way suitable for retrieval. We chose to investigate a new self-supervised strategy for training on unlabeled data, based on the creation of pairs of 'artificial' queries and their respective positive passages. We claim that by removing the reliance on labeled data, we can use the large volume of unlabeled material on the web without being limited to languages or domains where labeled data is abundant.
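
The dense-retrieval step behind such a semantic search engine can be sketched as follows: passages are ranked by cosine similarity between a query embedding and precomputed passage embeddings. The encode function here is a self-contained stand-in (character counts with a random projection), not the self-supervised transformer trained in the thesis.

```python
# A minimal sketch of dense retrieval by cosine similarity. The encoder below
# is a toy stand-in for a transformer model; passages are invented examples.
import numpy as np

rng = np.random.default_rng(0)
PROJ = rng.normal(size=(256, 64))

def encode(text: str) -> np.ndarray:
    """Stand-in encoder: character counts projected to 64 dims, L2-normalised."""
    counts = np.zeros(256)
    for ch in text.lower():
        counts[ord(ch) % 256] += 1
    v = counts @ PROJ
    return v / (np.linalg.norm(v) + 1e-12)

passages = [
    "Procedure for municipal building permits",
    "Guidelines for public procurement contracts",
    "Library opening hours and lending rules",
]
index = np.stack([encode(p) for p in passages])       # precomputed passage embeddings

def search(query: str, k: int = 2):
    scores = index @ encode(query)                    # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [(passages[i], float(scores[i])) for i in top]

print(search("how do I get a building permit"))
```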

Relevance:

90.00%

Publisher:

Abstract:

The thesis is dedicated to the implementation of advanced x-ray-based techniques for the investigation of battery systems, predominantly the cathode materials. The implemented characterisation methods include synchrotron-based x-ray absorption spectroscopy, powder x-ray diffraction, 2-dimensional x-ray fluorescence, full-field transmission soft x-ray microscopy, and laboratory x-ray photoelectron spectroscopy. The research highlights the strengths of each method for material characterisation, exploring their complementarities and intersections. The results focus on manganese hexacyanoferrate and partially Ni-substituted manganese hexacyanoferrate, in both organic and aqueous battery systems. In the aqueous system, the modification of the cathode composition was observed with various techniques, pointing to processes occurring in the bulk, at the surface, locally or over long range, including speciation by 2-dimensional scanning and time resolution through operando measurements. In organic media, the inhomogenisation of the cathode material during aging was investigated by developing a dedicated image-treatment procedure for the maps obtained from transmission soft x-ray microscopy. It is worth mentioning that, apart from combining the outcomes of the various x-ray measurements, new capabilities were also explored, namely probing the oxidation state of an element with the synchrotron-based 2-dimensional x-ray fluorescence technique, which is generally not possible with a conventional setup. The results and methodology of this thesis can be generalised to the characterisation of other battery systems and beyond, as x-ray techniques are among the most informative and sophisticated methods for the advanced structural investigation of materials.
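
As one hedged illustration of oxidation-state estimation from absorption data, the sketch below performs a linear-combination fit of a measured spectrum against two reference edges; all spectra are synthetic placeholders and the procedure is not claimed to be the analysis pipeline used in the thesis.

```python
# A minimal sketch of linear-combination fitting against reference spectra of
# known oxidation states. All spectra here are synthetic placeholders.
import numpy as np

energy = np.linspace(6530, 6570, 200)                  # hypothetical Mn K-edge range (eV)

def edge(e0):
    """Idealised absorption edge centred at e0."""
    return 1.0 / (1.0 + np.exp(-(energy - e0) / 1.5))

ref_mn2, ref_mn3 = edge(6548.0), edge(6552.0)          # assumed Mn(II) / Mn(III) references
measured = 0.35 * ref_mn2 + 0.65 * ref_mn3 + np.random.default_rng(0).normal(0, 0.01, 200)

# Brute-force least squares over the Mn(II) fraction.
fracs = np.linspace(0, 1, 501)
residuals = [np.sum((measured - f * ref_mn2 - (1 - f) * ref_mn3) ** 2) for f in fracs]
f_best = fracs[int(np.argmin(residuals))]
print(f"Mn(II) fraction ~ {f_best:.2f}, mean oxidation state ~ {2 + (1 - f_best):.2f}")
```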

Relevance:

90.00%

Publisher:

Abstract:

This thesis develops AI methods as a contribution to computational musicology, an interdisciplinary field that studies music with computers. In systematic musicology a composition is defined as the combination of harmony, melody and rhythm. According to de La Borde, harmony alone "merits the name of composition". This thesis focuses on analysing harmony from a computational perspective. We concentrate on symbolic music representation and address the problem of formally representing chord progressions in western music compositions. Informally, chords are sets of pitches played simultaneously, and chord progressions constitute the harmony of a composition. Our approach combines machine learning (ML) techniques with knowledge-based techniques. We design and implement the Modal Harmony ontology (MHO), using OWL. It formalises one of the most important theories in western music: Modal Harmony Theory. We propose and experiment with different types of embedding methods to encode chords, inspired by NLP and adapted to the music domain, using statistical (extensional) knowledge, by relying on a large dataset of chord annotations (ChoCo); intensional knowledge, by relying on MHO; and a combination of the two. The methods are evaluated on two musicologically relevant tasks: chord classification and music structure segmentation. The former is verified by comparing the results of the Odd One Out algorithm with the classification obtained with MHO; good performance (accuracy: 0.86) is achieved. For the latter, we feed an RNN with our embeddings. Results show that the best performance (F1: 0.6) is achieved with embeddings that combine both approaches. Our method outperforms the state of the art (F1 = 0.42) for symbolic music structure segmentation. It is worth noting that embeddings based only on MHO almost equal the best performance (F1 = 0.58). We remark that those embeddings only require the ontology as input, as opposed to other approaches that rely on large datasets.
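
A minimal sketch of the odd-one-out style of evaluation over chord embeddings is shown below: the chord least similar to the centroid of the group is flagged as the outlier. The toy 4-dimensional embeddings are invented; the thesis uses ChoCo/MHO-derived embeddings instead.

```python
# A minimal sketch of an odd-one-out check over chord embeddings.
# The embeddings below are invented toy vectors.
import numpy as np

chord_vecs = {                      # hypothetical 4-dim chord embeddings
    "Cmaj": np.array([0.90, 0.10, 0.00, 0.10]),
    "Gmaj": np.array([0.80, 0.20, 0.10, 0.10]),
    "Fmaj": np.array([0.85, 0.15, 0.05, 0.10]),
    "Bdim": np.array([0.10, 0.20, 0.90, 0.40]),
}

def odd_one_out(names):
    vecs = np.stack([chord_vecs[n] / np.linalg.norm(chord_vecs[n]) for n in names])
    centroid = vecs.mean(axis=0)
    sims = vecs @ (centroid / np.linalg.norm(centroid))   # cosine to the centroid
    return names[int(np.argmin(sims))]

print(odd_one_out(["Cmaj", "Gmaj", "Fmaj", "Bdim"]))      # expected: "Bdim"
```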

Relevance:

80.00%

Publisher:

Abstract:

Due to the imprecise nature of biological experiments, biological data is often characterized by the presence of redundant and noisy data. This may be due to errors that occurred during data collection, such as contamination of laboratory samples. This is the case for gene expression data, where the equipment and tools currently used frequently produce noisy measurements. Machine Learning algorithms have been successfully used in gene expression data analysis. Although many Machine Learning algorithms can deal with noise, detecting and removing noisy instances from the training data set can help the induction of the target hypothesis. This paper evaluates the use of distance-based pre-processing techniques for noise detection in gene expression data classification problems. The evaluation analyzes the effectiveness of the investigated techniques in removing noisy data, measured by the accuracy obtained by different Machine Learning classifiers over the pre-processed data.
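
One distance-based pre-processing rule of the kind evaluated here can be sketched as follows: an instance is flagged as noisy when most of its k nearest neighbours carry a different class label (an edited-nearest-neighbour style filter). The toy data are random; the paper works with real gene expression datasets.

```python
# A minimal sketch of a distance-based noise filter on toy "expression" data.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 50)), rng.normal(3, 1, (30, 50))])  # 60 samples, 50 "genes"
y = np.array([0] * 30 + [1] * 30)
y[5] = 1                                   # inject one mislabelled instance

def noisy_mask(X, y, k=5):
    flags = np.zeros(len(y), dtype=bool)
    for i in range(len(y)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the instance itself
        neighbours = np.argsort(d)[:k]
        if np.mean(y[neighbours] != y[i]) > 0.5:
            flags[i] = True
    return flags

mask = noisy_mask(X, y)
print("flagged as noisy:", np.where(mask)[0])   # should include index 5
X_clean, y_clean = X[~mask], y[~mask]           # then train classifiers on the cleaned set
```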

Relevance:

80.00%

Publisher:

Abstract:

In Natural Language Processing (NLP) symbolic systems, several linguistic phenomena, for instance the thematic role relationships between sentence constituents, such as AGENT, PATIENT, and LOCATION, can be accounted for by employing a rule-based grammar. Another approach to NLP concerns the use of the connectionist model, which has the benefits of learning, generalization and fault tolerance, among others. A third option merges the two previous approaches into a hybrid one: a symbolic thematic theory is used to supply the connectionist network with initial knowledge. Inspired by neuroscience, a symbolic-connectionist hybrid system called BIO theta PRED (BIOlogically plausible thematic (theta) symbolic-connectionist PREDictor) is proposed, designed to reveal the thematic grid assigned to a sentence. Its connectionist architecture takes as input a featural representation of the words (based on the verb/noun WordNet classification and on the classical semantic microfeature representation) and produces as output the thematic grid assigned to the sentence. BIO theta PRED is designed to "predict" thematic (semantic) roles assigned to words in a sentence context, employing a biologically inspired training algorithm and architecture and adopting a psycholinguistic view of thematic theory.
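
The connectionist half of such a hybrid can be sketched as a small feed-forward network mapping a featural word representation to scores over thematic roles. The features, toy examples, and plain backpropagation used below are illustrative stand-ins, not the biologically inspired architecture of BIO theta PRED.

```python
# A minimal sketch of a feed-forward net mapping word features to thematic
# roles. Features, examples, and training procedure are invented stand-ins.
import numpy as np

rng = np.random.default_rng(0)
ROLES = ["AGENT", "PATIENT", "LOCATION"]

# Toy featural encoding: [animate, concrete, is_place, preceded_by_preposition]
X = np.array([[1, 1, 0, 0],   # "the boy"     -> AGENT
              [0, 1, 0, 0],   # "the ball"    -> PATIENT
              [0, 1, 1, 1],   # "in the park" -> LOCATION
              [1, 1, 0, 0],   # "the dog"     -> AGENT
              [0, 1, 1, 1]],  # "at school"   -> LOCATION
             dtype=float)
y = np.array([0, 1, 2, 0, 2])

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 3)); b2 = np.zeros(3)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

for _ in range(500):                      # plain gradient descent on cross-entropy
    h, p = forward(X)
    grad = p.copy(); grad[np.arange(len(y)), y] -= 1; grad /= len(y)
    dW2 = h.T @ grad; db2 = grad.sum(0)
    dh = grad @ W2.T * (1 - h**2)
    dW1 = X.T @ dh;  db1 = dh.sum(0)
    for param, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * g

_, probs = forward(np.array([[1, 1, 0, 0]], dtype=float))   # a new animate noun phrase
print(ROLES[int(np.argmax(probs))])                          # expected: AGENT
```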

Relevance:

80.00%

Publisher:

Abstract:

The ability to control both the minimum size of holes and the minimum size of structural members is an essential requirement in the topology optimization design process for manufacturing. This paper addresses both requirements by means of a unified approach involving mesh-independent projection techniques. An inverse projection is developed to control the minimum hole size, while a standard direct projection scheme is used to control the minimum length of structural members. In addition, a heuristic scheme that combines both contrasting requirements simultaneously is discussed. Two topology optimization implementations are contributed: one in which the projection (either inverse or direct) is used at each iteration, and another in which a two-phase scheme is explored. In the first phase, the compliance minimization is carried out without any projection until convergence. In the second phase, the chosen projection scheme is applied iteratively until a solution is obtained that satisfies either the minimum member size or the minimum hole size. Examples demonstrate the various features of the projection-based techniques presented.
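
A minimal sketch of a Heaviside-type direct projection on a filtered density field is given below, together with a mirrored variant that pushes intermediate densities towards void; the 1-D density sample and the beta value are illustrative, and the paper's actual inverse projection may differ in detail.

```python
# A minimal sketch of Heaviside-type projections on filtered densities.
# The density sample and beta value are illustrative assumptions.
import numpy as np

def direct_projection(rho_filtered, beta=8.0):
    """Project filtered densities towards solid (Guest-style Heaviside)."""
    return 1.0 - np.exp(-beta * rho_filtered) + rho_filtered * np.exp(-beta)

def inverse_projection(rho_filtered, beta=8.0):
    """Mirror of the direct projection, pushing intermediate values towards void."""
    return 1.0 - direct_projection(1.0 - rho_filtered, beta)

rho = np.linspace(0.0, 1.0, 11)            # a filtered density sample
print("direct :", np.round(direct_projection(rho), 3))
print("inverse:", np.round(inverse_projection(rho), 3))
```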

Relevance:

80.00%

Publisher:

Abstract:

Ocotea catharinensis is a rare tree species indigenous to the Atlantic rainforest of South America. In spite of its value as a hardwood species, it is in danger of extinction. The species produces seeds erratically, showing irregular flowering and slow growth; therefore, plants are not easily replaced. Tissue culture-based techniques are commonly used for obtaining living material for tree propagation and in vitro preservation, and a high-frequency somatic embryogenic system was therefore developed for the species. In the present work, the genetic fidelity of cell aggregates and somatic embryos at various stages of in vitro development of O. catharinensis was investigated using RAPD and AFLP markers. Both analyses confirmed the absence of genetic variation in all developmental stages of O. catharinensis embryogenic cultures, verifying that the in vitro system is genetically stable. The cultures were also analyzed for their methylation profiles at 5′-CCGG-3′ sites by identifying methylation-sensitive amplification polymorphisms (MSAPs). Some of these markers differentiated cell aggregates from embryo bodies. The sequencing of ten MSAP markers revealed that four sequences showed significant similarity to genes encoding plant proteins. In particular, the predicted amino acid sequence of the fragment designated OcEaggHMttc155 was similar to the enzyme 1-aminocyclopropane-1-carboxylate oxidase (ACO), which is involved in the biosynthesis of ethylene and whose expression has been reported to occur from the beginning to the intermediate stages of plant embryo development. Here, we suggest that this enzyme is possibly involved in the control of the earliest stages of somatic embryogenesis of O. catharinensis, and an approach to study ACO expression during somatic embryogenesis is proposed.

Relevance:

80.00%

Publisher:

Abstract:

This article extends Defeasible Logic to deal with the contextual deliberation process of cognitive agents. First, we introduce meta-rules to reason with rules. Meta-rules are rules that have as a consequent rules for motivational components, such as obligations, intentions and desires. In other words, they include nested rules. Second, we introduce explicit preferences among rules. They deal with complex structures where nested rules can be involved.
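
As a loose illustration of nesting, the sketch below represents a meta-rule as a rule whose consequent is itself a rule, using plain data structures; the syntax and the example about obligations are invented and do not reproduce the formalism of the article.

```python
# A minimal sketch of representing meta-rules (rules whose consequent is a
# rule) as plain data. The example and modalities are invented placeholders.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Literal:
    name: str
    positive: bool = True

@dataclass(frozen=True)
class Rule:
    antecedent: tuple                      # tuple of Literal
    consequent: Union[Literal, "Rule"]     # a literal, or a nested rule (meta-rule)
    mode: str = "OBL"                      # motivational modality: OBL / INT / DES

# Ordinary rule: if invoice_received then OBL(pay_invoice)
pay = Rule((Literal("invoice_received"),), Literal("pay_invoice"), mode="OBL")

# Meta-rule: if acting_as_manager then the rule `pay` itself comes into force
meta = Rule((Literal("acting_as_manager"),), pay)

print(meta)
```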

Relevance:

80.00%

Publisher:

Abstract:

Background: Concurrent autoimmune disorders (CAIDs) have been shown to occur in 22% to 34% of patients with autoimmune hepatitis (AIH). Their presence has been linked to female gender, older age, and certain HLA antigens, namely HLA-A11, DRB1*04, and DRB4*01. Aims: To assess the frequency and nature of CAID in Brazilian patients with AIH types 1 (AIH-1) and 2 (AIH-2) and to investigate the influence of age, gender, and genetic background on their occurrence. Patients and Methods: The presence and nature of CAID were studied in 143 patients [117 females, median age 11 (1.3 to 69) years] with AIH-1 (n = 125) and AIH-2 (n = 28). HLA typing and tumor necrosis factor α (TNF-α) gene promoter and exon 1 cytotoxic T lymphocyte associated antigen 4 (CTLA-4) gene polymorphisms were determined by polymerase chain reaction-based techniques. Results: The frequency of CAID was similar in patients with AIH-1 (14%) and AIH-2 (18%), but their nature was shown to vary. Arthritis was seen in half of the patients (n = 8) with CAID and AIH-1 and in none of those with AIH-2. Subjects with AIH-1 and CAID were shown to be older [24 (1.3 to 61) vs. 11 (1.3 to 69) y, P = 0.02] and to have more often circulating antinuclear antibody (76% vs. 40%, P = 0.008) and less frequently antiactin antibodies (33% vs. 75%, P = 0.008) when compared with their counterparts without CAID. No particular HLA-DR and DQ alleles, nor TNF-α and CTLA-4 genotypes, were associated with CAID. Conclusions: The nature, but not the frequency, of CAID was shown to vary between AIH-1 and AIH-2. In subjects with AIH-1, CAID was linked to older age and to the presence of antinuclear antibody. No predisposition to CAID was associated with the HLA-DRB1*04 or DRB4*01 alleles. The observed lower frequency of CAID could be attributed to the lower age of disease onset in Brazilians and to differences in HLA-encoded susceptibility to AIH-1 observed in South America.

Relevance:

80.00%

Publisher:

Abstract:

Fuzzy Bayesian tests were performed to evaluate whether the mothers' seroprevalence and children's seroconversion to measles vaccine could be considered "high" or "low". The results of the tests were aggregated into a fuzzy rule-based model structure, which would allow an expert to influence the model results. The linguistic model was developed considering four input variables. As the model output, we obtain the recommended age-specific vaccine coverage. The inputs of the fuzzy rules are fuzzy sets and the outputs are constant functions, constituting the simplest (zero-order) form of Takagi-Sugeno-Kang model. This fuzzy approach is compared with a classical one, in which the classical Bayes test was performed. Although the fuzzy and classical performances were similar, the fuzzy approach was more detailed and revealed important differences. In addition to taking into account subjective information in the form of fuzzy hypotheses, it can be intuitively grasped by the decision maker. Finally, we show that the Bayesian test of fuzzy hypotheses is an interesting approach from the theoretical point of view, in the sense that it combines two complementary areas of investigation, normally seen as competitive.
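
A minimal sketch of a zero-order Takagi-Sugeno-Kang model of this kind is given below: fuzzy-set inputs, constant rule outputs, and a firing-strength-weighted average. The membership functions, the two rules, and the output constants are invented placeholders, not the calibrated vaccination model of the paper.

```python
# A minimal sketch of a zero-order Takagi-Sugeno-Kang inference step.
# Membership functions, rules, and outputs are illustrative assumptions.
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function with shoulders at [b, c]."""
    return float(np.clip(min((x - a) / (b - a), (d - x) / (d - c), 1.0), 0.0, 1.0))

def low_seroprevalence(x):  return trapezoid(x, -1.0, 0.0, 0.4, 0.7)
def high_seroprevalence(x): return trapezoid(x, 0.4, 0.7, 1.0, 2.0)

# Rule outputs: recommended vaccine coverage (constants, zero-order TSK).
RULES = [
    (low_seroprevalence,  0.95),   # if maternal seroprevalence is low  -> cover 95%
    (high_seroprevalence, 0.80),   # if maternal seroprevalence is high -> cover 80%
]

def recommended_coverage(seroprevalence):
    weights = np.array([mf(seroprevalence) for mf, _ in RULES])   # firing strengths
    outputs = np.array([out for _, out in RULES])
    return float((weights * outputs).sum() / (weights.sum() + 1e-9))

print(recommended_coverage(0.55))   # lands between the two rule outputs
```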