926 results for Programmable array logic
Abstract:
Symbiotic associations with microorganisms are pivotal in many insects. Yet the functional roles of obligate symbionts have been difficult to study because it has not been possible to cultivate these organisms in vitro. The medically important tsetse fly (Diptera: Glossinidae) relies on its obligate endosymbiont, Wigglesworthia glossinidia, a member of the Enterobacteriaceae closely related to Escherichia coli, for fertility and possibly nutrition. We show here that the intracellular Wigglesworthia has a reduced genome smaller than 770 kb. To characterize the composition of its genome, we used the gene arrays developed for E. coli. We were able to identify 650 orthologous genes in Wigglesworthia, corresponding to ≈85% of its genome. The arrays were also applied to expression analysis using Wigglesworthia cDNA, and 61 gene products were detected, presumably corresponding to some of its most abundant products. Overall, genes involved in cell processes, DNA replication, transcription, and translation were found to be largely retained in the small genome of Wigglesworthia. In addition, genes coding for transport proteins, chaperones, and the biosynthesis of cofactors and some amino acids were found to comprise a significant portion, suggesting an important role for these proteins in its symbiotic life. Based on its expression profile, we predict that Wigglesworthia may be a facultatively anaerobic organism that utilizes ammonia as its major nitrogen source. We present an application of E. coli gene arrays to obtain broad genome information for a closely related organism in the absence of complete genome sequence data.
Abstract:
The mammalian form of the protozoan parasite Leishmania mexicana contains high activity of a cysteine proteinase (LmCPb) encoded on a tandem array of 19 genes (lmcpb). Homozygous null mutants for lmcpb have been produced by targeted gene disruption. All life-cycle stages of the mutant can be cultured in vitro, demonstrating that the gene is not essential for growth or differentiation of the parasite. However, the mutant exhibits a marked phenotype affecting virulence: its infectivity to macrophages is reduced by 80%. The mutants are as efficient as wild-type parasites in invading macrophages, but they survive in only a small proportion of the cells. However, those parasites that successfully infect these macrophages grow normally. Despite their reduced virulence, the mutants are still able to produce subcutaneous lesions in mice, albeit at a slower rate than wild-type parasites. The product of a single copy of lmcpb re-expressed in the null mutant was enzymatically active and restored infectivity toward macrophages to wild-type levels. Double null mutants created for lmcpb and lmcpa (another cathepsin L-like cysteine proteinase) have a phenotype similar to that of the lmcpb null mutant, showing that LmCPa does not compensate for the loss of LmCPb.
Abstract:
We present a general approach to forming structure-activity relationships (SARs). The approach represents chemical structure by atoms and their bond connectivities, in combination with the inductive logic programming (ILP) algorithm PROGOL. Existing SAR methods describe chemical structure using attributes, which are general properties of an object. It is not possible to map chemical structure directly to attribute-based descriptions, as such descriptions have no internal organization. A more natural and general way to describe chemical structure is a relational description, in which the internal structure of the description mirrors that of the object described. Our atom and bond connectivity representation is a relational description, and ILP algorithms can form SARs from relational descriptions. We tested the relational approach by investigating the SARs of 230 aromatic and heteroaromatic nitro compounds. These compounds had previously been split into two subsets: 188 compounds amenable to regression and 42 that were not. For the 188 compounds, a SAR was found that was as accurate as the best statistical or neural-network-generated SARs. The PROGOL SAR has the advantages that it did not require any indicator variables handcrafted by an expert, and the generated rules were easily comprehensible. For the 42 compounds, PROGOL formed a SAR that was significantly (P < 0.025) more accurate than linear regression, quadratic regression, and back-propagation. This SAR is based on an automatically generated structural alert for mutagenicity.
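To make the relational representation concrete, here is a minimal sketch, assuming a toy atom/bond encoding in Python rather than PROGOL's Prolog-style facts; the structural alert tested below is a hypothetical example, not the rule actually induced in the paper.

```python
# Relational facts: atom id -> element, (id, id) -> bond type.
# A small nitroaromatic fragment: two aromatic carbons plus a nitro group.
atoms = {1: "c", 2: "c", 3: "n", 4: "o", 5: "o"}
bonds = {(1, 2): "aromatic", (2, 3): "single", (3, 4): "double", (3, 5): "single"}

def bond_type(a, b):
    """Symmetric lookup over the bond facts."""
    return bonds.get((a, b)) or bonds.get((b, a))

def has_nitro_on_aromatic():
    """Hypothetical structural alert: a nitrogen bonded to two oxygens and
    to a carbon that takes part in an aromatic bond."""
    for n, elem in atoms.items():
        if elem != "n":
            continue
        oxygens = [a for a, e in atoms.items() if e == "o" and bond_type(n, a)]
        aromatic_carbons = [a for a, e in atoms.items()
                            if e == "c" and bond_type(n, a)
                            and any(bond_type(a, b) == "aromatic" for b in atoms)]
        if len(oxygens) >= 2 and aromatic_carbons:
            return True
    return False

print(has_nitro_on_aromatic())   # True for this fragment
```

Unlike a fixed attribute vector, this representation lets a rule quantify over atoms and follow bonds, which is exactly what makes the ILP description relational.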
Abstract:
We investigate the critical properties of the four-state commutative random permutation glassy Potts model in three and four dimensions by means of Monte Carlo simulations and a finite-size scaling analysis. By using a field programmable gate array, we have been able to thermalize a large number of samples of systems with large volume. This has allowed us to observe a spin-glass ordered phase in d=4 and to study the critical properties of the transition. In d=3, our results are consistent with the presence of a Kosterlitz-Thouless transition, but also with different scenarios: transient effects due to a value of the lower critical dimension slightly below 3 could be very important.
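For orientation, the following is a minimal software sketch of the basic Metropolis thermalization kernel for the ordinary ferromagnetic 4-state Potts model on a 3D periodic lattice; the random-permutation glassy couplings and the FPGA pipeline used in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
L, q, T = 8, 4, 1.0                      # lattice size, Potts states, temperature
spins = rng.integers(q, size=(L, L, L))  # random initial configuration

def site_energy(s, x, y, z, val):
    """Ferromagnetic Potts energy of one site: -1 per aligned neighbor."""
    e = 0
    for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        if s[(x+dx) % L, (y+dy) % L, (z+dz) % L] == val:
            e -= 1
    return e

def metropolis_sweep(s):
    """One sweep: L^3 single-site Metropolis updates."""
    for _ in range(L**3):
        x, y, z = rng.integers(L, size=3)
        old = s[x, y, z]
        new = (old + rng.integers(1, q)) % q          # propose a different state
        dE = site_energy(s, x, y, z, new) - site_energy(s, x, y, z, old)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[x, y, z] = new

for sweep in range(100):
    metropolis_sweep(spins)
```

An FPGA implementation parallelizes precisely this kind of local update over many sites and samples at once, which is what makes thermalizing large volumes feasible.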
Abstract:
This Master’s Research Paper investigates Olafur Eliasson’s The weather project as a case study for the dialogue between Gothic artistic principles and prominent elements of contemporary art. A product of a post-modern mindset, weakened historicity allows us to examine these connections anew; past, present, and future blur and artists (and viewers) have the whole of time from which to gain inspiration and meaning in works of art. I demonstrate similarities through theories on phenomenology; the spatiotemporal relationship between viewer and artwork; the convergence of art and science; and the communal, quasi-liminal experience of pilgrimage. I embrace Eliasson’s belief in the self-reflexive potential of art and the importance of the viewer’s own values, memories, and methods of seeing. This new interpretive layer will hopefully offer a richer experience for future participants of both Gothic cathedrals and environments produced by Studio Olafur Eliasson.
Abstract:
This paper outlines the approach adopted by the PLSI research group at the University of Alicante in the PASCAL 2006 second Recognising Textual Entailment challenge (RTE-2). Our system is composed of two main components: the first derives the logic forms of the text/hypothesis pairs, and the second computes a similarity score from the semantic relations between the derived logic forms. To obtain this score, we apply several similarity and relatedness measures based on the structure and content of WordNet.
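As a rough illustration of the second component, the sketch below computes one standard WordNet relatedness measure (Wu-Palmer) with NLTK and aggregates it over term pairs; the aggregation scheme and the specific set of measures used by the system are assumptions for illustration only. Requires `pip install nltk` and `nltk.download('wordnet')`.

```python
from nltk.corpus import wordnet as wn

def max_wup(word1, word2):
    """Best Wu-Palmer similarity over all noun-synset pairs (0 if none)."""
    scores = [s1.wup_similarity(s2) or 0.0
              for s1 in wn.synsets(word1, pos=wn.NOUN)
              for s2 in wn.synsets(word2, pos=wn.NOUN)]
    return max(scores, default=0.0)

def overlap_score(text_terms, hyp_terms):
    """Toy entailment heuristic (invented): average best-match similarity
    of each hypothesis term against the text terms."""
    return sum(max(max_wup(h, t) for t in text_terms)
               for h in hyp_terms) / len(hyp_terms)

print(overlap_score(["cat", "sleep"], ["animal", "rest"]))
```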
Abstract:
This paper describes a CL-SR system that employs two different techniques: the first is based on NLP rules that apply logic forms to topic processing, while the second consists of applying the IR-n statistical search engine to the spoken document collection. Applying logic forms to the topics makes it possible to increase the weight of topic terms according to a set of syntactic rules. The resulting topic-term weights are then used by the IR-n system in the information retrieval process.
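A minimal sketch of the weighting idea follows, with made-up syntactic roles and boost factors; the actual rules and the IR-n weighting scheme are not reproduced here.

```python
# Assumed roles and boost values, for illustration only.
BOOST = {"head_noun": 2.0, "verb": 1.5, "modifier": 1.0}

def weight_topic_terms(terms_with_roles):
    """terms_with_roles: list of (term, syntactic_role) pairs from the
    topic's logic form; returns a per-term weight for the search engine."""
    return {term: BOOST.get(role, 1.0) for term, role in terms_with_roles}

topic = [("cancer", "head_noun"), ("treat", "verb"), ("new", "modifier")]
print(weight_topic_terms(topic))  # {'cancer': 2.0, 'treat': 1.5, 'new': 1.0}
```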
Abstract:
Hardware/software partitioning (HSP) is a key task in embedded system co-design. Its main goal is to decide which components of an application are to be executed on a general-purpose processor (software) and which ones on specific hardware, taking into account a set of restrictions expressed by metrics. In recent years, several approaches driven by metaheuristic algorithms have been proposed for solving the HSP problem. However, owing to the diversity of models and metrics used, choosing the best-suited algorithm remains an open problem. This article presents the results of applying a fuzzy approach to the HSP problem. This approach is more flexible than many others because it can accept reasonably good solutions and reject those that do not seem good. We compare six metaheuristic algorithms: Random Search, Tabu Search, Simulated Annealing, Hill Climbing, Genetic Algorithm, and Evolutionary Strategy. The presented model aims to simultaneously minimize the hardware area and the execution time. The results show that Restart Hill Climbing is the best-performing algorithm in most cases.
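The sketch below shows plain restart hill climbing on a toy binary HW/SW partitioning instance with a weighted area/time cost; the paper's fuzzy formulation and benchmarks are not reproduced, and all component data are invented.

```python
import random

random.seed(1)
# Per component: (hw_area, hw_time, sw_time); hardware is faster but costs area.
comps = [(random.uniform(1, 5), random.uniform(1, 3), random.uniform(2, 8))
         for _ in range(20)]

def cost(part, w_area=0.5, w_time=0.5):
    """part[i] == 1 means component i goes to hardware, 0 means software."""
    area = sum(c[0] for c, p in zip(comps, part) if p == 1)
    time = sum(c[1] if p == 1 else c[2] for c, p in zip(comps, part))
    return w_area * area + w_time * time

def hill_climb(part):
    """Greedy single-flip improvement until a local optimum is reached."""
    improved = True
    while improved:
        improved = False
        for i in range(len(part)):
            flipped = part[:i] + [1 - part[i]] + part[i+1:]
            if cost(flipped) < cost(part):
                part, improved = flipped, True
    return part

# Restart variant: run hill climbing from several random starting points.
best = min((hill_climb([random.randint(0, 1) for _ in comps])
            for _ in range(10)), key=cost)
print(cost(best))
```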
Abstract:
Array measurements have become a valuable tool for non-invasive site response characterization. The array design, i.e., its size, geometry, and number of stations, has a great influence on the quality of the results. Among these parameters, the number of available stations is usually the main limitation in field experiments because of the economic and logistical constraints involved. Sometimes one or more stations of the initially planned array layout, carefully designed before the fieldwork campaign, do not work properly, modifying the prearranged geometry; at other times, it is not possible to set up the desired layout because of a lack of stations. Therefore, for a planned array layout, the number of operative stations and their arrangement become crucial in the acquisition stage and, subsequently, in the dispersion curve estimation. In this paper we carry out an experimental study to determine the minimum number of stations that provides reliable dispersion curves for three prearranged array configurations (triangular, circular with a central station, and polygonal geometries). For the optimization study, we jointly analyze the theoretical array responses and the experimental dispersion curves obtained with the f-k method. For the f-k method, we compare the dispersion curves obtained for the original (prearranged) arrays with those obtained for the modified arrays, i.e., the dispersion curves obtained when a certain number of stations n is removed, each time, from the original layout of X geophones. The comparison is evaluated by means of a misfit function, which helps us determine how strongly station removal constrains the studied geometries and which station, or combination of stations, most degrades the array capability when unavailable. This information may be crucial for improving future array designs, indicating when the number of deployed stations can be reduced without losing the reliability of the results.
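One plausible form of such a misfit function is sketched below, comparing the phase-velocity curve of a reduced array against that of the full array at common frequencies; the paper's exact misfit definition may differ, and the curves here are synthetic.

```python
import numpy as np

def misfit(c_full, c_reduced):
    """RMS relative difference between two phase-velocity curves sampled
    at the same frequencies."""
    c_full, c_reduced = np.asarray(c_full), np.asarray(c_reduced)
    return np.sqrt(np.mean(((c_reduced - c_full) / c_full) ** 2))

freqs = np.linspace(2.0, 20.0, 10)       # Hz, synthetic sampling
c_full = 800.0 / np.sqrt(freqs)          # made-up dispersion curve, m/s
# Reduced-array curve: the full curve perturbed by a few percent.
c_reduced = c_full * (1 + 0.03 * np.random.default_rng(0).standard_normal(10))
print(f"misfit = {misfit(c_full, c_reduced):.3f}")
```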
Abstract:
In this paper, we propose a novel algorithm for the rigorous design of distillation columns that integrates a process simulator in a generalized disjunctive programming formulation. The optimal distillation column, or column sequence, is obtained by selecting, for each column section, among a set of candidate sections with different numbers of theoretical trays. The selection of thermodynamic models, property estimation, etc. is handled entirely in the simulation environment, as are all the numerical issues related to the convergence of distillation columns (or column sections). The model is formulated as a Generalized Disjunctive Programming (GDP) problem and solved using the logic-based outer approximation algorithm without MINLP reformulation. Examples ranging from a single column to a thermally coupled sequence and extractive distillation show the performance of the new algorithm.
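To illustrate only the disjunctive choice, the sketch below enumerates candidate tray counts per section and evaluates each alternative with a stub simulator; the actual algorithm embeds a rigorous process simulator and solves the GDP with logic-based outer approximation rather than by enumeration.

```python
from itertools import product

# Each column section picks exactly one disjunct (a candidate tray count).
CANDIDATE_TRAYS = {"rectifying": [5, 10, 15], "stripping": [5, 10, 15]}

def simulate_column(trays):
    """Stand-in for the process simulator: a toy capital/energy trade-off
    (more trays cost more but need less energy). Invented numbers."""
    capital = 10.0 * sum(trays.values())
    energy = 500.0 / sum(trays.values())
    return capital + energy

best = min(
    (dict(zip(CANDIDATE_TRAYS, combo))
     for combo in product(*CANDIDATE_TRAYS.values())),
    key=simulate_column,
)
print(best, simulate_column(best))
```

In the GDP formulation these alternatives are disjuncts with Boolean selection variables, so the solver explores them via master/subproblem iterations instead of exhaustively.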
Abstract:
In t-norm based many-valued logic, the valuations of propositions form an uncountable set: the interval [0,1]. In addition, we are given a set E of truth values p, subject to certain conditions; the valuation v is v = V(p), where V is a one-to-one mapping of E onto [0,1]. The general propositional algebra of t-norm based many-valued logic is then constructed from seven axioms. It contains classical (non-many-valued) logic as a special case. It is first applied to the case where E = [0,1] and V is the identity. The result is a t-norm based many-valued logic in which a contradiction can have a nonzero degree of truth but cannot be true; for this reason, this logic is called quasi-paraconsistent.
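The quasi-paraconsistent behavior can be checked numerically: under any of the three standard t-norms below, the degree of truth of the contradiction "v and not-v" can be positive but never reaches 1. This sketch only evaluates the t-norms; the seven-axiom construction is not reproduced.

```python
# Three standard t-norms T(a, b) used as conjunction in many-valued logic.
t_norms = {
    "minimum":     lambda a, b: min(a, b),
    "product":     lambda a, b: a * b,
    "Lukasiewicz": lambda a, b: max(0.0, a + b - 1.0),
}

for name, T in t_norms.items():
    # Degree of truth of the contradiction v AND (1 - v), over v in [0, 1].
    degrees = [T(v / 100, 1 - v / 100) for v in range(101)]
    print(f"{name:11s} max degree of contradiction = {max(degrees):.2f}")
# minimum: 0.50, product: 0.25, Lukasiewicz: 0.00 -- never 1.0 (never true)
```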
Abstract:
Paraconsistent logic admits that a contradiction can be true. Let P be a proposition and p its truth value. In paraconsistent logic, the truth value of the contradiction satisfies p(1 − p) = 1, i.e., p² − p + 1 = 0. This equation has no real roots but admits the complex roots p = (1 ± i√3)/2. This result leads to the development of a multivalued logic with complex truth values. The set of complex truth values being isomorphic to the vectors of the plane, it is natural to relate the valuation V to the metric of the vector space R²; we adopt the norms of vectors as valuations. The main objective of this paper is to establish a theory of truth-value evaluation for paraconsistent logics, with the goal of using it to analyze ideological, mythical, religious, and mystical belief systems.
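The sketch below numerically checks the equation as reconstructed above (p(1 − p) = 1 is inferred from the abstract, which states only that the equation has no real roots but complex ones) and evaluates the proposed vector-norm valuation at the roots.

```python
import cmath

# Reconstructed quadratic p^2 - p + 1 = 0 (an inference, see lead-in above).
a, b, c = 1, -1, 1
disc = cmath.sqrt(b * b - 4 * a * c)          # sqrt(-3): purely imaginary
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
for p in roots:
    print(p, "norm =", abs(p))                # both roots have norm 1
```

Both roots (1 ± i√3)/2 have vector norm 1, so under the norm valuation the contradiction evaluates as true, consistent with the paraconsistent reading.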
Abstract:
We address discrete-continuous dynamic optimization problems using a disjunctive multistage modeling framework with implicit discontinuities, which increases the problem complexity because the number of continuous phases and discrete events is not known a priori. After fixing an alternative sequence of modes, we convert the infinite-dimensional continuous mixed-logic dynamic optimization (MLDO) problem into a finite-dimensional discretized GDP problem by orthogonal collocation on finite elements. We use the logic-based outer approximation algorithm to fully exploit the structure of the GDP representation of the problem. The modeling framework is illustrated with an optimization problem with implicit discontinuities (the diver problem).
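As a self-contained illustration of the discretization step, the sketch below applies orthogonal collocation on a single finite element, with 2-stage Radau points {1/3, 1}, to the test equation dx/dt = λx; the embedding of this discretization in the GDP/MLDO formulation is not reproduced.

```python
import numpy as np

nodes = np.array([0.0, 1.0 / 3.0, 1.0])      # element-local nodes (tau)
n = len(nodes)

# D[k, j] = derivative of the j-th Lagrange basis polynomial at nodes[k].
D = np.zeros((n, n))
for j in range(n):
    y = np.zeros(n); y[j] = 1.0
    coeffs = np.polyfit(nodes, y, n - 1)      # interpolating polynomial
    D[:, j] = np.polyval(np.polyder(coeffs), nodes)

lam, h, x0 = -1.0, 1.0, 1.0
# Collocation equations at the collocation nodes k = 1..n-1:
#   sum_j D[k, j] * x_j = h * lam * x_k,  with x_0 fixed at the element start.
A = D[1:, 1:] - h * lam * np.eye(n - 1)
rhs = -D[1:, 0] * x0
x = np.linalg.solve(A, rhs)
print(x[-1], np.exp(lam * h))                 # ~0.3636 vs exp(-1) ~ 0.3679
```

With Radau points the end-of-element value is a collocation node, so state continuity across elements is imposed directly on x[-1]; chaining such elements turns the dynamic problem into algebraic constraints for the GDP.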
Abstract:
We present an extension of the logic outer-approximation algorithm for disjunctive discrete-continuous optimal control problems whose dynamic behavior is modeled in terms of differential-algebraic equations. Although the proposed algorithm can be applied to a wide variety of discrete-continuous optimal control problems, we are mainly interested in problems where disjunctions are also present. Disjunctions are included to take into account only those parts of the underlying model that become relevant under certain processing conditions. This improves the numerical robustness of the optimization algorithm, since the parts of the model that are not active are discarded, leading to a reduced-size problem and avoiding potential model singularities. We test the proposed algorithm on three examples exhibiting complex dynamic behavior of varying difficulty. In all the case studies, the number of iterations and the computational effort required to obtain the optimal solutions are modest, and the solutions are relatively easy to find.
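The toy fragment below illustrates why discarding inactive disjuncts helps robustness: a singular model term (here, a division by a vanishing holdup, an invented example) is evaluated only when its disjunct is active, so it never enters the equations otherwise.

```python
def stage_residual(x, holdup, phase_present):
    """Residual of a toy model equation: the disjunct's term is included
    only when the phase is actually present."""
    base = x - 0.5                       # always-active part of the model
    if phase_present:                    # disjunct active: add its term
        base += 1.0 / holdup             # singular as holdup -> 0
    return base

print(stage_residual(0.2, 1e-12, phase_present=False))  # safe: -0.3
print(stage_residual(0.2, 2.0, phase_present=True))     # term included: 0.2
```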