901 results for Combinatorial Veronesian
Abstract:
This thesis describes some aspects of a computer system for doing medical diagnosis in the specialized field of kidney disease. Because such a system faces the spectre of combinatorial explosion, this discussion concentrates on heuristics which control the number of concurrent hypotheses and on efficient "compiled" representations of medical knowledge. In particular, the differential diagnosis of hematuria (blood in the urine) is discussed in detail. A protocol of a simulated doctor/patient interaction is presented and analyzed to determine the crucial structures and processes involved in the diagnosis procedure. The data structure proposed for representing medical information revolves around elementary hypotheses which are activated when certain findings are present; the cycle of disposing of findings, activating hypotheses, evaluating hypotheses locally and combining hypotheses globally is examined for its heuristic implications. The thesis attempts to fit the problem of medical diagnosis into the framework of other Artificial Intelligence problems and paradigms, and in particular explores the notions of pure search vs. heuristic methods, linearity and interaction, local vs. global knowledge, and the structure of hypotheses within the world of kidney disease.
Abstract:
The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction. The connections to a device are not labelled as inputs and outputs; a device will compute with whatever values are available, and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: it is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular for relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented: a progression of ever more powerful languages is described, complete implementations are given, and design difficulties and alternatives are discussed. The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
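To make the local-propagation picture concrete, here is a minimal sketch (in Python, with names of my own choosing rather than the thesis's language) of wires and devices: an adder constraint fires whenever any two of its three wires are known, so the same relation runs in whichever direction data arrives.

```python
# A minimal sketch of local value propagation in a constraint network.
# Each "wire" holds at most one value; each "device" recomputes whenever
# enough of its wires are known, in any direction.

class Wire:
    def __init__(self, name):
        self.name, self.value, self.devices = name, None, []

    def set(self, value):
        if self.value is None:
            self.value = value
            for d in self.devices:      # propagate to attached devices
                d.update()
        elif self.value != value:
            raise ValueError(f"contradiction on {self.name}")

class Adder:
    """Constraint a + b = s, usable in any direction."""
    def __init__(self, a, b, s):
        self.a, self.b, self.s = a, b, s
        for w in (a, b, s):
            w.devices.append(self)

    def update(self):
        a, b, s = self.a.value, self.b.value, self.s.value
        if a is not None and b is not None: self.s.set(a + b)
        elif a is not None and s is not None: self.b.set(s - a)
        elif b is not None and s is not None: self.a.set(s - b)

x, y, z = Wire("x"), Wire("y"), Wire("z")
Adder(x, y, z)
z.set(10); x.set(3)   # the same relation deduces the remaining wire
print(y.value)        # -> 7
```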
Abstract:
Mavron, Vassili; Jungnickel, D.; McDonough, T.P. (2001) 'The Geometry of Frequency Squares', Journal of Combinatorial Theory, Series A 96, pp. 376-387.
Abstract:
This paper presents a lower-bound result on the computational power of a genetic algorithm in the context of combinatorial optimization. We describe a new genetic algorithm, the merged genetic algorithm, and prove that for the class of monotonic functions, the algorithm finds the optimal solution, and does so with an exponential convergence rate. The analysis pertains to the ideal behavior of the algorithm where the main task reduces to showing convergence of probability distributions over the search space of combinatorial structures to the optimal one. We take exponential convergence to be indicative of efficient solvability for the sample-bounded algorithm, although a sampling theory is needed to better relate the limit behavior to actual behavior. The paper concludes with a discussion of some immediate problems that lie ahead.
Abstract:
We propose a new notion of cryptographic tamper evidence. A tamper-evident signature scheme provides an additional procedure Div which detects tampering: given two signatures, Div can determine whether one of them was generated by the forger. Surprisingly, this is possible even after the adversary has inconspicuously learned (exposed) some, or even all, of the secrets in the system. In this case, it might be impossible to tell which signature is generated by the legitimate signer and which by the forger. But at least the fact of the tampering will be made evident. We define several variants of tamper-evidence, differing in their power to detect tampering. In all of these, we assume an equally powerful adversary: she adaptively controls all the inputs to the legitimate signer (i.e., all messages to be signed and their timing), and observes all his outputs; she can also adaptively expose all the secrets at arbitrary times. We provide tamper-evident schemes for all the variants and prove their optimality. Achieving the strongest tamper evidence turns out to be provably expensive. However, we define a somewhat weaker, but still practical, variant: α-synchronous tamper-evidence (α-te), and provide α-te schemes with logarithmic cost. Our α-te schemes use a combinatorial construction of α-separating sets, which might be of independent interest. We stress that our mechanisms are purely cryptographic: the tamper-detection algorithm Div is stateless and takes no inputs except the two signatures (in particular, it keeps no logs), we use no infrastructure (or other ways to conceal additional secrets), and we use no hardware properties (except those implied by the standard cryptographic assumptions, such as random number generators). Our constructions are based on arbitrary ordinary signature schemes and do not require random oracles.
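As an illustration only, the following Python stub sketches the shape of such a scheme's interface; the method names and types are assumptions rather than the paper's API, and no cryptography is implemented. The point it records is that Div is stateless and sees nothing but the two signatures.

```python
# Interface-level sketch of a tamper-evident signature scheme (illustrative only).
from dataclasses import dataclass

@dataclass
class Signature:
    message: bytes
    body: bytes          # ordinary signature from the underlying scheme
    evidence: bytes      # extra tag consulted only by div(), e.g. derived from
                         # an alpha-separating-set construction (assumed)

class TamperEvidentScheme:
    def keygen(self) -> tuple[bytes, bytes]:
        """Return (signing key, verification key) of the underlying scheme."""
        raise NotImplementedError

    def sign(self, sk: bytes, message: bytes) -> Signature:
        """Sign as usual, additionally embedding the tamper-evidence tag."""
        raise NotImplementedError

    def verify(self, vk: bytes, sig: Signature) -> bool:
        """Ordinary signature verification."""
        raise NotImplementedError

    def div(self, sig1: Signature, sig2: Signature) -> bool:
        """Stateless tamper detection: given two valid signatures (and nothing
        else, no logs or extra state), return True if the pair could not have
        been produced by a single legitimate signer."""
        raise NotImplementedError
```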
Abstract:
The proposed model, called the combinatorial and competitive spatio-temporal memory or CCSTM, provides an elegant solution to the general problem of having to store and recall spatio-temporal patterns in which states or sequences of states can recur in various contexts. For example, Fig. 1 shows two state sequences that have a common subsequence, C and D. The CCSTM assumes that any state has a distributed representation as a collection of features. Each feature has an associated competitive module (CM) containing K cells. On any given occurrence of a particular feature, A, exactly one of the cells in CM_A will be chosen to represent it. It is the particular set of cells active on the previous time step that determines which cells are chosen to represent instances of their associated features on the current time step. If we assume that typically S features are active in any state, then any state has K^S different neural representations. This huge space of possible neural representations of any state is what underlies the model's ability to store and recall numerous context-sensitive state sequences. The purpose of this paper is simply to describe this mechanism.
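A rough sketch of the cell-selection step follows, assuming (purely for illustration) that a hash of the previous time step's active cells picks one of the K cells in each active feature's competitive module; the CCSTM's actual selection rule is described in the paper itself.

```python
# Illustrative sketch: context-dependent cell selection in competitive modules.
import hashlib

K = 8  # cells per competitive module (assumed value)

def choose_cell(feature, prev_active_cells, k=K):
    """Pick one of the K cells in CM_feature as a function of the previous context."""
    context = feature + "|" + ",".join(sorted(prev_active_cells))
    digest = hashlib.sha256(context.encode()).digest()
    return int.from_bytes(digest[:4], "big") % k

def encode_state(active_features, prev_active_cells):
    """Return the context-dependent neural code of a state: one (feature, cell)
    pair per active feature, giving K^S possible codes for an S-feature state."""
    return {f"{f}:{choose_cell(f, prev_active_cells)}" for f in sorted(active_features)}

# The same feature C receives different cell-level codes in two different contexts.
code_after_ab = encode_state({"C"}, encode_state({"B"}, {"A:0"}))
code_after_xy = encode_state({"C"}, encode_state({"Y"}, {"X:0"}))
print(code_after_ab, code_after_xy)
```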
Abstract:
Constraint programming has emerged as a successful paradigm for modelling combinatorial problems arising from practical situations. In many of those situations, we are not provided with an immutable set of constraints. Instead, a user will modify his requirements, in an interactive fashion, until he is satisfied with a solution. Examples of such applications include, amongst others, model-based diagnosis, expert systems, and product configurators. The system the user interacts with must be able to assist him by showing the consequences of his requirements. Explanations are the ideal tool for providing this assistance. However, existing notions of explanations fail to provide sufficient information. We define new forms of explanations that aim to be more informative. Even though explanation generation is a very hard task, the applications we consider demand a satisfactory level of interactivity, and we therefore cannot afford long computation times. We introduce the concept of representative sets of relaxations, a compact set of relaxations that shows the user at least one way to satisfy each of his requirements and at least one way to relax them, and present an algorithm that efficiently computes such sets. We introduce the concept of most soluble relaxations, which maximise the number of products they allow. We present algorithms to compute such relaxations in times compatible with interactivity, achieving this by indifferently making use of different types of compiled representations. We propose to generalise the concept of prime implicates to constraint problems with the concept of domain consequences, and suggest generating them as a compilation strategy. This sets out a new approach to compilation, and allows explanation-related queries to be addressed efficiently. We define ordered automata to compactly represent large sets of domain consequences, in a way orthogonal to existing compilation techniques that represent large sets of solutions.
Abstract:
Much work has been done on learning from failure in search to boost solving of combinatorial problems, such as clause-learning and clause-weighting in boolean satisfiability (SAT), nogood and explanation-based learning, and constraint weighting in constraint satisfaction problems (CSPs). Many of the top solvers in SAT use clause learning to good effect. A similar approach (nogood learning) has not had as large an impact in CSPs. Constraint weighting is a less fine-grained approach where the information learnt gives an approximation as to which variables may be the sources of greatest contention. In this work we present two methods for learning from search using restarts, in order to identify these critical variables prior to solving. Both methods are based on the conflict-directed heuristic (weighted-degree heuristic) introduced by Boussemart et al. and are aimed at producing a better-informed version of the heuristic by gathering information through restarting and probing of the search space prior to solving, while minimizing the overhead of these restarts. We further examine the impact of different sampling strategies and different measurements of contention, and assess different restarting strategies for the heuristic. Finally, two applications for constraint weighting are considered in detail: dynamic constraint satisfaction problems and unary resource scheduling problems.
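A minimal sketch of the general idea follows: restart-based probing accumulates constraint weights before search, and a weighted-degree (dom/wdeg-style) ordering then prefers variables attached to highly weighted constraints. The csp interface used here is an assumed placeholder, not the authors' implementation.

```python
# Illustrative sketch: probing restarts to learn constraint weights, then a
# weighted-degree variable ordering. The `csp` object and its methods
# (random_probe, constraints_on, domain, unassigned) are assumed placeholders.
from collections import defaultdict

def probe_weights(csp, num_probes=20, fail_limit=100):
    """Run cheap randomized probes before the real search; every time a
    constraint causes a domain wipe-out, increment its weight."""
    weights = defaultdict(int)
    for _ in range(num_probes):
        failed_constraints = csp.random_probe(fail_limit)  # assumed: constraints that failed
        for constraint in failed_constraints:
            weights[constraint] += 1
    return weights

def next_variable(csp, weights):
    """Weighted-degree heuristic: pick the unassigned variable whose remaining
    domain is smallest relative to the total weight of its attached constraints."""
    def score(var):
        wdeg = sum(weights[c] for c in csp.constraints_on(var)) or 1
        return len(csp.domain(var)) / wdeg
    return min(csp.unassigned(), key=score)
```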
Abstract:
With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often referred to as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics which are characteristic of a specific design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: How to build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into this flow. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: Is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: Is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under the zero-delay and a non-zero-delay model. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for the global optimisation problem of locating a good approximation to the global optimum of a given function in a large search space. We used SA to probabilistically decide between moving from one optimised solution to another, such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints.
A good approximation to the globally optimal solution under the energy constraints is obtained. Uniform Cost Search (UCS) is a search algorithm used for traversing or searching a weighted tree, tree structure, or graph. We have used Uniform Cost Search to find, within the AIG network, a specific AIG node order for applying the reordering rules. After the reordering rules are applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network re-structuring, AIG node reordering, dynamic power and longest path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. Reductions of 23% in power and 15% in delay are achieved with minimal overhead, compared to the best known ABC results. Our approach is also implemented on a number of processors with combinational and sequential components, and significant savings are achieved.
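The annealing acceptance loop used in such a step might look roughly like the sketch below, where the move generator and the power/delay estimators stand in for the AIG reordering rules and the estimation machinery described above; none of it is the thesis's ABC-based implementation.

```python
# Illustrative sketch: simulated annealing that minimises dynamic power under a
# delay budget. `netlist.random_reorder_move()` and the estimator callbacks are
# assumed placeholders for the AIG reordering and estimation described above.
import math, random

def anneal(netlist, estimate_power, estimate_delay, delay_budget,
           t0=1.0, cooling=0.95, steps_per_temp=50, t_min=1e-3):
    best = current = netlist
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            candidate = current.random_reorder_move()     # one AIG reordering move
            if estimate_delay(candidate) > delay_budget:  # reject delay violations outright
                continue
            delta = estimate_power(candidate) - estimate_power(current)
            # accept improvements always, worsenings with Boltzmann probability
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if estimate_power(current) < estimate_power(best):
                    best = current
        t *= cooling                                      # geometric cooling schedule
    return best
```

Swapping the roles of the two estimators gives the dual, power-constrained delay optimisation mentioned above.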
Abstract:
Marine sponges have been an abundant source of new metabolites in recent years. The symbiotic association between the bacteria and the sponge has enabled scientists to access the bacterial diversity present within the bacterial/sponge ecosystem. This study has focussed on accessing the bacterial diversity in two Irish coastal marine sponges, namely Amphilectus fucorum and Eurypon major. A novel species from the genus Aquimarina has been isolated from the sponge Amphilectus fucorum. The study has also resulted in the identification of an α-Proteobacterium, Pseudovibrio sp., as a potential producer of antibiotics. Thus a targeted approach to specifically cultivate Pseudovibrio sp. may prove useful for the development of new metabolites from this particular genus. Bacterial isolates from the marine sponge Haliclona simulans were screened for anti-fungal activity and one isolate, namely Streptomyces sp. SM8, displayed activity against all five fungal strains tested. The strain was also tested for anti-bacterial activity and showed activity against both B. subtilis and P. aeruginosa. Hence a combinatorial approach involving both biochemical and genomic methods was employed in an attempt to identify the bioactive compounds with these activities which were being produced by this strain. Culture broths from Streptomyces sp. SM8 were extracted and purified by various techniques such as reverse-phase HPLC, MPLC and flash chromatography. Anti-bacterial activity was observed in a fraction which contained a hydroxylated saturated fatty acid and also another compound with an m/z of 227, but further structural elucidation of these compounds proved unsuccessful. The anti-fungal fractions from SM8 were shown to contain antimycin-like compounds, with some of these compounds having different retention times from that of an antimycin standard. A high-throughput assay was developed to screen for novel calcineurin inhibitors using yeast as a model system, and three putative bacterial extracts were found to be positive using this screen. One of these extracts, from SM8, was subsequently analysed using NMR and the calcineurin inhibition activity was confirmed to belong to a butenolide-type compound. A H. simulans metagenomic library was also screened using the novel calcineurin inhibitor high-throughput assay system and eight clones displaying putative calcineurin inhibitory activity were detected. The clone which displayed the best inhibitory activity was subsequently sequenced, and following the use of other genetic approaches it became clear that the inhibition was being caused by a hypothetical protein with similarity to a hypothetical Na+/Ca2+ exchanger protein. The Streptomyces sp. SM8 genome was sequenced from a fragment library using Roche 454 pyrosequencing technology to identify potential secondary metabolism clusters. The draft genome was annotated by IMG/ER using the Prodigal pipeline. The Whole Genome Shotgun project has been deposited at DDBJ/EMBL/GenBank under the accession AMPN00000000. The genome contains genes which appear to encode several polyketide synthases (PKS), non-ribosomal peptide synthetases (NRPS), terpene and siderophore biosynthesis, and ribosomal peptides. Transcriptional analyses led to the identification of three hybrid clusters, of which one is predicted to be involved in the synthesis of antimycin, while the functions of the others are as yet unknown.
Two NRPS clusters were also identified, of which one may be involved in gramicidin biosynthesis while the function of the other is unknown. A Streptomyces sp. SM8 NRPS antC gene knockout was constructed, and extracts from the strain were shown to possess a mild anti-fungal activity when compared to the SM8 wild-type. Subsequent LCMS analysis of antC mutant extracts confirmed the absence of antimycin in the extract, indicating that the observed anti-fungal activity may involve metabolite(s) other than antimycin. Anti-bacterial activity of the antC gene knockout strain against P. aeruginosa was reduced when compared to the SM8 wild-type, indicating that antimycin may be contributing to the observed anti-bacterial activity in addition to the metabolite(s) already identified during the chemical analyses. This is the first report of antimycins exhibiting anti-bacterial activity against P. aeruginosa. One of the hybrid clusters potentially involved in secondary metabolism in SM8, which displayed high and consistent levels of gene expression in RNA studies, was analysed in an attempt to identify the metabolite being produced by the pathway. A number of unusual features were observed following bioinformatics analysis of the gene sequence of the cluster, including a formylation domain within the NRPS cluster which may add a formyl group to the growing chain. Another unusual feature is the lack of AT domains on two of the PKS modules. Further unusual features observed in this cluster are the lack of a KR domain in module 3 of the cluster and an aminotransferase domain in module 4 for which no clear role has been hypothesised.
Abstract:
Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are ubiquitously used in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from an algorithmic as well as an architectural standpoint. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced as compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is shown to perform better than Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, based on which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but simple encoding schemes (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
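As a toy illustration of evaluation-based encoding, the sketch below encodes k message symbols as evaluations of the message polynomial at n distinct field points, so that any k of the n received values determine the polynomial. A prime field is used purely for simplicity; practical RS codecs (and this thesis) work over extension fields such as GF(2^m).

```python
# Minimal sketch of evaluation-based Reed-Solomon encoding over a prime field
# GF(P). The field choice and code parameters below are illustrative only.

P = 929  # prime field modulus (hypothetical choice)

def poly_eval(coeffs, x, p=P):
    """Evaluate the polynomial with the given coefficients at x over GF(p) (Horner)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def rs_encode(message, n, p=P):
    """Encode k message symbols (polynomial coefficients) as evaluations at the
    n distinct points 1..n of GF(p); this is the evaluation view of RS encoding."""
    if n > p - 1 or len(message) > n:
        raise ValueError("need k <= n <= p - 1")
    return [poly_eval(message, x, p) for x in range(1, n + 1)]

# Example: a (7, 3) RS codeword over GF(929).
print(rs_encode([17, 42, 5], n=7))
```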
Abstract:
Background: The role of Fas (CD95) and its ligand, Fas ligand (FasL/CD95L), is poorly understood in the intestine. Whilst Fas is best studied in terms of its function in apoptosis, recent studies suggest that Fas ligation may mediate additional, non-apoptotic functions such as inflammation. Toll-like receptors (TLRs) play an important role in mediating inflammation and homeostasis in the intestine. Recent studies have shown that a level of crosstalk exists between the Fas and TLR signalling pathways, but this has not yet been investigated in the intestine. Aim: The aim of this study was to evaluate potential cross-talk between TLRs and the Fas/FasL system in intestinal cancer cells. Results: Treatment with TLR4 and TLR5 ligands, but not ligands for TLR2 and TLR9, increased the expression of Fas and FasL in intestinal cancer cells in vitro. Consistent with this, expression of Fas and FasL was reduced in distal colon tissue from germ-free (GF), TLR4 knock-out (KO) and TLR5 KO mice but was unchanged in TLR2 KO tissue, suggesting that intestinal cancer cells display a degree of specificity in their ability to upregulate Fas and FasL expression in response to TLR ligation. Expression of both Fas and FasL was significantly reduced in TRIF KO tissue, indicating that signalling via TRIF by TLR4 and TLR5 agonists may be responsible for the induction of Fas and FasL expression in intestinal cancer cells. In addition, modulating Fas signalling using an agonistic anti-Fas antibody augmented TLR4- and TLR5-mediated tumour necrosis factor alpha (TNFα) and interleukin-8 (IL-8) production by intestinal cancer cells, suggesting that crosstalk occurs between these receptors in these cells. Furthermore, suppression of Fas in intestinal cancer cells reduced the ability of the intestinal pathogens Salmonella typhimurium and Listeria monocytogenes to induce the expression of IL-8, suggesting that Fas signalling may play a role in intestinal host defence against pathogens. Inflammation is known to be important in colon tumourigenesis, and Fas signalling on intestinal cancer cells has been shown to result in the production of inflammatory mediators. Fas-mediated signalling may therefore play a role in colon cancer development. Suppression of tumour-derived Fas by 85% led to a reduction in tumour volume and changes in tumour-infiltrating macrophages and neutrophils. TLR4 signalling has been shown to play a role in colon cancer via the recruitment and activation of alternatively activated immune cells. Given the crosstalk seen between Fas and TLR4 signalling in intestinal cancer cells in vitro, suppressing Fas signalling may enhance the efficacy of TLR4 antagonism in vivo. TLR4 antagonism resulted in smaller tumours with fewer infiltrating neutrophils. Whilst Fas downregulation did not significantly augment the ability of TLR4 antagonism to reduce the final tumour volume, Fas suppression may augment the anti-tumour effects of TLR4 antagonism, as neutrophil infiltration was further reduced upon combinatorial treatment. Conclusion: Together, this study demonstrates evidence of a new role for Fas in the intestinal immune response and shows that manipulating Fas signalling has potential anti-tumour benefit.
Abstract:
Recent genomic analyses suggest the importance of combinatorial regulation by broadly expressed transcription factors rather than expression domains characterized by highly specific factors.
Abstract:
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.
This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.
On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital print service provider (PSP), to evaluate our optimization algorithms.
In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
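For illustration only, here is a sketch of a permutation-encoded genetic algorithm for an order dispatching sequence, seeded from a previous population so that scheduling can proceed incrementally as new work arrives; the greedy cost model, operators, and parameters are assumptions, not RPI's production model or the thesis's IGA.

```python
# Illustrative sketch: permutation GA for an order dispatching sequence.
import random

def evaluate(sequence, proc_times, num_machines):
    """Greedy list scheduling: dispatch orders in the given sequence to the
    earliest-free machine; return the makespan (lower is better)."""
    machines = [0.0] * num_machines
    for order in sequence:
        i = machines.index(min(machines))
        machines[i] += proc_times[order]
    return max(machines)

def crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    middle = p1[a:b]
    rest = [o for o in p2 if o not in middle]
    return rest[:a] + middle + rest[a:]

def iga(orders, proc_times, num_machines, seed_population=None,
        pop_size=40, generations=200, mutation_rate=0.2):
    population = list(seed_population or [])     # reuse solutions from a previous run
    while len(population) < pop_size:            # top up with random permutations
        population.append(random.sample(orders, len(orders)))
    for _ in range(generations):
        population.sort(key=lambda s: evaluate(s, proc_times, num_machines))
        parents = population[:pop_size // 2]     # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            child = crossover(*random.sample(parents, 2))
            if random.random() < mutation_rate:  # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = parents + children
    return min(population, key=lambda s: evaluate(s, proc_times, num_machines))

best = iga(orders=list(range(6)), proc_times=[3, 5, 2, 7, 4, 6], num_machines=2)
print(best)
```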
We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution time and process-status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also perform a probabilistic estimation of the predicted status. An order generally consists of multiple series and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce the enterprise's late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis,
and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
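A small sketch of the decompose-then-predict idea follows: split the series into trend, periodic, and residual components, forecast each with a simple univariate model, and aggregate the component forecasts. The drift, seasonal-naive, and mean models here are illustrative stand-ins for the statistical and machine-learning models used in the thesis.

```python
# Illustrative sketch: decompose a series into components, forecast each
# component separately, and aggregate the component forecasts.
import numpy as np

def decompose(y, period):
    trend = np.convolve(y, np.ones(period) / period, mode="same")  # moving-average trend
    detrended = y - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, len(y) // period + 1)[: len(y)]
    residual = y - trend - seasonal
    return trend, seasonal, residual

def forecast(y, period, horizon):
    trend, seasonal, residual = decompose(np.asarray(y, dtype=float), period)
    slope = (trend[-1] - trend[0]) / (len(trend) - 1)              # drift model for trend
    trend_fc = trend[-1] + slope * np.arange(1, horizon + 1)
    seasonal_fc = np.array([seasonal[(len(y) + h) % period] for h in range(horizon)])
    residual_fc = np.full(horizon, residual.mean())                # mean model for residual
    return trend_fc + seasonal_fc + residual_fc                    # aggregate components

print(forecast([10, 12, 14, 11, 13, 15, 12, 14, 16], period=3, horizon=3))
```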
In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, accomplish more automated procedures, and obtain data-driven recommendations for effective decisions.
Abstract:
While blockade of the cytotoxic T-lymphocyte antigen-4 (CTLA-4) T cell regulatory receptor has become a commonly utilized strategy in the management of advanced melanoma, many questions remain regarding the use of this agent in patient populations with autoimmune disease. We present a case involving the treatment of a patient with stage IV melanoma and ulcerative colitis (UC) with anti-CTLA-4 antibody immunotherapy. Upon initial treatment, the patient developed grade III colitis requiring tumor necrosis factor-alpha (TNF-α) blocking antibody therapy; however, re-treatment with anti-CTLA-4 antibody following a total colectomy resulted in a rapid complete response, accompanied by the development of a tracheobronchitis, a previously described extra-intestinal manifestation of UC. This case contributes to the evolving literature on the use of checkpoint inhibitors in patients also suffering from autoimmune disease, supports future clinical trials investigating the use of these agents in patients with autoimmune diseases, and suggests that an understanding of the specific molecular pathways involved in a patient's autoimmune pathology may provide insight into the development of more effective novel combinatorial immunotherapeutic strategies.