932 results for Convex combination
Abstract:
Aberrant behavior of biological signaling pathways has been implicated in diseases such as cancer. Therapies have been developed to target proteins in these networks in the hope of curing the illness or bringing about remission. However, identifying targets for drug inhibition that exhibit a good therapeutic index has proven challenging, since signaling pathways have a large number of components and many interconnections such as feedback, crosstalk, and divergence. Unfortunately, characteristics of these pathways such as redundancy, feedback, and drug resistance reduce the efficacy of single-target therapy and necessitate using more than one drug to target multiple nodes in the system. Choosing multiple targets with a high therapeutic index poses further challenges, since the combinatorial search space can be huge. To cope with the complexity of these systems, computational tools such as ordinary differential equations have been used to successfully model some of these pathways. Regrettably, building these models requires experimentally measured initial concentrations of the components and reaction rates, which are difficult to obtain and, for very large networks, may simply be unavailable. Fortunately, other modeling tools exist that, though not as powerful as ordinary differential equations, do not need rates and initial conditions to model signaling pathways; Petri nets and graph theory are among them. In this thesis, we introduce a methodology based on Petri net siphon analysis and graph network centrality measures for identifying prospective targets for single and multiple drug therapies. In this methodology, potential targets are first identified in the Petri net model of a signaling pathway using siphon analysis. Graph-theoretic centrality measures are then employed to prioritize the candidate targets. An algorithm is also developed to check whether the candidate targets can disable the intended outputs in the graph model of the system. We implement structural and dynamical models of ErbB1-Ras-MAPK pathways and use them to assess and evaluate this methodology. The identified drug targets, single and multiple, correspond to clinically relevant drugs. Overall, the results suggest that this methodology, using siphons and centrality measures, shows promise in identifying and ranking drug targets. Since it only uses the structural information of the signaling pathways and does not need initial conditions and dynamical rates, it can be utilized in larger networks.
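As an illustration of the prioritization step described above, the following minimal sketch (not the thesis's implementation) ranks hypothetical candidate targets in a toy pathway graph by a centrality measure; the graph, node names, candidate list, and the use of networkx are all assumptions made for the example.

# Minimal sketch (not the thesis's code): ranking hypothetical candidate
# targets in a toy signaling graph by centrality, assuming networkx.
import networkx as nx

# Toy directed pathway graph; node names are illustrative only.
G = nx.DiGraph()
G.add_edges_from([
    ("EGF", "ErbB1"), ("ErbB1", "Ras"), ("Ras", "Raf"),
    ("Raf", "MEK"), ("MEK", "ERK"), ("ERK", "ErbB1"),  # feedback edge
])

# Hypothetical candidates, e.g. produced by a prior siphon analysis step.
candidates = ["ErbB1", "Ras", "MEK"]

# Prioritize candidates by betweenness centrality (other measures work too).
centrality = nx.betweenness_centrality(G)
ranked = sorted(candidates, key=lambda n: centrality[n], reverse=True)
print(ranked)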
Abstract:
The convex hull describes the extent or shape of a set of data and is used ubiquitously in computational geometry. Common algorithms to construct the convex hull of a finite set of n points (x, y) range from O(n log n) time to O(n) time. However, a heuristic procedure is often applied first to reduce the original set of n points to a set of s < n points that contains the hull, thereby accelerating the final hull-finding step. We present an algorithm to precondition data before building a 2D convex hull with integer coordinates, with three distinct advantages. First, for all practical purposes, it is linear; second, no explicit sorting of the data is required; and third, the reduced set of s points forms an ordered set that can be piped directly into an O(n) time convex hull algorithm. Under these criteria, a fast (O(n)) preconditioner in principle yields a fast (approximately O(n)) convex hull for an arbitrary set of points. The paper empirically evaluates and quantifies the acceleration the method provides over the most common convex hull algorithms. Experiments on a dataset show an additional speed-up of at least four times compared with existing preconditioning methods.
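The paper's own preconditioner is not reproduced here; purely as an illustration of the general idea of discarding interior points before the hull step, the sketch below uses the classic quadrilateral-discard (Akl-Toussaint style) heuristic in Python. Unlike the method in the abstract, it does not produce an ordered output set.

# Illustrative only: a simple Akl-Toussaint style discard step, not the
# paper's preconditioner. Points strictly inside the quadrilateral spanned
# by the extreme points in x and y cannot be hull vertices and are dropped.
def precondition(points):
    xmin = min(points, key=lambda p: p[0])
    xmax = max(points, key=lambda p: p[0])
    ymin = min(points, key=lambda p: p[1])
    ymax = max(points, key=lambda p: p[1])
    quad = [xmin, ymin, xmax, ymax]  # counter-clockwise quadrilateral

    def inside(p):
        # p is strictly inside if it lies strictly left of every directed edge.
        for a, b in zip(quad, quad[1:] + quad[:1]):
            cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
            if cross <= 0:
                return False
        return True

    return [p for p in points if not inside(p)]

pts = [(0, 5), (5, 0), (10, 5), (5, 10), (5, 5), (4, 6), (9, 5), (1, 5)]
print(precondition(pts))  # interior points such as (5, 5) are removed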
Abstract:
Honey is a high-value food commodity with recognized nutraceutical properties. A primary driver of the value of honey is its floral origin. The feasibility of applying multivariate data analysis to various chemical parameters for the discrimination of honeys was explored. This approach was applied to four authentic honeys with different floral origins (rata, kamahi, clover and manuka) obtained from producers in New Zealand. Results from elemental profiling, stable isotope analysis, metabolomics (UPLC-QToF MS), and NIR, FT-IR, and Raman spectroscopic fingerprinting were analyzed. Orthogonal partial least squares discriminant analysis (OPLS-DA) was used to determine which technique or combination of techniques provided the best classification and prediction abilities. Good prediction values were achieved using metabolite data (for all four honeys, Q2 = 0.52; for manuka and clover, Q2 = 0.76) and the trace element/isotopic data (for manuka and clover, Q2 = 0.65), while the other chemical parameters showed promise when combined (for manuka and clover, Q2 = 0.43).
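As a rough illustration of the discriminant-analysis step, the sketch below runs a plain PLS-DA in scikit-learn on synthetic data; OPLS-DA itself is not available in scikit-learn, and the sample sizes, feature counts, and class structure here are invented for the example.

# Illustrative stand-in only: plain PLS-DA on synthetic data (OPLS-DA is not
# part of scikit-learn); all data below is made up for the example.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
# 40 hypothetical honey samples x 20 chemical features, two classes.
X = rng.normal(size=(40, 20))
y = np.repeat([0, 1], 20)
X[y == 1, :3] += 1.5                                  # give class 1 a shifted signature

Y = np.column_stack([y == 0, y == 1]).astype(float)   # one-hot targets for PLS-DA
pls = PLSRegression(n_components=2)
Y_cv = cross_val_predict(pls, X, Y, cv=5)             # cross-validated predictions
pred = Y_cv.argmax(axis=1)
print("CV accuracy:", (pred == y).mean())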
Abstract:
Background: Lumacaftor/ivacaftor combination therapy demonstrated clinical benefits in patients with cystic fibrosis homozygous for the Phe508del CFTR mutation. Pretreatment lung function is a confounding factor that potentially impacts the efficacy and safety of lumacaftor/ivacaftor therapy. Methods: Two multinational, randomised, double-blind, placebo-controlled, parallel-group Phase 3 studies randomised patients to receive placebo or lumacaftor (600 mg once daily [qd] or 400 mg every 12 hours [q12h]) in combination with ivacaftor (250 mg q12h) for 24 weeks. Prespecified analyses of pooled efficacy and safety data by lung function, as measured by percent predicted forced expiratory volume in 1 second (ppFEV1), were performed for patients with baseline ppFEV1 <40 (n=81) and ≥40 (n=1016) and screening ppFEV1 <70 (n=730) and ≥70 (n=342). These studies were registered with ClinicalTrials.gov (NCT01807923 and NCT01807949). Findings: The studies were conducted from April 2013 through April 2014. Improvements in the primary endpoint, absolute change from baseline at week 24 in ppFEV1, were observed with both lumacaftor/ivacaftor doses in the subgroup with baseline ppFEV1 <40 (least-squares mean difference versus placebo was 3.7 and 3.3 percentage points for lumacaftor 600 mg qd/ivacaftor 250 mg q12h and lumacaftor 400 mg q12h/ivacaftor 250 mg q12h, respectively [p<0.05]) and in the subgroup with baseline ppFEV1 ≥40 (3.3 and 2.8 percentage points, respectively [p<0.001]). Similar absolute improvements versus placebo in ppFEV1 were observed in the subgroups with screening ppFEV1 <70 (3.3 and 3.3 percentage points for lumacaftor 600 mg qd/ivacaftor 250 mg q12h and lumacaftor 400 mg q12h/ivacaftor 250 mg q12h, respectively [p<0.001]) and ≥70 (3.3 and 1.9 percentage points, respectively [p=0.002 and p=0.079]). Increases in BMI and reductions in the number of pulmonary exacerbation events were observed in both LUM/IVA dose groups versus placebo across all lung function subgroups. Treatment was generally well tolerated, although the incidence of some respiratory adverse events was higher with active treatment than with placebo. Interpretation: Lumacaftor/ivacaftor combination therapy benefits patients homozygous for Phe508del CFTR who have varying degrees of lung function impairment. Funding: Vertex Pharmaceuticals Incorporated.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This work presents a tool to support authentication studies of paintings attributed to the modernist Portuguese artist Amadeo de Souza-Cardoso (1887-1918). The strategy adopted was to quantify and combine information extracted from brushstroke analysis with information on the pigments present in the paintings. The brushstroke analysis was performed by combining Gabor filters and the Scale Invariant Feature Transform. Hyperspectral imaging and elemental analysis were used to compare the materials in the painting with those in a database of oil paint tubes used by the artist. The outputs of the tool are a quantitative indicator of authenticity and a mapping image that indicates the areas, if any, where materials not coherent with Amadeo's palette were detected. This output is a simple and effective way of assessing the results of the system. The method was tested on twelve paintings, obtaining promising results.
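As a loose illustration of combining Gabor filtering with SIFT for brushstroke texture (not the authors' pipeline), the following sketch assumes opencv-python (4.4 or later, so SIFT is available) and a hypothetical image file, falling back to synthetic data so it still runs.

# Minimal sketch, not the authors' pipeline: Gabor filter bank followed by
# SIFT keypoint extraction on the filtered image.
import cv2
import numpy as np

img = cv2.imread("painting_detail.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if img is None:
    # Fall back to a synthetic texture so the sketch remains runnable.
    img = np.random.default_rng(0).uniform(0, 255, (256, 256)).astype(np.uint8)

# Bank of Gabor filters at several orientations to emphasize brushstroke texture.
responses = []
for theta in np.arange(0, np.pi, np.pi / 4):
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5)
    responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))
gabor = np.max(responses, axis=0)                    # strongest response per pixel
gabor = cv2.normalize(gabor, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# SIFT keypoints/descriptors computed on the Gabor-enhanced image.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gabor, None)
print(len(keypoints), "keypoints")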
Abstract:
Targeted cancer therapy aims to disrupt aberrant cellular signalling pathways. Biomarkers are surrogates of pathway state, but there has been limited success in translating candidate biomarkers to clinical practice due to the intrinsic complexity of pathway networks. Systems biology approaches afford a better understanding of the complex, dynamical interactions in signalling pathways targeted by anticancer drugs. However, adoption of dynamical modelling by clinicians and biologists is impeded by model inaccessibility. Drawing on computer games technology, we present a novel visualisation toolkit, SiViT, that converts systems biology models of cancer cell signalling into interactive simulations that can be used without specialist computational expertise. SiViT allows clinicians and biologists to directly introduce, for example, loss-of-function mutations and specific inhibitors. SiViT animates the effects of these introductions on pathway dynamics, suggesting further experiments and assessing candidate biomarker effectiveness. In a systems biology model of Her2 signalling we experimentally validated predictions made using SiViT, revealing the dynamics of biomarkers of drug resistance and highlighting the role of pathway crosstalk. No model is ever complete: the iteration of real data and simulation facilitates the continued evolution of more accurate, useful models. SiViT will make libraries of models accessible to support preclinical research, combinatorial strategy design and biomarker discovery.
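SiViT itself is a visualisation toolkit rather than a model library; purely as a toy illustration of the kind of pathway simulation it animates, the sketch below integrates a made-up two-species cascade and shows how a hypothetical inhibitor parameter lowers the downstream signal.

# Toy illustration only (made-up species and rates, not a SiViT model):
# a two-step cascade where an inhibitor parameter scales the receptor input.
import numpy as np
from scipy.integrate import solve_ivp

def cascade(t, y, inhibitor):
    receptor, erk = y
    k_act, k_deact = 1.0, 0.5
    d_receptor = (1 - inhibitor) * 1.0 - 0.3 * receptor   # drug lowers the input
    d_erk = k_act * receptor - k_deact * erk
    return [d_receptor, d_erk]

t_eval = np.linspace(0, 20, 100)
for inhibitor in (0.0, 0.8):          # without and with the hypothetical drug
    sol = solve_ivp(cascade, (0, 20), [0.0, 0.0], t_eval=t_eval, args=(inhibitor,))
    print(f"inhibitor={inhibitor}: final ERK level = {sol.y[1, -1]:.2f}")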
Abstract:
This paper considers identification of treatment effects when the outcome variables and covariates are not observed in the same data sets. Ecological inference models, where aggregate outcome information is combined with individual demographic information, are a common example of these situations. In this context, the counterfactual distributions and the treatment effects are not point identified. However, recent results provide bounds that partially identify causal effects. Unlike previous works, this paper adopts the selection-on-unobservables assumption, which means that randomization of treatment assignments is not achieved until time-fixed unobserved heterogeneity is controlled for. Panel data models linear in the unobserved components are considered to achieve identification. To assess the performance of these bounds, this paper provides a simulation exercise.
Abstract:
Cancer is a problem of global importance, since its incidence is increasing worldwide and therapeutic options are generally limited. It is therefore imperative to find new therapeutic targets as well as new molecules with therapeutic potential against tumors. Flavonoids are polyphenolic compounds that may be potential therapeutic agents. Several studies have shown that these compounds have high anticancer potential. Among the flavonoids in the human diet, quercetin is one of the most important. In the last decades, several anticancer properties of quercetin have been described, such as effects on cell signaling, pro-apoptotic, anti-proliferative and anti-oxidant activity, and growth suppression. It is now well known that quercetin has diverse biological effects, inhibiting multiple enzymes involved in cell proliferation as well as in signal transduction pathways. There are also studies reporting potential synergistic effects when quercetin is combined with chemotherapeutic agents or radiotherapy. Several studies exploring the anticancer potential of these combined treatments have already been published, the majority with promising results. It is well known that quercetin can act as a chemosensitizer and radiosensitizer, but also as a chemoprotective and radioprotective agent, protecting normal cells from the side effects of chemotherapy and radiotherapy, which provides notable advantages for its use in anticancer treatment. Thus, all these data indicate that quercetin may have a key role in anticancer treatment. In this context, this review focuses on the relationship between flavonoids and cancer, with special emphasis on the role of quercetin.
Abstract:
In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data or errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows inference over large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied to a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
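As a small illustration of why the hinge-loss formulation keeps inference convex (this is a toy sketch, not the dissertation's implementation; the rules, weights and solver below are invented for the example):

# Tiny sketch: MAP inference in a hinge-loss Markov random field as a convex
# problem over truth values y in [0,1]^n, minimizing
# sum_r w_r * max(0, a_r . y + b_r) by projected subgradient descent.
import numpy as np

# Hypothetical ground rules: each row (a_r, b_r, w_r) encodes one potential.
A = np.array([[ 1.0, -1.0,  0.0],    # e.g. "y0 implies y1"
              [ 0.0,  1.0, -1.0],    # e.g. "y1 implies y2"
              [-1.0,  0.0,  0.0]])   # prior pushing y0 toward at least 0.8
b = np.array([0.0, 0.0, 0.8])
w = np.array([1.0, 1.0, 2.0])

y = np.full(3, 0.5)                  # start at maximum uncertainty
step = 0.05
for _ in range(500):
    active = (A @ y + b) > 0         # potentials currently incurring loss
    grad = (w[active][:, None] * A[active]).sum(axis=0)
    y = np.clip(y - step * grad, 0.0, 1.0)   # project back onto [0,1]

print(np.round(y, 2))                # settles close to [0.8, 0.8, 0.8]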
Abstract:
Evolutionary algorithms alone cannot solve optimization problems very efficiently, since these algorithms involve many random (not very rational) decisions. Combining evolutionary algorithms with other techniques has proven to be an efficient optimization methodology. In this talk, I will explain the basic ideas of our three algorithms along this line: (1) the orthogonal genetic algorithm, which treats crossover/mutation as an experimental design problem; (2) the multiobjective evolutionary algorithm based on decomposition (MOEA/D), which uses decomposition techniques from traditional mathematical programming in a multiobjective evolutionary algorithm; and (3) the regularity model-based multiobjective estimation of distribution algorithm (RM-MEDA), which uses the regularity property and machine learning methods to improve multiobjective evolutionary algorithms.
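As a minimal illustration of the decomposition idea behind MOEA/D (not the full algorithm), the sketch below splits a toy bi-objective problem into scalar subproblems via weighted Tchebycheff aggregation and solves each with naive random search; the test problem, weights and ideal point are chosen only for the example.

# Minimal sketch of the decomposition idea in MOEA/D (not the full algorithm):
# a bi-objective problem is turned into scalar subproblems by the weighted
# Tchebycheff approach, each solved here by naive random search.
import numpy as np

def objectives(x):
    # Simple bi-objective test problem on x in [0, 1].
    return np.array([x**2, (x - 1.0)**2])

def tchebycheff(f, weight, ideal):
    return np.max(weight * np.abs(f - ideal))

rng = np.random.default_rng(0)
ideal = np.array([0.0, 0.0])                     # ideal point for both objectives
weights = [np.array([w, 1.0 - w]) for w in np.linspace(0.05, 0.95, 5)]

for weight in weights:
    # Naive stand-in for MOEA/D's evolutionary search on each subproblem.
    candidates = rng.uniform(0.0, 1.0, size=200)
    best = min(candidates, key=lambda x: tchebycheff(objectives(x), weight, ideal))
    print(f"weight={weight}, x*={best:.2f}, f={np.round(objectives(best), 3)}")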
Abstract:
Most approaches to stereo visual odometry reconstruct the motion based on the tracking of point features along a sequence of images. However, in low-textured scenes it is often difficult to find a large set of point features, or they may not be well distributed over the image, so that the behavior of these algorithms deteriorates. This paper proposes a probabilistic approach to stereo visual odometry based on the combination of both point and line segment features that works robustly in a wide variety of scenarios. The camera motion is recovered through non-linear minimization of the projection errors of both point and line segment features. In order to effectively combine the two types of features, their associated errors are weighted according to their covariance matrices, computed from the propagation of Gaussian distribution errors in the sensor measurements. The method is, of course, computationally more expensive than using only one type of feature, but it can still run in real time on a standard computer and provides interesting advantages, including straightforward integration into any probabilistic framework commonly employed in mobile robotics.
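As a small illustration of the covariance-weighting idea only (not the paper's implementation), the sketch below combines residuals from two feature types into one cost, each weighted by the inverse of an assumed covariance matrix; all residual and covariance values are invented for the example.

# Small sketch of the weighting idea: residuals from point and line-segment
# features are combined into one cost, each weighted by the inverse of its
# covariance matrix (a sum of squared Mahalanobis norms).
import numpy as np

def weighted_cost(residuals, covariances):
    # r^T * Sigma^{-1} * r summed over the features of one type.
    return sum(r @ np.linalg.inv(S) @ r for r, S in zip(residuals, covariances))

# Hypothetical 2D reprojection residuals for a candidate camera motion.
point_residuals = [np.array([0.8, -0.3]), np.array([0.1, 0.5])]
point_covs = [np.diag([0.5, 0.5])] * 2          # fairly certain point features

line_residuals = [np.array([1.2, 0.4])]         # e.g. endpoint-to-line errors
line_covs = [np.diag([2.0, 2.0])]               # noisier line measurement

cost = (weighted_cost(point_residuals, point_covs)
        + weighted_cost(line_residuals, line_covs))
print(f"combined cost: {cost:.3f}")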