935 results for ElGamal, CZK, Multiple discrete logarithm assumption, Extended linear algebra


Relevance: 30.00%

Abstract:

A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC_sum and GC_max, are enough to solve all ‖F_P‖_q minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q minimization problems converge to a solution of the ‖F_P‖_∞ minimization problem (the fact that ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q is not, by itself, enough to deduce that). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. It concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
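
A minimal numerical sketch (my own illustration, not the paper's implementation) of the energies involved: the ℓ_q boundary energy, the observation that minimizing ‖F_P‖_q with weights w is equivalent to minimizing the ℓ_1 energy with weights w^q, and the convergence of ‖F_P‖_q towards the max-weight energy used by GC_max as q grows. The weight values are made up for illustration.

```python
import numpy as np

def boundary_energy(weights_on_boundary, q):
    """l_q energy of the map F_P: e -> w(e) over the boundary edges of object P."""
    w = np.asarray(weights_on_boundary, dtype=float)
    if np.isinf(q):
        return w.max()                        # GC_max energy: ||F_P||_inf
    return (w ** q).sum() ** (1.0 / q)        # ||F_P||_q

# Illustrative weights of the edges crossing the boundary of some segmentation P
w = np.array([0.2, 0.5, 0.9, 0.9])

# Minimizing ||F_P||_q is equivalent to minimizing the plain sum-of-weights
# (||.||_1) energy after replacing w by w**q, since x -> x**(1/q) is monotone:
q = 3.0
assert np.isclose(boundary_energy(w, q) ** q, boundary_energy(w ** q, 1))

# As q grows, ||F_P||_q approaches the max-weight energy used by GC_max:
for q in (1, 2, 8, 32, np.inf):
    print(q, boundary_energy(w, q))
```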

Relevance: 30.00%

Abstract:

Background: Cellulose, consisting of arrays of linear beta-1,4-linked glucans, is the most abundant carbon-containing polymer present in biomass. Recalcitrance of crystalline cellulose towards enzymatic degradation is widely reported and is the result of intra- and inter-molecular hydrogen bonds within and among the linear glucans. Cellobiohydrolases are enzymes that attack crystalline cellulose. Here we report on two forms of glycosyl hydrolase family 7 cellobiohydrolases common to all aspergilli that attack Avicel, cotton cellulose and other forms of crystalline cellulose. Results: Cellobiohydrolases Cbh1 and CelD have similar catalytic domains, but only Cbh1 contains a carbohydrate-binding domain (CBD) that binds to cellulose. Structural superpositioning of Cbh1 and CelD on the Talaromyces emersonii Cel7A three-dimensional structure identifies the typical tunnel-like catalytic active site, while Cbh1 shows an additional loop that partially obstructs the substrate-fitting channel. CelD does not have a CBD and shows a four-amino-acid-residue deletion on the tunnel-obstructing loop, providing a continuous opening in the absence of a CBD. Cbh1 and CelD are catalytically functional; while their specific activities against Avicel are 7.7 and 0.5 U mg prot-1, respectively, their specific activities on pNPC are virtually identical. Cbh1 is slightly more stable to thermal inactivation than CelD and is much less sensitive to glucose inhibition, suggesting that an open tunnel configuration, or the absence of a CBD, alters the way the catalytic domain interacts with the substrate. Mixtures of the Cbh1 and CelD enzymes on crystalline cellulosic substrates show a strong combinatorial effect when Cbh1 is present in 2:1 or 4:1 molar excess. When CelD was overrepresented, the combinatorial effect was only partially achieved. CelD appears to bind and hydrolyze only loose cellulosic chains, while Cbh1 is capable of opening new cellulosic substrate molecules away from the cellulosic fiber. Conclusion: Cellobiohydrolases both with and without a CBD occur in most fungal genomes where both enzymes are secreted, and likely participate in cellulose degradation. The fact that only Cbh1 binds to the substrate, and that in combination with CelD it exhibits strong synergy only when Cbh1 is present in excess, suggests that Cbh1 frees enough chains from the cellulose fibers to enable processive access by CelD.

Relevance: 30.00%

Abstract:

In this paper, a novel method for power quality signal decomposition is proposed based on Independent Component Analysis (ICA). This method aims to decompose the power system signal (voltage or current) into components that can provide more specific information about the different disturbances occurring simultaneously during a multiple-disturbance situation. ICA is originally a multichannel technique; however, the method proposes its use to blindly separate the disturbances present in a single measured signal (single channel). Therefore, a preprocessing step based on a filter bank is proposed before applying ICA. The proposed method was applied to synthetic data, simulated data, as well as actual power system signals, showing very good performance. A comparison with the decomposition provided by the Discrete Wavelet Transform shows that the proposed method presented better decoupling for the analyzed data. (C) 2012 Elsevier Ltd. All rights reserved.
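
A minimal sketch of the idea (my own illustration under assumed band edges and filter design, not the paper's implementation): a filter bank turns the single measured signal into a pseudo multichannel observation, which is then handed to ICA.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

def filterbank(x, fs, bands):
    """Split one measured signal into band-limited 'channels' (one per band)."""
    channels = []
    for lo, hi in bands:
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        channels.append(filtfilt(b, a, x))
    return np.column_stack(channels)           # shape: (n_samples, n_bands)

# Synthetic example: a 60 Hz component plus a high-frequency disturbance
fs = 3840.0
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 900 * t) * (t > 0.25)

# Hypothetical band edges in Hz; the paper's filter-bank design is not given here
bands = [(30, 120), (120, 500), (500, 1500)]
X = filterbank(x, fs, bands)

# ICA treats the filter-bank outputs as a pseudo multichannel observation
ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(X)               # estimated independent components
print(components.shape)
```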

Relevance: 30.00%

Abstract:

The beta-Birnbaum-Saunders (Cordeiro and Lemonte, 2011) and Birnbaum-Saunders (Birnbaum and Saunders, 1969a) distributions have been used quite effectively to model failure times of materials subject to fatigue and lifetime data. We define the log-beta-Birnbaum-Saunders distribution as the distribution of the logarithm of a beta-Birnbaum-Saunders random variable. Explicit expressions for its generating function and moments are derived. We propose a new log-beta-Birnbaum-Saunders regression model that can be applied to censored data and be used more effectively in survival analysis. We obtain the maximum likelihood estimates of the model parameters for censored data and investigate influence diagnostics. The new location-scale regression model is modified to allow for the possibility that long-term survivors may be present in the data. Its usefulness is illustrated by means of two real data sets. (C) 2011 Elsevier B.V. All rights reserved.
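
For orientation, a sketch of the construction in standard notation (the beta-generated family commonly used for such distributions; the parameterization is assumed here, not quoted from the paper): with Birnbaum-Saunders shape α and scale β, beta parameters a and b, Φ the standard normal CDF and I_x(a,b) the regularized incomplete beta function,

```latex
% Birnbaum-Saunders CDF, beta-Birnbaum-Saunders CDF, and the log-transformed variable
F_T(t) = \Phi\!\left(\frac{1}{\alpha}\left[\sqrt{t/\beta} - \sqrt{\beta/t}\,\right]\right), \quad t > 0,
\qquad
F_{\mathrm{BBS}}(t) = I_{F_T(t)}(a, b),
\qquad
Y = \log T .
```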

Relevance: 30.00%

Abstract:

We report self-similar properties of periodic structures remarkably organized in the two-parameter space of a two-gene system described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system by using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows known as Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both types of periodic windows. We also identify Fibonacci-type series and the golden ratio for the Arnold tongues, and period multiple-of-three windows for the shrimps. (C) 2012 Elsevier B.V. All rights reserved.
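
A minimal sketch of how such two-parameter Lyapunov diagrams are typically produced (the gene-regulation map is not given in this abstract, so the well-known Hénon map stands in for it here): iterate the map and its Jacobian along a tangent vector and colour the parameter plane by the largest Lyapunov exponent; negative values mark periodic windows (tongues, shrimps), positive values mark chaos.

```python
import numpy as np

def step(x, y, a, b):
    """Stand-in 2D map (Hénon); the paper's gene-expression map is not reproduced here."""
    return 1.0 - a * x * x + y, b * x

def jacobian(x, y, a, b):
    return np.array([[-2.0 * a * x, 1.0],
                     [b,            0.0]])

def largest_lyapunov(a, b, n_transient=300, n_iter=1500):
    x, y = 0.1, 0.1
    v = np.array([1.0, 0.0])
    total = 0.0
    for i in range(n_transient + n_iter):
        v = jacobian(x, y, a, b) @ v
        x, y = step(x, y, a, b)
        if not np.isfinite(x) or abs(x) > 1e6:   # orbit escaped to infinity
            return np.nan
        norm = np.linalg.norm(v)
        v /= norm
        if i >= n_transient:
            total += np.log(norm)
    return total / n_iter

# Coarse scan of the (a, b) parameter plane, in the spirit of a Lyapunov/isoperiodic diagram
A, B = np.linspace(1.0, 1.4, 40), np.linspace(0.2, 0.3, 40)
diagram = np.array([[largest_lyapunov(a, b) for a in A] for b in B])
print(np.nanmin(diagram), np.nanmax(diagram))
```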

Relevance: 30.00%

Abstract:

The work of hospital food service is characterized by demands that can be associated with work ability (WA). The aim of this study was to evaluate factors associated with WA among hospital food service professionals and to recommend intervention measures. This is a cross-sectional study carried out in 2009 in a hospital in Sao Paulo, Brazil. Participants were 76 (96.2%) of the eligible workers. They filled out a questionnaire covering socio-demographic data, lifestyle, working conditions and WA. Multivariate linear regression analyses were performed. Factors associated with WA were age (p=0.051), overcommitment (p=0.011), effort-reward ratio (p=0.002) and work injuries (p<0.001). Even though this was a young population, age was associated with WA. The association with work injuries is consistent with the theoretical model in which health status is the basis for maintaining WA. The association with effort-reward imbalance shows that issues related to work organization are relevant for these workers. The association with overcommitment suggests that workers recognize their responsibility for the therapeutic processes of patients. Results showed a number of features of a different nature that should be taken into account when implementing measures to improve WA, to be applied at different levels: individual, task and institutional.

Relevance: 30.00%

Abstract:

Background: The genetic mechanisms underlying interindividual blood pressure variation reflect the complex interplay of both genetic and environmental variables. The current standard statistical methods for detecting genes involved in the regulation mechanisms of complex traits are based on univariate analysis. Few studies have focused on the search for, and understanding of, quantitative trait loci responsible for gene × environment interactions or on multiple-trait analysis. Composite interval mapping has been extended to multiple traits and may be an interesting approach to such a problem. Methods: We used multiple-trait analysis for quantitative trait locus mapping of loci having different effects on systolic blood pressure under NaCl exposure. The animals studied were 188 rats, the progeny of an F2 intercross between a hypertensive and a normotensive strain, genotyped at 179 polymorphic markers across the rat genome. To accommodate the correlational structure of measurements taken in the same animals, we applied univariate and multivariate strategies for analyzing the data. Results: We detected a new quantitative trait locus in a region close to marker R589 on chromosome 5 of the rat genome, not previously identified through serial analysis of individual traits. In addition, we were able to justify analytically the parametric restrictions, in terms of regression coefficients, responsible for the gain in precision with the adopted analytical approach. Conclusion: Future work should focus on fine mapping and the identification of the causative variant responsible for this quantitative trait locus signal. The multivariable strategy might be valuable in the study of genetic determinants of interindividual variation in antihypertensive drug effectiveness.

Relevance: 30.00%

Abstract:

Background: Decreased heart rate variability (HRV) is related to higher morbidity and mortality. In this study we evaluated the linear and nonlinear indices of HRV in stable angina patients submitted to coronary angiography. Methods: We studied 77 unselected patients undergoing elective coronary angiography, who were divided into two groups: a coronary artery disease (CAD) group and a non-CAD group. For analysis of the HRV indices, HRV was recorded beat by beat with the volunteers in the supine position for 40 minutes. We analyzed the linear indices in the time domain (SDNN [standard deviation of normal-to-normal intervals], NN50 [total number of adjacent RR intervals differing by more than 50 ms] and RMSSD [root-mean square of successive differences]) and in the frequency domain: ultra-low frequency (ULF, ≤ 0.003 Hz), very low frequency (VLF, 0.003–0.04 Hz), low frequency (LF, 0.04–0.15 Hz) and high frequency (HF, 0.15–0.40 Hz), as well as the ratio between the LF and HF components (LF/HF). As nonlinear indices we evaluated SD1, SD2, SD1/SD2, approximate entropy (−ApEn), α1, α2, the Lyapunov exponent, the Hurst exponent, autocorrelation and the correlation dimension. The cutoff points of the variables for the predictive tests were obtained from the Receiver Operating Characteristic (ROC) curve. The area under the ROC curve was calculated by the extended trapezoidal rule, taking areas under the curve ≥ 0.650 as relevant. Results: Coronary artery disease patients presented reduced values of SDNN, RMSSD, NN50, HF, SD1, SD2 and −ApEn. HF ≤ 66 ms², RMSSD ≤ 23.9 ms, ApEn ≤ −0.296 and NN50 ≤ 16 presented the best discriminatory power for the presence of significant coronary obstruction. Conclusion: We suggest the use of heart rate variability analysis in the linear and nonlinear domains for prognostic purposes in patients with stable angina pectoris, in view of their overall impairment.
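
A minimal sketch of the time-domain indices named above, computed from an RR (NN) interval series; the RR data here are synthetic, not from the study (which used 40-minute supine recordings).

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """Time-domain HRV indices from a series of RR (NN) intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    sdnn = rr.std(ddof=1)                      # SDNN: standard deviation of all NN intervals
    rmssd = np.sqrt(np.mean(diff ** 2))        # RMSSD: root mean square of successive differences
    nn50 = int(np.sum(np.abs(diff) > 50.0))    # NN50: successive differences larger than 50 ms
    return sdnn, rmssd, nn50

# Toy RR series in ms (baseline 1000 ms with slow modulation and beat-to-beat noise)
rng = np.random.default_rng(0)
rr = 1000 + 40 * np.sin(np.linspace(0, 20, 600)) + rng.normal(0, 25, 600)
print(time_domain_hrv(rr))
```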

Relevance: 30.00%

Abstract:

Semi-qualitative probabilistic networks (SQPNs) merge two important graphical model formalisms: Bayesian networks and qualitative probabilistic networks. They provide a very general modeling framework by allowing the combination of numeric and qualitative assessments over a discrete domain, and can be compactly encoded by exploiting the same factorization of joint probability distributions that underlies Bayesian networks. This paper explores the computational complexity of inferences in semi-qualitative probabilistic networks, taking polytree-shaped networks as its main target. We show that the inference problem is coNP-complete for binary polytrees with multiple observed nodes. We also show that inferences can be performed in time linear in the number of nodes if there is a single observed node. Because our proof is constructive, we obtain an efficient linear-time algorithm for SQPNs under such assumptions. To the best of our knowledge, this is the first exact polynomial-time algorithm for SQPNs. Together, these results provide a clear picture of the inferential complexity in polytree-shaped SQPNs.

Relevance: 30.00%

Abstract:

Network reconfiguration for service restoration (SR) in distribution systems is a complex optimization problem. For large-scale distribution systems, it is computationally hard to find adequate SR plans in real time, since the problem is combinatorial and non-linear and involves several constraints and objectives. Two Multi-Objective Evolutionary Algorithms that use Node-Depth Encoding (NDE) have proved able to efficiently generate adequate SR plans for large distribution systems: (i) one of them is the hybridization of the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) with NDE, named NSGA-N; (ii) the other is a Multi-Objective Evolutionary Algorithm based on subpopulation tables that uses NDE, named MEAN. Further challenges are now faced, namely designing SR plans for larger systems that are as good as those for relatively smaller ones, and for multiple faults that are as good as those for a single fault. In order to tackle both challenges, this paper proposes a method that results from the combination of NSGA-N, MEAN and a new heuristic. This heuristic focuses the application of the NDE operators on the network zones in alarm, according to technical constraints. The method generates SR plans of similar quality in distribution systems of significantly different sizes (from 3860 to 30,880 buses). Moreover, the number of switching operations required to implement the SR plans generated by the proposed method increases in a moderate way with the number of faults.
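
Since the method is multi-objective, candidate SR plans are compared by Pareto dominance rather than by a single score. A generic sketch of that comparison (the objectives below are illustrative; this is not the NSGA-N or MEAN code):

```python
def dominates(obj_a, obj_b):
    """True if plan A dominates plan B, with all objectives to be minimized
    (e.g. unsupplied load and number of switching operations)."""
    return all(a <= b for a, b in zip(obj_a, obj_b)) and any(a < b for a, b in zip(obj_a, obj_b))

def non_dominated(plans):
    """Keep the non-dominated SR plans from a list of (name, objectives) pairs."""
    front = []
    for i, (name, obj) in enumerate(plans):
        if not any(dominates(other, obj) for j, (_, other) in enumerate(plans) if j != i):
            front.append((name, obj))
    return front

# Hypothetical SR plans with (unsupplied load in MW, number of switching operations)
plans = [("A", (1.2, 8)), ("B", (0.9, 12)), ("C", (1.2, 10)), ("D", (2.0, 6))]
print([name for name, _ in non_dominated(plans)])   # A, B and D survive; C is dominated by A
```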

Relevance: 30.00%

Abstract:

The influence of shear stress and angular momentum on the nonlinear spherical collapse model is discussed in the framework of the Einstein–de Sitter and ΛCDM models. By assuming that the vacuum component is not clustering within the homogeneous nonspherical overdensities, we show how the local rotation and shear affect the linear density threshold for collapse of the nonrelativistic component (δc) and its virial overdensity (ΔV). It is also found that the net effect of shear and rotation on galactic scales is responsible for higher values of the linear overdensity parameter as compared with the standard spherical collapse model (without shear and rotation).

Relevance: 30.00%

Abstract:

Deformability is often crucial to the design of many civil-engineering structural elements. Moreover, design is all the more burdensome if both long- and short-term deformability have to be considered. In this thesis, long- and short-term deformability have been studied from both the material and the structural modelling points of view. Two materials have been considered: pultruded composites and concrete. A new finite element model for thin-walled beams has been introduced. As a main assumption, cross-sections are considered rigid in their plane; this hypothesis replaces the classical beam-theory hypothesis of plane cross-sections in the deformed state. It also allows reducing the total number of degrees of freedom, and therefore makes the analysis faster compared with two-dimensional finite elements. Warping in the longitudinal direction is left free, allowing phenomena such as shear lag to be described. The new finite element model has first been applied to concrete thin-walled beams (such as high-span roof girders or bridge girders) subject to instantaneous service loadings. Concrete in its cracked state has been considered through a smeared crack model for beams under bending. At a second stage, the FE model has been extended to the viscoelastic field and applied to pultruded composite beams under sustained loadings. The generalized Maxwell model has been adopted. As far as materials are concerned, long-term creep tests have been carried out on pultruded specimens. Both tension and shear tests have been executed. Some specimens have been strengthened with carbon fibre plies to reduce short- and long-term deformability. Tests have been carried out in a climate room, with specimens kept under constant load for 2 years. As for concrete, a model for tertiary creep has been proposed. The basic idea is to couple the UMLV linear creep model with a damage model in order to describe nonlinearity. An effective strain tensor, weighting the total and the elasto-damaged strain tensors, controls damage evolution through the damage loading function. Creep strains are related to the effective stresses (defined by the damage model) and so are associated with the intact material.
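
As a reference for the viscoelastic part, a minimal sketch of the generalized Maxwell (Prony series) relaxation modulus on which such a model is built; the branch stiffnesses and relaxation times below are illustrative, not the values calibrated in the thesis.

```python
import numpy as np

def relaxation_modulus(t, e_inf, e_i, tau_i):
    """Generalized Maxwell (Prony series) relaxation modulus:
    E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    e_i = np.asarray(e_i, dtype=float)
    tau_i = np.asarray(tau_i, dtype=float)
    return e_inf + np.sum(e_i[None, :] * np.exp(-t[:, None] / tau_i[None, :]), axis=1)

# Example: a three-branch chain evaluated over two years of sustained load (time in days)
t_days = np.linspace(0.0, 730.0, 5)
print(relaxation_modulus(t_days, e_inf=20.0, e_i=[6.0, 4.0, 2.0], tau_i=[1.0, 30.0, 365.0]))
```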

Relevance: 30.00%

Abstract:

The first part of my work consisted of sampling campaigns conducted in nine different localities of the Salento peninsula and Apulia (Italy): Costa Merlata (BR), Punta Penne (BR), Santa Cesarea Terme (LE), Santa Caterina (LE), Torre Inserraglio (LE), Torre Guaceto (BR), Porto Cesareo (LE), Otranto (LE) and Isole Tremiti (FG). I collected data on species percentage cover in the infralittoral rocky zone, using 50x50 cm squares. We considered 3 sites per locality and 10 replicates per site, taken randomly. I then added other data about the same places, collected over several years, and combined them for a spatial analysis. I therefore started from a data set of 1896 samples, but decided not to consider time as a factor, because I have reason to think that over this period the anthropogenic stressors and their effects (if present) did not change considerably. The response variable I analysed is the percentage cover of 243 species (subsequently merged into 32 functional groups), including seaweeds, invertebrates, sediment and rock. After the sampling, I spent a period of two months at the Hopkins Marine Station of Stanford University, in Monterey (California, USA), in Fiorenza Micheli's laboratory. There I carried out statistical analyses on my data set using the software PRIMER 6. My explorative analysis starts with an nMDS in PRIMER 6, considering the original data matrix without, for the moment, the effect of stressors. The result is a good separation between localities, and it confirms the result of the ANOSIM analysis conducted on the original data matrix. What can be stated is that the separation is not driven by a geographic pattern; something else must be driving the differences. The presence of at least three groups is clear: one composed of Porto Cesareo, Torre Guaceto and Isole Tremiti (the only marine protected areas considered in this work); another of Otranto; and the last of the remaining small, impacted localities. Within the localities that include MPAs (Marine Protected Areas), it is also possible to observe a sort of grouping between protected and control areas. The SIMPER analysis shows that most of the species driving the differences between populations are not rare species, e.g. Cystoseira spp., Mytilus sp. and ECR. Moreover, I assigned discrete values (0, 1, 2) for each stressor to all the sites considered, according to the intensity with which the anthropogenic factor affects each locality. I then tried to establish whether there were significant interactions between stressors: using Spearman rank correlation and Spearman significance tables, with 17 degrees of freedom, the outcome shows some significant stressor interactions. I then built an nMDS considering the stressors as the response variable. The result was positive: localities are well separated by stressors. Consequently, I related the 'localities and species' matrix to the 'localities and stressors' one. The stressor combination explains, at a good significance level, the variability within my populations. I tried all the possible data transformations (none, square root, fourth root, log(X+1), presence/absence), and the fourth root turned out to be the best one, with the highest level of significance, meaning that rare species can also influence the result.
The challenge will be to better characterize which kinds of stressors (including natural ones) act on the ecosystem, to give them more accurate quantitative values, and to try to understand how they interact (in an additive or non-additive way).
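
A minimal open-source sketch of the core workflow described above (fourth-root transform, Bray-Curtis dissimilarities, non-metric MDS); the cover matrix is synthetic and this is not the PRIMER 6 analysis itself.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Toy cover matrix: rows = samples (sites x replicates), columns = functional groups
rng = np.random.default_rng(1)
cover = rng.gamma(shape=0.5, scale=10.0, size=(30, 32))   # percentage-cover-like data

# Fourth-root transform (the transformation that worked best in the text),
# then Bray-Curtis dissimilarities and non-metric MDS.
transformed = cover ** 0.25
dissim = squareform(pdist(transformed, metric="braycurtis"))

nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           random_state=0, n_init=10, max_iter=500)
coords = nmds.fit_transform(dissim)
print(coords.shape, round(nmds.stress_, 3))
```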

Relevance: 30.00%

Abstract:

In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMRs) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from the collected disease counts and from expected disease counts calculated by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature according to both the classic and the Bayesian paradigm. Our proposal approaches this issue by a decision-oriented method, which focuses on multiple testing control, without however leaving the preliminary-study perspective that an analysis on SMR indicators is asked to keep. We implement control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed to be independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical, fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just by means of the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model aims to estimate the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data, denoted \widehat{FDR}, can be calculated on any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected), by averaging the b_i's themselves. The \widehat{FDR} can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the \widehat{FDR} does not exceed a prefixed value; we call these \widehat{FDR}-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true and the risk level in the latter areas. In summarizing the simulation results we always consider the FDR estimation in sets constituted by all the b_i's lower than a threshold t. We show graphs of \widehat{FDR} and of the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (by the closeness between \widehat{FDR} and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against \widehat{FDR}, we can check the sensitivity and specificity of the corresponding \widehat{FDR}-based decision rules. To investigate the over-smoothing level of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all the simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we have good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of \widehat{FDR}-based decision rules is generally low, but the specificity is high. In such scenarios the use of a \widehat{FDR} = 0.05 or \widehat{FDR} = 0.10 based selection rule can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and a \widehat{FDR} = 0.15 based decision rule gains power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values of it (much lower than 0.05), resulting in a loss of specificity of a \widehat{FDR} = 0.05 based decision rule. In such scenarios \widehat{FDR} = 0.05 or, even worse, \widehat{FDR} = 0.1 based decision rules cannot be suggested because the true FDR is actually much higher. As regards the relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of the relative risk values and the FDR control, except for scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
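
A minimal sketch of the \widehat{FDR}-based selection rule described above, i.e. averaging the posterior null probabilities b_i of the areas declared at high risk; the b_i values below are toy numbers, not MCMC output from the model.

```python
import numpy as np

def fdr_hat(post_null_prob, threshold):
    """Estimated FDR of the selection {areas with b_i <= threshold}.

    post_null_prob: posterior probabilities b_i of the null hypothesis
    (absence of risk). As described in the text, the estimated FDR of the
    selected set is simply the mean of the selected b_i's.
    """
    b = np.asarray(post_null_prob, dtype=float)
    selected = b <= threshold
    if not selected.any():
        return 0.0, selected
    return float(b[selected].mean()), selected

# Toy posterior null probabilities for 10 areas (illustrative values only)
b = np.array([0.01, 0.03, 0.40, 0.02, 0.75, 0.08, 0.90, 0.05, 0.60, 0.04])

# Decision rule: take the largest set of areas whose estimated FDR stays below
# a prefixed level (here 0.10).
best = None
for t in sorted(b):
    est, sel = fdr_hat(b, t)
    if est <= 0.10:
        best = (t, est, int(sel.sum()))
print(best)   # (threshold, estimated FDR, number of areas declared at high risk)
```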

Relevance: 30.00%

Abstract:

Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as, e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer Linear Programs (MIPs). On the other hand, more than 50 years of intensive research has dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community is still more than active in trying to answer some of these questions. As a consequence, a huge number of papers are continuously developed and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first one happens when we are asked to handle a general MIP and we cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use some general-purpose techniques. The second one happens when mixed integer programming is used to address a somehow structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise some special-purpose techniques. This thesis tries to give some insights into both of the above-mentioned situations. The first part of the work is focused on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature in the context of disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to possibly strengthen those cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers have drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling other important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableau arising from lattice-free triangles, along with some preliminary computational results.
The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution), in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP where each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has been proven to be extremely effective in the classical TSP context. Here we present an overall (quite) general idea which is based on a relaxed discretization of time windows. Such an idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut methods in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (and, in particular, the usage of general-purpose cutting planes) can be useful to improve on branch-and-cut methods proposed in the literature.
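
A schematic sketch of the destroy-and-repair loop described for the ILP local search (the helpers `destroy`, `build_repair_mip` and `solve_mip` are hypothetical placeholders passed in by the caller, not the thesis code or any specific solver API):

```python
import random

def ilp_local_search(initial_solution, objective, destroy, build_repair_mip, solve_mip,
                     iterations=100, seed=0):
    """Destroy-and-repair ILP local search skeleton.

    destroy(solution, rng)        -> partially destroyed solution (hypothetical helper)
    build_repair_mip(partial)     -> MIP encoding an exponential neighborhood (hypothetical helper)
    solve_mip(model, time_limit)  -> repaired feasible solution or None (hypothetical helper)
    """
    rng = random.Random(seed)
    best = initial_solution
    best_cost = objective(best)
    for _ in range(iterations):
        partial = destroy(best, rng)                  # randomly destroy part of the incumbent
        model = build_repair_mip(partial)             # neighborhood encoded as a MIP
        candidate = solve_mip(model, time_limit=5.0)  # heuristically solved by a MIP solver
        if candidate is not None and objective(candidate) < best_cost:
            best, best_cost = candidate, objective(candidate)
    return best, best_cost
```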