Abstract:
Transforming growth factor beta (TGF-beta) and tumor necrosis factor alpha (TNF-alpha) often exhibit antagonistic actions on the regulation of various activities such as immune responses, cell growth, and gene expression. However, the molecular mechanisms involved in the mutually opposing effects of TGF-beta and TNF-alpha are unknown. Here, we report that binding sites for the transcription factor CTF/NF-I mediate antagonistic TGF-beta and TNF-alpha transcriptional regulation in NIH3T3 fibroblasts. TGF-beta induces the proline-rich transactivation domain of specific CTF/NF-I family members, such as CTF-1, whereas TNF-alpha represses both the uninduced and the TGF-beta-induced CTF-1 transcriptional activity. CTF-1 is thus the first transcription factor reported to be repressed by TNF-alpha. The previously identified TGF-beta-responsive domain in the proline-rich transcriptional activation sequence of CTF-1 mediates both transcriptional induction and repression by the two growth factors. Analysis of potential signal transduction intermediates does not support a role for known mediators of TNF-alpha action, such as arachidonic acid, in CTF-1 regulation. However, overexpression of oncogenic forms of the small GTPase Ras or of the Raf-1 kinase represses CTF-1 transcriptional activity, as does TNF-alpha. Furthermore, TNF-alpha is unable to repress CTF-1 activity in NIH3T3 cells overexpressing ras or raf, suggesting that TNF-alpha regulates CTF-1 by a Ras-Raf kinase-dependent pathway. Mutagenesis studies demonstrated that the CTF-1 TGF-beta-responsive domain is not the primary target of regulatory phosphorylations. Interestingly, however, the domain mediating TGF-beta and TNF-alpha antagonistic regulation overlapped precisely with the previously identified histone H3 interaction domain of CTF-1. These results identify CTF-1 as a molecular target of mutually antagonistic TGF-beta and TNF-alpha regulation, and they further suggest a molecular mechanism for the opposing effects of these growth factors on gene expression.
Abstract:
We previously demonstrated the synergistic therapeutic effect of the cetuximab (anti-epidermal growth factor receptor [EGFR] monoclonal antibody, mAb)-trastuzumab (anti-HER2 mAb) combination (2mAbs therapy) in HER2(low) human pancreatic carcinoma xenografts. Here, we compared the 2mAbs therapy, the erlotinib (EGFR tyrosine kinase inhibitor [TKI])-trastuzumab combination and lapatinib alone (dual HER2/EGFR TKI) and explored their possible mechanisms of action. The effects of the three therapies on tumor growth and animal survival were assessed in nude mice xenografted with the human pancreatic carcinoma cell lines Capan-1 and BxPC-3. After therapy, EGFR and HER2 expression and AKT phosphorylation in tumor cells were analyzed by Western blot analysis. EGFR/HER2 heterodimerization was quantified in BxPC-3 cells by time-resolved FRET. In K-ras-mutated Capan-1 xenografts, the 2mAbs therapy gave significantly higher inhibition of tumor growth than the erlotinib/trastuzumab combination, whereas in BxPC-3 (wild-type K-ras) xenografts, the erlotinib/trastuzumab combination showed similar growth inhibition but fewer tumor-free mice. Lapatinib showed no antitumor effect in either type of xenograft. The efficacy of the 2mAbs therapy was partly Fc-independent, because F(ab')2 fragments of the two mAbs significantly inhibited BxPC-3 growth, although with a time-limited therapeutic effect. The 2mAbs therapy was associated with a reduction of EGFR and HER2 expression and AKT phosphorylation. BxPC-3 cells preincubated with the two mAbs showed 50% fewer EGFR/HER2 heterodimers than controls. In pancreatic carcinoma xenografts, the 2mAbs therapy is more effective than treatments involving dual EGFR/HER2 TKIs. The mechanism of action may involve decreased AKT phosphorylation and/or disruption of EGFR/HER2 heterodimerization.
Abstract:
Risk theory has been a very active research area over the last decades. The main objectives of the theory are to find adequate stochastic processes which can model the surplus of a (non-life) insurance company and to analyze risk-related quantities such as the ruin time, the ruin probability, the expected discounted penalty function and the expected discounted dividend/tax payments. The study of these ruin-related quantities provides crucial information for actuaries and decision makers. This thesis consists of the study of four different insurance risk models which are essentially related. The ruin and related quantities are investigated using different techniques, resulting in explicit or asymptotic expressions for the ruin time, the ruin probability, the expected discounted penalty function and the expected discounted tax payments.
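As a rough illustration of the kind of quantity studied here, the sketch below estimates a finite-horizon ruin probability by Monte Carlo in the classical compound Poisson (Cramér-Lundberg) surplus model; the exponential claim-size choice and all parameter values are illustrative assumptions, not results from the thesis.

```python
import random

def ruin_probability(u, c, lam, claim_mean, horizon, n_paths=20000, seed=1):
    """Monte Carlo estimate of the finite-horizon ruin probability in the
    classical compound Poisson (Cramer-Lundberg) risk model.

    Surplus: U(t) = u + c*t - S(t), where S(t) is a compound Poisson sum of
    claims. Ruin can only occur at claim instants, so each path is simulated
    claim by claim using exponential inter-arrival times. Claim sizes are
    exponential with mean claim_mean (an illustrative choice)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, float(u)
        while True:
            wait = rng.expovariate(lam)        # time until the next claim
            if t + wait > horizon:
                break                          # no ruin before the horizon
            t += wait
            surplus += c * wait                # premiums collected meanwhile
            surplus -= rng.expovariate(1.0 / claim_mean)  # claim payment
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

# Example: initial surplus 10, premium rate 1.2, claim rate 1, mean claim 1
# (positive safety loading, since 1.2 > 1 * 1).
print(ruin_probability(u=10, c=1.2, lam=1.0, claim_mean=1.0, horizon=100.0))
```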
Abstract:
EXECUTIVE SUMMARY: Evaluating Information Security Posture within an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately, this is ineffective because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation. At the same time, the overall security level is globally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One: Information Security Evaluation Issues consists of four chapters. Chapter 1 is an introduction to the purpose of this research and to the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. Then we introduce the baseline attributes of our model and set out the expected results of evaluations according to our model. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in a holistic, baseline Information Security Program are defined. Based on this, the most common roots-of-trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. Then the operation of the model is discussed. Assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two: Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, Functional dimension, Human dimension, and Legal dimension. Each Information Security dimension is discussed in a separate chapter.
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: (i) identification of the key elements within the dimension; (ii) identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension; and (iii) identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing those security issues. The second phase concerns the evaluation of each Information Security dimension by: (i) the implementation of the evaluation model, based on the elements identified for each dimension within the first phase, identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; and (ii) a maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by every organization in order to define its own security requirements. Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach. Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security.
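To make the dimension → Focus Area → Specific Factor structure described above more concrete, here is a small hypothetical encoding of such a hierarchy with maturity scores; the 0-5 scale, the example entries and the weakest-link aggregation rule are illustrative assumptions rather than the scoring scheme prescribed by the thesis.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the ISAAM-style hierarchy sketched above:
# dimensions contain Focus Areas, which contain Specific Factors, each
# scored on an assumed 0-5 maturity scale. The weakest-link aggregation
# mirrors the "only as strong as its weakest link" premise, but is an
# illustrative choice, not the thesis's prescribed formula.

@dataclass
class SpecificFactor:
    name: str
    maturity: int          # assumed 0-5 maturity score

@dataclass
class FocusArea:
    name: str
    factors: list = field(default_factory=list)

    def maturity(self) -> int:
        return min((f.maturity for f in self.factors), default=0)

@dataclass
class Dimension:
    name: str               # Organizational, Functional, Human or Legal
    focus_areas: list = field(default_factory=list)

    def maturity(self) -> int:
        return min((fa.maturity() for fa in self.focus_areas), default=0)

def overall_level(dimensions) -> int:
    """Weakest-link aggregation: the global level is the lowest dimension maturity."""
    return min(d.maturity() for d in dimensions)

# Example usage with made-up entries:
human = Dimension("Human", [
    FocusArea("Awareness", [SpecificFactor("Training programme", 3),
                            SpecificFactor("Phishing exercises", 2)]),
])
legal = Dimension("Legal", [
    FocusArea("Compliance", [SpecificFactor("Data protection register", 4)]),
])
print(overall_level([human, legal]))   # -> 2
```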
Abstract:
BACKGROUND: At least 2 apparently independent mechanisms, microsatellite instability (MSI) and chromosomal instability, are implicated in colorectal tumorigenesis. Their respective roles in predicting clinical outcomes of patients with T3N0 colorectal cancer remain unknown. METHODS: Eighty-eight patients with a sporadic T3N0 colon or rectal adenocarcinoma were followed up for a median of 67 months. For chromosomal instability analysis, Ki-ras mutations were determined by single-strand polymerase chain reaction, and p53 protein staining was studied by immunohistochemistry. For MSI analysis, DNA was amplified by polymerase chain reaction at 7 microsatellite targets (BAT25, BAT26, D17S250, D2S123, D5S346, transforming growth factor receptor II, and BAX). RESULTS: The overall 5-year survival rate was 72%. p53 protein nuclear staining was detected in 39 patients (44%), and MSI was detected in 21 patients (24%). MSI correlated with proximal location (P < .001) and mucinous content (P < .001). In a multivariate analysis, p53 protein expression carried a significant risk of death (relative risk = 4.0, 95% CI = 1.6 to 10.1, P = .004). By comparison, MSI was not a statistically significant prognostic factor for survival in this group (relative risk = 2.2, 95% CI = 0.6 to 7.3, P = .21). CONCLUSIONS: p53 protein overexpression provides better prognostic discrimination than MSI in predicting survival of patients with T3N0 colorectal cancer. Although MSI is associated with specific clinicopathologic parameters, it did not predict overall survival in this group. Assessment of p53 protein expression by immunocytochemistry provides a simple means to identify a subset of T3N0 patients with a 4-fold increased risk of death.
Abstract:
AIM: To assess whether blockade of the renin-angiotensin system (RAS), a recognized strategy to prevent the progression of diabetic nephropathy, affects renal tissue oxygenation in type 2 diabetes mellitus (T2DM) patients. METHODS: Prospective, randomized, 2-way cross-over study; T2DM patients with (micro)albuminuria and/or hypertension underwent blood oxygenation level-dependent magnetic resonance imaging (BOLD-MRI) at baseline, after one month of enalapril (20 mg qd), and after one month of candesartan (16 mg qd). Each BOLD-MRI was performed before and after the administration of furosemide. The mean R2* (= 1/T2*) values in the medulla and cortex were calculated, a low R2* indicating high tissue oxygenation. RESULTS: Twelve patients (mean age: 60±11 years, eGFR: 62±22 ml/min/1.73 m²) completed the study. Neither chronic enalapril nor candesartan intake modified renal cortical or medullary R2* levels. Furosemide significantly decreased cortical and medullary R2* levels, suggesting a transient increase in renal oxygenation. Medullary R2* levels correlated positively with urinary sodium excretion and systemic blood pressure, suggesting lower renal oxygenation at higher dietary sodium intake and blood pressure; cortical R2* levels correlated positively with glycemia and HbA1c. CONCLUSION: RAS blockade does not seem to increase renal tissue oxygenation in T2DM hypertensive patients. The response to furosemide and the association with 24-h urinary sodium excretion emphasize the crucial role of renal sodium handling as one of the main determinants of renal tissue oxygenation.
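Since R2* is simply the reciprocal of T2*, the per-region means reported here could in principle be computed as in the short sketch below; the array names, the millisecond input unit and the use of plain region masks are assumptions for illustration, not details taken from the study protocol.

```python
import numpy as np

def mean_r2star(t2star_ms, mask):
    """Mean R2* (in s^-1) over a region of interest.

    t2star_ms : array of fitted T2* values in milliseconds (assumed unit)
    mask      : boolean array of the same shape selecting cortex or medulla

    R2* = 1 / T2*, so a low mean R2* corresponds to high tissue oxygenation."""
    t2s_s = np.asarray(t2star_ms, dtype=float)[mask] / 1000.0   # ms -> s
    return float(np.mean(1.0 / t2s_s))

# Hypothetical usage:
#   cortical_r2star  = mean_r2star(t2star_map, cortex_mask)
#   medullary_r2star = mean_r2star(t2star_map, medulla_mask)
```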
Abstract:
"MotionMaker (TM)" is a stationary programmable test and training system for the lower limbs developed at the 'Ecole Polytechnique Federale de Lausanne' with the 'Fondation Suisse pour les Cybertheses'.. The system is composed of two robotic orthoses comprising motors and sensors, and a control unit managing the trans-cutaneous electrical muscle stimulation with real-time regulation. The control of the Functional Electrical Stimulation (FES) induced muscle force necessary to mimic natural exercise is ensured by the control unit which receives a continuous input from the position and force sensors mounted on the robot. First results with control subjects showed the feasibility of creating movements by such closed-loop controlled FES induced muscle contractions. To make exercising with the MotionMaker (TM) safe for clinical trials with Spinal Cord Injured (SCI) volunteers, several original safety features have been introduced. The MotionMaker (TM) is able to identify and manage the occurrence of spasms. Fatigue can also be detected and overfatigue during exercise prevented.
Abstract:
This paper aims to estimate empirically the efficiency of a Swiss telemedicine service introduced in 2003. We used claims data gathered by a major Swiss health insurer over a period of 6 years, involving 160 000 insured adults. In Switzerland, health insurance is mandatory, but everyone has the option of choosing between a managed care plan and a fee-for-service plan. This paper focuses on a conventional fee-for-service plan that includes mandatory access to a telemedicine service; the insured are obliged to phone this medical call centre before visiting a physician. This type of plan generates much lower average health expenditures than a conventional insurance plan. Reasons for this may include selection, incentive effects or efficiency. In our sample, about 90% of the difference in health expenditure can be explained by selection and incentive effects. The remaining 10% of savings, due to the efficiency of the telemedicine service, amounts to about SFr 150 per year per insured, of which approximately 60% is saved by the insurer and 40% by the insured. Although the efficiency effect is greater than the cost of the plan, the big winners are the insured, who not only save monetary and non-monetary costs but also benefit from reduced premiums.
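The reported breakdown can be checked with back-of-the-envelope arithmetic: if the SFr 150 efficiency saving corresponds to roughly 10% of the expenditure difference, the implied total difference is about SFr 1,500 per insured per year. The sketch below simply restates the paper's percentages; the SFr 1,500 total is an implication of those figures rather than a number reported in the text.

```python
# Back-of-the-envelope decomposition of the reported figures (all amounts
# in SFr per insured per year; the ~1,500 total is implied, not reported).
efficiency_saving = 150.0              # saving attributed to telemedicine efficiency
efficiency_share = 0.10                # ~10% of the expenditure difference
selection_incentive_share = 0.90       # ~90% explained by selection and incentives

total_difference = efficiency_saving / efficiency_share             # ~1,500
selection_incentive = total_difference * selection_incentive_share  # ~1,350

insurer_saving = 0.60 * efficiency_saving    # ~90 saved by the insurer
insured_saving = 0.40 * efficiency_saving    # ~60 saved by the insured

print(total_difference, selection_incentive, insurer_saving, insured_saving)
```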
Abstract:
In this thesis, I develop analytical models to price the value of supply chain investments under demand uncertainty. This thesis includes three self-contained papers. In the first paper, we investigate the value of lead-time reduction under the risk of sudden and abnormal changes in demand forecasts. We first consider the risk of a complete and permanent loss of demand. We then provide a more general jump-diffusion model, where we add a compound Poisson process to a constant-volatility demand process to explore the impact of sudden changes in demand forecasts on the value of lead-time reduction. We use an Edgeworth series expansion to divide the lead-time cost into that arising from constant instantaneous volatility, and that arising from the risk of jumps. We show that the value of lead-time reduction increases substantially in the intensity and/or the magnitude of jumps. In the second paper, we analyze the value of quantity flexibility in the presence of supply-chain disintermediation problems. We use the multiplicative martingale model and the "contracts as reference points" theory to capture both positive and negative effects of quantity flexibility for the downstream level in a supply chain. We show that lead-time reduction reduces both supply-chain disintermediation problems and supply-demand mismatches. We furthermore analyze the impact of the supplier's cost structure on the profitability of quantity-flexibility contracts. When the supplier's initial investment cost is relatively low, supply-chain disintermediation risk becomes less important, and hence the contract becomes more profitable for the retailer. We also find that the supply-chain efficiency increases substantially with the supplier's ability to disintermediate the chain when the initial investment cost is relatively high. In the third paper, we investigate the value of dual sourcing for the products with heavy-tailed demand distributions. We apply extreme-value theory and analyze the effects of tail heaviness of demand distribution on the optimal dual-sourcing strategy. We find that the effects of tail heaviness depend on the characteristics of demand and profit parameters. When both the profit margin of the product and the cost differential between the suppliers are relatively high, it is optimal to buffer the mismatch risk by increasing both the inventory level and the responsive capacity as demand uncertainty increases. In that case, however, both the optimal inventory level and the optimal responsive capacity decrease as the tail of demand becomes heavier. When the profit margin of the product is relatively high, and the cost differential between the suppliers is relatively low, it is optimal to buffer the mismatch risk by increasing the responsive capacity and reducing the inventory level as the demand uncertainty increases. In that case, however, it is optimal to buffer with more inventory and less capacity as the tail of demand becomes heavier. We also show that the optimal responsive capacity is higher for the products with heavier tails when the fill rate is extremely high.
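As a minimal illustration of the demand model in the first paper, the sketch below simulates one path of a constant-volatility geometric diffusion with a compound Poisson jump component; the lognormal jump-size distribution, the driftless diffusion and all parameter values are illustrative assumptions, not the exact specification or calibration used in the thesis.

```python
import math
import random

def simulate_demand_forecast(d0, sigma, jump_rate, jump_mu, jump_sigma,
                             horizon, n_steps, seed=0):
    """One illustrative path of a jump-diffusion demand-forecast process:
    a driftless constant-volatility geometric diffusion plus a compound
    Poisson jump component with lognormal jump sizes.

    d0         : initial demand forecast
    sigma      : instantaneous volatility of the diffusion part
    jump_rate  : Poisson intensity of abrupt forecast revisions (per unit time)
    jump_mu,
    jump_sigma : mean and std. dev. of the log jump sizes (jump_mu < 0 models
                 sudden demand losses; the complete, permanent loss case in
                 the paper would instead send demand to zero)"""
    rng = random.Random(seed)
    dt = horizon / n_steps
    d, path = float(d0), [float(d0)]
    for _ in range(n_steps):
        # diffusion step (driftless geometric Brownian motion)
        d *= math.exp(-0.5 * sigma ** 2 * dt
                      + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        # compound Poisson step: count jump arrivals within this interval
        n_jumps = 0
        if jump_rate > 0:
            t = rng.expovariate(jump_rate)
            while t < dt:
                n_jumps += 1
                t += rng.expovariate(jump_rate)
        for _ in range(n_jumps):
            d *= math.exp(rng.gauss(jump_mu, jump_sigma))   # multiplicative jump
        path.append(d)
    return path

# Example with illustrative parameters: mostly downward jumps, about twice a year.
path = simulate_demand_forecast(d0=100.0, sigma=0.2, jump_rate=2.0,
                                jump_mu=-0.7, jump_sigma=0.3,
                                horizon=1.0, n_steps=250)
print(min(path), path[-1])
```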
Abstract:
Combinatorial optimization involves finding an optimal solution in a finite set of options; many everyday life problems are of this kind. However, the number of options grows exponentially with the size of the problem, such that an exhaustive search for the best solution is practically infeasible beyond a certain problem size. When efficient algorithms are not available, a practical approach to obtain an approximate solution to the problem at hand, is to start with an educated guess and gradually refine it until we have a good-enough solution. Roughly speaking, this is how local search heuristics work. These stochastic algorithms navigate the problem search space by iteratively turning the current solution into new candidate solutions, guiding the search towards better solutions. The search performance, therefore, depends on structural aspects of the search space, which in turn depend on the move operator being used to modify solutions. A common way to characterize the search space of a problem is through the study of its fitness landscape, a mathematical object comprising the space of all possible solutions, their value with respect to the optimization objective, and a relationship of neighborhood defined by the move operator. The landscape metaphor is used to explain the search dynamics as a sort of potential function. The concept is indeed similar to that of potential energy surfaces in physical chemistry. Borrowing ideas from that field, we propose to extend to combinatorial landscapes the notion of the inherent network formed by energy minima in energy landscapes. In our case, energy minima are the local optima of the combinatorial problem, and we explore several definitions for the network edges. At first, we perform an exhaustive sampling of local optima basins of attraction, and define weighted transitions between basins by accounting for all the possible ways of crossing the basins frontier via one random move. Then, we reduce the computational burden by only counting the chances of escaping a given basin via random kick moves that start at the local optimum. Finally, we approximate network edges from the search trajectory of simple search heuristics, mining the frequency and inter-arrival time with which the heuristic visits local optima. Through these methodologies, we build a weighted directed graph that provides a synthetic view of the whole landscape, and that we can characterize using the tools of complex networks science. We argue that the network characterization can advance our understanding of the structural and dynamical properties of hard combinatorial landscapes. We apply our approach to prototypical problems such as the Quadratic Assignment Problem, the NK model of rugged landscapes, and the Permutation Flow-shop Scheduling Problem. We show that some network metrics can differentiate problem classes, correlate with problem non-linearity, and predict problem hardness as measured from the performances of trajectory-based local search heuristics.
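As a toy illustration of the second (escape-edge) construction, the sketch below samples local optima of a small random binary landscape by hill climbing, then estimates weighted directed edges from the optima reached after random two-bit kicks; the landscape, the bit-flip move operator, the kick strength and the sample sizes are all illustrative choices, not the QAP, NK or flow-shop settings studied in the thesis.

```python
import random
from collections import defaultdict

# Toy "escape edge" local optima network on a small random binary landscape.
random.seed(3)
N = 10
fitness = defaultdict(random.random)   # lazily assigns a random fitness per string

def neighbours(s):
    """All solutions one bit flip away from s."""
    return [s[:i] + ('1' if s[i] == '0' else '0') + s[i + 1:] for i in range(N)]

def hill_climb(s):
    """Best-improvement local search under the bit-flip operator."""
    while True:
        best = max(neighbours(s), key=lambda t: fitness[t])
        if fitness[best] <= fitness[s]:
            return s                    # s is a local optimum
        s = best

def kick(s, strength=2):
    """Random perturbation: flip `strength` distinct bits."""
    for i in random.sample(range(N), strength):
        s = s[:i] + ('1' if s[i] == '0' else '0') + s[i + 1:]
    return s

# 1) Sample local optima from random starting solutions.
optima = {hill_climb(''.join(random.choice('01') for _ in range(N)))
          for _ in range(200)}

# 2) Estimate weighted directed escape edges: kick each optimum and record
#    which optimum the subsequent climb lands on.
edges = defaultdict(int)
for o in optima:
    for _ in range(100):
        target = hill_climb(kick(o))
        if target != o:
            edges[(o, target)] += 1

print(len(optima), "local optima,", len(edges), "directed edges")
```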