55 results for Multi-objective optimization techniques
Abstract:
Introduction: Coordination is a strategy chosen by the central nervous system to control movements and maintain stability during gait. Coordinated multi-joint movements require a complex interaction between nervous outputs, biomechanical constraints, and proprioception. Quantitatively understanding and modeling gait coordination still remains a challenge. Surgeons lack a way to model and appreciate the coordination of patients before and after surgery of the lower limbs. Patients alter their gait patterns and their kinematic synergies when they walk faster or slower than normal speed to maintain their stability and minimize the energy cost of locomotion. The goal of this study was to provide a dynamical system approach to quantitatively describe human gait coordination and apply it to patients before and after total knee arthroplasty. Methods: A new method of quantitative analysis of interjoint coordination during gait was designed, providing a general model to capture the whole dynamics and showing the kinematic synergies at various walking speeds. The proposed model imposed a relationship among lower limb joint angles (hips and knees) to parameterize the dynamics of locomotion of each individual. An integration of different analysis tools such as harmonic analysis, principal component analysis, and artificial neural networks helped overcome the high dimensionality, temporal dependence, and non-linear relationships of the gait patterns. Ten participants were studied using an ambulatory gait device (Physilog®). Each participant was asked to perform two 30 m walking trials at 3 different speeds and to complete an EQ-5D questionnaire, the WOMAC, and the Knee Society Score. Lower limb rotations were measured by four miniature angular rate sensors mounted on each shank and thigh. The outcomes of the eight patients undergoing total knee arthroplasty, recorded pre-operatively and post-operatively at 6 weeks, 3 months, 6 months, and 1 year, were compared to those of 2 age-matched healthy subjects. Results: The new method provided coordination scores at various walking speeds, ranging between 0 and 10. It determined the overall coordination of the lower limbs as well as the contribution of each joint to the total coordination. The differences between the pre-operative and post-operative coordination values were correlated with the improvements in the subjective outcome scores. Although the study group was small, the results showed a new way to objectively quantify gait coordination of patients undergoing total knee arthroplasty, using only portable body-fixed sensors. Conclusion: A new method for objective gait coordination analysis has been developed, with very encouraging results regarding the objective outcome of lower limb surgery.
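A minimal sketch of the kind of pipeline described in the abstract above: joint-angle curves over one gait cycle are smoothed with a few Fourier harmonics (the harmonic-analysis step) and then reduced with PCA to expose a dominant kinematic synergy. The artificial-neural-network stage and the paper's actual 0-10 scoring rule are not reproduced; the score mapping and all names below are assumptions for illustration only.

```python
import numpy as np

def harmonic_reconstruction(angle_cycle, n_harmonics=5):
    """Keep only the first few Fourier harmonics of one gait cycle (smoothing)."""
    coeffs = np.fft.rfft(angle_cycle)
    coeffs[n_harmonics + 1:] = 0.0
    return np.fft.irfft(coeffs, n=len(angle_cycle))

def coordination_score(joint_cycles, n_harmonics=5):
    """joint_cycles: (n_joints, n_samples) array of joint-angle curves over one cycle."""
    smoothed = np.array([harmonic_reconstruction(c, n_harmonics) for c in joint_cycles])
    X = smoothed.T - smoothed.T.mean(axis=0)        # time samples x joints, centered
    _, s, _ = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    # Assumed mapping: share of variance captured by the first kinematic synergy,
    # rescaled to 0-10 (the paper's actual scoring rule is not given here).
    return 10.0 * explained[0]

# Usage with synthetic data: 4 joint-angle curves (hips and knees) over one cycle
t = np.linspace(0.0, 2.0 * np.pi, 101)
rng = np.random.default_rng(0)
curves = np.array([np.sin(t + 0.15 * j) + 0.05 * rng.standard_normal(t.size)
                   for j in range(4)])
print(f"coordination score: {coordination_score(curves):.1f} / 10")
```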
Abstract:
Colistin is a last-resort antibacterial treatment in critically ill patients with multi-drug resistant Gram-negative infections. As appropriate colistin exposure is the key to maximizing efficacy while minimizing toxicity, individualized dosing optimization guided by therapeutic drug monitoring is a top clinical priority. The objective of the present work was to develop a rapid and robust HPLC-MS/MS assay for quantification of colistin plasma concentrations. This novel methodology, validated according to international standards, simultaneously quantifies the microbiologically active compounds colistin A and B, plus the pro-drug colistin methanesulfonate (colistimethate, CMS). 96-well micro-elution SPE on Oasis Hydrophilic-Lipophilic-Balanced (HLB), followed by direct analysis by Hydrophilic Interaction Liquid Chromatography (HILIC) with an Ethylene Bridged Hybrid (BEH) Amide phase column coupled to tandem mass spectrometry, allows high throughput with no significant matrix effect. The technique is highly sensitive (limits of quantification 0.014 and 0.006 μg/mL for colistin A and B), precise (intra-/inter-assay CV 0.6-8.4%) and accurate (intra-/inter-assay deviation from nominal concentrations -4.4 to +6.3%) over the clinically relevant analytical range 0.05-20 μg/mL. Colistin A and B in plasma and whole blood samples are reliably quantified over 48 h at room temperature and at +4°C (<6% deviation from nominal values) and after three freeze-thaw cycles. Colistimethate acidic hydrolysis (1 M H2SO4) to colistin A and B in plasma was completed in vitro after 15 min of sonication, while the pro-drug hydrolyzed spontaneously in plasma ex vivo after 4 h at room temperature: this information is of utmost importance for the interpretation of analytical results. Quantification is precise and accurate when using serum, citrated or EDTA plasma as the biological matrix, while the use of heparin plasma is not appropriate. This new analytical technique, providing optimized quantification in real-life conditions of the microbiologically active compounds colistin A and B, offers a highly efficient tool for routine therapeutic drug monitoring aimed at individualizing drug dosing against life-threatening infections.
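The two validation metrics quoted above (intra-assay precision as a CV% and accuracy as the deviation from the nominal concentration) can be computed from replicate QC measurements as sketched below. The replicate values are invented example data, not measurements from the study.

```python
import numpy as np

def precision_cv(replicates):
    """Coefficient of variation (%) of replicate measurements at one QC level."""
    replicates = np.asarray(replicates, dtype=float)
    return 100.0 * replicates.std(ddof=1) / replicates.mean()

def accuracy_bias(replicates, nominal):
    """Mean deviation (%) of replicate measurements from the nominal concentration."""
    replicates = np.asarray(replicates, dtype=float)
    return 100.0 * (replicates.mean() - nominal) / nominal

nominal_conc = 0.05                                   # low QC level, ug/mL (example)
measured = [0.048, 0.051, 0.052, 0.049, 0.050, 0.053]  # hypothetical replicates
print(f"CV = {precision_cv(measured):.1f}%, "
      f"bias = {accuracy_bias(measured, nominal_conc):+.1f}%")
```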
Abstract:
BACKGROUND: Food allergy is a common allergic disorder, especially in early childhood. Avoidance of the allergenic food is the only available method to prevent further reactions in sensitized patients. A better understanding of the immunologic mechanisms involved in this reaction would help to develop therapeutic approaches applicable to the prevention of food allergy. OBJECTIVE: To establish a multi-cell in vitro model of sensitized intestinal epithelium that mimics the intestinal epithelial barrier, in order to study the capacity of probiotic microorganisms to modulate permeability, translocation and immunoreactivity of ovalbumin (OVA) used as a model antigen. METHODS: Polarized Caco-2 cell monolayers were conditioned by basolateral basophils and used to examine apical-to-basolateral transport of OVA by ELISA. Activation of basophils by translocated OVA was measured by a beta-hexosaminidase release assay. This experimental setting was used to assess how microorganisms added apically affected these parameters. Basolateral secretion of cytokines/chemokines by polarized Caco-2 cell monolayers was analysed by ELISA. RESULTS: Basophils loaded with OVA-specific IgE responded to OVA in a dose-dependent manner. OVA transported across polarized Caco-2 cell monolayers was found to trigger basolateral basophil activation. Microorganisms including lactobacilli and Escherichia coli increased transepithelial electrical resistance while promoting OVA passage capable of triggering basophil activation. Non-inflammatory levels of IL-8 and thymic stromal lymphopoietin were produced basolaterally by Caco-2 cells exposed to microorganisms. CONCLUSION: The complex model designed here is adequate for studying the consequences of the interaction between microorganisms and epithelial cells vis-à-vis barrier function and antigen translocation, two parameters essential to mucosal homeostasis. It can further serve as a direct tool to search for microorganisms with anti-allergic and anti-inflammatory properties.
Abstract:
OBJECTIVE: The optimal coronary MR angiography sequence has yet to be determined. We sought to quantitatively and qualitatively compare four coronary MR angiography sequences. SUBJECTS AND METHODS: Free-breathing coronary MR angiography was performed in 12 patients using four imaging sequences (turbo field-echo, fast spin-echo, balanced fast field-echo, and spiral turbo field-echo). Quantitative comparisons, including signal-to-noise ratio, contrast-to-noise ratio, vessel diameter, and vessel sharpness, were performed using a semiautomated analysis tool. Accuracy for detection of hemodynamically significant disease (> 50%) was assessed in comparison with radiographic coronary angiography. RESULTS: Signal-to-noise and contrast-to-noise ratios were markedly increased using the spiral (25.7 +/- 5.7 and 15.2 +/- 3.9) and balanced fast field-echo (23.5 +/- 11.7 and 14.4 +/- 8.1) sequences compared with the turbo field-echo (12.5 +/- 2.7 and 8.3 +/- 2.6) sequence (p < 0.05). Vessel diameter was smaller with the spiral sequence (2.6 +/- 0.5 mm) than with the other techniques (turbo field-echo, 3.0 +/- 0.5 mm, p = 0.6; balanced fast field-echo, 3.1 +/- 0.5 mm, p < 0.01; fast spin-echo, 3.1 +/- 0.5 mm, p < 0.01). Vessel sharpness was highest with the balanced fast field-echo sequence (61.6% +/- 8.5% compared with turbo field-echo, 44.0% +/- 6.6%; spiral, 44.7% +/- 6.5%; fast spin-echo, 18.4% +/- 6.7%; p < 0.001). The overall accuracies of the sequences were similar (range, 74% for turbo field-echo to 79% for spiral). Scanning time for the fast spin-echo sequences was longest (10.5 +/- 0.6 min), and for the spiral acquisitions was shortest (5.2 +/- 0.3 min). CONCLUSION: Advantages in signal-to-noise and contrast-to-noise ratios, vessel sharpness, and the qualitative results appear to favor spiral and balanced fast field-echo coronary MR angiography sequences, although subjective accuracy for the detection of coronary artery disease was similar to that of other sequences.
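The quantitative metrics compared in the abstract above can be computed from region-of-interest (ROI) statistics as sketched below. The ROI definitions and the paper's semiautomated vessel-sharpness tool are not reproduced; one common SNR/CNR convention is assumed, and the pixel values are synthetic.

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean blood-pool signal over noise standard deviation."""
    return np.mean(signal_roi) / np.std(noise_roi, ddof=1)

def cnr(signal_roi, background_roi, noise_roi):
    """Contrast-to-noise ratio between blood pool and adjacent tissue."""
    return (np.mean(signal_roi) - np.mean(background_roi)) / np.std(noise_roi, ddof=1)

# Example with synthetic pixel intensities
rng = np.random.default_rng(1)
blood = 250 + 10 * rng.standard_normal(200)        # coronary lumen ROI
myocardium = 120 + 10 * rng.standard_normal(200)   # adjacent tissue ROI
air = 10 * rng.standard_normal(200)                # background noise ROI
print(f"SNR = {snr(blood, air):.1f}, CNR = {cnr(blood, myocardium, air):.1f}")
```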
Abstract:
Tractography is a class of algorithms aiming to map in vivo the major neuronal pathways in the white matter from diffusion magnetic resonance imaging (MRI) data. These techniques offer a powerful tool to noninvasively investigate, at the macroscopic scale, the architecture of the neuronal connections of the brain. Unfortunately, however, the reconstructions recovered with existing tractography algorithms are not really quantitative, even though diffusion MRI is a quantitative modality by nature. As a matter of fact, several techniques have been proposed in recent years to estimate, at the voxel level, intrinsic microstructural features of the tissue, such as axonal density and diameter, by using multicompartment models. In this paper, we present a novel framework to re-establish the link between tractography and tissue microstructure. Starting from an input set of candidate fiber tracts, which are estimated from the data using standard fiber-tracking techniques, we model the diffusion MRI signal in each voxel of the image as a linear combination of the restricted and hindered contributions generated in every location of the brain by these candidate tracts. Then, we seek the global weight of each of them, i.e., its effective contribution or volume, such that together they best fit the measured signal. We demonstrate that these weights can be easily recovered by solving a global convex optimization problem with efficient algorithms. The effectiveness of our approach has been evaluated both on a realistic phantom with known ground truth and on in vivo brain data. The results clearly demonstrate the benefits of the proposed formulation, opening new perspectives for a more quantitative and biologically plausible assessment of the structural connectivity of the brain.
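A minimal sketch of the global fitting step described above: the measured diffusion signal y is modeled as A @ x, where each column of A holds the (restricted plus hindered) contributions of one candidate tract across all voxels, and the non-negative tract weights x are recovered by convex non-negative least squares. Building A from real tractograms and compartment models is the hard part and is not shown; the dictionary below is a toy stand-in.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_measurements, n_tracts = 300, 20      # all voxels x gradient directions, stacked
A = np.abs(rng.standard_normal((n_measurements, n_tracts)))  # toy tract dictionary
true_w = np.zeros(n_tracts)
true_w[[2, 7, 11]] = [1.5, 0.8, 2.0]    # only a few candidate tracts truly contribute
y = A @ true_w + 0.01 * rng.standard_normal(n_measurements)

weights, residual = nnls(A, y)          # solves min ||A x - y||_2 subject to x >= 0
print("recovered non-zero weights:", np.flatnonzero(weights > 0.05))
```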
Abstract:
Stress radiographs have been recommended in order to obtain a better objective quantification of abnormal knee compartment motion. This tool has been shown to be superior in quantifying a posterior cruciate ligament (PCL) lesion compared to clinical or arthrometer evaluation. Different radiographic techniques have been described in the literature to quantify posterior pathological laxity. In this study we evaluated the total posterior displacement (PTD) and the side-to-side difference (SSD), before and after surgical reconstruction of the PCL or of the PCL and the posterolateral complex (PLC), using two different stress radiography techniques (Telos stress device and kneeling view). Twenty patients were included in this study. We found a statistically significant difference in both total PTD and SSD between the two techniques, preoperatively and at follow-up, with the greatest values occurring with the kneeling view. Although stress radiography has been introduced to allow an objective quantification of laxity in the ligament-injured knee, we believe that further studies on a larger number of subjects are required to define the relationship between PTD values measured with stress knee radiography, particularly the kneeling view, and ligamentous knee injury, in order to obtain a truly useful tool in the decision-making process, as well as to evaluate the outcome after ligamentous surgery.
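A sketch of the paired comparison reported above: the same laxity measure obtained with both techniques in the same patients, compared with a paired test. The data are synthetic, not the study's measurements, and the choice of a non-parametric Wilcoxon test for n = 20 is an assumption, not necessarily the analysis the authors used.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(3)
ssd_telos = 8.0 + 2.0 * rng.standard_normal(20)                  # mm, Telos stress device
ssd_kneeling = ssd_telos + 2.5 + 1.0 * rng.standard_normal(20)   # mm, kneeling view (larger)

stat, p_value = wilcoxon(ssd_kneeling, ssd_telos)                # paired non-parametric test
print(f"median paired difference = {np.median(ssd_kneeling - ssd_telos):.1f} mm, "
      f"p = {p_value:.4f}")
```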
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale for the purpose of improving predictions of groundwater flow and solute transport. However, extending corresponding approaches to the regional scale still represents one of the major challenges in the domain of hydrogeophysics. To address this problem, we have developed a regional-scale data integration methodology based on a two-step Bayesian sequential simulation approach. Our objective is to generate high-resolution stochastic realizations of the regional-scale hydraulic conductivity field in the common case where there exist spatially exhaustive but poorly resolved measurements of a related geophysical parameter, as well as highly resolved but spatially sparse collocated measurements of this geophysical parameter and the hydraulic conductivity. To integrate this multi-scale, multi-parameter database, we first link the low- and high-resolution geophysical data via a stochastic downscaling procedure. This is followed by relating the downscaled geophysical data to the high-resolution hydraulic conductivity distribution. After outlining the general methodology of the approach, we demonstrate its application to a realistic synthetic example where we consider as data high-resolution measurements of the hydraulic and electrical conductivities at a small number of borehole locations, as well as spatially exhaustive, low-resolution estimates of the electrical conductivity obtained from surface-based electrical resistivity tomography. The different stochastic realizations of the hydraulic conductivity field obtained using our procedure are validated by comparing their solute transport behaviour with that of the underlying 'true' hydraulic conductivity field. We find that, even in the presence of strong subsurface heterogeneity, our proposed procedure allows for the generation of faithful representations of the regional-scale hydraulic conductivity structure and reliable predictions of solute transport over long, regional-scale distances.
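A highly simplified 1-D sketch of the two-step structure described above: (1) stochastically downscale a coarse but exhaustive geophysical parameter (electrical conductivity), then (2) map the downscaled values to hydraulic conductivity using the sparse collocated borehole data, adding a stochastic residual. The paper's actual Bayesian sequential simulation machinery is not reproduced; interpolation-plus-noise and a fitted linear mapping are assumed stand-ins, and all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic setting: coarse ERT-like estimate on 10 cells, fine grid of 100 cells
x_fine = np.linspace(0.0, 1.0, 100)
x_coarse = np.linspace(0.0, 1.0, 10)
sigma_coarse = 0.02 + 0.01 * np.sin(2 * np.pi * x_coarse)   # S/m, low resolution

# Sparse collocated borehole measurements of electrical conductivity and log10(K)
sigma_bh = np.array([0.018, 0.025, 0.030])
logK_bh = np.array([-4.8, -4.2, -3.9])

def downscale(sigma_coarse, x_coarse, x_fine, fine_scale_std=0.002):
    """Step 1: one stochastic realization of the fine-scale geophysical field."""
    trend = np.interp(x_fine, x_coarse, sigma_coarse)
    return trend + fine_scale_std * rng.standard_normal(x_fine.size)

def map_to_logK(sigma_fine, sigma_bh, logK_bh, residual_std=0.1):
    """Step 2: petrophysical mapping fitted at the boreholes, plus stochastic residual."""
    slope, intercept = np.polyfit(sigma_bh, logK_bh, 1)
    return slope * sigma_fine + intercept + residual_std * rng.standard_normal(sigma_fine.size)

# Generate a small ensemble of hydraulic-conductivity realizations
realizations = np.array([map_to_logK(downscale(sigma_coarse, x_coarse, x_fine),
                                     sigma_bh, logK_bh) for _ in range(5)])
print("ensemble-mean log10(K) range:",
      realizations.mean(axis=0).min(), realizations.mean(axis=0).max())
```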
Abstract:
OBJECTIVE: Mild neurocognitive disorders (MND) affect a subset of HIV+ patients under effective combination antiretroviral therapy (cART). In this study, we used an innovative multi-contrast magnetic resonance imaging (MRI) approach at high field to assess the presence of micro-structural brain alterations in MND+ patients. METHODS: We enrolled 17 MND+ and 19 MND- patients with undetectable HIV-1 RNA and 19 healthy controls (HC). MRI acquisitions at 3T included: MP2RAGE for T1 relaxation times, Magnetization Transfer (MT), T2* and Susceptibility Weighted Imaging (SWI) to probe micro-structural integrity and iron deposition in the brain. Statistical analysis used permutation-based tests and correction for the family-wise error rate. Multiple regression analysis was performed between MRI data and (i) neuropsychological results and (ii) HIV infection characteristics. A linear discriminant analysis (LDA) based on MRI data was performed between MND+ and MND- patients and cross-validated with a leave-one-out test. RESULTS: Our data revealed loss of structural integrity and micro-oedema in MND+ patients compared to HC in the global white and cortical gray matter, as well as in the thalamus and basal ganglia. Multiple regression analysis showed a significant influence of sub-cortical nuclei alterations on the executive index of MND+ patients (p = 0.04 and R(2) = 95.2). The LDA distinguished MND+ and MND- patients with a classification quality of 73% after cross-validation. CONCLUSION: Our study shows micro-structural brain tissue alterations in MND+ patients under effective therapy and suggests that multi-contrast MRI at high field is a powerful approach to discriminate between HIV+ patients on cART with and without mild neurocognitive deficits.
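A sketch of the classification step described above: a linear discriminant analysis separating MND+ from MND- patients based on MRI-derived features, cross-validated with leave-one-out. The feature values below are synthetic placeholders; the study's actual relaxometry, MT, T2* and SWI metrics are not used.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(5)
n_mnd_pos, n_mnd_neg, n_features = 17, 19, 4
X = np.vstack([
    rng.standard_normal((n_mnd_pos, n_features)) + 0.8,   # MND+ (shifted for illustration)
    rng.standard_normal((n_mnd_neg, n_features)),          # MND-
])
y = np.array([1] * n_mnd_pos + [0] * n_mnd_neg)

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.0%}")
```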
Abstract:
BACKGROUND: Iterative reconstruction (IR) techniques reduce image noise in multidetector computed tomography (MDCT) imaging. They can therefore be used to reduce radiation dose while maintaining diagnostic image quality nearly constant. However, CT manufacturers offer several strength levels of IR to choose from. PURPOSE: To determine the optimal strength level of IR in low-dose MDCT of the cervical spine. MATERIAL AND METHODS: Thirty consecutive patients investigated by low-dose cervical spine MDCT were prospectively studied. Raw data were reconstructed using filtered back-projection and sinogram-affirmed IR (SAFIRE, strength levels 1 to 5) techniques. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were measured at C3-C4 and C6-C7 levels. Two radiologists independently and blindly evaluated various anatomical structures (both dense and soft tissues) using a 4-point scale. They also rated the overall diagnostic image quality using a 10-point scale. RESULTS: As IR strength levels increased, image noise decreased linearly, while SNR and CNR both increased linearly at C3-C4 and C6-C7 levels (P < 0.001). For the intervertebral discs, the content of neural foramina and dural sac, and for the ligaments, subjective image quality scores increased linearly with increasing IR strength level (P ≤ 0.03). Conversely, for the soft tissues and trabecular bone, the scores decreased linearly with increasing IR strength level (P < 0.001). Finally, the overall diagnostic image quality scores increased linearly with increasing IR strength level (P < 0.001). CONCLUSION: The optimal strength level of IR in low-dose cervical spine MDCT depends on the anatomical structure to be analyzed. For the intervertebral discs and the content of neural foramina, high strength levels of IR are recommended.
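A sketch of the trend analysis reported above: an image-quality metric measured at each iterative-reconstruction strength level and tested for a linear trend across levels. The numbers are invented placeholders and the simple per-level regression below is an assumed stand-in for the paper's per-patient statistical model.

```python
import numpy as np
from scipy.stats import linregress

ir_level = np.array([0, 1, 2, 3, 4, 5])                    # 0 = filtered back-projection
noise_hu = np.array([14.2, 12.8, 11.5, 10.1, 8.9, 7.6])    # hypothetical image noise (HU)

fit = linregress(ir_level, noise_hu)
print(f"noise change per IR strength level: {fit.slope:.2f} HU "
      f"(R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.4f})")
```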
Abstract:
Individual-as-maximizing-agent analogies result in a simple understanding of the functioning of the biological world. Identifying the conditions under which individuals can be regarded as fitness-maximizing agents is thus of considerable interest to biologists. Here, we compare different concepts of fitness maximization, and discuss within a single framework the relationship between Hamilton's (J Theor Biol 7: 1-16, 1964) model of social interactions, Grafen's (J Evol Biol 20: 1243-1254, 2007a) formal Darwinism project, and the idea of evolutionarily stable strategies. We distinguish cases where phenotypic effects are additively separable or not, the latter not being covered by Grafen's analysis. In both cases it is possible to define a maximand, in the form of an objective function phi(z), whose argument is the phenotype of an individual and whose derivative is proportional to Hamilton's inclusive fitness effect. However, this maximand can be identified with the expression for fecundity or fitness only in the case of additively separable phenotypic effects, making individual-as-maximizing-agent analogies unattractive (although formally correct) under general situations of social interactions. We also feel that there is an inconsistency in Grafen's characterization of the solution of his maximization program by use of inclusive fitness arguments. His results are in conflict with those on evolutionarily stable strategies obtained by applying inclusive fitness theory, and can be repaired only by changing the definition of the problem.
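The maximand described above can be written compactly. The fragment below is a sketch in standard inclusive-fitness notation; the symbols c', b' and r (direct cost, benefit to social partners, relatedness) are the usual textbook quantities and are assumptions of this illustration rather than the paper's own notation.

```latex
% Sketch in standard inclusive-fitness notation (assumed, not the paper's own):
% z: focal phenotype, -c'(z): direct fitness effect on self,
% b'(z): effect on social partners, r: relatedness.
\begin{align*}
  S(z) &= -c'(z) + r\,b'(z)
      && \text{(Hamilton's inclusive fitness effect)} \\
  \frac{\mathrm{d}\phi(z)}{\mathrm{d}z} &\propto S(z)
      && \text{(defining property of the maximand } \phi\text{)}
\end{align*}
% Stationary points of phi, where S(z) = 0, are the candidate evolutionarily
% stable phenotypes; only under additively separable phenotypic effects can
% phi(z) be identified with individual fecundity or fitness itself.
```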
Abstract:
Introduction: Mantle cell lymphoma (MCL) accounts for 6% of all B-cell lymphomas and remains incurable for most patients. Those who relapse after first-line therapy or hematopoietic stem cell transplantation have a dismal prognosis, with short response duration after salvage therapy. On a molecular level, MCL is characterised by the translocation t(11;14) leading to Cyclin D1 overexpression. Cyclin D1 is downstream of the mammalian target of rapamycin (mTOR) kinase and can be effectively blocked by mTOR inhibitors such as temsirolimus. We set out to define the single-agent activity of the orally available mTOR inhibitor everolimus (RAD001) in a prospective, multi-centre trial in patients with relapsed or refractory MCL (NCT00516412). The study was performed in collaboration with the EU-MCL network. Methods: Eligible patients with histologically/cytologically confirmed relapsed (not more than 3 prior lines of systemic treatment) or refractory MCL received everolimus 10 mg orally daily on days 1-28 of each 4-week cycle for 6 cycles or until disease progression. The primary endpoint was the best objective response, with adverse reactions, time to progression (TTP), time to treatment failure, response duration and molecular response as secondary endpoints. A response rate of 10% was considered uninteresting and, conversely, a rate of 30% promising. The required sample size was 35 patients using Simon's optimal two-stage design with 90% power and 5% significance. Results: A total of 36 patients (35 evaluable) from 19 centers were enrolled between August 2007 and January 2010. The median age was 69.4 years (range 40.1 to 84.9 years), with 22 males and 13 females. Thirty patients presented with relapsed and 5 with refractory MCL, with a median of two prior therapies. Treatment was generally well tolerated, with anemia (11%), thrombocytopenia (11%), neutropenia (8%), diarrhea (3%) and fatigue (3%) being the most frequent complications of CTC grade III or higher. Eighteen patients received 6 or more cycles of everolimus treatment. The objective response rate was 20% (95% CI: 8-37%), with 2 CR, 5 PR, 17 SD, and 11 PD. At a median follow-up of 6 months, TTP was 5.45 months (95% CI: 2.8-8.2 months) for the entire population and 10.6 months for the 18 patients receiving 6 or more cycles of treatment. Conclusion: This study demonstrates that single-agent everolimus 10 mg once daily orally is well tolerated. The null hypothesis of inactivity could be rejected, indicating moderate anti-lymphoma activity in relapsed/refractory MCL. Further studies of everolimus either in combination with chemotherapy or as a single agent for maintenance treatment are warranted in MCL.
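For illustration only, the observed response rate can be compared against the 10% "uninteresting" null with an exact one-sided binomial test, as sketched below. The trial's actual decision rule was the pre-specified Simon optimal two-stage boundary, which is not reconstructed here, so the p-value printed by this sketch is not the trial's formal test.

```python
from scipy.stats import binomtest

responders = 7          # 2 CR + 5 PR among the 35 evaluable patients
n_evaluable = 35
result = binomtest(responders, n_evaluable, p=0.10, alternative="greater")
print(f"observed ORR = {responders / n_evaluable:.0%}, "
      f"one-sided exact p = {result.pvalue:.3f}")
```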
Abstract:
In the context of Systems Biology, computer simulations of gene regulatory networks provide a powerful tool to validate hypotheses and to explore possible system behaviors. Nevertheless, modeling a system poses challenges of its own; in particular, the step of model calibration is often difficult due to insufficient data. For example, when considering developmental systems, mostly qualitative data describing the developmental trajectory are available, while common calibration techniques rely on high-resolution quantitative data. Focusing on the calibration of differential equation models for developmental systems, this study investigates different approaches to utilize the available data to overcome these difficulties. More specifically, the fact that developmental processes are hierarchically organized is exploited to increase convergence rates of the calibration process as well as to save computation time. Using a gene regulatory network model for stem cell homeostasis in Arabidopsis thaliana, the performance of the different investigated approaches is evaluated, documenting considerable gains provided by the proposed hierarchical approach.
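A toy sketch of the hierarchical calibration idea described above: parameters governing an earlier developmental stage are fitted first and then frozen while later-stage parameters are fitted, rather than estimating everything at once. The two-gene ODE model and the data are invented placeholders; they do not represent the Arabidopsis stem-cell model of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

t_obs = np.linspace(0.0, 10.0, 21)

def model(t, y, k1, d1, k2, d2):
    g1, g2 = y
    return [k1 - d1 * g1,           # "early" gene: constant production, linear decay
            k2 * g1 - d2 * g2]      # "late" gene: activated by the early gene

def simulate(params):
    sol = solve_ivp(model, (0.0, 10.0), [0.0, 0.0], t_eval=t_obs, args=tuple(params))
    return sol.y

rng = np.random.default_rng(6)
data = simulate([1.0, 0.5, 0.8, 0.3]) + 0.02 * rng.standard_normal((2, t_obs.size))

# Stage 1: fit the early-gene parameters (k1, d1) against gene-1 data only.
# (Gene-1 dynamics do not depend on the placeholder late-gene parameters.)
stage1 = minimize(lambda p: np.sum((simulate([p[0], p[1], 1.0, 1.0])[0] - data[0])**2),
                  x0=[0.5, 0.2], method="Nelder-Mead")

# Stage 2: freeze (k1, d1) and fit the late-gene parameters (k2, d2).
k1_hat, d1_hat = stage1.x
stage2 = minimize(lambda p: np.sum((simulate([k1_hat, d1_hat, p[0], p[1]])[1] - data[1])**2),
                  x0=[0.5, 0.2], method="Nelder-Mead")

print("stage 1 estimates (k1, d1):", np.round(stage1.x, 2))
print("stage 2 estimates (k2, d2):", np.round(stage2.x, 2))
```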
Abstract:
Context: Unlike the small bowel, the colorectal mucosa is seldom the site of metastatic disease. Objective: To determine the incidence of true colorectal metastases, and the subsequent clinicopathologic findings, in a substantial colorectal cancer population collected from 7 European centers. Design: During the last decade, 10 365 patients were identified as having colorectal malignant tumors other than systemic diseases. Data collected included patient demographics, clinical symptoms, treatment, the presence of metastases in other sites, disease-free interval, follow-up, and overall survival. All secondary tumors resulting from direct invasion from malignant tumors of the contiguous organs were excluded, as well as those resulting from lymph node metastases or peritoneal seeding. Results: Only 35 patients were included (10 men), with a median age of 59 years. They presented with obstruction, bleeding, abdominal pain, or perforation. The leading source of metastases was the breast, followed by melanoma. Metastases were synchronous in 3 cases. The mean disease-free interval for the remaining cases was 6.61 years. Surgical resection was performed in 28 cases. Follow-up was available for 26 patients; all had died, with a mean survival time of 10.67 months (range, 1-41 months). Conclusions: Colorectal metastases are exceptional (0.338%), with the breast as the leading source; they still represent a late stage of disease and reflect a poor prognosis. Therefore, the pathologist should be alert to the possibility of secondary tumors when studying large bowel biopsies. Any therapy is usually palliative, but our results suggest that prolonged survival after surgery and complementary therapy can be obtained in some patients.
Abstract:
Objective: To assess the importance of spirituality and religious coping among outpatients with a DSM-IV diagnosis of schizophrenia or schizoaffective disorder living in three countries. Method: A total of 276 outpatients (92 from Geneva, Switzerland, 121 from Trois-Rivières, Canada, and 63 from Durham, North Carolina), aged 18-65, were administered a semi-structured interview on the role of spirituality and religiousness in their lives and in coping with their illness. Results: Religion is important for outpatients in each of the three country sites, and religious involvement is higher than in the general population. Religion was helpful (i.e., provided a positive sense of self and positive coping with the illness) among 87% of the participants and harmful (a source of despair and suffering) among 13%. Helpful religion was associated with better social, clinical and psychological status; the opposite was observed for the harmful aspects of religion. In addition, religion sometimes conflicted with psychiatric treatment. Conclusions: These results indicate that outpatients with schizophrenia or schizoaffective disorder often use spirituality and religion to cope with their illness, mostly positively, yet sometimes negatively. These results underscore the importance of clinicians taking into account the spiritual and religious lives of patients with schizophrenia.
Abstract:
One of the key emphases of these three essays is to provide practical managerial insight. However, good practical insight can only be created by grounding it firmly in theoretical and empirical research. Practical, experience-based understanding without theoretical grounding remains tacit and cannot be easily disseminated; theoretical understanding without links to real life remains sterile. My studies aim to increase the understanding of how radical innovation could be generated at large established firms and how it can have an impact on business performance, as most businesses pursue innovation with one prime objective: value creation. My studies focus on large established firms with sales revenue exceeding USD 1 billion. Usually large established firms cannot rely on informal ways of management, as these firms tend to be multinational businesses operating with subsidiaries, offices, or production facilities in more than one country. I. Internal and External Determinants of Corporate Venture Capital Investment The goal of this chapter is to focus on corporate venture capital (CVC) as one of the mechanisms available for established firms to source new ideas that can be exploited. We explore the internal and external determinants under which established firms engage in CVC to source new knowledge through investment in startups. We attempt to make scholars and managers aware of the forces that influence CVC activity by providing findings and insights to facilitate the strategic management of CVC. There are research opportunities to further understand the CVC phenomenon. Why do companies engage in CVC? What motivates them to continue "playing the game" and keep their active CVC investment status? The study examines CVC investment activity and the importance of understanding the influential factors that make a firm decide to engage in CVC. The main question is: How do established firms' CVC programs adapt to changing internal conditions and external environments? Adaptation typically involves learning from exploratory endeavors, which enable companies to transform the ways they compete (Guth & Ginsberg, 1990). Our study extends the current stream of research on CVC. It aims to contribute to the literature by providing an extensive comparison of internal and external determinants leading to CVC investment activity. To our knowledge, this is the first study to examine the influence of internal and external determinants on CVC activity throughout specific expansion and contraction periods determined by structural breaks occurring between 1985 and 2008. Our econometric analysis, illustrated in the sketch below, indicates a strong and significant positive association between CVC activity and R&D, cash flow availability and external financial market conditions, as well as a significant negative association between sales growth and the decision to engage in CVC. The analysis of this study reveals that CVC investment is highly volatile, as demonstrated by dramatic fluctuations in CVC investment activity over the past decades. When analyzing the overall cyclical CVC period from 1985 to 2008, the results of our study suggest that CVC activity follows a pattern influenced by financial factors such as the level of R&D, free cash flow and lack of sales growth, and by external conditions of the economy, with the NASDAQ price index as the most significant variable influencing CVC during this period.
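A purely illustrative sketch of the kind of specification described in the first essay: CVC activity regressed on R&D, free cash flow, sales growth and a market-condition proxy (NASDAQ index), with a dummy for expansion versus contraction periods. The data are simulated, and the variable names, functional form and period definitions are assumptions, not the study's actual model or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
df = pd.DataFrame({
    "rnd": rng.gamma(2.0, 1.0, n),            # R&D intensity
    "cash_flow": rng.normal(0.1, 0.05, n),    # free cash flow / assets
    "sales_growth": rng.normal(0.05, 0.1, n),
    "nasdaq": rng.normal(0.0, 1.0, n),        # standardized market-index proxy
    "expansion": rng.integers(0, 2, n),        # 1 = expansion period (structural-break dummy)
})
# Simulated outcome with signs matching the reported associations
df["cvc_activity"] = (0.5 * df["rnd"] + 2.0 * df["cash_flow"] - 1.0 * df["sales_growth"]
                      + 0.8 * df["nasdaq"] + 0.5 * df["expansion"]
                      + rng.normal(0.0, 0.5, n))

X = sm.add_constant(df[["rnd", "cash_flow", "sales_growth", "nasdaq", "expansion"]])
print(sm.OLS(df["cvc_activity"], X).fit().summary().tables[1])
```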
II. Contribution of CVC and its Interaction with R&D to Value Creation The second essay takes into account the demands of corporate executives and shareholders regarding business performance and value-creation justifications for investments in innovation. Billions of dollars are invested in CVC and R&D. However, there is little evidence that CVC and its interaction with R&D create value. Firms operating in dynamic business sectors seek to innovate to create the value demanded by changing market conditions, consumer preferences, and competitive offerings. Consequently, firms operating in such business sectors put a premium on finding new, sustainable and competitive value propositions. CVC and R&D can help them in this challenge. Dushnitsky and Lenox (2006) presented evidence that CVC investment is associated with value creation. However, studies have shown that the most innovative firms do not necessarily benefit from innovation. For instance, Oyon (2007) indicated that between 1995 and 2005 the most innovative automotive companies did not obtain adequate rewards for shareholders. The interaction between CVC and R&D has generated much debate in the CVC literature. Some researchers see them as substitutes, suggesting that firms have to choose between CVC and R&D (Hellmann, 2002), while others expect them to be complementary (Chesbrough & Tucci, 2004). This study explores the effect that the interaction of CVC and R&D has on value creation. This essay examines the impact of CVC and R&D on value creation over sixteen years across six business sectors and different geographical regions. Our findings suggest that the effect of CVC and its interaction with R&D on value creation is positive and significant. In dynamic business sectors, technologies rapidly become obsolete; consequently, firms operating in such sectors need to continuously develop new sources of value creation (Eisenhardt & Martin, 2000; Qualls, Olshavsky, & Michaels, 1981). We conclude that, in order to impact value creation, firms operating in business sectors such as Engineering & Business Services and Information Communication & Technology ought to consider CVC as a vital element of their innovation strategy. Moreover, regarding the CVC-R&D interaction effect, our findings suggest that R&D and CVC are complementary with respect to value creation; hence, firms in certain business sectors can be better off supporting both R&D and CVC simultaneously to increase the probability of generating value. III. MCS and Organizational Structures for Radical Innovation Incremental innovation is necessary for continuous improvement, but it does not provide a sustainable permanent source of competitiveness (Cooper, 2003). On the other hand, radical innovation pursuing new technologies and new market frontiers can generate new platforms for growth, providing firms with competitive advantages and high economic margin rents (Duchesneau et al., 1979; Markides & Geroski, 2005; O'Connor & DeMartino, 2006; Utterback, 1994). Interestingly, not all companies distinguish between incremental and radical innovation, and more importantly, firms that manage innovation through a one-size-fits-all process can almost guarantee a sub-optimization of certain systems and resources (Davila et al., 2006). Moreover, we conducted research on the utilization of management control systems (MCS) along with radical innovation and flexible organizational structures, as these have been associated with firm growth (Cooper, 2003; Davila & Foster, 2005, 2007; Markides & Geroski, 2005; O'Connor & DeMartino, 2006).
Davila et al. (2009) identified research opportunities for innovation management and provided a list of pending issues: How do companies manage the process of radical and incremental innovation? What are the performance measures companies use to manage radical ideas and how do they select them? The fundamental objective of this paper is to address the following research question: What are the processes, MCS, and organizational structures for generating radical innovation? Moreover, in recent years, research on innovation management has been conducted mainly either at the firm level (Birkinshaw, Hamel, & Mol, 2008a) or at the project level, examining appropriate management techniques associated with high levels of uncertainty (Burgelman & Sayles, 1988; Dougherty & Heller, 1994; Jelinek & Schoonhoven, 1993; Kanter, North, Bernstein, & Williamson, 1990; Leifer et al., 2000). Therefore, we embarked on a novel process-related research framework to observe the process stages, MCS, and organizational structures that can generate radical innovation. This article is based on a case study at Alcan Engineered Products, a division of a multinational provider of lightweight material solutions. Our observations suggest that incremental and radical innovation should be managed through different processes, MCS and organizational structures, which ought to be activated and adapted contingent on the type of innovation being pursued (i.e. incremental or radical innovation). More importantly, we conclude that radical innovation can be generated in a systematic way through enablers such as processes, MCS, and organizational structures. This is in line with the findings of Jelinek and Schoonhoven (1993) and Davila et al. (2006; 2007), who show that innovative firms have institutionalized mechanisms, arguing that radical innovation cannot occur in an organic environment where flexibility and consensus are the main managerial mechanisms. They rather argue that radical innovation requires a clear organizational structure and formal MCS.