881 results for "Minimization of open stack problem"
Abstract:
The aim of the thesis was to analyse mentoring, its theory and methods, how it is used in the S Group, and how the use of mentoring as a support for leadership can be promoted within the S Group. The study examined mentoring as a method for promoting learning from others. Mentoring is also analysed from the perspectives of tacit knowledge, the management of intellectual capital, and the qualities of a developing leader. Secondary objectives were to identify differences in the results and effects of two mentoring programmes implemented in different ways, and to enhance learning through mentoring in the S Group. The constructive methodology used in the thesis was chosen on the basis of the research problem, since the purpose was to develop a model suitable for exploiting mentoring in the S Group. Data were collected both through a survey and through structured online theme interviews. The theoretical part of the study is based on the literature in the field, on Finnish and international research on the topic, and on scientific journal and other articles. A theoretical framework was constructed on the basis of earlier theoretical research and formed the foundation for the empirical part of the study. The empirical part consists of material concerning the S Group's mentoring project, surveys of the people who participated in the mentoring project, and interviews with nine executives belonging to the S Group's management. The main result of the study was that mentoring is extremely well suited as a development method for transferring competence and experience-based knowledge from senior managers to young potential leaders. This result is particularly significant for the development of leadership competence in the S Group, where a new generation of leaders is currently being raised. The empirical study also supports the view that there are different ways of implementing mentoring in the S Group. The central objective of all implementations is the development and learning of the individuals involved.
The results also highlighted the mentee's need for one-to-one, open, and confidential discussion with a more experienced person. On the basis of these results, the following conclusions were drawn: the S Group needs its own mentoring system, implemented according to a guided mentoring model. It is also important to establish an in-house mentoring pool of willing mentors with expertise in different areas, together with a broker who matches the parties with each other.
Abstract:
The aims were twofold: to examine the gambling habits of emerging adult males in the French-speaking regions of Switzerland and to determine to what extent these habits predict problem gambling within this population. We also evaluated problem gambling rates and provided data on variables such as gambling location, level of information about problem gambling, and awareness of treatment centers. A total of 606 Swiss male conscripts, aged 18-22 years, completed a self-report questionnaire administered during their army recruitment day in 2012. Problem gambling was assessed with the Problem Gambling Severity Index (PGSI) (Ferris and Wynne 2001). Of the respondents, 78.5% were lifetime gamblers and 56.1% were past-year gamblers. Four out of ten past-year gamblers played in private spaces and in back rooms. The PGSI indicated that 10.8% of past-year gamblers presented with moderate gambling problems, whilst 1.4% appeared to be problem gamblers. The majority of respondents had never received information about problem gambling. Moreover, they were unaware of the existence of treatment centers for problem gambling in their region. PGSI scores were significantly predicted by the variety of games played. Problem gambling rates among young men appear to be higher than those of the general Swiss population. This confirms that emerging adult males are a particularly vulnerable population with regard to gambling addiction. The implications for youth gambling-prevention programs are considered.
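The PGSI mentioned above is a nine-item screen with each item scored 0-3. The band boundaries below follow the commonly cited Ferris and Wynne (2001) scoring and are an assumption here, not stated in this abstract; a minimal sketch:

```python
def pgsi_category(item_scores):
    """Classify a respondent from nine PGSI item scores (each 0-3).

    Thresholds follow the commonly cited Ferris & Wynne (2001) bands
    (0 non-problem, 1-2 low risk, 3-7 moderate risk, 8+ problem gambler);
    verify against the scoring manual before real use.
    """
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PGSI expects nine items scored 0-3")
    total = sum(item_scores)
    if total == 0:
        return "non-problem"
    if total <= 2:
        return "low risk"
    if total <= 7:
        return "moderate risk"
    return "problem gambler"
```

Under this convention, the abstract's "moderate gambling problems" group corresponds to totals of 3-7 and "problem gamblers" to totals of 8 or more.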
Abstract:
Selective reinnervation of peripheral targets after nerve injury might be assessed by injecting a first tracer into a target before nerve injury to label the original neuronal population, and applying a second tracer after the regeneration period to label the regenerated population. However, altered uptake of tracer, fading, and cell death may interfere with the results. Furthermore, if the first tracer injected remains in the target tissue, available for 're-uptake' by misdirected regenerating axons which originally innervated another region, then the identification of the original population would be confounded. To study this problem, the sciatic nerve of adult rats was sectioned and sutured. After 3 days, to allow the distal axons to degenerate and thus avoid immediate retrograde transport, one of the dyes Fast Blue (FB), Fluoro-Gold (FG) or Diamidino Yellow (DY) was injected into the tibial branch of the sciatic nerve, or into the skin of one of the denervated digits. Rats survived for 2-3 months. The results showed labelled dorsal root ganglion (DRG) cells and motoneurones, indicating that late re-uptake of a first tracer occurs. This phenomenon must be considered when the model of sequential labelling is used for studying the accuracy of peripheral reinnervation.
Abstract:
OBJECTIVE: To describe the epidemiology, surgical treatment, microbiology, antibiotic prophylaxis, and outcome of patients with the most severe type of open fracture. METHODS: Retrospective chart review of patients with Gustilo type III open fractures admitted to a university hospital in Switzerland between January 2007 and December 2011. Patient and fracture characteristics, surgery, antibiotic prophylaxis, and microbiology findings at the initial and revision surgery were described. RESULTS: Thirty patients were included (83% male, mean age 41 years). More than half of the patients had polytrauma. In all patients, debridement and stabilization surgery (70% using external fixation) was performed at admission. Soft tissue reconstruction was performed in 87%, and in 23% an immediate bone graft was performed. Antibiotic prophylaxis was given to all patients for a median duration of 9 days (60% received amoxicillin/clavulanic acid). A positive bacterial culture was found in 53% of the patients at initial surgery and in 88% at revision surgery. At initial and revision surgery, 47% and 88% of the pathogens, respectively, were amoxicillin/clavulanic acid-resistant. Treatment outcome was favorable in 24 of 30 patients (80%), and in six cases (20%) an amputation had to be performed. None of the patients had chronic bone infection. CONCLUSIONS: Positive cultures were often found in open fractures. Amoxicillin/clavulanic acid, which is mentioned in many guidelines as prophylaxis for open fractures, does not cover the most commonly isolated organisms. The combination of surgery and antibiotic prophylaxis leads to a good outcome in Gustilo type III fractures.
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
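Evaluating the exact model on every realization is what drives the cost, which motivates picking a few representative realizations from the cheap proxy responses. As an illustration only (a greedy farthest-point rule, standing in for the distance-kernel method the thesis actually uses):

```python
import numpy as np

def select_representative_subset(proxy_responses, k):
    """Greedy k-center selection on proxy-response distances.

    proxy_responses: (n_realizations, n_timesteps) array of cheap-model
    curves. Returns indices of k realizations spread across the response
    variability. This is a stand-in for the distance-kernel method named
    in the text, not that method itself.
    """
    # distance between realizations = L2 distance between their proxy curves
    d = np.linalg.norm(
        proxy_responses[:, None, :] - proxy_responses[None, :, :], axis=2
    )
    chosen = [int(np.argmin(d.sum(axis=1)))]  # start from the most central curve
    while len(chosen) < k:
        dist_to_set = d[:, chosen].min(axis=1)
        chosen.append(int(np.argmax(dist_to_set)))  # farthest-point rule
    return chosen
```

The expensive flow model would then be run only for the returned indices.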
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method) both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
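The error-model idea above can be sketched compactly, with ordinary PCA on curves sampled on a common grid standing in for FPCA, and a linear map between score spaces standing in for the functional regression; scikit-learn usage, function names, and the component count are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def functional_error_model(proxy_train, exact_train, proxy_all, n_components=3):
    """Correct an ensemble of proxy curves using a small subset of exact runs.

    proxy_train/exact_train: (n_subset, n_grid) curves where both models were
    run; proxy_all: (n_total, n_grid) cheap curves for the whole ensemble.
    Returns predicted "expected" exact curves for every realization.
    """
    # dimensionality reduction of both families of curves (FPCA stand-in)
    pca_proxy = PCA(n_components=n_components).fit(proxy_train)
    pca_exact = PCA(n_components=n_components).fit(exact_train)
    # learn the proxy-score -> exact-score mapping on the training subset
    reg = LinearRegression().fit(
        pca_proxy.transform(proxy_train), pca_exact.transform(exact_train)
    )
    # correct every proxy curve and map back to the function space
    return pca_exact.inverse_transform(reg.predict(pca_proxy.transform(proxy_all)))
```

Uncertainty statistics are then computed on the corrected ensemble rather than on the raw proxy responses.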
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
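The two-stage screening described above can be sketched as a single Metropolis-Hastings step. A symmetric proposal is assumed, and the stage-2 ratio carries the standard correction factor so the chain still targets the exact posterior; this is a generic textbook formulation, not the thesis's code:

```python
import numpy as np

def two_stage_mh_step(x, log_post_proxy, log_post_exact, propose, rng):
    """One two-stage Metropolis-Hastings step (symmetric proposal assumed).

    Stage 1 screens the proposal with the cheap (error-model-corrected proxy)
    posterior; only survivors pay for an exact flow simulation in stage 2.
    """
    def acc(log_ratio):
        # numerically safe min(1, exp(log_ratio))
        return 1.0 if log_ratio >= 0 else float(np.exp(log_ratio))

    y = propose(x, rng)
    # stage 1: cheap screening
    if rng.random() >= acc(log_post_proxy(y) - log_post_proxy(x)):
        return x, False  # rejected without running the exact model
    # stage 2: exact posterior, with the proxy ratio divided back out so the
    # chain targets the exact posterior
    log_r2 = (log_post_exact(y) - log_post_exact(x)) + (
        log_post_proxy(x) - log_post_proxy(y)
    )
    if rng.random() < acc(log_r2):
        return y, True
    return x, False
```

The closer the corrected proxy tracks the exact posterior, the more stage-2 evaluations are accepted, which is the acceptance-rate gain reported in the text.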
Abstract:
Mono- and bi-allelic mutations in the low-density lipoprotein receptor-related protein 5 gene (LRP5) may cause osteopetrosis, autosomal dominant and recessive exudative vitreoretinopathy, juvenile osteoporosis, or persistent hyperplastic primary vitreous (PHPV). We report on a child affected with PHPV and carrying compound mutations. The father carried the splice mutation and had suffered from severe bone fragility since childhood. The mother carried the missense mutation without any clinical manifestations. The genetic diagnosis of their child allowed for appropriate treatment in the father and for the detection of osteopenia in the mother. PHPV is a component of persistent fetal vasculature of the eye, characterized by highly variable expressivity and resulting in a wide spectrum of anterior and/or posterior congenital developmental defects, which may lead to blindness. We evaluated a family diagnosed with PHPV in their only child. The child presented with photophobia during the first 3 weeks of life, followed by leukocoria at 2 months of age. Molecular resequencing of NDP, FZD4, and LRP5 was performed in the child, and segregation of the observed mutations was assessed in the parents. At presentation, fundus examination of the child showed a retrolental mass in the right eye. Ultrasonography revealed retinal detachment in both eyes. Thorough familial analysis revealed that the father had suffered many fractures since childhood without a specific bone-fragility diagnosis, treatment, or management. The mother was asymptomatic. Molecular analysis in the proband identified two mutations: a c.[2091+2T>C] splice mutation and a c.[1682C>T] missense mutation. We report the case of a child affected with PHPV and carrying compound heterozygous LRP5 mutations.
This genetic diagnosis allowed the clinical diagnosis of the bone problem to be made in the father, resulting in better management of the family. It also enabled preventive treatment to be prescribed for the mother and accurate genetic counseling to be provided.
Abstract:
Reversed phase liquid chromatography (RPLC) coupled to mass spectrometry (MS) is the gold standard technique in bioanalysis. However, hydrophilic interaction chromatography (HILIC) could represent a viable alternative to RPLC for the analysis of polar and/or ionizable compounds, as it often provides higher MS sensitivity and alternative selectivity. Nevertheless, this technique can also be prone to matrix effects (ME). ME are one of the major issues in quantitative LC-MS bioanalysis. To ensure acceptable method performance (i.e., trueness and precision), a careful evaluation and minimization of ME is required. In the present study, the incidence of ME in HILIC-MS/MS and RPLC-MS/MS was compared for plasma and urine samples using two representative sets of 38 pharmaceutical compounds and 40 doping agents, respectively. The optimal generic chromatographic conditions in terms of selectivity with respect to interfering compounds were established in both chromatographic modes by testing three different stationary phases in each mode with different mobile phase pH. A second step involved the assessment of ME in RPLC and HILIC under the best generic conditions, using the post-extraction addition method. Biological samples were prepared using two different sample pre-treatments: a non-selective sample clean-up procedure (protein precipitation and simple dilution for plasma and urine samples, respectively) and a selective sample preparation, i.e., solid phase extraction for both matrices. The non-selective pretreatments led to significantly less ME in RPLC vs. HILIC conditions regardless of the matrix. On the contrary, HILIC appeared as a valuable alternative to RPLC for plasma and urine samples treated by a selective sample preparation. Indeed, in the case of selective sample preparation, the compounds influenced by ME were different in HILIC and RPLC, and lower and similar ME occurrence was generally observed in RPLC vs.
HILIC for urine and plasma samples, respectively. The complementarity of the two chromatographic modes was also demonstrated, as ME was only rarely observed for urine and plasma samples when the most appropriate chromatographic mode was selected.
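The post-extraction addition method used above reduces to a peak-area ratio. The sketch below uses the common ME% = B/A x 100 convention (Matuszewski and co-workers); the abstract does not state which convention was applied, so this is an assumption:

```python
def matrix_effect_percent(area_spiked_matrix, area_neat_standard):
    """Matrix effect by the post-extraction addition method.

    Common Matuszewski convention: ME% = 100 * B / A, where A is the analyte
    peak area in neat solvent and B the area in blank matrix extract spiked
    after extraction. 100% means no matrix effect; below 100% indicates ion
    suppression; above 100% indicates ion enhancement.
    """
    if area_neat_standard <= 0:
        raise ValueError("neat-standard peak area must be positive")
    return 100.0 * area_spiked_matrix / area_neat_standard
```

For example, a post-extraction spiked area of 80 against a neat-standard area of 100 gives ME% = 80, i.e., 20% ion suppression.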
Abstract:
In an exploratory study, we investigated how German schoolteachers use, reuse, produce and manage Open Educational Resources (OER). The main research questions were what their motivators and barriers are in their use of OER, what others can learn from their Open Educational Practices, and what can be done to raise the level of OER dissemination in schools.
Abstract:
Instructor and student beliefs, attitudes and intentions toward contributing to local open courseware (OCW) sites have been investigated through campus-wide surveys at Universidad Politecnica de Valencia and the University of Michigan. In addition, at the University of Michigan, faculty have been queried about their participation in open access (OA) publishing. We compare the instructor and student data concerning OCW between the two institutions, and introduce the investigation of open access publishing in relation to open courseware publishing.
Abstract:
Indisputable evidence of climate change and its link to greenhouse gas emissions makes change in the energy production infrastructure necessary during the coming decades. Through political conventions and restrictions, the energy industry is being pushed toward a larger share of renewable energy sources in its supply. In addition to climate change, a sustainable energy supply is another major issue for future development plans, but neither should come at an unbearable price. All power production types have environmental effects as well as strengths and weaknesses. Although each change comes with a price, the right track in minimising environmental impacts while securing the energy supply can be found by combining all possible low-carbon technologies and by improving energy efficiency in all sectors, creating a new power production infrastructure with a tolerable energy price and minor environmental effects. GEMIS (Global Emission Model for Integrated Systems) is a life-cycle analysis program, which was used in this thesis to build indicative energy models for Finland's future energy supply. The results indicate that the energy supply must comprise both high-capacity nuclear power and a large variety of renewable energy sources in order to minimise environmental effects while keeping the energy price reasonable.
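At its core, life-cycle bookkeeping of the kind GEMIS performs sums generation multiplied by life-cycle emission factors per technology. A minimal sketch; the factor values in the test are hypothetical placeholders, whereas real studies take them from the GEMIS database:

```python
def scenario_co2(generation_twh, factors_g_per_kwh):
    """Life-cycle CO2 of a supply scenario, in million tonnes CO2-eq/year.

    generation_twh: {technology: TWh/year}
    factors_g_per_kwh: {technology: life-cycle gCO2-eq/kWh} (placeholder
    values here; a real assessment would use database factors).
    """
    # 1 TWh = 1e9 kWh; grams -> million tonnes: divide by 1e12
    return sum(
        twh * 1e9 * factors_g_per_kwh[tech] / 1e12
        for tech, twh in generation_twh.items()
    )
```

Comparing scenarios then amounts to evaluating this sum for different generation mixes.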
Abstract:
The marine alkaloid Lamellarin D (Lam-D) has shown potent cytotoxicity in numerous cancer cell lines, and was recently identified as a potent topoisomerase I inhibitor. A library of open lactone analogs of Lam-D was prepared from a methyl 5,6-dihydropyrrolo[2,1-a]isoquinoline-3-carboxylate scaffold (1) by introducing various aryl groups through sequential and regioselective bromination, followed by Pd(0)-catalyzed Suzuki cross-coupling chemistry. The compounds were obtained in 24-44% overall yield and tested in a panel of three human tumor cell lines, MDA-MB-231 (breast), A-549 (lung), and HT-29 (colon), to evaluate their cytotoxic potential. From these data the SAR study concluded that more than 75% of the open-chain Lam-D analogs tested showed cytotoxicity in the low micromolar GI50 range.
Abstract:
The general task of a clamping device is to connect parts to machining centers so that the workpiece stays fixed in position during the whole machining process. In addition, the workpiece should be easy and quick for machine users to clamp. The purpose of this Master's thesis project was to develop a product design and work out the dimensioning of a hydraulic vise system for Astex Engineering OY, following the general principles of product design and development throughout the design process. The needs of manufacturing and assembly were taken into consideration throughout, both for machinability and for minimising the cost of manufacturing. The most critical component of the clamping device was modeled with FEM to verify the strength requirements. The 3D model was created with Solidworks and the FEM analysis was done with Cosmos software. As the result of this design work, a prototype of the hydraulic vise was manufactured for Astex Engineering OY and tested in practice.
Abstract:
Background: Recent research based on comparisons between bilinguals and monolinguals postulates that bilingualism enhances cognitive control functions, because the parallel activation of languages necessitates control of interference. In a novel approach we investigated two groups of bilinguals, distinguished by their susceptibility to cross-language interference, asking whether bilinguals with strong language control abilities ('non-switchers') have an advantage in executive functions (inhibition of irrelevant information, problem solving, planning efficiency, generative fluency and self-monitoring) compared to those bilinguals showing weaker language control abilities ('switchers'). Methods: 29 late bilinguals (21 women) were evaluated using various cognitive control neuropsychological tests (e.g., Tower of Hanoi, Ruff Figural Fluency Task, Divided Attention, Go/noGo) tapping executive functions, as well as four subtests of the Wechsler Adult Intelligence Scale. The analysis involved t-tests (two independent samples). Non-switchers (n = 16) were distinguished from switchers (n = 13) by their performance in a bilingual picture-naming task. Results: The non-switcher group demonstrated better performance on the Tower of Hanoi and the Ruff Figural Fluency task, faster reaction times in the Go/noGo and Divided Attention tasks, and produced significantly fewer errors in the Tower of Hanoi, Go/noGo, and Divided Attention tasks when compared to the switchers. Non-switchers performed significantly better on two verbal subtests of the Wechsler Adult Intelligence Scale (Information and Similarity), but not on the Performance subtests (Picture Completion, Block Design). Conclusions: The present results suggest that bilinguals with stronger language control indeed have a cognitive advantage in the administered tests involving executive functions, in particular inhibition, self-monitoring, problem solving, and generative fluency, and in two of the intelligence tests.
What remains unclear is the direction of the relationship between executive functions and language control abilities.
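The group comparisons above rest on two-independent-samples t-tests. A minimal sketch; Welch's variant (`equal_var=False`) is used here as a robust default, which is an assumption, since the study reports plain independent-samples t-tests:

```python
from scipy import stats

def compare_groups(non_switchers, switchers, alpha=0.05):
    """Two-independent-samples t-test on a performance measure.

    Welch's variant (equal_var=False) does not assume equal group variances;
    the study itself reports plain independent-samples t-tests.
    """
    t, p = stats.ttest_ind(non_switchers, switchers, equal_var=False)
    return {"t": float(t), "p": float(p), "significant": bool(p < alpha)}
```

Each dependent variable (errors, reaction times, subtest scores) would be fed through such a test separately, which also raises the usual multiple-comparison caveat for a battery of this size.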
Abstract:
Open Innovation is a relatively new concept which involves a change of paradigm in the R+D+i processes of companies whose aim is to create new technologies or new processes. If to this change we add the need for innovation in the new green and sustainability economy, and we set out to create a collaborative platform with a learning space where this can happen, we face an overwhelming challenge which requires the application of intelligent programming technologies and languages at the service of education. The aim of the "Green IDI (Green Open Innovation): economic development and job creation vector in SMEs, based on the environment and sustainability" project is to create a platform where companies and individual researchers can perform open innovation processes in the field of sustainability and the environment. The Green IDI project is funded under the INNPACTO program of the Ministry of Science and Innovation of Spain and is being developed by a consortium formed by the following institutions: GRUPO ICA, COMPARTIA, GRUPO INTERCOM, CETAQUA and the Instituto de Investigación en Inteligencia Artificial (IIIA) of the Consejo Superior de Investigaciones Científicas (CSIC). The consortium also includes FUNDACIÓ PRIVADA BARCELONA DIGITAL, PIMEC and UNIVERSITAT OBERTA DE CATALUNYA (UOC). Sustainability and positive action for the environment are considered the principal vector of economic development for companies. As Nicolás Scoli (2007) says, "in short, preventing unnecessary consumption and the efficient consumption of resources means producing greater wealth with less. Both effects lead to reduced pollution linked to production and consumption."
The Spanish Sustainable Development Strategy (EEDS) plan defends consumption and sustainable production linked to social and economic development, adhering to the commitment not to endanger ecosystems and abolishing the idea that economic growth is directly proportional to the deterioration of the environment. Uniting the Open Innovation and New Green Economy concepts leads to the "Green Open Innovation" Platform creation project. This article analyses the concept of open innovation and defines the importance of the new green and sustainable economy. Lastly, it proposes the creation of eLab, a personal and collaborative education space on the Green Open Innovation Platform which is fed by the interactions of users and which enables innovation processes based on new green economy concepts to be carried out. The creation of a personal learning environment such as eLab on the Green Open Innovation Platform meets the need to offer a collaborative space where platform users can improve their skills regarding the environment and sustainability, based on collaborative synergies through Information and Communication Technologies.