Abstract:
BACKGROUND The distribution of thrombus-containing lesions (TCLs) in an all-comer population admitted with a heterogeneous clinical presentation (stable angina, unstable angina, or an acute coronary syndrome) and treated with percutaneous coronary intervention remains unclear, and the long-term prognostic implications are still disputed. This study sought to assess the distribution and prognostic implications of coronary thrombus, detected by coronary angiography, in a population recruited in all-comer percutaneous coronary intervention trials. METHODS AND RESULTS Patient-level data from 3 contemporary coronary stent trials were pooled by an independent academic research organization (Cardialysis, Rotterdam, the Netherlands). Clinical outcomes in terms of major adverse cardiac events (MACE; a composite of death, myocardial infarction, and repeat revascularization), death, myocardial infarction, and repeat revascularization were compared between patients with and without angiographic TCL. Preprocedural TCL was present in 257 patients (5.8%) and absent in 4193 patients (94.2%). At 3-year follow-up, there was no difference in MACE (25.3 versus 25.4%; P=0.683), all-cause death (7.4 versus 6.8%; P=0.683), myocardial infarction (5.8 versus 6.0%; P=0.962), or any revascularization (17.5 versus 17.7%; P=0.822) between patients with and without TCL. A comparison of outcomes in groups weighted by the myocardium jeopardized by TCL also showed no significant difference. TCLs were seen most often in the first 2 segments of the right (43.6%) and left anterior descending (36.8%) coronary arteries. The association of TCL with bifurcation lesions was present in 40.1% of the prespecified segments. CONCLUSIONS TCLs mainly involved the proximal coronary segments and did not affect clinical outcomes. A more detailed quantification of thrombus burden is required to investigate its prognostic implications.
CLINICAL TRIAL REGISTRATION URL: http://www.clinicaltrials.gov. Unique identifiers: NCT00114972, NCT01443104, NCT00617084.
Abstract:
BACKGROUND Biomarkers of myocardial injury increase frequently during transcatheter aortic valve implantation (TAVI). The impact of postprocedural cardiac troponin (cTn) elevation on short-term outcomes remains controversial, and the association with long-term prognosis is unknown. METHODS AND RESULTS We evaluated 577 consecutive patients with severe aortic stenosis treated with TAVI between 2007 and 2012. Myocardial injury, defined according to the Valve Academic Research Consortium (VARC)-2 as post-TAVI cardiac troponin T (cTnT) >15× the upper limit of normal, occurred in 338 patients (58.1%). In multivariate analyses, myocardial injury was associated with higher risk of all-cause mortality at 30 days (adjusted hazard ratio [HR], 8.77; 95% CI, 2.07-37.12; P=0.003) and remained a significant predictor at 2 years (adjusted HR, 1.98; 95% CI, 1.36-2.88; P<0.001). Higher cTnT cutoffs did not add incremental predictive value compared with the VARC-2-defined cutoff. Whereas myocardial injury occurred more frequently in patients with versus without coronary artery disease (CAD), the relative impact of cTnT elevation on 2-year mortality did not differ between patients without CAD (adjusted HR, 2.59; 95% CI, 1.27-5.26; P=0.009) and those with CAD (adjusted HR, 1.71; 95% CI, 1.10-2.65; P=0.018; P for interaction=0.24). Mortality rates at 2 years were lowest in patients without CAD and no myocardial injury (11.6%) and highest in patients with complex CAD (SYNTAX score >22) and myocardial injury (41.1%). CONCLUSIONS VARC-2-defined cTnT elevation emerged as a strong, independent predictor of 30-day mortality and remained a modest, but significant, predictor throughout 2 years post-TAVI. The prognostic value of cTnT elevation was modified by the presence and complexity of underlying CAD with highest mortality risk observed in patients combining SYNTAX score >22 and evidence of myocardial injury.
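The VARC-2 criterion used above (post-TAVI cTnT greater than 15 times the upper limit of normal) can be expressed as a simple threshold check. A minimal sketch follows; the function name and the example ULN value are illustrative assumptions, not part of the study.

```python
# Illustrative check of the VARC-2 myocardial-injury criterion described
# above: post-TAVI cardiac troponin T (cTnT) > 15x the upper limit of
# normal (ULN). The example ULN is a hypothetical assay value.

VARC2_MULTIPLIER = 15  # threshold multiplier from the VARC-2 definition

def varc2_myocardial_injury(ctnt_post_tavi, uln):
    """Return True if post-TAVI cTnT exceeds 15x the upper limit of normal."""
    return ctnt_post_tavi > VARC2_MULTIPLIER * uln

# Hypothetical ULN of 0.014 ug/L for a cTnT assay:
print(varc2_myocardial_injury(0.30, 0.014))  # 0.30 > 0.21 -> True
print(varc2_myocardial_injury(0.10, 0.014))  # 0.10 < 0.21 -> False
```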
Abstract:
From the perspective of Social Studies of Science and Technology, this paper analyzes how meanings of usefulness are constructed for scientific knowledge through interactions among heterogeneous actors, in the case of three academic research groups that direct part of their work toward solving social problems. As a result, it presents an analysis of the changes that knowledge-production processes and the organization of academic work undergo once researchers enter networks of relationships with other actors; of the different forms of linkage between knowledge producers and users; and of the processes of negotiating alternative uses and defining "demands" in interaction settings.
Abstract:
The mission of the Institute of Developing Economies, Japan External Trade Organization (IDE-JETRO) is to make intellectual contributions to the world. In 2006, to accomplish this mission more effectively, IDE officially launched its institutional repository, ARRIDE, based on DSpace. ARRIDE was designed with a three-server structure: an external server, an internal server and a development server. Since IDE holds copyright on the articles produced by its research staff through their research activities, it can deposit these articles without asking authors for permission, which guarantees the sustainability of the repository. So that the contents of ARRIDE can be accessed worldwide, IDE has been providing various search engines with proper metadata. Among them, RePEc, a decentralized database on the social sciences, is very important.
Abstract:
The Institute of Developing Economies, Japan External Trade Organization (IDE-JETRO) is a social-science research institution with about 150 researchers and a 50-year history. As Japan's hub for research on developing countries, it aims to make intellectual contributions to the world through wide-ranging research on the economies, societies and politics of developing countries and on international cooperation and aid. Independent administrative agencies like IDE that operate a public institutional repository are still rare in Japan: according to the list of domestic institutional repositories on the National Institute of Informatics website (http://www.nii.ac.jp/irp/list/), as of March 2010 only 6 of 117 repositories (excluding shared repositories) were hosted by non-university institutions. Within the trend toward open access to scholarly information, the institutional repository is the representative means of achieving it through self-archiving. Moreover, accumulating, preserving and freely disseminating the institute's research output contributes to its mission of making intellectual contributions to the world, and is also important for fulfilling its accountability as a public research institution. From this point of view, we would like to introduce our institute's case.
Abstract:
Developing countries are experiencing unprecedented levels of economic growth. As a result, they will be responsible for most of the future growth in energy demand and greenhouse gas (GHG) emissions. Curbing GHG emissions in developing countries has become one of the cornerstones of a future international agreement under the United Nations Framework Convention on Climate Change (UNFCCC). However, setting caps on developing countries' GHG emissions has encountered strong resistance in the current round of negotiations. Continued economic growth that allows poverty eradication is still the main priority for most developing countries, and caps are perceived as a constraint on future growth prospects. The development, transfer and use of low-carbon technologies have more positive connotations and are seen as a potential path towards low-carbon development. So far, the success of the UNFCCC process in improving levels of technology transfer (TT) to developing countries has been limited. This thesis analyses the causes of this limited success and seeks to improve understanding of what constitutes TT in the field of climate change, to establish the factors that enable it in developing countries, and to determine which policies could be implemented to reinforce these factors. Despite the wide recognition of the importance of technology and knowledge transfer to developing countries in the climate change mitigation policy agenda, this issue has not received sufficient attention in academic research. Current definitions of climate change TT barely take into account the perspective of the actors involved in actual climate change TT activities, while the corresponding measurements do not bear in mind the diversity of channels through which transfers happen or the outputs and effects that they convey.
Furthermore, the enabling factors for TT in non-BRIC (Brazil, Russia, India, China) developing countries have seldom been investigated, and policy recommendations to improve the level and quality of TT to developing countries have not been adapted to the specific needs of the highly heterogeneous countries commonly denominated as "developing countries". This thesis contributes to enriching the climate change TT debate from the perspective of a smaller emerging economy (Chile) and by undertaking a quantitative analysis of enabling factors for TT in a large sample of developing countries. Two methodological approaches are used to study climate change TT: comparative case study analysis and quantitative analysis. The comparative case studies analyse TT processes in ten cases based in Chile, all of which share the same economic, technological and policy frameworks, thus enabling us to draw conclusions on the enabling factors and obstacles operating in TT processes. The quantitative analysis uses three methodologies – principal component analysis, multiple regression analysis and cluster analysis – to assess the performance of developing countries on a number of enabling factors and the relationship between these factors and indicators of TT, as well as to create groups of developing countries with similar performances. The findings of this thesis are structured to provide responses to four main research questions: What constitutes technology transfer and how does it happen? Is it possible to measure technology transfer, and what are the main challenges in doing so? Which factors enable climate change technology transfer to developing countries? And how do different developing countries perform on these enabling factors, and how can differentiated policy priorities be defined accordingly?
Abstract:
In an increasingly competitive higher education market, collaboration between universities is an effective strategy for gaining access to the global market. The development of joint degrees is an important mechanism for strengthening academic research collaborations and diversifying knowledge. Joint degrees are increasingly being implemented in universities around the world. In Europe, the Bologna process and the Erasmus programme have encouraged both the global recognition of joint and double degrees and close collaboration between academic institutions.
In the unstoppable process of globalization and educational convergence, the use of e-learning systems to support both blended and online courses is a growing trend. Since e-learning systems cover a wide range of courses, it becomes necessary to find a suitable solution that enables universities to support and manage joint degrees through their e-learning systems in accordance with the collaboration agreements established by the universities involved. This dissertation addresses the following research questions: 1. What factors need to be considered in the implementation and management of joint degrees? 2. How can current e-learning systems support the development of joint degrees? 3. What other services and systems need to be adapted by universities interested in participating in a joint degree through their e-learning systems? The implementation of joint degrees using e-learning systems is complex and involves technical, administrative, security, cultural, financial and legal challenges. This dissertation proposes a series of contributions to help solve some of the identified challenges. One of the cornerstones of this proposal is a conceptual model of all the relevant issues related to the support of joint degrees by means of e-learning systems. After defining the conceptual model, this dissertation proposes a policy-driven architecture for implementing inter-institutional degree collaborations through e-learning systems as stipulated by a collaboration agreement signed by two universities. The author has focused on the workflow management component of this architecture. Finally, the building blocks for achieving interoperability of learning object repositories have been identified and validated. The use of multimedia services in education is a growing trend, providing rich e-learning services that improve communication and interaction between teachers and students.
Within these e-learning services, we have focused on the use of videoconferencing and lecture recording as the best-suited services to support collaborative learning scenarios. The contributions have been validated within national and European research projects that the author has been involved in.
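The policy-driven architecture described above can be pictured as the e-learning system consulting machine-readable rules derived from the signed collaboration agreement before performing an action. The following sketch is a loose illustration under that assumption; the university names, actions and rules are invented, not the dissertation's actual API.

```python
# Minimal sketch of a policy-driven check: a collaboration agreement is
# reduced to machine-readable rules that the e-learning system consults
# before performing an action. All names and rules here are illustrative
# assumptions, not the architecture's real components.

AGREEMENT = {
    # (university, action) pairs permitted by the signed agreement
    ("UnivA", "host_course"): True,
    ("UnivA", "issue_degree"): True,
    ("UnivB", "host_course"): True,
    ("UnivB", "issue_degree"): False,  # only the coordinator issues degrees
}

def is_permitted(university, action):
    """Check an action against the agreement; deny anything not listed."""
    return AGREEMENT.get((university, action), False)

print(is_permitted("UnivB", "host_course"))   # True
print(is_permitted("UnivB", "issue_degree"))  # False
```

A real policy engine would of course evaluate richer conditions (roles, dates, quotas), but the deny-by-default lookup captures the basic idea of enforcing the agreement at runtime.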
Abstract:
Fuel cycles are designed with the aim of obtaining the highest amount of energy possible. As higher burnup values are reached, it becomes necessary to improve disposal designs, traditionally based on the conservative assumption that they contain fresh fuel. The criticality calculations involved must take burnup into account, making the most of the experimental and computational capabilities developed, respectively, to measure and predict the isotopic content of spent nuclear fuel. These high burnup scenarios encourage a review of the computational tools to find possible weaknesses in the nuclear data libraries, in the methodologies applied, and in their range of applicability. Experimental measurements of spent nuclear fuel provide the perfect framework to benchmark the most well-known and established codes, both in industry and in academic research. For the present paper, SCALE 6.0/TRITON and MONTEBURNS 2.0 were chosen to follow the isotopic content of four samples irradiated in the Spanish Vandellós-II pressurized water reactor up to burnup values ranging from 40 GWd/MTU to 75 GWd/MTU. By comparison with the experimental data reported for these samples, we can probe the applicability of these codes to high burnup problems. We have developed new computational tools within MONTEBURNS 2.0 that make it possible to handle an irradiation history including geometrical and positional changes of the samples within the reactor core. This paper describes the irradiation scenario against which the mentioned codes and our capabilities are to be benchmarked.
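Benchmarks of this kind are commonly summarized as calculated-to-experimental (C/E) ratios per nuclide. The sketch below illustrates that comparison step only; the nuclide names and concentrations are invented placeholders, not the Vandellós-II measurements or the codes' output.

```python
# Hedged sketch of the code-to-experiment comparison described above:
# calculated-to-experimental (C/E) ratios for the isotopic content of a
# spent fuel sample. Values are invented placeholders (g/kgU).

measured = {"U-235": 8.1, "Pu-239": 5.6, "Nd-148": 1.9}
calculated = {"U-235": 8.4, "Pu-239": 5.3, "Nd-148": 1.95}

def ce_ratios(calc, meas):
    """Return the C/E ratio per nuclide; 1.0 means perfect agreement."""
    return {nuclide: calc[nuclide] / meas[nuclide] for nuclide in meas}

for nuclide, ratio in ce_ratios(calculated, measured).items():
    print(f"{nuclide}: C/E = {ratio:.3f}")
```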
Abstract:
Automatic grading of programming assignments is an important topic in academic research. It aims at improving the feedback given to students and optimizing the professor's time. Several studies have reported the development of software tools to support this process, so it is helpful to get a quick, clear overview of their key features. This paper reviews an ample set of tools for automatic grading of programming assignments. They are divided into the most important mature tools, which have remarkable features, and those built recently, with new features. The review includes the definition and description of key features, e.g. supported languages, technology used, infrastructure, etc. The two kinds of tools allow a temporal comparative analysis, which shows good improvements in this research field, including security, broader language support, plagiarism detection, etc. On the other hand, the lack of a grading model for assignments is identified as an important gap in the reviewed tools. Thus, a characterization of evaluation metrics to grade programming assignments is provided as a first step toward such a model. Finally, new paths in this research field are proposed.
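One way the metric characterization mentioned above could feed a grading model is as a weighted combination of per-metric scores. The sketch below is a minimal illustration under that assumption; the metric names, weights and plagiarism rule are invented, not taken from any reviewed tool.

```python
# Sketch of a grading model built from evaluation metrics: a programming
# assignment grade as a weighted combination of metric scores in [0, 1].
# Metric names, weights and the plagiarism rule are illustrative.

WEIGHTS = {
    "correctness": 0.6,   # fraction of test cases passed
    "style": 0.2,         # static-analysis score
    "efficiency": 0.2,    # runtime score
}

def grade(scores, weights=WEIGHTS):
    """Weighted grade in [0, 100]; a plagiarism flag zeroes the result."""
    if scores.get("plagiarism", False):
        return 0.0
    return 100 * sum(weights[m] * scores[m] for m in weights)

print(grade({"correctness": 0.9, "style": 0.8, "efficiency": 0.5}))
```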
Abstract:
After the extensive research on the capabilities of the Boundary Integral Equation Method produced during the past years, the versatility of its applications has been well established. Perhaps the years to come will see the in-depth analysis of several conflictive points, for example adaptive integration, solution of the system of equations, etc.; this line is clear in academic research. In this paper we comment on the effect of the manner of imposing boundary conditions in 3-D coupled problems. Here the effects are particularly magnified: first by the simple model used (constant elements), and second by the solution process, i.e. a potential problem is solved first and its results are then used as data for an elasticity problem. The errors of both processes add up, and small disturbances, unimportant in the separate problems, can produce serious errors in the final results. The specific problem we have chosen is especially interesting: although more general cases (i.e. transient) can be treated, here the domain integrals can be converted into boundary ones, and the influence of the manner in which boundary conditions are applied reflects the whole importance of the problem.
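The error-accumulation argument above can be illustrated with a toy two-stage solve: the first system stands in for the potential problem and its output feeds a second, nearly singular system standing in for the elasticity problem. The 2x2 systems and numbers are invented purely for illustration; real BEM systems are large and dense.

```python
# Toy illustration of error propagation in sequential coupled solves:
# a small disturbance in stage one is magnified by an ill-conditioned
# stage two. All matrices and data are invented for the sketch.

def solve2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] x = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Stage 1 ("potential"): exact data vs data with a small disturbance.
x_exact = solve2(2.0, 1.0, 1.0, 3.0, 6.001, 8.003)
x_pert = solve2(2.0, 1.0, 1.0, 3.0, 6.006, 8.003)

# Stage 2 ("elasticity"): nearly singular system fed with stage-1 output.
y_exact = solve2(1.0, 1.0, 1.0, 1.001, x_exact[0], x_exact[1])
y_pert = solve2(1.0, 1.0, 1.0, 1.001, x_pert[0], x_pert[1])

rel_in = abs(x_pert[0] - x_exact[0]) / abs(x_exact[0])    # ~0.15% in
rel_out = abs(y_pert[0] - y_exact[0]) / abs(y_exact[0])   # several 100% out
print(f"relative disturbance in: {rel_in:.4f}, out: {rel_out:.4f}")
```

The disturbance that was negligible in the first solve dominates the second, which is the mechanism the paper points to when boundary conditions are imposed carelessly in coupled problems.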
Abstract:
By 2050 it is estimated that the number of Alzheimer's disease (AD) patients worldwide will quadruple from the current 36 million people. To date, no single test, prior to postmortem examination, can confirm that a person suffers from AD. Therefore, there is a strong need for accurate and sensitive tools for the early diagnosis of AD. The complex etiology and multiple pathogeneses of AD call for a system-level understanding of the currently available biomarkers and the study of new biomarkers via network-based modeling of heterogeneous data types. In this review, we summarize recent research on the study of AD as a connectivity syndrome. We argue that a network-based approach to biomarker discovery will provide key insights for fully understanding the network degeneration hypothesis (disease starts in specific network areas and progressively spreads to connected areas of the initial loci-networks), with potential impact for early diagnosis and disease-modifying treatments. We introduce a new framework for the quantitative study of biomarkers that can help shorten the transition between academic research and clinical diagnosis in AD.
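The network degeneration hypothesis described above can be sketched as an iterative spread on a graph: pathology seeded in one region reaches its neighbors, then their neighbors. The toy graph and spread rule below are invented placeholders, not a validated disease model.

```python
# Minimal sketch of the network degeneration hypothesis: pathology seeded
# in one region spreads round by round to connected regions of a toy
# "connectome". Graph and rule are illustrative only.

GRAPH = {
    "hippocampus": ["entorhinal", "posterior_cingulate"],
    "entorhinal": ["hippocampus", "temporal"],
    "posterior_cingulate": ["hippocampus", "parietal"],
    "temporal": ["entorhinal"],
    "parietal": ["posterior_cingulate"],
}

def spread(seed, steps):
    """Regions affected after `steps` rounds of spread from the seed."""
    affected = {seed}
    for _ in range(steps):
        reachable = {n for region in affected for n in GRAPH[region]}
        affected |= reachable
    return affected

print(sorted(spread("hippocampus", 1)))  # seed plus its direct neighbors
print(sorted(spread("hippocampus", 2)))  # all five regions reached
```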
Abstract:
The cloud computing paradigm has risen in popularity within industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, the desire to host a company's data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement or even fully substitute public offerings.
Unfortunately, a lack of standardization has prevented private infrastructure management solutions from being developed to a sufficient level, and a myriad of different options has induced the fear of lock-in in customers. One of the causes of this problem is the misalignment between academic research and industry offerings, with the former focusing on idealized scenarios dissimilar from real-world situations, and the latter developing solutions without taking care of how they fit with common standards, or even not disseminating their results. With the aim of solving this problem, I propose a modular management system for private cloud infrastructures that is focused on the applications instead of just the hardware resources. This management system follows the autonomic computing paradigm and is designed around a simple information model developed to be compatible with common standards. This model splits the environment into two views that serve to separate the concerns of the stakeholders while at the same time enabling traceability between the physical environment and the virtual machines deployed onto it. In it, cloud applications are classified into three broad types (Services, Big Data Jobs and Instance Reservations), so that the management system can take advantage of each type's features. The information model is paired with a set of atomic, reversible and independent management actions, which determine the operations that can be performed on the environment and are used to realize the cloud environment's scalability. From the environment's state and using the aforementioned set of actions, I also describe a management engine tasked with resource placement. It is divided into two tiers: the Application Managers layer, concerned only with applications; and the Infrastructure Manager layer, responsible for the actual physical resources. This management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure.
The placement problem is tackled during one phase (consolidation) by an integer programming solver, and during the other (online) by a custom heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is paired with monitoring and actuator architectures: the former collects the necessary information from the environment, while the latter is modular in design and capable of interfacing with several technologies and of offering several access interfaces.
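To make the two-phase placement concrete, the online phase can be pictured as a simple greedy heuristic such as first-fit, which places each arriving virtual machine on the first host with enough free capacity. This sketch is only an illustration of that class of heuristic under invented names and capacities; it is not the thesis's actual algorithm, and the consolidation phase (integer programming) is out of scope here.

```python
# Hedged sketch of an online placement heuristic of the kind described
# above: first-fit greedy placement of VMs onto hosts. Host names, VM
# names and capacities are illustrative assumptions.

def first_fit(hosts, vms):
    """Place each VM on the first host with enough free CPU and RAM.

    hosts: dict host -> [free_cpu, free_ram] (mutated as VMs are placed);
    vms: list of (name, cpu, ram) tuples.
    Returns a placement dict vm -> host; unplaceable VMs map to None.
    """
    placement = {}
    for name, cpu, ram in vms:
        placement[name] = None
        for host, free in hosts.items():
            if free[0] >= cpu and free[1] >= ram:
                free[0] -= cpu
                free[1] -= ram
                placement[name] = host
                break
    return placement

hosts = {"h1": [4, 8], "h2": [8, 16]}               # [CPU cores, RAM GB]
vms = [("web", 2, 4), ("db", 4, 8), ("batch", 8, 16)]
print(first_fit(hosts, vms))  # web -> h1, db -> h2, batch unplaceable
```

A consolidation phase would periodically revisit such greedy decisions and re-pack the VMs optimally, which matches the combined approach the abstract reports as superior.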
Abstract:
As sustainability reporting (SR) practices have been increasingly adopted by corporations over the last twenty years, most of the existing literature on SR has stressed the role of external determinants (such as institutional and stakeholder pressures) in explaining this uptake. However, given that recent evidence points to a broader range of motives and uses (both external and internal) of SR, we contend that its role within company-level activities deserves greater academic attention. To address this research gap, this paper provides a more detailed examination of the organizational characteristics acting as drivers and/or barriers of SR integration within corporate sustainability practices at the company level. More specifically, we suggest that substantive SR implementation can be predicted by assessing the level of fit between the organization and the SR framework being adopted. Building on this hypothesis, our theoretical model defines three forms of fit (technical, cultural and political) and identifies the organizational characteristics associated with each of these fits. Finally, implications for academic research, businesses and policy-makers are derived.