966 results for stored procedure
Abstract:
This paper presents a new verification procedure for sound source coverage according to ISO 140-5 requirements. The ISO 140-5 standard applies to the measurement of façade insulation and requires a sound source able to achieve a sufficiently uniform sound field, in free-field conditions, on the façade under study. The proposed method involves the electroacoustic characterisation of the sound source in laboratory free-field conditions (anechoic room) and the subsequent prediction, by computer simulation, of the free sound field radiated onto a rectangular surface equal in size to the façade being measured. The loudspeaker is characterised in an anechoic room under controlled laboratory conditions, carefully measuring its directivity, and a computer model is then designed to calculate the acoustic free-field coverage for different loudspeaker positions and façade sizes. For each sound source position, the method provides the maximum direct acoustic level difference on a façade specimen and therefore determines whether the loudspeaker satisfies the maximum allowed level difference of 5 dB (or 10 dB for façade dimensions greater than 5 m) required by the ISO standard. Additionally, the maximum horizontal façade dimension meeting the standard is calculated for each sound source position, under both the 5 dB and 10 dB criteria. In the last section of the paper, the proposed procedure is compared with another method used by the authors in the past for the same purpose: in situ outdoor measurements attempting to recreate free-field conditions. From this comparison, it is concluded that the proposed method reproduces the actual measurements with high accuracy. Moreover, the ground reflection effect, which is difficult to avoid in the outdoor measurement method, at least at low frequencies, is fully eliminated with the proposed method, thus achieving the free-field requisite.
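The core of the coverage check can be sketched numerically. The sketch below is a simplification of the procedure described above: it assumes an omnidirectional point source (whose direct level falls off as -20·log10 of distance), whereas the actual method multiplies in the directivity measured in the anechoic room; function names and the grid resolution are ours.

```python
import numpy as np

def max_level_difference(src, width, height, nx=40, ny=40):
    """Max difference in direct (free-field) SPL over a rectangular facade.

    src    : (x, y, z) loudspeaker position in metres; the facade lies in
             the plane x = 0, spanning 0..width and 0..height.
    Assumes an omnidirectional source, so the relative direct level at a
    point is -20*log10(r) with r the source-to-point distance.
    """
    y = np.linspace(0.0, width, nx)
    z = np.linspace(0.0, height, ny)
    Y, Z = np.meshgrid(y, z)
    r = np.sqrt(src[0] ** 2 + (Y - src[1]) ** 2 + (Z - src[2]) ** 2)
    levels = -20.0 * np.log10(r)          # relative direct SPL, dB
    return levels.max() - levels.min()

def meets_iso_criterion(src, width, height):
    """ISO 140-5 style check: 5 dB limit, 10 dB for dimensions over 5 m."""
    limit = 10.0 if max(width, height) > 5.0 else 5.0
    return max_level_difference(src, width, height) <= limit
```

Moving the loudspeaker away from the façade flattens the coverage, which is why the method reports, for each source position, the largest façade that still meets the criterion.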
Abstract:
Many advantages can be gained by combining finite and boundary elements. This is the case, for example, in unbounded field problems, where boundary elements can provide the appropriate conditions to represent the infinite domain while finite elements are suitable for more complex properties in the near domain. In spite of this, however, disadvantages can also appear, for instance the loss of symmetry of the finite element stiffness matrix when the combination is made. On the other hand, with the strong irruption of parallel processing, domain decomposition techniques are nowadays attracting the interest of numerous scientists. With their application it is possible to split the resolution of a problem into several subproblems. That would be beneficial in BEM-FEM combinations, as the loss of symmetry would be avoided and each technique would be applied separately. Evidently, for the correct application of these techniques it is necessary to establish suitable transmission conditions on the interface between the BEM domain and the FEM domain. In this paper, a parallel method is presented which is based on the Steklov-Poincaré interface operator.
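As a sketch of the transmission conditions mentioned above, in standard domain-decomposition notation (not necessarily the paper's own symbols):

```latex
% Continuity of trace and flux on the BEM/FEM interface \Gamma:
u_B = u_F = \lambda , \qquad
\frac{\partial u_B}{\partial n_B} + \frac{\partial u_F}{\partial n_F} = 0
\quad \text{on } \Gamma .
% The Steklov--Poincar\'e operator S_i of each subdomain maps the
% interface trace \lambda to the normal flux of the subdomain solution,
% so the coupled problem reduces to the interface equation
\left( S_B + S_F \right) \lambda = \chi_B + \chi_F ,
% where \chi_i collects the contributions of the data of subdomain i.
```

Solving the interface equation for the trace lets each subdomain (BEM or FEM) be resolved independently, and in parallel, with its own symmetric formulation.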
Abstract:
Mealiness is a negative attribute of sensory texture that combines the sensation of a disaggregated tissue with the sensation of lack of juiciness. Since January 1996, a wide EC project entitled "Mealiness in fruits. Consumers perception and means for detection" has been carried out. Within it, three sensory panels have been trained to assess mealiness in apples at the Institute of Food Research (IFR, United Kingdom), the Instituto de Agroquímica y Tecnología de los Alimentos (IATA, Spain) and the Instituut voor Agrotechnologisch Onderzoek (ATO-DLO, Netherlands). In all three cases, mealiness has been described as a multidimensional sensory descriptor gathering the loss of consistency (of crispness and of hardness) and of juiciness. Also within the EC project, several instrumental procedures have been tested for mealiness assessment. In this context, the Physical Properties Laboratory (ETSIA-UPM) focused, in a first stage, on performing instrumental tests for assessing textural descriptors such as crispness, hardness and juiciness. The results obtained in these tests have been shown to correlate well with the sensory measurements in apples (Barreiro and Ruiz-Altisent, 1997), and have also succeeded in generating several texture degradation levels in peaches, of which mealiness appears to be the last stage (Ortiz et al., 1997).
Abstract:
Padding materials are commonly used in fruit packing lines with the objective of diminishing impact damage during postharvest handling. Two sensors, the IS 100 instrumented sphere and an impact tester, have been compared to analyze the performance of six different padding materials used in Spanish fruit packing lines. The padding materials tested have been classified according to their capability to decrease the impact intensities inflicted on fruit in packing lines. A procedure to test padding materials has been developed for "Golden" apples. Its basis is a logistic regression to predict bruise probability in fruit. The model combines two kinds of parameters: padding material parameters measured with the IS, and fruit properties.
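The shape of such a bruise-probability model can be sketched as follows. The coefficients and feature names here are illustrative assumptions, not the fitted values from the study; only the logistic form itself comes from the abstract.

```python
import math

def bruise_probability(weights, bias, features):
    """Logistic model: P(bruise) = 1 / (1 + exp(-(bias + w . x))).

    `features` would mix padding-material parameters measured with the
    IS 100 instrumented sphere (e.g. peak acceleration, contact time)
    with fruit properties; the coefficients are hypothetical here.
    """
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

With a positive weight on peak acceleration, a softer padding material (lower transmitted acceleration) yields a lower predicted bruise probability, which is exactly the ranking criterion the procedure needs.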
Abstract:
biomechanics of swimming
Abstract:
With the consolidation of the new solid-state lighting LED devices, testing the compliance of lamps based on this technology for Solar Home Systems (SHS) has been analyzed. The definition of the laboratory procedures to be used with final products is a necessary step in order to be able to assure the quality of the lamps prior to installation [1]. As with CFL technology, particular attention has been given to simplicity and technical affordability in order to facilitate the implementation of the tests with basic and simple laboratory tools, even at the SHS electrification program locations themselves. The block of test procedures has been applied to a set of 14 low-cost lamps. The procedures cover lamp resistance, reliability and performance under normal, extreme and abnormal operating conditions, as a simple but complete quality-metering tool for any LED bulb.
Abstract:
A major research area is the representation of knowledge for a given application in a compact manner such that desired information relating to this knowledge is easily recoverable. A complicated procedure may be required to recover the information from the stored representation and convert it back to a usable form. Coders and decoders are the devices dedicated to that task. In this paper the capabilities that an Optically Programmable Logic Cell (OPLC) offers as a basic building block for coding and decoding are analyzed. We have previously published an OPLC for applications as a chaotic generator or as a basic element for optical computing. In previous optical computing studies these cells have been analyzed as full-adder units, this element being a basic component of the arithmetic logic structure in computing. Another application of this unit is reported in this paper. Coders and decoders are basic elements in computers, for example in connections between processors and in memory addressing. Moreover, another main application is the generation of signals for machine control from a given instruction. In this paper we describe how to obtain a coder/decoder with the OPLC and which types of applications may be best suited for this type of cell.
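Independently of the optical implementation, the logical function a coder/decoder network must realize is the standard binary one. A minimal software model of that target function (illustrative only, not a model of the OPLC itself):

```python
def decode(address_bits):
    """n-to-2^n decoder: raise exactly one output line per binary address."""
    n = len(address_bits)
    index = 0
    for bit in address_bits:               # most significant bit first
        index = (index << 1) | (1 if bit else 0)
    return [1 if i == index else 0 for i in range(2 ** n)]

def encode(one_hot):
    """2^n-to-n encoder: inverse of decode for a valid one-hot input."""
    index = one_hot.index(1)
    n = (len(one_hot) - 1).bit_length()
    return [(index >> i) & 1 for i in reversed(range(n))]
```

A memory-addressing decoder, for instance, turns an n-bit address into the single select line of one memory word; `encode` performs the inverse mapping.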
Abstract:
The definition of technical specifications and the corresponding laboratory procedures are necessary steps to assure the quality of the devices prior to being installed in Solar Home Systems (SHS). To clarify and unify criteria, a European project supported the development of the Universal Technical Standard for Solar Home Systems (UTSfSHS). Its principle was to generate simple and affordable technical requirements, optimized to facilitate the implementation of tests with basic and simple laboratory tools, even in the SHS electrification program countries themselves. These requirements cover the main aspects of this type of installation, and its lighting chapter was developed based on the most widely used technologies at the time: fluorescent tubes and CFLs. However, with the consolidation of the new LED solid-state lighting devices, particular attention is being given to this matter and new procedures are required. In this work we develop a complete set of technical specifications and test procedures designed within the frame of the UTSfSHS, based on an intense review of the scientific and technical publications related to LED lighting and their practical application. They apply to lamp reliability, performance and safety under normal, extreme and abnormal operating conditions, as a simple but complete quality-metering tool for any LED bulb.
Optimization of energy density in fiber-reinforced polymer (FRP) composite beams subjected to pure bending
Abstract:
Current energy needs require the development of effective and efficient technologies for the production, transport and distribution of energy. These needs have driven new developments in the energy field, among them energy storage systems. Advances in materials engineering make it possible to consider energy storage through the elastic deformation of beams. Specifically, the starting point is a concept for an energy-accumulating mechanism based on the elastic deformation of spiral torsion springs. These springs can be treated as beam elements subjected to pure bending and large deflections. This Thesis focuses on the design and optimization of these elements in order to maximize the energy density they can absorb. The optimization process begins by identifying the critical factor on which it depends, in this case the energy density. This factor depends on the geometry of the load-bearing cross section and on the material used in its construction. In recent years there has been great development of fiber-reinforced polymer (FRP) composite materials. These materials are gradually replacing others, such as metals, mainly because of their excellent ratio of mechanical properties to weight. On the other hand, analysis of the possible geometries for the cross section showed that the most suitable one is a sandwich-type structure. A design procedure is thus implemented for sandwich beams subjected to pure bending, with skins made of FRP composites and a core that must guarantee the low weight of the structure. A systematic procedure is developed that can be particularized according to the input parameters of the beam and that takes into account and analyzes all the possible failure modes.
Likewise, a series of design maps or abacuses is developed that allows the preliminary dimensions of the beam to be selected quickly. Finally, tests are carried out that make it possible, on the one hand, to validate the concept of the energy-accumulating mechanism by testing a spring with a monolithic cross section, and on the other, to validate the various sandwich beam designs proposed and to show the increase in energy density with respect to the monolithic alternative. As future lines of research, the following are proposed: research into new materials, such as the use of carbon nanotubes, and the optimization of the energy absorption mechanism, both refining the pure-bending mechanism and implementing systems that store energy through elastic deformation under tension-compression loads. ABSTRACT Energy supply requires the development of effective and efficient technologies for the production, transport and distribution of energy. In recent years, many energy storage systems have been developed. Advances in the field of materials engineering have allowed the development of new concepts such as energy storage by elastic deformation of beams. In particular, this Thesis studies an energy storage device based on the elastic deformation of torsion springs. These springs can be considered as beam elements subjected to pure bending loads and large deflections. This Thesis focuses on the design and optimization of these beam elements in order to maximize their density of stored energy. The optimization process starts with the identification of the critical factor for elastic energy storage: the energy density. This factor depends on the geometry of the cross section of the beam and the materials from which it is made. In the last 20 years, major advances have been made in the field of composite materials, particularly in fiber-reinforced polymers (FRP).
This type of material is gradually replacing metallic materials owing to its excellent ratio of mechanical properties to weight. On the other hand, several possible geometries were analyzed for the cross section of the beam; it was concluded that the best option, for maximum energy density, is a sandwich beam. A design procedure is developed for sandwich beams with skins made of FRP composites and a lightweight core. This procedure can be particularized for different input parameters and analyzes all the possible failure modes. Abacuses and failure mode maps have been developed in order to simplify the design process. Finally, several tests were performed. First, a prototype of the energy storage system using a monolithic composite beam was tested in order to validate the concept of energy storage by elastic deformation. After that, sandwich beam samples were built and tested, validating the design and showing the increase in energy density with respect to the monolithic beam. The following future research lines are proposed: research into new materials, such as carbon nanotubes, and the optimization of the energy storage mechanism, that is, optimizing the pure-bending storage mechanism and developing new ones based on tension-compression mechanisms.
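The gain that motivates the sandwich section can be illustrated with a first-order calculation (our simplification, ignoring the core's own stresses and shear): in pure bending, the stored energy per unit volume of a monolithic rectangular section averages sigma_max^2/(6E), because the stress varies linearly through the depth, while an ideal sandwich that concentrates the material in thin skins at the surfaces approaches the material limit sigma_max^2/(2E), a factor of three.

```python
def energy_density_monolithic(sigma_max, E):
    """Average elastic energy per unit volume of a rectangular section in
    pure bending: u = sigma_max**2 / (6 E), since sigma(y) is linear in y."""
    return sigma_max ** 2 / (6.0 * E)

def energy_density_ideal_sandwich(sigma_max, E):
    """Thin skins at the surfaces carry uniform sigma_max: u = sigma**2/(2E)."""
    return sigma_max ** 2 / (2.0 * E)

# Illustrative order-of-magnitude numbers for a carbon/epoxy laminate
# (assumed values, not the thesis's material data).
sigma, E = 1.0e9, 120.0e9          # Pa
ratio = energy_density_ideal_sandwich(sigma, E) / energy_density_monolithic(sigma, E)
```

The same reasoning shows why a high-strength, moderate-modulus material such as an FRP maximizes sigma^2/E, the material's contribution to the stored energy density.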
Abstract:
The mechanical behavior of many biological and polymeric materials at large strains can be adequately described by isochoric hyperelastic and viscoelastic formulations. Elastic and viscoelastic constitutive equations and the computational formulations for isotropic incompressible materials at finite strains are nowadays well established. However, the development of nonlinear anisotropic models and of their corresponding computational formulations is still a research topic of great interest. When large deformations are considered, many strain measures are available with which to formulate the constitutive equations. Models written in quadratic strain measures ease the implementation in finite element codes, since these measures arise naturally in the formulation. However, they can hinder the interpretation of the models and lead to unrealistic results. The use of logarithmic strains allows the development of simpler and more intuitive models, although their computational formulation must be adapted to the requirements of the code. As a starting point, this thesis shows that logarithmic strains are the natural extension of infinitesimal strains, both axial and angular, to the large-strain regime. This fact explains the simplicity of the resulting equations. The hyperelastic models that prevail today are formulated in invariants of quadratic strains. These models, whether continuum or microstructural, are characterized by a predefined analytical form. Their final expression is obtained by curve fitting to experimental data. A model that does not follow this methodology was developed by Sussman and Bathe.
That model is valid only for isotropy and is defined by a spline-interpolated stored energy function which reproduces the experimental data exactly. This thesis presents its extension to transversely isotropic and orthotropic materials using logarithmic strains. In addition, a new property is defined that anisotropic stored energy functions must satisfy so that their convergence to the isotropic case is correct. In visco-hyperelasticity, apart from the various stored energy functions available, there are two typical computational approaches based on internal variables. The original model of Simó is formulated in stresses and is valid for anisotropic materials, although it is only adequate for small deviations from thermodynamic equilibrium. In contrast, the strain-based model of Reese and Govindjee allows large non-equilibrated deformations but is, in essence, isotropic. Anisotropic formulations in this latter context are microstructural and employ the isotropic model for each constituent. This thesis presents two phenomenological viscoelastic formulations defined by anisotropic hyperelastic functions and valid for large deviations from thermodynamic equilibrium. The first model is based on the Sidoroff multiplicative decomposition and requires isotropic viscous behavior. The formulation converges to the Reese and Govindjee model in the special case of elastic isotropy. The second model is defined from a reversed multiplicative decomposition. This formulation is based on a corotational description of the problem, is substantially more complex and may yield slightly nonsymmetric constitutive tensors. However, its range of application is much wider, since it allows anisotropy in both the elastic and the viscous behavior.
Several finite element simulations show the great versatility of these models when combined with spline-based hyperelastic functions. ABSTRACT The mechanical behavior of many polymeric and biological materials may be properly modelled by means of isochoric hyperelastic and viscoelastic formulations. These materials may sustain large strains. The viscoelastic computational formulations for isotropic incompressible materials at large strains may be considered well established; for example, Ogden's hyperelastic function and the visco-hyperelastic model of Reese and Govindjee are well-known models for isotropy. However, anisotropic models and computational procedures both for hyperelasticity and visco-hyperelasticity are still under substantial research. Anisotropic hyperelastic models are typically based on structural invariants obtained from quadratic strain measures. These models may be microstructurally based or phenomenological continuum formulations, and are characterized by a predefined analytical shape of the stored energy. The actual final expression of the stored energy depends on some material parameters which are obtained from an optimization algorithm, typically the Levenberg-Marquardt algorithm. We present in this work anisotropic spline-based hyperelastic stored energies in which the shape of the stored energy is obtained as part of the procedure and which replicate the experimental data exactly in practice. These stored energies are based on invariants obtained from logarithmic strain measures. These strain measures preserve the metric and the physical meaning of the trace and deviator operators and, hence, are interesting and meaningful for anisotropic formulations. Furthermore, the proposed stored energies may be formulated so as to have material-symmetries congruency both from a theoretical and from a numerical point of view, properties that we define in this work.
On the other hand, visco-hyperelastic formulations for anisotropic materials are typically based on internal stress-like variables following a procedure introduced by Simó. However, it can be shown that this procedure is not adequate for large deviations from thermodynamic equilibrium. In contrast, a formulation given by Reese and Govindjee is valid for arbitrarily large deviations from thermodynamic equilibrium but not for anisotropic stored energy functions. In this work we present two formulations for visco-hyperelasticity valid for anisotropic stored energies and large deviations from thermodynamic equilibrium. One of the formulations is based on the Sidoroff multiplicative decomposition and converges to the Reese and Govindjee formulation in the case of isotropy. However, this formulation is restricted to isotropy of the viscous component. The second formulation is based on a reversed multiplicative decomposition. This last formulation is substantially more complex and based on a corotational description of the problem. It can also result in a slightly nonsymmetric tangent. However, the formulation allows for anisotropy not only in the equilibrated and non-equilibrated stored energies, but also in the viscous behavior. Several examples show the finite element implementation, versatility and interesting characteristics of the models.
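For reference, the logarithmic (Hencky) strain on which these formulations are built is, in standard continuum-mechanics notation (the thesis's own symbols may differ):

```latex
\boldsymbol{E} \;=\; \ln \boldsymbol{U}
              \;=\; \tfrac{1}{2}\,\ln \boldsymbol{C},
\qquad
\boldsymbol{C} \;=\; \boldsymbol{F}^{\mathsf T}\boldsymbol{F}
               \;=\; \boldsymbol{U}^{2}.
```

Since det F = exp(tr E), the isochoric constraint det F = 1 becomes the linear condition tr E = 0, so the trace and deviator of E keep their infinitesimal-theory meaning of volumetric and isochoric parts; for small strains E reduces to the infinitesimal strain tensor.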
Abstract:
Plant surfaces show major chemical and physical heterogeneity and play a key protective role against multiple stress factors. During the last decade there has been rising interest in examining plant surface properties for the development of biomimetic materials. Contact angle measurement with different liquids is a common tool for characterizing synthetic materials that is only beginning to be applied to plant surfaces. However, studies performed with polymers and other materials have shown that, for the same surface, different surface free energy values may be obtained depending on the number and nature of the test liquids analyzed, the material's properties, and the surface free energy calculation methods employed. For 3 rough and 3 rather smooth plant materials, we calculated surface free energy using 2 or 3 test liquids and 3 different calculation methods. Regardless of the degree of surface roughness, the methods based on 2 test liquids often under- or over-estimated surface free energies compared with the results derived from the 3-liquids method. Given the major chemical and structural diversity of plant surfaces, it is concluded that 3 different liquids must be considered for characterizing materials of unknown physico-chemical properties, which may differ significantly in their polar and dispersive interactions. Since few surface free energy data exist for plant surfaces, and with the aim of standardizing the calculation procedure and the interpretation of results among, for instance, different species, organs or phenological states, we suggest the use of 3 liquids and the mean surface tension values provided in this study.
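One widely used 3-liquids scheme (an example of the kind of method discussed above, not necessarily the one the authors used) is the van Oss-Chaudhury-Good acid-base approach: each contact angle gives one linear equation in the square roots of the solid's Lifshitz-van der Waals, acid and base components. A sketch, with typical literature values for the reference liquids (an assumption to be checked against the data source actually used):

```python
import numpy as np

# Reference liquid data: (total, LW, acid gamma+, base gamma-), mJ/m^2.
# Typical literature values; treat them as assumptions.
LIQUIDS = {
    "water":         (72.8, 21.8, 25.5, 25.5),
    "glycerol":      (64.0, 34.0,  3.92, 57.4),
    "diiodomethane": (50.8, 50.8,  0.0,   0.0),
}

def surface_free_energy(theta_deg):
    """Solve the van Oss acid-base system from 3 contact angles (degrees).

    Each liquid i contributes one linear equation in the unknowns
    (sqrt(gs_LW), sqrt(gs_plus), sqrt(gs_minus)):
      gamma_Li (1 + cos theta_i)
        = 2 (sqrt(gs_LW gLW_i) + sqrt(gs_plus gminus_i) + sqrt(gs_minus gplus_i))
    Returns (gs_LW, gs_plus, gs_minus, gs_total) of the solid.
    """
    A, b = [], []
    for name, t in theta_deg.items():
        gL, gLW, gp, gm = LIQUIDS[name]
        A.append([2 * np.sqrt(gLW), 2 * np.sqrt(gm), 2 * np.sqrt(gp)])
        b.append(gL * (1.0 + np.cos(np.radians(t))))
    x = np.linalg.solve(np.array(A), np.array(b))   # sqrt of the components
    gs_LW, gs_p, gs_m = x[0] ** 2, x[1] ** 2, x[2] ** 2
    return gs_LW, gs_p, gs_m, gs_LW + 2.0 * np.sqrt(gs_p * gs_m)
```

A 2-liquids method must drop or merge one of the three components, which is exactly why it can under- or over-estimate the surface free energy of surfaces with unknown polar character.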
Abstract:
Dendritic spines are thin protrusions that cover the dendritic surface of numerous neurons in the brain, and their function seems to play a key role in neural circuits. The correct segmentation of these structures is difficult due to their small size, and the resulting spines can appear incomplete. This paper presents a four-step procedure for the complete reconstruction of dendritic spines. The haptically driven procedure is intended to work as an image-processing stage before the automatic segmentation step that gives the final representation of the dendritic spines. The procedure is designed to allow both navigation and volume image editing to be carried out using a haptic device. A use case employing our procedure together with a commercial software package for the segmentation stage is illustrated. Finally, the haptic editing is evaluated in two experiments: the first concerns the benefits of the force feedback and the second checks the suitability of a haptic device as input. In both cases, the results show that the procedure improves the editing accuracy.
Abstract:
A low-cost vibration monitoring system has been developed and installed on an urban steel-plated stress-ribbon footbridge. The system continuously measures the acceleration (using 18 triaxial MEMS accelerometers distributed along the structure), the ambient temperature, and the wind velocity and direction. Automated output-only modal parameter estimation based on Stochastic Subspace Identification (SSI) is carried out in order to extract the modal parameters, i.e., the natural frequencies, damping ratios and mode shapes. This paper analyzes the time evolution of the modal parameters over a whole year of monitoring data. Firstly, for similar environmental/operational factors, the uncertainties associated with the time window size used are studied and quantified. Secondly, a methodology to track the vibration modes has been established, since several modes with closely spaced natural frequencies are identified. Thirdly, the modal parameters have been correlated against external factors. It is shown that this stress-ribbon structure is highly sensitive to temperature variation (frequency changes of more than 20%), with strong seasonal and daily trends.
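A standard way to track modes with closely spaced frequencies across identification windows is to pair mode shapes by the Modal Assurance Criterion (MAC). The abstract does not state which correlation measure the authors use, so the following is a generic sketch of that common approach:

```python
import numpy as np

def mac(phi1, phi2):
    """Modal Assurance Criterion between two mode shape vectors:
    1 for proportional shapes, 0 for orthogonal ones, scale invariant."""
    num = abs(np.vdot(phi1, phi2)) ** 2
    return num / (np.vdot(phi1, phi1).real * np.vdot(phi2, phi2).real)

def track_modes(reference_shapes, new_shapes, mac_min=0.8):
    """Pair each reference mode with the best-matching newly identified
    mode by MAC; returns {ref_index: new_index} for matches above mac_min."""
    pairs = {}
    for i, ref in enumerate(reference_shapes):
        scores = [mac(ref, new) for new in new_shapes]
        j = int(np.argmax(scores))
        if scores[j] >= mac_min:
            pairs[i] = j
    return pairs
```

Because MAC is insensitive to the scaling and sign of the identified shapes, it separates modes whose natural frequencies cross or swap order as temperature changes through the year.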
Abstract:
RDB to RDF Mapping Language (R2RML) is a W3C recommendation that allows rules to be specified for transforming relational databases into RDF. This RDF data can be materialized and stored in an RDF triple store, on which SPARQL queries can then be evaluated. However, there are cases in which materialization is not adequate or possible, for example, when the database is updated frequently. In such cases it is best to treat the RDF data as virtual, so that the aforementioned SPARQL queries are translated into SQL queries that can be evaluated on the original relational database management systems (RDBMS). This translation takes the R2RML mappings into account. The first part of this thesis focuses on query translation. A formalization of the translation from SPARQL to SQL using R2RML mappings is proposed, together with several optimization techniques that generate SQL queries that are more efficient when evaluated on relational database management systems. This approach is evaluated with a synthetic benchmark and several real cases. Another recommendation related to R2RML is the Direct Mapping (DM), which establishes fixed rules for transforming relational data into RDF. Although both recommendations were published at the same time, in September 2012, no formal study of the relationship between them had been carried out. The second part of this thesis therefore focuses on the study of the relationship between R2RML and DM. This study is divided into two parts: from R2RML to DM, and from DM to R2RML. In the first case, a fragment of R2RML with the same expressive power as DM is studied.
In the second case, the DM rules are represented as R2RML mappings, and the implicit semantics (subclass, 1-N and M-N relationships) that may be found encoded in the database are also added. This thesis shows that it is possible to use R2RML in real cases without materializing the data, since the generated SQL queries are efficient enough when evaluated on the relational database management system. It also deepens the understanding of the relationship between the two W3C recommendations, something that had not been studied before. ABSTRACT. RDB to RDF Mapping Language (R2RML) is a W3C recommendation that allows specifying rules for transforming relational databases into RDF. This RDF data can be materialized and stored in a triple store, so that SPARQL queries can be evaluated by the triple store. However, there are several cases where materialization is not adequate or possible, for example, if the underlying relational database is updated frequently. In those cases, RDF data is better kept virtual, and hence SPARQL queries over it have to be translated into SQL queries against the underlying relational database system, a translation process that has to take the specified R2RML mappings into account. The first part of this thesis focuses on query translation. We discuss the formalization of the translation from SPARQL to SQL queries that takes into account R2RML mappings. Furthermore, we propose several optimization techniques so that the translation procedure generates SQL queries that can be evaluated more efficiently over the underlying databases. We evaluate our approach using a synthetic benchmark and several real cases, and report the positive results obtained. Direct Mapping (DM) is another W3C recommendation for the generation of RDF data from relational databases.
While R2RML allows users to specify their own transformation rules, DM establishes fixed transformation rules. Although both recommendations were published at the same time, in September 2012, there had not been any study regarding the relationship between them. The second part of this thesis focuses on the study of the relationship between R2RML and DM. We divide this study into two directions: from R2RML to DM, and from DM to R2RML. From R2RML to DM, we study a fragment of R2RML having the same expressive power as DM. From DM to R2RML, we represent DM transformation rules as R2RML mappings, and also add the implicit semantics encoded in databases, such as subclass, 1-N and M-N relationships. This thesis shows that by formalizing and optimizing R2RML-based SPARQL-to-SQL query translation, it is possible to use R2RML engines in real cases, as the resulting SQL is efficient enough to be evaluated by the underlying relational databases. In addition, this thesis facilitates the understanding of the bidirectional relationship between the two W3C recommendations, something that had not been studied before.
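The flavour of DM's fixed rules, as opposed to R2RML's user-defined ones, can be sketched as follows. This is a deliberately simplified illustration: the IRI percent-encoding, datatype mapping and primary-key literal triples of the actual W3C recommendation are omitted, and the base IRI, table and column names are hypothetical.

```python
def direct_map(base, table, pk, rows):
    """Generate (subject, predicate, object) triples from relational rows,
    in the spirit of the W3C Direct Mapping (simplified sketch)."""
    triples = []
    for row in rows:
        # Fixed rule: subject IRI is derived from table name + primary key.
        subject = f"<{base}/{table}/{pk}={row[pk]}>"
        triples.append((subject, "rdf:type", f"<{base}/{table}>"))
        for col, value in row.items():
            if col != pk:
                # Fixed rule: one literal triple per non-key column.
                triples.append((subject, f"<{base}/{table}#{col}>", repr(value)))
    return triples
```

Under R2RML, by contrast, the subject template, predicate IRIs and which columns are exported are all chosen by the mapping author, which is what makes the "R2RML fragment with the expressive power of DM" question non-trivial.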
Abstract:
Contexto: La adopción y adaptación de las metodologías ágiles ha sido objeto de estudio en la comunidad y diversas aproximaciones han sido propuestas como solución al problema. La mayoría de las organizaciones comienzan introduciendo prácticas ágiles de manera progresiva, las cuales, junto a las buenas prácticas ya existentes, conforman una nueva metodología ágil adaptada. El problema más importante con el que se enfrentan las organizaciones al adaptar una metodología ágil, son las necesidades específicas del contexto de cada proyecto, porque a pesar de haber definido la adaptación de la metodología en base a las buenas prácticas y procesos existentes en la organización, las condiciones particulares del proyecto y del equipo que lo va a desarrollar, exigen adaptaciones específicas. Otro aspecto importante a resolver es el poder implantar esta adopción y adaptación en toda la organización. Las metodologías ágiles no incluyen actividades que asistan a las organizaciones a solucionar este problema, porque su esfuerzo lo centran en el equipo y el proyecto. Objetivo: El objetivo principal de este trabajo de investigación consiste en “definir un marco de trabajo de adaptación de metodologías ágiles orientado a contextos que permita su implantación en toda la organización a través de ciclos de mejora basados en la reutilización de experiencias adquiridas”. Este marco de trabajo permitirá organizar, gestionar y validar la adaptación de las metodologías ágiles, desplegando en toda la organización las experiencias adquiridas, incorporando pequeñas mejoras en el proceso, y facilitando su reutilización. Todo esto, con el apoyo de la alta dirección. Método: Este trabajo se inició con una investigación exploratoria acompañada de una revisión sistemática para conocer el estado del arte del objeto de estudio, detectando las principales carencias y necesidades de resolución, a partir de las cuales, se delimitó el alcance del problema. 
Subsequently, the research hypotheses were defined and a solution to the problem was proposed through the definition of the framework, whose results were evaluated and validated through a case study. Finally, conclusions were drawn and future lines of research were established. Resolution: The framework offers a context-oriented alternative for tailoring agile methods and facilitates its deployment throughout the organization. The framework identifies the contexts in which projects are developed and analyses the causes of the problems that hinder the achievement of the business goals in each context. It then identifies the adaptation actions required to reduce or eliminate these causes and incorporates them into the definition of the agile method, thereby creating a method tailored to each context while maintaining an acceptable level of agility. When a new project starts, its context is identified and the tailored agile method defined for that context is applied. At the end of each iteration, the measurements obtained are analysed and compared with the average, applying a technique for evaluating goal achievement. Based on this comparison, and on the experience gained by the team, the adaptation actions are adjusted to try to achieve an improvement in the next iteration. Once the project is finished, the impact of the results is known and it can be determined whether an improvement has been achieved in the project context. If so, the tailored method is stored in an experience repository so that it can be reused by other development teams in the organization. Results: The research was satisfactorily evaluated and validated through its experimentation in a case study, confirming the established hypotheses.
The contributions and main results of this research are: the definition of an integral framework for tailoring agile methods, capable of facilitating its deployment throughout the organization; a procedure for identifying tailoring contexts or scenarios; a procedure for defining tailoring units from the inhibitors of the organization's business goals; and a technique for analysing goal achievement in process improvement proposals. Conclusions: Three main advantages became evident during this research: the framework provides a mechanism capable of optimizing the performance of the tailored agile method by orienting the process to the specific needs of the project context; the framework preserves the empirical knowledge acquired by the development team by recording it as lessons learned for the organization; and the framework streamlines and shortens the process of deploying the tailored agile method to other teams in the organization. ABSTRACT Context: The adoption and tailoring of agile methods has been studied in the community, and various approaches have been proposed as solutions. Most organizations start by introducing agile practices gradually; these, together with existing good practices, form a new tailored agile method. When an organization starts to tailor an agile method, the context-specific needs of each project are the main difficulty: even though the tailored agile method has been defined based on the organization's best practices and processes, the specific conditions of the project and its team require specific adaptations. Another important aspect to solve is implementing this adoption and tailoring throughout the whole organization.
Agile methods do not include specific activities to help organizations solve this problem, because their effort is focused on the team and the project. Objective: The main objective of this research is to "define a context-oriented tailoring framework for agile methods that allows its deployment throughout the organization through improvement cycles based on the reuse of lessons learned". This framework will make it possible to organize, manage and validate the tailoring of agile methods, adding small improvements to the process and facilitating their reuse, all with the support of senior management. Method: This work began with an exploratory investigation accompanied by a systematic review to determine the state of the art of the object of study, identifying the major gaps and unmet needs, from which the scope of the problem was delimited. Subsequently, the research hypotheses were defined and the framework was developed to solve the research problem. The results were evaluated and validated through a case study. Finally, conclusions were drawn and future lines of research were established. Resolution: The framework presents an alternative for tailoring agile methods and facilitates its deployment throughout the organization. The framework identifies the contexts or scenarios in which software development projects are carried out and analyses the causes of the problems affecting the achievement of the business goals in each context. The adaptation actions required to reduce or eliminate these causes are then defined and incorporated into the definition of the agile method; thus, a method tailored to each context is created. When a new project starts, the context in which it will be developed is identified and the agile method tailored to that context is applied. At the end of each iteration of the project, the measurements obtained are analysed and compared with the historical average for the context in order to assess the improvement in business goals.
Based on this comparison, and on the experience gained by the project team, adjustments are made to the adaptation actions to try to achieve an improvement in the next iteration. Once the project is completed, the impact of the achieved results is known and it can be determined whether an improvement has been reached in the project context. If so, the tailored agile method is stored in a repository of experiences to be reused by other development teams in the organization. Results: This research was successfully evaluated and validated through experimentation in a case study, confirming the research hypotheses. The contributions and main results of this research are: a framework for tailoring agile methods that allows its deployment throughout the organization; a procedure for identifying adaptation scenarios or contexts; a procedure for defining adaptation units from the inhibitors of the organization's business goals; and a technique for analysing goal achievement in process improvement proposals. Conclusions: Three main advantages were highlighted during the research: the framework provides a mechanism to optimize the performance of the tailored agile method because it guides the process according to the specific needs of the project context; the framework preserves the empirical knowledge acquired by the development team because it records that knowledge as experience for the organization; and the framework streamlines and shortens the process of deploying the tailored agile method to other teams in the organization.
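The improvement cycle the abstract describes (apply the tailored method, compare each iteration's measurements with the context's historical average, adjust the adaptation actions, and store the method in an experience repository when the project improves) can be sketched roughly as follows. This is an illustrative sketch only; the context name, practice names, and scoring scheme are hypothetical, not the framework's actual artifacts.

```python
# Hypothetical sketch of the tailoring improvement cycle described above.
# Context names, practices, and goal-achievement scores are illustrative.

from statistics import mean

experience_repository = {}  # context -> tailored method that improved results
historical = {"distributed-team": [0.62, 0.58, 0.65]}  # past scores per context

def run_project(context, tailored_method, iteration_scores):
    """Compare each iteration's goal-achievement score with the context's
    historical average; adjust the adaptation actions when an iteration
    falls below it, and store the tailored method for reuse if the project
    improves on the historical average overall."""
    baseline = mean(historical[context])
    for score in iteration_scores:
        if score < baseline:
            # below the historical average: add an adaptation action
            tailored_method = tailored_method + ["extra-adaptation"]
    if mean(iteration_scores) > baseline:
        # improvement achieved: make the method reusable by other teams
        experience_repository[context] = tailored_method
        return True
    return False

improved = run_project("distributed-team",
                       ["daily-standup", "short-iterations"],
                       [0.60, 0.66, 0.71])
print(improved)
```

The design point the sketch tries to capture is the abstract's separation of concerns: per-iteration comparison drives small in-project adjustments, while the end-of-project comparison decides whether the tailored method becomes organizational experience.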