937 results for key features
Abstract:
The regulation of muscle differentiation, like cell differentiation in general, is only now beginning to be understood. Described here are several key features of myogenesis: a beginning, some intermediary events, and an endpoint. Muscle differentiation proceeds spontaneously when myoblasts are cultured in serum-poor medium. Transforming growth factor type β (TGFβ), a component of fetal serum, was found to potently suppress muscle differentiation. Prolonged blockade of differentiation required replenishing TGFβ; when TGFβ was removed, cells rapidly differentiated. Both TGFβ and RAS, which also blocks myogenesis, suppress the genes for a series of muscle-specific proteins. Regions that regulate transcription of one such gene, muscle creatine kinase (mck), were located by linking progressively smaller parts of the mck 5′ region to the marker gene cat and testing the constructs for regulated expression of cat in myoblasts and muscle cells. The mck promoter is not muscle-specific but requires activation. Two enhancers were found: a weak, developmentally regulated enhancer within the first intron, and a strong, compact, tightly developmentally regulated enhancer about 1.2 kb upstream of the transcription start site. Activity of this upstream enhancer is eliminated by activated ras; suppression of activated N-RAS restores its potency. Further deletion shows the mck 5′ enhancer to contain an enhancer core with low but significant muscle-specific activity, and at least one peripheral element that augments core activity. The core and this peripheral element are composed almost entirely of factor-binding motifs. The peripheral element was inactive as a single copy but constitutively active in multiple copies; regions flanking it augmented its activity and conferred partial muscle specificity. The enhancer core is also modulated by its 5′ flanking region in a complex manner. Site-specific mutants covering most of the enhancer core and interesting flanking sequences have been made; all mutants tested diminish the activity of the 5′ enhancer. Alteration of the site to which MyoD1 is reported to bind completely inactivates the enhancer. A theoretical analysis of cooperativity is presented, through which the binding of a constitutively expressed nuclear factor is shown to have weak positive cooperativity. In summary, TGFβ, RAS, and enhancer-binding factors are found to be initial, intermediary, and final regulators, respectively, of muscle differentiation.
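The cooperativity analysis itself is not reproduced in the abstract. As an illustrative sketch only (the standard Hill formalism, not necessarily the author's derivation), the fractional occupancy of a binding site by a factor at concentration [L] can be written as

    \theta([L]) = \frac{[L]^{n_H}}{K_d^{\,n_H} + [L]^{n_H}}

where n_H = 1 corresponds to independent (non-cooperative) binding; a fitted Hill coefficient n_H only slightly above 1 would express the weak positive cooperativity reported here for the constitutively expressed factor.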
Abstract:
The Late Paleocene and Early Eocene were characterised by warm greenhouse climates, punctuated by a series of rapid warming and ocean acidification events known as "hyperthermals", thought to have been paced or triggered by orbital cycles. While these hyperthermals, such as the Paleocene-Eocene Thermal Maximum (PETM), have been studied in great detail, the background low-amplitude cycles seen in carbon- and oxygen-isotope records throughout the Paleocene-Eocene have hitherto not been resolved. Here we present a 7.7 million year (myr) long, high-resolution, orbitally tuned, benthic foraminiferal stable-isotope record spanning the late Paleocene and early Eocene interval (~52.5-60.5 Ma) from Ocean Drilling Program (ODP) Site 1262, South Atlantic. This high-resolution (~2-4 kyr) record allows the changing character and phasing of orbitally modulated cycles to be studied in unprecedented detail, as it reflects the long-term trend in the carbon cycle and climate over this interval. The main pacemakers of the benthic oxygen-isotope (δ18O) and carbon-isotope (δ13C) records from ODP Site 1262 are the long (405 kyr) and short (100 kyr) eccentricity cycles and precession (21 kyr). Obliquity (41 kyr) is almost absent throughout the section, except for a few brief intervals where it has a relatively weak influence. Over the course of the Early Paleogene record, and particularly in the latest Paleocene, eccentricity-paced negative carbon-isotope excursions (CIEs) and coeval negative oxygen-isotope (δ18O) excursions correspond to low carbonate (CaCO3) and coarse-fraction (%CF) values due to increased carbonate dissolution, suggesting shoaling of the lysocline and accompanying changes in the global exogenic carbon cycle. These negative CIEs and δ18O events coincide with maxima in eccentricity, with changes in δ18O leading changes in δ13C by ~6 (±5) kyr in the 405-kyr band and by ~3 (±1) kyr in the higher-frequency 100-kyr band on average. However, these phase lags are not constant: the lag in the 405-kyr band extends from ~4 (±5) kyr to ~21 (±2) kyr from the late Paleocene to the early Eocene, suggesting a progressively weaker coupling of climate and the carbon cycle with time. The higher-amplitude 405-kyr cycles in the latest Paleocene are associated with changes in bottom-water temperature of 2-4 °C, while the most prominent 100 kyr-paced cycles can be accompanied by changes of up to 1.5 °C. Comparison of the Site 1262 record with a lower-resolution, but orbitally tuned, benthic record from Site 1209 in the Pacific allows verification of key features of the benthic isotope records that are global in scale, including a key warming step at 57.7 Ma.
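The quoted lags are band-limited phase estimates between the two isotope series. A minimal sketch of one way to compute such a lag (band-pass filtering plus Hilbert phases; a hypothetical helper assuming evenly resampled, detrended series, not the authors' actual tuning pipeline):

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def band_lag_kyr(t_kyr, x, y, period_kyr=405.0, rel_bw=0.3):
        # Band-pass both series around 1/period, then compare Hilbert phases.
        # Positive result: y lags x. Assumes even spacing and detrended input.
        dt = np.mean(np.diff(t_kyr))              # sampling step, kyr
        f0 = 1.0 / period_kyr                     # target frequency, cycles/kyr
        b, a = butter(3, [f0 * (1 - rel_bw), f0 * (1 + rel_bw)],
                      btype="band", fs=1.0 / dt)
        xb, yb = filtfilt(b, a, x), filtfilt(b, a, y)
        dphi = np.angle(np.exp(1j * (np.angle(hilbert(yb))
                                     - np.angle(hilbert(xb)))))
        return -dphi.mean() / (2 * np.pi) * period_kyr

Called as band_lag_kyr(age, d18O, d13C, period_kyr=405), a positive value would indicate δ13C lagging δ18O in that band, the sense of lead/lag described above.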
Abstract:
Fluctuations in the length of 72 glaciers in the Northern and Southern Patagonia Icefields (NPI and SPI, respectively) and the Cordillera Darwin Icefield (CDI) were estimated between 1945 and 2005. Information obtained from historical maps based on 1945 aerial photographs was compared with ASTER and Landsat satellite images and with information found in the literature. The majority of glaciers have retreated considerably, with maximum values of 12.2 km for Marinelli Glacier in the CDI, 11.6 km for O'Higgins Glacier in the SPI and 5.7 km for San Rafael Glacier in the NPI. Among the 20 glaciers that have retreated the most relative to their size, small (less than 50 km²) and medium (between 50 and 200 km²) glaciers are the most affected. However, no direct relation between glacier retreat and size was found for the 72 glaciers studied. The highest percentage retreats in the CDI were those of the CDI-03 Glacier (37.9%) and Marinelli Glacier (37.6%). In the SPI, relative retreats were heterogeneous and fluctuated between 27.2% (Amelia Glacier) and 0.4% (Viedma Glacier). In the NPI, relative retreat was very high for the Strindberg and Cachet glaciers (35.9% and 27.6%, respectively), but for the remaining glaciers in this icefield it ranged between 11.8% (Piscis Glacier) and 3.6% (San Quintin Glacier). In addition to surface area, the surface slope (calculated from the SRTM DEM) was also compared with relative retreat, and again no straightforward relation was found. Overall, we suggest that glacier retreat in the region is controlled primarily by atmospheric warming, as has been reported for this area. Beyond the general increase in temperature observed, no geographical pattern in the length fluctuations was found. Consequently, glaciers appear to react primarily to local conditions, most probably set by their exposure, geometry and hypsometry. The heterogeneity of the retreat rates suggests that differences in basin geometry, glacier dynamics and response time are key features for explaining the fluctuations of each glacier.
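A worked example of the relative-retreat arithmetic, using the Marinelli figures quoted above (the implied 1945 length is a derived value and assumes the percentage is taken relative to the 1945 length):

    retreat_km = 12.2                       # Marinelli Glacier front retreat, 1945-2005
    relative = 0.376                        # reported 37.6 % relative retreat
    length_1945_km = retreat_km / relative  # implied initial length, ~32.4 km
    print(f"implied 1945 length: {length_1945_km:.1f} km")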
Abstract:
Culverts are very common in modern railway lines. Wildlife corridors and drainage conduits often fall into this category of partially buried structures. Their dynamic behaviour has received far less attention than that of other structures such as bridges, but their large number makes their study an interesting challenge from the point of view of safety and savings. In this paper a complete study of a culvert, including on-site measurements as well as numerical modelling, is presented. The structure belongs to the high-speed railway line linking Segovia and Valladolid, in Spain, which was opened to traffic in 2004. Its dimensions (3 x 3 m) are the most frequent along the line. Other factors, such as the reduced overburden (0.6 m) and an almost right angle with the track axis, make it an interesting example from which to extract generalized conclusions. On-site measurements were performed on the structure, recording the dynamic response at selected points during the passage of high-speed trains at speeds ranging between 200 and 300 km/h. The measurements by themselves provide a good insight into the main features of the dynamic behaviour of the structure. A 3D finite element model representing the key features of the structure was also studied, as it allows further understanding of the dynamic response to train loads. The discrepancies between predicted and measured vibration levels are analyzed, and some advice on numerical modelling is proposed.
Abstract:
Underpasses are common in modern railway lines. Wildlife corridors and drainage conduits often fall into this category of partially buried structures. Their dynamic behavior has received far less attention than that of other structures such as bridges, but their large number makes their study an interesting challenge from the viewpoint of safety and cost savings. Here, we present a complete study of a culvert, including on-site measurements and numerical modeling. The studied structure belongs to the high-speed railway line linking Segovia and Valladolid in Spain. The line was opened to traffic in 2004. On-site measurements were performed for the structure by recording the dynamic response at selected points of the structure during the passage of high-speed trains at speeds ranging between 200 and 300 km/h. The measurements provide not only reference values suitable for model fitting, but also a good insight into the main features of the dynamic behavior of this structure. Finite element techniques were used to model the dynamic behavior of the structure and its key features. Special attention is paid to vertical accelerations, the values of which should be limited to avoid track instability according to Eurocode. This study furthers our understanding of the dynamic response of railway underpasses to train loads.
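As context for the Eurocode check mentioned above, a minimal sketch (the 3.5 and 5.0 m/s² deck-acceleration limits are the commonly cited EN 1990/A2 values for ballasted and slab track, respectively; verify against the applicable code edition before use):

    # Peak vertical deck acceleration limits, m/s^2 (assumed EN 1990/A2 values)
    LIMITS = {"ballasted": 3.5, "slab": 5.0}

    def track_stable(peak_acceleration, track="ballasted"):
        # True if the measured/computed peak stays within the assumed limit.
        return abs(peak_acceleration) <= LIMITS[track]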
Abstract:
In just a few years cloud computing has become a very popular paradigm and a business success story, with storage being one of its key features. To achieve high data availability, cloud storage services rely on replication. In this context, one major challenge is data consistency. In contrast to traditional approaches, which are mostly based on strong consistency, many cloud storage services opt for weaker consistency models in order to achieve better availability and performance. This comes at the cost of a high probability of stale data being read, as the replicas involved in a read may not always hold the most recent write. In this paper, we propose a novel approach, named Harmony, which adaptively tunes the consistency level at run time according to the application's requirements. The key idea behind Harmony is an intelligent estimation model of stale reads, allowing it to elastically scale up or down the number of replicas involved in read operations so as to maintain a low (possibly zero) tolerable fraction of stale reads. As a result, Harmony can meet the desired consistency of the applications while achieving good performance. We have implemented Harmony and performed extensive evaluations with the Cassandra cloud storage system on the Grid'5000 testbed and on Amazon EC2. The results show that Harmony can achieve good performance without exceeding the tolerated number of stale reads. For instance, in contrast to the static eventual consistency used in Cassandra, Harmony reduces the stale data being read by almost 80% while adding only minimal latency. Meanwhile, it improves the throughput of the system by 45% while maintaining the desired consistency requirements of the applications when compared to the strong consistency model in Cassandra.
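As a toy illustration of the idea (a crude probabilistic model, not Harmony's actual estimator), one can choose the smallest number of read replicas whose estimated stale-read fraction stays within the application's tolerance:

    import math

    def replicas_for_reads(n, write_rate, repl_delay, tolerated_stale=0.0):
        # Return the smallest read replica count r (out of n) whose estimated
        # stale-read probability is within tolerance. Toy model: a read is
        # stale if it arrives while a write is still propagating and misses
        # every replica that already holds the fresh value.
        for r in range(1, n + 1):
            p_concurrent = 1 - math.exp(-write_rate * repl_delay)
            p_stale = p_concurrent * (n - r) / n
            if p_stale <= tolerated_stale:
                return r
        return n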
Abstract:
Sampling a network with a given probability distribution has been identified as a useful operation. In this paper we propose distributed algorithms for sampling networks, so that nodes are selected by a special node, called the source, with a given probability distribution. All these algorithms are based on a new class of random walks that we call Random Centrifugal Walks (RCW). An RCW is a random walk that starts at the source and always moves away from it. First, an algorithm to sample any connected network using RCW is proposed. The algorithm assumes that each node has a weight, and the sampling process must select a node with probability proportional to its weight. This algorithm requires a preprocessing phase before the sampling of nodes: a minimum diameter spanning tree (MDST) is created in the network, and node weights are then efficiently aggregated using the tree. The good news is that the preprocessing is done only once, regardless of the number of sources and the number of samples taken from the network. After that, every sample is obtained with an RCW whose length is bounded by the network diameter. Second, RCW algorithms that do not require preprocessing are proposed for grids and networks with regular concentric connectivity, for the case when the probability of selecting a node is a function of its distance to the source. The key features of the RCW algorithms (unlike previous Markovian approaches) are that (1) they do not need to warm up (stabilize), (2) the sampling always finishes in a number of hops bounded by the network diameter, and (3) nodes are selected with the exact probability distribution.
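A plausible reconstruction of the sampling step (not the paper's exact protocol): once subtree weights have been aggregated bottom-up on the spanning tree, a centrifugal walk can stop at the current node or descend, with probabilities proportional to the node's own weight and to each child's subtree weight:

    import random

    # tree[v]: list of children of v in the spanning tree rooted at the source
    # weight[v]: sampling weight of v
    # subtree[v]: weight[v] + sum of subtree[c] over children c (pre-aggregated)
    def rcw_sample(tree, weight, subtree, source):
        v = source
        while True:
            r = random.uniform(0, subtree[v])
            if r < weight[v]:            # stop here: prob weight[v]/subtree[v]
                return v
            r -= weight[v]
            for child in tree[v]:        # otherwise move away from the source
                if r < subtree[child]:   # descend: prob subtree[child]/subtree[v]
                    v = child
                    break
                r -= subtree[child]

Unrolling the stop/descend probabilities along a root-to-node path gives P(v) = weight[v] / total weight exactly, with no warm-up, and the walk never exceeds the tree height, consistent with the three key features listed above.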
Abstract:
Geologic storage of carbon dioxide (CO2) has been proposed as a viable means of reducing anthropogenic CO2 emissions. Once injection begins, a program for measurement, monitoring, and verification (MMV) of the CO2 distribution is required in order to: a) research key features, effects and processes needed for risk assessment; b) manage the injection process; c) delineate and identify leakage risk and surface escape; d) provide early warnings of failure near the reservoir; and e) verify storage for accounting and crediting. The selection of the monitoring methodology (site characterization, and control and verification in the post-injection phase) is influenced by economic and technological variables. Multiple Criteria Decision Making (MCDM) refers to a methodology developed for making decisions in the presence of multiple criteria. MCDM as a discipline has a relatively short history of about 40 years, closely tied to advances in computer technology. Evaluation methods and multicriteria decisions involve the selection of a set of feasible alternatives, the simultaneous optimization of several objective functions, and a decision-making process and evaluation procedures that must be rational and consistent. The application of a mathematical decision-making model helps to find the best solution, establishing mechanisms that facilitate the management of the information generated by the many disciplines involved. Problems in which the decision alternatives are finite are called discrete multicriteria decision problems. Such problems are the most common in practice, and this is the scenario applied here to the problem of selecting sites for storing CO2. Discrete MCDM is used to assess and decide on issues that by nature or design support a finite number of alternative solutions. Recently, multicriteria decision analysis has been applied to rank policy incentives for CCS, to assess the role of CCS, and to select potential areas that could be suitable for storage. For these reasons, MCDM has been considered in the monitoring phase of CO2 storage, in order to select suitable technologies that are techno-economically viable. In this paper, we identify subsurface gas-measurement techniques currently applied in the characterization (pre-injection) phase; MCDM will help decision-makers rank the techniques best suited to monitoring each specific physico-chemical parameter.
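A minimal sketch of discrete MCDM using a simple weighted sum (the abstract does not commit to this particular method; the alternatives, criteria, scores and weights below are hypothetical):

    import numpy as np

    techniques = ["soil-gas probes", "eddy covariance", "tracer logging"]
    # rows: alternatives; columns: criteria (cost-effectiveness, sensitivity,
    # spatial coverage), already normalized to [0, 1] with higher = better
    scores = np.array([[0.8, 0.6, 0.3],
                       [0.5, 0.7, 0.9],
                       [0.3, 0.9, 0.4]])
    weights = np.array([0.4, 0.4, 0.2])      # criterion weights, sum to 1
    ranking = sorted(zip(scores @ weights, techniques), reverse=True)
    print(ranking)                           # best-ranked technique first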
Abstract:
Advances in hardware make huge volumes of data available, and applications are emerging that must provide information in near-real time, e.g., patient monitoring, health monitoring of water pipes, etc. The data streaming model emerges to serve these applications, in contrast to the traditional store-then-process model. In the store-then-process model, data are stored and later queried; in streaming systems, data are processed on arrival, producing continuous responses without being stored in their entirety. This view imposes challenges for processing data on the fly: 1) responses must be produced continuously whenever new data arrive in the system; 2) data are accessed only once and, in general, are not stored in their entirety; and 3) the per-item processing time needed to produce a response must be low. Two models exist for computing continuous responses: the evolving model and the sliding-window model; the latter fits better with applications that must compute over only the most recently received data rather than the whole history. In recent years, research on data stream mining has focused mainly on the evolving model. For the sliding-window model less work has been presented, since these algorithms must not only be incremental but must also delete the information that expires as the window slides, while still meeting the three challenges above. Clustering is one of the fundamental data mining tasks: given a data set, the goal is to find representative groups that provide a concise description of the data. Clustering is critical in applications such as network intrusion detection or customer segmentation in marketing and advertising. Because of the massive amounts of data that must be processed by such applications (up to millions of events per second), centralized solutions may be unable to meet the processing-time constraints and must resort to discarding data during load peaks. To avoid this loss of data, stream processing must be distributed; in particular, clustering algorithms must be adapted to environments in which the data are distributed. In streaming, research focuses not only on designs for general tasks, such as clustering, but also on new approaches that fit particular scenarios better. As an example, an ad-hoc grouping mechanism turns out to be more adequate for defense against Distributed Denial of Service (DDoS) attacks than the traditional k-means problem. This thesis contributes to the streaming clustering problem in both centralized and distributed environments. We have designed a centralized clustering algorithm and shown, in an extensive evaluation against other state-of-the-art solutions, its ability to discover high-quality clusters in low time. We have also worked on a data structure that significantly reduces the memory required while keeping the error of the computed statistics under control at all times. Our work also provides two protocols for distributing the clustering computation. We analyze two key features: the impact of distributed computation on clustering quality, and the conditions required to reduce processing time relative to the centralized solution. Finally, we have developed a clustering-based framework for DDoS attack detection. For this last case, we characterize the types of attacks detected and evaluate the efficiency and effectiveness of mitigating the attack's impact.
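A minimal sketch of the sliding-window bookkeeping described above: cluster statistics must support both incremental insertion and deletion of expired items (the single-feature summary and time-based window are simplifying assumptions, not the thesis's algorithm):

    from collections import deque

    class WindowCluster:
        def __init__(self, window):
            self.window = window          # window length, in time units
            self.items = deque()          # (timestamp, value) pairs in window
            self.n, self.s = 0, 0.0       # count and sum of in-window values

        def insert(self, t, x):
            self.items.append((t, x))
            self.n += 1
            self.s += x
            # expire items that fall out of the window as it slides
            while self.items and self.items[0][0] <= t - self.window:
                _, old = self.items.popleft()
                self.n -= 1
                self.s -= old

        def centroid(self):
            return self.s / self.n if self.n else None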
Abstract:
Automatic grading of programming assignments is an important topic in academic research. It aims at improving the level of feedback given to students and optimizing professors' time. Several studies have reported the development of software tools to support this process. It is therefore helpful to get a quick, clear view of their key features. This paper reviews an ample set of tools for automatic grading of programming assignments. They are divided into the most important mature tools, which have remarkable features, and those built recently, with new features. The review includes the definition and description of key features, e.g. supported languages, technology used, infrastructure, etc. The two kinds of tools allow a temporal comparative analysis. This analysis shows good improvements in this research field, including security, broader language support, plagiarism detection, etc. On the other hand, the lack of a grading model for assignments is identified as an important gap in the reviewed tools. Thus, a characterization of the evaluation metrics used to grade programming assignments is provided as a first step toward such a model. Finally, new paths in this research field are proposed.
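A hypothetical sketch of the kind of grading model the review identifies as missing, combining evaluation metrics into a grade (metric names, weights and the penalty scheme are illustrative assumptions, not taken from the reviewed tools):

    # Illustrative metric weights; a real model would calibrate these.
    WEIGHTS = {"functionality": 0.6, "style": 0.2, "efficiency": 0.2}

    def grade(metrics, plagiarism_penalty=0.0):
        # metrics: dict mapping metric name -> score in [0, 10]
        base = sum(w * metrics.get(m, 0.0) for m, w in WEIGHTS.items())
        return max(0.0, base * (1.0 - plagiarism_penalty))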
Abstract:
One of the requirements that structures must meet is to guarantee their durability, so that they remain in service throughout the working life for which they were designed. To achieve this goal, building standards and codes have included specifications for the design of concrete structures based on different exposure classes, depending on the origin and magnitude of the environmental aggressiveness. In severely aggressive environments, one of the specifications the concrete must meet is a permeability below the maximum values set for the exposure class; when water penetration on specimens is taken as the reference test, the penetration front is analyzed by limiting the average and maximum penetration depths. In addition to the design conditions for each exposure class, which mainly regulate the concrete mix in terms of the water/cement ratio, the minimum cement content and the minimum concrete cover of the steel bars, during their working life structures may be subject to various unforeseen actions. As a result, the concrete's internal microstructure may be affected, changing its permeability and strength, and possibly altering the durability originally specified. It is a known fact that concrete under compressive loads sustained over a long time suffers a fatigue-like loss of strength due to increased micro-cracking. Considering the relationship between permeability and micro-cracking of concrete, an increase in permeability may be expected in concrete samples that have been pre-compressed for a long period of time. Previous studies of permeability in pre-compressed concrete have analyzed only short compression periods, and have therefore not captured the long-term effect of sustained compression on permeability. This Thesis investigates the permeability and tensile strength of concrete samples that have been previously compressed under loads sustained for different periods of time. The goal is to understand their evolution as a function of the time exposed to compression. The research variables also include the type of concrete according to the dosage used, designed for low, medium or high environmental aggressiveness, and the amount of compression applied in relation to the failure load. Results of the experimental tests showed that permeability increases significantly over the time of pre-compression. Depending on the initial value of permeability, this change could make the concrete no longer meet the original permeability restrictions and therefore affect its durability. The investigation also confirmed the influence of the time of pre-compression on tensile strength, with a significant decrease of strength in the worst cases tested. These issues must be taken into consideration, as they affect the bearing capacity of the material and other key features such as the anchorage of steel bars in reinforced and pre-stressed concrete.
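A small illustration of the water-penetration acceptance check described above (the 50/30 mm limits are example values only; the governing code fixes them per exposure class):

    # Assumed example limits for the penetration front, in mm.
    MAX_DEPTH_MM, MEAN_DEPTH_MM = 50.0, 30.0

    def passes_water_penetration(depths_mm):
        # depths_mm: measured penetration-front depths for one specimen set
        mean_depth = sum(depths_mm) / len(depths_mm)
        return max(depths_mm) <= MAX_DEPTH_MM and mean_depth <= MEAN_DEPTH_MM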
Abstract:
Background: This research addresses first and foremost the replication, and secondarily the synthesis, of software engineering (SE) experiments. Replication is impossible without access to all the details of the original experiment. But the description of experiments is usually incomplete, because knowledge is tacit and because of other problems: there is no standard reporting format, there are hardly any tools to support the generation of experimental reports, etc. This means that the original experiment cannot be reproduced exactly. These issues place considerable constraints on experimenters' options for carrying out replications and, ultimately, for synthesizing experiments. Aim: The aim of the research is to formalize the SE experimental process in order to facilitate the communication of information among experimenters. Context: This PhD research was developed within the empirical software engineering research group (GrISE) at the Universidad Politécnica de Madrid (UPM)'s School of Computer Engineering (ETSIINF), as part of project TIN2011-23216, "Technologies for Software Engineering Experiment Replication and Synthesis", funded by the Spanish Government. The GrISE research group fulfils all the necessary requirements (an established family of experiments, with at least three experimental lines and lengthy replication experience: 16 replications prior to 2011 in the software testing techniques line) and provides favourable conditions for the research to be conducted in the best possible way, such as full access to its information. Research Method: We opted for Action Research (AR) as the research method best suited to the characteristics of the investigation. Results were generated through successive rounds of AR addressing specific communication problems among experimenters. Results: The conceptual model of the experimental cycle was formalized from the viewpoint of the three key roles that represent experimenters in the experimental process: research manager, experiment manager and senior experimenter. The model of the experimental cycle itself was formalized by means of a workflow and a process diagram. In tandem with the formalization of the SE experimental process, ISRE (Infrastructure for Sharing and Replicating Experiments), a proof of concept of an SE experimentation support environment, was developed. Finally, guidelines for developing SE experimentation support environments were designed, based on a study of the key features that the models of experimentation support tools for different experimental disciplines have in common. Conclusions: The key contribution of this research is the formalization of the SE experimental process. GrISE experimenters found both the models representing the formalization of the experimental cycle and the ISRE tool, built in order to evaluate the models, satisfactory. In order to further validate the formalization, this study should be replicated at other research groups representative of the experimental SE community. Future Research Lines: The achievement of the aims, together with the resulting findings, has led to new research lines, which are as follows: (1) assess the feasibility of building a mechanism to help experimenters collaboratively make their tacit knowledge explicit, based on debate and consensus; (2) continue the empirical research at the same research group in order to cover the remainder of the experimental cycle (for example, new experiments, results synthesis, etc.); (3) replicate the research process at other ESE research groups; and (4) update the technology of the proof of concept so that it meets the constraints and needs of a real research environment.
Abstract:
During the last few decades, new imaging techniques like X-ray computed tomography have made available rich and detailed information of the spatial arrangement of soil constituents, usually referred to as soil structure. Mathematical morphology provides a plethora of mathematical techniques to analyze and parameterize the geometry of soil structure. They provide a guide to design the process from image analysis to the generation of synthetic models of soil structure in order to investigate key features of flow and transport phenomena in soil. In this work, we explore the ability of morphological functions built over Minkowski functionals with parallel sets of the pore space to characterize and quantify pore space geometry of columns of intact soil. These morphological functions seem to discriminate the effects on soil pore space geometry of contrasting management practices in a Mediterranean vineyard, and they provide the first step toward identifying the statistical significance of the observed differences.
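A sketch of such morphological functions (Minkowski-style measurements evaluated on parallel sets, i.e. successive dilations of the pore phase; the surface term is a crude voxel-face count, and none of this is the study's actual pipeline):

    import numpy as np
    from scipy import ndimage
    from skimage.measure import euler_number

    def minkowski_vs_dilation(pores, radii):
        # pores: 3-D boolean array (True = pore voxel); radii: dilation steps
        results = []
        for r in radii:
            par = ndimage.binary_dilation(pores, iterations=r) if r else pores
            volume = int(par.sum())                 # pore volume (voxel count)
            # exposed voxel faces: 6*volume minus twice the face-adjacent pairs
            pairs = (np.logical_and(par[:-1], par[1:]).sum()
                     + np.logical_and(par[:, :-1], par[:, 1:]).sum()
                     + np.logical_and(par[:, :, :-1], par[:, :, 1:]).sum())
            surface = 6 * volume - 2 * int(pairs)   # crude surface-area proxy
            chi = euler_number(par, connectivity=3) # topological descriptor
            results.append((r, volume, surface, chi))
        return results

Plotting each functional against the dilation radius r yields the morphological functions used to compare the pore-space geometry of the soil columns.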
Abstract:
Water resources management has become a challenge for present and future societies, in view of a growing freshwater demand driven by population growth, economic development and the potential effects of global warming. Water policy in Spain has focused since the 1960s on the construction of hydraulic infrastructure intended to dampen flow variability and grant water availability throughout the year. As a consequence, natural flow regimes have been deeply altered, and the impacts on fluvial and riparian ecosystems have been caused and aggravated. Since the 1990s, however, society's interest in preserving these ecosystems has grown, and it is now accepted that maintaining healthy freshwater ecosystems requires environmental flow regimes based on natural flow variability. The natural flow regime paradigm (Richter et al. 1996, Poff et al. 1997) rests on the hypothesis that freshwater ecosystems are made up of elements adapted to natural flow conditions, and that any change in these conditions can provoke deep impacts on the whole system. The environmental flow concept consists of designing a flow regime that emulates the main characteristics of the natural regime, so that ecosystem structure, functions and services are maintained. The ELOHA (Ecological Limits of Hydrological Alteration) framework aims to identify the key features of the natural flow regime (NFR) needed to maintain and preserve healthy freshwater and riparian ecosystems, and to quantify thresholds of alteration of these flow features according to their ecological impacts. This thesis describes the application of the ELOHA framework in the Ebro River basin, the second largest basin in Spain and one that is highly regulated for human demands; only the headwater tributaries of the Ebro still have a completely unimpaired flow regime. The thesis has six chapters, and the process is described step by step. The first chapter introduces the origin of the environmental flow concept and the ELOHA framework. The second chapter describes the study area. The third chapter develops a classification of the NFRs in the basin, based on minimally altered flow data series and using exclusively hydrological parameters. Six NFR types were found in the basin: continental Mediterranean-pluvial, nivo-pluvial, continental Mediterranean-pluvial with a groundwater-dominated baseflow pattern, pluvio-oceanic, pluvio-nival-oceanic and Mediterranean. The fourth chapter develops a regionalization of the six NFR types across the basin using climatic and physiographic variables, extrapolating the NFR type to sites without unaltered flow data. The geographical pattern obtained from the regionalization was consistent with the pattern obtained from the hydrological classification. 
The fifth chapter presents the biological validation of both classification steps, the hydrological classification and the posterior extrapolation. When the aim of a flow classification is to manage water resources according to ecosystem requirements, validation based on biological data is compulsory, since the flow regime types will serve as management units for maintaining fluvial ecosystems. We found significant differences in reference macroinvertebrate communities in five of the six NFR types identified in the Ebro River basin. Finally, in the sixth chapter, we explored the existence of significant and explicative flow alteration-ecological response relationships (FA-E curves) within NFR types in the Ebro River basin. The aim of these curves is to find thresholds of hydrological alteration (ELOHAs) that preserve healthy freshwater ecosystems; ELOHA values were set in three of the six NFR types identified in the basin. During the development of this thesis, inadequate biological monitoring in the Ebro River basin was identified. The design and establishment of appropriate monitoring arrangements is a critical final step in the assessment and implementation of environmental flows. Cause-effect relationships between hydrology and macroinvertebrate community condition are the principal data that sustain FA-E curves; therefore, biological sampling sites must be located close to gauging stations, so that the effects of external factors are minimized. The scarcity of paired hydro-biological data in the basin prevented us from applying the ELOHA method to all NFR types.
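A minimal sketch of how an alteration threshold (an ELOHA) can be read off a fitted flow alteration-ecological response curve (the linear model, the good-status boundary and all data below are hypothetical):

    import numpy as np

    alteration = np.array([5, 10, 20, 35, 50, 70, 85.0])   # % deviation from natural flow
    biotic_index = np.array([0.95, 0.92, 0.85, 0.74,
                             0.60, 0.45, 0.30])            # e.g. observed/expected ratio
    slope, intercept = np.polyfit(alteration, biotic_index, 1)
    good_status = 0.7                                      # assumed "good state" boundary
    eloha = (good_status - intercept) / slope              # alteration at the boundary
    print(f"suggested alteration limit: {eloha:.0f} %")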