Abstract:
Previous research studies and operational trials have shown that, using the airborne Required Time of Arrival (RTA) function, an individual aircraft can accurately meet an assigned time at a metering or merge point. This study goes a step further and investigates the application of RTA to a real sequence of aircraft arriving into Melbourne, Australia. Assuming that the actual arrival times were Controlled Times of Arrival (CTAs) assigned to each aircraft, the study examines whether the airborne RTA solution would work. Three scenarios were compared: a baseline scenario consisting of the actually flown trajectories over a two-hour period into Melbourne, a scenario in which the sequential landing slot times of the baseline scenario were assigned as CTAs, and a third scenario in which the landing slots could be freely redistributed to the inbound traffic as CTAs. The research found that pressure on the terminal area would sometimes require aircraft to lose more time than the RTA capability allows. Using linear holding as an additional measure to absorb large delays, up to 500 NM (5%) of total flown track and 1300 kg (3%) of total fuel consumption could be saved in the scenario with landing slots freely distributed as CTAs, compared to the baseline scenario. Assigning CTAs in an arrival sequence requires the ground system to have an accurate trajectory predictor so that it can propose additional delay measures (path stretching, linear holding) where necessary. Reducing the achievable time window of the aircraft to add control margin to the RTA function had a negative impact and increased the amount of intervention other than speed control required to solve the sequence. It was concluded that the RTA capability is not a complete solution but a tool to assist in managing the increasing complexity of air traffic.
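As a rough illustration of the decision the abstract describes, the sketch below checks a hypothetical CTA against the achievable time window reported by the RTA function and returns the residual delay that would have to be absorbed by other measures (linear holding, path stretching). The function name, window bounds and times are invented for illustration; they are not taken from the study.

```python
# Minimal sketch (hypothetical values): deciding whether an assigned CTA at the
# metering point can be met by the airborne RTA function alone, or whether
# additional delay measures (linear holding, path stretching) are needed.

def delay_beyond_rta(eta_min_s, eta_max_s, cta_s):
    """Return the extra delay (seconds) that speed control cannot absorb.

    eta_min_s / eta_max_s bound the achievable time window reported by the
    FMS RTA function; cta_s is the Controlled Time of Arrival assigned by
    the ground system. All times are seconds past a common reference.
    """
    if eta_min_s <= cta_s <= eta_max_s:
        return 0.0                      # RTA speed control alone is sufficient
    if cta_s > eta_max_s:
        return cta_s - eta_max_s        # delay to absorb by holding/path stretching
    return cta_s - eta_min_s            # negative: CTA is earlier than achievable


# Example: aircraft can arrive between 10:00:00 and 10:06:30 on speed alone,
# but the landing slot (CTA) is 10:09:00 -> 150 s must be absorbed otherwise.
extra = delay_beyond_rta(eta_min_s=0.0, eta_max_s=390.0, cta_s=540.0)
print(f"extra delay beyond RTA capability: {extra:.0f} s")
```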
Abstract:
In this paper, multiple regression analysis is used to model the top-of-descent (TOD) location of user-preferred descent trajectories computed by the flight management system (FMS) on over 1000 commercial flights into Melbourne, Australia. In addition to recording TOD, the cruise altitude, final altitude, cruise Mach, descent speed, wind, and engine type were also identified for use as the independent variables in the regression analysis. Both first-order and second-order models are considered, and cross-validation, hypothesis testing, and additional analysis are used to compare them. This identifies the models that should give the smallest errors if used to predict TOD location for new data in the future. A model that is linear in TOD altitude, final altitude, descent speed, and wind gives an estimated standard deviation of 3.9 nmi for TOD location given the trajectory parameters, which means about 80% of predictions would have an error of less than 5 nmi in absolute value. This accuracy is better than that demonstrated by other ground automation predictions using kinetic models. Furthermore, this approach would enable online learning of the model. Additional data or further knowledge of the algorithms would be necessary to conclude definitively that no second-order terms are appropriate. Possible applications of the linear model are described, including enabling arriving aircraft to fly optimized descents computed by the FMS even in congested airspace.
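The sketch below reproduces, on synthetic data, the form of the model described above: an ordinary least-squares fit of TOD location against the predictors named in the abstract, followed by the arithmetic that links a 3.9 nmi standard deviation to roughly 80% of errors falling within 5 nmi. All coefficients and data are invented; only the model structure and the error-bound calculation follow the abstract.

```python
# Sketch of the modelling idea with synthetic data: an ordinary least-squares
# fit of TOD distance against the predictors named in the abstract, plus the
# error-bound arithmetic. Numbers are illustrative, not the study's data.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n = 1000
cruise_alt = rng.uniform(300, 400, n)     # FL
final_alt  = rng.uniform(30, 100, n)      # FL
desc_speed = rng.uniform(260, 300, n)     # kt
wind       = rng.normal(0, 30, n)         # kt along-track

# Hypothetical linear ground truth plus noise, just to exercise the fit
tod = 0.3*cruise_alt - 0.25*final_alt + 0.1*desc_speed + 0.05*wind \
      + rng.normal(0, 3.9, n)

X = np.column_stack([np.ones(n), cruise_alt, final_alt, desc_speed, wind])
beta, *_ = np.linalg.lstsq(X, tod, rcond=None)
resid = tod - X @ beta
sigma = resid.std(ddof=X.shape[1])
print("estimated sigma (nmi):", round(sigma, 2))

# With sigma ~= 3.9 nmi and roughly normal errors,
# P(|error| < 5 nmi) = 2*Phi(5/3.9) - 1 ~= 0.80, matching the abstract.
p_within_5 = erf(5 / (3.9 * sqrt(2)))
print("P(|error| < 5 nmi):", round(p_within_5, 2))
```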
Abstract:
Current design practice recommends complying with the capacity protection principle, which pays special attention to ensuring an elastic response of the foundations under ground motion events. However, for typologies such as elevated reinforced concrete (RC) pile-cap foundations, this design criterion may lead to conservative designs with excessively high construction costs. An RC elevated pile-cap foundation is a system formed by a group of partially embedded piles, connected through an above-ground stayed cap and embedded in soil. When subjected to ground motions, the piles undergo large bending moments that make it difficult to keep their behavior within the elastic range of deformations. To analyze the nonlinear behavior of elevated pile-cap foundations in depth, a cyclic loading test was performed on a concrete elevated pile-cap foundation specimen with a 2x3 pile configuration. Two results of this test, the failure mechanism and the ductile behavior, were used to calibrate a numerical model built in the OpenSees framework by means of a pushover analysis. The calibrated numerical model enabled an in-depth study of the seismic nonlinear response of this kind of foundation. A parametric analysis was carried out for this purpose, aiming to study how sensitive RC elevated pile-cap foundations are to variations in pile diameter, reinforcement ratios, external loads, soil density or multilayer soil configurations. This analysis provided a set of ductility factors that can be used as a reference in design practice and that correspond to each of the cases analyzed.
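Since the study's output is a set of ductility factors, the following sketch shows one common way such a factor is extracted from a pushover curve: an equal-energy bilinear idealization gives the yield displacement, and the ultimate displacement is taken at a fixed post-peak strength drop. The curve and thresholds are invented and do not represent the calibrated OpenSees model of the study.

```python
# Illustrative only: displacement ductility factor from an idealized pushover
# curve, mu = delta_u / delta_y. The numbers are made up and are not the
# calibrated OpenSees results of the study.
import numpy as np

def ductility_factor(disp, force, drop_ratio=0.8):
    """mu = ultimate displacement / yield displacement.

    Ultimate displacement is taken where the force first drops below
    drop_ratio * peak force after the peak; yield displacement comes from an
    equal-energy bilinear idealization up to the peak.
    """
    disp, force = np.asarray(disp, float), np.asarray(force, float)
    i_peak = int(np.argmax(force))
    f_peak = force[i_peak]

    # ultimate point: first post-peak sample below drop_ratio * peak
    post = np.where(force[i_peak:] < drop_ratio * f_peak)[0]
    d_u = disp[i_peak + post[0]] if post.size else disp[-1]

    # area under the curve up to the peak (trapezoid rule), then the
    # equal-energy elastic-perfectly-plastic yield displacement
    area = np.sum(0.5 * (force[1:i_peak + 1] + force[:i_peak])
                  * np.diff(disp[:i_peak + 1]))
    d_y = 2.0 * (disp[i_peak] - area / f_peak)
    return d_u / d_y


d = [0, 5, 10, 20, 35, 50, 65, 80, 95]          # mm
f = [0, 40, 70, 95, 100, 98, 90, 82, 70]        # kN
print("mu =", round(ductility_factor(d, f), 2))
```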
Abstract:
Because of the high number of crashes occurring on highways, new tools are needed to help understand their causes. This research explores the use of a geographic information system (GIS) for an integrated analysis that takes into account two accident-related factors: design consistency (DC), based on vehicle speed, and available sight distance (ASD), based on visibility. Both factors require specific GIS software add-ins, which are explained. Digital terrain models (DTMs), vehicle paths, road centerlines, a speed prediction model, and crash data are integrated in the GIS. The usefulness of this approach has been assessed through a study of more than 500 crashes. From a regularly spaced grid, the terrain (bare ground) was modeled through a triangulated irregular network (TIN). The total length of the roads analyzed exceeds 100 km. Results show that DC and ASD could be related to crashes in approximately 4% of cases. To illustrate the potential of GIS, two crashes are analyzed in full: a car rollover after running off the road on the right side and a rear-end collision between two moving vehicles. Although this procedure uses two software add-ins that are available only for ArcGIS, the study gives a practical demonstration of the suitability of GIS for conducting integrated road safety studies.
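The available-sight-distance idea can be illustrated with a one-dimensional toy version of the GIS computation: walk along the vehicle path and find, at each station, the farthest point ahead whose sight line clears the ground profile sampled from the terrain model. The profile, eye height and target height below are assumed values, not those of the ArcGIS add-ins used in the study.

```python
# Toy illustration of an available-sight-distance (ASD) check: find the
# farthest point ahead for which the driver-to-target sight line clears the
# ground profile. Sampling the profile from a real TIN/DTM is assumed to have
# been done already; the arrays below are synthetic.
import numpy as np

def available_sight_distance(chainage, ground_z, eye_h=1.1, target_h=0.5):
    """ASD (m) at each station of a 1-D ground profile along the path."""
    asd = np.zeros_like(chainage, dtype=float)
    for i in range(len(chainage)):
        eye = np.array([chainage[i], ground_z[i] + eye_h])
        far = i
        for j in range(i + 1, len(chainage)):
            tgt = np.array([chainage[j], ground_z[j] + target_h])
            # sight line blocked if any intermediate ground point lies above it
            t = (chainage[i + 1:j] - eye[0]) / (tgt[0] - eye[0])
            line_z = eye[1] + t * (tgt[1] - eye[1])
            if np.any(ground_z[i + 1:j] > line_z):
                break
            far = j
        asd[i] = chainage[far] - chainage[i]
    return asd

x = np.arange(0, 1000, 20.0)           # stations every 20 m
z = 2.0 * np.sin(x / 150.0)            # gentle crest/sag profile
print(available_sight_distance(x, z)[:5])
```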
Abstract:
Optimizing the Quality of Experience (QoE) of HTTP adaptive video streaming (HAS) is receiving increasing attention. The growth of interest is mainly caused by the fact that current HAS solutions are not QoE-driven, i.e. end-user quality perception is not an integral part of the adaptation logic. However, obtaining the necessary reliable ground truths on HAS QoE faces substantial challenges, since the subjective video quality assessment methodologies proposed by current standards are not well suited to dealing with the time-varying quality that is characteristic of HAS. This thesis investigates the influence of dynamic quality adaptation on the QoE of streaming video by means of subjective evaluation approaches. Based on a comprehensive survey of related work on subjective HAS QoE assessment, the associated challenges and open research questions are highlighted and discussed. As a result, two main research directions are selected for further investigation: analysis of the QoE impact of different technical adaptation parameters, and investigation of testing methodologies suitable for HAS QoE evaluation. To investigate the related research questions, a set of laboratory experiments was conducted using different subjective testing methodologies. Our statistical analysis demonstrates that not all assumptions and claims reported in the literature are robust, particularly as regards the QoE impact of switching frequency, smooth versus abrupt switching, and quality oscillation. On the other hand, our results confirm the influence of other parameters, such as chunk length and switching amplitude, on perceived quality. We also show that taking the objective characteristics of the content into account can be beneficial for improving the adaptation viewing experience. In addition, all of these findings are validated by means of an extensive cross-experimental analysis involving external laboratory and crowdsourcing studies. Finally, to address the methodological aspects of subjective QoE testing, a comparison was performed between the experimental results obtained from a standardized short-stimuli ACR method and a semi-continuous method developed for the assessment of long video sequences. Despite some observed differences, the statistical analysis does not show any significant effect of the testing methodology. Similarly, although the influence of audio presence on the evaluation of video-related degradations is perceived, no statistically significant effect of audio presence could be found. Motivated by these findings (no effect of testing method or audio presence), a subsequent analysis was performed to investigate the impact of multiple statistical comparisons on significance levels, which increases the likelihood of Type-I errors (false positives). Our results show that, in order to obtain robust effects from the statistical analysis of subjective results, the number of test subjects must be increased well beyond the sample sizes proposed by current quality assessment standards and recommendations.
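The closing point about Type-I errors and sample sizes can be illustrated with a standard power calculation: once a Bonferroni-style correction is applied across many pairwise comparisons, the number of subjects needed to keep the same power grows well beyond typical subjective-test panel sizes. The effect size, power target and number of comparisons below are illustrative assumptions, not values from the thesis.

```python
# Sketch of the two statistical points above: (i) a Bonferroni-style correction
# when many pairwise comparisons are run, and (ii) how the required number of
# subjects grows once the per-test significance level is tightened.
from statsmodels.stats.power import TTestIndPower

m = 15                                # assumed number of pairwise comparisons
alpha_nominal = 0.05
alpha_corrected = alpha_nominal / m   # Bonferroni-adjusted per-test level

power_calc = TTestIndPower()
for alpha in (alpha_nominal, alpha_corrected):
    n = power_calc.solve_power(effect_size=0.5, alpha=alpha, power=0.8)
    print(f"alpha={alpha:.4f} -> ~{n:.0f} subjects per group")
# Typical output: about 64 subjects per group at alpha=0.05, and nearly double
# that once the correction is applied -- well beyond usual subjective-test
# panel sizes.
```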
Abstract:
Intercontinental Ballistic Missiles (ICBMs) are capable of delivering a nuclear warhead more than 5,000 km from their launch base. With the lethal power of a nuclear warhead, a whole city could be wiped out by a single weapon, causing millions of deaths. This means that the threat posed to any country by a single ICBM captured by a terrorist group or launched by a 'rogue' state is huge, and it is increasing as more countries acquire nuclear and advanced launcher capabilities. To suppress, or at least reduce, this threat the United States created the National Missile Defense system, which involved, among other systems, the development of long-range interceptors whose aim is to destroy incoming ballistic missiles in their midcourse phase. Ballistic missile defense is a high-profile topic that has recently been the focus of political controversy, when the U.S. decided to extend the system to Europe over Russia's opposition; however, the technical characteristics of the system are largely unknown to the general public. Intercepting an ICBM with a long-range interceptor missile, as intended within the Ground-Based Missile Defense System of the American National Missile Defense (NMD), implies a series of problems of enormous complexity: - The incoming missile has to be detected almost immediately after launch. - The incoming missile has to be tracked along its trajectory with great accuracy. - The interceptor missile has to implement a fast and accurate guidance algorithm in order to reach the incoming missile as soon as possible. - The Kinetic Kill Vehicle deployed by the interceptor boost vehicle has to be able to detect the reentry vehicle once it has been released by the ICBM, when it offers a very low infrared signature, in order to perform a final rendezvous manoeuvre. - The Kinetic Kill Vehicle has to be able to discriminate the reentry vehicle from the surrounding debris and decoys. - The Kinetic Kill Vehicle has to be able to implement an accurate guidance algorithm in order to perform a kinetic interception (direct collision) with the reentry vehicle at relative speeds of more than 10 km/s. All these problems are dealt with simultaneously by the Ground-Based Missile Defense System, which is developing very complex and expensive sensors, communications and control centers, and long-range interceptors (the Ground-Based Interceptor missile), including a Kinetic Kill Vehicle. Among all the technical challenges involved in this interception scenario, this thesis focuses on the algorithms required for the guidance of the interceptor missile and the Kinetic Kill Vehicle so as to achieve a direct collision with the ICBM. These guidance algorithms are analysed in depth in part III, where conventional guidance strategies are reviewed and optimal guidance algorithms are developed for this interception problem. Generating a realistic simulation of the interception scenario between an ICBM and a Ground-Based Interceptor designed to destroy it was considered necessary in order to compare different guidance strategies with meaningful results. As a consequence, a highly representative simulator of an ICBM and a Kill Vehicle has been implemented, as detailed in part II, and the development of these simulators has also become one of the purposes of this thesis.
In summary, the main purposes of this thesis are: - To develop a highly representative simulator of an interception scenario between an ICBM and a Kill Vehicle launched from a Ground-Based Interceptor. - To analyse the main existing guidance algorithms for both the ascent phase and the terminal phase of the missiles; novel conclusions are drawn from these analyses. - To develop original optimal guidance algorithms for the interception problem. - To compare the results obtained using the different guidance strategies, assess the behaviour of the optimal guidance algorithms, and analyse the feasibility of the Ballistic Missile Defense system in terms of guidance (part IV). As a secondary objective, a general overview of the state of the art in ballistic missiles and anti-ballistic missile defence is provided (part I).
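For context on the terminal guidance problem, the sketch below implements the classical planar proportional-navigation law, the textbook baseline against which optimal guidance schemes are usually compared. It is a generic illustration with made-up states and gain, not the thesis's simulator or its optimal algorithms.

```python
# Minimal planar proportional-navigation (PN) sketch: commanded lateral
# acceleration a = N * Vc * LOS_rate. Generic textbook illustration only.
import numpy as np

def pn_acceleration(r_t, v_t, r_m, v_m, N=4.0):
    """Commanded lateral acceleration (2-D) for missile m against target t."""
    r_rel = r_t - r_m                     # line-of-sight (LOS) vector
    v_rel = v_t - v_m
    r2 = float(r_rel @ r_rel)
    los_rate = (r_rel[0] * v_rel[1] - r_rel[1] * v_rel[0]) / r2   # rad/s
    v_c = -float(r_rel @ v_rel) / np.sqrt(r2)                     # closing speed
    a_mag = N * v_c * los_rate
    # apply the command perpendicular to the missile velocity
    u = v_m / np.linalg.norm(v_m)
    return a_mag * np.array([-u[1], u[0]])

# crossing-target example (positions in m, velocities in m/s, all invented)
a_cmd = pn_acceleration(r_t=np.array([40e3, 10e3]), v_t=np.array([-3000.0, 0.0]),
                        r_m=np.array([0.0, 0.0]),   v_m=np.array([4000.0, 500.0]))
print("commanded acceleration [m/s^2]:", a_cmd)
```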
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem, to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% with respect to the simulation-based reference values. A known drawback of interval-extension techniques is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise sources for each group independently, and then combines the results. In this way, the number of noise sources present at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although a given confidence level must be guaranteed for the final results of the optimization process, more relaxed levels, and therefore considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small/medium-sized problems. Finally, this work introduces HOPLITE, an automated, flexible and modular framework for quantization that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
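A toy version of the Monte-Carlo-driven greedy word-length search discussed above is sketched below: every signal starts wide, and the search repeatedly trims one fractional bit wherever the simulated output error stays within a budget. The two-tap "system", signal names and error budget are invented; real flows such as HOPLITE wrap far more elaborate models and cost functions.

```python
# Toy greedy word-length search driven by Monte-Carlo simulation.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 20000)                 # Monte-Carlo input samples
ref = 0.6 * x + 0.3 * np.roll(x, 1)           # double-precision reference

def quantize(v, frac_bits):
    step = 2.0 ** -frac_bits
    return np.round(v / step) * step

def output_error(wl):                         # wl = {signal: fractional bits}
    y = quantize(0.6 * quantize(x, wl["x"]), wl["m1"]) \
        + quantize(0.3 * quantize(np.roll(x, 1), wl["x"]), wl["m2"])
    return np.sqrt(np.mean((y - ref) ** 2))

def cost(wl):                                 # crude proxy for hardware cost
    return sum(wl.values())

wl = {"x": 16, "m1": 16, "m2": 16}
budget = 1e-3                                 # allowed output RMS error
improved = True
while improved:
    improved = False
    for sig in sorted(wl, key=lambda s: -wl[s]):   # try widest signals first
        trial = dict(wl)
        trial[sig] -= 1
        if trial[sig] > 0 and output_error(trial) <= budget:
            wl = trial
            improved = True
print("final word-lengths:", wl, "cost:", cost(wl), "rms:", output_error(wl))
```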
Abstract:
Cover crop selection should be oriented towards the achievement of specific agrosystem benefits. The cover crop, catch crop, green manure and fodder uses were identified as possible targets for selection. The objective was to apply multi-criteria decision analysis to evaluate different species (Hordeum vulgare L., Secale cereale L., ×Triticosecale Wittm., Sinapis alba L., Vicia sativa L.) and cultivars according to their suitability for each of these uses. A field trial with 20 cultivars of the five species was conducted in Central Spain over two seasons (October–April). Measurements of ground cover, crop biomass, N uptake, N derived from the atmosphere, C/N ratio, dietary fiber content and residue quality were collected. Aggregation of these variables through utility functions allowed species and cultivars to be ranked for each use. Grasses were the most suitable for the cover crop, catch crop and fodder uses, while the vetches were the best as green manures. The mustard attained high ranks as cover and catch crop in the first season, but dropped in the second due to low performance in cold winters. Mustard and vetches obtained worse rankings than grasses as fodder. Hispanic was the most suitable barley cultivar as cover and catch crop, and Albacete as fodder. The triticale Titania attained the highest rank as cover crop, catch crop and fodder. The vetches Aitana and BGE014897 showed good aptitude as green manures and catch crops. This analysis allowed comparison among species and cultivars and may provide relevant information for cover crop selection and management.
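The aggregation step can be illustrated with a minimal weighted-utility ranking: each measured variable is mapped to a 0-1 score and combined with use-specific weights to rank cultivars for each use. The scores and weights below are invented for illustration and are not the utility functions of the study.

```python
# Sketch of multi-criteria aggregation: weighted-sum utilities rank cultivars
# per use. All scores and weights are invented.
import numpy as np

cultivars = ["Hispanic (barley)", "Titania (triticale)", "Aitana (vetch)"]
# columns: ground cover, N uptake, N fixed from atmosphere, fibre content
scores = np.array([[0.9, 0.7, 0.0, 0.6],
                   [0.8, 0.8, 0.0, 0.7],
                   [0.5, 0.6, 0.9, 0.3]])

weights = {
    "cover crop":   np.array([0.6, 0.2, 0.0, 0.2]),
    "catch crop":   np.array([0.2, 0.7, 0.0, 0.1]),
    "green manure": np.array([0.1, 0.2, 0.7, 0.0]),
    "fodder":       np.array([0.1, 0.2, 0.0, 0.7]),
}

for use, w in weights.items():
    utility = scores @ w                       # weighted-sum utility function
    order = np.argsort(utility)[::-1]
    ranking = ", ".join(f"{cultivars[i]} ({utility[i]:.2f})" for i in order)
    print(f"{use:12s}: {ranking}")
```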
Abstract:
Frequency Response Analysis is a well-known technique for the diagnosis of power transformers, and it is currently being investigated for application to rotating electrical machines. This paper presents significant results on the application of Frequency Response Analysis to fault detection in the field winding of synchronous machines with static excitation. First, the influence of the rotor position on the frequency response is evaluated. Secondly, relevant test results are presented regarding ground fault and inter-turn fault detection in field windings at standstill, taking the value of the fault resistance into account. The paper also studies the applicability of Frequency Response Analysis to fault detection in field windings while the machine is rotating. This is an important feature because some defects only appear at rated speed. Several laboratory test results show the applicability of this fault detection technique in field windings at full speed with no excitation current.
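A common way to turn FRA traces into a fault indicator, and the kind of comparison implied above, is a correlation coefficient between a healthy reference trace and the trace measured on the winding under test; a marked drop flags a possible inter-turn or ground fault. The synthetic traces below are purely illustrative and do not reproduce the paper's measurements.

```python
# Illustration of comparing FRA traces: correlation between a baseline
# (healthy) magnitude response and a trace of the winding under test.
import numpy as np

f = np.logspace(2, 6, 400)                              # 100 Hz .. 1 MHz

def magnitude_db(f, shift=1.0):
    # synthetic resonant response; 'shift' mimics a fault moving a resonance
    return -20 * np.log10(np.abs(1 - (f / (2.0e5 * shift)) ** 2 + 1j * f / 8e4))

baseline = magnitude_db(f)
healthy  = magnitude_db(f, shift=1.002) + np.random.default_rng(2).normal(0, 0.1, f.size)
faulted  = magnitude_db(f, shift=1.15)  + np.random.default_rng(3).normal(0, 0.1, f.size)

for name, trace in (("healthy repeat", healthy), ("inter-turn fault", faulted)):
    cc = np.corrcoef(baseline, trace)[0, 1]
    print(f"{name:16s} correlation with baseline: {cc:.4f}")
```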
Abstract:
Kinetic anomalies in protein folding can result from changes of the kinetic ground states (D, I, and N), changes of the protein folding transition state, or both. The 102-residue protein U1A has a symmetrically curved chevron plot which seems to result mainly from changes of the transition state. At low concentrations of denaturant the transition state occurs early in the folding reaction, whereas at high denaturant concentration it moves close to the native structure. In this study we use this movement to follow continuously the formation and growth of U1A's folding nucleus by φ analysis. Although U1A's transition state structure is generally delocalized and displays a typical nucleation–condensation pattern, we can still resolve a sequence of folding events. However, these events are sufficiently coupled to start almost simultaneously throughout the transition state structure.
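For readers unfamiliar with φ analysis, the sketch below shows the standard relation it rests on: φ is the ratio of the mutation-induced destabilization of the transition state (obtained from the change in folding rate) to the destabilization of the native state. The rate constants and stability change used here are invented numbers, not data from the U1A study.

```python
# Standard phi-value relation with invented numbers:
# phi = ddG(TS-D) / ddG(N-D); phi ~ 1 means the probed contacts are
# native-like in the transition state, phi ~ 0 means unfolded-like.
import math

R, T = 8.314e-3, 298.0            # kJ/(mol*K), K

def phi_value(kf_wt, kf_mut, ddG_eq):
    """ddG_eq: change in equilibrium stability (N-D) on mutation, kJ/mol."""
    ddG_ts = -R * T * math.log(kf_mut / kf_wt)   # destabilization of TS vs D
    return ddG_ts / ddG_eq

# Example: mutation slows folding 5-fold and destabilizes the native state
# by 5 kJ/mol -> phi ~ 0.8, i.e. largely formed interactions in the TS.
print(round(phi_value(kf_wt=1000.0, kf_mut=200.0, ddG_eq=5.0), 2))
```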
Abstract:
Spectral changes in the photocycle of the photoactive yellow protein (PYP) are investigated by using ab initio multiconfigurational second-order perturbation theory at the experimentally determined structures available. Using the dark ground-state crystal structure [Genick, U. K., Soltis, S. M., Kuhn, P., Canestrelli, I. L. & Getzoff, E. D. (1998) Nature (London) 392, 206–209], the ππ* transition to the lowest excited state is related to the typical blue-light absorption observed at 446 nm. The different nature of the second excited state (nπ*) is consistent with the alternative route detected at 395-nm excitation. The results suggest the low-temperature photoproduct PYPHL as the most plausible candidate for the assignment of the cryogenically trapped early intermediate (Genick et al.). We cannot establish, however, a successful correspondence between the theoretical spectrum for the nanosecond time-resolved x-ray structure [Perman, B., Šrajer, V., Ren, Z., Teng, T., Pradervand, C., et al. (1998) Science 279, 1946–1950] and any of the spectroscopic photoproducts known to date. It is fully confirmed that the colorless light-activated intermediate recorded by millisecond time-resolved crystallography [Genick, U. K., Borgstahl, G. E. O., Ng, K., Ren, Z., Pradervand, C., et al. (1997) Science 275, 1471–1475] is protonated, nicely matching the spectroscopic features of the photoproduct PYPM. Overall, this contribution demonstrates that a combined analysis of high-level theoretical results and experimental data can be of great value in assigning the intermediates detected in a photocycle.
Abstract:
Many major weeds rely upon vegetative dispersal by rhizomes and seed dispersal by "shattering" of the mature inflorescence. We report molecular analysis of these traits in a cross between cultivated and wild species of Sorghum that are the probable progenitors of the major weed "johnsongrass." By restriction fragment length polymorphism mapping, variation in the number of rhizomes producing above-ground shoots was associated with three quantitative trait loci (QTLs). Variation in regrowth (ratooning) after overwintering was associated with QTLs accounting for additional rhizomatous growth and with QTLs influencing tillering. Vegetative buds that become rhizomes are similar to those that become tillers--one QTL appears to influence the number of such vegetative buds available, and additional independent genes determine whether individual buds differentiate into tillers or rhizomes. DNA markers described herein facilitate cloning of genes associated with weediness, comparative study of rhizomatousness in other Poaceae, and assessment of gene flow between cultivated and weedy sorghums--a risk that constrains improvement of sorghum through biotechnology. Cloning of "weediness" genes may create opportunities for plant growth regulation, in suppressing propagation of weeds and enhancing productivity of major forage, turf, and "ratoon" crops.
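The QTL associations reported above ultimately rest on marker-trait statistics; the sketch below shows the simplest form of such a test, comparing rhizome counts between the two genotype classes at a single RFLP marker. The counts are simulated, and real QTL mapping uses interval methods over the whole linkage map rather than a single t-test.

```python
# Generic single-marker association test of the kind underlying QTL detection:
# does rhizome number differ between the two RFLP genotype classes at a marker?
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
rhizomes_aa = rng.poisson(3, 60)      # progeny carrying the cultivated allele
rhizomes_bb = rng.poisson(7, 60)      # progeny carrying the wild allele

t, p = stats.ttest_ind(rhizomes_aa, rhizomes_bb, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2e}")    # small p suggests a QTL linked to the marker
```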
Abstract:
Decades of mixed messages from three federal agencies have left many Americans unaware of the hazards associated with the indiscriminate disposal of unwanted or expired medicines. For this Capstone project, a systematic review of state and federal regulations was undertaken to determine how these laws obstruct household pharmaceutical waste collection. In addition, a survey of 654 Atlanta residents was conducted to evaluate unwanted-medicine disposal habits, awareness of pharmaceutical compounds being detected in drinking water, surface water, and groundwater, and willingness to participate in a household pharmaceutical waste collection program. Survey responses were tabulated to provide overall results as well as breakdowns by age group, gender, and race. A household pharmaceutical waste collection plan was developed for the city and is included as an appendix.
Abstract:
This work presents a forensic analysis of buildings affected by mining subsidence, based on deformation data obtained by Differential Interferometry (DInSAR). The test site is the village of La Union (Murcia, SE Spain), where subsidence was triggered in an industrial area by the collapse of abandoned underground mine workings in 1998. In the first part of this work the study area was introduced and the spatial and temporal evolution of ground subsidence was described through the elaboration of a map of cracks in the buildings located within the affected area. In the second part, the evolution of the most significant cracks found in the most damaged buildings was monitored using biaxial extensometric units and inclinometers. This article describes the work performed in the third part, where DInSAR processing of satellite radar data available between 1998 and 2008 has made it possible to determine the spatial and temporal evolution of the deformation of all the buildings in the study area over a period for which no continuous in situ instrumental data are available. Additionally, the comparison of these results with the forensic data gathered in the 2005–2008 period reveals agreement between the damaged buildings, the buildings where extensometers registered significant crack movements, and the building deformation estimated from the radar data. As a result, it has been demonstrated that integrating DInSAR data into forensic analysis methodologies significantly improves the assessment of damage to buildings affected by mining subsidence.
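The radar-derived deformation used here comes from the basic DInSAR relation between unwrapped differential phase and line-of-sight displacement, projected to the vertical under a pure-subsidence assumption. The sketch below shows that conversion with assumed C-band wavelength, incidence angle and phase values; sign conventions depend on the processing chain.

```python
# Basic DInSAR conversion: unwrapped differential phase -> line-of-sight (LOS)
# displacement -> vertical movement, assuming purely vertical motion.
# Wavelength corresponds to a C-band sensor (~5.6 cm); phase and incidence
# values are invented for illustration.
import numpy as np

wavelength = 0.056                      # m, C-band
incidence = np.deg2rad(23.0)            # nominal incidence angle

phase = np.array([-2.1, -4.4, -8.9])    # unwrapped differential phase, rad
d_los = -wavelength * phase / (4 * np.pi)   # m; sign convention is processing-dependent
d_vertical = d_los / np.cos(incidence)      # project LOS onto the vertical

for p, dv in zip(phase, d_vertical * 1000):
    print(f"phase {p:+.1f} rad -> vertical displacement {dv:+.1f} mm")
```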
Abstract:
Subsidence related to multiple natural and human-induced processes affects an increasing number of areas worldwide. Although this phenomenon may involve surface deformation with 3D displacement components, negative vertical movement, either progressive or episodic, tends to dominate. Over the last decades, differential SAR interferometry (DInSAR) has become a very useful remote sensing tool for accurately measuring the spatial and temporal evolution of surface displacements over broad areas. This work discusses, from an end-user point of view, the main advantages and limitations of addressing active subsidence phenomena by means of DInSAR techniques. Special attention is paid to the spatial and temporal resolution, the precision of the measurements, and the usefulness of the data. The analysis focuses on the exploitation of DInSAR results for various ground subsidence phenomena (groundwater withdrawal, soil compaction, mining subsidence, evaporite dissolution subsidence, and volcanic deformation) with different displacement patterns in a selection of subsidence areas in Spain. Finally, a comparative cost study of the different techniques applied is presented.