328 results for Sieve



Abstract:

This data set contains measurements of total nitrogen from the main experiment plots of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the main experiment, 82 grassland plots of 20 × 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3, 4 functional groups). Plots were maintained by bi-annual weeding and mowing. Soil sampling and analysis: Stratified soil sampling was performed before sowing in April 2002. Five independent samples per plot were taken using a split-tube sampler with an inner diameter of 4.8 cm (Eijkelkamp Agrisearch Equipment, Giesbeek, the Netherlands). Soil samples were dried at 40°C and then segmented to a depth resolution of 5 cm, giving six depth subsamples per core. All samples were analyzed independently, and averaged values per depth layer are reported. Sampling locations were less than 30 cm apart from the sampling locations of other years. Subsequently, samples were dried at 40°C. All soil samples were passed through a sieve with a mesh size of 2 mm. Visible plant remains, which were rarely present, were removed using tweezers. Total nitrogen concentration was analyzed on ball-milled subsamples (milling time 4 min, frequency 30 s⁻¹) with an elemental analyzer at 1150°C (Elementaranalysator vario Max CN; Elementar Analysensysteme GmbH, Hanau, Germany).
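
The aggregation step described above (five independent cores per plot, six 5-cm depth layers, averages reported per layer) can be summarised in a short sketch. This is only an illustration of the described processing; the table layout, column names and values below are invented, not taken from the actual data files.

```python
import pandas as pd

# hypothetical long-format table: one row per plot, core and 5-cm depth layer
df = pd.DataFrame({
    "plot":        ["B1A01"] * 4,
    "core":        [1, 1, 2, 2],
    "depth_layer": ["0-5", "5-10", "0-5", "5-10"],
    "total_N_pct": [0.31, 0.24, 0.29, 0.22],
})

# the data set reports the mean of the independently analyzed cores
# per plot and depth layer
reported = (df.groupby(["plot", "depth_layer"], as_index=False)["total_N_pct"]
              .mean())
print(reported)
```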


Abstract:

The Tara Oceans Expedition (2009-2013) sampled the world oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to study the genetic, morphological and functional diversity of plankton. The present data set provides environmental context to all samples from the Tara Oceans Expedition (2009-2013), concerning mesoscale features related to the sampling date, time and location. It includes calculated averages of measurements made concurrently at the sampling location and depth, and calculated averages from climatologies (AMODIS, VGPM) and satellite products.


Abstract:

This thesis addresses methodologies for computing the collision risk of satellites. Minimising collision risk must be approached from two distinct points of view. From the operational point of view, it is necessary to filter, among all the objects sharing space with an operational satellite, those that may give rise to an encounter. Since the orbits of the operational object and of the object involved in the collision are not perfectly known, the encounter geometry and the collision risk must be evaluated. Depending on that geometry or risk, an avoidance manoeuvre may be necessary to prevent the collision. Such manoeuvres consume fuel, which reduces the orbit-maintenance capability and therefore the useful lifetime of the satellite. The fuel needed over a satellite's useful life must therefore be estimated at the mission design phase for a correct definition of that lifetime, especially for satellites orbiting in highly populated orbital regimes. Both aspects, mission design and the operational aspects of collision risk, are addressed in this thesis and are summarised in Figure 3. Regarding the mission design aspects (lower part of the figure), it is necessary to evaluate statistically the characteristics of the space object population and the theories that allow computing the mean number of events encountered by a mission and its capability to reduce collision risk. These two aspects define the most appropriate procedures to reduce collision risk during the operational phase. This aspect is addressed starting from the theory described in [Sánchez-Ortiz, 2006]T.14 and implemented by the author of this thesis in the ARES tool [Sánchez-Ortiz, 2004b]T.15, provided by ESA for the evaluation of collision avoidance strategies. That theory is extended in this thesis to account for the characteristics of the orbital data available during the operational phases of a satellite (section 4.3.3). In addition, the theory has been extended to consider the maximum collision risk when the orbital uncertainty of catalogued objects is not known (as is the case for TLEs), and the case where only catastrophic collision risk is of interest (section 4.3.2.3). These improvements have been included in the new version of ARES [Domínguez-González and Sánchez-Ortiz, 2012b]T.12, made available through [SDUP, 2014]R.60. During the operational phase, the catalogues providing orbital data of space objects are processed routinely to identify possible encounters, which are analysed with collision-risk algorithms in order to propose avoidance manoeuvres. Currently there is a single public data source, the TLE (Two Line Elements) catalogue. In addition, the American Joint Space Operation Center (JSpOC) provides conjunction alert messages (CSM) when the American surveillance system identifies a possible encounter. Depending on the data used during operations (TLE or CSM), the avoidance strategy may differ owing to the characteristics of that information. The main characteristics of the available data (regarding the accuracy of the orbital information) must be known in order to estimate the possible collision events encountered by a satellite over its useful life.
In the case of TLEs, whose orbital accuracy is not provided, accuracy information derived from a statistical analysis can also be used in the operational process as well as in mission design. When CSMs are used as the basis of collision avoidance operations, the orbital accuracy of the two objects involved is known. These characteristics have been analysed in detail, statistically evaluating both types of data. Once that analysis was completed, the impact of using TLE or CSM data on satellite operations was analysed (section 5.1). This analysis has been published in a peer-reviewed journal [Sánchez-Ortiz, 2015b]T.3. It provides recommendations for different missions (satellite size and orbital regime) regarding collision avoidance strategies that reduce collision risk significantly. For example, for a Sun-synchronous satellite in the LEO orbital regime, the typical, widely used ACPL value is 10⁻⁴. This value is not adequate when collision avoidance schemes rely on TLE data: in that case the risk-reduction capability is practically null (owing to the large uncertainties of TLE data), even for short prediction times. To achieve a significant risk reduction, an ACPL around 10⁻⁶ or lower would be needed, producing about 10 alarms per satellite per year (for one-day predictions) or 100 alarms per year (for three-day predictions). The main conclusion is therefore the unsuitability of TLE data for computing collision events. In contrast, using CSM data, thanks to their better orbital accuracy, a significant risk reduction can be obtained with an ACPL around 10⁻⁴ (for 3-day predictions). Even 5-day predictions can be considered with an ACPL around 10⁻⁵. Still longer prediction times (7 days) can be used, with a 90% risk reduction and about 5 alarms per year (for 5-day predictions, the manoeuvre rate stays at about 2 per year). The dynamics in GEO differ from the LEO case, making the growth of orbital uncertainties with propagation time slower. On the other hand, the uncertainties resulting from orbit determination are worse than in LEO because of the different observation capabilities of the two regimes. Moreover, the prediction times considered for LEO may not be appropriate for a GEO satellite (whose orbital period is much longer). In this case, with TLE data, a significant risk reduction is only achieved for small ACPL values, producing one alarm per year when collision events are predicted one day in advance (too short a time to implement avoidance manoeuvres). More adequate ACPL values lie between 5×10⁻⁸ and 10⁻⁷, well below the values used in the current operations of most GEO missions (again, basing collision avoidance strategies on TLEs is not recommended in this regime). CSM data allow an adequate risk reduction with ACPL between 10⁻⁵ and 10⁻⁴ for short and medium prediction times (10⁻⁵ is recommended for 5- or 7-day predictions). The resulting manoeuvre rate would be about one in 10 years of mission.
Note that these computations are performed for a satellite of about 2 m radius. In the future, other space surveillance systems (such as ESA's SSA programme) will provide additional catalogues of space objects with the aim of reducing satellite collision risk. To define such surveillance systems, the catalogue performance required to achieve the intended risk reduction must be identified. The catalogue characteristics that mainly affect that capability are coverage (the number of objects included in the catalogue, limited chiefly by the minimum object size the sensors can detect) and the accuracy of the orbital data (derived from sensor performance in terms of measurement accuracy and object re-observation capability). The result of this analysis (section 5.2) has been published in a peer-reviewed journal [Sánchez-Ortiz, 2015a]T.2. This analysis was not initially foreseen in the thesis; it shows how the theory described here, initially defined to support mission design (upper part of figure 1), has been extended and can be applied for other purposes, such as sizing a space surveillance system (lower part of figure 1). The main difference between the two analyses is that the cataloguing capabilities (accuracy and size of observed objects) are treated as variables when designing a surveillance system, whereas they are fixed when designing a mission. Regarding the outputs of the analysis, all the quantities computed in a statistical collision-risk analysis matter for mission design (with the aim of defining the avoidance strategy and the fuel budget), whereas for the design of a surveillance system the most important aspects are the number of manoeuvres and false alarms (system reliability) and the risk-reduction capability (system effectiveness). Additionally, a space surveillance system must be characterised by its capability to avoid catastrophic collisions (thus preventing a dramatic growth of the space debris population), whereas mission design must consider all types of encounters, since an operator is interested in avoiding both catastrophic and lethal collisions. From the analysis of the performance required of a space surveillance system (size of objects to be catalogued and orbital accuracy), it is concluded that both aspects must be set differently for the different orbital regimes. In LEO it is necessary to observe objects down to 5 cm radius, whereas in GEO this requirement relaxes to 100 cm to cover catastrophic collisions. The main reason for this difference is the different relative velocities between objects in the two regimes. Regarding orbital accuracy, it must be very good in LEO to limit the number of false alarms, whereas medium accuracies can be accepted in higher orbital regimes. Concerning the operational aspects of collision-risk determination, several algorithms exist for computing the risk between two space objects.
Figure 2 summarises the collision-risk algorithm cases and how they are addressed in this thesis. Objects are normally assumed to be spherical to simplify the risk computation (case A). This case is widely covered in the literature and is not analysed in detail in this thesis; an example case is provided in section 4.2. Considering the real shape of the objects (case B) allows the risk to be computed more precisely. A new algorithm is defined in this thesis to compute the collision risk when at least one of the objects is considered complex (section 4.4.2). This algorithm computes the collision risk for objects modelled as a set of boxes, and it has been presented at several international conferences. To evaluate its performance, its results have been compared with a Monte Carlo analysis specifically defined to handle collisions between boxes properly (section 4.1.2.3), since the simple collision search applicable to spherical objects does not apply to this case. This Monte Carlo analysis is taken as the ground truth when assessing the algorithm's results; the comparison is presented in section 4.4.4. For satellites that cannot be considered spherical, using a model of the satellite geometry makes it possible to discard events that are not real collisions, or to estimate the risk associated with an event more precisely. The use of these complex-geometry algorithms is most relevant for large objects, given current orbital accuracy performance. In the future, if surveillance systems improve and orbits are known more precisely, considering the real geometry of satellites will become increasingly relevant. Section 5.4 presents an example for a large system (a satellite with a tether). Additionally, if the two objects involved in the collision have low relative velocity (and simple geometry, case C in Figure 2), most algorithms are not applicable, and dedicated implementations are required for this particular case. In this thesis, one such algorithm from the literature [Patera, 2001]R.26 has been analysed to determine its suitability for different types of events (section 4.5). Its evaluation against a Monte Carlo analysis is provided in section 4.5.2. Following this analysis, it has been deemed adequate for handling low-velocity collisions. In particular, it has been concluded that dedicated low-velocity algorithms are needed depending on the size of the collision volume projected onto the encounter plane (B-plane) and on the size of the uncertainty associated with the relative position vector between the two objects. For large uncertainties these algorithms become more necessary, since the interval during which the error ellipsoids of the two objects can intersect is longer. This algorithm has been tested in combination with the collision algorithm for complex geometries. The result of that analysis shows that it can easily be extended to accommodate different types of collision-risk algorithms (section 4.5.3).
Both algorithms, together with the Monte Carlo method for complex geometries, have been implemented in ESA's operational tool CORAM, which is used to evaluate collision risk in the routine activities of ESA-operated satellites [Sánchez-Ortiz, 2013a]T.11. This fact shows the interest and relevance of the developed algorithms for the improvement of satellite operations. These algorithms have been presented at several international conferences [Sánchez-Ortiz, 2013b]T.9, [Pulido, 2014]T.7, [Grande-Olalla, 2013]T.10, [Pulido, 2014]T.5, [Sánchez-Ortiz, 2015c]T.1.

ABSTRACT

This document addresses methodologies for computing the collision risk of a satellite. Two different approaches need to be considered for collision risk minimisation. On an operational basis, it is necessary to sieve, from among all the objects sharing space with an operational satellite, those that may give rise to a close approach. As the orbits of both the satellite and the eventual collider are not perfectly known, but only estimated, the encounter geometry and the actual risk of collision must be evaluated. On the basis of the encounter geometry or the risk, a manoeuvre may be required to avoid the conjunction. Such manoeuvres consume part of the fuel budgeted for mission orbit maintenance and may thus reduce the satellite's operational lifetime. The avoidance-manoeuvre fuel budget shall therefore be estimated at the mission design phase for a better estimation of the mission lifetime, especially for satellites orbiting in very populated orbital regimes. These two aspects, mission design and operational collision-risk aspects, are summarised in Figure 3 and covered in this thesis. The bottom part of the figure identifies the aspects to be considered at the mission design phase (statistical characterisation of the space-object population and the theory computing the mean number of events and the risk-reduction capability), which define the most appropriate collision avoidance approach at the mission operational phase. This part is covered in this work starting from the theory described in [Sánchez-Ortiz, 2006]T.14 and implemented by this author in the ARES tool [Sánchez-Ortiz, 2004b]T.15 provided by ESA for the evaluation of collision avoidance approaches. This methodology has now been extended to account for the particular features of the data sets available in the operational environment (section 4.3.3). Additionally, the formulation has been extended to allow evaluating risk-computation approaches when orbital uncertainty is not available (as in the TLE case) and when only catastrophic collisions are under study (section 4.3.2.3). These improvements to the theory have been included in the new version of the ESA ARES tool [Domínguez-González and Sánchez-Ortiz, 2012b]T.12, available through [SDUP, 2014]R.60. At the operational phase, the actual catalogue data are processed on a routine basis with adequate collision-risk computation algorithms to propose a conjunction avoidance manoeuvre optimised for every event; the optimisation of manoeuvres on an operational basis is not covered in this document. Currently, the American Two Line Elements (TLE) catalogue is the only public source of data providing orbits of objects in space for identifying eventual conjunction events. Additionally, a Conjunction Summary Message (CSM) is provided by the Joint Space Operation Center (JSpOC) when the American surveillance system identifies a possible collision between satellites and debris.
Depending on the data used for the collision avoidance evaluation, the conjunction avoidance approach may differ. The main features of the currently available data need to be analysed (as regards accuracy) in order to estimate the eventual encounters to be found along the mission lifetime. In the case of TLEs, as these data are not provided with accuracy information, operational collision avoidance may also be based on statistical accuracy information such as that used in the mission design approach. This is not the case for CSM data, which include the state vector and orbital accuracy of the two involved objects. This aspect has been analysed in detail in the document, statistically evaluating the characteristics of both data sets with respect to the main aspects related to collision avoidance. Once the analysis of the data sets was completed, the impact of those features on the most convenient avoidance approaches was investigated (section 5.1). This analysis is published in a peer-reviewed journal [Sánchez-Ortiz, 2015b]T.3. The analysis provides recommendations for different mission types (satellite size and orbital regime) as regards the most appropriate collision avoidance approach for relevant risk reduction. The risk-reduction capability depends strongly on the accuracy of the catalogue used to identify eventual collisions. Approaches based on CSM data are recommended over the TLE-based approach. Some approaches based on the maximum risk associated with envisaged encounters are shown to report a very large number of events, which makes them unsuitable for operational activities. Accepted Collision Probability Levels (ACPL) are recommended for the definition of avoidance strategies for different mission types. For example, for a LEO satellite in the Sun-synchronous regime, the typically used ACPL value of 10⁻⁴ is not suitable for collision avoidance schemes based on TLE data: in this case the risk-reduction capacity is almost null (due to the large uncertainties associated with TLE data sets, even for short time-to-event values). For a significant reduction of risk when using TLE data, an ACPL on the order of 10⁻⁶ (or lower) seems to be required, producing about 10 warnings per year and mission (if one-day-ahead events are considered) or 100 warnings per year (for three-day-ahead estimations). Thus, the main conclusion from these results is that TLE data are not feasible for a proper collision avoidance approach. On the contrary, for CSM data, due to the better accuracy of the orbital information compared with TLEs, an ACPL on the order of 10⁻⁴ allows the risk to be reduced significantly. This holds for events estimated up to 3 days ahead. Even 5-day-ahead events can be considered, but ACPL values down to 10⁻⁵ should then be used. Even larger prediction times can be considered (7 days) for a risk reduction of about 90%, at the cost of a larger number of warnings, up to 5 events per year, whereas a 5-day prediction keeps the manoeuvre rate at 2 manoeuvres per year. The dynamics of GEO orbits differ from those in LEO, leading to a slower growth of orbit uncertainty over time. On the contrary, uncertainties at short prediction times in this orbital regime are larger than in LEO due to the differences in observation capabilities.
Additionally, it has to be taken into account that the short prediction times feasible in LEO may not be appropriate for a GEO mission, since the orbital period is much longer in this regime. In the case of TLE data sets, a significant risk reduction is only achieved for small ACPL values, producing about one warning event per year if warnings are raised one day in advance of the event (too short for any reaction to be considered). Suitable ACPL values would lie between 5×10⁻⁸ and 10⁻⁷, well below the values used in current operations for most GEO missions (TLE-based collision avoidance strategies are not recommended in this regime). On the contrary, CSM data allow a good risk reduction with ACPL between 10⁻⁵ and 10⁻⁴ for short and medium prediction times; 10⁻⁵ is recommended for prediction times of five or seven days. The number of events raised for a suitable warning time of seven days would be about one in a 10-year mission. It must be noted that these results correspond to a spacecraft of 2 m radius; the impact of satellite size is also analysed within the thesis. In the future, other Space Situational Awareness systems (such as the ESA SSA programme) may provide additional catalogues of objects in space with the aim of reducing the risk. It is necessary to investigate the performances required of those catalogues to enable such risk reduction. The main performance aspects are coverage (objects included in the catalogue, mainly limited by a minimum object size derived from sensor performance) and the accuracy of the orbital data needed to evaluate the conjunctions properly (derived from sensor performance as regards object observation frequency and accuracy). The results of these investigations (section 5.2) are published in a peer-reviewed journal [Sánchez-Ortiz, 2015a]T.2. This aspect was not initially foreseen as an objective of the thesis, but it shows how the theory described in the thesis, initially defined for mission design with regard to avoidance-manoeuvre fuel allocation (upper part of figure 1), has been extended and serves additional purposes, such as dimensioning a Space Surveillance and Tracking (SST) system (bottom part of the figure). The main difference between the two approaches is that the catalogue features are a fixed part of the theory in the satellite mission design case, whereas they are an input variable of the analysis in the SST design case. As regards the outputs, all the quantities computed by the statistical conjunction analysis are important for mission design (with the objective of a proper global avoidance strategy definition and fuel allocation), whereas for the SST design case the most relevant aspects are the manoeuvre and false alarm rates (defining a reliable system) and the risk-reduction capability (driving the effectiveness of the system). As regards the methodology for computing the risk, the SST system shall be driven by its capacity to provide the means to avoid catastrophic conjunction events (avoiding the dramatic increase of the debris population), whereas satellite mission design should consider all types of encounters, as the operator is interested in avoiding both lethal and catastrophic collisions. From the analysis of the SST features (object coverage and orbital uncertainty) for a reliable system, it is concluded that those two characteristics must be imposed differently for the different orbital regimes, as the population level depends on the orbit type.
Coverage values range from 5 cm for the very populated LEO regime up to 100 cm for the GEO region. The difference in this requirement derives mainly from the relative velocity of the encounters in those regimes. Regarding the orbital knowledge of the catalogues, very accurate information is required for objects in the LEO region in order to limit the number of false alarms, whereas intermediate orbital accuracy can be accepted for higher orbital regimes. As regards operational collision avoidance approaches, several algorithms are used to evaluate the collision risk of a pair of objects. Figure 2 provides a summary of the different collision-risk algorithm cases and indicates how they are covered in this document. The typical case with high relative velocity is well covered in the literature for spherical objects (case A), with a large number of available algorithms that are not analysed in detail in this work; only a sample case is provided in section 4.2. If complex geometries are considered (case B), a more realistic risk evaluation can be computed. A new approach for the evaluation of risk in the case of complex geometries is presented in this thesis (section 4.4.2), and it has been presented at several international conferences. The developed algorithm evaluates the risk for complex objects formed by a set of boxes. A dedicated Monte Carlo method has also been described (section 4.1.2.3) and implemented to detect the actual collisions among a large number of simulation shots. These Monte Carlo runs are taken as the truth for comparison with the algorithm results (section 4.4.4). For spacecraft that cannot be considered spheres, accounting for the real geometry of the objects may allow events that are not real conjunctions to be discarded, or the risk associated with an event to be estimated more reliably. This is of particular importance for large spacecraft, as the position uncertainty in current catalogues is not small enough to make a difference for objects below metre size. As tracking systems improve and the orbits of catalogued objects become known more precisely, considering the actual shapes of the objects will become more relevant. The particular case of a very large system (such as a tethered satellite) is analysed in section 5.4. Additionally, if the two colliding objects have low relative velocity (and simple geometries, case C in the figure), the most common collision-risk algorithms fail and adequate theories need to be applied. In this document, a low-relative-velocity algorithm from the literature [Patera, 2001]R.26 is described and evaluated (section 4.5). An evaluation by comparison with a Monte Carlo approach is provided in section 4.5.2. The main conclusion of this analysis is that the algorithm is suitable for the most common encounter characteristics, and it is thus selected as adequate for collision-risk estimation. Its performance is evaluated in order to characterise when it can be safely used, across a large variety of encounter characteristics. In particular, it is found that the need for dedicated algorithms depends on both the size of the collision volume in the B-plane and the miss-distance uncertainty. For large uncertainties the need for such algorithms is greater, since for small uncertainties the encounter duration during which the covariance ellipsoids intersect is shorter.
Additionally, its application to the case of complex satellite geometries is assessed (case D in the figure) by integrating the algorithm developed in this thesis with Patera's formulation for low-relative-velocity encounters. The results of this analysis show that the algorithm can easily be extended into a collision-risk estimation process suitable for complex-geometry objects (section 4.5.3). The two algorithms, together with the Monte Carlo method, have been implemented in the ESA operational tool CORAM, which is used for the evaluation of the collision risk of ESA-operated missions [Sánchez-Ortiz, 2013a]T.11. This fact shows the interest and relevance of the developed algorithms for the improvement of satellite operations. The algorithms have been presented at several international conferences [Sánchez-Ortiz, 2013b]T.9, [Pulido, 2014]T.7, [Grande-Olalla, 2013]T.10, [Pulido, 2014]T.5, [Sánchez-Ortiz, 2015c]T.1.
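
The abstract refers to its algorithms only by section number. As a minimal sketch of the spherical, high-relative-velocity case (case A), the standard short-term encounter model integrates a 2-D Gaussian relative-position density over the combined hard-body circle on the B-plane; a Monte Carlo run of the same toy model serves as a truth reference, in the spirit of the comparisons described above. All numbers are hypothetical, and this is in no way the CORAM or ARES implementation.

```python
import numpy as np

def collision_probability(miss, cov, hard_body_radius, n=400):
    """Case-A sketch: probability that the B-plane relative position,
    modelled as a 2-D Gaussian with mean `miss` and covariance `cov`,
    falls inside the combined hard-body circle (midpoint quadrature)."""
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    dr = hard_body_radius / n
    dth = 2.0 * np.pi / n
    r = (np.arange(n) + 0.5) * dr          # cell-centred radii
    th = (np.arange(n) + 0.5) * dth        # cell-centred angles
    R, TH = np.meshgrid(r, th)
    dx = R * np.cos(TH) - miss[0]
    dy = R * np.sin(TH) - miss[1]
    dens = norm * np.exp(-0.5 * (inv[0, 0] * dx**2
                                 + 2.0 * inv[0, 1] * dx * dy
                                 + inv[1, 1] * dy**2))
    return float((dens * R).sum() * dr * dth)   # integrate rho * r dr dtheta

# hypothetical event: 2 m combined radius, anisotropic B-plane uncertainty
miss = np.array([200.0, 50.0])                  # predicted miss distance [m]
cov = np.diag([300.0**2, 100.0**2])             # combined covariance [m^2]
p = collision_probability(miss, cov, 2.0)

# Monte Carlo truth reference for the same toy model
rng = np.random.default_rng(1)
hits = rng.multivariate_normal(miss, cov, size=2_000_000)
p_mc = np.mean(np.hypot(hits[:, 0], hits[:, 1]) < 2.0)
print(f"quadrature {p:.2e} vs Monte Carlo {p_mc:.2e}")
```

An ACPL-based avoidance rule then reduces to comparing the computed probability against the mission threshold (for example, the 10⁻⁴ and 10⁻⁵ levels discussed above) to decide whether a manoeuvre is warranted.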


Abstract:

A separation technique employing a microfabricated sieve has been demonstrated by observing the motion of DNA molecules of different size. The sieve consists of a two-dimensional lattice of obstacles whose asymmetric disposition rectifies the Brownian motion of molecules driven through the device, causing them to follow paths that depend on their diffusion coefficient. A nominal 6% resolution by length of DNA molecules in the size range 15–30 kbp may be achieved in a 4-inch (10-cm) silicon wafer. The advantage of this method is that samples can be loaded and sorted continuously, in contrast to the batch mode commonly used in gel electrophoresis.
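
A toy estimate of the rectification principle follows (this is not the authors' device model; the geometry and parameter values are invented for illustration). Between successive obstacle rows, a molecule drifting at speed v diffuses laterally by roughly sigma = sqrt(2 D t); the chance of hopping past the row offset, and hence the migration angle, grows with the diffusion coefficient, which is larger for shorter DNA.

```python
import math

def deflection_probability(D, v, row_spacing, offset):
    """One-sided Gaussian tail: chance that lateral diffusion exceeds the
    obstacle offset during the transit time between rows (toy model)."""
    t = row_spacing / v                 # transit time between obstacle rows
    sigma = math.sqrt(2.0 * D * t)      # lateral diffusion spread
    return 0.5 * math.erfc(offset / (sigma * math.sqrt(2.0)))

# invented numbers: 15 kbp vs 30 kbp DNA (D roughly halves as length doubles)
for label, D in [("15 kbp", 1.0e-12), ("30 kbp", 0.5e-12)]:   # D in m^2/s
    p = deflection_probability(D, v=1.0e-6, row_spacing=4.0e-6, offset=1.0e-6)
    print(label, f"deflection probability per row = {p:.3f}")
```

The per-row difference is small, but because molecules traverse thousands of rows across a wafer, it accumulates into distinct trajectories, which is consistent with the continuous, length-dependent sorting described above.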


Abstract:

In angiosperms, the functional enucleate sieve tube system of the phloem appears to be maintained by the surrounding companion cells. In this study, we tested the hypothesis that polypeptides present within the phloem sap traffic cell to cell from the companion cells, where they are synthesized, into the sieve tube via plasmodesmata. Coinjection of fluorescently labeled dextrans along with size-fractionated Cucurbita maxima phloem proteins, ranging in size from 10 to 200 kDa, as well as injection of individual fluorescently labeled phloem proteins, provided unambiguous evidence that these proteins have the capacity to interact with mesophyll plasmodesmata in cucurbit cotyledons to induce an increase in size exclusion limit and traffic cell to cell. Plasmodesmal size exclusion limit increased to greater than 20 kDa, but less than 40 kDa, irrespective of the size of the injected protein, indicating that partial protein unfolding may be a requirement for transport. A threshold concentration in the 20–100 nM range was required for cell-to-cell transport indicating that phloem proteins have a high affinity for the mesophyll plasmodesmal binding site(s). Parallel experiments with glutaredoxin and cystatin, phloem sap proteins from Ricinus communis, established that these proteins can also traffic through cucurbit mesophyll plasmodesmata. These results are discussed in terms of the requirements for regulated protein trafficking between companion cells and the sieve tube system. As the threshold value for plasmodesmal transport of phloem sap proteins falls within the same range as many plant hormones, the possibility is discussed that some of these proteins may act as long-distance signaling molecules.


Abstract:

For nearly 200 years after their discovery in 1756, geologists considered the zeolite minerals to occur as fairly large crystals in the vugs and cavities of basalts and other traprock formations. Here they were prized by mineral collectors, but their low abundance and polymineralic nature defied commercial exploitation. As the synthetic zeolite (molecular sieve) business began to take hold in the late 1950s, huge beds of zeolite-rich sediments, formed by the alteration of volcanic ash (glass) in lake and marine waters, were discovered in the western United States and elsewhere in the world. These beds were found to contain as much as 95% of a single zeolite; they were generally flat-lying and easily mined by surface methods. The properties of these low-cost natural materials mimicked those of many of their synthetic counterparts, and considerable effort has been made since that time to develop applications for them based on their unique adsorption, cation-exchange, dehydration–rehydration, and catalytic properties. Natural zeolites (i.e., those found in volcanogenic sedimentary rocks) have been and are being used as building stone, as lightweight aggregate and pozzolans in cements and concretes, as filler in paper, in the take-up of Cs and Sr from nuclear waste and fallout, as soil amendments in agronomy and horticulture, in the removal of ammonia from municipal, industrial, and agricultural waste and drinking waters, as energy exchangers in solar refrigerators, as dietary supplements in animal diets, as consumer deodorizers, in pet litters, in taking up ammonia from animal manures, and as ammonia filters in kidney-dialysis units. From their use in construction during Roman times, to their role as hydroponic (zeoponic) substrate for growing plants on space missions, to their recent success in the healing of cuts and wounds, natural zeolites are now considered to be full-fledged mineral commodities, the use of which promises to expand even more in the future.


Abstract:

Catalysis at organophilic silica-rich surfaces of zeolites and feldspars might generate replicating biopolymers from simple chemicals supplied by meteorites, volcanic gases, and other geological sources. Crystal-chemical modeling yielded packings for amino acids neatly encapsulated in the 10-ring channels of the molecular sieve silicalite-ZSM-5 (mutinaite). Calculation of binding and activation energies for catalytic assembly into polymers is progressing for a chemical composition with one catalytic Al–OH site per 25 neutral Si tetrahedral sites. Internal channel intersections and external terminations provide special stereochemical features suitable for complex organic species. Polymer migration along nano- to micrometer channels of ancient weathered feldspars, plus exploitation of phosphorus and various transition metals in entrapped apatite and other microminerals, might have generated complexes of replicating catalytic biomolecules, leading to primitive cellular organisms. The first cell wall might have been an internal mineral surface, from which the cell developed a protective biological cap emerging into a nutrient-rich “soup.” Ultimately, the biological cap might have expanded into a complete cell wall, allowing mobility and colonization of energy-rich, challenging environments. Electron microscopy of honeycomb channels inside weathered feldspars of the Shap granite (northwest England) has revealed modern bacteria, perhaps indicative of Archaean ones. All known early rocks were metamorphosed too highly during geologic time to permit simple survival of large-pore zeolites, honeycombed feldspar, and encapsulated species. Possible microscopic clues to the proposed mineral adsorbents/catalysts are discussed for the planning of a systematic study of black cherts from weakly metamorphosed Archaean sediments.


Abstract:

Buckwheat (Fagopyrum esculentum Moench. cv Jianxi), which shows high Al resistance, accumulates Al in the leaves. The internal detoxification mechanism was studied by purifying and identifying Al complexes in the leaves and roots. About 90% of the Al accumulated in the leaves was found in the cell sap, in which the dominant organic acid was oxalic acid. Purification of the Al complex in the cell sap of leaves by molecular-sieve chromatography yielded a complex with an Al to oxalic acid ratio of 1:3. A ¹³C-nuclear magnetic resonance study of the purified cell sap revealed only one signal, at a chemical shift of 164.4 ppm, which was assigned to the Al-chelated carboxylic group of oxalic acid. A ²⁷Al-nuclear magnetic resonance analysis revealed one major signal at a chemical shift of 16.0 to 17.0 ppm, with a minor signal at a chemical shift of 11.0 to 12.0 ppm, in both the intact roots and their cell sap, consistent with Al-oxalate complexes at 1:3 and 1:2 ratios, respectively. The purified cell sap was not phytotoxic to root elongation in corn (Zea mays). All of these results indicate that Al tolerance in the roots and leaves of buckwheat is achieved by the formation of a nonphytotoxic Al-oxalate (1:3) complex.


Abstract:

Virus invasion of minor veins in inoculated leaves of a host is the likely prelude to systemic movement of the pathogen and to subsequent yield reduction and quality loss. In this study we have analyzed the cell number and arrangement in minor veins within mature leaves of various members of the Solanaceae and Fabaceae families. We then monitored the accumulation pattern of several tobamoviruses and potyviruses in these veins at the time of rapid, phloem-mediated movement of viruses. Vascular parenchyma cells were the predominant and sometimes only cells to become visibly infected among the cells surrounding the sieve elements in minor veins containing 9 to 12 cells. In no instance did we observe a companion cell infected without a vascular parenchyma cell also being infected in the same vein. This suggests that the viruses used in this study first enter the vascular parenchyma cells and then the companion cells during invasion. The lack of detectable infection of smooth-walled companion or transfer cells, respectively, from inoculated leaves of bean (Phaseolus vulgaris) and pea (Pisum sativum) during a period of known rapid, phloem-mediated movement suggests that some viruses may be able to circumvent these cells in establishing phloem-mediated infection. The cause of the barrier to virus accumulation in the companion or transfer cells, the relationship of this barrier to previously identified barriers for virus or photoassimilate transport, and the relevance of these findings to photoassimilate transport models are discussed.


Abstract:

To fully understand vascular transport of plant viruses, the viral and host proteins, their structures and functions, and the specific vascular cells in which these factors function must be determined. We report here on the ability of various cDNA-derived coat protein (CP) mutants of tobacco mosaic virus (TMV) to invade vascular cells in minor veins of Nicotiana tabacum L. cv. Xanthi nn. The mutant viruses we studied, TMV CP-O, U1mCP15-17, and SNC015, respectively, encode a CP from a different tobamovirus (i.e., from odontoglossum ringspot virus) resulting in the formation of non-native capsids, a mutant CP that accumulates in aggregates but does not encapsidate the viral RNA, or no CP. TMV CP-O is impaired in phloem-dependent movement, whereas U1mCP15-17 and SNC015 do not accumulate by phloem-dependent movement. In developmentally defined studies using immunocytochemical analyses, we determined that all of these mutants invaded vascular parenchyma cells within minor veins in inoculated leaves. In addition, we determined that the CPs of TMV CP-O and U1mCP15-17 were present in companion (C) cells of minor veins in inoculated leaves, although more rarely than the CP of wild-type virus. These results indicate that the movement of TMV into minor veins does not require the CP, and an encapsidation-competent CP is not required for, but may increase the efficiency of, movement into the conducting complex of the phloem (i.e., the C cell/sieve element complex). Also, a host factor(s) functions at or beyond the C cell/sieve element interface with other cells to allow efficient phloem-dependent accumulation of TMV CP-O.


Abstract:

A range of catalysts based on Pd nanoparticles supported on inorganic supports such as BETA and ZSM-5 zeolites, a silicoaluminophosphate molecular sieve (SAPO-5) and γ-alumina as a standard support has been tested for the total oxidation of naphthalene (100 ppm, total flow 50 ml/min), showing 100% conversion to carbon dioxide between 165 and 180 °C for all the analysed catalysts. The combination of zeolites with PVP-polymer-protected Pd nanoparticles showed enhanced properties for the total abatement of naphthalene in comparison with the other kinds of catalysts. A Pd/BETA catalyst was demonstrated to have excellent activity and a high degree of stability, as shown by time-on-line experiments maintaining 100% conversion to CO2 during the 48 h tested.


Abstract:

With global warming becoming one of the main problems our society faces nowadays, there is an urgent demand to develop materials suitable for CO2 storage as well as for gas separation. Within this context, hierarchical porous structures are of great interest for in-flow applications because of the desirable combination of an extensive internal reactive surface along narrow nanopores with facile molecular transport through broad “highways” leading to and from these pores. Deep eutectic solvents (DESs) have recently been used in the synthesis of carbon monoliths exhibiting a bicontinuous porous structure composed of continuous macroporous channels and a continuous carbon network that contains a certain microporosity and provides considerable surface area. In this work, we prepared two DESs for the preparation of two hierarchical carbon monoliths with different compositions (e.g., either nitrogen-doped or not) and structures. It is worth noting that the DESs played a key role in the synthesis of the hierarchical carbon monoliths, not only promoting the spinodal decomposition that governs the formation of the bicontinuous porous structure but also providing the precursors required to tailor the composition and the molecular-sieve structure of the resulting carbons. We studied the performance of these two carbons for CO2, N2, and CH4 adsorption in both monolithic and powdered form. We also studied the selective adsorption of CO2 versus CH4 under equilibrium and dynamic conditions. We found that these materials combine a high CO2-sorption capacity with excellent CO2/N2 and CO2/CH4 selectivities and, interestingly, this performance is preserved whether the material is processed in monolithic or powdered form.
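
The abstract quotes equilibrium selectivities without stating a definition. A common convention, assumed here for illustration (it is not necessarily the one used in the paper), is the ratio of equilibrium uptakes weighted by the partial pressures of the binary feed:

```python
def equilibrium_selectivity(q_co2, q_ch4, p_co2, p_ch4):
    """S = (q_CO2 / q_CH4) / (p_CO2 / p_CH4): equilibrium uptakes q
    (e.g. in mmol/g) measured at partial pressures p of the feed."""
    return (q_co2 / q_ch4) / (p_co2 / p_ch4)

# hypothetical uptakes for an equimolar CO2/CH4 feed at 1 bar total pressure
print(equilibrium_selectivity(q_co2=2.4, q_ch4=0.6, p_co2=0.5, p_ch4=0.5))  # -> 4.0
```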


Abstract:

The susceptibility of clay-bearing rocks to weathering (erosion and/or differential degradation) is known to influence the stability of heterogeneous slopes. However, not all of these rocks show the same behaviour, as there are considerable differences in the speed and type of weathering observed. As such, it is very important to establish relationships between behaviour quantified in a laboratory environment and that observed in the field. The slake durability test is the laboratory test most commonly used to evaluate the relationship between slaking behaviour and rock durability. However, it has a number of disadvantages: it does not account for changes in the shape and size of fragments retained on the 2 mm sieve, nor does its most commonly used index (Id2) accurately reflect the weathering behaviour observed in the field. The main aim of this paper is to propose a simple methodology, for use by practitioners, for characterizing the weathering behaviour of carbonate lithologies that outcrop in heterogeneous rock masses (such as Flysch slopes). To this end, the Potential Degradation Index (PDI) is proposed. This is calculated using the fragment size distribution curves taken from the material retained in the drum after each cycle of the slake durability test; the number of slaking cycles has also been increased to five. Through laboratory testing of 117 samples of carbonate rocks, extracted from strata in selected slopes, six rock types were established based on their slaking behaviour, corresponding to the different weathering behaviours observed in the field.
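
The abstract states that the PDI is computed from the fragment-size distribution curves after each of five slaking cycles but does not give the formula. The sketch below computes an index in that spirit, the area lost between the initial and post-cycle distribution curves averaged over cycles; it is only an illustration under that assumption, not the published PDI definition, and all curve values are invented.

```python
import numpy as np

def degradation_index(sizes_mm, pct_retained_by_cycle):
    """Illustrative PDI-style index from the size distributions of material
    retained in the drum after each slaking cycle (row 0 = before slaking).
    Larger values = faster disintegration. Not the published PDI formula."""
    logs = np.log10(np.asarray(sizes_mm, dtype=float))
    ref = np.asarray(pct_retained_by_cycle[0], dtype=float)
    span = 100.0 * (logs[-1] - logs[0])          # normalising area
    scores = []
    for curve in pct_retained_by_cycle[1:]:
        gap = ref - np.asarray(curve, dtype=float)   # loss of coarse material
        area = 0.5 * ((gap[1:] + gap[:-1]) * np.diff(logs)).sum()  # trapezoid
        scores.append(area / span)
    return float(np.mean(scores))

# hypothetical curves: % retained above each sieve size, cycles 0..5
sizes = [2, 4, 8, 16, 31.5, 63]            # sieve sizes in mm
curves = [[100, 95, 88, 75, 50, 10],       # before slaking
          [96, 88, 78, 62, 38, 6],
          [92, 82, 70, 52, 28, 4],
          [88, 76, 62, 44, 22, 3],
          [84, 70, 55, 37, 17, 2],
          [80, 65, 49, 31, 13, 1]]
print(f"PDI-style index = {degradation_index(sizes, curves):.3f}")
```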


Abstract:

The use of recycled materials in asphalt mixtures, such as reclaimed asphalt pavement (RAP), has become widely accepted as a replacement for virgin asphalt binder or virgin aggregates. In this study, the RAP content was 30%, and crumb rubber (CR) additives were blended with the soft unmodified binder using the dry process. The objective of this study was to investigate and evaluate how the dry-method application of crumb rubber influences the engineering properties of RAP mixtures. To evaluate the effect of the rubber-bitumen interaction on the mixtures' mechanical properties, a laboratory investigation was conducted on a range of dense-graded, 30% RAP, dry-process crumb rubber modified (CRM) asphalt mixtures containing 0% (control) or 1% crumb rubber by total aggregate mass. The experimental program included binder extraction to estimate the amount of aged binder in both the fine and the coarse RAP material. Before the binder was extracted, a sieve analysis of the RAP was performed to provide the Black grading curve; after the binder extraction, the material was sieved again to provide the White curve. The comparison of the Black and White curves indicated a remarkable difference between the aggregate gradings, even for the fine RAP. The experimental program continued with the fabrication of 12 specimens across 4 types of mixtures: the first group contained no RAP, no rejuvenator and no crumb rubber; in the second group, 30% of the virgin aggregates were substituted by RAP material; the third group was similar to the second but with 0.01% rejuvenator; and in the fourth group the specimens contained RAP, rejuvenator and crumb rubber. Finally, the specimens were tested for indirect tensile strength. The results indicated that the addition of crumb rubber increased the optimum binder content of the mixture with 30% RAP.
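
The Black curve (sieve analysis of RAP with the aged binder still coating the particles) and the White curve (sieve analysis after binder extraction) can be compared directly once both are expressed as percent passing per sieve. The sketch below, with invented sieve sizes and values, computes the per-sieve difference of the kind the study found to be remarkable: the binder coating glues fines into coarser clusters, so the Black curve reads coarser than the White one.

```python
# sieve sizes in mm and percent passing (hypothetical values, not study data)
sieves = [0.063, 0.25, 1.0, 2.0, 4.0, 8.0, 12.5]
black = [3.0, 8.0, 22.0, 35.0, 55.0, 80.0, 96.0]   # binder-coated RAP
white = [7.0, 15.0, 33.0, 47.0, 66.0, 88.0, 99.0]  # after binder extraction

# positive differences show fines released once the aged binder is removed
for size, b, w in zip(sieves, black, white):
    print(f"{size:>6} mm: Black {b:5.1f} %  White {w:5.1f} %  diff {w - b:+.1f}")
```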


Abstract:

We performed a quantitative comparison of brittle thrust wedge experiments to evaluate the variability among analogue models and to appraise the reproducibility and limits of model interpretation. Fifteen analogue modeling laboratories participated in this benchmark initiative. Each laboratory received a shipment of the same type of quartz and corundum sand and all laboratories adhered to a stringent model building protocol and used the same type of foil to cover base and sidewalls of the sandbox. Sieve structure, sifting height, filling rate, and details on off-scraping of excess sand followed prescribed procedures. Our analogue benchmark shows that even for simple plane-strain experiments with prescribed stringent model construction techniques, quantitative model results show variability, most notably for surface slope, thrust spacing and number of forward and backthrusts. One of the sources of the variability in model results is related to slight variations in how sand is deposited in the sandbox. Small changes in sifting height, sifting rate, and scraping will result in slightly heterogeneous material bulk densities, which will affect the mechanical properties of the sand, and will result in lateral and vertical differences in peak and boundary friction angles, as well as cohesion values once the model is constructed. Initial variations in basal friction are inferred to play the most important role in causing model variability. Our comparison shows that the human factor plays a decisive role, and even when one modeler repeats the same experiment, quantitative model results still show variability. Our observations highlight the limits of up-scaling quantitative analogue model results to nature or for making comparisons with numerical models. The frictional behavior of sand is highly sensitive to small variations in material state or experimental set-up, and hence, it will remain difficult to scale quantitative results such as number of thrusts, thrust spacing, and pop-up width from model to nature.