991 results for C 4.5*stat algorithm


Relevance: 100.00%

Abstract:

OBJECTIVE Parametrial involvement (PMI) is one of the most important factors influencing prognosis in locally advanced stage cervical cancer (LACC) patients. We aimed to evaluate the PMI rate among LACC patients undergoing neoadjuvant chemotherapy (NACT), thus assessing the utility of parametrectomy in tailoring adjuvant treatment. METHODS Retrospective evaluation of 275 consecutive patients affected by LACC (IB2-IIB) undergoing NACT followed by type C/class III radical hysterectomy. Basic descriptive statistics and univariate and multivariate analyses were applied to identify factors predicting PMI. Survival outcomes were assessed using Kaplan-Meier and Cox models. RESULTS PMI was detected in 37 (13%) patients: it occurred together with vaginal involvement, with lymph node positivity, or with both in 10 (4%), 5 (2%) and 12 (4%) patients, respectively, while PMI alone was observed in only 10 (4%) patients. In this latter group, adjuvant treatment was delivered in 3 (1%) patients on the basis of PMI alone, while the remaining patients had other characteristics driving adjuvant treatment. Considering factors predicting PMI, only suboptimal pathological response (OR: 1.11; 95% CI: 1.01, 1.22) and vaginal involvement (OR: 1.29; 95% CI: 1.17, 1.44) were independently associated with PMI. PMI did not correlate with survival (HR: 2.0; 95% CI: 0.82, 4.89), whereas clinical response to NACT (HR: 3.35; 95% CI: 1.59, 7.04), vaginal involvement (HR: 2.38; 95% CI: 1.12, 5.02) and lymph node positivity (HR: 3.47; 95% CI: 1.62, 7.41) independently correlated with worse survival outcomes. CONCLUSIONS Our data suggest that PMI played a limited role in the decision to administer adjuvant treatment, supporting the potential adoption of less radical surgery in LACC patients undergoing NACT. Further prospective studies are warranted.
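
As a rough illustration of how the reported confidence intervals are obtained, the sketch below computes a Wald 95% CI for an odds ratio from a single 2x2 table. It is not the study's analysis (which used multivariate models), and the counts are hypothetical.

```python
# Minimal sketch: Wald 95% CI for an odds ratio from a 2x2 table.
# Counts are hypothetical, not the study's data.
from math import exp, log, sqrt

a, b = 12, 38    # predictor present: PMI yes / PMI no
c, d = 25, 200   # predictor absent:  PMI yes / PMI no

or_hat = (a * d) / (b * c)               # odds ratio point estimate
se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
lo = exp(log(or_hat) - 1.96 * se_log_or)
hi = exp(log(or_hat) + 1.96 * se_log_or)
print(f"OR = {or_hat:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```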

Relevance: 100.00%

Abstract:

Previous restriction analysis of cloned equine DNA and genomic DNA of equine peripheral blood mononuclear cells had indicated the existence of one c epsilon, one c alpha and up to six c gamma genes in the haploid equine genome. The c epsilon and c alpha genes have been aligned on a 30 kb DNA fragment in the order 5' c epsilon-c alpha 3'. Here we describe the alignment of the equine c mu and c gamma genes by deletion analysis of one IgM, four IgG and two equine light chain expressing heterohybridomas. This analysis establishes the existence of six c gamma genes per haploid genome. The genomic alignment of the cH genes is 5' c mu-c gamma 1-c gamma 2-c gamma 3-c gamma 4-c gamma 5-c gamma 6-c epsilon-c alpha 3', naming the c gamma genes according to their position relative to c mu. For three of the c gamma genes the corresponding IgG isotypes could be identified: IgGa for c gamma 1, IgG(T) for c gamma 3 and IgGb for c gamma 4.

Relevance: 100.00%

Abstract:

Lecture 1928: "Über Kants Erkenntnistheorie", handwritten notes, manuscript, 2 leaves; "Über das Recht soziologischer Interpretation": a) typescript with handwritten corrections, 8 leaves, b) typescript with handwritten corrections, 7 leaves, c) manuscript, 5 leaves; "Notes"; "Max Scheler (1874-1928)", typescript with handwritten corrections, 13 leaves; "Zitate aus Werken Max Schelers", 6 leaves; notes on the lecture "Politik und Moral" by Max Scheler, 08.05.1928, 4 leaves; Paul Ludwig Landsberg: "Zum Gedächtnis Max Schelers", newspaper clipping from the Literarische Rundschau of the Rhein-Mainische Volkszeitung, 25.05.1928, 1 leaf; "Hegel und das Problem der Metaphysik", typescript with handwritten corrections, 3 leaves; "Über Christian Wolff", lecture manuscript, 6 leaves; Friedrich Pollock: "Über antike und christliche Geschichtsauffassung", handwritten notes, 4 leaves; discussion between Max Horkheimer, Mannheim, Tillich and Adorno, on the sociology of knowledge and pragmatism among other topics, 16.01.1931, notes taken by Leo Löwenthal, typescript, 2 leaves; discussion between Max Horkheimer, Mannheim, Tillich and Adorno, on the stance of philosophy and science towards terror among other topics, notes taken by Friedrich Pollock, 19.06.1931: a) typescript, 3 leaves, b) handwritten notes, 12 leaves; "Thesen über Wissenschaft. Bearbeitung Löwenthal", spring 1932, typescript, 5 leaves; Friedrich Pollock: notebook, handwritten notes, 1 notebook, 19 leaves and 8 additional leaves (contains, among others: "Zur heutigen Lage des Idealismus"; notes on the lecture "Der Gegensatz von 'Geist' und 'Leben' in der gegenwärtigen Naturphilosophie" by Ernst Cassirer, delivered on 03.10.1928; "Zur Kritik der gegenwärtigen Philosophie"; outline of Max Horkheimer's lecture "Materialismus und Idealismus in der Geschichte der neueren Philosophie", winter semester 1928/29, 23.09.1928; and "Heidegger"); "Materialismus und Idealismus in der Geschichte der neueren Philosophie", lecture, winter semester 1928/29 (contains: lecture manuscript, 1 notebook, 8 leaves and 22 additional leaves; Friedrich Pollock: lecture notebook, 1 notebook, 48 leaves, 9 of them blank, with enclosed handwritten notes on a lecture by Prinzhorn (?) on Lebensphilosophie and psychoanalysis, 22.12.1928, 10 leaves; Friedrich Pollock: lecture notebook, 05.-15.02.1929, 1 notebook, 12 leaves, 4 of them blank, and 2 additional leaves; Friedrich Pollock: lecture notebook, 22.-26.02.1929, 1 notebook, 7 leaves).

Relevance: 100.00%

Abstract:

The p21-activated kinase 5 (PAK5) is a serine/threonine protein kinase belonging to the group 2 subfamily of PAKs. Although our understanding of PAK5 is very limited, it is receiving increasing interest because of its tissue-specific expression pattern and important signaling properties. PAK5 is highly expressed in brain; its overexpression induces neurite outgrowth in neuroblastoma cells and promotes survival in fibroblasts.

The serine/threonine protein kinase Raf-1 is an essential mediator of Ras-dependent signaling that controls the ERK/MAPK pathway. In contrast to PAK5, Raf-1 has been the subject of intensive investigation; however, owing to the complexity of its activation mechanism, the biological inputs controlling Raf-1 activation are not fully understood.

PAKs 1-3 are the known kinases responsible for phosphorylation of Raf-1 on serine 338, a crucial phosphorylation site for Raf-1 activation. However, dominant negative versions of these kinases do not block EGF-induced Raf-1 activation, indicating that other kinases may regulate the phosphorylation of Raf-1 on serine 338.

This thesis work was initiated to test whether the group 2 PAKs 4, 5 and 6 are responsible for EGF-induced Raf-1 activation. We found that PAK5, and to a lesser extent PAK4, can activate Raf-1 in cells, and our subsequent studies focused on PAK5. As the study progressed we found that PAK5 does not significantly stimulate serine 338 phosphorylation of Triton X-100-soluble Raf-1. PAK5, however, constitutively and specifically associates with Raf-1 and targets it to a Triton X-100-insoluble, mitochondrial compartment, where PAK5 phosphorylates serine 338 of Raf-1. We further demonstrated that endogenous PAK5 and Raf-1 colocalize in HeLa cells at the mitochondrial outer membrane. In addition, we found that mitochondrial targeting of PAK5 is determined by its C-terminal kinase domain plus the upstream proximal region, and is facilitated by the N-terminal p21-binding domain. We also demonstrated that the Rho GTPases Cdc42 and RhoD associate with PAK5 and regulate its subcellular localization. Taken together, this work suggests that the mitochondrial targeting of PAK5 may link Ras- and Rho GTPase-mediated signaling pathways, and sheds light on aspects of PAK5 signaling that may be important for regulating neuronal homeostasis.

Relevance: 100.00%

Abstract:

The opaque mineralogy and the contents and isotope compositions of sulfur in serpentinized peridotites from the MARK (Mid-Atlantic Ridge, Kane Fracture Zone) area were examined to understand the conditions of serpentinization and to evaluate this process as a sink for seawater sulfur. The serpentinites contain a sulfur-rich secondary mineral assemblage and have high sulfur contents (up to 1 wt.%) and elevated δ34S_sulfide (3.7 to 12.7‰). Geochemical reaction modeling indicates that seawater-peridotite interaction at 300 to 400°C alone cannot account for both the high sulfur contents and the high δ34S_sulfide. These require a multistage reaction, with leaching of sulfide from subjacent gabbro during higher-temperature (~400°C) reactions with seawater and subsequent deposition of sulfide during serpentinization of peridotite at ~300°C. Serpentinization produces highly reducing conditions and significant amounts of H2, and results in the partial reduction of seawater carbonate to methane. The latter is documented by the formation of carbonate veins enriched in 13C (up to 4.5‰) at temperatures above 250°C. Although different processes produce variable sulfur isotope effects in other oceanic serpentinites, sulfur is consistently added to abyssal peridotites during serpentinization. Data for serpentinites drilled and dredged from oceanic crust and from ophiolites indicate that oceanic peridotites are a sink for 0.4 to 6.0 million tons of seawater sulfur per year. This is comparable to the sulfur exchange that occurs in hydrothermal systems in mafic oceanic crust at mid-ocean ridges and on ridge flanks, and amounts to 2 to 30% of the riverine sulfate source and sedimentary sulfide sink in the oceans. The high concentrations and modified isotope compositions of sulfur in serpentinites could be important for mantle metasomatism during subduction of crust generated at slow spreading rates.
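
The quoted 2 to 30% figure follows directly from the 0.4 to 6.0 million t/yr sink when compared against a riverine sulfate flux of roughly 20 million t S/yr, the value implied by the abstract's own percentages (assumed here, not stated explicitly):

```python
# Back-of-envelope check of the quoted 2-30% range.
# The riverine flux is an assumption implied by the abstract's numbers.
riverine_flux = 20.0            # million tons S per year (assumed)
for sink in (0.4, 6.0):         # serpentinite sulfur sink, million tons S per year
    print(f"{sink} Mt/yr  ->  {100 * sink / riverine_flux:.0f}% of the riverine sulfate source")
```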

Relevance: 100.00%

Abstract:

The EPICA (European Project for Ice Coring in Antarctica) Dome C drilling in East Antarctica has now been completed to a depth of 3260 m, only a few meters above bedrock. Here we present the new EDC3 chronology, which is based on 1) a snow accumulation and mechanical flow model, and 2) a set of independent age markers along the core, obtained by pattern matching of recorded parameters to absolutely dated paleoclimatic records or to insolation variations. We show that this new time scale agrees with the Dome Fuji and Vostok ice core time scales to within 1 kyr back to 100 kyr. Discrepancies larger than 3 kyr arise during MIS 5.4, 5.5 and 6, which points to anomalies in either snow accumulation or mechanical flow during these periods. We estimate that EDC3 gives event durations accurate to within 20% (2 sigma) back to MIS 11 and absolute ages with a maximum uncertainty of 6 kyr back to 800 kyr.
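
As a much simplified illustration of the tie-point idea behind EDC3 (not the glaciological model actually used), the sketch below interpolates ages between hypothetical dated depth markers.

```python
# Toy age-depth interpolation between dated tie points (depths and ages are hypothetical).
import numpy as np

tie_depth = np.array([0.0, 500.0, 1500.0, 2800.0])  # m
tie_age = np.array([0.0, 20.0, 130.0, 600.0])       # kyr BP

for z in (250.0, 1000.0, 2000.0):
    age = np.interp(z, tie_depth, tie_age)           # piecewise-linear age model
    print(f"depth {z:6.1f} m -> age {age:5.1f} kyr BP")
```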

Relevance: 100.00%

Abstract:

An analytical method for the determination of the alpha-dicarbonyls glyoxal (GLY) and methylglyoxal (MGLY) in seawater and marine aerosol particles is presented. The method is based on derivatization with o-(2,3,4,5,6-pentafluorobenzyl)-hydroxylamine (PFBHA) reagent, solvent extraction and GC-MS (SIM) analysis. The method showed good precision (RSD < 10%), sensitivity (detection limits in the low ng/l range) and accuracy (good agreement between external calibration and standard addition). It was applied to determine GLY and MGLY in oceanic water sampled during the Polarstern cruise ANT XXVII/4 from Cape Town to Bremerhaven in spring 2011. GLY and MGLY were determined in the sea surface microlayer (SML) of the ocean and the corresponding bulk water (BW), with average concentrations of 228 ng/l (GLY) and 196 ng/l (MGLY). The results show a significant enrichment (factor of 4) of GLY and MGLY in the SML. Furthermore, marine aerosol particles (PM1) were sampled during the cruise and analyzed for GLY (average concentration 0.19 ng/m**3) and MGLY (average concentration 0.15 ng/m**3). On aerosol particles, both carbonyls correlate very well with oxalate, supporting the idea of a secondary formation of oxalic acid via GLY and MGLY. Concentrations of GLY and MGLY in seawater and on aerosol particles were correlated with environmental parameters such as global radiation, temperature, distance to the coastline and biological activity. There are slight hints of a photochemical production of GLY and MGLY in the SML (significant enrichment in the SML, higher enrichment at higher temperature); however, a clear connection of GLY and MGLY to global radiation or to biological activity cannot be concluded from the data. A slight correlation between GLY and MGLY in the SML and in aerosol particles could hint at an exchange, in particular of GLY, between seawater and the atmosphere.
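
The SML enrichment reported above is conventionally expressed as an enrichment factor, the ratio of the microlayer to the bulk-water concentration. The sketch below computes it, together with a Pearson correlation of the kind used for the oxalate comparison, for invented values (not the cruise data).

```python
# Enrichment factor (SML / bulk water) and a Pearson correlation; values are hypothetical.
import numpy as np

gly_sml = np.array([310.0, 520.0, 880.0, 640.0])  # ng/l in the sea surface microlayer
gly_bw = np.array([90.0, 140.0, 200.0, 150.0])    # ng/l in the corresponding bulk water
ef = gly_sml / gly_bw                             # enrichment factor per station
print("enrichment factors:", np.round(ef, 1), "mean:", round(float(ef.mean()), 1))

gly_aerosol = np.array([0.10, 0.15, 0.22, 0.30])  # ng/m**3 on aerosol particles (hypothetical)
oxalate = np.array([12.0, 18.0, 27.0, 35.0])      # ng/m**3 (hypothetical)
r = np.corrcoef(gly_aerosol, oxalate)[0, 1]
print(f"Pearson r(GLY, oxalate) = {r:.2f}")
```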

Relevance: 100.00%

Abstract:

Surface sediments from the South American continental margin surrounding the Argentine Basin were studied with respect to bulk geochemistry (CaCO3 and organic carbon) and grain-size composition (sand/silt/clay relation and terrigenous silt grain-size distribution). The grain-size distributions of the terrigenous silt fraction were unmixed into three end members (EMs) using an end-member modelling algorithm. Three unimodal EMs appear to satisfactorily explain the variations in the data set of terrigenous silt grain-size distributions. The EMs are related to sediment supply by rivers, downslope transport, winnowing, dispersal and re-deposition by currents. The bulk geochemical composition was used to trace the distribution of prominent water masses within the vertical profile. The sediments of the eastern South American continental margin divide generally into a coarse-grained and carbonate-depleted southwestern part and a finer-grained and carbonate-rich northeastern part. The transition between the two environments is located at the position of the Brazil-Malvinas Confluence (BMC). The sediments below the confluence mixing zone of the Malvinas and Brazil Currents and its extensions are characterised by high concentrations of organic carbon, low carbonate contents and high proportions of the intermediate grain-size end member. Tracing these properties, the BMC emerges as a distinct north-south striking feature centered at 52-54°W, crossing the continental margin diagonally. Adjacent to this prominent feature in the southwest, the direct detrital sediment discharge of the Rio de la Plata is clearly recognised by a downslope tongue of sand and high proportions of the coarsest EM. A similar coarse grain-size composition extends further south along the continental slope; however, it displays better sorting due to intense winnowing by the vigorous Malvinas Current. Fine-grained deposition zones are located at the southwestern deeper part of the Rio Grande Rise and the southern abyssal Brazil Basin, both within the AABW domain. Less conspicuous winnowing/accumulation patterns are indicated north of the Rio de la Plata within the NADW level, according to the continental margin topography. We demonstrate that combined bulk geochemical and grain-size properties of surface sediments, unmixed with an end-member algorithm, provide a powerful tool to reconstruct the complex interplay of sedimentology and oceanography along a time slice.
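
The end-member modelling algorithm used by the authors is not reproduced here; as a rough analogue of the unmixing idea, the sketch below factorises a set of synthetic grain-size spectra into three non-negative end members and their mixing proportions with scikit-learn's NMF.

```python
# Rough analogue of end-member unmixing via non-negative matrix factorisation.
# Synthetic data only; the paper uses a dedicated end-member modelling algorithm.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_classes = 40                                    # grain-size classes of the silt fraction

# three synthetic unimodal "end member" spectra (rows sum to 1)
centres, widths = [8, 20, 32], [3.0, 4.0, 3.5]
x = np.arange(n_classes)
ems = np.array([np.exp(-0.5 * ((x - c) / w) ** 2) for c, w in zip(centres, widths)])
ems /= ems.sum(axis=1, keepdims=True)

# synthetic samples = random mixtures of the end members plus a little noise
props = rng.dirichlet(np.ones(3), size=60)        # mixing proportions per sample
samples = np.clip(props @ ems + rng.normal(0, 1e-4, (60, n_classes)), 0, None)

model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
loadings = model.fit_transform(samples)           # per-sample end-member contributions
recovered_ems = model.components_                 # recovered end-member spectra
print("reconstruction error:", round(model.reconstruction_err_, 4))
```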

Relevance: 100.00%

Abstract:

Carbon and hydrogen concentrations and isotopic compositions were measured in 19 samples of altered oceanic crust cored in ODP/IODP Hole 1256D, through the lavas and dikes down to the gabbroic rocks. Bulk water content varies from 0.32 to 2.14 wt% with δD values from -64‰ to -25‰. All samples are enriched in water relative to fresh basalts. The δD values are interpreted in terms of mixing between magmatic water and another source, which can be secondary hydrous minerals and/or H contained in organic compounds such as hydrocarbons. Total CO2, extracted by a step-heating technique, ranges between 564 and 2823 ppm, with δ13C values from -14.9‰ to -26.6‰. As for water, these altered samples are enriched in carbon relative to fresh basalts. The carbon isotope compositions are interpreted in terms of mixing between two components: (1) a carbonate with δ13C = -4.5‰ and (2) an organic compound with δ13C = -26.6‰. A mixing model calculation indicates that, for most samples (17 of 19), more than 75% of the total C occurs as organic compounds, while carbonates represent less than 25%. This result is also supported by independent estimates of the carbonate content from the CO2 yield after H3PO4 attack. A comparison between the carbon concentration in our samples, seawater DIC (Dissolved Inorganic Carbon) and DOC (Dissolved Organic Carbon), and hydrothermal fluids suggests that CO2 degassed from magmatic reservoirs is the main source of the organic C added to the crust during the alteration process. A reduction step of dissolved CO2 is thus required, which may or may not be biologically mediated. Abiotic processes are necessary for the deeper part of the crust (>1000 mbsf), because alteration temperatures there exceed the limit of any known hyperthermophilic organism (T > 110°C). Even if not required, we cannot rule out a contribution of microbial activity in the low-temperature alteration zones. We propose a two-step model for carbon cycling during crustal alteration: (1) when "fresh" oceanic crust forms at or close to the ridge axis, alteration starts with hot hydrothermal fluids enriched in magmatic CO2, leading to the formation of organic compounds through Fischer-Tropsch-type reactions; (2) as the crust moves away from the ridge axis, these interactions with hot hydrothermal fluids wane and are replaced by seawater interactions, with carbonate precipitation in fractures. Taking this organic carbon into account, we estimate the C isotope composition of the mean altered oceanic crust at about -4.7‰, similar to the δ13C of the C degassed from the mantle at the ridge axis, and discuss the global carbon budget. The total flux of C stored in the altered oceanic crust, as carbonate and organic compounds, is 2.9 ± 0.4 × 10^12 mol C/yr.
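
The organic versus carbonate partition follows from a linear two-end-member isotope mass balance using the δ13C values quoted above; the sketch below applies it to a hypothetical bulk measurement.

```python
# Two-end-member carbon isotope mixing balance.
# End-member values are those quoted in the abstract; the sample value is hypothetical.
d13c_carb = -4.5    # per mil, carbonate end member
d13c_org = -26.6    # per mil, organic end member
d13c_bulk = -22.0   # per mil, hypothetical bulk measurement

# mass balance: d13c_bulk = f_org * d13c_org + (1 - f_org) * d13c_carb
f_org = (d13c_bulk - d13c_carb) / (d13c_org - d13c_carb)
print(f"organic fraction: {f_org:.2f}, carbonate fraction: {1 - f_org:.2f}")
```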

Relevance: 100.00%

Abstract:

The diabases cut across the ophiolites as parallel, variably thick dyke swarms. The geochemistry of the diabases reveals three distinct groups: a) a supra-subduction zone (SSZ) type, characterized by a marked Nb anomaly and a normal mid-ocean ridge basalt (N-MORB)-like HFSE distribution; b) an enriched MORB (E-MORB) type, showing some degree of enrichment relative to N-MORB; and c) an oceanic-island basalt (OIB) type with characteristic hump-backed trace element patterns coupled with a fractionated REE distribution.

Relevance: 100.00%

Abstract:

Calcareous nannoplankton analyses of late Quaternary sediments from the eastern North Atlantic ODP Site 980 (55°29'N, 14°42'W) provide detailed insight into the palaeoceanographic and palaeoclimatic changes that occurred throughout Termination II and the adjacent interglacial Marine Isotope Stage (MIS) 5. This study presents the development of the coccolith assemblage throughout the interglacial MIS 5 towards the beginning of the glacial MIS 4 in the vicinity of the Rockall Plateau, and investigates and characterises the impact of climatic and environmental variations on the coccolith assemblage distribution between 135 and 65 ky. In general, the coccolith assemblage is dominated by Gephyrocapsa muellerae and Emiliania huxleyi, while significant changes in palaeoceanographic and palaeoclimatic conditions are mainly expressed by variations of subordinate species. A drastic increase in coccolith accumulation rates and a change from a less diverse to a more diverse species assemblage indicate a rapid increase in surface water temperatures during the onset of MIS 5 from c. 127.5 ky on. Highest coccolith numbers, high numbers of taxa and a large diversity indicate highest coccolithophore primary productivity and peak interglacial conditions during MIS 5.5, owing to the strong influence of relatively warm surface water in this region. Coccolith numbers peak again around 120 ky and decline afterwards, but stay above glacial levels. The two cooling events of MIS 5.4 and 5.2 interrupt the generally warm conditions and are indicated by lowered coccolith numbers, a drop in thermophilic species and a reduction of species diversity. Decreasing coccolith numbers and a slightly reduced diversity indicate that environmental conditions deteriorated towards the onset of MIS 4. The analysis of the coccolith assemblage reveals that not only the stadial events MIS 5.4 and 5.2 were characterised by colder conditions; it furthermore supports the emerging notion that MIS 5.5 was terminated by a slight short-term cooling of the surface water around 124 ky.
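
Species diversity of such assemblages is commonly summarised with a diversity index; the abstract does not state which metric was used, so the sketch below simply computes the widely used Shannon index H' for a hypothetical count.

```python
# Shannon diversity index H' for an assemblage count (counts are hypothetical).
import numpy as np

counts = np.array([520, 310, 40, 25, 12, 8])  # coccoliths counted per taxon
p = counts / counts.sum()                     # relative abundances
h_prime = -np.sum(p * np.log(p))              # Shannon index H'
print(f"H' = {h_prime:.2f} over {len(counts)} taxa")
```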

Relevance: 100.00%

Abstract:

We present and evaluate a compiler from Prolog (and extensions) to JavaScript which makes it possible to use (constraint) logic programming to develop the client side of web applications while being compliant with current industry standards. Targeting JavaScript makes (C)LP programs executable in virtually every modern computing device with no additional software requirements from the point of view of the user. In turn, the use of a very high-level language facilitates the development of high-quality, complex software. The compiler is a back end of the Ciao system and supports most of its features, including its module system and its rich language extension mechanism based on packages. We present an overview of the compilation process and a detailed description of the run-time system, including the support for modular compilation into separate JavaScript code. We demonstrate the maturity of the compiler by testing it with complex code such as a CLP(FD) library written in Prolog with attributed variables. Finally, we validate our proposal by measuring the performance of some LP and CLP(FD) benchmarks running on top of major JavaScript engines.

Relevance: 100.00%

Abstract:

This thesis addresses methodologies for computing the collision risk of a satellite. Collision risk minimisation has to be approached from two different points of view. On an operational basis, it is necessary to sieve, among all objects sharing space with an operational satellite, those that may give rise to an encounter. Since the orbits of the satellite and of the potential collider are not perfectly known but only estimated, the encounter geometry and the actual collision risk must be evaluated, and depending on that geometry or risk an avoidance manoeuvre may be required. Such manoeuvres consume fuel otherwise available for orbit maintenance and may therefore shorten the satellite's operational lifetime. The avoidance-manoeuvre fuel budget must thus be estimated at mission design phase for a proper estimation of mission lifetime, especially for satellites orbiting in heavily populated orbital regimes. These two aspects, mission design and operational collision risk, are summarised in Figure 3 and covered in this thesis.

The lower part of that figure identifies the aspects to be considered at mission design phase: the statistical characterisation of the space object population and the theory for computing the mean number of encounters and the risk reduction capability, which together define the most appropriate collision avoidance approach for the operational phase. This part starts from the theory described in [Sánchez-Ortiz, 2006]T.14 and implemented by this author in the ARES tool [Sánchez-Ortiz, 2004b]T.15 provided by ESA for the evaluation of collision avoidance approaches. The methodology is extended here to account for the particular features of the orbital data available during operations (section 4.3.3), to allow computing a maximum collision risk when the orbital uncertainty of catalogued objects is not available (as is the case for TLE data), and to consider only catastrophic collisions (section 4.3.2.3). These improvements have been included in the new version of the ESA ARES tool [Domínguez-González and Sánchez-Ortiz, 2012b]T.12, available through [SDUP,2014]R.60.

At the operational phase, catalogue data are processed on a routine basis with adequate collision risk algorithms in order to propose avoidance manoeuvres for the identified conjunctions (manoeuvre optimisation itself is not addressed in this document). Currently, the American Two Line Elements (TLE) catalogue is the only public source of orbital data for identifying conjunction events. In addition, the Joint Space Operation Center (JSpOC) issues Conjunction Summary Messages (CSM) when the American surveillance system identifies a possible collision. Depending on the data used (TLE or CSM), the avoidance approach may differ, so the main characteristics of the available data, in particular their orbital accuracy, need to be analysed in order to estimate the conjunction events a satellite will encounter over its lifetime. TLE data are not provided with accuracy information, so accuracy derived from a statistical analysis can be used both operationally and at mission design; CSM data, in contrast, include the state vectors and the orbital accuracy of the two objects involved. Both data sets have been characterised statistically with respect to the aspects relevant to collision avoidance, and the impact of using TLE or CSM data on satellite operations has then been analysed (section 5.1). This analysis has been published in a peer-reviewed journal [Sánchez-Ortiz, 2015b]T.3.

The analysis provides recommendations for different mission types (satellite size and orbital regime) on the collision avoidance strategy required for a relevant risk reduction. The risk reduction capability depends strongly on the accuracy of the catalogue used to identify conjunctions, and CSM-based approaches are recommended over TLE-based ones. Approaches based on the maximum risk associated with the envisaged encounters are shown to raise a very large number of events, which makes them unsuitable for operational activities. Accepted Collision Probability Levels (ACPL) are recommended for the definition of avoidance strategies for the different mission types. For a LEO satellite in a Sun-synchronous orbit, for example, the widely used ACPL of 10^-4 is not suitable for TLE-based collision avoidance: the risk reduction capacity is almost null, owing to the large uncertainties of TLE data, even for short prediction times. A significant risk reduction with TLE data would require an ACPL of about 10^-6 or lower, producing about 10 warnings per satellite per year for one-day-ahead predictions, or about 100 warnings per year for three-day-ahead predictions. The main conclusion is therefore that TLE data are not suitable for a proper collision avoidance approach. With CSM data, by contrast, and thanks to their better orbital accuracy, an ACPL of about 10^-4 significantly reduces the risk for events predicted up to 3 days ahead; 5-day predictions can be used with an ACPL of about 10^-5, and even 7-day predictions are possible with a risk reduction of about 90% at the cost of up to 5 warnings per year, whereas 5-day predictions keep the manoeuvre rate at about 2 per year.

The dynamics of GEO orbits differ from those in LEO, leading to a slower growth of the orbital uncertainties with propagation time; on the other hand, the uncertainties from orbit determination are larger than in LEO because of the different observation capabilities, and prediction times adequate for LEO may be too short for a GEO mission, whose orbital period is much longer. With TLE data, a significant risk reduction in GEO is achieved only for very small ACPL values, producing about one warning per year when events are predicted one day in advance (too short a time to implement an avoidance manoeuvre). Suitable ACPL values would lie between 5x10^-8 and 10^-7, well below the values used in current operations of most GEO missions; TLE-based avoidance strategies are therefore not recommended in this regime either. CSM data allow an appropriate risk reduction with an ACPL between 10^-5 and 10^-4 for short and medium prediction times, with 10^-5 recommended for prediction times of five to seven days; the corresponding manoeuvre rate is about one in a 10-year mission. These results are computed for a spacecraft of about 2 m radius; the impact of satellite size is also analysed in the thesis.

In the future, other space surveillance systems (such as ESA's SSA programme) may provide additional catalogues of space objects with the aim of reducing the collision risk of satellites. To define such systems, the required catalogue performances must be identified as a function of the intended risk reduction. The catalogue characteristics that mainly drive this capability are the coverage (the number of objects included in the catalogue, limited chiefly by the minimum object size detectable by the sensors) and the accuracy of the orbital data (derived from the sensor measurement accuracy and the re-observation capability). The results of this analysis (section 5.2) have been published in a peer-reviewed journal [Sánchez-Ortiz, 2015a]T.2. This analysis was not initially foreseen in the thesis; it shows how the theory, initially defined to support mission design (upper part of Figure 1), can be extended and applied to other purposes such as dimensioning a Space Surveillance and Tracking (SST) system (lower part of Figure 1). The main difference between the two analyses is that the cataloguing capabilities (accuracy and size of the observed objects) are a design variable for an SST system, whereas they are fixed inputs for a mission design. Regarding the outputs, all quantities computed in a statistical conjunction analysis matter for mission design (whose objective is to define the avoidance strategy and the fuel allocation), whereas for SST design the most relevant aspects are the manoeuvre and false alarm rates (reliability of the system) and the risk reduction capability (effectiveness of the system). In addition, an SST system should be characterised by its capacity to avoid catastrophic conjunctions, thereby preventing a dramatic growth of the space debris population, whereas mission design must consider all types of encounters, since an operator is interested in avoiding both lethal and catastrophic collisions. From the analysis of the performances required of an SST system (object coverage and orbital accuracy) it is concluded that the two requirements must be set differently for the different orbital regimes, as the population differs between them: in LEO, objects down to 5 cm need to be observed, whereas in GEO this requirement relaxes to 100 cm to cover catastrophic collisions, the difference stemming mainly from the different relative velocities of encounters in the two regimes. Regarding orbital accuracy, very accurate orbits are required in LEO to limit the number of false alarms, whereas intermediate accuracy can be accepted in higher orbital regimes.

Regarding the operational computation of collision risk, several algorithms exist for evaluating the risk between two space objects; Figure 2 summarises the cases and how they are covered in this work. The typical high relative velocity case with spherical objects (case A) is widely covered in the literature, with a large number of available algorithms, and is not analysed in detail here; a sample case is provided in section 4.2. Considering the real shape of the objects (case B) allows a more precise risk evaluation. A new algorithm for computing the collision risk when at least one of the objects has a complex geometry, modelled as a set of boxes, is defined in this thesis (section 4.4.2) and has been presented at several international conferences. A dedicated Monte Carlo method has also been defined and implemented (section 4.1.2.3) to detect actual collisions between boxes over a large number of simulation shots, since the simple intersection checks applicable to spherical objects do not apply here; these Monte Carlo runs are taken as the truth against which the algorithm results are compared (section 4.4.4). For spacecraft that cannot be treated as spheres, considering the real geometry allows discarding events that are not real conjunctions, or estimating the associated risk more reliably. Given the accuracy of current catalogues, this is mainly relevant for objects of large dimensions; as surveillance systems improve and orbits become known more precisely, considering the actual shape of the satellites will become increasingly relevant. The particular case of a very large system (a tethered satellite) is analysed in section 5.4.

If the two objects involved in the conjunction have a low relative velocity (and simple geometries, case C in Figure 2), most algorithms are not applicable and dedicated formulations are required. A low relative velocity algorithm from the literature [Patera, 2001]R.26 is described and evaluated here for different types of events (section 4.5), including a comparison with the Monte Carlo approach (section 4.5.2). The conclusion is that this algorithm is suitable for the most common encounter characteristics and it is therefore adopted for low relative velocity collisions. In particular, the need for dedicated low-velocity algorithms depends on both the size of the collision volume projected onto the encounter plane (B-plane) and the uncertainty of the relative position vector between the two objects; for large uncertainties such algorithms become more necessary, since the interval during which the error ellipsoids of the two objects can intersect is longer. The algorithm has also been tested in combination with the complex geometry algorithm developed in this thesis (case D), showing that it can easily be extended to different collision risk estimation schemes suitable for complex geometry objects (section 4.5.3).

Both algorithms, together with the Monte Carlo method for complex geometries, have been implemented in the ESA operational tool CORAM, which is used to evaluate the collision risk in the routine activities of ESA-operated satellites [Sánchez-Ortiz, 2013a]T.11, a fact that shows the interest and relevance of the developed algorithms for improving satellite operations. The algorithms have been presented at several international conferences [Sánchez-Ortiz, 2013b]T.9, [Pulido, 2014]T.7, [Grande-Olalla, 2013]T.10, [Pulido, 2014]T.5, [Sánchez-Ortiz, 2015c]T.1.
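
For the baseline spherical-object, high relative velocity case (case A above), the collision probability amounts to integrating the relative-position uncertainty over a disc of combined radius in the encounter B-plane. The Monte Carlo sketch below illustrates only that baseline case, with hypothetical numbers; it is neither the box-geometry algorithm developed in the thesis nor Patera's low relative velocity formulation.

```python
# Monte Carlo collision probability for two spherical objects in the B-plane.
# Baseline "case A" illustration with hypothetical numbers only.
import numpy as np

rng = np.random.default_rng(42)

combined_radius = 12.0                   # m, sum of the two object radii (hypothetical)
miss_mean = np.array([150.0, 80.0])      # m, nominal miss vector in the B-plane (hypothetical)
cov = np.array([[250.0**2, 0.3 * 250.0 * 120.0],
                [0.3 * 250.0 * 120.0, 120.0**2]])  # m^2, projected covariance (hypothetical)

n_shots = 2_000_000
samples = rng.multivariate_normal(miss_mean, cov, size=n_shots)
p_collision = (np.linalg.norm(samples, axis=1) < combined_radius).mean()
print(f"estimated collision probability: {p_collision:.2e}")
```

A Monte Carlo of this kind, extended to box-shaped geometries, is what the thesis uses as the reference truth for validating the analytical algorithms.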

Relevance: 100.00%

Abstract:

Inositol polyphosphate 4-phosphatase (4-phosphatase) is an enzyme that catalyses the hydrolysis of the 4-position phosphate from phosphatidylinositol 3,4-bisphosphate [PtdIns(3,4)P2]. In human platelets the formation of this phosphatidylinositol, by the actions of phosphatidylinositol 3-kinase (PI 3-kinase), correlates with irreversible platelet aggregation. We have shown previously that a phosphatidylinositol 3,4,5-trisphosphate 5-phosphatase forms a complex with the p85 subunit of PI 3-kinase. In this study we investigated whether PI 3-kinase also forms a complex with the 4-phosphatase in human platelets. Immunoprecipitates of the p85 subunit of PI 3-kinase from human platelet cytosol contained 4-phosphatase enzyme activity and a 104-kDa polypeptide recognized by specific 4-phosphatase antibodies. Similarly, immunoprecipitates made using 4-phosphatase-specific antibodies contained PI 3-kinase enzyme activity and an 85-kDa polypeptide recognized by antibodies to the p85 adapter subunit of PI 3-kinase. After thrombin activation, the 4-phosphatase translocated to the actin cytoskeleton along with PI 3-kinase in an integrin- and aggregation-dependent manner. The majority of the PI 3-kinase/4-phosphatase complex (75%) remained in the cytosolic fraction. We propose that the complex formed between the two enzymes serves to localize the 4-phosphatase to sites of PtdIns(3,4)P2 production.