806 results for Conservative Design Approach


Relevance:

80.00%

Publisher:

Abstract:

The Privacy by Design approach to systems engineering introduces privacy requirements in the early stages of development, instead of patching up a built system afterwards. However, 'vague', 'disconnected from technology' and 'aspirational' are some of the terms used nowadays to describe the privacy principles that are supposed to guide the development process. Although privacy has become a first-class citizen in the realm of non-functional requirements, and some methodological frameworks help developers by providing design guidance, software engineers often lack a solid reference detailing the specific, technical requirements they must abide by, and a systematic methodology to follow. In this position paper, we look into a domain that has already successfully tackled these problems, web accessibility, and propose translating its findings into the realm of privacy requirements engineering, analyzing as well the gaps not yet covered by current privacy initiatives.

Relevance:

80.00%

Publisher:

Abstract:

There are about 1,300 large dams in Spain, 20% of which were built before 1960. The fact that many old dams are still in operation has produced a growing interest in re-evaluating their safety using new or updated tools that incorporate more complete failure models, more advanced geotechnical concepts and new safety assessment techniques. For gravity dams, for instance, a very common design approach considers sliding along the dam-foundation interface, using the linear Mohr-Coulomb failure criterion, in which cohesion and friction angle define the shear strength of the contact surface. However, aspects such as the presence of planes of weakness in the foundation rock mass, the influence of other failure criteria for the joint and for the rock mass (e.g. the Hoek-Brown criterion), or the volumetric strains that occur during plastic failure of the rock mass (i.e., the influence of dilatancy) are usually not considered in the original design of the dam. In this context, this doctoral thesis proposes an analytical methodology for the sliding stability analysis of concrete dams, considering a failure mechanism in the foundation characterized by the presence of a joint set. In particular, the possibility of a pre-existing, sub-horizontal joint in the foundation rock mass is considered, with a potential failure surface that extends through the rock mass. The safety factor is then estimated by combining the strengths along the failure planes: the non-linear failure criteria of Barton and Choubey (1977) and Barton and Bandis (1990) along the sliding joint, and the Hoek and Brown (1980) failure criterion in its generalized form (Hoek et al. 2002) through the rock mass. The proposed methodology also considers the behaviour of the rock mass under a non-associated flow rule with a constant dilation angle (Hoek and Brown 1997). The new analytical methodology is used to evaluate the stability conditions with two models, a deterministic one and a probabilistic one, whose results are the sliding safety factor and the probability of sliding failure, respectively. The deterministic model, implemented in MATLAB, is validated against numerical solutions computed with the finite difference code FLAC 6.0. The proposed model provides results that are very similar to those computed with FLAC; however, since the new formulation can also be implemented in a spreadsheet, its computational cost is significantly lower, which facilitates sensitivity analyses of the influence of the different input parameters on dam safety. From these analyses, the parameters with the greatest weight in the sliding stability of the structure are identified, and the influence of the flow rule on the failure of the rock mass becomes apparent. The probability of failure is obtained using the First Order Reliability Method (FORM), and the FORM results are then validated by means of Monte Carlo simulations. For the non-associated case, the probabilities of failure obtained with both methodologies agree satisfactorily. The results of the associated case are not as good, with errors between 0.7% and 66%, although good agreement is still obtained for cases at, or close to, limit equilibrium. Computational efficiency is the main advantage of FORM for the sliding stability analysis of concrete dams: Monte Carlo simulations require at least 4 hours per run, whereas FORM requires only 1 to 3 minutes per run.
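The baseline interface check mentioned in the abstract (linear Mohr-Coulomb with cohesion and friction angle) and the idea of a Monte Carlo estimate of the probability of sliding failure can be illustrated with a minimal sketch. This is not the thesis's Barton-Bandis/Hoek-Brown formulation or its FORM implementation; all loads, geometry and parameter distributions below are invented for the example.

```python
import numpy as np

def sliding_safety_factor(W, U, H, c, phi_deg, area):
    """Classic Mohr-Coulomb sliding check at the dam-foundation interface.

    W       : dam self-weight [kN]
    U       : uplift force on the interface [kN]
    H       : resultant horizontal (hydrostatic) load [kN]
    c       : cohesion of the interface [kPa]
    phi_deg : friction angle of the interface [degrees]
    area    : contact area of the interface [m^2]
    """
    resisting = c * area + (W - U) * np.tan(np.radians(phi_deg))
    return resisting / H

# Illustrative (made-up) values for a small gravity dam section.
W, U, H, area = 30000.0, 8000.0, 12000.0, 60.0

# Deterministic check with mean strength parameters.
fs = sliding_safety_factor(W, U, H, c=150.0, phi_deg=40.0, area=area)
print(f"deterministic FS = {fs:.2f}")

# Crude Monte Carlo estimate of the probability of sliding failure,
# treating cohesion and friction angle as normally distributed.
rng = np.random.default_rng(0)
n = 200_000
c_samples = rng.normal(150.0, 40.0, n)    # cohesion samples [kPa]
phi_samples = rng.normal(40.0, 4.0, n)    # friction angle samples [deg]
fs_samples = sliding_safety_factor(W, U, H, c_samples, phi_samples, area)
print(f"P(FS < 1) ~ {np.mean(fs_samples < 1.0):.2e}")
```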

Relevance:

80.00%

Publisher:

Abstract:

A methodology for developing an advanced communications system for the Deaf in a new domain is presented in this paper. This methodology is a user-centred design approach consisting of four main steps: requirement analysis, parallel corpus generation, technology adaptation to the new domain and, finally, system evaluation. During the requirement analysis, both the user and the technical requirements are evaluated and defined. To generate the parallel corpus, it is necessary to collect Spanish sentences in the new domain and translate them into LSE (Lengua de Signos Española: Spanish Sign Language). LSE is represented by glosses and by video recordings. This corpus is used to train the two main modules of the advanced communications system for the new domain: the module that translates spoken Spanish into LSE and the module that generates spoken Spanish from LSE. The main elements to be generated are the vocabularies for both languages (Spanish words and signs) and the knowledge needed to translate in both directions. Finally, a field evaluation is carried out with deaf people using the advanced communications system to interact with hearing people in several scenarios. For this evaluation, the paper proposes several objective and subjective performance measurements. The new domain considered in this paper is dialogues at a hotel reception desk. Using this methodology, the system was developed in several months and achieved very good performance: good translation rates (10% Sign Error Rate) with short processing times, allowing face-to-face dialogues.
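The abstract reports a 10% Sign Error Rate but does not define the metric here. Below is a minimal sketch, assuming SER is computed like a word error rate over sign glosses (edit distance divided by reference length); the gloss sequences are hypothetical.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two gloss sequences."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)]

def sign_error_rate(ref_glosses, hyp_glosses):
    """SER = (substitutions + deletions + insertions) / reference length."""
    return edit_distance(ref_glosses, hyp_glosses) / len(ref_glosses)

# Hypothetical example: reference LSE glosses vs. system output.
ref = "HOTEL ROOM ONE NIGHT PRICE QUESTION".split()
hyp = "HOTEL ROOM NIGHT PRICE QUESTION".split()
print(f"SER = {sign_error_rate(ref, hyp):.0%}")  # one deletion out of six glosses
```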

Relevance:

80.00%

Publisher:

Abstract:

This thesis addresses methodologies for computing the collision risk of a satellite. Collision risk minimisation needs to be considered from two different points of view. On an operational basis, all objects sharing the space with an operational satellite must be screened to identify those that may lead to an encounter. Since the orbits of the satellite and of the potential collider are not perfectly known but only estimated, the encounter geometry and the actual collision risk have to be evaluated and, depending on that geometry or risk, an avoidance manoeuvre may be required to avoid the conjunction. Such manoeuvres consume fuel that would otherwise be available for orbit maintenance and may therefore reduce the satellite's operational lifetime. The avoidance manoeuvre fuel budget must thus be estimated at mission design phase for a proper definition of the mission lifetime, especially for satellites orbiting in very populated orbital regimes. These two aspects, mission design and operational collision risk, are summarised in Figure 3 and are both covered in this thesis. The bottom part of the figure identifies the aspects to be considered in the mission design phase (statistical characterisation of the space object population and the theory for computing the mean number of encounters faced by a mission and its risk reduction capability), which define the most appropriate collision avoidance approach for the operational phase. This part starts from the theory described in [Sánchez-Ortiz, 2006]T.14 and implemented by the author in the ARES tool [Sánchez-Ortiz, 2004b]T.15 provided by ESA for the evaluation of collision avoidance strategies. That theory is extended here to account for the particular features of the orbital data available during the operational phase of a satellite (section 4.3.3). Additionally, the formulation is extended to evaluate the maximum collision risk when the uncertainty of the catalogued orbits is not known (as is the case for TLE data) and to consider only catastrophic collisions (section 4.3.2.3). These improvements have been included in the new version of the ARES tool [Domínguez-González and Sánchez-Ortiz, 2012b]T.12, available through [SDUP,2014]R.60.
In the operational phase, the catalogues providing orbital data of space objects are processed routinely to identify possible encounters, which are then analysed with collision risk computation algorithms in order to propose avoidance manoeuvres; the optimisation of such manoeuvres is not addressed in this document. Currently, the American Two-Line Element (TLE) catalogue is the only public source of orbital data for identifying possible conjunction events. In addition, the American Joint Space Operation Center (JSpOC) issues Conjunction Summary Messages (CSM) when its surveillance system identifies a possible encounter. Depending on the data used for collision avoidance (TLE or CSM), the avoidance strategy may differ because of the characteristics of the information, and the main features of the available data (in particular the accuracy of the orbital information) must be known in order to estimate the collision events a satellite will face along its lifetime. In the case of TLE, whose orbital accuracy is not provided, accuracy information derived from a statistical analysis can be used both in the operational process and in mission design. When CSM data are used as the basis of collision avoidance operations, the orbital accuracy of the two objects involved is known. These characteristics have been analysed in detail, evaluating statistically both types of data, and the impact of using TLE or CSM data in satellite operations has then been assessed (section 5.1). This analysis is published in a peer-reviewed journal [Sánchez-Ortiz, 2015b]T.3. It provides recommendations for different mission types (satellite size and orbital regime) regarding the collision avoidance strategies that achieve a significant risk reduction. The risk reduction capability depends strongly on the accuracy of the catalogue used to identify possible collisions; approaches based on CSM data are recommended over TLE-based approaches, and approaches based on the maximum risk associated with the envisaged encounters are shown to produce a very large number of events, which makes them unsuitable for operational activities. Accepted Collision Probability Levels (ACPL) are recommended for the definition of avoidance strategies for different mission types. For example, for a satellite in a Sun-synchronous LEO orbit, the widely used ACPL value of 10^-4 is not suitable for collision avoidance schemes based on TLE data: in that case the risk reduction capability is practically null (owing to the large uncertainties of TLE data), even for short prediction times. A significant risk reduction with TLE data would require an ACPL of around 10^-6 or lower, producing about 10 warnings per year and satellite (for one-day-ahead predictions) or 100 warnings per year (for three-day-ahead predictions). The main conclusion is therefore the lack of suitability of TLE data for collision avoidance. On the contrary, with CSM data and thanks to their better orbital accuracy, a significant risk reduction can be obtained with an ACPL of around 10^-4 for predictions up to 3 days ahead; events predicted 5 days ahead can also be considered with an ACPL of around 10^-5, and even longer prediction times (7 days) can be used with a risk reduction of about 90% and some 5 warnings per year (with 5-day predictions the manoeuvre rate stays at about 2 per year).
The dynamics in GEO differ from those in LEO, so orbital uncertainties grow more slowly with propagation time; on the other hand, the uncertainties resulting from orbit determination are worse than in LEO because of the different observation capabilities of the two regimes. It must also be considered that the prediction times appropriate for LEO may not be appropriate for a GEO satellite, since its orbital period is much longer. In this regime, using TLE data, a significant risk reduction is only achieved with very small ACPL values, producing about one warning per year when collision events are predicted one day ahead (too short a time to implement avoidance manoeuvres). More suitable ACPL values lie between 5×10^-8 and 10^-7, well below the values used in current operations of most GEO missions (again, basing collision avoidance strategies on TLE data is not recommended in this regime). CSM data allow an adequate risk reduction with ACPL values between 10^-5 and 10^-4 for short and medium prediction times; 10^-5 is recommended for predictions 5 or 7 days ahead, with about one manoeuvre in 10 years of mission. It must be noted that these results correspond to a satellite of about 2 m radius; the impact of satellite size is also analysed in the thesis.
In the future, other space surveillance systems (such as ESA's SSA programme) will provide additional catalogues of space objects with the aim of reducing the collision risk of satellites. To define such surveillance systems, the catalogue performances required to achieve the intended risk reduction must be identified. The catalogue characteristics that mainly affect this capability are the coverage (the number of objects included in the catalogue, limited principally by the minimum object size detectable by the sensors) and the accuracy of the orbital data (derived from the sensor measurement accuracy and the object re-observation capability). The results of this analysis (section 5.2) have been published in a peer-reviewed journal [Sánchez-Ortiz, 2015a]T.2. This analysis was not initially foreseen in the thesis, and it shows how the theory described here, initially defined to support mission design (upper part of Figure 1), has been extended and can be applied to other purposes such as dimensioning a Space Surveillance and Tracking (SST) system (bottom part of Figure 1). The main difference between the two analyses is that the cataloguing capabilities (accuracy and size of the observed objects) are variables to be adjusted when designing a surveillance system, whereas they are fixed inputs in mission design. Regarding the outputs, all the quantities computed in a statistical collision risk analysis are important for mission design (with the aim of defining the avoidance strategy and the fuel to be allocated), whereas for the design of a surveillance system the most relevant aspects are the number of manoeuvres and false alarms (the reliability of the system) and the risk reduction capability (the effectiveness of the system). Additionally, a space surveillance system must be characterised by its capability to avoid catastrophic collisions (thus preventing a dramatic increase of the space debris population), whereas mission design must consider all types of encounters, since an operator is interested in avoiding both catastrophic and lethal collisions. From the analysis of the performances required of a space surveillance system (size of objects to be catalogued and orbital accuracy) it is concluded that both aspects have to be set differently for the different orbital regimes. In LEO it is necessary to observe objects down to 5 cm radius, whereas in GEO this requirement is relaxed to 100 cm to cover catastrophic collisions; the main reason for this difference is the different relative velocities between objects in the two regimes. Regarding orbital accuracy, it has to be very good in LEO to limit the number of false alarms, whereas intermediate accuracies can be considered for higher orbital regimes.
Regarding the operational computation of collision risk, several algorithms exist for evaluating the risk between two space objects. Figure 2 summarises the different collision risk algorithm cases and how they are addressed in this thesis. Objects are normally assumed to be spherical to simplify the risk computation (case A); this case is widely covered in the literature and is not analysed in detail here, and only a sample case is provided in section 4.2. Considering the real shape of the objects (case B) allows the risk to be computed more precisely. A new algorithm is defined in this thesis to compute the collision risk when at least one of the objects is modelled as a complex geometry (section 4.4.2). The algorithm computes the collision risk for objects formed by a set of boxes and has been presented at several international conferences. To evaluate its performance, its results have been compared with a Monte Carlo analysis defined to handle collisions between boxes properly (section 4.1.2.3), since the simple collision checks applicable to spherical objects do not apply to this case; these Monte Carlo runs are taken as the reference truth, and the comparison is presented in section 4.4.4. For satellites that cannot be considered spherical, using a model of the satellite geometry makes it possible to discard events that are not real collisions, or to estimate the risk associated with an event more precisely. The use of these complex-geometry algorithms is more relevant for large objects, given the orbital accuracy of current catalogues; as surveillance systems improve and orbits become known more precisely, considering the real geometry of satellites will become increasingly relevant. Section 5.4 presents an example for a very large system (a satellite with a tether). Additionally, if the two objects involved in the conjunction have a low relative velocity (and simple geometries, case C in Figure 2), most algorithms are not applicable and dedicated implementations are required. In this thesis, one such algorithm from the literature [Patera, 2001]R.26 is analysed to determine its suitability for different types of events (section 4.5), and its evaluation against a Monte Carlo analysis is provided in section 4.5.2. From this analysis it is considered adequate for low-velocity collisions. In particular, it is concluded that the need for dedicated low-velocity algorithms depends on the size of the collision volume projected onto the encounter plane (B-plane) and on the size of the uncertainty associated with the relative position vector between the two objects: for large uncertainties these algorithms become more necessary, because the interval during which the error ellipsoids of the two objects may intersect is longer. The algorithm has also been tested in combination with the complex-geometry collision algorithm (case D in Figure 2), and the results show that it can easily be extended to different collision risk computation approaches (section 4.5.3).
Both algorithms, together with the Monte Carlo method for complex geometries, have been implemented in CORAM, the ESA operational tool used to evaluate collision risk in the routine activities of ESA-operated satellites [Sánchez-Ortiz, 2013a]T.11. This fact shows the interest and relevance of the developed algorithms for the improvement of satellite operations. The algorithms have been presented at several international conferences [Sánchez-Ortiz, 2013b]T.9, [Pulido, 2014]T.7, [Grande-Olalla, 2013]T.10, [Pulido, 2014]T.5, [Sánchez-Ortiz, 2015c]T.1.
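The abstract describes Monte Carlo runs as the reference truth and ACPL thresholds as the manoeuvre trigger. The sketch below is only an illustrative Monte Carlo estimate for the simple spherical-object case (case A), not the thesis's algorithms for complex geometries or low relative velocity; the encounter values and the 2-D Gaussian B-plane model are assumptions made for the example.

```python
import numpy as np

def mc_collision_probability(miss_vector, covariance, combined_radius,
                             n=1_000_000, seed=0):
    """Monte Carlo collision probability for two spherical objects (case A).

    miss_vector     : nominal 2-D miss distance on the encounter (B-) plane [m]
    covariance      : 2x2 covariance of the combined position uncertainty [m^2]
    combined_radius : sum of the two object radii [m]
    """
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(miss_vector, covariance, size=n)
    hits = np.linalg.norm(samples, axis=1) < combined_radius
    return hits.mean()

# Hypothetical conjunction: ~120 m nominal miss distance, anisotropic uncertainty.
p = mc_collision_probability(
    miss_vector=[100.0, 60.0],
    covariance=[[200.0**2, 0.0], [0.0, 80.0**2]],
    combined_radius=6.0,   # e.g. two objects of roughly 2 m and 4 m radius
)

ACPL = 1e-4  # accepted collision probability level triggering an avoidance manoeuvre
print(f"P(collision) ~ {p:.2e}; manoeuvre required: {p > ACPL}")
```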

Relevance:

80.00%

Publisher:

Abstract:

“CAN LIS. The footprint of Jørn Utzon's architecture through his work Can Lis” is a thorough analysis of the house that Jørn Utzon built for himself between 1970 and 1974 on what was then called Calle de la Media Luna, in Porto Petro, Mallorca. The research examines this masterpiece of architecture, the causes and intentions behind it, its design process, its construction process, its relations and influences, and its ultimate meanings. The study is the result of a research effort that began more than 10 years ago. Approaching a work whose image is so well known is a less simple task than it might seem. Descriptions of Jørn Utzon's working process at Can Lis, and of his method in general, contain a considerable number of myths that have been commonly repeated by several renowned scholars, who approached this work without a reasonable study of the existing documentation, without research in the archives, and without checking the facts or consulting the witnesses involved. As a result, fundamental questions had remained in the realm of conjecture and the general view of Utzon's processes in Mallorca was distorted. The dissertation is structured in two main sections that together allow the building's genesis to be understood: on the one hand, the design and construction process, with its circumstances and constraints; on the other, the set of events and ideas that directly and significantly influenced its final form. The research contains a considerable amount of previously unpublished material and facts, as well as documentation and drawings never published before. It is therefore probably the most comprehensive study of Can Lis to date. The first section, “CAN LIS”, addresses the entire design process of Can Lis, from the first sketches, with their initial conditions and motivations, through the successive schemes, the basic design and the execution design, up to the construction process, its development and circumstances, and the modifications introduced afterwards. This material is organised in three parts: 1) The initial conditions and constraints. 2) The design process. 3) The construction process. The second section, “The footprint of Jørn Utzon's architecture through his work Can Lis”, analyses in particular the earlier architectural and personal experiences that were especially decisive in the conception of Can Lis, and the series of direct influences that are essential for understanding the conditions under which it was developed and the ideas it brings to light. A prior study of Utzon's whole career and of the full body of influences helped determine which of them are fundamental to understanding the process. These elements can be considered direct precursors of ideas developed specifically in Can Lis: 4) Experimentation in Australia: nature, the human dimension and technique. The Bayview house. 5) Utzon and the discovery of the Islamic world: spatial sequence, matter and light in Islamic architecture as a learning experience. 6) Sacred landscape, man and architecture: Utzon's quest in Greece. The conclusions of this study establish a series of certainties about Jørn Utzon's processes in Can Lis, and in his work as a whole, across his different creative phases, which demystify many earlier accounts: the universal vision of architecture in his approach to a project, Utzon's interwoven design method, his precise pursuit of the constructive development together with his exemplary rigour, the vitality and spirituality with which he approaches architecture, the importance of the way people inhabit a building, and the value that architecture has for Utzon as a means of revealing the universal order that surrounds us.

Relevance:

80.00%

Publisher:

Abstract:

We have inserted a fourth protein ligand into the zinc coordination polyhedron of carbonic anhydrase II (CAII) that increases metal affinity 200-fold (Kd = 20 fM). The three-dimensional structures of threonine-199→aspartate (T199D) and threonine-199→glutamate (T199E) CAIIs, determined by X-ray crystallographic methods to resolutions of 2.35 Å and 2.2 Å, respectively, reveal a tetrahedral metal-binding site consisting of H94, H96, H119 and the engineered carboxylate side chain, which displaces zinc-bound hydroxide. Although the stereochemistry of neither engineered carboxylate-zinc interaction is comparable to that found in naturally occurring protein zinc-binding sites, protein-zinc affinity is enhanced in T199E CAII, demonstrating that ligand-metal separation is a significant determinant of carboxylate-zinc affinity. In contrast, the three-dimensional structure of threonine-199→histidine (T199H) CAII, determined to 2.25-Å resolution, indicates that the engineered imidazole side chain rotates away from the metal and does not coordinate to zinc; this results in a weaker zinc-binding site. All three of these substitutions nearly obliterate CO2 hydrase activity, consistent with the role of zinc-bound hydroxide as the catalytic nucleophile. The engineering of an additional protein ligand represents a general approach for increasing protein-metal affinity if the side chain can adopt a reasonable conformation and achieve inner-sphere zinc coordination. Moreover, this structure-assisted design approach may be effective in the development of high-sensitivity metal ion biosensors.
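For scale, the reported affinities can be converted to free energies with the standard relation ΔG = RT ln Kd. The short sketch below assumes T = 298 K and that the 200-fold change refers to the dissociation constant; the derived energies are back-of-the-envelope figures, not values reported in the abstract.

```python
import math

R = 1.987e-3   # gas constant [kcal/(mol*K)]
T = 298.15     # assumed temperature [K]

def binding_free_energy(kd_molar):
    """Standard free energy of binding from the dissociation constant: dG = RT ln(Kd)."""
    return R * T * math.log(kd_molar)

dG_engineered = binding_free_energy(20e-15)   # Kd = 20 fM reported for the engineered site
ddG_200fold = R * T * math.log(200)           # energetic equivalent of 200-fold tighter binding

print(f"dG(Kd = 20 fM)   ~ {dG_engineered:.1f} kcal/mol")
print(f"ddG for 200-fold ~ {ddG_200fold:.1f} kcal/mol")
```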

Relevance:

80.00%

Publisher:

Abstract:

High-quality software, delivered on time and on budget, constitutes a critical part of most products and services in modern society. Our government has invested billions of dollars to develop software assets, often redeveloping the same capability many times. Recognizing the waste involved in redeveloping these assets, in 1992 the Department of Defense issued the Software Reuse Initiative. Its vision was "to drive the DoD software community from its current 're-invent the software' cycle to a process-driven, domain-specific, architecture-centric, library-based way of constructing software." Twenty years after the initiative was issued, there is evidence of this vision beginning to be realized in nonembedded systems. However, virtually every large embedded system undertaken has incurred large cost and schedule overruns, and investigations into the root cause of these overruns implicate reuse. Why are we seeing improvements in the outcomes of these large-scale nonembedded systems and worse outcomes in embedded systems? This question is the foundation of this research. The experiences of the aerospace industry have led to a number of questions about reuse and how the industry employs reuse in embedded systems. For example, does reuse in embedded systems yield the same outcomes as in nonembedded systems? Are the outcomes positive? If the outcomes are different, it may indicate that embedded systems should not use data from nonembedded systems for estimation. Are embedded systems using the same development approaches as nonembedded systems? Does the development approach make a difference? If embedded systems develop software differently from nonembedded systems, it may mean that the same processes do not apply to both types of systems. What about the reuse of different artifacts? Perhaps there are certain artifacts that, when reused, contribute more or are more difficult to use in embedded systems. Finally, what are the success factors and obstacles to reuse? Are they the same in embedded systems as in nonembedded systems? The research in this dissertation comprises a series of empirical studies using professionals in the aerospace and defense industry as its subjects. The main focus has been to investigate the reuse practices of embedded systems professionals and nonembedded systems professionals and to compare the methods and artifacts used against the outcomes. The research followed a combined qualitative and quantitative design approach. The qualitative data were collected by surveying software and systems engineers, interviewing senior developers, and reading numerous documents and other studies. Quantitative data were derived by converting survey and interview respondents' answers into codes that could be counted and measured. From the review of the existing empirical literature, we learned that reuse outcomes in embedded systems are in fact significantly different from those in nonembedded systems, particularly in effort for the model-based development approach and in quality where the development approach was not specified. The questionnaire showed differences between the development approaches used in embedded and nonembedded projects; in particular, embedded systems were significantly more likely to use a heritage/legacy development approach. There were also differences in the artifacts reused, with embedded systems more likely to reuse hardware, test products, and test clusters.
Nearly all the projects reported reusing code, but the questionnaire showed that the reuse of code brought mixed results. One of the differences expressed by the respondents to the questionnaire was the difficulty of reusing code in embedded systems when the platform changed. The semi-structured interviews were performed to explain why the phenomena observed in the literature review and the questionnaire occur. We asked respected industry professionals, such as senior fellows, fellows and distinguished members of technical staff, about their experiences with reuse. We learned that many embedded systems use heritage/legacy development approaches because their systems have been around for many years, since before models and modeling tools became available. We learned that reuse of code is beneficial primarily when the code does not require modification but that, especially in embedded systems, once the code has to be changed, its reuse yields few benefits. Finally, while platform independence is a goal for many in nonembedded systems, it is certainly not a goal for embedded systems professionals, and in many cases it is a detriment; however, both embedded and nonembedded systems professionals endorsed the idea of platform standardization. We conclude that while reuse in embedded systems and nonembedded systems is different today, they are converging: as heritage embedded systems are phased out, models become more robust and platforms are standardized, reuse in embedded systems will become more like reuse in nonembedded systems.

Relevance:

80.00%

Publisher:

Abstract:

Policy implementation by private actors constitutes a “missing link” for understanding the implications of private governance. This paper proposes and assesses an institutional logics framework that combines a top-down, policy design approach with a bottom-up, implementation perspective on discretion. We argue that the conflicting institutional logics of the state and the market, in combination with differing degrees of goal ambiguity, accountability and hybridity, play a crucial role in output performance. These arguments are analyzed through a secondary analysis of seven case studies of private and hybrid policy implementation in diverging contexts. We find that aligning private output performance with public interests is at least partly a question of policy design congruence: private implementing actors tend to perform deficiently when the conflicting logics of the state and the market combine with weak accountability mechanisms.

Relevance:

80.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-04

Relevance:

80.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

80.00%

Publisher:

Abstract:

A phantom that can be used for mapping geometric distortion in magnetic resonance imaging (MRI) is described. This phantom provides an array of densely distributed control points in three-dimensional (3D) space. These points form the basis of a comprehensive measurement method to correct for geometric distortion in MR images arising principally from gradient field non-linearity and magnet field inhomogeneity. The phantom was designed on the concept that a point in space can be defined by three orthogonal planes. This novel design approach allows for as many control points as desired. Employing this design, a highly accurate method has been developed that enables the positions of the control points to be measured to sub-voxel accuracy. The phantom described in this paper was constructed to fit into a body coil of an MRI scanner (external dimensions of the phantom: 310 mm × 310 mm × 310 mm), and it contained 10,830 control points. With this phantom, the mean errors in the measured coordinates of the control points were of the order of 0.1 mm or less, which is less than one tenth of the voxel dimensions of the phantom image. The calculated three-dimensional distortion map, i.e., the differences between the image positions and true positions of the control points, can then be used to compensate for geometric distortion for a full image restoration. It is anticipated that this novel method will have an impact on the applicability of MRI in both clinical and research settings, especially in areas where high geometric accuracy is required, such as MR neuro-imaging. (C) 2004 Elsevier Inc. All rights reserved.
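The correction idea described above (a distortion map defined as measured minus true control-point positions, interpolated to restore the image) can be sketched as follows. The synthetic "measured" points, the coarser grid and the linear interpolation are assumptions for illustration only, not the authors' sub-voxel measurement procedure.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical inputs: true (designed) and measured 3-D coordinates of the
# phantom control points, both of shape (N, 3), in millimetres. A coarse
# 12 x 12 x 12 grid stands in for the 10,830 real control points.
rng = np.random.default_rng(0)
grid = np.mgrid[0:310:12j, 0:310:12j, 0:310:12j].reshape(3, -1).T
true_points = grid
measured_points = grid + rng.normal(0.0, 0.5, grid.shape)  # stand-in for scanner distortion

# The 3-D distortion map: one displacement vector per control point.
distortion = measured_points - true_points   # shape (N, 3)

def undistort(image_coords):
    """Map measured image coordinates back toward true positions by linearly
    interpolating the control-point distortion field."""
    correction = np.column_stack([
        griddata(measured_points, distortion[:, k], image_coords, method="linear")
        for k in range(3)
    ])
    return image_coords - correction

corrected = undistort(np.array([[155.0, 155.0, 155.0], [10.0, 300.0, 42.0]]))
print(corrected)
```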

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a new method for the optimisation of the mirror element spacing arrangement and operating temperature of linear Fresnel reflectors (LFR). The specific objective is to maximise available power output (i.e. exergy) and operational hours whilst minimising cost. The method is described in detail and compared to an existing design method prominent in the literature. Results are given in terms of the exergy per total mirror area (W/m²) and the cost per exergy (US $/W). The new method is applied principally to the optimisation of an LFR in Gujarat, India, for which cost data have been gathered. It is recommended to use a spacing arrangement such that the onset of shadowing among mirror elements occurs at a transversal angle of 45°. This results in a cost per exergy of 2.3 $/W. Compared to the existing design approach, the exergy averaged over the year is increased by 9% to 50 W/m² and an additional 122 h of operation per year are predicted. The ideal operating temperature at the surface of the absorber tubes is found to be 300 °C. It is concluded that the new method is an improvement over existing techniques and a significant tool for any future design work on LFR systems.
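A minimal sketch of the two quantities discussed above, under simplifying assumptions that are not taken from the paper: a flat-mirror, 2-D shading geometry giving the smallest pitch at which shadowing only begins beyond a chosen transversal angle, and the cost-per-exergy figure of merit (the cost per mirror area below is chosen only so that the ratio reproduces the quoted 2.3 $/W).

```python
import math

def pitch_for_shading_onset(mirror_width, tilt_deg, transversal_deg=45.0):
    """Smallest centre-to-centre spacing of two adjacent flat mirror elements
    such that mutual shadowing only starts once the transversal incidence
    angle exceeds `transversal_deg` (simple 2-D geometry, flat mirrors)."""
    beta = math.radians(tilt_deg)
    theta_t = math.radians(transversal_deg)
    projected_width = mirror_width * math.cos(beta)        # footprint of the mirror
    shadow_throw = mirror_width * math.sin(beta) * math.tan(theta_t)  # shadow of its raised edge
    return projected_width + shadow_throw

w = 0.5  # mirror element width [m] (assumed)
for tilt in (10.0, 20.0, 30.0):
    print(f"tilt {tilt:4.1f} deg -> pitch >= {pitch_for_shading_onset(w, tilt):.3f} m")

# Figure of merit quoted in the abstract: cost per exergy.
exergy_per_area = 50.0   # W/m^2, annual average from the abstract
cost_per_area = 115.0    # US$/m^2 of mirror field (assumed for illustration only)
print(f"cost per exergy ~ {cost_per_area / exergy_per_area:.1f} $/W")
```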

Relevance:

80.00%

Publisher:

Abstract:

The thesis describes an investigation into methods for the design of flexible high-speed product processing machinery, consisting of independent electromechanically actuated machine functions which operate under software coordination and control. An analysis is made of the elements of traditionally designed cam-actuated, mechanically coupled machinery, so that the operational functions and principal performance limitations of the separate machine elements may be identified. These are then used to define the requirements for independent actuators machinery, with a discussion of how this type of design approach is more suited to modern manufacturing trends. A distributed machine controller topology is developed which is a hybrid of hierarchical and pipeline control. An analysis is made, with the aid of dynamic simulation modelling, which confirms the suitability of the controller for flexible machinery control. The simulations include complex models of multiple independent actuators systems, which enable product flow and failure analyses to be performed. An analysis is made of high performance brushless d.c. servomotors and their suitability for actuating machine motions is assessed. Procedures are developed for the selection of brushless servomotors for intermittent machine motions. An experimental rig is described which has enabled the actuation and control methods developed to be implemented. With reference to this, an evaluation is made of the suitability of the machine design method and a discussion is given of the developments which are necessary for operational independent actuators machinery to be attained.
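A first-pass servomotor selection check for intermittent motions, of the kind the thesis's selection procedures address, can be sketched as below. This is a generic RMS/peak torque calculation with invented duty-cycle numbers, not the procedure developed in the thesis.

```python
import math

def rms_torque(segments):
    """RMS torque over one machine cycle for an intermittent motion.

    segments: list of (torque_Nm, duration_s) pairs covering the whole cycle,
    including dwell periods entered as (0.0, t_dwell)."""
    total_t = sum(t for _, t in segments)
    return math.sqrt(sum(T * T * t for T, t in segments) / total_t)

# Hypothetical trapezoidal move: accelerate, run, decelerate, then dwell.
cycle = [(6.0, 0.05),    # acceleration torque [Nm], duration [s]
         (1.5, 0.10),    # constant-speed (friction/load) torque
         (-5.0, 0.05),   # deceleration torque
         (0.0, 0.30)]    # dwell until the next product arrives

T_rms = rms_torque(cycle)
T_peak = max(abs(T) for T, _ in cycle)
print(f"T_rms ~ {T_rms:.2f} Nm, T_peak = {T_peak:.2f} Nm")
# A candidate brushless servomotor would need a continuous (thermal) rating
# above T_rms and a peak rating above T_peak, with suitable margins.
```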

Relevance:

80.00%

Publisher:

Abstract:

This thesis is concerned with the experimental and theoretical investigation of the compression bond of column longitudinal reinforcement in the transfer of axial load from a reinforced concrete column to a base. The experimental work includes twelve tests with square twisted bars and twenty-four tests with ribbed bars. The effects of bar size, anchorage length in the base, plan area of the base, provision of base tensile reinforcement, links around the column bars in the base, plan area of the column and concrete compressive strength were investigated in the tests. The tests indicated that the strength of the compression anchorage of deformed reinforcing steel in concrete depends primarily on the concrete strength and on the resistance to bursting that may be available within the anchorage. The tests without concreted columns showed that, owing to the large containment over the bars in the foundation, failure occurred through the breakdown of bond followed by slip of the column bars along the anchorage length. The experimental work showed that the bar size, the stress in the bar, the anchorage length, the provision of transverse steel and the concrete compressive strength significantly affect the bond stress at failure. The ultimate bond stress decreases as the anchorage length is increased, while it increases with each of the remaining parameters. Tests with concreted columns also indicated that a section of the column contributed to the bond length in the foundation by acting as an extra anchorage length. The theoretical work is based on the Mindlin equation (3), an analytical method used in conjunction with finite difference calculus. The theory is used to plot the distribution of bond stress in the elastic and the elastic-plastic stages of behaviour, to plot the load-vertical displacement relationship of the column bars over the anchorage length, and to determine the theoretical failure load of the foundation. The theoretical solutions are in good agreement with the experimental results, and the distribution of bond stress is shown to be significantly influenced by the bar stiffness factor K. A comparison of the experimental results with the current codes shows that the bond stresses currently used are low; in particular, CP110 (56) specifies very conservative design bond stresses.
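For orientation, the average bond stress implied by a given bar force and anchorage length follows from τ = P / (π · d · L). A minimal sketch with invented numbers is given below; the thesis's Mindlin-equation model resolves the non-uniform bond stress distribution along the bar, which this simple average ignores.

```python
import math

def average_bond_stress(bar_force_kN, bar_diameter_mm, anchorage_length_mm):
    """Average compression-bond stress over the anchorage, assuming the bar
    force is transferred uniformly along the embedded length:
    tau = P / (pi * d * L)."""
    contact_area_mm2 = math.pi * bar_diameter_mm * anchorage_length_mm
    return bar_force_kN * 1e3 / contact_area_mm2   # N/mm^2 (MPa)

# Hypothetical column bar at failure: 25 mm bar, 300 mm anchorage, 180 kN force.
tau_u = average_bond_stress(180.0, 25.0, 300.0)
print(f"ultimate average bond stress ~ {tau_u:.2f} N/mm^2")
```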

Relevance:

80.00%

Publisher:

Abstract:

Dedicated short range communications (DSRC) was proposed for collaborative safety applications (CSA) in vehicle communications. In this article we propose two adaptive congestion control schemes for DSRC-based CSA. A cross-layer design approach is used with congestion detection at the MAC layer and traffic rate control at the application layer. Simulation results show the effectiveness of the proposed rate control scheme for adapting to dynamic traffic loads.
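The article does not detail its two schemes here; the sketch below only illustrates the general cross-layer idea (a MAC-layer congestion measure, such as the channel busy ratio, driving the application-layer message rate) with a generic additive-increase/multiplicative-decrease rule. All parameter names and values are assumptions for the example.

```python
def adapt_beacon_rate(rate_hz, channel_busy_ratio,
                      cbr_target=0.6, min_rate=1.0, max_rate=10.0,
                      increase_step=0.5, decrease_factor=0.7):
    """One adaptation step: additive increase while the MAC-layer channel
    busy ratio (CBR) is below target, multiplicative decrease when
    congestion is detected (CBR above target)."""
    if channel_busy_ratio > cbr_target:
        rate_hz *= decrease_factor        # back off quickly under congestion
    else:
        rate_hz += increase_step          # probe for spare channel capacity
    return max(min_rate, min(max_rate, rate_hz))

# Toy trace of congestion measurements reported by the MAC layer.
rate = 10.0
for cbr in [0.4, 0.5, 0.7, 0.8, 0.65, 0.5, 0.4]:
    rate = adapt_beacon_rate(rate, cbr)
    print(f"CBR={cbr:.2f} -> message rate {rate:.1f} Hz")
```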