902 results for "Dynamic search fireworks algorithm with covariance mutation"
Abstract:
In this article we propose an exact, efficient simulation algorithm for the generalized von Mises circular distribution of order two. It is an acceptance-rejection algorithm with a piecewise linear envelope based on the local extrema and the inflexion points of the generalized von Mises density of order two. We show that these points can be obtained from the roots of polynomials of degrees four and eight, which can be easily computed by the methods of Ferrari and Weierstrass. A comparative study with the von Neumann acceptance-rejection algorithm, with the ratio-of-uniforms method and with a Markov chain Monte Carlo algorithm shows that this new method is generally the most efficient.
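For illustration, a minimal sketch of the baseline von Neumann acceptance-rejection approach for this density (the paper's piecewise linear envelope is more efficient; the parameter names and the flat envelope are assumptions of the sketch, not the authors' implementation):

import math, random

def gvm2_unnormalized(theta, mu1, mu2, kappa1, kappa2):
    # Generalized von Mises density of order two, up to the normalizing constant:
    # f(theta) is proportional to exp(kappa1*cos(theta - mu1) + kappa2*cos(2*(theta - mu2)))
    return math.exp(kappa1 * math.cos(theta - mu1)
                    + kappa2 * math.cos(2.0 * (theta - mu2)))

def sample_gvm2(mu1, mu2, kappa1, kappa2):
    # Flat envelope exp(kappa1 + kappa2) bounds f, since each cosine is at most 1;
    # accept a uniform proposal on the circle with probability f / envelope.
    ceiling = math.exp(kappa1 + kappa2)
    while True:
        theta = random.uniform(-math.pi, math.pi)
        if random.uniform(0.0, ceiling) <= gvm2_unnormalized(theta, mu1, mu2, kappa1, kappa2):
            return theta

print(sample_gvm2(0.0, 0.5, 1.0, 0.8))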
Abstract:
The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector at the LHC in proton-proton collision data at a centre-of-mass energy of sqrt(s) = 7 TeV, corresponding to an integrated luminosity of 38 inverse pb. Jets are reconstructed with the anti-kt algorithm with distance parameters R = 0.4 or R = 0.6. Jet energy and angle corrections are determined from Monte Carlo simulations to calibrate jets with transverse momenta pT > 20 GeV and pseudorapidities |eta| < 4.5. The JES systematic uncertainty is estimated using the single isolated hadron response measured in situ and in test-beams. The JES uncertainty is less than 2.5% in the central calorimeter region (|eta| < 0.8) for jets with 60 < pT < 800 GeV, and is maximally 14% for pT < 30 GeV in the most forward region 3.2 <= |eta| < 4.5.
Abstract:
Measurements of inclusive jet suppression in heavy ion collisions at the LHC provide direct sensitivity to the physics of jet quenching. In a sample of lead-lead collisions at sqrt(s_NN) = 2.76 TeV corresponding to an integrated luminosity of approximately 7 µb^-1, ATLAS has measured jets with a calorimeter system over the pseudorapidity interval |eta| < 2.1 and over the transverse momentum range 38 < pT < 210 GeV. Jets were reconstructed using the anti-kt algorithm with values for the distance parameter that determines the nominal jet radius of R = 0.2, 0.3, 0.4 and 0.5. The centrality dependence of the jet yield is characterized by the jet "central-to-peripheral ratio", R_CP. Jet production is found to be suppressed by approximately a factor of two in the 10% most central collisions relative to peripheral collisions. R_CP varies smoothly with centrality as characterized by the number of participating nucleons. The observed suppression is only weakly dependent on jet radius and transverse momentum. These results provide the first direct measurement of inclusive jet suppression in heavy ion collisions and complement previous measurements of dijet transverse energy imbalance at the LHC.
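For reference, R_CP is conventionally defined by scaling the per-event jet yield in each centrality bin by the mean number of binary nucleon-nucleon collisions, ⟨N_coll⟩ (a standard definition, not quoted from this abstract):

R_{\mathrm{CP}} = \frac{\langle N_{\mathrm{coll}}^{\mathrm{periph}} \rangle}{\langle N_{\mathrm{coll}}^{\mathrm{cent}} \rangle}\;
\frac{\left.\frac{1}{N_{\mathrm{evt}}}\,\frac{\mathrm{d}N_{\mathrm{jet}}}{\mathrm{d}p_{\mathrm{T}}}\right|_{\mathrm{cent}}}{\left.\frac{1}{N_{\mathrm{evt}}}\,\frac{\mathrm{d}N_{\mathrm{jet}}}{\mathrm{d}p_{\mathrm{T}}}\right|_{\mathrm{periph}}}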
Abstract:
An inherited polyneuropathy (PN) observed in Leonberger dogs has clinical similarities to a genetically heterogeneous group of peripheral neuropathies termed Charcot-Marie-Tooth (CMT) disease in humans. The Leonberger disorder is a severe, juvenile-onset, chronic, progressive, and mixed PN, characterized by exercise intolerance, gait abnormalities and muscle atrophy of the pelvic limbs, as well as inspiratory stridor and dyspnea. We mapped a PN locus in Leonbergers to a 250 kb region on canine chromosome 16 (P_raw = 1.16x10^-10, P_genome,corrected = 0.006) utilizing a high-density SNP array. Within this interval is the ARHGEF10 gene, a guanine nucleotide exchange factor for the Rho family of GTPases known to be involved in neuronal growth and axonal migration, and implicated in human hypomyelination. ARHGEF10 sequencing identified a 10 bp deletion in affected dogs that removes four nucleotides from the 3'-end of exon 17 and six nucleotides from the 5'-end of intron 17 (c.1955_1958+6delCACGGTGAGC). This eliminates the 3'-splice junction of exon 17, creates an alternate splice site immediately downstream in which the processed mRNA contains a frame shift, and generates a premature stop codon predicted to truncate approximately 50% of the protein. Homozygosity for the deletion was highly associated with the severe juvenile-onset PN phenotype in both Leonberger and Saint Bernard dogs. The overall clinical picture of PN in these breeds, and the effects of sex and heterozygosity of the ARHGEF10 deletion, are less clear due to the likely presence of other forms of PN with variable ages of onset and severity of clinical signs. This is the first documented severe polyneuropathy associated with a mutation in ARHGEF10 in any species.
Abstract:
Measurements of charged-particle fragmentation functions of jets produced in ultra-relativistic nuclear collisions can provide insight into the modification of parton showers in the hot, dense medium created in the collisions. ATLAS has measured jets in sqrt(s_NN) = 2.76 TeV Pb+Pb collisions at the LHC using a data set recorded in 2011 with an integrated luminosity of 0.14 nb^-1. Jets were reconstructed using the anti-kt algorithm with distance parameter values R = 0.2, 0.3, and 0.4. Distributions of charged-particle transverse momentum and longitudinal momentum fraction are reported for seven bins in collision centrality for R = 0.4 jets with pT^jet > 100 GeV. Commensurate minimum pT values are used for the other radii. Ratios of fragment distributions in each centrality bin to those measured in the most peripheral bin are presented. These ratios show a reduction of fragment yield in central collisions relative to peripheral collisions at intermediate z values, 0.04 ≲ z ≲ 0.2, and an enhancement in fragment yield for z ≲ 0.04. A smaller, less significant enhancement is observed at large z and large pT in central collisions.
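For context, the longitudinal momentum fraction in such measurements is commonly defined by projecting the charged-particle momentum onto the jet axis (a conventional definition, not quoted from this abstract):

z = \frac{p_{\mathrm{T}}\cos\Delta R}{p_{\mathrm{T}}^{\mathrm{jet}}},

where ΔR is the angular distance between the charged particle and the jet axis.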
Abstract:
A heterozygous mutation (c.643C>A; p.Q215X) in the monocarboxylate transporter 12-encoding gene MCT12 (also known as SLC16A12) that mediates creatine transport was recently identified as the cause of a syndrome with juvenile cataracts, microcornea, and glucosuria in a single family. Whereas the MCT12 mutation cosegregated with the eye phenotype, poor correlation with the glucosuria phenotype did not support a pathogenic role of the mutation in the kidney. Here, we examined MCT12 in the kidney and found that it resides on basolateral membranes of proximal tubules. Patients with MCT12 mutation exhibited reduced plasma levels and increased fractional excretion of guanidinoacetate, but normal creatine levels, suggesting that MCT12 may function as a guanidinoacetate transporter in vivo. However, functional studies in Xenopus oocytes revealed that MCT12 transports creatine but not its precursor, guanidinoacetate. Genetic analysis revealed a separate, undescribed heterozygous mutation (c.265G>A; p.A89T) in the sodium/glucose cotransporter 2-encoding gene SGLT2 (also known as SLC5A2) in the family that segregated with the renal glucosuria phenotype. When overexpressed in HEK293 cells, the mutant SGLT2 transporter did not efficiently translocate to the plasma membrane, and displayed greatly reduced transport activity. In summary, our data indicate that MCT12 functions as a basolateral exit pathway for creatine in the proximal tubule. Heterozygous mutation of MCT12 affects systemic levels and renal handling of guanidinoacetate, possibly through an indirect mechanism. Furthermore, our data reveal a digenic syndrome in the index family, with simultaneous MCT12 and SGLT2 mutation. Thus, glucosuria is not part of the MCT12 mutation syndrome.
Abstract:
STUDY DESIGN: Bibliometric study of current literature. OBJECTIVE: To identify and analyze the 100 most cited publications in cervical spine research. SUMMARY OF BACKGROUND DATA: The cervical spine is a dynamic field of research with many advances made within the last century. However, the literature has never been comprehensively analyzed to identify and compare the most influential articles as measured by the number of citations. METHODS: All databases of the Thomson Reuters Web of Knowledge were utilized in a two-step approach. First, the 150 most cited cervical spine studies up to and including 2014 were identified using four keywords. Second, all keywords related to the cervical spine found in the 150 studies (n = 38) were used to conduct a second search of the database. The top 100 most cited articles were then selected for further analysis of current and past citations, authorship, geographic origin, article type, and level of evidence. RESULTS: Total citations for the 100 studies identified ranged from 173 to 879. They were published between 1952 and 2008 in a total of 30 different journals. Most studies (n = 42) were published in the decade 1991-2000. Level of evidence ranged from 1 to 5, with 39 studies in the level 4 category. Thirteen researchers were first author more than once and nine were senior author more than once. The two-step approach with a secondary widening of search terms yielded an additional 27 studies, including the first-ranking article. CONCLUSIONS: This bibliometric study is likely to include some of the most important milestones in the field of cervical spine research of the last 100 years. LEVEL OF EVIDENCE: 3.
Abstract:
Visual neglect is considerably exacerbated by increases in visual attentional load. These detrimental effects of attentional load are hypothesised to be dependent on an interplay between dysfunctional inter-hemispheric inhibitory dynamics and load-related modulation of activity in cortical areas such as the posterior parietal cortex (PPC). Continuous Theta Burst Stimulation (cTBS) over the contralesional PPC reduces neglect severity. It is unknown, however, whether such positive effects also operate in the presence of the detrimental effects of heightened attentional load. Here, we examined the effects of cTBS on neglect severity in overt visual search (i.e., with eye movements), as a function of high and low visual attentional load conditions. Performance was assessed on the basis of target detection rates and eye movements, in a computerised visual search task and in two paper-pencil tasks. cTBS significantly ameliorated target detection performance, independently of attentional load. These ameliorative effects were significantly larger in the high than the low load condition, thereby equating target detection across both conditions. Eye movement analyses revealed that the improvements were mediated by a redeployment of visual fixations to the contralesional visual field. These findings represent a substantive advance, because cTBS led to an unprecedented amelioration of overt search efficiency that was independent of visual attentional load.
Abstract:
Many attempts have already been made to detect exomoons around transiting exoplanets, but the first confirmed discovery is still pending. The experience gathered so far allows us to better optimize future space telescopes for this challenge already during the development phase. In this paper we focus on the forthcoming CHaracterising ExOPlanet Satellite (CHEOPS), describing an optimized decision algorithm with step-by-step evaluation, and calculating the number of required transits for an exomoon detection for various planet-moon configurations that can be observable by CHEOPS. We explore the most efficient way for such an observation to minimize the cost in observing time. Our study is based on PTV (photocentric transit timing variation) observations in simulated CHEOPS data, but the recipe does not depend on the actual detection method, and it can be substituted with, e.g., the photodynamical method for later applications. Using the current state-of-the-art simulation of CHEOPS data we analyzed transit observation sets for different star-planet-moon configurations and performed a bootstrap analysis to determine their detection statistics. We found that the detection limit is around an Earth-sized moon. In the case of favorable spatial configurations, systems with at least a large moon and a Neptune-sized planet, an 80% detection chance requires at least 5-6 transit observations on average. There is also a nonzero chance for smaller moons, but the detection statistics deteriorate rapidly, while the number of necessary transit measurements increases quickly. After the CoRoT and Kepler spacecraft, CHEOPS will be the next dedicated space telescope to observe exoplanetary transits and characterize systems with known Doppler planets. Although it has a smaller aperture than Kepler (the ratio of the mirror diameters is about 1/3) and is mounted with a CCD similar to Kepler's, it will observe brighter stars and operate with a higher sampling rate; therefore, the detection limit for an exomoon can be the same or better, which will make CHEOPS a competitive instrument in the quest for exomoons.
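A minimal sketch of the bootstrap detection-statistics idea described above (the toy PTV noise model, threshold and the 80% criterion below are illustrative assumptions, not the authors' pipeline):

import random

def detection_fraction(draw_ptv, n_transits, threshold, n_boot=2000):
    # Fraction of bootstrap realizations in which the mean PTV signal
    # over n_transits observations exceeds the detection threshold.
    hits = 0
    for _ in range(n_boot):
        mean_ptv = sum(draw_ptv() for _ in range(n_transits)) / n_transits
        if mean_ptv > threshold:
            hits += 1
    return hits / n_boot

noisy_ptv = lambda: random.gauss(1.0, 1.5)       # toy signal-plus-noise model
for n in range(1, 20):                           # smallest n giving >= 80% detection
    if detection_fraction(noisy_ptv, n, threshold=0.5) >= 0.8:
        print(n, "transits needed")
        break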
Abstract:
Dynamic penetrometer data obtained with the Nimrod penetrometer (MARUM). Data are presented as (i) penetration depth (including for different layers, if present), (ii) measured deceleration and (iii) estimated quasi-static bearing capacity, including the range of uncertainty due to the processing method. Lat/Long coordinates are given.
Abstract:
This paper presents a Particle Swarm Optimization algorithm hybridized with Differential Evolution. Each candidate solution is sampled uniformly in [-5,5]^D, where D denotes the search space dimension, and the evolution is performed with a classical PSO algorithm and a classical DE/x/1 algorithm according to a random threshold.
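A minimal runnable sketch of such a PSO/DE hybrid on a toy objective (the sphere function, the swarm size and all coefficients are assumptions; the paper's DE/x/1 variant is sketched here as DE/rand/1):

import random

D, N, STEPS = 10, 30, 200                 # dimension, population size, iterations
W, C1, C2, F, CR, P_DE = 0.7, 1.5, 1.5, 0.8, 0.9, 0.5

def f(x):                                  # toy objective (sphere function)
    return sum(xi * xi for xi in x)

pos = [[random.uniform(-5, 5) for _ in range(D)] for _ in range(N)]  # uniform in [-5,5]^D
vel = [[0.0] * D for _ in range(N)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=f)

for _ in range(STEPS):
    for i in range(N):
        if random.random() < P_DE:         # random threshold picks the operator
            a, b, c = random.sample([p for j, p in enumerate(pos) if j != i], 3)
            trial = [a[d] + F * (b[d] - c[d]) if random.random() < CR else pos[i][d]
                     for d in range(D)]    # DE mutation + binomial crossover
            if f(trial) < f(pos[i]):       # greedy DE selection
                pos[i] = trial
        else:                              # classical PSO update
            for d in range(D):
                vel[i][d] = (W * vel[i][d]
                             + C1 * random.random() * (pbest[i][d] - pos[i][d])
                             + C2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i] = [pos[i][d] + vel[i][d] for d in range(D)]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=f)

print("best value found:", f(gbest))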
Abstract:
Traditional ballast track structures are still being used successfully in high-speed railway lines; however, technical problems or performance features have led to non-ballast track solutions in some cases. Considerable maintenance work is needed for ballasted tracks due to track deterioration. It is therefore very important to understand the mechanism of track deterioration and to predict the track settlement or track irregularity growth rate in order to reduce track maintenance costs and enable new track structures to be designed. The objective of this work is to develop the most adequate and efficient models for the calculation of dynamic traffic load effects on railway track infrastructure, and then to evaluate the dynamic effect on ballast track settlement, using a ballast track settlement prediction model that consists of the previously selected vehicle/track dynamic model and a track settlement law. The calculations are based on dynamic finite element models with direct time integration, contact between wheel and rail, and interaction with railway cars. An initial irregularity profile is used in the prediction model. The track settlement law is considered to be a function of the number of loading cycles and the magnitude of the loading, which represents the long-term behavior of ballast settlement. The results obtained include the track irregularity growth and the contact force in the final iteration of the numerical simulation.
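One common empirical form of such a settlement law combines a power of the load magnitude with logarithmic growth in the number of cycles; the sketch below uses assumed coefficients and an assumed functional form, not the calibrated law of this work:

import math

def ballast_settlement(n_cycles, axle_load_kN, a=0.002, b=1.6, c=0.25, p_ref=100.0):
    # s(N, P) = a * (P / p_ref)^b * (1 + c * ln N): the load term scales the
    # per-cycle damage, while the log term captures the slowing accumulation
    # of permanent deformation as the ballast compacts (result in metres).
    return a * (axle_load_kN / p_ref) ** b * (1.0 + c * math.log(n_cycles))

# Example: predicted settlement after one million cycles at a 120 kN axle load
print(ballast_settlement(1_000_000, 120.0))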
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The important increase in power densities associated with recent technology evolutions has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight against these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width varies with the dependence of the leakage currents on temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very reduced area, 10,250 µm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works by the time it was first published and still, by the time of the publication of this thesis, they outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity; even without calibration it displays a 3σ error of 1.97 °C, appropriate to deal with DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations carried along with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependencies of the leakage currents that are used to discharge a floating node, but now the result comes from the ratio of two different measures, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays a more than sufficient linearity on temperature: a 1.17 °C 3σ error considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique involves several design issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. In order to perform the time-to-digital conversion, we employ the same digitization structure as the first sensor. A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new quality metrics apart from just the number of sensors; we consider the power consumption, the sampling frequency, the possibility of choosing among different types of monitors, and the interconnection costs. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. When compared to previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient from the area and power perspectives. Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller viewpoint only a small amount of data (normally just the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and allows a straightforward retrieval of an ordered list of values from the maximum to the minimum. If the scheme is applied to monitors that employ time-to-digital conversion, digitization resource sharing is achieved in both time and space, producing an important saving in area and power consumption. Finally, two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
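The exponential-to-linear conversion performed by the logarithmic counter can be illustrated with a toy model (the constants below are illustrative, not the thesis's measured values):

import math

def pulse_width(temp_c, w0=1.0, t0=12.0):
    # Toy model: subthreshold leakage grows exponentially with temperature,
    # so the floating-node discharge time (pulse width) shrinks exponentially.
    return w0 * math.exp(-temp_c / t0)

def log_counter(width, clk=1e-6):
    # A logarithmic counter effectively reports log2 of the number of clock
    # ticks in the pulse, doing time-to-digital conversion and linearization at once.
    return int(math.log2(max(1, int(width / clk))))

for t in (0, 25, 50, 75, 100):
    print(t, "degC ->", log_counter(pulse_width(t)))   # output is roughly linear in t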
Abstract:
This thesis was carried out in the context of the UPMSat-2 project, a microsatellite designed, built and operated by the Instituto Universitario de Microgravedad "Ignacio Da Riva" (IDR/UPM) of the Universidad Politécnica de Madrid. The application of the Concurrent Engineering (CE) methodology in the framework of Multidisciplinary Design Optimization (MDO) is one of the main objectives of the present work. In recent years, there has been continuing interest in the participation of university research groups in space technology studies by means of their own microsatellites. Involvement in such projects has some inherent challenges, such as limited budget and facilities. Also, because the main objective of these projects is educational, there are usually uncertainties regarding their in-orbit mission and scientific payloads at the early phases of the project. On the other hand, there are predetermined limitations for their mass and volume budgets, owing to the fact that most of them are launched as auxiliary payloads, which reduces the launch cost considerably. The satellite structural subsystem is the one most affected by the launcher constraints. This can affect different aspects, including dimensions, strength and frequency requirements. In the first part of this thesis, the main focus is on developing a structural design sizing tool containing not only the primary structure properties as variables but also satellite system-level variables such as the payload mass budget and the satellite total mass and dimensions. This approach enables the design team to obtain better insight into the design in an extended design envelope. The structural design sizing tool is based on analytical structural design formulas and appropriate assumptions, including both static and dynamic models of the satellite. A Genetic Algorithm (GA) is applied to the design space for both single-objective and multiobjective optimizations. The result of the multiobjective optimization is a Pareto front based on two objectives: minimum satellite total mass and maximum payload mass budget. On the other hand, the application of microsatellites is of interest for their lower cost and shorter development time. The high demand for remote sensing applications is a strong driver of their popularity in space missions. Satellite remote sensing missions are essential for long-term research on the condition of the Earth's resources and environment. In remote sensing missions there are tight interrelations between different requirements such as orbital altitude, revisit time, mission life cycle and spatial resolution. Also, all of these requirements can affect the whole design characteristics. During the last years, the application of CE in space missions has demonstrated a great advantage in reaching optimum design baselines considering both the performance and the cost of the project. A well-known example of CE application is the CDF (Concurrent Design Facility) of ESA (the European Space Agency). It is clear that for university-class microsatellite projects, having or developing such a facility seems beyond the project capabilities. Nevertheless, practicing CE at any scale can be beneficial for university-class microsatellite projects as well. In the second part of this thesis, the main focus is on developing an MDO framework applicable to the conceptual design phase of remote sensing microsatellites. This approach enables the design team to evaluate the interaction between the different system design variables. The presented MDO framework contains not only system-level variables, such as the satellite total mass and total power, but also mission requirements like the spatial resolution and the revisit time. The microsatellite sizing process is divided into three major design disciplines: a) orbit design, b) payload sizing and c) bus sizing. First, different mission parameters are calculated for a practical range of sun-synchronous orbits (SS-Os). Then, according to the orbital parameters and a reference remote sensing instrument, the mass and power of the payload are calculated. Satellite bus sizing is based on the mass and power estimates of the different subsystems, using design estimation relationships. In the satellite bus sizing, the power subsystem design considers more detailed design variables, including a mission scenario and different types of solar cells and batteries. The mission scenario is selected so as to obtain a coverage belt on the Earth's surface, parallel to the equator, after each revisit time. In order to evaluate the interrelations between the different variables inside the design space, all the mentioned design disciplines are combined in a unified code. The integrated satellite system sizing tool developed in this section is considered an application of CE to the conceptual design of remote sensing microsatellite projects. Finally, in order to apply the MDO methodology to the design problem, a basic MDO framework is adjusted to the developed satellite system design tool. Design optimization is performed by means of a single-objective GA with the objective of minimizing the microsatellite total mass. According to the results of the MDO application, there exist different optimum design points, all with the minimum satellite total mass but with different mission variables. This output demonstrates the applicability of the MDO approach for system engineering trade-off studies at the conceptual design phase of such projects. The main conclusion of this thesis is that the classical design approach for satellites, which usually starts with the mission and payload definition, is not necessarily the best methodology for all satellite projects. A university-class microsatellite is an example of such a project. For this reason, an integrated satellite sizing tool including different design disciplines, focusing on the structural subsystem and considering an a priori unknown payload, has been developed. According to the results, the satellite total mass and the mass available for the a priori unknown payload are conflicting objectives. In order to find the Pareto front, a multiobjective GA optimization was conducted. Based on the optimization results, it is concluded that selecting a satellite total mass in the range of 40-60 kg can be considered optimum for a university-class microsatellite project with unknown payload(s). The CE methodology was also applied to the conceptual design process of remote sensing microsatellites. The results of the CE application provide a clear understanding of the interaction between satellite system design requirements, such as satellite total mass and power, and mission variables such as revisit time and spatial resolution. The MDO application is performed with the total mass minimization of a remote sensing satellite. The results of the MDO application clarify the relationships between different system and mission design variables, and yield optimum design baselines for the selected objective during the initial design phases.
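A minimal sketch of the kind of single-objective GA sizing loop described (the toy mass-estimation relations, variable bounds and GA settings are assumptions, not the thesis's calibrated tool):

import random

BOUNDS = [(5.0, 20.0), (0.1, 0.4), (20.0, 120.0)]   # payload kg, structure fraction, power W

def total_mass(x):
    # Toy design-estimation relations: bus mass from the structure fraction,
    # power subsystem mass from the total power, plus a fixed avionics mass.
    payload, struct_frac, power = x
    bus = struct_frac * 50.0 + 0.04 * power + 8.0
    return payload + bus

def rand_ind():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.3):
    return [random.uniform(lo, hi) if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

pop = [rand_ind() for _ in range(40)]
for _ in range(100):                                  # generations
    pop.sort(key=total_mass)
    elite = pop[:10]                                  # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]

best = min(pop, key=total_mass)
print("minimum total mass (kg):", round(total_mass(best), 2))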
Abstract:
The problem of the flow over an open cavity has been studied in depth in the literature, both as an interesting academic problem and due to its multitude of industrial applications, such as aircraft landing gear bays or the water tanks of firefighting aircraft. Two types of instabilities appearing in this flow have been studied in the literature: the two-dimensional shear layer modes, and the three-dimensional modes that appear in the main recirculating vortex inside the cavity. In this thesis a parametric study of the incompressible limit of the problem is presented, using the linear stability analysis known as BiGlobal. This approach captures the global stability behaviour of the flow, yielding both the morphological features and the characteristics of the eigenmodes of the physical problem, whether stable or unstable. The study presented here characterizes in great detail all the relevant eigenmodes, as well as the hypersurface of instability in the parameter space of the incompressible problem (Mach number equal to zero, and variation of the Reynolds number, the incoming boundary layer thickness, the length-to-depth aspect ratio of the cavity, and the spanwise length of the perturbation). The results allow the construction of parametric relations between the characteristics of the leading eigenmodes and the parameters of the problem, for example between the critical Reynolds number of a mode and its characteristic length. The numerical results presented here are compared with those of an experimental campaign; the main conclusion of this comparison is that the linear eigenmodes are present in the real saturated flow, albeit with significant differences between the frequencies predicted by the theory and those observed in the experiments. To try to determine the nature of those differences, a three-dimensional direct numerical simulation, analyzed with a Dynamic Mode Decomposition (DMD) algorithm, was used to describe the process of saturation.
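The exact DMD algorithm used in such saturation analyses can be sketched in a few lines; the snapshot data below are a synthetic stand-in for the DNS fields:

import numpy as np

def dmd(X, Xp, r=4):
    # Exact Dynamic Mode Decomposition: X and Xp are snapshot matrices whose
    # columns are successive flow fields, with Xp one time step ahead of X.
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]            # rank-r truncation
    Atilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)               # Ritz values = DMD eigenvalues
    modes = Xp @ Vh.conj().T @ np.diag(1.0 / s) @ W  # exact DMD modes
    return eigvals, modes

# Synthetic example: a decaying travelling wave on 100 points over 50 steps.
x = np.linspace(0.0, 2.0 * np.pi, 100)
snaps = np.array([np.exp(-0.05 * t) * np.sin(x - 0.3 * t) for t in range(51)]).T
eigvals, _ = dmd(snaps[:, :-1], snaps[:, 1:])
print("dominant DMD eigenvalues:", eigvals[:2])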