Resumo:
El objetivo de esta tesis es estudiar la dinámica de la capa logarítmica de flujos turbulentos de pared. En concreto, proponemos un nuevo modelo estructural utilizando diferentes tipos de estructuras coherentes: sweeps, eyecciones, grupos de vorticidad y streaks. La herramienta utilizada es la simulación numérica directa de canales turbulentos. Desde los primeros trabajos de Theodorsen (1952), las estructuras coherentes han jugado un papel fundamental para entender la organización y dinámica de los flujos turbulentos. A día de hoy, datos procedentes de simulaciones numéricas directas obtenidas en instantes no contiguos permiten estudiar las propiedades fundamentales de las estructuras coherentes tridimensionales desde un punto de vista estadístico. Sin embargo, la dinámica no puede ser entendida en detalle utilizando sólo instantes aislados en el tiempo, sino que es necesario seguir de forma continua las estructuras. Aunque existen algunos estudios sobre la evolución temporal de las estructuras más pequeñas a números de Reynolds moderados, por ejemplo Robinson (1991), todavía no se ha realizado un estudio completo a altos números de Reynolds y para todas las escalas presentes de la capa logarítmica. El objetivo de esta tesis es llevar a cabo dicho análisis. Los problemas más interesantes los encontramos en la región logarítmica, donde residen las cascadas de vorticidad, energía y momento. Existen varios modelos que intentan explicar la organización de los flujos turbulentos en dicha región. Uno de los más extendidos fue propuesto por Adrian et al. (2000) a través de observaciones experimentales y considerando como elemento fundamental paquetes de vórtices con forma de horquilla que actúan de forma cooperativa para generar rampas de bajo momento. Un modelo alternativo fue ideado por del Álamo & Jiménez (2006) utilizando datos numéricos. Basado también en grupos de vorticidad, planteaba un escenario mucho más desorganizado y con estructuras sin forma de horquilla. Aunque los dos modelos son cinemáticamente similares, no lo son desde el punto de vista dinámico, en concreto en lo que se refiere a la importancia que juega la pared en la creación y vida de las estructuras. Otro punto importante aún sin resolver se refiere al modelo de cascada turbulenta propuesto por Kolmogorov (1941b), y su relación con estructuras coherentes medibles en el flujo. Para dar respuesta a las preguntas anteriores, hemos desarrollado un nuevo método que permite seguir estructuras coherentes en el tiempo y lo hemos aplicado a simulaciones numéricas de canales turbulentos con números de Reynolds lo suficientemente altos como para tener un rango de escalas no trivial y con dominios computacionales lo suficientemente grandes como para representar de forma correcta la dinámica de la capa logarítmica. Nuestros esfuerzos se han desarrollado en cuatro pasos. En primer lugar, hemos realizado una campaña de simulaciones numéricas directas a diferentes números de Reynolds y tamaños de cajas para evaluar el efecto del dominio computacional en las estadísticas de primer orden y el espectro. A partir de los resultados obtenidos, hemos concluido que simulaciones con cajas de longitud 2π y ancho π veces la semi-altura del canal son lo suficientemente grandes para reproducir correctamente las interacciones entre estructuras coherentes de la capa logarítmica y el resto de escalas. Estas simulaciones son utilizadas como punto de partida en los siguientes análisis.
En segundo lugar, las estructuras coherentes correspondientes a regiones con esfuerzos de Reynolds tangenciales intensos (Qs) en un canal turbulento han sido estudiadas extendiendo a tres dimensiones el análisis de cuadrantes, con especial énfasis en la capa logarítmica y la región exterior. Las estructuras coherentes han sido identificadas como regiones contiguas del espacio donde los esfuerzos de Reynolds tangenciales son más intensos que un cierto nivel. Los resultados muestran que los Qs separados de la pared están orientados de forma isótropa y su contribución neta al esfuerzo de Reynolds medio es nula. La mayor contribución la realiza una familia de estructuras de mayor tamaño y autosemejantes cuya parte inferior está muy cerca de la pared (ligada a la pared), con una geometría compleja y dimensión fractal ≈ 2. Estas estructuras tienen una forma similar a una ‘esponja de placas’, en comparación con los grupos de vorticidad que tienen forma de ‘esponja de cuerdas’. Aunque el número de objetos decae al alejarnos de la pared, la fracción de esfuerzos de Reynolds que contienen es independiente de su altura, y gran parte reside en unas pocas estructuras que se extienden más allá del centro del canal, como en las grandes estructuras propuestas por otros autores. Las estructuras dominantes en la capa logarítmica son parejas de sweeps y eyecciones uno al lado del otro y con grupos de vorticidad asociados que comparten las dimensiones y esfuerzos con los remolinos ligados a la pared propuestos por Townsend. En tercer lugar, hemos estudiado la evolución temporal de Qs y grupos de vorticidad usando las simulaciones numéricas directas presentadas anteriormente hasta números de Reynolds Reτ = 4200 (número de Reynolds de fricción). Las estructuras fueron identificadas siguiendo el proceso descrito en el párrafo anterior y después seguidas en el tiempo. A través de la intersección geométrica de estructuras pertenecientes a instantes de tiempo contiguos, hemos creado grafos de conexiones temporales entre todos los objetos y, a partir de ahí, definido ramas primarias y secundarias, de tal forma que cada rama representa la evolución temporal de una estructura coherente. Una vez que las evoluciones están adecuadamente organizadas, proporcionan toda la información necesaria para caracterizar la historia de las estructuras desde su nacimiento hasta su muerte. Los resultados muestran que las estructuras nacen a todas las distancias de la pared, pero con mayor probabilidad cerca de ella, donde la cortadura es más intensa. La mayoría mantienen tamaños pequeños y no viven mucho tiempo; sin embargo, existe una familia de estructuras que crecen lo suficiente como para ligarse a la pared y extenderse a lo largo de la capa logarítmica, convirtiéndose en las estructuras observadas anteriormente y descritas por Townsend. Estas estructuras son geométricamente autosemejantes con tiempos de vida proporcionales a su tamaño. La mayoría alcanzan tamaños por encima de la escala de Corrsin, y por ello, su dinámica está controlada por la cortadura media. Los resultados también muestran que las eyecciones se alejan de la pared con velocidad media uτ (velocidad de fricción) y su base se liga a la pared muy rápidamente al inicio de sus vidas. Por el contrario, los sweeps se mueven hacia la pared con velocidad −uτ y se ligan a ella más tarde. En ambos casos, los objetos permanecen ligados a la pared durante 2/3 de sus vidas.
En la dirección de la corriente, las estructuras se desplazan a velocidades cercanas a la convección media del flujo y son deformadas por la cortadura. Finalmente, hemos interpretado la cascada turbulenta, no sólo como una forma conceptual de organizar el flujo, sino como un proceso físico en el cual las estructuras coherentes se unen y se rompen. El volumen de una estructura cambia de forma suave, cuando no se une ni rompe, o lo hace de forma repentina en caso contrario. Los procesos de unión y rotura pueden entenderse como una cascada directa (roturas) o inversa (uniones), siguiendo el concepto de cascada de remolinos ideado por Richardson (1920) y Obukhov (1941). El análisis de los datos muestra que las estructuras con tamaños menores a 30η (unidades de Kolmogorov) nunca se unen ni rompen, es decir, no experimentan el proceso de cascada. Por el contrario, aquellas mayores a 100η siempre se rompen o unen al menos una vez en su vida. En estos casos, el volumen total ganado y perdido es una fracción importante del volumen medio de la estructura implicada, con una tendencia ligeramente mayor a romperse (cascada directa) que a unirse (cascada inversa). La mayor parte de interacciones entre ramas se debe a roturas o uniones de fragmentos muy pequeños en la escala de Kolmogorov con estructuras más grandes, aunque el efecto de fragmentos de mayor tamaño no es despreciable. También hemos encontrado que las roturas tienden a ocurrir al final de la vida de la estructura y las uniones al principio. Aunque los resultados para la cascada directa e inversa no son idénticos, son muy simétricos, lo que sugiere un alto grado de reversibilidad en el proceso de cascada.
ABSTRACT
The purpose of the present thesis is to study the dynamics of the logarithmic layer of wall-bounded turbulent flows. Specifically, to propose a new structural model based on four different coherent structures: sweeps, ejections, clusters of vortices and velocity streaks. The tool used is the direct numerical simulation of time-resolved turbulent channels. Since the first work by Theodorsen (1952), coherent structures have played an important role in the understanding of turbulence organization and its dynamics. Nowadays, data from individual snapshots of direct numerical simulations allow the study of the three-dimensional statistical properties of those objects, but their dynamics can only be fully understood by tracking them in time. Although the temporal evolution has already been studied for small structures at moderate Reynolds numbers, e.g., Robinson (1991), a temporal analysis of three-dimensional structures spanning from the smallest to the largest scales across the logarithmic layer has yet to be performed and is the goal of the present thesis. The most interesting problems lie in the logarithmic region, which is the seat of cascades of vorticity, energy, and momentum. Different models involving coherent structures have been proposed to represent the organization of wall-bounded turbulent flows in the logarithmic layer. One of the most widespread models was conceived by Adrian et al. (2000) and built on packets of hairpins that grow from the wall and work cooperatively to generate low-momentum ramps. A different view was presented by del Álamo & Jiménez (2006), who extracted coherent vortical structures from DNSs and proposed a less organized scenario. Although the two models are kinematically fairly similar, they have important dynamical differences, mostly regarding the relevance of the wall.
Another open question is whether such a model can be used to explain the cascade process proposed by Kolmogorov (1941b) in terms of coherent structures. The challenge would be to identify coherent structures undergoing a turbulent cascade that can be quantified. To gain a better insight into the previous questions, we have developed a novel method to track coherent structures in time, and used it to characterize the temporal evolutions of eddies in turbulent channels with Reynolds numbers high enough to include a non-trivial range of length scales, and computational domains sufficiently long and wide to reproduce correctly the dynamics of the logarithmic layer. Our efforts have followed four steps. First, we have conducted a campaign of direct numerical simulations of turbulent channels at different Reynolds numbers and box sizes, and assessed the effect of the computational domain on the one-point statistics and spectra. From the results, we have concluded that computational domains with streamwise and spanwise sizes 2π and π times the half-height of the channel, respectively, are large enough to accurately capture the dynamical interactions between structures in the logarithmic layer and the rest of the scales. These simulations are used in the subsequent chapters. Second, the three-dimensional structures of intense tangential Reynolds stress in plane turbulent channels (Qs) have been studied by extending the classical quadrant analysis to three dimensions, with emphasis on the logarithmic and outer layers. The eddies are identified as connected regions of intense tangential Reynolds stress. Qs are then classified according to their streamwise and wall-normal fluctuating velocities as inward interactions, outward interactions, sweeps and ejections. It is found that wall-detached Qs are isotropically oriented background stress fluctuations, common to most turbulent flows, and do not contribute to the mean stress. Most of the stress is carried by a self-similar family of larger wall-attached Qs, increasingly complex away from the wall, with fractal dimensions ≈ 2. They have shapes similar to ‘sponges of flakes’, while vortex clusters resemble ‘sponges of strings’. Although their number decays away from the wall, the fraction of the stress that they carry is independent of their heights, and a substantial part resides in a few objects extending beyond the centerline, reminiscent of the very large scale motions of several authors. The predominant logarithmic-layer structures are side-by-side pairs of sweeps and ejections, with an associated vortex cluster, and dimensions and stresses similar to Townsend’s conjectured wall-attached eddies. Third, the temporal evolution of Qs and vortex clusters is studied using time-resolved DNS data up to Reτ = 4200 (friction Reynolds number). The eddies are identified following the procedure presented above, and then tracked in time. From the geometric intersection of structures in consecutive fields, we have built temporal connection graphs of all the objects, and defined main and secondary branches in such a way that each branch represents the temporal evolution of one coherent structure. Once these evolutions are properly organized, they provide the necessary information to characterize eddies from birth to death. The results show that the eddies are born at all distances from the wall, although with higher probability near it, where the shear is strongest. Most of them stay small and do not last for long times.
However, there is a family of eddies that become large enough to attach to the wall while they reach into the logarithmic layer, and become the wall-attached structures previously observed in instantaneous flow fields. They are geometrically self-similar, with sizes and lifetimes proportional to their distance from the wall. Most of them achieve lengths well above the Corrsin scale, and hence, their dynamics are controlled by the mean shear. Eddies associated with ejections move away from the wall with an average velocity uτ (friction velocity), and their base attaches very fast at the beginning of their lives. Conversely, sweeps move towards the wall at −uτ, and attach later. In both cases, they remain attached for 2/3 of their lives. In the streamwise direction, eddies are advected and deformed by the local mean velocity. Finally, we interpret the turbulent cascade not only as a way to conceptualize the flow, but as an actual physical process in which coherent structures merge and split. The volume of an eddy can change either smoothly, when it is not merging or splitting, or through sudden changes. The processes of merging and splitting can be thought of as a direct (when splitting) or an inverse (when merging) cascade, following the ideas envisioned by Richardson (1920) and Obukhov (1941). It is observed that there is a minimum length of 30η (Kolmogorov units) above which mergers and splits begin to be important. Moreover, all eddies above 100η split and merge at least once in their lives. In those cases, the total volume gained and lost is a substantial fraction of the average volume of the structure involved, with slightly more splits (direct cascade) than mergers. Most branch interactions are found to be the shedding or absorption of Kolmogorov-scale fragments by larger structures, but more balanced splits or mergers spanning a wide range of scales are also found to be important. The results show that splits are more probable at the end of the life of the eddy, while mergers take place at the beginning. Although the results for the direct and the inverse cascades are not identical, they are found to be very symmetric, which suggests a high degree of reversibility of the cascade process.
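The identification and tracking steps described above reduce to two concrete operations: thresholding a field into connected regions, and linking the regions of consecutive snapshots that intersect geometrically. The following minimal sketch (Python with numpy/scipy) illustrates the idea; the function names, the form of the threshold and the value of H are illustrative assumptions, not the thesis code.

    import numpy as np
    from scipy import ndimage

    def identify(uv, scale, H=1.75):
        # Label connected regions where the instantaneous tangential stress
        # exceeds H times a reference scale (a hyperbolic-hole style
        # threshold; the value of H is an assumption, not the calibration
        # used in the thesis).
        labels, n = ndimage.label(np.abs(uv) > H * scale)
        return labels, n

    def link(labels_a, labels_b):
        # Edges of the temporal connection graph: pairs of structures in
        # two consecutive fields whose voxel sets intersect, weighted by
        # overlap volume. The strongest edge of each structure can be taken
        # as its primary branch, the remaining edges as secondary ones.
        both = (labels_a > 0) & (labels_b > 0)
        pairs, counts = np.unique(
            np.stack([labels_a[both], labels_b[both]]),
            axis=1, return_counts=True)
        return pairs.T, counts

Applied to every pair of consecutive fields, these weighted edges assemble into the connection graphs from which primary and secondary branches, and hence the birth-to-death histories, splits and mergers discussed above, can be extracted.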
Resumo:
In this contribution, a novel iterative bit and power allocation (IBPA) approach is developed for the transmission of a given bit/s/Hz data rate over a correlated, frequency non-selective (4 × 4) Multiple-Input Multiple-Output (MIMO) channel. The iterative resource allocation algorithm developed in this investigation aims at achieving the minimum bit-error rate (BER) in a correlated MIMO communication system. In order to achieve this goal, the available bits are iteratively allocated to the active MIMO layers that present the minimum transmit power requirement per time slot.
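The allocation loop described above is essentially greedy: each iteration gives the next increment of bits to the layer that needs the least additional transmit power. A minimal sketch of that idea follows, assuming a standard square-QAM power model of the form (2^b − 1)/gain at a fixed error-rate target; the helper names and the gain values are illustrative, not taken from the paper.

    import numpy as np

    def required_power(bits, gain):
        # Square-QAM power needed to hold a fixed error target on a layer
        # with the given post-processing gain (textbook model, assumed here).
        return (2.0 ** bits - 1.0) / gain if bits > 0 else 0.0

    def iterative_bit_allocation(layer_gains, total_bits, step=2):
        bits = np.zeros(len(layer_gains), dtype=int)
        for _ in range(0, total_bits, step):
            # Extra power each layer would need to carry `step` more bits.
            delta = [required_power(b + step, g) - required_power(b, g)
                     for b, g in zip(bits, layer_gains)]
            bits[int(np.argmin(delta))] += step  # cheapest layer wins
        return bits

    # Four layers with unequal gains; 8 bits per channel use to distribute.
    print(iterative_bit_allocation(np.array([3.2, 1.5, 0.7, 0.2]), 8))

Run on the example gains, the loop loads most bits onto the strongest layers, which is the behaviour that minimizes the total transmit power, and thereby the BER, per time slot.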
Resumo:
One of the main objectives of the European Commission related to climate and energy is the well-known 20-20-20 targets to be achieved in 2020: Europe has to reduce greenhouse gas emissions by at least 20% below 1990 levels, 20% of EU energy consumption has to come from renewable resources and, finally, a 20% reduction in primary energy use compared with projected levels has to be achieved by improving energy efficiency. In order to reach these objectives, it is necessary to reduce overall emissions, mainly in transport (reducing CO2, NOx and other pollutants), and to increase the penetration of intermittent renewable energy. A high deployment of battery electric vehicles (BEVs) and plug-in hybrid electric vehicles (PHEVs), with a low-cost source of energy storage, could help to achieve both targets. Hybrid electric vehicles (HEVs) use a combination of a conventional internal combustion engine (ICE) with one (or more) electric motor. There are different grades of hybridization, ranging from micro-hybrids (with start-stop capability), through mild hybrids (with kinetic energy recovery) and medium hybrids (mild hybrids plus energy assist), to full hybrids (medium hybrids plus electric launch capability). These last types of vehicles use a typical battery capacity of around 1-2 kWh. Plug-in hybrid electric vehicles (PHEVs) use larger battery capacities to achieve a limited electric-only driving range. These vehicles are charged by on-board electricity generation or by plugging into electric outlets. Typical battery capacity is around 10 kWh. Battery Electric Vehicles (BEVs) are driven only by electric power and their typical battery capacity is around 15-20 kWh. One type of PHEV, the Extended Range Electric Vehicle (EREV), operates as a BEV until its plug-in battery capacity is depleted, at which point its gasoline engine powers an electric generator to extend the vehicle's range. The charging of PHEVs (including EREVs) and BEVs will have different impacts on the electric grid, depending on the number of vehicles and the start time for charging. The lecture will start by analyzing the electrical power requirements for charging PHEVs-BEVs in the Flanders region (Belgium) under different charging scenarios. Secondly, and based on an activity-based microsimulation mobility model, an efficient method to reduce this impact will be presented.
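As a rough illustration of the grid-impact question, the sketch below converts a charging scenario (fleet size, charger rating, charging coincidence) into a peak load; all figures are assumptions for illustration, not the Flanders data used in the lecture.

    def fleet_peak_load_mw(n_vehicles, charger_kw=3.7, coincidence=0.6):
        # Peak grid load if a fraction `coincidence` of the fleet charges
        # simultaneously (illustrative uncontrolled-charging scenario).
        return n_vehicles * charger_kw * coincidence / 1000.0

    # 100,000 PHEVs on 3.7 kW home chargers, 60% starting at the evening peak:
    print(f"{fleet_peak_load_mw(100_000):.0f} MW")   # -> 222 MW

Shifting the start times (the kind of control an activity-based mobility model enables) lowers the coincidence factor and hence the peak, which is the essence of the impact-reduction method the lecture presents.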
Resumo:
Hoy día nadie discute la importancia de predecir el comportamiento vibroacústico de estructuras (edificios, vehículos, aeronaves, satélites). También se ha hecho patente, con el tiempo, que el rango espectral en el que la respuesta es importante se ha desplazado hacia alta frecuencia en prácticamente todos los campos. Esto ha hecho que los métodos de análisis en este rango alto de frecuencias cobren importancia y actualidad. Uno de los métodos más extendidos para este fin es el basado en el Análisis Estadístico de la Energía, SEA. Es un método que ha mostrado proporcionar un buen equilibrio entre potencia de cálculo, precisión y fiabilidad. En un SEA el sistema (estructura, cavidades o aire circundante) se modela mediante una matriz de coeficientes que dependen directamente de los factores de pérdidas de las distintas partes del sistema. Formalmente es un método de análisis muy cómodo e intuitivo de manejar cuya mayor dificultad es precisamente la determinación de esos factores de pérdidas. El catálogo de expresiones analíticas o numéricas para su determinación no es suficientemente amplio, por lo que normalmente se suele acabar necesitando hacer uso de herramientas experimentales, ya sea para su obtención o para la comprobación de los valores utilizados. La determinación experimental tampoco está exenta de problemas: su obtención necesita de configuraciones experimentales grandes y complejas con requisitos que pueden llegar a ser muy exigentes y en las que, además, se ven involucrados problemas numéricos relacionados con los valores de los propios factores de pérdidas, el valor relativo entre ellos y las características de las matrices que conforman. Este trabajo estudia la caracterización de sistemas vibroacústicos mediante el análisis estadístico de energía. Se centra en la determinación precisa de los valores de los factores de pérdidas. Dados los problemas que puede presentar un sistema experimental de estas características, en una primera parte se estudia la influencia de todas las magnitudes que intervienen en la determinación de los factores de pérdidas mediante un estudio de incertidumbres relativas, que, por medio de los coeficientes de sensibilidad normalizados, indicará la importancia de cada una de las magnitudes de entrada (esencialmente energías y potencias) en los resultados. De esta parte se obtiene una visión general sobre a qué mensurandos se debe prestar más atención, y de qué problemas pueden ser los que más influyan en la falta de estabilidad (o incoherencia) de los resultados. Además, proporciona un modelo de incertidumbres válido para los casos estudiados y ha permitido evaluar el error cometido por algún método utilizado habitualmente para la caracterización de factores de pérdidas, como la aproximación a 2 subsistemas. En una segunda parte se hace uso de las conclusiones obtenidas en la primera, de forma que el trabajo se orienta en dos direcciones. Una dirigida a la determinación suficientemente fiel de la potencia de entrada que permita simplificar en lo posible la configuración experimental. Otra basada en un análisis detallado de las propiedades de la matriz que caracteriza un SEA y que conduce a la propuesta de un método para su determinación robusta, basado en un filtrado de Montecarlo y que, además, muestra cómo los problemas numéricos de la matriz SEA pueden no ser tan insalvables como se apunta en la literatura. Por último, además, se plantea una solución al caso en el que no todos los subsistemas en los que se divide el sistema puedan ser excitados.
El método propuesto aquí no permite obtener el conjunto completo de coeficientes necesarios para definir un sistema, pero el solo hecho de poder obtener conjuntos parciales ya es un avance importante y, sobre todo, abre la puerta al desarrollo de métodos que permitan relajar de forma importante las exigencias que la determinación experimental de matrices SEA tiene.
ABSTRACT
At present there is agreement about the importance of predicting the vibroacoustic response of structures (buildings, vehicles, aircraft, satellites, etc.). In addition, it has become clear over time that the frequency range over which the response is important has shifted toward higher frequencies in almost all engineering fields. As a consequence, numerical methods for high-frequency analysis have increased in importance. One of the most widespread methods for this type of analysis is the one based on Statistical Energy Analysis, SEA. This method has been shown to provide a good balance among computational power, accuracy and reliability. Within SEA, a system (structure, cavities, surrounding air) is modeled by a coefficient matrix that depends directly on the loss factors of the different parts of the system. Formally, SEA is a very handy and intuitive analysis method whose greatest handicap is precisely the determination of the loss factors. The existing set of analytical or numerical expressions to obtain the loss factor values is not large enough, so it is usually necessary to resort to experimental techniques, whether to determine the factors or to check values estimated by other means. The experimental determination presents drawbacks as well: it requires large and complex experimental setups with requirements that can be very demanding, and it involves numerical problems related to the values of the loss factors themselves, their relative values and the characteristics of the matrices they define. The present work studies the characterization of vibroacoustic systems by the SEA method. It focuses on the accurate determination of the loss factor values. Given all the problems an experimental setup of these characteristics can show, the work is divided in two parts. In the first part, the influence of all the quantities involved in the determination of the loss factors is studied by a relative uncertainty estimation, which, by means of the normalised sensitivity coefficients, provides an insight into the importance of every input quantity (energies and input powers, mainly) on the final result. Besides, this part gives an uncertainty model that has allowed assessing the error of one of the most widely used methods to characterize the loss factors: the 2-subsystem approach. The second part builds on the former conclusions. An equation for the input power into the subsystems is proposed; this equation allows simplifying the experimental setup without compromising the reliability of the test. A detailed study of the SEA matrix properties leads to the proposal of a robust determination method for this SEA matrix based on Monte Carlo filtering, which in turn shows how the numerical problems of the SEA matrix can be overcome. Finally, a solution is proposed for the case where not all the subsystems can be excited.
The proposed method does not allow obtaining the whole set of coefficients of the SEA matrix, but the simple fact of getting partial sets of loss factors is already a significant advance and, above all, it opens the door to the development of new methods that loosen the requirements that the experimental determination of a SEA matrix imposes.
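To make the objects concrete: in SEA the injected powers P and the subsystem energies E are linked through the loss-factor matrix, P = ω L E, so L can be estimated from measurements, and a Monte Carlo filtering step can discard unphysical solutions. The sketch below illustrates this idea under assumed values and admissibility conditions (positive diagonal, non-positive coupling terms); it is not the procedure or the data of the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    omega = 2 * np.pi * 1000.0            # band centre frequency, rad/s

    def sea_matrix_samples(E, P, rel_unc=0.05, n=10_000):
        # Draw perturbed measurements and keep only physically admissible
        # loss-factor matrices: positive diagonal (total loss factors) and
        # non-positive off-diagonal coupling terms.
        kept = []
        for _ in range(n):
            Ep = E * (1 + rel_unc * rng.standard_normal(E.shape))
            Pp = P * (1 + rel_unc * rng.standard_normal(P.shape))
            L = (Pp / omega) @ np.linalg.inv(Ep)   # solve P = omega * L * E
            off = L - np.diag(np.diag(L))
            if (np.diag(L) > 0).all() and (off <= 0).all():
                kept.append(L)
        return np.array(kept)

    # Two-subsystem example: column j holds the energies measured when only
    # subsystem j is excited with unit input power (illustrative values).
    E = np.array([[2.0e-3, 4.0e-4],
                  [3.0e-4, 1.5e-3]])
    P = np.eye(2)
    samples = sea_matrix_samples(E, P)
    print(len(samples), samples.mean(axis=0))

The retained sample cloud gives both a robust estimate of the loss factors (its mean or median) and a direct picture of how ill-conditioning of E propagates into them.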
Resumo:
Esta tesis se desarrolla dentro del marco de las comunicaciones satelitales en el innovador campo de los pequeños satélites, también llamados nanosatélites o cubesats, llamados así por su forma cúbica. Estos nanosatélites se caracterizan por su bajo costo, debido a que usan componentes comerciales llamados COTS (commercial off-the-shelf), y por su pequeño tamaño, como los Cubesats 1U (10 cm × 10 cm × 10 cm) con masa aproximada a 1 kg. Este trabajo de tesis tiene como base una iniciativa propuesta por el autor de la tesis para poner en órbita el primer satélite peruano de mi país, llamado Chasqui I, actualmente puesto en órbita desde la Estación Espacial Internacional. La experiencia de este trabajo de investigación me llevó a proponer una constelación de pequeños satélites llamada Waposat para dar servicio de monitoreo de sensores de calidad de agua a nivel global, escenario que es usado en esta tesis. En este entorno, y dadas las características limitadas de los pequeños satélites, tanto en potencia como en velocidad de datos, propongo investigar una nueva arquitectura de comunicaciones que permita resolver en forma óptima la problemática planteada por los nanosatélites en órbita LEO, debido al carácter disruptivo de sus comunicaciones, poniendo énfasis en las capas de enlace y aplicación. Esta tesis presenta y evalúa una nueva arquitectura de comunicaciones para proveer servicio a una red de sensores terrestres usando una solución basada en DTN (Delay/Disruption Tolerant Networking) para comunicaciones espaciales. Adicionalmente, propongo un nuevo protocolo de acceso múltiple que usa una extensión del protocolo ALOHA no ranurado, el cual toma en cuenta la prioridad del tráfico del gateway (ALOHAGP) con un mecanismo de contienda adaptativo. Utiliza la realimentación del satélite para implementar el control de la congestión y adapta dinámicamente el rendimiento efectivo del canal de una manera óptima. Asumimos un modelo de población de sensores finito y una condición de tráfico saturado en el que cada sensor tiene siempre tramas que transmitir. El desempeño de la red se evaluó en términos de rendimiento efectivo, retardo y equidad del sistema. Además, se ha definido una capa de convergencia DTN (ALOHAGP-CL) como un subconjunto del estándar TCP-CL (Transmission Control Protocol-Convergence Layer). Esta tesis muestra que ALOHAGP/CL soporta adecuadamente el escenario DTN propuesto, sobre todo cuando se utiliza la fragmentación reactiva. Finalmente, esta tesis investiga una transferencia óptima de mensajes DTN (“bundles”) utilizando estrategias de fragmentación proactivas para dar servicio a una red de sensores terrestres mediante un enlace de comunicaciones satelitales que utiliza el mecanismo de acceso múltiple con prioridad en el tráfico de enlace descendente (ALOHAGP). El rendimiento efectivo ha sido optimizado mediante la adaptación de los parámetros del protocolo como una función del número actual de sensores activos recibido desde el satélite. Además, actualmente no existe un método para anunciar o negociar el tamaño máximo de un “bundle” que puede ser aceptado por un agente DTN en las comunicaciones por satélite, tanto para el almacenamiento como para la entrega, por lo que los “bundles” demasiado grandes son eliminados y los demasiado pequeños resultan ineficientes.
He caracterizado este tipo de escenario obteniendo una distribución de probabilidad de la llegada de tramas al nanosatélite, así como una distribución de probabilidad del tiempo de visibilidad del nanosatélite, las cuales proveen una fragmentación proactiva óptima de los “bundles” DTN. He encontrado que el rendimiento efectivo (goodput) de la fragmentación proactiva alcanza un valor ligeramente inferior al de la fragmentación reactiva. Esta contribución permite utilizar la fragmentación proactiva de forma óptima con todas sus ventajas, tales como permitir implantar el modelo de seguridad de DTN y la simplicidad al implementarlo en equipos con muchas limitaciones de CPU y memoria. La implementación de estas contribuciones se ha contemplado inicialmente como parte de la carga útil del nanosatélite QBito, que forma parte de la constelación de 50 nanosatélites que se está llevando a cabo dentro del proyecto QB50.
ABSTRACT
This thesis is developed within the framework of satellite communications in the innovative field of small satellites, also known as nanosatellites (<10 kg) or CubeSats, so called from their cubic form. These nanosatellites are characterized by their low cost, because they use commercial components called COTS (commercial off-the-shelf), and by their small size and mass, such as 1U CubeSats (10 cm × 10 cm × 10 cm) with approximately 1 kg mass. This thesis is based on a proposal made by the author of the thesis to put into orbit the first Peruvian satellite of his country, called Chasqui I, which was successfully launched into orbit from the International Space Station in 2014. The experience of this research work led me to propose a constellation of small satellites named Waposat to provide a water quality sensor monitoring service worldwide, a scenario that is used in this thesis. In this scenario, and given the limited features of nanosatellites in both power and data rate, I propose to investigate a new communications architecture that solves in an optimal manner the problems of nanosatellites in LEO orbit, caused by the disruptive nature of their communications, putting emphasis on the link and application layers. This thesis presents and evaluates a new communications architecture to provide services to terrestrial sensor networks using a space Delay/Disruption Tolerant Networking (DTN) based solution. In addition, I propose a new multiple access mechanism protocol based on extended unslotted ALOHA that takes into account the priority of gateway traffic, which we call ALOHA multiple access with gateway priority (ALOHAGP), with an adaptive contention mechanism. It uses satellite feedback to implement congestion control and to dynamically adapt the channel effective throughput in an optimal way. We assume a finite sensor population model and a saturated traffic condition where every sensor always has frames to transmit. The performance was evaluated in terms of effective throughput, delay and system fairness. In addition, a DTN convergence layer (ALOHAGP-CL) has been defined as a subset of the standard TCP-CL (Transmission Control Protocol-Convergence Layer). This thesis shows that ALOHAGP/CL adequately supports the proposed DTN scenario, mainly when reactive fragmentation is used. Finally, this thesis investigates an optimal DTN message (bundle) transfer using proactive fragmentation strategies to give service to a ground sensor network using a nanosatellite communications link which uses a multiple access mechanism with priority in downlink traffic (ALOHAGP).
The effective throughput has been optimized by adapting the protocol parameters as a function of the current number of active sensors received from the satellite. Also, there is currently no method for advertising or negotiating the maximum size of a bundle which can be accepted by a bundle agent in satellite communications for storage and delivery, so that bundles that are too large are dropped, while those that are too small are inefficient. We have characterized this kind of scenario by obtaining a probability distribution of frame arrivals at the nanosatellite, as well as a distribution of the nanosatellite visibility time, which provide an optimal proactive fragmentation of DTN bundles. We have found that the proactive effective throughput (goodput) reaches a value slightly lower than that of the reactive fragmentation approach. This contribution makes it possible to use proactive fragmentation optimally, with all its advantages, such as the incorporation of the DTN security model and simplicity of protocol implementation on hardware with severe CPU and memory limitations. The implementation of these contributions was initially contemplated as part of the payload of the nanosatellite QBito, which is part of the constellation of 50 nanosatellites envisaged under the QB50 project.
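The adaptive-contention idea can be made concrete with the classical unslotted ALOHA model: with N saturated sensors each transmitting with probability p per frame time, the offered load is G = N·p and the goodput is approximately G·exp(−2G), maximal at G = 1/2, so broadcasting an estimate of N lets every sensor set p accordingly. The sketch below is a minimal illustration of that mechanism, not the actual ALOHAGP protocol, which additionally handles gateway-priority traffic.

    import math

    def pure_aloha_goodput(G):
        # Classical unslotted ALOHA: goodput G*exp(-2G), maximal at G = 1/2.
        return G * math.exp(-2.0 * G)

    def adapted_tx_probability(n_active):
        # Each sensor transmits with p = 1/(2N), driving the offered load
        # G = N*p toward the optimum regardless of the population size.
        return 1.0 / (2.0 * max(n_active, 1))

    for n in (10, 100, 1000):
        G = n * adapted_tx_probability(n)
        print(n, round(pure_aloha_goodput(G), 3))   # ~0.184 for every n

The point of the feedback loop is visible in the output: as long as the satellite's estimate of the active population tracks reality, the channel stays pinned near its maximum goodput independently of how many sensors contend.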
Resumo:
La Casa Industrializada supone el ideal de realizar la casa unifamiliar a través de la potencia y los procedimientos de la industria. Como tal, la casa supone un producto industrial más, sujeto a la lógica de la reproducción y del consumo. Como producto de consumo la casa debe establecerse como objeto de deseo, accesible al grupo de usuarios-consumidores al que va dirigido. El sueño de la Casa Industrializada se origina en la primera Revolución Industrial y se consolida en la segunda tras la producción del Ford T y la adhesión de los padres del movimiento moderno. A lo largo de su historia se han sucedido casos de éxito y fracaso, los primeros con la realización de un producto de imagen convencional y los segundos la mayor parte de las veces dirigidos por arquitectos. El sueño de la Casa Industrializada de la mano de arquitectos está comenzando a ser una realidad en Japón, Suecia y Estados Unidos a través de marcas como MUJI, Arkitekthus y Living Homes, pero aún dista de ser un hecho extendido en nuestra sociedad. Para que este ideal se cumpla deberá ofrecer valores que permitan a la sociedad hacerlo suyo. La Tesis busca analizar la historia y la metodología de la Casa Industrializada, desde el diseño a la comercialización, con el fin de ofrecer esos valores en forma de propuestas para la Casa Industrializada en este milenio. La casa como producto industrial-producto de consumo supera las lógicas tradicionales de la arquitectura para operar dentro del contexto de la producción industrial y la reproducción de los objetos. En este sentido es necesario establecer no solo la forma y construcción de la casa sino los mecanismos de reproducción con sus pertinentes eficiencias. La Casa Industrializada no se construye, se monta, y para ello utiliza las estrategias de la construcción en seco, la prefabricación, el uso de componentes y los materiales ligeros. Desde la lógica del consumo, la casa debe dirigirse a un determinado público; ya no es la casa para todos, característica de las situaciones de crisis y de emergencia. La casa se enfrenta a un mercado segmentado, tanto en cultura, como en deseos y poderes adquisitivos. En la cuestión del diseño debe plantearse más como diseño de producto que como diseño arquitectónico. La Casa Industrializada no es el fruto de un encargo y de una acción singular, debe ofrecerse lista para adquirir y para ser reproducida. Esta reproducción se puede dar tanto en la forma de modelos cerrados como de sistemas abiertos que permitan la personalización por parte de los usuarios. Desde el ámbito cultural es necesario entender que la casa es más que una máquina de habitar: es un receptor de emociones, forma parte de nuestra memoria y nuestra cultura. La casa como producto social es una imagen de nosotros mismos, define la manera en la que nos situamos en el mundo y por tanto supone una definición de estatus. En esto, la Tesis se apoya en los textos de Baudrillard y su análisis de la sociedad de consumo y el papel de los objetos y su valor como signo. La Tesis realiza un repaso de los procedimientos industriales con especial énfasis en la producción automovilística y sitúa la evolución de la Casa Industrializada en relación a la evolución de los avances en los sistemas de producción industrial y las transferencias desde las industrias del automóvil y aeronáutica.
La tesis se completa con una serie de casos de estudio que parten de las primeras casas de venta por correo de principios del siglo XX, pasando por las propuestas de Gropius, Fuller, el Case Study House Program, Prouvé o Sota, y acaban con la situación actual. La Casa Industrializada ha mantenido una serie de valores a lo largo de su historia; como ideal, forma un cuerpo estable de propuestas que no se ha modificado a lo largo del tiempo. Con respecto a este nuevo milenio, este ideal no debe ser cambiado sino simplemente actualizado y adaptado a los métodos de producción y las necesidades, sueños y exigencias de la sociedad de hoy.
ABSTRACT
The Industrialized House represents the ideal of producing the single-family house through the power and procedures of industry. As such, the house becomes an industrial product that responds to the logic of reproduction and consumption. As a commodity, the house must become a desirable object, accessible to the group of user-consumers to which it is targeted. The dream of the Industrialized House originated in the First Industrial Revolution and was consolidated in the second one after Ford's production of the Model T and the incorporation of the principal figures of the modern movement to the ideal of making houses in factories. Throughout its history there have been cases of success and failure, the former with the completion of products of conventional image and the latter most often led by architects. The dream of the architect-led Industrialized House is starting to become a reality in Japan, Sweden and the United States through brands such as MUJI, Arkitekthus and Living Homes, but it is still far from being a widespread fact in our society. To fulfill this ideal, it should offer values that society could accept as its own. The Thesis seeks to analyze the history and methodology of the Industrialized House, from design to marketing, in order to offer these values in the form of proposals for the Industrialized House in this millennium. The house as an industrial product and commodity extends beyond the traditional logic of architecture to operate within the context of industrial production and the reproduction of objects. In this sense it is necessary to establish not only the shape and construction of the house but also the mechanisms of reproduction with their relevant efficiencies. The Industrialized House is not built, it is assembled, and to that end it uses the strategies of dry construction, prefabrication, and the use of components and lightweight materials. From the logic of consumption, the house must target a certain audience; it is no longer the home for all, characteristic of crisis response and emergency. The house faces a market that is segmented in culture as well as in desires and purchasing power. On the question of design, it must be considered more as product design than as architectural design. The Industrialized House is not the result of a commission and a singular action; it should be offered ready to purchase and able to be reproduced. This reproduction can take the form of closed models or open systems that allow customization by users. From the cultural sphere it is necessary to understand that the house is more than a machine for living: it is a recipient of emotions, part of our memory and our culture. The home as a social product is an image of ourselves; it defines the way in which we place ourselves in the world and therefore represents a definition of status.
In this respect, the thesis draws on the texts of Baudrillard and his analysis of consumer society and of the role of objects and their value as signs within it. The thesis reviews industrial processes, with emphasis on automotive production, and places the evolution of the Industrialized House in relation to the evolution of developments in industrial production systems and the transfers from the automotive and aeronautics industries. The thesis is completed with a series of case studies that start from the first mail-order houses of the early twentieth century, pass through the proposals of Gropius, Fuller, the Case Study House Program, Prouvé and Sota, and end with the current situation. The Industrialized House has held a series of values throughout its history; as an ideal, it forms a stable corpus of proposals that has not changed over time. Regarding this new millennium, this ideal should not be changed but simply updated and adapted to the production methods and the needs, dreams and demands of today's society.
Resumo:
"Si el hombre es el cuidador de las palabras y sólo de ellas emerge el sentido de las cosas, la arquitectura tiene un cometido preciso: hacer de las condiciones ya dadas de cada lugar palabras que signifiquen las cualidades de la existencia, y que desvelen la riqueza y contenidos que en ellas se contienen potencialmente" Ignasi Solá Morales. Lugar: permanencia o producción, 1992. Esta tesis surge tanto del afán por comprender la identidad de uno de los espacios más representativos de mi ciudad, asumido familiarmente pero que plantea muchas dudas respecto a su caracterización, como de la preocupación personal respecto a la aparente hegemonía del modelo urbano de la "ciudad genérica", crudamente expuesto por Rem Koolhaas a finales del siglo XX, que pone en crisis la ciudad histórica. El territorio, espacio físico concreto, y la memoria asociada a este, obliterados, son considerados como punto de partida para confrontarlos con la proclamación del nuevo modelo de "ciudad genérica", de raíz eminentemente económica y tecnológica. La realidad tangible de un espacio, aparentemente forjado en base a los valores denostados por el nuevo modelo propuesto, se estudia desde las premisas opuestas. La idea del no-lugar, teorizado por Marc Augé y tomado como modelo por Koolhaas, supone éste emancipado tanto de las preexistencias históricas como de su ubicación física concreta, planteando un tipo de espacio de representación al margen del territorio y la memoria. Sin pretender adoptar una postura resistente u opuesta, sino antitética y complementaria, se toman aquí las premisas de Koolhaas para contrastarlas con una porción del territorio a medio camino entre la arquitectura y la ciudad, a fin de desarrollar una reflexión que sirva de complemento y contrapeso al paradigma espacial que la “ciudad genérica” implica y cuya inmediatez y supuesta anomia parecen anular cualquier intención interpretativa al neutralizar los centros históricos y proclamar el agotamiento de la historia. El planteamiento de una teoría dicotómica frente al espacio y las teorías arquitectónicas asociadas a este ya fue formulado por Colin Rowe y Fred Koetter a finales de los años setenta del siglo pasado. Se plantea aquí la idea de una “ciudad tangible” como opuesta a la idea de la "ciudad genérica" enunciada por Koolhaas. Tomando el territorio y la memoria como referencia principal en un lugar concreto y huyendo de la premisa de la inmediatez del instante y el "presente perpetuo" proclamado por Koolhaas, del que según él seríamos prisioneros, se establece una distancia respecto al objeto de análisis que desarrolla el estudio en la dirección opuesta al supuesto origen del mismo, planteando la posibilidad de reactivar una reflexión en torno al territorio y la memoria en el seno del proceso global de habitación para poner de manifiesto determinados mecanismos de configuración de un espacio de representación al margen de la urgencia del presente, reactivando la memoria y su relación con el territorio como punto de partida. Desde de la reconstrucción hipotética del territorio, partiendo de la propia presencia física del mismo, su orografía, la paleo-biología, las analogías etológicas, los restos arqueológicos, la antropología o la historia, se reivindica la reflexión arquitectónica como disciplina diversa y privilegiada en cuanto al análisis espacial, tratando de discernir el proceso mediante el cual el Prado pasó de territorio a escenario. 
La organización cronológica del estudio y la incorporación de muy diversas fuentes, en su mayoría directas, pretende poner de manifiesto la condición transitiva del espacio de representación y contrastar el pasado remoto del lugar y su construcción con el momento actual, inevitablemente encarnado por el punto de vista desde el cual se desarrolla la tesis. El Prado parece albergar, agazapado en su nombre, la raíz de un origen remoto y olvidado. Si como enunciaba Ignasi Solá-Morales la función de la arquitectura es hacer aflorar los significados inherentes al lugar, esta tesis se plantea como una recuperación de la idea del vínculo entre el territorio y la memoria como fuente fundamental en la definición de un espacio de representación específico. El escrutinio del pasado constituye un acto eminentemente contemporáneo, pues el punto de vista y la distancia, inevitablemente condicionados por el presente, determinan la mirada. El culto contemporáneo a la inmediatez y la proclamación de la superación de los procesos históricos han relegado el pasado, en cierto grado, a depósito de restos o referente a superar, obviando su ineluctable condición de origen o momento anterior condicionante. Partiendo de la reconstrucción del lugar sobre el cual se halla el Prado ubicado y reconsiderando, según las premisas de la moderna historiografía, fundamentalmente desarrolladas por la Escuela francesa de los Annales, la cotidianeidad y lo anónimo como fuente de la que dimanan muchos de los actuales significados de nuestros espacios de representación, tomando como punto de partida un lugar remoto y olvidado, se estudia cómo se fue consolidando el Prado hasta devenir un lugar insigne de referencia asociado a los poderes fácticos y el espacio áulico de la capital de las Españas en el siglo XVII. El proceso mediante el cual el Prado pasó de territorio a escenario implica la recuperación de la memoria de un espacio agropecuario anónimo y el análisis de cómo, poco a poco, se fue depositando sobre el mismo el acervo de los diversos pobladores de la región que con sus particularidades culturales y sociales fueron condicionando, en mayor o menor grado, un lugar cuyo origen se extiende retrospectivamente hasta hace más de dos mil años, cuando se considera que pudo darse la primera habitación a partir de la cual, de manera ininterrumpida, el Prado ha venido siendo parte de lo que devino, más tarde, Madrid. La llegada de nuevos agentes, vinculados con estructuras de poder y territoriales que trascendían la inmediatez del territorio sobre el que se comenzó a erigir dicho lugar, sirven para repasar los diferentes depósitos ideológicos y culturales que han ido conformando el mismo, reivindicando la diversidad y lo heterogéneo del espacio de representación frente a la idea homogeneizadora que el modelo genérico implica.
La constitución del Prado como un espacio de referencia asociado al paganismo arcaico a partir de la praxis espacial cotidiana, su relación con las estructuras defensivas de Al-Andalus y la atalaya Omeya, la apropiación de los primitivos santuarios por parte de la iglesia, su relación con un determinado tipo de espiritualidad y las órdenes religiosas más poderosas de la época, la preferencia de Carlos V por Madrid y sus vínculos con la cultura europea del momento, o la definitiva metamorfosis del lugar a partir del siglo XVI y el advenimiento de un nuevo paganismo emblemático y estetizado, culminan con el advenimiento de lo económico como representación del poder en el seno de la corte y la erección del Palacio del Buen Retiro como manifestación tangible de la definitiva exaltación del Prado a espacio de representación áulico. Decía T. S. Eliot que la pugna por el espacio de la memoria constituye el principal rasgo del clasicismo, y el Prado, ciertamente, participa de ese carácter al que está profundamente asociado en la conciencia espacial de los madrileños como lugar de referencia. Acaso la obliteración del territorio y la memoria, propuestas en la “ciudad genérica”, también tengan algo que ver con ello.
ABSTRACT
"If man is the caretaker of words and only they provide the sense of things, architecture has a precise mission: to make out of the given conditions of each place words that mean the qualities of existence, and which unveil the wealth and content they potentially contain" Ignasi Solá Morales. Place: permanence or production, 1992. This thesis arises from both the desire to understand the identity of one of the most representative spaces of my city, assumed in a familiar way but raising many doubts about its characterization, and from a personal concern about the apparent hegemony of the urban model of the "generic city" so crudely exposed by Rem Koolhaas in the late twentieth century, which puts a strain on the historic city. The territory, a specific physical space, and its associated memory, both obliterated, are considered as a starting point to confront them with the proclamation of the new model of the "generic city", raised from eminently economic and technological roots. The tangible reality of a space, apparently forged on the values reviled by the proposed new model, is studied from the opposite premises. The idea of the non-place, theorized by Marc Augé and taken as a model by Koolhaas, implies its emancipation from both historical preexistences and physical location, posing a type of space of representation outside territory and memory. Without wishing to establish a confrontational or opposite position, but an antithetical and complementary stance, the premises of Koolhaas are here taken and contrasted with a portion of territory halfway between architecture and the city, to develop a study that will complement and counterbalance the spatial paradigm that the "generic city" implies and whose alleged immediacy and anomie appear to nullify any interpretative intention by neutralizing the historic centers and proclaiming the exhaustion of history. A dichotomous theory of space and its associated architectural theories was already formulated by Colin Rowe and Fred Koetter in the late seventies of the last century. The idea of a "tangible city" as opposed to the idea of the "generic city" enunciated by Koolhaas arises here.
Taking territory and memory as the main reference in a particular place, and avoiding the premise of the immediacy of the moment and the "perpetual present" proclaimed by Koolhaas, of which, he claims, we would be prisoners, a distance is established from the object of analysis, developing the study in the opposite direction to the alleged origin of it, and raising the possibility of reactivating a reflection on territory and memory within the overall process of inhabiting, to reveal certain mechanisms of configuration of a space of representation outside the urgency of the present, reviving memory and its relationship with the territory as a starting point. From the hypothetical reconstruction of the territory, starting from its physical presence, geography, paleo-biology, ethological analogies, archaeological remains, anthropology or history, architectural reflection is claimed as a discipline as diverse as it is privileged for spatial analysis, trying to discern the process by which the Prado moved from territory to stage. The chronological organization of the study and the incorporation of a variety of sources, most of them direct, aim to highlight the transitive condition of the space of representation and to contrast the remote past of the place and its construction with the current moment, inevitably embodied by the point of view from which the thesis develops. The Prado seems to harbor, in its name, the root of a remote and forgotten origin. If, as Ignasi Solá-Morales said, the aim of architecture is to bring out the meanings inherent in the site, this thesis is presented as a recovery of the idea of the link between territory and memory as a key source in defining a specific space of representation. The scrutiny of the past is an eminently contemporary act, for the point of view and the distance, inevitably conditioned by the present, determine the way we look. The contemporary cult of immediacy and the proclamation of the overcoming of historical processes have relegated the past, to some extent, to a deposit of remains or a reference to be overcome, ignoring its ineluctable condition as origin or conditioning prior moment. Starting from the reconstruction of the site on which the Prado is located, and reconsidering, according to the premises of modern historiography (chiefly those developed by the French school of the Annales), everyday life and the anonymous as the source of many of the current meanings of our spaces of representation, and taking as a starting point a remote and forgotten place, the thesis studies how the Prado was consolidated until it became one of the most significant places of Madrid, deeply associated with power in the capital of Spain during the seventeenth century. The process by which the Prado evolved from territory to stage involves the recovery of the memory of an anonymous agricultural space and the analysis of how, little by little, the heritage of the various inhabitants of the region, with their cultural and social peculiarities, was deposited on it over time, conditioning, to a greater or lesser degree, a place whose origin extends retrospectively over more than two thousand years, when the first habitation may be considered to have taken place, from which, without interruption, the Prado has been part of what later became Madrid.
The arrival of new players, linked to power and territorial structures which transcended the immediacy of the territory on which the place began to be erected, serves to review the different ideological and cultural deposits that have shaped it, vindicating the diversity and heterogeneity of the space of representation against the homogenizing idea that the generic model implies. The constitution of the Prado as a space of reference associated with archaic paganism arising from everyday spatial praxis, its relationship with the defensive structures of Al-Andalus and the Umayyad watchtower, the appropriation of the early sanctuaries by the Roman church, its relationship with a certain type of spirituality and with the most powerful religious orders of the time, the preference of Carlos V for Madrid and his links with the European culture of the moment, and the definitive metamorphosis of the place from the sixteenth century onward, culminate with the advent of the economic as a representation of power within the court and the erection of the Palacio del Buen Retiro, as a tangible manifestation of the ultimate exaltation of the Prado into a courtly space of representation, in the mid seventeenth century. T. S. Eliot said that the struggle for the space of memory is the main feature of classicism, and the Prado certainly shares that character, being deeply associated, as a landmark, with the spatial consciousness of the people of Madrid. Perhaps the obliteration of territory and memory proposed in the "generic city" might also have something to do with that.
Resumo:
This article advocates for a fundamental re-understanding of the way that the history of race is understood by the current Supreme Court. Represented by the racial rights opinions of Justice John Roberts that celebrate racial progress, the Supreme Court has equivocated and rendered obsolete the historical experiences of people of color in the United States. This jurisprudence has in turn reified the notion of color-blindness, consigning racial discrimination to a distant and discredited past that has little bearing on how race and inequality are experienced today. The racial history of the Roberts Court is centrally informed by the context and circumstances surrounding Brown v. Board of Education. For the Court, Brown symbolizes all that is wrong with the history of race in the United States - legal segregation, explicit racial discord, and vicious and random acts of violence. Though Roberts Court opinions suggest that some of those vestiges still exist, the bulk of its jurisprudence indicates the opposite. With Brown's basic factual premises as its point of reference, the Court has consistently argued that the nation has made tremendous strides away from the condition of racial bigotry, intolerance, and inequity. The article accordingly argues that the Roberts Court's reliance on Brown to understand racial progress is anachronistic. Especially as the nation's focus on racial inequality turned national in scope, the same binaries in Brown that had long served to explain the history of race relations in the United States (such as Black-White, North-South, and Urban-Rural) were giving way to massive multicultural demographic and geographic transformations in the United States in the years and decades after World War II. All of the familiar tropes so clear in Brown and its progeny could no longer fully describe the current reality of shifting and transforming patterns of race relations in the United States. In order to reclaim the history of race from the Roberts Court, the article assesses a case that more accurately symbolizes the recent history and current status of race relations today: Keyes v. School District No. 1. This was the first Supreme Court case to confront how the binaries of cases like Brown proved of little probative value in addressing how and in what ways race and racial discrimination were changing in the United States. Thus, understanding Keyes and the history it reflects reveals much about how and in what ways the Roberts Court should rethink its conclusions regarding the history of race relations in the United States over the last 60 years.
Resumo:
This paper addresses the problem of the automatic recognition and classification of temporal expressions and events in human language. Efficacy in these tasks is crucial if the broader task of temporal information processing is to be performed successfully. We analyze whether the application of semantic knowledge to these tasks improves the performance of current approaches. To that end, we present and evaluate a data-driven approach embodied in a system, TIPSem. Our approach uses lexical semantics and semantic roles as additional information to extend classical approaches, which are based principally on morphosyntax. The results obtained for English show that semantic knowledge aids temporal expression and event recognition, achieving error reductions of 59% and 21% respectively, while in classification its contribution is limited. From the analysis of the results it may be concluded that the application of semantic knowledge leads to more general models and aids the recognition of temporal entities that are ambiguous at shallower levels of language analysis. We also found that lexical semantics and semantic roles have complementary advantages, and that it is useful to combine them. Finally, we carried out the same analysis for Spanish; the results show comparable advantages, supporting the hypothesis that the proposed semantic knowledge may be useful across languages.
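To make the feature-combination idea concrete, here is a minimal, self-contained Python sketch of token-level feature extraction for temporal expression recognition: morphosyntactic features (word form, POS tag) are extended with a lexical-semantic class and a semantic role before the tokens are handed to a sequence classifier. The toy lexicon, the tag names, and the example sentence are illustrative assumptions, not TIPSem's actual resources.

```python
# Sketch of combining morphosyntactic and semantic features per token,
# assuming POS tags and semantic roles have already been assigned upstream.
# The lexicon below is a toy stand-in for a real lexical-semantic resource.
TEMPORAL_CLASSES = {"monday": "DAY", "june": "MONTH", "yesterday": "DEICTIC"}

def token_features(tokens, pos_tags, roles, i):
    """Build the feature dict for token i from both knowledge levels."""
    word = tokens[i].lower()
    return {
        "word": word,                                   # lexical form (morphosyntax)
        "pos": pos_tags[i],                             # part-of-speech tag
        "sem_class": TEMPORAL_CLASSES.get(word, "O"),   # lexical semantics
        "sem_role": roles[i],                           # semantic role, e.g. AM-TMP
        "prev_pos": pos_tags[i - 1] if i > 0 else "BOS",
    }

# Toy usage: "She arrived yesterday" with hypothetical POS and role tags.
tokens = ["She", "arrived", "yesterday"]
pos = ["PRP", "VBD", "RB"]
roles = ["A0", "V", "AM-TMP"]
for i in range(len(tokens)):
    print(token_features(tokens, pos, roles, i))
```

In a pipeline of this kind, the "sem_role" feature is what lets a classifier disambiguate tokens that look non-temporal at the morphosyntactic level, which is consistent with the error reductions reported above.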
Resumo:
Frequently, population ecology of marine organisms uses a descriptive approach in which sizes and densities are plotted over time. This approach is of limited use for designing management strategies or modelling different scenarios. Population projection matrix models are among the most widely used tools in ecology. Unfortunately, for the majority of pelagic marine organisms it is difficult to mark individuals and follow them over time to determine their vital rates and build a population projection matrix model. Nevertheless, it is possible to obtain time-series data from which size structure and the density of each size class can be calculated, and thus to determine the matrix parameters. This approach, known as the "demographic inverse problem", is based on quadratic programming methods but has rarely been applied to aquatic organisms. We used unpublished field data on a population of the cubomedusa Carybdea marsupialis to construct a population projection matrix model and to compare two management strategies for reducing the population to pre-2008 levels, before there was significant interaction with bathers: direct removal of medusae, and reduction of prey. Our results showed that removing jellyfish from all size classes was more effective than removing only juveniles or adults. When reducing prey, the C. marsupialis population declined most efficiently when prey depletion affected the prey of all medusa sizes. Our model fits the field data well and may serve to design an efficient management strategy or to build hypothetical scenarios such as removal of individuals or prey reduction. This method is applicable to other marine or terrestrial species for which density and population structure over time are available.
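As a rough illustration of the modelling approach (not the authors' fitted model), the following Python sketch projects a three-stage population with a Lefkovitch-type matrix and compares removal applied to all size classes against removal of adults only. All vital rates and stage definitions are invented for illustration; in the paper they are estimated from size-structure time series by quadratic programming.

```python
import numpy as np

# Stage order: juvenile, subadult, adult. Entries are invented vital rates:
# survival-with-stasis on the diagonal, stage transitions below it, and
# adult fecundity in the top-right corner.
A = np.array([
    [0.20, 0.00, 4.50],
    [0.30, 0.40, 0.00],
    [0.00, 0.35, 0.50],
])

def project(A, n0, years, removal=0.0, stages=(0, 1, 2)):
    """Project the population, removing a fraction of the chosen stages each year."""
    n = n0.astype(float).copy()
    for _ in range(years):
        n = A @ n                    # one annual projection step
        for s in stages:
            n[s] *= (1.0 - removal)  # management: remove a fraction of stage s
    return n

n0 = np.array([100.0, 50.0, 20.0])
print("no removal:        ", project(A, n0, 10).round(1))
print("remove all stages: ", project(A, n0, 10, removal=0.2).round(1))
print("remove adults only:", project(A, n0, 10, removal=0.2, stages=(2,)).round(1))
```

Running a comparison of this kind across candidate strategies is what the abstract describes: the strategy that suppresses growth across all size classes outperforms one targeting a single stage, because the untouched stages keep replenishing the removed one.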
Resumo:
Notebook with paper cover containing a handwritten list of the members of the Massachusetts General Court arranged by county and town. Pearson identified characteristics of the politicians including whether they were chosen by the people or Legislature, were for or against the College, were for or against the Virginia Resolutions, and whether they were "a good Federalist."
Resumo:
The European Union's powerful legal system has proven to be the vanguard moment in the process of European integration. As early as the 1960s, the European Court of Justice established an effective and powerful supranational legal order beyond the original wording of the Treaties of Rome, through the doctrines of direct effect and supremacy. Whereas scholars have analyzed the evolution of EU case law and its implications, only very recent historical scholarship has examined how the Member States received this process amid a series of difficult political and economic crises for the integration project. This paper investigates how the national level dealt with these fundamental transformations in the European legal system. Specifically, it examines one of the Union's most important member states, the Federal Republic of Germany. Faced with a huge number of cases dealing with European law, German judges treated the supremacy of European law very cautiously, navigating between increasingly polarized academic, public, and ministerial debates on the question throughout the 1960s. By the mid-1970s, the German Constitutional Court famously limited the power of the ECJ in its Solange decision (1974). This was an expression of a broader discourse in Germany from 1968 onwards about the qualitative nature of democracy and participation in public life, and it marked, in some respects, the point at which German elites felt comfortable asserting the value of their national constitutional system on the European stage. The paper examines the political, media, and academic build-up and response to the Constitutional Court's decision in the 1970s, arguing that this national "reception" is central to understanding the dynamics and evolution of European Union legal history.
Resumo:
The idea behind the reputational measure for assessing the power of political actors is that actors involved in a decision-making process have the best view of their fellows' power. There has, however, been no systematic examination of why actors consider other actors powerful. Consequently, it is unclear whether the reputational measure captures what it is meant to. This paper analyzes the determinants of power attribution, distinguishing intended from unintended determinants in a dataset of power assessments covering 10 political decision-making processes in Switzerland. The results are reassuring overall, but nevertheless point toward self-promotion or misperception biases, as informants systematically attribute more power to actors with whom they collaborate.
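The collaboration bias reported above is easy to state operationally. The Python sketch below checks, on simulated data, whether informants attribute systematically more power to the actors they collaborate with; the network size, tie probability, and the injected bias are arbitrary assumptions, and the paper of course works with real assessments from the ten Swiss processes rather than a simulation.

```python
import random

random.seed(1)
n = 30  # hypothetical number of actors in one decision-making process
# Random collaboration ties between informants (rows) and targets (columns).
collab = [[random.random() < 0.3 for _ in range(n)] for _ in range(n)]
# Simulated power attributions on [0, 1], with a built-in +0.15 bias
# toward collaborators so the check below has something to detect.
power = [[min(1.0, random.random() + (0.15 if collab[i][j] else 0.0))
          for j in range(n)] for i in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

among = [power[i][j] for i in range(n) for j in range(n) if i != j and collab[i][j]]
others = [power[i][j] for i in range(n) for j in range(n) if i != j and not collab[i][j]]
print(f"mean power attributed to collaborators:     {mean(among):.3f}")
print(f"mean power attributed to non-collaborators: {mean(others):.3f}")
```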
Resumo:
We examine rock-magnetic, carbonate, and planktonic foraminiferal fluxes to identify climatically controlled changes of terrigenous and pelagic sedimentation at Ocean Drilling Program (ODP) Site 646 (Labrador Sea). Terrigenous sediments are brought to the site principally by bottom currents. We use a rock-magnetic parameter sensitive to changes in magnetic mineral grain size, the ratio of anhysteretic susceptibility to low-field magnetic susceptibility (χARM/χ), to monitor changes in bottom-current intensity over time, with large values of χARM/χ (finer-grained magnetic minerals) indicating weaker bottom currents. A second rock-magnetic parameter, the magnetic mineral accumulation rate (κar), is used to indicate variations in terrigenous flux. Planktonic foraminiferal and carbonate accumulation rates (Pfar and CaCO3ar) are used as indicators of pelagic flux. Absolute age assignments are based on correlation between the planktonic foraminiferal oxygen-isotope variations for Site 646 and the SPECMAP master oxygen-isotope curve. Cross-correlation analyses of the studied parameters with respect to the SPECMAP curve suggest that from oxygen-isotope stages 21 to 11, sedimentation rate, κar, χ, CaCO3ar, and Pfar were at their maximums, whereas χARM/χ was at its minimum, during peak interglacials (i.e., 0 k.y. lag time with respect to minimum ice volume). However, all parameters we examined lag behind minimum ice volume from stages 11 to 1, indicating a change in the timing of both pelagic and terrigenous fluxes at approximately 400 k.y. BP. The negative correlation coefficient between χARM/χ and the SPECMAP curve further suggests that finer-grained magnetic minerals are deposited during glacial periods, probably reflecting weaker bottom currents. The shift observed in the lag times of the parameters examined with respect to the SPECMAP record is attributed to a change in the significance of orbital parameters. Spectral results exhibit strong power in eccentricity (about 100 k.y.) throughout the record. κar, χ, CaCO3ar, and Pfar show significant power in obliquity (about 41 k.y.), whereas χARM/χ shows significant power at 73 k.y. from stages 21 to 11. The 73-k.y. period in χARM/χ is near the difference tone of obliquity and eccentricity: 1/41 − 1/102 ≈ 1/69. κar and χARM/χ show power only in eccentricity from stages 11 to 1. χ and Pfar show significant power in precession (about 18 and 22 k.y.), whereas CaCO3ar has power at 34 k.y., which could be a combination of precession and obliquity. The shift in the power of orbital parameters may be attributed to the effect of the roughly 413-k.y. signal of eccentricity.
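The lag estimates quoted above come from cross-correlating each proxy record against the SPECMAP curve. The Python sketch below shows the generic procedure on synthetic series, with a toy 100-k.y. cycle standing in for SPECMAP and an imposed 10-k.y. lag; real records would first be interpolated onto a common age grid, and the sampling interval and signal shapes here are illustrative assumptions.

```python
import numpy as np

dt = 2.0                                   # sampling interval, k.y.
t = np.arange(0, 400, dt)
reference = np.sin(2 * np.pi * t / 100.0)  # toy 100-k.y. eccentricity-like cycle
proxy = np.roll(reference, int(10 / dt))   # proxy lagging the reference by 10 k.y.

# Standardize both series, then find the lag that maximizes the correlation.
ref = (reference - reference.mean()) / reference.std()
prx = (proxy - proxy.mean()) / proxy.std()
xcorr = np.correlate(prx, ref, mode="full") / len(ref)
lags = np.arange(-len(ref) + 1, len(ref)) * dt
print(f"estimated lag: {lags[np.argmax(xcorr)]:.1f} k.y.")  # recovers ~10 k.y.
```

A lag of 0 k.y. at the correlation peak corresponds to the in-phase behavior reported for stages 21 to 11, while a positive peak lag corresponds to the delayed response found from stages 11 to 1.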
Resumo:
Report by Professor Sungjoon Cho, Associate Professor of Law, Chicago-Kent College of Law (Chair), and Charlotte Sieber-Gasser, Doctoral Research Fellow, World Trade Institute, University of Bern, Session 27, WTO Public Forum 2010: The Forces Shaping World Trade, pp. 29-33. In the course of the financial crisis, the global geography of power shifted from the G8 to the G20. The latter, although representing roughly two thirds of global trade, consists of a relatively small number of global players and consequently excludes many others from decision-making on the international stage. Nevertheless, the G20 has been successful in its reaction to the financial crisis and has thereby become an important new player within the international community. In highlighting how the G20 might interact with the WTO, the panel voiced concerns over the political legitimacy of the G20, given its limited membership and the global impact of its decisions. It agreed that although the G20 intends to extend its debates from the financial sector to the world economy in general, it has so far achieved little in this direction, particularly when it comes to moving the Doha agenda forward. It thus remains open how the G20 will evolve in the coming years, and which mandates it will shed or adopt. So far, the G20 has complemented the WTO and the international financial institutions in handling the financial crisis. Yet even if there is little evidence pointing towards a less cooperative role in the future, the desirability of a G20 commitment in WTO trade negotiations has yet to be debated. The panel concluded by offering ideas on how the potential of the G20 might be used to serve global interests even better in the future. In their concluding remarks, the panellists agreed that it remains to be seen whether the G20 will further broaden its agenda. As the financial crisis ebbs away, the question even arises whether the G20 will remain an important international forum for financial collaboration, or whether it has already served its cause and will eventually disappear from the international stage. The Chair closed the well-attended and lively panel by voicing the hope that the two international bodies – the G20 and the WTO – will work together positively in the future and face the challenges and opportunities of their collaboration to the benefit of everyone.