861 results for power and domination
Abstract:
The effects of the power and time conditions of an in situ N2 plasma treatment, applied prior to silicon nitride (SiN) passivation, were investigated on AlGaN/GaN high-electron-mobility transistors (HEMTs). These studies reveal that the N2 plasma power is a critical parameter controlling the SiN/AlGaN interface quality, which directly affects the 2-D electron gas density. A significant enhancement of the HEMT characteristics was observed when using a low-power N2 plasma pretreatment. In contrast, a marked, gradual reduction in the maximum drain-source current density (IDS max) and maximum transconductance (gm max), as well as in fT and fmax, was observed as the N2 plasma power increased (up to a 40% decrease at 210 W). Different mechanisms were proposed to be dominant depending on the discharge power range. A good correlation was observed between the device electrical characteristics and the surface assessment by atomic force microscopy and Kelvin force microscopy.
Abstract:
Electrodynamic tethered systems, in which an exposed portion of the conducting tether itself collects electrons from the ionosphere, promise to attain currents of 10 A or more in low Earth orbit. For the first time, another desirable feature of such bare-tether systems is reported and analyzed in detail: collection by a bare tether is relatively insensitive to the variations in electron density that are regularly encountered on each revolution of an orbit. This self-adjusting property of bare-tether systems occurs because the electron-collecting area on the tether is not fixed, but extends along its positively biased portion, and because the current varies as the collecting length raised to a power greater than unity. It is shown how this adjustment to density variations follows from the basic collection law of thin cylinders. The effect of variations in the motionally induced tether voltage is also analyzed. Both power and thruster modes are considered. The performance of bare-tether systems is compared with that of tethered systems using passive spherical collectors of fixed area, taking recent experimental results into consideration. Calculations taking into account motional voltage and plasma density around a realistic orbit are also presented for bare-tether systems suitable for space station applications.
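As a rough illustration of this self-adjusting property, the Python sketch below is a toy model with assumed tether dimensions, motional field and load (not the paper's parameters). It integrates the orbital-motion-limited collection law for a thin cylinder, I'(V) ≈ 2·R·e·N_e·√(2eV/m_e), along a linearly biased segment, which gives I ∝ N_e·L_B^(3/2), and solves a simple loop equation for the biased length L_B: when the density drops, L_B grows, so the collected current falls much more slowly than the density itself.

```python
# Toy model of bare-tether current self-adjustment (illustrative only;
# geometry, motional field and circuit values are assumptions, not the paper's).
import numpy as np
from scipy.optimize import brentq

E_CHG, M_E = 1.602e-19, 9.109e-31   # electron charge [C] and mass [kg]
R_T  = 0.4e-3                        # tether radius [m] (assumed)
L    = 20e3                          # tether length [m] (assumed)
E_M  = 0.17                          # motional field (v x B).u_t [V/m] (assumed)
R_LD = 100.0                         # load resistance [ohm] (assumed)

def collected_current(L_B, n_e):
    """OML current integrated over the biased length L_B.
    With bias profile Phi(x) = E_M*x (ohmic drop neglected),
    I = 2 R e n_e sqrt(2 e E_M / m_e) * (2/3) * L_B**1.5  ~  L_B^(3/2)."""
    return 2 * R_T * E_CHG * n_e * np.sqrt(2 * E_CHG * E_M / M_E) * (2.0 / 3.0) * L_B**1.5

def solve_operating_point(n_e):
    """Find L_B from the loop equation E_M*L = E_M*L_B + I(L_B)*R_LD."""
    f = lambda L_B: E_M * L - E_M * L_B - collected_current(L_B, n_e) * R_LD
    L_B = brentq(f, 1.0, L)
    return L_B, collected_current(L_B, n_e)

for n_e in (1e12, 0.5e12, 0.25e12):  # ionospheric densities [m^-3] (assumed)
    L_B, I = solve_operating_point(n_e)
    # Density falls by 4x overall, yet the current falls by only about 2x,
    # because the biased (collecting) length stretches to compensate.
    print(f"n_e = {n_e:.1e} m^-3 -> biased length {L_B/1e3:5.1f} km, current {I:5.2f} A")
```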
Abstract:
Over the last few years, the Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized two international benchmarks based on NUPEC data: the OECD/NRC BWR Full-Size Fine-Mesh Bundle Test (BFBT) Benchmark and the OECD/NRC PWR Sub-Channel and Bundle Test (PSBT) Benchmark. The benchmark activities have been conducted in cooperation with the Nuclear Energy Agency of the Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) Organization. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known sub-channel code COBRA-TF (Coolant Boiling in Rod Array-Two Fluid), namely CTF, to the steady-state critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks. The goal is twofold: first, to assess the CTF critical power and DNB models and examine their strengths and weaknesses; and second, to identify areas for improvement.
Abstract:
The heating produced by the absorption of radiofrequency (RF) energy has been considered a secondary, undesirable effect of MRI procedures. In this work, we measured the power absorbed by distilled water, glycerol and egg albumin during NMR and non-NMR experiments. The samples are dielectrics and serve as examples of different biological materials. All samples were irradiated with the same RF pulse sequence, while the magnetic field strength was varied between experiments. The measurements show a smooth increase in thermal power as the magnetic field grows, due to the magnetoresistive effect in the copper antenna (a coil around the probe), which directly heats the sample. However, when the magnetic field was adequate for NMR to take place, anomalies in the expected thermal powers were observed: the thermal power was higher for water and glycerol, and lower for albumin. An ANOVA test demonstrated that the observed differences between the measured and expected powers are significant.
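A minimal sketch of the kind of significance test described above, using SciPy's one-way ANOVA. The residuals here are synthetic numbers invented purely for illustration; the paper's actual measurements are not reproduced.

```python
# Sketch of an ANOVA check on measured-minus-expected thermal power
# (synthetic residuals, assumed units; illustration only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Residuals: measured minus expected thermal power [mW] per sample (assumed)
water    = rng.normal(+1.2, 0.4, 10)   # higher than expected at resonance
glycerol = rng.normal(+0.8, 0.4, 10)   # higher than expected at resonance
albumin  = rng.normal(-0.9, 0.4, 10)   # lower than expected at resonance

F, p = stats.f_oneway(water, glycerol, albumin)
print(f"one-way ANOVA: F = {F:.1f}, p = {p:.2e}")  # small p -> differences significant
```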
Abstract:
Short-term variability in the power generated by large grid-connected photovoltaic (PV) plants can negatively affect power quality and network reliability. New grid codes require combining the PV generator with some form of energy storage technology in order to reduce short-term PV power fluctuations. This paper proposes an effective method to calculate, for any PV plant size and maximum allowable ramp rate, both the maximum power and the minimum energy storage requirements. The general validity of this method is corroborated with extensive simulation exercises performed with one year of real 5-s data from the 500 kW inverters of the 38.5 MW Amaraleja (Portugal) PV plant and from two other PV plants located in Navarra (Spain), more than 660 km from Amaraleja.
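A minimal sketch of the underlying idea, under assumptions of my own (a simple clamp-based ramp limiter, a synthetic 5-s profile, and worst-case sizing; this is not the paper's method or data): the battery absorbs whatever the PV output does beyond the allowed ramp, and the storage requirements follow from the worst power and energy excursions observed.

```python
# Toy ramp-rate limiter and storage sizing (illustration only).
import numpy as np

def storage_requirements(p_pv, p_rated, r_max_pct_per_min, dt_s=5.0):
    """p_pv: PV power series [W]; r_max_pct_per_min: max ramp, % of rated per minute."""
    dp_max = p_rated * (r_max_pct_per_min / 100.0) * (dt_s / 60.0)  # max step [W]
    p_grid = np.empty_like(p_pv)
    p_grid[0] = p_pv[0]
    for i in range(1, len(p_pv)):
        # Grid injection follows PV but never ramps faster than dp_max per step
        p_grid[i] = p_grid[i - 1] + np.clip(p_pv[i] - p_grid[i - 1], -dp_max, dp_max)
    p_batt = p_grid - p_pv                       # battery power (+ = discharging) [W]
    e_batt = np.cumsum(p_batt) * dt_s / 3600.0   # battery energy profile [Wh]
    return np.abs(p_batt).max(), e_batt.max() - e_batt.min()

# Synthetic cloudy-hour profile at 5-s resolution (assumed, for demonstration)
t = np.arange(0, 3600, 5)
p_pv = 500e3 * (0.6 + 0.4 * (np.sin(t / 600) > 0))   # crude step fluctuations
p_max, e_span = storage_requirements(p_pv, p_rated=500e3, r_max_pct_per_min=10)
print(f"worst-case battery power: {p_max/1e3:.0f} kW, energy span: {e_span:.0f} Wh")
```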
Abstract:
Fully integrated semiconductor master-oscillator power-amplifiers (MOPAs) with a tapered power amplifier are attractive sources for applications requiring high brightness. The geometrical design of the tapered amplifier is crucial to achieving the required power and beam quality. In this work we investigate, by numerical simulation, the role of the geometrical design in the beam quality and in the maximum achievable power. The simulations were performed with a quasi-3D model that solves the complete steady-state semiconductor and thermal equations combined with a beam propagation method. The results indicate that large devices with wide taper angles produce higher power with better beam quality than smaller-area designs, but at the expense of a higher injection current and lower conversion efficiency.
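For context, here is a minimal 1-D split-step Fourier beam-propagation sketch, the kind of numerical kernel on which quasi-3D models of this type are built. It propagates a free-space Gaussian only, with assumed wavelength, effective index and waist, and omits the gain, thermal and carrier effects the cited model includes.

```python
# Minimal 1-D split-step (FFT) beam propagation, free space only (illustration).
import numpy as np

lam, n_idx = 975e-9, 3.3                 # wavelength [m], effective index (assumed)
k0 = 2 * np.pi / lam
x = np.linspace(-200e-6, 200e-6, 1024)   # lateral grid across the taper [m]
dx = x[1] - x[0]
kx = 2 * np.pi * np.fft.fftfreq(x.size, dx)

field = np.exp(-(x / 10e-6) ** 2).astype(complex)  # input ridge mode (assumed waist)
dz = 2e-6
for _ in range(500):                               # propagate 1 mm in 2-um steps
    # Paraxial diffraction step applied in the spatial-frequency domain
    field = np.fft.ifft(np.fft.fft(field) * np.exp(-1j * kx**2 * dz / (2 * k0 * n_idx)))

intensity = np.abs(field) ** 2
width = np.sqrt(np.sum(intensity * x**2) / np.sum(intensity))
print(f"RMS half-width after 1 mm: {width*1e6:.1f} um (diffractive spreading)")
```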
Abstract:
Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24 hours a day and 365 days a year. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational demands of next-generation applications, together with the increasing demand for resources in traditional applications, has driven the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid and dramatic increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38 GW, and a further rise of 17% to 43 GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions.

This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to making data centers more energy-efficient. This work develops energy models and uses knowledge about the energy demand of the workload to be executed, and about the computational and cooling resources available at the data center, to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation. Because of the cubic relation between fan power and fan speed, solutions based on over-provisioning cold air to the servers usually lead to energy inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Workload characteristics, as well as allocation policies, also have an important impact on these leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective.

When scaling to the data center level, a similar behavior can be observed in terms of leakage-temperature tradeoffs. As room temperature rises, the efficiency of the data room cooling units improves; however, CPU temperatures also rise, and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns, due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, such as the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing the energy cost of the overall system. The third main contribution is the energy optimization of the overall application, achieved by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD thesis makes contributions to leakage- and cooling-aware server modeling and optimization, to data center thermal modeling and heterogeneity-aware resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
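The leakage-cooling tradeoff described above can be made concrete with a toy model: fan power grows cubically with speed, while leakage grows exponentially with the chip temperature that slower fans allow, so total server power has an interior minimum. All coefficients below are illustrative assumptions, not the thesis's fitted models.

```python
# Toy server-level leakage-vs-cooling tradeoff (assumed coefficients).
import numpy as np

def total_power(fan_rpm, p_dynamic=150.0):
    p_fan = 2e-10 * fan_rpm**3                    # cubic fan law [W] (assumed constant)
    # Higher airflow -> lower CPU temperature (crude linear proxy, assumed)
    t_cpu = 90.0 - 25.0 * (fan_rpm - 2000) / 8000
    p_leak = 20.0 * np.exp(0.05 * (t_cpu - 50))   # exponential leakage [W] (assumed)
    return p_dynamic + p_fan + p_leak, t_cpu

rpms = np.linspace(2000, 10000, 81)
powers = np.array([total_power(r)[0] for r in rpms])
best = rpms[powers.argmin()]
print(f"minimum total power {powers.min():.1f} W at ~{best:.0f} RPM")
# Both extremes lose: slow fans -> hot chip -> leakage; fast fans -> cubic fan cost.
```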
Abstract:
This thesis is developed within the framework of satellite communications, in the innovative field of small satellites, also known as nanosatellites (<10 kg) or CubeSats, so called because of their cubic form. These nanosatellites are characterized by their low cost, since they use commercial off-the-shelf (COTS) components, and by their small size and mass, such as 1U CubeSats (10 cm × 10 cm × 10 cm) with a mass of approximately 1 kg. This thesis builds on an initiative proposed by its author to put into orbit the first Peruvian satellite, Chasqui I, which was successfully launched into orbit from the International Space Station in 2014. The experience of that work led me to propose a constellation of small satellites named Waposat to provide worldwide monitoring of water quality sensors, the scenario used in this thesis. In this scenario, and given the limited power and data rate of nanosatellites, I propose to investigate a new communications architecture that optimally solves the problems posed by nanosatellites in low Earth orbit (LEO), whose communications are disruptive by nature, putting emphasis on the link and application layers.

This thesis presents and evaluates a new communications architecture for providing service to terrestrial sensor networks using a space Delay/Disruption Tolerant Networking (DTN) based solution. In addition, I propose a new multiple access protocol based on extended unslotted ALOHA that takes into account the priority of gateway traffic, which we call ALOHA multiple access with gateway priority (ALOHAGP), with an adaptive contention mechanism. It uses satellite feedback to implement congestion control and to dynamically adapt the effective channel throughput in an optimal way. We assume a finite sensor population model and a saturated traffic condition in which every sensor always has frames to transmit. Performance was evaluated in terms of effective throughput, delay and system fairness. In addition, a DTN convergence layer (ALOHAGP-CL) has been defined as a subset of the standard TCP-CL (Transmission Control Protocol-Convergence Layer). This thesis shows that ALOHAGP/CL adequately supports the proposed DTN scenario, especially when reactive fragmentation is used. Finally, this thesis investigates the optimal transfer of DTN messages (bundles) using proactive fragmentation strategies to serve a ground sensor network over a nanosatellite communications link that uses the multiple access mechanism with downlink traffic priority (ALOHAGP). The effective throughput has been optimized by adapting the protocol parameters as a function of the current number of active sensors reported by the satellite. Also, there is currently no method for advertising or negotiating the maximum size of a bundle that can be accepted by a DTN bundle agent in satellite communications for storage and delivery, so bundles that are too large are dropped, while bundles that are too small are inefficient. We have characterized this kind of scenario by obtaining a probability distribution of frame arrivals at the nanosatellite, as well as a probability distribution of the nanosatellite's visibility time, which together provide an optimal proactive fragmentation of DTN bundles. We have found that the effective throughput (goodput) of proactive fragmentation reaches a value only slightly lower than that of the reactive approach. This contribution makes it possible to use proactive fragmentation optimally, with all its advantages, such as supporting the DTN security model and being simple to implement on hardware with severe CPU and memory limitations. The implementation of these contributions was initially envisaged as part of the payload of the QBito nanosatellite, which belongs to the constellation of 50 nanosatellites being developed within the QB50 project.
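A minimal sketch of why the protocol parameters must track the number of active sensors, using the textbook pure (unslotted) ALOHA throughput S = G·e^(-2G). This is an idealized Poisson analysis only; ALOHAGP's gateway priority and satellite-feedback mechanisms are not modeled here.

```python
# Why the per-sensor attempt rate must adapt to the active population
# (idealized pure-ALOHA analysis; illustration only).
import numpy as np

def goodput(n_active, p_tx):
    """Pure-ALOHA throughput approximation for n sensors each attempting with
    probability p_tx per frame time: S = G * exp(-2G), with offered load G = n * p_tx."""
    G = n_active * p_tx
    return G * np.exp(-2.0 * G)

for n in (10, 50, 200):
    p_grid = np.linspace(1e-4, 0.2, 2000)
    s = goodput(n, p_grid)
    p_best = p_grid[s.argmax()]
    # The optimum sits at G = n*p = 0.5, i.e. p ~ 0.5/n: the attempt rate
    # must track n, which is the quantity the satellite feedback reports.
    print(f"n = {n:3d}: best p_tx = {p_best:.4f} (0.5/n = {0.5/n:.4f}), S = {s.max():.3f}")
```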
Abstract:
Using allozymes and mtDNA sequences from the cytochrome b gene, we report that the brown kiwi has the highest levels of genetic structuring observed in birds. Moreover, the mtDNA sequences are, with two minor exceptions, diagnostic genetic markers for each population investigated, even though they are among the more slowly evolving coding regions in this genome. A major unexpected finding was the concordant split in molecular phylogenies between brown kiwis in the southern South Island and elsewhere in New Zealand. This basic phylogeographic boundary halfway down the South Island coincides with a fixed allele difference in the Hb nuclear locus and strongly suggests that two morphologically cryptic species are currently merged under one polytypic species. This is another striking example of how molecular genetic assays can detect phylogenetic discontinuities that are not reflected in traditional morphologically based taxonomies. However, reanalysis of the morphological characters by using phylogenetic methods revealed that the reason for this discordance is that most are primitive and thus are phylogenetically uninformative. Shared-derived morphological characters support the same relationships evident in the molecular phylogenies and, in concert with the molecular data, suggest that as brown kiwis colonized northward from the southern South Island, they retained many primitive characters that confounded earlier systematists. Strong subdivided population structure and cryptic species in brown kiwis seem to have evolved relatively recently as a consequence of Pleistocene range disjunctions, low dispersal power, and genetic drift in small populations.
Abstract:
This research takes as its theme the discourses that circulate within and structure the work environment in companies. Broadly, it seeks to understand the managerial logic that, in the limit, steers workers' subjectivity. Our objectives are: 1) to identify how the twentieth-century schools of management thought demand workers' engagement with their work; 2) to discuss a possible characterization of the discourse and uses of management, as a field of knowledge about work, within organizations. To that end, we ground our theoretical-methodological framework in Michel Foucault's central ideas on the functioning of power and its apparatuses. We propose an archaeogenealogical perspective for studying work through the logic of management and engagement. To structure this analysis, we selected courses offered in the undergraduate programs of the Escola de Administração de Empresas de São Paulo of Fundação Getúlio Vargas (EAESP-FGV) and the business administration program of the Faculdade de Economia, Administração e Contabilidade of Universidade de São Paulo (FEA-USP), examining their syllabi and reading lists. From this contact with managerial discursivity we were able to formulate some reflections. Business administration is the science of demonstration, grounded in the domination of reality by a purported efficiency: it is what becomes productive of reality, or what is produced in it, that becomes scientific. In the records investigated here there is no questioning of the discourse of efficiency and best practices, nor any treatment of organizations as historically produced constructions rather than mere producers of determinations. There seems to be no room to denaturalize the company. Management appears to rest on a doubling of domination, as the profession that is itself molded as it molds work. We also analyze what is called engagement in order to understand what is required of workers beyond the employment contract and their material productivity. In this sense, work becomes an experience of education under continuous, uninterrupted pedagogical surveillance. The moral effect of the techniques and tests of the currents of management thought enables a technical steering of workers' ways of living. Management thus proposes the refinement of the care of the self at work and for work. The disciplines are then doubled: from the outside, through rules grounded in the formal structure of factory/company authority and the government of bodies; and from the inside, through subjection by identity and the stylization of life.
Abstract:
This dissertation project explored professionalism and the performance of identities by examining Taiwanese commercial airline pilots' discursive practices in everyday life. The intentions for this project were not only to expand current knowledge of organizational communication from a critical rhetorical perspective, but also to further explore the under-appreciated concept of professionalism among organizational members. Theoretically, I traced theoretical analysis in the sociology of professions and further investigated scholarship from identity research in organizational communication studies. This research agenda helped to advance communication-based understandings of the meanings and practices of professional identity as a complement to the sociological conception. I further merged a performance paradigm and a critical rhetorical perspective to examine the discursive practices of organizational members and to challenge the bias of traditional textual approaches. Methodologically, I conducted ethnographic interviews with Taiwanese commercial airline pilots in order to understand how they construct their personal, social, and professional identities. Five narrative themes were identified and demonstrated in this project: (1) it takes a lot to become a commercial airline pilot; (2) being a professional commercial airline pilot means building up sufficient knowledge, above-average skill, and the correct attitude; (3) pilots resist and dissent toward company management; (4) popular (re)presentation influences professionalism; (5) power and fear affect professionalism. Pilots' personal narratives were presented in performative writing and in poetic transcription to make the words come alive with sounds that convey their meanings. Their personal storytelling created a dialogic space that not only let pilots' voices be heard but also revealed how identities are created within and against a larger organizational identity. Overall, this project demonstrated the interdisciplinary examination of the meanings, functions, and consequences of discursive practices in everyday professional life. It also critiqued relationships between power, domination, and resistance while reintroducing the roles of the body and materiality in the domain of professionalism, and provided ethical readings of larger and complex organizational cultures. Applying communication-oriented analysis to the study of professionalism challenged the long-neglected phenomena regarding the power of the symbolic in sociological approaches and raised awareness of the structural, material, and bodily conditions of work.
Abstract:
ALICE is one of the four major experiments at the LHC particle accelerator installed in the European laboratory CERN. The management committee of the LHC accelerator has recently approved an upgrade program for this experiment. Among the upgrades planned for the coming years of the ALICE experiment are improving the resolution and tracking efficiency while maintaining the excellent particle identification ability, and increasing the read-out event rate to 100 kHz. To achieve this, it is necessary to upgrade the Time Projection Chamber (TPC) and Muon Tracking (MCH) detectors by modifying their read-out electronics, which is not suitable for this migration. To overcome this limitation, the design, fabrication and experimental testing of a new ASIC named SAMPA has been proposed. This ASIC will support both positive and negative polarities, with 32 channels per chip and continuous data read-out, at lower power consumption than the previous versions. This work covers the design, fabrication and experimental testing of a read-out front-end in 130 nm CMOS technology with configurable polarity (positive/negative), peaking time and sensitivity. The new SAMPA ASIC can be used in both chambers (TPC and MCH). The proposed front-end is composed of a Charge Sensitive Amplifier (CSA) and a semi-Gaussian shaper. To integrate 32 channels per chip, the front-end design requires small area and low power consumption, but at the same time low noise. In this sense, a new noise and PSRR (Power Supply Rejection Ratio) improvement technique for the CSA design, with no power or area penalty, is proposed in this work. The analysis and equations of the proposed circuit are presented, and were verified by electrical simulations and by experimental tests of a fabricated chip with 5 channels of the designed front-end. The measured equivalent noise charge was <550 e− for a sensitivity of 30 mV/fC at an input capacitance of 18.5 pF. The total core area of the front-end was 2300 µm × 150 µm, and the measured total power consumption was 9.1 mW per channel.
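For reference, here is a minimal sketch of the classic CSA followed by an order-n semi-Gaussian (CR-RC^n) shaper response that such a front-end implements. The feedback capacitance, peaking time and input charge below are assumptions for illustration, not the SAMPA design values.

```python
# Textbook CSA + semi-Gaussian shaper pulse (assumed parameters; illustration).
import numpy as np

def shaper_output(t, q_in=1.602e-19 * 3e4, c_f=0.05e-12, tau=160e-9, n=4):
    """Ideal CSA step (q/c_f) followed by an order-n semi-Gaussian shaper,
    normalized so the peak equals q/c_f and occurs at t = tau:
    v(t) = (q/c_f) * (t/tau)^n * exp(n * (1 - t/tau))."""
    x = np.clip(t / tau, 0, None)
    return (q_in / c_f) * x**n * np.exp(n * (1.0 - x))

t = np.linspace(0, 2e-6, 1000)
v = shaper_output(t)
t_peak = t[v.argmax()]
print(f"peak {v.max()*1e3:.1f} mV at t = {t_peak*1e9:.0f} ns "
      f"(peaking time ~ tau = 160 ns in this parameterization)")
```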
Abstract:
This Article advances a new capital framework for understanding the bargain between large law firms and their lawyers, depicting BigLaw relationships not as basic labor-salary exchanges but rather as complex transactions in which large law firms and their lawyers exchange labor and various forms of capital — social, cultural, and identity. First, it builds on the work of Pierre Bourdieu regarding economic, cultural, symbolic, and social capital by examining the concepts of positive and negative capital, exploring the meaning of capital ownership by entities, and developing the notion of identity capital — the value individuals and institutions derive from their identities. Then, the Article advances a capital theory of BigLaw, in which large law firms and their lawyers engage in complex transactions trading labor, social, cultural, and identity capital for economic, social, cultural, and identity capital. Capital analysis sheds new light on the well-documented and troubling underrepresentation of diverse lawyers at BigLaw. It shows that the underrepresentation of women and minority lawyers is not solely the result of exogenous forces outside the control of large law firms such as implicit bias, but rather the outcome of the very exchanges in which BigLaw and its lawyers engage. Specifically, large law firms take into account the capital endowments of their lawyers in making hiring, retention and promotion decisions, and derive value from their lawyers’ capital, for example, by trading on the identity of women and minority lawyers in marketing themselves as being diverse and inclusive to clients and potential recruits. Yet, while BigLaw trades for the identity capital of women and minority lawyers, it fails to offer them opportunities in return to acquire the social and cultural capital necessary for attaining positions of power, resulting in underrepresentation. Moreover, these labor-capital exchanges are often implicit and made by uninformed participants, and therefore unjust. Exactly because the capital framework describes the underrepresentation of diverse lawyers at BigLaw as an endogenous outcome within the control of BigLaw and its lawyers, however, it is a cautiously optimistic model that offers hope for greater representation of diverse lawyers in positions of power and influence. The Article suggests policies and procedures BigLaw can and should adopt to improve the quality of the exchanges it offers to women and minority attorneys and to reduce the underrepresentation of diverse lawyers within its ranks. Employing the concepts of capital transparency, capital boundary, and capital infrastructure, it demonstrates how BigLaw can (1) explicitly recognize the roles social, cultural, and identity capital play in its hiring, retention and promotion apparatuses and (2) revise its policies and procedures to ensure that all of its lawyers have equal opportunities to develop the requisite capital and compete on equal and fair terms for positions of power and influence.
Abstract:
This dissertation uses a political ecology approach to examine the relationship between tourism development and groundwater in southwest Nicaragua. Tourism in Nicaragua is a booming industry bolstered by ‘unspoiled’ natural beauty, low crime rates, and government incentives. This growth has led to increased infrastructure, revenue, and employment opportunities for many local communities along the Pacific coast. Not surprisingly, it has also brought concomitant issues of deeper poverty, widening gaps between rich and poor, and competition over natural resources. Adequate provisions of freshwater are necessary to sustain the production and reproduction of tourism; however, it remains uncertain if groundwater supplies can keep pace with demand. The objective of this research is to assess water supply availability amidst tourism development in the Playa Gigante area. It addresses the questions: 1) are local groundwater supplies sufficient to sustain the demand for freshwater imposed by increased tourism development? and 2) is there a power relationship between tourism development and control over local freshwater that would prove inequitable to local populations? Integrating the findings of groundwater monitoring, geological mapping, and ethnographic and survey research from a representative stretch of Pacific coastline, this dissertation shows that diminishing recharge and increased groundwater consumption is creating conflict between stakeholders with various levels of knowledge, power, and access. Although national laws are structured to protect the environment and ensure equitable access to groundwater, the current scramble to secure water has powerful implications on social relations and power structures associated with tourism development. This dissertation concludes that marginalization due to environmental degradation is attributable to the nexus of a political promotion of tourism, poorly enforced state water policies, insufficient water research, and climate change. Greater technical attention to hydrological dynamics and collaboration amongst stakeholders are necessary for equitable access to groundwater, environmental sustainability, and profitability of tourism.