72 results for Double Layer Capacitance
Abstract:
We characterize double adjunctions in terms of presheaves and universal squares, and then apply these characterizations to free monads and Eilenberg-Moore objects in double categories. We improve upon an earlier result of Fiore-Gambino-Kock in [7] to conclude: if a double category with cofolding admits the construction of free monads in its horizontal 2-category, then it also admits the construction of free monads as a double category horizontally and vertically, and also in its vertical 2-category. We also prove that a double category admits Eilenberg-Moore objects if and only if a certain parameterized presheaf is representable. Along the way, we develop parameterized presheaves on double categories and prove a double Yoneda Lemma.
Abstract:
The work in this paper deals with the development of momentum and thermal boundary layers when a power law fluid flows over a flat plate. At the plate we impose either constant temperature, constant flux, or a Newton cooling condition. The problem is analysed using similarity solutions, integral momentum and energy equations, and an approximation technique which is a form of the Heat Balance Integral Method. The fluid properties are assumed to be independent of temperature; hence the momentum equation uncouples from the thermal problem. We first derive the similarity equations for the velocity and present exact solutions for the case where the power law index n = 2. The similarity solutions are used to validate the new approximation method. This new technique is then applied to the thermal boundary layer, where a similarity solution can only be obtained for the case n = 1.
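For the Newtonian case n = 1, the velocity similarity problem reduces to the classical Blasius equation f''' + (1/2) f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1. A minimal shooting-method sketch (plain RK4 plus bisection; step size, domain length, and bracket are illustrative choices, not taken from the paper) is:

```python
def blasius_deriv(y):
    """Blasius equation f''' = -0.5 f f'' as a first-order system."""
    f, fp, fpp = y
    return (fp, fpp, -0.5 * f * fpp)

def integrate(fpp0, eta_max=10.0, h=0.01):
    """RK4 march from eta = 0 with guessed wall shear f''(0) = fpp0."""
    y = (0.0, 0.0, fpp0)
    for _ in range(int(eta_max / h)):
        k1 = blasius_deriv(y)
        k2 = blasius_deriv(tuple(y[i] + 0.5 * h * k1[i] for i in range(3)))
        k3 = blasius_deriv(tuple(y[i] + 0.5 * h * k2[i] for i in range(3)))
        k4 = blasius_deriv(tuple(y[i] + h * k3[i] for i in range(3)))
        y = tuple(y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return y

# Bisect on f''(0) so that f'(eta_max) -> 1 (f'(inf) grows monotonically
# with the wall-shear guess, so bisection brackets the root).
lo, hi = 0.1, 1.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if integrate(mid)[1] < 1.0:
        lo = mid
    else:
        hi = mid
print(f"f''(0) = {0.5 * (lo + hi):.4f}")  # classical value is about 0.3321
```

The same shooting structure carries over to the power-law momentum equation; only the right-hand side of the ODE system changes.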
Abstract:
In this project, the commissioning of a commercial ALD system for obtaining nanometre-scale alumina thin films using water vapour and TMA as precursors has been studied. To verify the soundness of the experimental recipes supplied by the manufacturer, as well as to check some aspects of ALD theory, a series of samples was produced by varying the experimental parameters, mainly the deposition temperature, the number of cycles, the cycle duration, and the type of substrate. To determine the nanometric thicknesses of the films, and hence the growth rates, ellipsometry was used, one of the few non-destructive techniques capable of measuring film or interface thicknesses of a few angstroms or nanometres with great precision. In a first stage, the experimental values supplied by the manufacturer of the ALD system were used to determine the growth rate as a function of deposition temperature and of the number of cycles, in both cases on several substrates. It was shown that the growth rate increases slightly with deposition temperature, although the variation is small, on the order of 12% for a 70 °C change in deposition temperature. Likewise, the linearity of the thickness with the number of cycles was demonstrated, although exact proportionality is not observed. In a second stage, the experimental parameters were optimized, essentially the purge times between pulses, in order to considerably reduce the duration of experiments carried out at relatively low temperatures. In this case it was verified that the growth rates were maintained, with differences of 3.6%, 4.8% and 5.5% when optimizing cycles of 6.65 h, 8.31 h and 8.33 h, respectively. Moreover, for one of these conditions it was shown that the high conformality of the alumina films was maintained. In addition, a study of the thickness homogeneity of the films over the entire deposition zone of the ALD reactor was carried out. It was shown that the thickness variation of films deposited at 120 °C is at most 6.2% over an area of 110 cm², confirming the exceptional thickness control of the ALD technique.
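The linearity and uniformity figures in this abstract amount to simple arithmetic on growth per cycle (GPC) and thickness spread; the numbers below are hypothetical stand-ins, not the study's data:

```python
# Hypothetical growth per cycle for TMA/H2O ALD of alumina:
gpc = 1.1  # angstrom per cycle

# Ideal ALD: film thickness scales linearly with the number of cycles.
for cycles in (100, 300, 500):
    print(f"{cycles} cycles -> {gpc * cycles:.0f} A")

# Thickness uniformity over the deposition area, expressed as the
# max-min spread relative to the maximum thickness (hypothetical values):
t_max, t_min = 53.0, 49.7  # angstrom
variation_pct = (t_max - t_min) / t_max * 100
print(f"uniformity spread: {variation_pct:.1f}%")
```

A spread of a few percent over more than 100 cm², as reported above, is what makes ellipsometry mapping across the reactor zone a meaningful check.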
Abstract:
Precision of released figures is not only an important quality feature of official statistics, it is also essential for a good understanding of the data. In this paper we show a case study of how precision could be conveyed if the multivariate nature of the data has to be taken into account. In the official release of the Swiss earnings structure survey, the total salary is broken down into several wage components. We follow Aitchison's approach for the analysis of compositional data, which is based on logratios of components. We first present different multivariate analyses of the compositional data whereby the wage components are broken down by economic activity classes. Then we propose a number of ways to assess precision.
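Aitchison's logratio approach can be sketched with the centered logratio (clr) transform, which maps a composition into unconstrained coordinates; the wage-component shares below are hypothetical, for illustration only:

```python
import math

def clr(x):
    """Centered logratio transform of a composition (Aitchison):
    log of each part relative to the geometric mean of all parts."""
    g = math.exp(sum(math.log(v) for v in x) / len(x))  # geometric mean
    return [math.log(v / g) for v in x]

# Hypothetical wage composition: base salary, bonus, allowances
# (shares of the total salary, summing to 1).
comp = [0.80, 0.15, 0.05]
z = clr(comp)
print(z)
print(sum(z))  # clr coordinates always sum to (numerically) zero
```

Multivariate analyses and variance estimates are then carried out on the clr (or other logratio) coordinates rather than on the raw shares, which avoids the spurious correlation induced by the unit-sum constraint.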
Abstract:
In this paper, different recovery methods applied at different network layers and time scales are used in order to enhance network reliability. Each layer deploys its own fault management methods; however, current recovery methods are applied only to a specific layer. New protection schemes, based on the proposed partial disjoint path algorithm, are defined in order to avoid protection duplications in a multi-layer scenario. The new protection schemes also encompass shared segment backup computation and shared risk link group identification. A complete set of experiments proves the efficiency of the proposed methods relative to previous ones, in terms of the resources used to protect the network, the failure recovery time, and the request rejection ratio.
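The paper's partial disjoint path algorithm is not reproduced here, but its basic building block, computing a backup path that avoids the working path's links, can be sketched with two Dijkstra runs; the toy topology and weights below are hypothetical:

```python
import heapq

def dijkstra(adj, s, t, banned=frozenset()):
    """Shortest path s -> t, skipping any (undirected) edge in `banned`."""
    dist, prev = {s: 0}, {}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            if (u, v) in banned or (v, u) in banned:
                continue
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if t not in dist:
        return None
    path = [t]
    while path[-1] != s:
        path.append(prev[path[-1]])
    return path[::-1]

adj = {  # toy topology: node -> list of (neighbor, weight)
    'A': [('B', 1), ('C', 2)],
    'B': [('A', 1), ('D', 1)],
    'C': [('A', 2), ('D', 2)],
    'D': [('B', 1), ('C', 2)],
}
working = dijkstra(adj, 'A', 'D')
used = frozenset(zip(working, working[1:]))       # links of the working path
backup = dijkstra(adj, 'A', 'D', banned=used)     # link-disjoint backup
print(working, backup)
```

A *partial* disjoint scheme would ban only the links left unprotected by the lower layer rather than the whole working path, which is what lets it avoid duplicating protection across layers.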
Abstract:
This paper focuses on QoS routing with protection in an MPLS network over an optical layer. In this multi-layer scenario, each layer deploys its own fault management methods. A partially protected optical layer is proposed, with the rest of the network protected at the MPLS layer. New protection schemes that avoid protection duplications are proposed. Moreover, this paper also introduces a new traffic classification based on the level of reliability. The failure impact is evaluated in terms of recovery time depending on the traffic class. The proposed schemes also include a novel variation of minimum interference routing and shared segment backup computation. A complete set of experiments proves that the proposed schemes are more efficient than previous ones in terms of the resources used to protect the network, the failure impact, and the request rejection ratio.
Abstract:
Technological limitations and power constraints are resulting in high-performance parallel computing architectures based on large numbers of high-core-count processors. Commercially available processors are now at 8 and 16 cores, and experimental platforms, such as the many-core Intel Single-chip Cloud Computer (SCC) platform, provide much higher core counts. These trends present new sets of challenges to HPC applications, including programming complexity and the need for extreme energy efficiency. In this work, we first investigate the power behavior of scientific PGAS application kernels on the SCC platform, and explore opportunities and challenges for power management within the PGAS framework. Results obtained via empirical evaluation of Unified Parallel C (UPC) applications on the SCC platform under different constraints show that, for specific operations, the potential for energy savings in PGAS is large, and that power/performance trade-offs can be effectively managed using a cross-layer approach. We investigate cross-layer power management using PGAS language extensions and runtime mechanisms that manipulate power/performance trade-offs. Specifically, we present the design, implementation and evaluation of such a middleware for application-aware cross-layer power management of UPC applications on the SCC platform. Finally, based on our observations, we provide a set of recommendations and insights that can be used to support similar power management for PGAS applications on other many-core platforms.
Abstract:
Geometries, vibrational frequencies, and interaction energies of the CNH⋯O3 and HCCH⋯O3 complexes are calculated on a counterpoise-corrected (CP-corrected) potential-energy surface (PES) that corrects for the basis set superposition error (BSSE). Ab initio calculations are performed at the Hartree-Fock (HF) and second-order Møller-Plesset (MP2) levels, using the 6-31G(d,p) and D95++(d,p) basis sets. Interaction energies are presented including corrections for zero-point vibrational energy (ZPVE) and the thermal correction to enthalpy at 298 K. The CP-corrected and conventional PES are compared; the uncorrected PES obtained using the larger basis set including diffuse functions exhibits a double-well shape, whereas use of the 6-31G(d,p) basis set leads to a flat single-well profile. The CP-corrected PES always has a multiple-well shape. In particular, it is shown that the CP-corrected PES obtained using the smaller basis set is qualitatively analogous to that obtained with the larger basis sets, so the CP method is useful for correctly describing large systems, where the use of small basis sets may be necessary.
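The counterpoise recipe itself is simple bookkeeping: each monomer is recomputed in the full dimer basis (ghost functions on the partner site), and the interaction energy is formed from those energies. The total energies below are purely hypothetical, chosen only to make the arithmetic concrete:

```python
HARTREE_TO_KCAL = 627.5094740631  # conversion factor, kcal/mol per hartree

# Hypothetical total energies (hartree), illustrative only:
E_AB      = -318.4050  # complex A...B in the full dimer basis
E_A_dimer =  -93.2010  # monomer A in the dimer basis (ghosts on B)
E_B_dimer = -225.2005  # monomer B in the dimer basis (ghosts on A)
E_A_mono  =  -93.2000  # monomer A in its own basis
E_B_mono  = -225.1995  # monomer B in its own basis

E_int_raw = E_AB - E_A_mono - E_B_mono    # conventional (BSSE-contaminated)
E_int_cp  = E_AB - E_A_dimer - E_B_dimer  # counterpoise-corrected
bsse = E_int_cp - E_int_raw               # positive for these numbers

print(f"raw interaction : {E_int_raw * HARTREE_TO_KCAL:.2f} kcal/mol")
print(f"CP-corrected    : {E_int_cp * HARTREE_TO_KCAL:.2f} kcal/mol")
print(f"BSSE            : {bsse * HARTREE_TO_KCAL:.2f} kcal/mol")
```

Because the monomers gain spurious stabilization from the partner's basis functions, the CP-corrected interaction energy is less attractive than the raw one, which is why the two PES shapes compared in the abstract can differ qualitatively.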
Abstract:
This article originates from a panel with the above title, held at IEEE VTC Spring 2009, in which the authors took part. The enthusiastic response it received prompted us to discuss for a wider audience whether research at the physical layer (PHY) is still relevant to the field of wireless communications. Using cellular systems as the axis of our exposition, we exemplify areas where PHY research has indeed hit a performance wall and where any improvements are expected to be marginal. We then discuss whether the research directions taken in the past have always been the right choice and how lessons learned could influence future policy decisions. Several of the raised issues are subsequently discussed in greater detail, e.g., the growing divergence between academia and industry. With this argumentation at hand, we identify areas that are either under-developed or likely to be of impact in coming years, hence corroborating the relevance and importance of PHY research.
Abstract:
Supported by IEEE 802.15.4 standardization activities, embedded networks have been gaining popularity in recent years. The focus of this paper is to quantify the behavior of key networking metrics of IEEE 802.15.4 beacon-enabled nodes under typical operating conditions, with the inclusion of packet retransmissions. We corrected and extended previous analyses by scrutinizing the assumptions on which the prevalent Markovian modeling is generally based. By means of a comparative study, we singled out which of the assumptions impact each of the performance metrics (throughput, delay, power consumption, collision probability, and packet-discard probability). In particular, we showed that - unlike what is usually assumed - the probability that a node senses the channel busy is not constant for all the stages of the backoff procedure and that these differences have a noticeable impact on backoff delay, packet-discard probability, and power consumption. Similarly, we showed that - again contrary to common assumption - the probability of obtaining transmission access to the channel depends on the number of nodes that are simultaneously sensing it. We evidenced that ignoring this dependence has a significant impact on the calculated values of throughput and collision probability. Circumventing these and other assumptions, we rigorously characterize, through a semianalytical approach, the key metrics in a beacon-enabled IEEE 802.15.4 system with retransmissions.
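The per-stage busy-probability observation can be illustrated with a toy Monte Carlo. This is emphatically not the paper's semianalytical model: it uses a single CCA instead of the standard's two, saturated nodes, no retransmission limit, collisions left unpenalized, and hypothetical parameters throughout; it only shows how one would count busy-channel events separately per backoff stage:

```python
import random
from collections import defaultdict

random.seed(7)

def simulate(n_nodes=5, slots=200_000, min_be=3, max_be=5, pkt_len=6):
    """Toy slotted CSMA/CA: estimate, per backoff stage, the probability
    that a node completing its countdown finds the channel busy."""
    stage = [0] * n_nodes
    counter = [random.randrange(2 ** min_be) for _ in range(n_nodes)]
    tx_left, tx_nodes = 0, set()
    busy, cca = defaultdict(int), defaultdict(int)
    for _ in range(slots):
        if tx_left > 0:
            tx_left -= 1
            if tx_left == 0:                 # transmission over: fresh backoff
                for i in tx_nodes:
                    stage[i] = 0
                    counter[i] = random.randrange(2 ** min_be)
                tx_nodes = set()
        ready = [i for i in range(n_nodes)
                 if i not in tx_nodes and counter[i] == 0]
        if tx_left > 0:                      # CCA fails: channel busy
            for i in ready:
                cca[stage[i]] += 1
                busy[stage[i]] += 1
                stage[i] += 1
                counter[i] = random.randrange(2 ** min(min_be + stage[i], max_be))
        elif ready:                          # CCA succeeds: transmit
            for i in ready:
                cca[stage[i]] += 1
            tx_nodes, tx_left = set(ready), pkt_len
        for i in range(n_nodes):             # 802.15.4-style countdown:
            if i not in tx_nodes and counter[i] > 0:
                counter[i] -= 1              # proceeds regardless of channel
    return {s: busy[s] / cca[s] for s in sorted(cca) if cca[s] >= 500}

for s, p in simulate().items():
    print(f"stage {s}: P(channel busy) = {p:.3f}")
```

Tallying `busy[s] / cca[s]` separately per stage, rather than as one pooled constant, is exactly the distinction the abstract says the usual Markovian models gloss over.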
Abstract:
Economics is the science of want and scarcity. We show that want and scarcity, operating within a simple exchange institution (double auction), are sufficient for an economy consisting of multiple inter-related markets to attain competitive equilibrium (CE). We generalize Gode and Sunder's (1993a, 1993b) single-market finding to multi-market economies, and explore the role of the scarcity constraint in convergence of economies to CE. When the scarcity constraint is relaxed by allowing arbitrageurs in multiple markets to enter speculative trades, prices still converge to CE, but allocative efficiency of the economy drops. Optimization by individual agents, often used to derive competitive equilibria, is unnecessary for an actual economy to approximately attain such equilibria. From the failure of humans to optimize in complex tasks, one need not conclude that the equilibria derived from the competitive model are descriptively irrelevant. We show that even in complex economic systems, such equilibria can be attained under a range of surprisingly weak assumptions about agent behavior.
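The Gode-Sunder mechanism can be caricatured in a few lines. This toy version uses random pairwise matching rather than a persistent order book, with hypothetical valuations and costs; it only illustrates the "no loss-making offers" constraint that does the work in their result:

```python
import random

random.seed(42)

def zi_c_market(values, costs, rounds=5000):
    """Crude zero-intelligence-constrained (ZI-C) sketch after Gode & Sunder:
    offers are random, but traders never offer at a loss."""
    buyers, sellers = list(values), list(costs)
    prices, surplus = [], 0.0
    for _ in range(rounds):
        if not buyers or not sellers:
            break
        v, c = random.choice(buyers), random.choice(sellers)
        bid = random.uniform(0.0, v)   # budget constraint: bid <= value
        ask = random.uniform(c, 1.0)   # cost constraint: ask >= cost
        if bid >= ask:                 # offers cross: trade at the midpoint
            prices.append(0.5 * (bid + ask))
            surplus += v - c
            buyers.remove(v)           # each unit trades at most once
            sellers.remove(c)
    return prices, surplus

values = [0.9, 0.8, 0.7, 0.6, 0.3]    # hypothetical buyer valuations
costs = [0.1, 0.2, 0.3, 0.4, 0.7]     # hypothetical seller costs
prices, surplus = zi_c_market(values, costs)

# Maximum attainable surplus: pair best buyers with cheapest sellers.
max_surplus = sum(v - c
                  for v, c in zip(sorted(values, reverse=True), sorted(costs))
                  if v > c)
print(f"trades: {len(prices)}  efficiency: {surplus / max_surplus:.2f}")
```

Because every executed trade satisfies cost <= ask <= price <= bid <= value, no trade destroys surplus, which is why even these "zero-intelligence" agents realize most of the gains from trade.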
Abstract:
We present an analytical model to interpret nanoscale capacitance microscopy measurements on thin dielectric films. The model displays a logarithmic dependence on the tip-sample distance and on the film thickness-dielectric constant ratio and shows an excellent agreement with finite-element numerical simulations and experimental results on a broad range of values. Based on these results, we discuss the capabilities of nanoscale capacitance microscopy for the quantitative extraction of the dielectric constant and the thickness of thin dielectric films at the nanoscale.
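The functional form below is an assumed sphere-over-film expression, not the paper's fitted model; it is chosen only to reproduce the two qualitative features stated above: logarithmic dependence on tip-sample distance, and dependence on the film only through the thickness-to-dielectric-constant ratio h/εr:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def tip_film_capacitance(z, h, eps_r, tip_radius=50e-9):
    """Illustrative (assumed) form: capacitance grows logarithmically as the
    effective gap z + h/eps_r shrinks relative to the tip radius."""
    return (2 * math.pi * EPS0 * tip_radius
            * math.log(1 + tip_radius / (z + h / eps_r)))

# The film enters only through the equivalent air gap h/eps_r, so two films
# with the same thickness-to-permittivity ratio are indistinguishable:
c1 = tip_film_capacitance(z=5e-9, h=10e-9, eps_r=4.0)
c2 = tip_film_capacitance(z=5e-9, h=20e-9, eps_r=8.0)
print(f"C1 = {c1:.3e} F, C2 = {c2:.3e} F")  # attofarad scale, and equal
```

This degeneracy is also why, as the abstract notes, extracting the dielectric constant and the thickness separately requires additional information (e.g., measurements at several tip-sample distances).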
Abstract:
This work is focused on the study of the fine speckle contrast present in planar view observations of matched and mismatched InGaAs layers grown by molecular beam epitaxy on InP substrates. Our results provide experimental evidence of the evolution of this fine structure with the mismatch, layer thickness, and growth temperature. The correlation of the influence of all these parameters on the appearance of the contrast modulation points to the development of the fine structure during the growth. Moreover, as growth proceeds, this structure shows a dynamic behavior which depends on the intrinsic layer substrate stress.