931 results for Heat dissipation rate
Abstract:
In recent decades, all over the world, competition in the electric power sector has deeply changed the way the sector's agents play their roles. In most countries, deregulation of the electricity business was conducted in stages, beginning with the customers at the highest voltage levels and with the largest electricity consumption, and later extended to all electricity consumers. Sector liberalization and the operation of competitive electricity markets were expected to lower prices and improve quality of service, leading to greater consumer satisfaction. Transmission and distribution remain noncompetitive business areas, owing to the large infrastructure investments they require. However, the industry has yet to clearly establish the best business model for transmission in a competitive environment. After generation, electricity needs to be delivered to the nodes of the electrical system where demand requires it, taking transmission constraints and electrical losses into consideration. If the amount of power flowing through a certain line is close to or surpasses its safety limits, then cheap but distant generation might have to be replaced by more expensive generation located closer to the load, to reduce the excess power flows. In a congested area, the optimal price of electricity rises to the marginal cost of the local generation, or to the level needed to ration demand to the amount of available electricity. Even without congestion, some power is lost in the transmission system through heat dissipation, so prices reflect that it is more expensive to supply electricity at the far end of a heavily loaded line than close to a generation site. Locational marginal prices (LMPs), resulting from bidding competition, represent electrical and economic values at nodes or in areas and can provide economic signals to the market agents. This article proposes a data-mining-based methodology that helps characterize zonal prices in real power transmission networks. To test the methodology, we used an LMP database from the California Independent System Operator (CAISO) for 2009 to identify economic zones. (CAISO is a nonprofit public-benefit corporation charged with operating the majority of California's high-voltage wholesale power grid.) To group the buses into typical classes, each representing a set of buses with approximately the same LMP value, we used the two-step and k-means clustering algorithms. By analyzing the various LMP components, our goal was to extract knowledge to support the ISO in investment and network-expansion planning.
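As a concrete illustration of the clustering step this abstract describes, the following is a minimal sketch, assuming a hypothetical file of hourly LMP values per bus; the two-step algorithm is a proprietary SPSS procedure, so only the k-means stage is sketched, and the zone count of 5 is an arbitrary placeholder rather than the paper's choice.

```python
# Minimal sketch of the bus-clustering step: group buses into zones by
# their LMP profiles using k-means. The CSV layout (one row per bus,
# one column per hour) is a hypothetical stand-in for the CAISO data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

lmp = np.loadtxt("lmp_by_bus.csv", delimiter=",")  # shape: (n_buses, n_hours)
features = StandardScaler().fit_transform(lmp)     # normalize each hourly column

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
for zone in range(kmeans.n_clusters):
    members = np.where(kmeans.labels_ == zone)[0]
    print(f"zone {zone}: {len(members)} buses, "
          f"mean LMP {lmp[members].mean():.2f} $/MWh")
```

In practice the number of zones would be selected by comparing cluster-quality measures across candidate values of n_clusters rather than fixed in advance.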
Abstract:
Over the last three decades, computer architects have been able to increase the performance of single processors by, for example, raising clock speeds, introducing cache memories, and exploiting instruction-level parallelism. Because of power consumption and heat dissipation constraints, however, this trend cannot continue. Hardware engineers have instead moved to new chip architectures with multiple processor cores on a single chip. With multi-core processors, applications can complete more total work than with one core alone, but to take advantage of them, effective parallel programming models are needed. This paper discusses some of the existing models and frameworks for parallel programming and then outlines a draft parallel programming model for Ada.
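As a rough illustration of the kind of task-level parallelism such models expose (the paper targets Ada; Python's standard library stands in here purely for illustration):

```python
# Illustrative task-parallel sketch: a pool of worker processes spreads
# independent chunks of a CPU-bound computation across available cores.
import os
from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    # placeholder CPU-bound kernel
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    n = 1_000_000
    chunks = [range(i, min(i + 100_000, n)) for i in range(0, n, 100_000)]
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        total = sum(pool.map(work, chunks))
    print(total)
```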
Abstract:
On-chip multiprocessor (OCM) systems are regarded as the best structures for occupying the space available on today's integrated circuits. In our work, we focus on an architectural model, called the isometric on-chip multiprocessor architecture, which makes it possible to evaluate, predict, and optimize OCM systems through an efficient organization of the nodes (processors and memories), and on methodologies for using these architectures effectively. In the first part of the thesis, we address the topology of the model and propose an architecture that allows on-chip memories to be used efficiently and massively. The processors and memories are organized according to an isometric approach that consists of bringing the data closer to the processes, rather than optimizing transfers between processors and memories laid out in the conventional manner. The architecture is a three-dimensional mesh model. The placement of the units in this model is inspired by the crystal structure of sodium chloride (NaCl), in which each processor can access six memories at once and each memory can communicate with as many processors at once. In the second part of our work, we address a decomposition methodology in which the ideal number of nodes in the model can be determined from a matrix specification of the application to be processed by the proposed model. Since the performance of a model depends on the amount of data flow exchanged between its units, and hence on their number, and since our goal is to guarantee good computational performance for the application at hand, we propose to find the ideal number of processors and memories for the system to be built. We also consider decomposing the specification of the model to be built, or of the application to be processed, according to the load balance of the units. We therefore propose a decomposition approach based on three elements: transforming the specification or application into an incidence matrix whose elements are the data flows between processes and data; a new methodology based on the Cell Formation Problem (CFP); and load balancing of processes across processors and of data across memories. In the third part, still with the aim of designing an efficient, high-performance system, we address the assignment of processors and memories through a two-step methodology. First, we assign units to the nodes of the system, considered here as an undirected graph; second, we assign values to the edges of this graph. For the assignment, we propose modelling the decomposed applications using a matrix approach together with the Quadratic Assignment Problem (QAP). For assigning values to the edges, we propose a gradual-perturbation approach that searches for the best combination of assignment costs while respecting parameters such as temperature, heat dissipation, energy consumption, and the area occupied by the chip.
The ultimate goal of this work is to offer on-chip multiprocessor system architects a non-traditional methodology and a systematic, effective design-support tool, available from the functional specification phase of the system onwards.
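To make the QAP step concrete, here is a toy sketch assuming made-up flow and distance matrices; real instances are NP-hard, so the thesis's gradual-perturbation heuristic, not brute force, would be used at realistic sizes.

```python
# Toy Quadratic Assignment Problem, solved by brute force over all
# permutations: assign units (processes/data) to nodes so that the sum of
# flow(i, j) * distance(node(i), node(j)) is minimized. The 4-unit flow
# and distance matrices below are made up for illustration.
from itertools import permutations

flow = [[0, 3, 0, 2],
        [3, 0, 1, 0],
        [0, 1, 0, 4],
        [2, 0, 4, 0]]          # data exchanged between units
dist = [[0, 1, 2, 3],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [3, 2, 1, 0]]          # hop distance between nodes

def cost(perm):
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(len(perm)) for j in range(len(perm)))

best = min(permutations(range(4)), key=cost)
print("assignment:", best, "cost:", cost(best))
```

Brute force is only viable for a handful of units; the enumeration grows factorially with the number of nodes.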
Abstract:
Lithography and Moore's law have enabled extraordinary advances in integrated circuit fabrication. Nowadays, several very complex systems can be embedded on a single chip. The constraints on the development of these systems are so demanding that sound planning from the very beginning of the development cycle is indispensable. Planning energy management at the start of the development cycle has thus become an important phase in the design of these systems. For many years, the approach was to reduce energy consumption by adding a physical mechanism once the circuit had been built, for example a heat sink. The current strategy is instead to integrate the energy constraints from the earliest phases of circuit design. It is therefore essential to understand the energy dissipation before integrating the components into a multiprocessor system architecture, so that each component can operate efficiently within its thermal limits. When a component operates, it consumes electrical energy that is converted into heat. The goal of this thesis is to find an efficient assignment of the components in a three-dimensional multiprocessor architecture that respects the system's thermal limits.
Abstract:
This study is the first of its kind in India, wherein smoked and thermally processed products have been developed using locally available wood as the source of wood smoke and flavoring, and a shelf life of one year has been achieved. Three-layer retortable pouches, both imported and indigenous, were found suitable for storing the thermally processed products. The heat penetration rate is faster in retort pouches than in cans because of their thin profile, so the total process time is shorter. The nutritional and sensory attributes of the pouched products are better retained during processing, making these products more acceptable than canned ones. Indian vegetarian food products and fish curry products are already available in ready-to-eat form in the markets, but smoked and thermally processed products have not yet gained an entry to the market, so this study will pave the way for such products. Currently, India's trade in tuna products is meager compared with the global trade. In India, proper utilization of tuna resources is yet to be achieved, owing to the lack of infrastructure for handling and of knowledge of value addition. The raw material also fetches a low price because of the poor quality of the fish when landed. The availability of such products will therefore help the trade in tuna products, improve the quality of the raw material landed, and ultimately realize a better value for the fishermen and processors.
Abstract:
The miniaturization of the microelectronics industry is an entirely unquestionable fact, and CMOS technology is no exception. The scientific community has consequently set itself two major challenges: first, to push CMOS technology as far as possible ('Beyond CMOS'), developing high-performance systems such as microprocessors, micro/nanosystems, and pixel systems; and second, to launch a new generation of electronics based on entirely different technologies within the field of nanotechnology. All these advances demand constant research and innovation in the complementary areas, such as packaging. Packaging must fulfil three basic functions: providing the electrical interface between the system and the outside world, providing mechanical support for the system, and providing a heat dissipation path. Given that most of these high-performance devices demand a large number of inputs and outputs, multi-chip modules (MCMs) and flip-chip technology therefore present a very attractive solution for this type of device. The objective of this thesis is to develop a multi-chip module technology based on flip-chip interconnections for the integration of hybrid pixel detectors, which includes: 1) the development of a bumping technology based on electroplated eutectic Sn/Ag solder bumps at a 50 µm pitch, and 2) the development of a gold through-silicon via technology that allows chips to be interconnected and stacked vertically (3D packaging) at a 100 µm pitch. Finally, the high interconnection capability of flip-chip packaging has allowed traditionally monolithic pixel systems to evolve into more compact and complex hybrid systems; in this thesis, that is reflected in the transfer of the developed technology to the field of high-energy physics and, specifically, in the implementation of the bump-bonding system of a digital mammography unit. In addition, a modular hybrid detector device for real-time 3D image reconstruction has been implemented, which has given rise to a patent.
Abstract:
This study uses large-eddy simulation (LES) to investigate the characteristics of Langmuir turbulence through the turbulent kinetic energy (TKE) budget. Based on an analysis of the TKE budget, a velocity scale for Langmuir turbulence is proposed. The velocity scale depends on both the friction velocity and the surface Stokes drift associated with the wave field. The scaling leads to unique profiles of nondimensional dissipation rate and velocity component variances when the Stokes drift of the wave field is sufficiently large compared with the surface friction velocity. The existence of such a scaling shows that Langmuir turbulence can be considered a turbulence regime in its own right, rather than a modification of shear-driven turbulence. Comparisons are made between the LES results and observations, but the lack of information concerning the wave field means these are mainly restricted to comparing profile shapes. The shapes of the LES profiles are consistent with observed profiles. The dissipation length scale for Langmuir turbulence is found to be similar to the dissipation length scale in the shear-driven boundary layer. Beyond this it is not possible to test the proposed scaling directly using the available data. Entrainment at the base of the mixed layer is shown to be significantly enhanced over that due to normal shear turbulence.
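The abstract names the two ingredients of the proposed velocity scale but not its functional form; a composite scale commonly used for Langmuir turbulence, shown here as an assumption rather than the paper's exact definition, combines them as:

```latex
% Assumed composite velocity scale: u_* is the water-side friction
% velocity and u_s the surface Stokes drift of the wave field. Velocity
% variances and the dissipation rate would then be nondimensionalized
% by w_{*L}^2 and w_{*L}^3 / h (h = mixed-layer depth), respectively.
w_{*L} = \left( u_*^{2}\, u_s \right)^{1/3}
```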
Abstract:
The properties of planar ice crystals settling horizontally have been investigated using a vertically pointing Doppler lidar. Strong specular reflections were observed from their oriented basal facets, identified by comparison with a second lidar pointing 4° from zenith. Analysis of 17 months of continuous high-resolution observations reveals that these pristine crystals are frequently observed in ice falling from mid-level mixed-phase layer clouds (85% of the time for layers at −15 °C). Detailed analysis of a case study indicates that the crystals are nucleated and grow rapidly within the supercooled layer, then fall out, forming well-defined layers of specular reflection. From the lidar alone the fraction of oriented crystals cannot be quantified, but polarimetric radar measurements confirmed that a substantial fraction of the crystal population was well oriented. As the crystals fall into subsaturated air, specular reflection is observed to switch off as the crystal faces become rounded and lose their faceted structure. Specular reflection in ice falling from supercooled layers colder than −22 °C was also observed, but this was much less pronounced than at warmer temperatures: we suggest that in cold clouds it is the small droplets in the distribution that freeze into plates and produce specular reflection, whilst larger droplets freeze into complex polycrystals. The lidar Doppler measurements show that typical fall speeds for the oriented crystals are ≈0.3 m s⁻¹, with a weak temperature correlation; the corresponding Reynolds number is Re ∼ 10, in agreement with light-pillar measurements. Coincident Doppler radar observations show no correlation between the specular enhancement and the eddy dissipation rate, indicating that turbulence does not control crystal orientation in these clouds.
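As a quick back-of-envelope check on the quoted numbers (the air viscosity value is an assumption, not from the paper):

```python
# With the observed fall speed of ~0.3 m/s and Re ~ 10, the implied
# crystal dimension is a few hundred micrometres, consistent with
# planar crystals. Kinematic viscosity is assumed for air near -15 degC.
v = 0.3        # fall speed, m/s (from the abstract)
Re = 10        # Reynolds number (from the abstract)
nu = 1.25e-5   # kinematic viscosity of cold air, m^2/s (assumed)

D = Re * nu / v          # Re = v * D / nu  =>  D = Re * nu / v
print(f"implied crystal dimension: {D * 1e6:.0f} micrometres")  # ~417 um
```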
Abstract:
Recent research has shown that Lighthill–Ford spontaneous gravity wave generation theory, when applied to numerical model data, can help predict areas of clear-air turbulence. It is hypothesized that this is because spontaneously generated atmospheric gravity waves may initiate turbulence by locally modifying the stability and wind shear. As an improvement on the original research, this paper describes the creation of an 'operational' algorithm (ULTURB) with three modifications to the original method: (1) extending the altitude range for which the method is effective downward to the top of the boundary layer, (2) adding turbulent kinetic energy production from the environment to the locally produced turbulent kinetic energy production, and (3) transforming the turbulent kinetic energy dissipation rate into eddy dissipation rate (EDR), the turbulence metric that has become the worldwide standard. In a comparison with the original method and with the Graphical Turbulence Guidance, second version (GTG2), automated procedure for forecasting mid- and upper-level aircraft turbulence, ULTURB performed better for all turbulence intensities. Since ULTURB, unlike GTG2, is founded on a self-consistent dynamical theory, it may offer forecasters better insight into the causes of clear-air turbulence and may ultimately enhance its predictability.
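The conversion in modification (3) is simple: the eddy dissipation rate used operationally is the cube root of the TKE dissipation rate. A minimal sketch:

```python
# Eddy dissipation rate (EDR) is conventionally the cube root of the TKE
# dissipation rate epsilon; its units are m^(2/3) s^-1.
def edr(epsilon):
    """epsilon in m^2 s^-3 -> EDR in m^(2/3) s^-1."""
    return epsilon ** (1.0 / 3.0)

for eps in (1e-5, 1e-3, 1e-1):
    print(f"epsilon = {eps:.0e} m^2/s^3  ->  EDR = {edr(eps):.2f}")
```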
Abstract:
The characteristics of the boundary layer separating a turbulence region from an irrotational (or non-turbulent) flow region are investigated using rapid distortion theory (RDT). The turbulence region is approximated as homogeneous and isotropic far away from the bounding turbulent/non-turbulent (T/NT) interface, which is assumed to remain approximately flat. Inviscid effects resulting from the continuity of the normal velocity and pressure at the interface, in addition to viscous effects resulting from the continuity of the tangential velocity and shear stress, are taken into account by considering a sudden insertion of the T/NT interface, in the absence of mean shear. Profiles of the velocity variances, turbulent kinetic energy (TKE), viscous dissipation rate (ε), turbulence length scales, and pressure statistics are derived, showing excellent agreement with results from direct numerical simulations (DNS). Interestingly, the normalized inviscid flow statistics at the T/NT interface do not depend on the form of the assumed TKE spectrum. Outside the turbulent region, where the flow is irrotational (except inside a thin viscous boundary layer), ε decays as z⁻⁶, where z is the distance from the T/NT interface. The mean pressure distribution is calculated using RDT and exhibits a decrease towards the turbulence region due to the associated velocity fluctuations, consistent with the generation of a mean entrainment velocity. The vorticity variance and ε display large maxima at the T/NT interface due to the inviscid discontinuities of the tangential velocity variances existing there, and these maxima are quantitatively related to the thickness δ of the viscous boundary layer (VBL). For an equilibrium VBL, the RDT analysis suggests that δ ∼ η (where η is the Kolmogorov microscale), which is consistent with the scaling law identified in a very recent DNS study of shear-free T/NT interfaces.
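The z⁻⁶ decay can be traced in one line from the classical z⁻⁴ decay of irrotational velocity variances outside a turbulent region (a sketch of the reasoning, not the paper's derivation):

```latex
% Velocity variances in the irrotational region decay as z^{-4}
% (Phillips' classical result), so the rms fluctuation goes as z^{-2},
% its spatial gradient as z^{-3}, and the viscous dissipation as the
% square of that gradient.
\overline{u_i'^2} \sim z^{-4}
\;\Longrightarrow\;
u'_{\mathrm{rms}} \sim z^{-2}
\;\Longrightarrow\;
\frac{\partial u'}{\partial z} \sim z^{-3}
\;\Longrightarrow\;
\varepsilon \sim \nu\, \overline{\left(\frac{\partial u_i'}{\partial x_j}\right)^{2}} \sim z^{-6}.
```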
Abstract:
Sources and sinks of gravitational potential energy (GPE) play a rate-limiting role in the large-scale ocean circulation. A key source is turbulent diapycnal mixing, whereby irreversible mixing across isoneutral surfaces is enhanced by turbulent straining of these surfaces. This has motivated international observational efforts to map diapycnal mixing in the global ocean. However, in order to accurately relate the GPE supplied to the large-scale circulation by diapycnal mixing to the mixing energy source, it is first necessary to determine the ratio, ξ, of the GPE generation rate to the available potential energy dissipation rate associated with turbulent mixing. Here, the link between GPE and hydrostatic pressure is used to derive the GPE budget for a compressible ocean with a nonlinear equation of state. The role of diapycnal mixing is isolated, and from this a global climatological distribution of ξ is calculated. It is shown that, for a given source of mixing energy, typically three times as much GPE is generated if the mixing takes place in bottom waters rather than in the pycnocline. This is due to GPE destruction by cabbelling in the pycnocline, as opposed to thermobaric enhancement of GPE generation by diapycnal mixing in the deep ocean.
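Written out, the ratio defined above is as follows (the symbols are ours, introduced for clarity, with the paper's factor-of-three result restated in this notation):

```latex
% xi compares the GPE generated by diapycnal mixing with the available
% potential energy (APE) dissipated by the same turbulent mixing.
\xi \;=\; \frac{\dot{E}_{\mathrm{GPE}}}{\dot{E}_{\mathrm{APE\,diss}}},
\qquad
\xi_{\mathrm{bottom}} \;\approx\; 3\,\xi_{\mathrm{pycnocline}}
\quad \text{(for a given mixing energy source).}
```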
Abstract:
Combined observations by meridian-scanning photometers, an all-sky auroral TV camera and the EISCAT radar permitted a detailed analysis of the temporal and spatial development of the midday auroral breakup phenomenon and the related ionospheric ion flow pattern within the 71°–75° invariant latitude radar field of view. The radar data revealed dominating northward and westward ion drifts, of magnitudes close to the corresponding velocities of the discrete, transient auroral forms, during the two different events reported here, characterized by IMF |B_Y/B_Z| < 1 and > 2, respectively (IMF B_Z between −8 and −3 nT and B_Y > 0). The spatial scales of the discrete optical events were ∼50 km in latitude by ∼500 km in longitude, and their lifetimes were less than 10 min. Electric potential enhancements with peak values in the 30–50 kV range are inferred along the discrete arc in the IMF |B_Y/B_Z| < 1 case from the optical data, and across the latitudinal extent of the radar field of view in the |B_Y/B_Z| > 2 case. Joule heat dissipation rates in the maximum phase of the discrete structures of ∼100 ergs cm⁻² s⁻¹ (0.1 W m⁻²) are estimated from the photometer intensities and the ion drift data. These observations, combined with the additional characteristics of the events documented here and in several recent studies (i.e., their quasi-periodic nature, their motion pattern relative to the persistent cusp or cleft auroral arc, the strong relationship with the interplanetary magnetic field, and the associated ion drift/E-field events and ground magnetic signatures), are considered to be strong evidence in favour of a transient, intermittent reconnection process at the dayside magnetopause and associated energy and momentum transfer to the ionosphere in the polar cusp and cleft regions. The filamentary spatial structure and the spectral characteristics of the optical signature indicate associated localized ∼1 kV potential drops between the magnetopause and the ionosphere during the most intense auroral events. The duration of the events compares well with the predicted characteristic times of momentum transfer to the ionosphere associated with the flux transfer event-related current tubes. It is suggested that, after this 2–10 min interval, the sheath particles can no longer reach the ionosphere down the open flux tube, owing to the subsequent super-Alfvénic flow along the magnetopause; conductivities are then lower and much less momentum is extracted from the solar wind by the ionosphere. The recurrence time (3–15 min) and the local time distribution (∼0900–1500 MLT) of the dayside auroral breakup events, combined with the above information, indicate the important roles of transient magnetopause reconnection and of the polar cusp and cleft regions in the transfer of momentum and energy between the solar wind and the magnetosphere.
Abstract:
A weather balloon and its suspended instrument package behave like a pendulum with a moving pivot. This dynamical system is exploited here for the detection of atmospheric turbulence. By adding an accelerometer to the instrument package, the size of the swings induced by atmospheric turbulence can be measured. In test flights, strong turbulence has induced accelerations greater than 5g, where g = 9.81 m s⁻². Calibration of the accelerometer data against a vertically orientated lidar has allowed eddy dissipation rate values of between 10⁻³ and 10⁻² m² s⁻³ to be derived from the accelerometer data. A whole weather balloon and its adapted instrument package can thus serve as a novel instrument for making standardized in situ measurements of turbulence.
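A sketch of the calibration idea described above, with an entirely hypothetical power-law calibration curve standing in for the lidar-derived fit:

```python
# Map the variability of the measured package acceleration to an eddy
# dissipation rate via an empirical calibration against lidar-derived
# EDR. The functional form and coefficients below are hypothetical
# placeholders, not the paper's fit.
import numpy as np

def edr_from_accel(accel, a=1e-4, b=1.5):
    """accel: time series of acceleration magnitude (m/s^2).
    Returns an EDR estimate (m^2 s^-3) from a power-law calibration."""
    sigma = np.std(accel)          # swing-induced acceleration variability
    return a * sigma ** b          # hypothetical calibration curve

rng = np.random.default_rng(0)
sample = 9.81 + rng.normal(0.0, 2.0, size=4096)   # synthetic flight segment
print(f"EDR estimate: {edr_from_accel(sample):.1e} m^2 s^-3")
```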
Abstract:
Mixing layer height (MLH) is one of the key parameters describing lower-tropospheric dynamics, and capturing its diurnal variability is crucial, especially for interpreting surface observations. In this paper we introduce a method for identifying the MLH below the minimum range of a scanning Doppler lidar operated vertically. The method we propose is based on the velocity variance in low-elevation-angle conical scanning and is applied to measurements in two very different coastal environments: Limassol, Cyprus, during summer and Loviisa, Finland, during winter. At both locations, the new method agrees well with MLH derived from turbulent kinetic energy dissipation rate profiles obtained from vertically pointing measurements. The low-level scanning routine frequently indicated a non-zero MLH less than 100 m above the surface. Such low MLHs were more common in wintertime Loviisa, on the Baltic Sea coast, than in summertime Mediterranean Limassol.
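A minimal sketch of the variance-threshold idea, assuming a synthetic variance profile and an arbitrary 0.1 m² s⁻² threshold; the paper's actual threshold, scan geometry, and retrieval details are not reproduced here.

```python
# Take radial-velocity variance from a low-elevation conical scan and
# call the mixing layer top the lowest height where the variance drops
# below a threshold. Profile layout and threshold are assumptions.
import numpy as np

def mlh_from_variance(heights, velocity_variance, threshold=0.1):
    """heights (m) and variance (m^2/s^2) on matching grids, low to high."""
    below = np.nonzero(velocity_variance < threshold)[0]
    return heights[below[0]] if below.size else heights[-1]

z = np.arange(10, 300, 10.0)                 # heights sampled by the cone
var = 0.5 * np.exp(-z / 60.0) + 0.01         # synthetic decaying variance
print(f"MLH estimate: {mlh_from_variance(z, var):.0f} m")
```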
Abstract:
The turbulent structure of a stratocumulus-topped marine boundary layer is observed over a 2-day period with a Doppler lidar at Mace Head in Ireland. Using profiles of vertical velocity statistics, the bulk of the mixing is identified as cloud driven. This is supported by the pertinent feature of negative vertical velocity skewness in the sub-cloud layer, which extends, on occasion, almost to the surface. Both coupled and decoupled turbulence characteristics are observed. The length scales and timescales related to the cloud-driven mixing are investigated and shown to provide additional information about the structure and the source of the mixing inside the boundary layer. They are also shown to place constraints on the length of the sampling periods used to derive products, such as the turbulent dissipation rate, from lidar measurements. For this, the maximum wavelengths belonging to the inertial subrange are studied through spectral analysis of the vertical velocity. The maximum wavelength of the inertial subrange in the cloud-driven layer scales relatively well with the depth of that layer during periods of pronounced decoupling identified from the vertical velocity skewness. However, on many occasions, combining the analysis of the inertial subrange with the vertical velocity statistics suggests a higher decoupling height than expected from the skewness profiles. Our results show that investigating the length scales related to the inertial subrange significantly complements the analysis of vertical velocity statistics and enables a more confident interpretation of complex boundary layer structures using measurements from a Doppler lidar.
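A minimal sketch of the skewness diagnostic used above, assuming a hypothetical (time, height) array of Doppler-lidar vertical velocities; the file name and array layout are stand-ins for illustration.

```python
# Compute vertical-velocity skewness per range gate; persistently
# negative skewness below cloud base is the signature of cloud-driven
# (top-down) mixing described in the abstract.
import numpy as np
from scipy.stats import skew

w = np.load("vertical_velocity.npy")        # shape: (n_times, n_gates), m/s
skew_profile = skew(w, axis=0, nan_policy="omit")

for k, s in enumerate(skew_profile[:5]):
    print(f"gate {k}: skewness {s:+.2f}")   # negative -> cloud-driven mixing
```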