953 results for tidal stream turbines
Abstract:
Context Lung-protective mechanical ventilation with the use of lower tidal volumes has been found to improve outcomes of patients with acute respiratory distress syndrome (ARDS). It has been suggested that use of lower tidal volumes also benefits patients who do not have ARDS. Objective To determine whether use of lower tidal volumes is associated with improved outcomes of patients receiving ventilation who do not have ARDS. Data Sources MEDLINE, CINAHL, Web of Science, and Cochrane Central Register of Controlled Trials up to August 2012. Study Selection Eligible studies evaluated use of lower vs higher tidal volumes in patients without ARDS at onset of mechanical ventilation and reported lung injury development, overall mortality, pulmonary infection, atelectasis, and biochemical alterations. Data Extraction Three reviewers extracted data on study characteristics, methods, and outcomes. Disagreement was resolved by consensus. Data Synthesis Twenty articles (2822 participants) were included. Meta-analysis using a fixed-effects model showed a decrease in lung injury development (risk ratio [RR], 0.33; 95% CI, 0.23 to 0.47; I², 0%; number needed to treat [NNT], 11) and mortality (RR, 0.64; 95% CI, 0.46 to 0.89; I², 0%; NNT, 23) in patients receiving ventilation with lower tidal volumes. The results for lung injury development were similar when stratified by type of study (randomized vs nonrandomized) and were significant only in randomized trials for pulmonary infection and only in nonrandomized trials for mortality.
Meta-analysis using a random-effects model showed, in protective ventilation groups, a lower incidence of pulmonary infection (RR, 0.45; 95% CI, 0.22 to 0.92; I², 32%; NNT, 26), lower mean (SD) hospital length of stay (6.91 [2.36] vs 8.87 [2.93] days, respectively; standardized mean difference [SMD], 0.51; 95% CI, 0.20 to 0.82; I², 75%), higher mean (SD) PaCO2 levels (41.05 [3.79] vs 37.90 [4.19] mm Hg, respectively; SMD, -0.51; 95% CI, -0.70 to -0.32; I², 54%), and lower mean (SD) pH values (7.37 [0.03] vs 7.40 [0.04], respectively; SMD, 1.16; 95% CI, 0.31 to 2.02; I², 96%) but similar mean (SD) ratios of PaO2 to fraction of inspired oxygen (304.40 [65.7] vs 312.97 [68.13], respectively; SMD, 0.11; 95% CI, -0.06 to 0.27; I², 60%). Tidal volume gradients between the 2 groups did not significantly influence the final results. Conclusions Among patients without ARDS, protective ventilation with lower tidal volumes was associated with better clinical outcomes. Some limitations of the meta-analysis were the mixed setting of mechanical ventilation (intensive care unit or operating room) and the duration of mechanical ventilation. JAMA. 2012;308(16):1651-1659 www.jama.com
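The NNT figures quoted in this abstract follow directly from a risk ratio and a baseline event rate. As an illustration only (the 13.5% control event rate below is a hypothetical value chosen for the sketch, not a figure taken from the abstract), the arithmetic can be sketched as:

```python
def nnt_from_rr(rr: float, control_event_rate: float) -> float:
    """Number needed to treat, from a risk ratio and a baseline (control) risk.

    Absolute risk reduction: ARR = CER - RR*CER = CER*(1 - RR); NNT = 1/ARR.
    """
    arr = control_event_rate * (1.0 - rr)
    return 1.0 / arr

# With RR = 0.33 and a hypothetical 13.5% baseline risk of lung injury:
print(round(nnt_from_rr(0.33, 0.135)))  # -> 11
```

The same formula with RR = 0.64 and a correspondingly lower baseline mortality reproduces an NNT in the low twenties, consistent with the reported NNT of 23.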
Abstract:
In this communication we report results from the application of the newly proposed creeping tide theory (Ferraz-Mello, Cel. Mech. Dyn. Astron., submitted; arXiv astro-ph 1204.3957) to the study of the rotation of the Moon. The choice of the Moon for the first application of this new theory is motivated by the fact that the Moon is one of the best observed celestial bodies, and the comparison of the theoretical predictions with observations may validate the theory or point out the need for further improvements. In particular, the tidal perturbations of the rotation of the Moon - the physical libration of the Moon - have been detected in Lunar Laser Ranging measurements (Williams et al. JGR 106, 27933, 2001). The major difficulty in this application comes from the fact that tidal torques in a planet-satellite system are very sensitive to the distance between the two bodies, which is strongly affected by solar perturbations. In the case of the Moon, the main solar perturbations - the Evection and the Variation - are more important than most of the Keplerian oscillations, being smaller only than the first Keplerian harmonic (the equation of the centre). Besides, two of the three components of the Moon's libration in longitude whose tidal contributions were determined by LLR are related to these perturbations. The results may allow us to determine the main parameter of a possible creeping tide of the Moon. The preliminary results point to a relaxation factor (gamma) 2 to 4 times smaller than the one predicted from the often-cited values of the Moon's quality factor Q (between 30 and 40), and thus to larger Q values.
Abstract:
We study the orbital evolution of a system of two co-orbital planets that undergoes tidal interactions with the central star. Our main goal is to investigate the final outcome of a system originally evolving in a 1:1 resonant configuration when the tidal effect acts to change the orbital elements. Preliminary results of numerical simulations of the exact equations of motion indicate that, at least for equal-mass planets, the combined effect of resonant motion and tidal interaction leads the system to orbital instability, including collisions between the planets. We discuss the cases of two hot super-Earths and two hot-Saturn planets, comparing with the results of dynamical maps.
Abstract:
The timing of larval release may greatly affect the survivorship and distribution of pelagic stages and reveal important aspects of life history tactics in marine invertebrates. Endogenous rhythms of breeding individuals and populations are valuable indicators of selected strategies because they are free of the neutral effect of stochastic environmental variation. The high-shore intertidal barnacle Chthamalus bisinuatus exhibits endogenous tidal and tidal-amplitude rhythms such that larval release is more likely to occur during fortnightly neap periods at high tide. Such timing would minimize larval loss due to stranding and promote larval retention close to shore. This fully explains temporal patterns in populations facing the open sea and inhabiting eutrophic areas. However, rhythmic activity breaks down to an irregular pattern in a population within the São Sebastião Channel subjected to large variation of food supply around a mesotrophic average. Peaks of chl a concentration precede release events by 6 d, suggesting resource limitation for egg production within the channel. Also, extreme daily temperatures imposing mortality risk correlate with release rate just 1 d ahead, suggesting a terminal reproductive strategy. Oceanographic conditions apparently dictate whether barnacles follow a rhythmic trend of larval release supported by endogenous timing or, alternatively, respond to the stochastic variation of key environmental factors, resulting in an erratic temporal pattern.
Abstract:
Outreach talk given at the Postdoctoral Symposium of the Woods Hole Oceanographic Institution. Original article published in Journal of Geophysical Research-Oceans.
Abstract:
In territories where food production is mostly scattered across several small or medium-size, or even domestic, farms, a large amount of heterogeneous residue is produced yearly, since farmers usually carry out different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the particular production process under way. Coupling high-efficiency micro-cogeneration energy units with easy-to-handle biomass conversion equipment, suitable for treating different materials, would provide many important advantages to farmers and to the community as well, so that increasing the feedstock flexibility of gasification units is nowadays seen as a further paramount step towards their widespread adoption in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of main concern for this purpose, and they are therefore discussed in this work: the investigation of the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work is divided into two main parts. The first is focused on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (temperature profile, above all), in order to point out the main differences which prevent the use of the same conversion unit for different materials.
To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. An attempt was made to connect the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to some biomass properties, such as elemental composition and ash and water contents. The novelty of this analytical approach was the use of kinetic constant ratios to determine the oxygen distribution among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone, through which the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to particular biomass materials can be inserted into the model, so that a rapid evaluation of their thermo-chemical conversion properties can be obtained, mainly based on their chemical composition. Good conformity of the model results with other literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up through the analysis of the fundamental thermo-physical and thermo-chemical mechanisms that are supposed to regulate the main solid conversion steps involved in the gasification process.
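The abstract assumes water-gas shift equilibrium in the gasification zone. A minimal sketch of what that assumption pins down, using Moe's empirical correlation for the equilibrium constant (the correlation is an assumption of this sketch; the thesis does not state which expression it uses):

```python
import math

def wgs_equilibrium_constant(temp_k: float) -> float:
    """Equilibrium constant of the water-gas shift reaction
    CO + H2O <=> CO2 + H2, via Moe's empirical correlation.

    K_eq constrains the syngas composition: at equilibrium,
    ([CO2][H2]) / ([CO][H2O]) = K_eq(T).
    """
    return math.exp(4577.8 / temp_k - 4.33)

# At typical gasification-zone temperatures K_eq is of order unity,
# so all four species coexist in the product gas:
for t in (800.0, 1000.0, 1200.0):
    print(f"T = {t:.0f} K  K_eq = {wgs_equilibrium_constant(t):.2f}")
```

Because K_eq decreases with temperature, the gasification-zone temperature profile mentioned in the abstract directly shifts the CO/CO2 and H2/H2O split predicted by the model.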
Gasification units were schematically subdivided into four reaction zones, respectively corresponding to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated with the kinetic rates (for pyrolysis and char gasification only) and with the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, so temperature is the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time especially depends on particle size. Biomass heating is achieved almost entirely by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, working temperature, particle size and the nature of the biomass itself (through its own pyrolysis heat) all have comparable weights on the process development, so that the corresponding time may depend on a different one of these factors according to the particular fuel gasified and the conditions established inside the gasifier. The same analysis also led to the estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units was finally accomplished. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit seems suitable for more than one biomass species.
Nevertheless, since the reactor diameters were found to be quite similar for all the examined materials, it could be envisaged to design a single unit for all of them by adopting the largest diameter and combining the maximum heights of each reaction zone as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified. Finally, since the gasification and pyrolysis times were found to change considerably with even small temperature variations, it could also be envisaged to regulate the air feeding rate for each gasified material (on which the process temperatures depend), so that the available reactor volumes would be suitable for the complete development of solid conversion in each case, without noticeably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio. The second part of this work dealt with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially where multi-fuel gasifiers are assumed to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can simultaneously be present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
Differently from other research efforts in the same field, the main scope is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed according to three different plant sizes, respectively corresponding to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements ('paths'), following some technical constraints mainly determined from the same performance analysis of the cleaning units and from the presumable synergic effects of contaminants on the correct operation of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be addressed in the path design was the removal of tars from the gas stream, preventing filter plugging and/or line pipe clogging. For this purpose, a catalytic tar cracking unit was envisaged as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, and a consequent relevant air consumption for this operation, were calculated in all cases.
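The path-building step described above, choosing one device per contaminant subject to compatibility constraints and ranking the resulting lines by an operational parameter, can be sketched as follows. All device names, the constraint, and the pressure-drop figures are hypothetical placeholders for illustration, not values from the thesis:

```python
from itertools import product

# Hypothetical candidate devices per contaminant: (name, pressure drop in mbar).
candidates = {
    "particulate": [("ceramic_filter", 25), ("cyclone+bag_filter", 18)],
    "tars":        [("catalytic_cracker", 12)],
    "NH3":         [("water_scrubber", 30), ("activated_carbon", 15)],
    "HCl":         [("Nahcolite_adsorber", 8), ("water_scrubber", 30)],
}

def feasible(path):
    """Example constraint: keep the line either fully dry or fully wet
    for NH3/HCl removal (no mixed wet/dry arrangement)."""
    devices = dict(zip(candidates, path))
    wet = [c for c, (name, _) in devices.items() if name == "water_scrubber"]
    return len(wet) in (0, 2)

# Enumerate all combinations, drop infeasible ones, rank by total pressure drop.
paths = [p for p in product(*candidates.values()) if feasible(p)]
best = min(paths, key=lambda p: sum(dp for _, dp in p))
print([name for name, _ in best], sum(dp for _, dp in best))
```

With these placeholder numbers the dry line (activated carbon for NH3, Nahcolite for HCl) wins on pressure drop, which mirrors the abstract's conclusion that dry methods proved preferable to water scrubbing.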
Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines could not be achieved for the two larger plant sizes either. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the strong increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of some defined operational parameters, among which total pressure drop, total energy loss, number of units and secondary material consumption. On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption required by water scrubber units in the ammonia absorption process. This result is, however, connected to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid. The very high efficiency of this latter material is also remarkable.
Finally, as an estimation of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
Abstract:
Nowadays offshore wind turbines represent a valid option for energy production, but with increasing costs mainly due to the required foundation technology. Hybrid foundations, composed of suction caissons onto which a tower supporting the nacelle and the blades is welded, allow a strong cost reduction. Here a monopod configuration is studied in a sandy soil at a water depth of 10 m. Bearing capacity, sliding resistance and pull-out resistance are evaluated. In a second part, the four-step installation process is analysed, considering also the effect of stress enhancement due to the frictional forces opposing penetration that grow on both the inside and outside of the skirt. In a three-dimensional finite element model built in Straus7, soil non-linearity is accounted for approximately through an iterative procedure using the Yokota empirical decay curves.
Abstract:
Constant developments in the field of offshore wind energy have increased the range of water depths at which wind farms are planned to be installed. Therefore, in addition to monopile support structures suitable in shallow waters (up to 30 m), different types of support structures, able to withstand severe sea conditions at greater water depths, have been developed. For water depths above 30 m, the jacket is one of the preferred support types. The jacket is a lightweight support structure which, in combination with the complex nature of environmental loads, is prone to highly dynamic behavior. As a consequence, high stresses with great variability in time can be observed in all structural members. The highest concentration of stresses occurs in the joints, due to their nature (structural discontinuities) and to the existence of notches along the welds present in the joints. This makes them the weakest elements of the jacket in terms of fatigue. In the numerical modeling of jackets for offshore wind turbines, a reduction of local stresses at the chord-brace joints, and consequently an optimization of the model, can be achieved by implementing joint flexibility in the chord-brace joints. Therefore, in this work, the influence of joint flexibility on the fatigue damage in chord-brace joints of a numerical jacket model, subjected to advanced load simulations, is studied.
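Fatigue damage at welded joints of the kind discussed here is conventionally accumulated with the Palmgren-Miner rule over a log-linear S-N curve. A minimal sketch with illustrative parameters (the S-N constants and the stress spectrum below are assumptions in the style of offshore design curves, not values from this work):

```python
import math

def cycles_to_failure(stress_range_mpa: float,
                      log_a: float = 12.164, m: float = 3.0) -> float:
    """S-N curve in the usual log-linear form: log10(N) = log_a - m*log10(S).
    log_a and m are illustrative values, not taken from any specific standard."""
    return 10.0 ** (log_a - m * math.log10(stress_range_mpa))

def miner_damage(spectrum) -> float:
    """Palmgren-Miner linear damage sum over (stress_range, n_cycles) bins;
    the joint is predicted to fail in fatigue when the sum reaches 1."""
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

# Hypothetical long-term stress spectrum at a chord-brace joint:
spectrum = [(80.0, 2.0e6), (40.0, 2.0e7), (20.0, 1.0e8)]
d = miner_damage(spectrum)
print(f"damage = {d:.3f} -> {'fails' if d >= 1.0 else 'survives'}")
```

Because N scales with S to the power -m, implementing joint flexibility that lowers local stress ranges even modestly reduces the damage sum disproportionately, which is why the abstract's modeling choice matters for fatigue life.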
Abstract:
In Stream Processing platforms it is often necessary to apply differentiated processing to the input streams. This thesis aims to implement a scheduler able to assign different execution priorities to the operators responsible for processing the streams.
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and timely analyzed, can be exploited to build new intelligent and pervasive services that have the potential to improve people's quality of life in a variety of cross-cutting domains such as entertainment, health-care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality-of-service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and we present Quasit, its prototype implementation, offering a scalable and extensible platform that can be used by researchers to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique.
Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated to different pervasive data flows, it is possible to improve system scalability while reducing costs.
Abstract:
Geochemical mapping is a valuable tool for territorial monitoring that can be used not only in the identification of mineral resources and in geological, agricultural and forestry studies, but also in the monitoring of natural resources, offering solutions to environmental and economic problems. Stream sediments are widely used in the sampling campaigns carried out by governments and research groups worldwide because they are broadly representative of rocks and soils, easy to sample, and suitable for very detailed sampling. In this context, the environmental role of stream sediments provides a good basis for the implementation of environmental management measures: the composition of river sediments is an important factor in understanding the complex dynamics that develop within catchment basins, and sediments therefore represent a critical environmental compartment, since they can persistently incorporate pollutants after a contamination event and release them into the biosphere if environmental conditions change. It is essential to determine whether the concentrations of certain elements, in particular heavy metals, result from the natural erosion of rocks containing high concentrations of specific elements or are generated as residues of human activities in a given study area. This PhD thesis aims to extract the widest spectrum of information from an extensive database on the stream sediments of the Romagna rivers. The study involved low- and high-order streams in the mountain and hilly areas, but also the sediments of the floodplain area, where intensive agriculture is practised. The geochemical signals recorded by the stream sediments will be interpreted in order to reconstruct the natural variability related to bedrock and soil contributions, the effects of river dynamics and the anomalous sites, and, through the calculation of background values, to evaluate their level of degradation and predict the environmental risk.
Abstract:
Big data is the term used to describe a collection of data so large in terms of volume, velocity and variety that it requires specific technologies and analytical methods to extract significant value. More and more systems are built around, and characterized by, huge amounts of data to manage, originating from highly heterogeneous sources, with widely differing formats and extremely variable data quality. Another requirement in these systems can be the time factor: more and more systems need to obtain significant results from Big Data as soon as possible, and increasingly often the input to be handled is a continuous stream of information. Online Stream Processing solutions address exactly these cases. The goal of this thesis is to propose a working prototype that processes Instant Coupon data coming from different sources, with different information formats and transmission protocols, and that stores the processed data efficiently so as to provide real-time answers. The information sources can be of two types: XMPP and Eddystone. Once the system receives the input, it extracts and processes it until significant data are obtained that can be used by third parties. These data are stored in Apache Cassandra. The biggest problem to be solved is that Apache Storm does not rebalance resources automatically, while in this specific case the distribution of clients over the day is highly variable and full of peaks. The internal rebalancing system exploits runtime metrics and, on the basis of throughput and execution latency, decides whether to increase or decrease the number of resources, or simply to do nothing if the statistics are within the desired threshold values.
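The threshold-based rebalancing decision the abstract describes (scale on throughput and execution latency, do nothing inside the desired band) can be sketched as follows. All threshold values are hypothetical placeholders, since the thesis derives them from runtime metrics of the actual Storm topology:

```python
def rebalance_decision(throughput: float, latency_ms: float,
                       tp_low: float = 500.0, tp_high: float = 5000.0,
                       lat_low: float = 50.0, lat_high: float = 200.0) -> str:
    """Decide whether to change a stream operator's parallelism.

    Thresholds are illustrative only; in the thesis they come from
    measured throughput and execution latency of the topology.
    """
    if latency_ms > lat_high or throughput > tp_high:
        return "scale_up"      # add executors to absorb a peak
    if latency_ms < lat_low and throughput < tp_low:
        return "scale_down"    # release resources in quiet periods
    return "no_op"             # metrics are within the desired band

print(rebalance_decision(throughput=8000, latency_ms=120))  # scale_up
print(rebalance_decision(throughput=300, latency_ms=20))    # scale_down
print(rebalance_decision(throughput=1000, latency_ms=100))  # no_op
```

A controller like this would be invoked periodically on aggregated metrics and would then trigger the platform's rebalance operation, since Storm itself only changes parallelism on explicit request.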
Abstract:
Evaluation of the technical and diagnostic feasibility of commercial multiplex real-time polymerase chain reaction (PCR) for the detection of bloodstream infections in a cohort of intensive care unit (ICU) patients with severe sepsis, performed in addition to conventional blood cultures.