169 results for Large-scale Structure


Relevance: 90.00%

Abstract:

Wave-activity conservation laws are key to understanding wave propagation in inhomogeneous environments. Their most general formulation follows from the Hamiltonian structure of geophysical fluid dynamics. For large-scale atmospheric dynamics, the Eliassen–Palm wave activity is a well-known example and is central to theoretical analysis. On the mesoscale, while such conservation laws have been worked out in two dimensions, their application to a horizontally homogeneous background flow in three dimensions fails because of a degeneracy created by the absence of a background potential vorticity gradient. Earlier three-dimensional results based on linear WKB theory considered only Doppler-shifted gravity waves, not waves in a stratified shear flow. Consideration of a background flow depending only on altitude is motivated by the parameterization of subgrid-scales in climate models where there is an imposed separation of horizontal length and time scales, but vertical coupling within each column. Here we show how this degeneracy can be overcome and wave-activity conservation laws derived for three-dimensional disturbances to a horizontally homogeneous background flow. Explicit expressions for pseudoenergy and pseudomomentum in the anelastic and Boussinesq models are derived, and it is shown how the previously derived relations for the two-dimensional problem can be treated as a limiting case of the three-dimensional problem. The results also generalize earlier three-dimensional results in that there is no slowly varying WKB-type requirement on the background flow, and the results are extendable to finite amplitude. The relationship A_E = c A_P between pseudoenergy A_E and pseudomomentum A_P, where c is the horizontal phase speed in the direction of symmetry associated with A_P, has important applications to gravity-wave parameterization and provides a generalized statement of the first Eliassen–Palm theorem.
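For reference, the wave-activity relations summarized here can be written schematically as below; the flux vectors F_P and F_E and the rest of the notation are generic assumptions, not expressions taken from the paper.

```latex
% Schematic wave-activity conservation laws and the pseudoenergy-pseudomomentum
% relation (generic notation; the flux vectors \mathbf{F}_P, \mathbf{F}_E are assumed):
\begin{align}
  \frac{\partial A_P}{\partial t} + \nabla\cdot\mathbf{F}_P &= 0,
  \qquad
  \frac{\partial A_E}{\partial t} + \nabla\cdot\mathbf{F}_E = 0,\\
  A_E &= c\,A_P
  \quad\text{for a disturbance propagating steadily at horizontal phase speed } c.
\end{align}
```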

Relevance: 90.00%

Abstract:

The orientation of the heliospheric magnetic field (HMF) in near-Earth space is generally a good indicator of the polarity of HMF foot points at the photosphere. There are times, however, when the HMF folds back on itself (is inverted), as indicated by suprathermal electrons locally moving sunward, even though they must ultimately be carrying the heat flux away from the Sun. Analysis of the near-Earth solar wind during the period 1998–2011 reveals that inverted HMF is present approximately 5.5% of the time and is generally associated with slow, dense solar wind and relatively weak HMF intensity. Inverted HMF is mapped to the coronal source surface, where a new method is used to estimate coronal structure from the potential-field source-surface model. We find a strong association with bipolar streamers containing the heliospheric current sheet, as expected, but also with unipolar or pseudostreamers, which contain no current sheet. Because large-scale inverted HMF is a widely accepted signature of interchange reconnection at the Sun, this finding provides strong evidence for models of the slow solar wind which involve coronal loop opening by reconnection within pseudostreamer belts as well as the bipolar streamer belt. Occurrence rates of bipolar- and pseudostreamers suggest that they are equally likely to result in inverted HMF and, therefore, presumably undergo interchange reconnection at approximately the same rate. Given the different magnetic topologies involved, this suggests the rate of reconnection is set externally, possibly by the differential rotation rate which governs the circulation of open solar flux.
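For concreteness, a minimal sketch of how inverted HMF can be flagged by comparing the suprathermal-electron strahl direction with the local field polarity; the variable names and synthetic data below are assumptions, not the study's dataset or method details.

```python
import numpy as np

# Sketch of flagging "inverted" HMF: the suprathermal-electron strahl must carry heat
# flux away from the Sun, so if its radial motion is locally sunward the field is
# folded back. Variable names and the synthetic data are assumptions, not the study's.

def inverted_hmf_fraction(b_r, strahl_pitch_angle_deg):
    """Fraction of samples in which the strahl's radial motion opposes the outward direction."""
    radial_sense = np.sign(np.cos(np.deg2rad(strahl_pitch_angle_deg))) * np.sign(b_r)
    return np.mean(radial_sense < 0)      # negative sense = locally folded-back field

# Toy usage with synthetic measurements:
rng = np.random.default_rng(0)
b_r = rng.normal(0.0, 3.0, 10_000)                  # radial HMF component [nT]
strahl_pa = rng.choice([20.0, 160.0], size=10_000)  # dominant strahl pitch angle [deg]
print(f"inverted HMF fraction: {inverted_hmf_fraction(b_r, strahl_pa):.1%}")
```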

Relevance: 90.00%

Abstract:

In order to improve the quality of healthcare services, an integrated, large-scale medical information system is needed that can adapt to the changing medical environment. In this paper, we propose a requirement-driven, hierarchically layered architecture for healthcare information systems. The system operates through a mapping mechanism between these layers and can therefore organize its functions dynamically to adapt to user requirements. Furthermore, we introduce organizational semiotics methods to capture and analyze user requirements through ontology charts and norms. Based on these results, the structure of the user requirement pattern (URP) is established as the driving factor of our system. Our research contributes to the design of healthcare system architectures that can adapt to the changing medical environment.
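As an illustration only, a small sketch of how a user requirement pattern might drive function selection through a mapping between layers; the class names, fields and example values are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass, field

# Illustrative-only sketch of a user requirement pattern (URP) driving the selection
# of system functions through a mapping between layers. The class names, fields and
# example values are hypothetical, not taken from the paper.

@dataclass
class UserRequirementPattern:
    actor: str                 # e.g. "clinician" or "patient"
    norms: list[str]           # behaviour norms from the organizational semiotics analysis
    ontology_terms: list[str]  # concepts captured in the ontology chart

@dataclass
class LayerMapping:
    # maps ontology terms (requirement layer) to function identifiers (service layer)
    term_to_functions: dict[str, list[str]] = field(default_factory=dict)

    def resolve(self, urp: UserRequirementPattern) -> list[str]:
        """Dynamically assemble the set of functions that satisfies a URP."""
        functions: list[str] = []
        for term in urp.ontology_terms:
            functions.extend(self.term_to_functions.get(term, []))
        return sorted(set(functions))

mapping = LayerMapping({"appointment": ["schedule_visit"], "record": ["fetch_ehr"]})
urp = UserRequirementPattern("clinician", ["must-log-access"], ["record", "appointment"])
print(mapping.resolve(urp))   # ['fetch_ehr', 'schedule_visit']
```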

Relevance: 90.00%

Abstract:

In January 2008, central and southern China experienced persistent low temperatures, freezing rain, and snow. The large-scale conditions associated with the occurrence and development of these snowstorms are examined in order to identify the key synoptic controls leading to this event. Three main factors are identified: 1) the persistent blocking high over Siberia, which remained quasi-stationary around 65°E for 3 weeks, led to advection of dry and cold Siberian air down to central and southern China; 2) a strong persistent southwesterly flow associated with the western Pacific subtropical high led to enhanced moisture advection from the Bay of Bengal into central and southern China; and 3) the deep inversion layer in the lower troposphere associated with the extended snow cover over most of central and southern China. The combination of these three factors is likely responsible for the unusual severity of the event, and hence its long return period.

Relevance: 90.00%

Abstract:

There exists a well-developed body of theory based on quasi-geostrophic (QG) dynamics that is central to our present understanding of large-scale atmospheric and oceanic dynamics. An important question is the extent to which this body of theory may generalize to more accurate dynamical models. As a first step in this process, we here generalize a set of theoretical results, concerning the evolution of disturbances to prescribed basic states, to semi-geostrophic (SG) dynamics. SG dynamics, like QG dynamics, is a Hamiltonian balanced model whose evolution is described by the material conservation of potential vorticity, together with an invertibility principle relating the potential vorticity to the advecting fields. SG dynamics has features that make it a good prototype for balanced models that are more accurate than QG dynamics. In the first part of this two-part study, we derive a pseudomomentum invariant for the SG equations, and use it to obtain: (i) linear and nonlinear generalized Charney–Stern theorems for disturbances to parallel flows; (ii) a finite-amplitude local conservation law for the invariant, obeying the group-velocity property in the WKB limit; and (iii) a wave-mean-flow interaction theorem consisting of generalized Eliassen–Palm flux diagnostics, an elliptic equation for the stream-function tendency, and a non-acceleration theorem. All these results are analogous to their QG forms. The pseudomomentum invariant – a conserved second-order disturbance quantity that is associated with zonal symmetry – is constructed using a variational principle in a similar manner to the QG calculations. Such an approach is possible when the equations of motion under the geostrophic momentum approximation are transformed to isentropic and geostrophic coordinates, in which the ageostrophic advection terms are no longer explicit. Symmetry-related wave-activity invariants such as the pseudomomentum then arise naturally from the Hamiltonian structure of the SG equations. We avoid use of the so-called ‘massless layer’ approach to the modelling of isentropic gradients at the lower boundary, preferring instead to incorporate explicitly those boundary contributions into the wave-activity and stability results. This makes the analogy with QG dynamics most transparent. This paper treats the f-plane Boussinesq form of SG dynamics, and its recent extension to β-plane, compressible flow by Magnusdottir & Schubert. In the limit of small Rossby number, the results reduce to their respective QG forms. Novel features particular to SG dynamics include apparently unnoticed lateral boundary stability criteria in (i), and the necessity of including additional zonal-mean eddy correlation terms besides the zonal-mean potential vorticity fluxes in the wave-mean-flow balance in (iii). In the companion paper, wave-activity conservation laws and stability theorems based on the SG form of the pseudoenergy are presented.
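For orientation, the QG prototypes that these SG results generalize can be stated schematically as below; this is the standard textbook form in assumed notation, not the paper's SG expressions.

```latex
% Schematic QG prototypes of the generalized quantities (assumed notation):
\begin{align}
  A_P &= -\,\frac{\overline{q'^{2}}}{2\,\partial\bar q/\partial y}
      && \text{pseudomomentum of disturbances } q' \text{ to a zonal-mean PV } \bar q,\\
  \frac{\partial A_P}{\partial t} + \nabla\cdot\mathbf{F} &= 0
      && \text{local conservation, with } \mathbf{F} \text{ the (generalized) Eliassen--Palm flux,}
\end{align}
so that $\nabla\cdot\mathbf{F}=0$ implies no mean-flow acceleration (non-acceleration theorem),
and a single-signed $\partial\bar q/\partial y$, together with the corresponding boundary terms,
implies stability (Charney--Stern).
```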

Relevance: 90.00%

Abstract:

Multiple equilibria in a coupled ocean–atmosphere–sea ice general circulation model (GCM) of an aquaplanet with many degrees of freedom are studied. Three different stable states are found for exactly the same set of parameters and external forcings: a cold state in which a polar sea ice cap extends into the midlatitudes; a warm state, which is ice free; and a completely sea ice–covered “snowball” state. Although low-order energy balance models of the climate are known to exhibit intransitivity (i.e., more than one climate state for a given set of governing equations), the results reported here are the first to demonstrate that this is a property of a complex coupled climate model with a consistent set of equations representing the 3D dynamics of the ocean and atmosphere. The coupled model notably includes atmospheric synoptic systems, large-scale circulation of the ocean, a fully active hydrological cycle, sea ice, and a seasonal cycle. There are no flux adjustments, with the system being solely forced by incoming solar radiation at the top of the atmosphere. It is demonstrated that the multiple equilibria owe their existence to the presence of meridional structure in ocean heat transport: namely, a large heat transport out of the tropics and a relatively weak high-latitude transport. The associated large midlatitude convergence of ocean heat transport leads to a preferred latitude at which the sea ice edge can rest. The mechanism operates in two very different ocean circulation regimes, suggesting that the stabilization of the large ice cap could be a robust feature of the climate system. Finally, the role of ocean heat convergence in permitting multiple equilibria is further explored in simpler models: an atmospheric GCM coupled to a slab mixed layer ocean and an energy balance model.
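As a minimal illustration of the intransitivity noted above for low-order energy balance models, a zero-dimensional sketch with ice-albedo feedback (illustrative parameter values, not the coupled GCM's) already produces more than one stable state:

```python
import numpy as np

# Zero-dimensional energy-balance sketch of climate multiple equilibria through
# ice-albedo feedback alone. All parameter values are illustrative, not the GCM's.

S0 = 1360.0           # solar constant [W m-2]
A, B = 210.0, 2.0     # OLR linearisation: OLR = A + B*(T - 273.15) [W m-2]

def albedo(T):
    """Smooth transition from an ice-covered (0.6) to an ice-free (0.3) planet."""
    return 0.6 - 0.3 * (1 + np.tanh((T - 265.0) / 10.0)) / 2

def net_flux(T):
    """Absorbed shortwave minus outgoing longwave [W m-2]."""
    return S0 / 4 * (1 - albedo(T)) - (A + B * (T - 273.15))

# Equilibria are the temperatures where the net flux changes sign.
T = np.linspace(200.0, 330.0, 5001)
F = net_flux(T)
equilibria = T[:-1][np.sign(F[:-1]) != np.sign(F[1:])]
print("equilibrium temperatures [K]:", np.round(equilibria, 1))
```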

Relevance: 90.00%

Abstract:

The authors study the role of ocean heat transport (OHT) in the maintenance of a warm, equable, ice-free climate. An ensemble of idealized aquaplanet GCM calculations is used to assess the equilibrium sensitivity of global mean surface temperature and its equator-to-pole gradient (ΔT) to variations in OHT, prescribed through a simple analytical formula representing export out of the tropics and poleward convergence. Low-latitude OHT warms the mid- to high latitudes without cooling the tropics; the global mean surface temperature increases by 1°C and ΔT decreases by 2.6°C for every 0.5 PW increase in OHT across 30° latitude. This warming is relatively insensitive to the detailed meridional structure of OHT. It occurs in spite of near-perfect atmospheric compensation of large imposed variations in OHT: the total poleward heat transport is nearly fixed. The warming results from a convective adjustment of the extratropical troposphere. Increased OHT drives a shift from large-scale to convective precipitation in the midlatitude storm tracks. Warming arises primarily from enhanced greenhouse trapping associated with convective moistening of the upper troposphere. Warming extends to the poles by atmospheric processes even in the absence of high-latitude OHT. A new conceptual model for equable climates is proposed, in which OHT plays a key role by driving enhanced deep convection in the midlatitude storm tracks. In this view, the climatic impact of OHT depends on its effects on the greenhouse properties of the atmosphere, rather than its ability to increase the total poleward energy transport.
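A sketch, under an assumed functional form and assumed parameters (not the paper's exact formula), of a prescribed OHT profile that exports heat out of the tropics, together with its implied poleward convergence:

```python
import numpy as np

# Sketch of a prescribed poleward ocean heat transport (OHT) profile that exports
# heat out of the tropics, and of its implied convergence. The functional form and
# the parameter values are assumptions for illustration, not the paper's formula.

def oht(lat_deg, amplitude_pw=1.0, n=2):
    """Poleward OHT [PW]: antisymmetric about the equator and vanishing at the poles."""
    phi = np.deg2rad(lat_deg)
    return amplitude_pw * np.sin(phi) * np.cos(phi) ** (2 * n)

def oht_convergence(lat_deg, **kwargs):
    """Implied heating of the ocean surface, -d(OHT)/dA, in W m-2."""
    earth_radius = 6.371e6                                        # [m]
    phi = np.deg2rad(lat_deg)
    doht_dphi = np.gradient(oht(lat_deg, **kwargs) * 1e15, phi)   # [W per radian]
    area_per_radian = 2 * np.pi * earth_radius**2 * np.cos(phi)   # [m2 per radian]
    return -doht_dphi / area_per_radian

lat = np.linspace(-89.0, 89.0, 179)
print(f"OHT across 30N: {oht(30.0):.2f} PW")
print(f"peak mid-latitude convergence: {oht_convergence(lat).max():.1f} W m-2")
```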

Relevance: 90.00%

Abstract:

The responsibility to record civilian casualties in both armed conflict and civil disturbances must be an integral element of the responsibility to protect, particularly in the application of the just cause principles. The first part of this article examines the threshold issue of the possibility of large-scale civilian casualties which triggers the international community’s responsibility to react. The reports recommending the responsibility to protect emphasise the need to establish the actuality or risk of ‘large scale’ loss of life which is not possible in the current context without a civilian casualty recording structure. The second part of the article outlines the international legal obligation to record civilian casualties based on international humanitarian law and international human rights law. Thirdly, the responsibility to protect and the legal obligation to record casualties are brought together within the framework of Ban Ki-moon’s reports on implementation of the Responsibility to Protect. The fourth and final part of the article reviews the situations in Sri Lanka and Syria. Both states represent egregious examples of governments hiding the existence of casualties, resulting in paralysis within the international community. These situations establish, beyond doubt, that the national obligation to record civilian casualties must be part and parcel of the responsibility to protect.

Relevance: 90.00%

Abstract:

The large scale urban consumption of energy (LUCY) model simulates all components of anthropogenic heat flux (QF) from the global to the individual city scale at 2.5 × 2.5 arc-minute resolution. This includes a database of different working patterns and public holidays, vehicle use and energy consumption in each country. The databases can be edited to include specific diurnal and seasonal vehicle and energy consumption patterns, local holidays and flows of people within a city. If better information about individual cities becomes available within this (open-source) database, the accuracy of this model can only improve, providing the community with data from the global scale for climate modelling down to the individual city scale. The results show that QF varied widely through the year, through the day, between countries and between urban areas. An assessment of the estimated heat emissions revealed that they are reasonably close to those produced by a global model and a number of small-scale city models, so results from LUCY can be used with a degree of confidence. From LUCY, the global mean urban QF has a diurnal range of 0.7–3.6 W m−2, and is greater on weekdays than at weekends. Heat release from buildings is the largest contributor to heat emissions globally (89–96%). Differences between months are greatest in the middle of the day (up to 1 W m−2 at 1 pm). December to February, the coldest months in the Northern Hemisphere, have the highest heat emissions; July and August are at the higher end, and the least QF is emitted in May. The highest individual grid-cell heat fluxes in urban areas (in W m−2) were located in New York (577), Paris (261.5), Tokyo (178), San Francisco (173.6), Vancouver (119) and London (106.7).
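Purely as an illustration of how such a flux can be assembled, a sketch that sums hypothetical building, traffic and metabolic components over a diurnal cycle; the profiles and magnitudes are placeholders, not LUCY's database values.

```python
import numpy as np

# Illustration of how an hourly anthropogenic heat flux Q_F can be assembled from
# building, traffic and metabolic components, in the spirit of the model described
# above. All profiles and magnitudes are placeholders, not LUCY's database values.

hours = np.arange(24)

def diurnal_profile(peak_hours, width=3.0):
    """Smooth 24-hour weighting with peaks at the given local hours, normalised to mean 1."""
    w = sum(np.exp(-0.5 * ((hours - p) / width) ** 2) for p in peak_hours)
    return w / w.mean()

qf_building   = 2.00 * diurnal_profile([8, 19])   # W m-2; dominant term
qf_traffic    = 0.20 * diurnal_profile([8, 17])   # W m-2; rush-hour peaks
qf_metabolism = 0.05 * np.where((hours >= 7) & (hours <= 23), 1.2, 0.6)  # W m-2

qf_total = qf_building + qf_traffic + qf_metabolism
print(f"diurnal range of Q_F: {qf_total.min():.2f}-{qf_total.max():.2f} W m-2")
print(f"building share of total emissions: {qf_building.sum() / qf_total.sum():.0%}")
```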

Relevance: 90.00%

Abstract:

Diurnal warming events between 5 and 7 K, spatially coherent over large areas (∼1000 km), are observed in independent satellite measurements of ocean surface temperature. The majority of the large events occurred in the extra-tropics. Given sufficient heating (from solar radiation), the location and magnitude of these events appear to be primarily determined by large-scale wind patterns. The amplitude of the measured diurnal heating scales inversely with the spatial resolution of the different sensors used in this study. These results indicate that predictions of peak diurnal warming using wind speeds with a 25 km spatial resolution available from satellite sensors and those with 50–100 km resolution from Numerical Weather Prediction models may have underestimated warming. Thus, the use of these winds in modeling diurnal effects will be limited in accuracy by both the temporal and spatial resolution of the wind fields.
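A toy sketch of the resolution effect described above: if diurnal warming decreases nonlinearly with wind speed (the dSST(u) relation below is a made-up placeholder, not the study's empirical model), computing warming from spatially averaged winds misses the calm patches that warm the most.

```python
import numpy as np

# Toy demonstration of the resolution effect: if diurnal warming falls off
# nonlinearly with wind speed, applying the relation to spatially averaged winds
# misses the calm patches that warm the most. The dSST(u) relation below is a
# made-up placeholder, not the empirical model used in the study.

rng = np.random.default_rng(1)
u_highres = rng.weibull(2.0, size=(400, 400)) * 6.0   # ~km-scale wind speeds [m s-1]

def dsst(u):
    """Hypothetical peak diurnal warming [K]: ~5 K in calm air, decaying with wind."""
    return 5.0 / (1.0 + 0.8 * u)

block = 25                                            # emulate a 25x coarser wind product
u_coarse = u_highres.reshape(400 // block, block, 400 // block, block).mean(axis=(1, 3))

print(f"peak warming from full-resolution winds: {dsst(u_highres).max():.2f} K")
print(f"peak warming from block-averaged winds:  {dsst(u_coarse).max():.2f} K")
```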

Relevance: 90.00%

Abstract:

Background: Expression microarrays are increasingly used to obtain large scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identifies an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
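The study's own pipeline is an R function; as a rough Python sketch of the same kind of processing (synthetic data, assumed array sizes, quantile normalisation chosen here only as an example method), the log2 transformation, normalisation and replicate-based SD estimate might look like this:

```python
import numpy as np

# Rough Python sketch of the kind of processing described above (the study's own
# pipeline is an R function): log2 transformation, quantile normalisation across
# arrays and a replicate-based SD estimate. The synthetic data, array sizes and
# noise level (chosen to be of the order reported above) are assumptions.

def quantile_normalise(x):
    """Force every array (column) to share the same distribution of intensities."""
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    reference = np.sort(x, axis=0).mean(axis=1)       # mean of sorted columns
    return reference[ranks]

rng = np.random.default_rng(42)
genes, arrays = 2000, 6
true_log2 = rng.normal(8.0, 2.0, size=(genes, 1))                      # underlying expression
raw = 2.0 ** (true_log2 + rng.normal(0.0, 0.5, size=(genes, arrays)))  # replicate arrays

norm = quantile_normalise(np.log2(raw))
interarray_sd = norm.std(axis=1, ddof=1).mean()       # SD of each gene across replicates
print(f"mean inter-array SD: {interarray_sd:.2f} log2 units")
```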

Relevance: 90.00%

Abstract:

The parameterisation of diabatic processes in numerical models is critical for the accuracy of weather forecasts and for climate projections. A novel approach to the evaluation of these processes in models is introduced in this contribution. The approach combines a suite of on-line tracer diagnostics with off-line trajectory calculations. Each tracer tracks cumulative changes in potential temperature associated with a particular parameterised diabatic process in the model. A comparison of tracers therefore allows the identification of the most active diabatic processes and their downstream impacts. The tracers are combined with trajectories computed using model-resolved winds, allowing the various diabatic contributions to be tracked back to their time and location of occurrence. We have used this approach to investigate diabatic processes within a simulated extratropical cyclone. We focus on the warm conveyor belt, in which the dominant diabatic contributions come from large-scale latent heating and parameterised convection. By contrasting two simulations, one with standard convection parameterisation settings and another with reduced parameterised convection, the effects of parameterised convection on the structure of the cyclone have been determined. Under reduced parameterised convection conditions, the large-scale latent heating is forced to release convective instability that would otherwise have been released by the convection parameterisation. Although the spatial distribution of precipitation depends on the details of the split between parameterised convection and large-scale latent heating, the total precipitation amount associated with the cyclone remains largely unchanged. For reduced parameterised convection, a more rapid and stronger latent heating episode takes place as air ascends within the warm conveyor belt.
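A schematic sketch of the tracer-plus-trajectory bookkeeping; the grids, process names and numbers below are illustrative assumptions, not the model's fields.

```python
import numpy as np

# Schematic sketch of the tracer-plus-trajectory bookkeeping: each tracer holds the
# accumulated potential-temperature change from one parameterised process, so
# sampling the tracers along a trajectory attributes the parcel's net heating to
# individual processes. Grids, process names and numbers are illustrative, not the model's.

rng = np.random.default_rng(3)
nz, ny, nx = 20, 40, 40
tracers = {                                   # accumulated d(theta) per process [K]
    "large_scale_latent_heating": np.cumsum(rng.random((nz, ny, nx)), axis=0) * 0.05,
    "parameterised_convection":   np.cumsum(rng.random((nz, ny, nx)), axis=0) * 0.02,
}

# One resolved-wind trajectory, given as (level, j, i) grid indices through time.
trajectory = np.array([[2, 30, 5], [5, 28, 10], [9, 25, 16], [14, 20, 24]])

def attribute_heating(tracer_fields, traj):
    """Change in each accumulated-heating tracer between trajectory start and end."""
    start, end = traj[0], traj[-1]
    return {name: field[tuple(end)] - field[tuple(start)]
            for name, field in tracer_fields.items()}

for process, dtheta in attribute_heating(tracers, trajectory).items():
    print(f"{process}: {dtheta:+.2f} K along the warm-conveyor-belt ascent")
```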

Relevance: 90.00%

Abstract:

In late February 2010 the extraordinary windstorm Xynthia crossed over Southwestern and Central Europe and caused severe damage, affecting particularly the Spanish and French Atlantic coasts. The storm was embedded in uncommon large-scale atmospheric and boundary conditions prior to and during its development, namely enhanced sea surface temperatures (SST) within the low-level entrainment zone of air masses, an unusual southerly position of the polar jet stream, and a remarkable split jet structure in the upper troposphere. To analyse the processes that led to the rapid intensification of this exceptional storm originating close to the subtropics (30°N), the sensitivity of the cyclone intensification to latent heat release is determined using the regional climate model COSMO-CLM forced with ERA-Interim data. A control simulation with observed SST shows that moist and warm air masses originating from the subtropical North Atlantic were involved in the cyclogenesis process and led to the formation of a vertical tower with high values of potential vorticity (PV). Sensitivity studies with reduced SST or increased laminar boundary roughness for heat led to reduced surface latent heat fluxes. This induced both a weaker and partly retarded development of the cyclone and a weakening of the PV-tower together with reduced diabatic heating rates, particularly at lower and mid levels. We infer that diabatic processes played a crucial role during the phase of rapid deepening of Xynthia and thus in its intensity over the Southeastern North Atlantic. We suggest that windstorms like Xynthia may occur more frequently under future climate conditions due to the warming SSTs and potentially enhanced latent heat release, thus increasing the windstorm risk for Southwestern Europe.

Relevance: 90.00%

Abstract:

The Wetland and Wetland CH4 Intercomparison of Models Project (WETCHIMP) was created to evaluate our present ability to simulate large-scale wetland characteristics and corresponding methane (CH4) emissions. A multi-model comparison is essential to evaluate the key uncertainties in the mechanisms and parameters leading to methane emissions. Ten modelling groups joined WETCHIMP to run eight global and two regional models with a common experimental protocol using the same climate and atmospheric carbon dioxide (CO2) forcing datasets. We reported the main conclusions from the intercomparison effort in a companion paper (Melton et al., 2013). Here we provide technical details for the six experiments, which included an equilibrium, a transient, and an optimized run plus three sensitivity experiments (temperature, precipitation, and atmospheric CO2 concentration). The diversity of approaches used by the models is summarized through a series of conceptual figures, and is used to evaluate the wide range of wetland extent and CH4 fluxes predicted by the models in the equilibrium run. We discuss relationships among the various approaches and patterns in consistencies of these model predictions. Within this group of models, there are three broad classes of methods used to estimate wetland extent: prescribed based on wetland distribution maps, prognostic relationships between hydrological states based on satellite observations, and explicit hydrological mass balances. A larger variety of approaches was used to estimate the net CH4 fluxes from wetland systems. Even though modelling of wetland extent and CH4 emissions has progressed significantly over recent decades, large uncertainties still exist when estimating CH4 emissions: there is little consensus on model structure or complexity due to knowledge gaps, different aims of the models, and the range of temporal and spatial resolutions of the models.

Relevance: 90.00%

Abstract:

Before the advent of genome-wide association studies (GWASs), hundreds of candidate genes for obesity-susceptibility had been identified through a variety of approaches. We examined whether those obesity candidate genes are enriched for associations with body mass index (BMI) compared with non-candidate genes by using data from a large-scale GWAS. A thorough literature search identified 547 candidate genes for obesity-susceptibility based on evidence from animal studies, Mendelian syndromes, linkage studies, genetic association studies and expression studies. Genomic regions were defined to include the genes ±10 kb of flanking sequence around candidate and non-candidate genes. We used summary statistics publicly available from the discovery stage of the genome-wide meta-analysis for BMI performed by the Genetic Investigation of ANthropometric Traits (GIANT) consortium in 123 564 individuals. Hypergeometric, rank tail-strength and gene-set enrichment analysis tests were used to test for the enrichment of association in candidate compared with non-candidate genes. The hypergeometric test of enrichment was not significant at the 5% P-value quantile (P = 0.35), but was nominally significant at the 25% quantile (P = 0.015). The rank tail-strength and gene-set enrichment tests were nominally significant for the full set of genes and borderline significant for the subset excluding SNPs with P < 10^-7. Taken together, the observed evidence for enrichment suggests that the candidate gene approach retains some value. However, the degree of enrichment is small despite the extensive number of candidate genes and the large sample size. Studies that focus on candidate genes have only slightly increased chances of detecting associations, and are likely to miss many true effects in non-candidate genes, at least for obesity-related traits.
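A sketch of the hypergeometric enrichment test with illustrative counts; the candidate-gene total matches the abstract, while the total gene count and tail counts are hypothetical placeholders.

```python
from scipy.stats import hypergeom

# Sketch of the hypergeometric enrichment test: are candidate genes over-represented
# among the genes most strongly associated with BMI? The candidate-gene total matches
# the abstract; the other counts are hypothetical placeholders.

def enrichment_pvalue(n_total, n_candidate, n_top, n_candidate_in_top):
    """P(observing >= n_candidate_in_top candidates among the n_top top-ranked genes)."""
    return hypergeom.sf(n_candidate_in_top - 1, n_total, n_candidate, n_top)

n_total = 18_000            # genes tested genome-wide (assumed)
n_candidate = 547           # literature-based obesity candidate genes
n_top = n_total // 4        # genes in the top 25% of association P-values
n_candidate_in_top = 160    # hypothetical count of candidates in that tail

p = enrichment_pvalue(n_total, n_candidate, n_top, n_candidate_in_top)
print(f"hypergeometric enrichment P = {p:.3f}")
```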