901 results for onshore AC grid


Relevance:

20.00%

Abstract:

Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; and finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
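The data-parallel pattern surveyed in the chapter can be sketched in a few lines: partitions of the data are mined independently and the partial results are merged. A minimal sketch, assuming a toy transaction dataset, a two-partition split and a 0.5 support threshold (all illustrative, not from the chapter):

```python
# Minimal sketch: data-parallel counting of frequent items across data partitions.
# Each partition is mined independently (map) and partial counts are merged (reduce).
from collections import Counter
from multiprocessing import Pool

def count_partition(transactions):
    """Map step: count item occurrences within one data partition."""
    counts = Counter()
    for transaction in transactions:
        counts.update(set(transaction))
    return counts

def frequent_items(partitions, min_support):
    """Reduce step: merge partial counts and keep items above the support threshold."""
    with Pool() as pool:
        partial = pool.map(count_partition, partitions)
    total = sum(partial, Counter())
    n_transactions = sum(len(p) for p in partitions)
    return {item: c for item, c in total.items() if c / n_transactions >= min_support}

if __name__ == "__main__":
    partitions = [[["a", "b"], ["a", "c"]], [["a", "b"], ["b", "c"]]]  # two toy partitions
    print(frequent_items(partitions, min_support=0.5))  # all three items pass the threshold
```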

Relevance:

20.00%

Abstract:

We present the results of simulations carried out with the Met Office Unified Model at 12km, 4km and 1.5km resolution for a large region centred on West Africa, using several different representations of convection. These span the range of resolutions from much coarser than the scale of the convective processes to cloud-system resolving, and thus encompass the intermediate "grey zone". The diurnal cycle in the extent of convective regions in the models is tested against observations from the Geostationary Earth Radiation Budget instrument on Meteosat-8. By this measure, the two best-performing simulations are a 12km model without convective parametrization, using Smagorinsky-style sub-grid-scale mixing in all three dimensions, and a 1.5km simulation with two-dimensional Smagorinsky mixing. Of these, the 12km model produces a better match to the magnitude of the total cloud fraction but the 1.5km simulation gives better timing for its peak value. The results suggest that the previously reported improvement in the representation of the diurnal cycle of convective organisation in the 4km model compared to the standard 12km configuration is principally a result of the convection scheme employed rather than the improved resolution per se. The details of and implications for high-resolution model simulations are discussed.
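The diurnal-cycle comparison amounts to compositing hourly cloud (or convective) fraction fields by hour of day and comparing the resulting 24-value curves between model and observations. A minimal sketch with synthetic data (array shapes and values are assumptions, not GERB or model output):

```python
# Minimal sketch: mean diurnal cycle of domain-average cloud fraction from hourly fields.
import numpy as np

def diurnal_cycle(fields, hours):
    """fields: (ntime, ny, nx) hourly cloud fraction; hours: hour of day for each step."""
    hours = np.asarray(hours) % 24
    return np.array([fields[hours == h].mean() for h in range(24)])

rng = np.random.default_rng(0)
fields = rng.random((48, 10, 10))           # two synthetic days on a 10 x 10 domain
cycle = diurnal_cycle(fields, np.arange(48))
print(cycle.shape, cycle.argmax())          # 24 hourly means and the hour of the peak
```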

Relevance:

20.00%

Abstract:

By modelling the average activity of large neuronal populations, continuum mean field models (MFMs) have become an increasingly important theoretical tool for understanding the emergent activity of cortical tissue. In order to be computationally tractable, long-range propagation of activity in MFMs is often approximated with partial differential equations (PDEs). However, PDE approximations in current use correspond to underlying axonal velocity distributions incompatible with experimental measurements. In order to rectify this deficiency, we here introduce novel propagation PDEs that give rise to smooth unimodal distributions of axonal conduction velocities. We also argue that velocities estimated from fibre diameters in slice and from latency measurements, respectively, relate quite differently to such distributions, a significant point for any phenomenological description. Our PDEs are then successfully fit to fibre diameter data from human corpus callosum and rat subcortical white matter. This makes it possible for the first time to simulate long-range conduction in the mammalian brain with realistic, convenient PDEs. Furthermore, the obtained results suggest that the propagation of activity in rat and human differs significantly beyond mere scaling. The dynamical consequences of our new formulation are investigated in the context of a well-known neural field model. On the basis of Turing instability analyses, we conclude that pattern formation is more easily initiated using our more realistic propagator. By increasing characteristic conduction velocities, a smooth transition can occur from self-sustaining bulk oscillations to travelling waves of various wavelengths, which may influence axonal growth during development. Our analytic results are also corroborated numerically using simulations on a large spatial grid. Thus we provide here a comprehensive analysis of empirically constrained activity propagation in the context of MFMs, which will allow more realistic studies of mammalian brain activity in the future.
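For orientation, the propagator most often replaced by a PDE in such models is the damped wave equation, which effectively assumes a single characteristic conduction velocity; in one standard notation (this is the conventional baseline, not the new propagators introduced here):

\[
\left( \frac{1}{\gamma^{2}}\frac{\partial^{2}}{\partial t^{2}}
     + \frac{2}{\gamma}\frac{\partial}{\partial t}
     + 1 - r^{2}\nabla^{2} \right)\phi(\mathbf{x},t) = S(\mathbf{x},t),
\qquad \gamma = \frac{v}{r},
\]

where \(\phi\) is the propagating activity, \(S\) its local source (the firing rate), \(r\) the characteristic axonal range and \(v\) the conduction velocity. The propagators discussed above generalize this form so that the implied distribution of conduction velocities is smooth and unimodal, consistent with fibre diameter data.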

Relevance:

20.00%

Abstract:

The surface mass balance for Greenland and Antarctica has been calculated using model data from an AMIP-type experiment for the period 1979–2001, using the ECHAM5 spectral transform model at different triangular truncations. There is a significant reduction in the calculated ablation for the highest model resolution, T319, with an equivalent grid distance of about 40 km. As a consequence the T319 model has a positive surface mass balance for both ice sheets during the period. For Greenland, the models at lower resolution, T106 and T63, on the other hand, have a much stronger ablation leading to a negative surface mass balance. Calculations have also been undertaken for a climate change experiment using the IPCC scenario A1B, with a T213 resolution (corresponding to a grid distance of some 60 km), comparing two 30-year periods from the end of the twentieth century and the end of the twenty-first century, respectively. For Greenland there is a change of 495 km³/year, going from a positive to a negative surface mass balance and corresponding to a sea level rise of 1.4 mm/year. For Antarctica there is an increase in the positive surface mass balance of 285 km³/year, corresponding to a sea level fall of 0.8 mm/year. The surface mass balance changes of the two ice sheets lead to a sea level rise of 7 cm at the end of this century compared to the end of the twentieth century. Other possible mass losses, such as those due to changes in the calving of icebergs, are not considered. It appears that such changes must increase significantly, and several times more than the surface mass balance changes, if the ice sheets are to make a major contribution to sea level rise this century. The model calculations indicate large inter-annual variations in all relevant parameters, making it impossible to identify robust trends from the examined periods at the end of the twentieth century. The calculated inter-annual variations are similar in magnitude to observations. The 30-year trend in the surface mass balance at the end of the twenty-first century is, however, significant. The increase in precipitation on the ice sheets follows closely the Clausius–Clapeyron relation and is the main reason for the increase in the surface mass balance of Antarctica. On Greenland, precipitation in the form of snow gradually starts to decrease and cannot compensate for the increase in ablation. Another factor is the proportionally higher temperature increase on Greenland, leading to a larger ablation. It follows that a modest increase in temperature will not be sufficient to compensate for the increase in accumulation, but this will change when temperature increases go beyond a critical limit. Calculations show that such a limit for Greenland might well be passed during this century. For Antarctica this will take much longer, probably well into the following centuries.
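As a rough consistency check on the quoted conversions (this arithmetic is not from the paper), dividing the water-equivalent volume changes by a global ocean area of roughly \(3.6\times10^{8}\ \mathrm{km^{2}}\) gives

\[
\frac{495\ \mathrm{km^{3}\,yr^{-1}}}{3.6\times10^{8}\ \mathrm{km^{2}}}
\approx 1.4\times10^{-6}\ \mathrm{km\,yr^{-1}} \approx 1.4\ \mathrm{mm\,yr^{-1}},
\qquad
\frac{285\ \mathrm{km^{3}\,yr^{-1}}}{3.6\times10^{8}\ \mathrm{km^{2}}}
\approx 0.8\ \mathrm{mm\,yr^{-1}},
\]

in line with the rise and fall rates stated above.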

Relevance:

20.00%

Abstract:

The ECMWF operational grid point model (with a resolution of 1.875° of latitude and longitude) and its limited area version (with a resolution of 0.47° of latitude and longitude), with boundary values from the global model, have been used to study the simulation of typhoon Tip. The fine-mesh model was capable of simulating the main structural features of the typhoon and predicting a fall in central pressure of 60 mb in 3 days. The structure of the forecast typhoon, with a warm core (maximum potential temperature anomaly 17 K), intense swirling wind (maximum 55 m s⁻¹ at 850 mb) and spiralling precipitation patterns, is characteristic of a tropical cyclone. Comparison with the lower resolution forecast shows that the horizontal resolution is a determining factor in predicting not only the structure and intensity but even the movement of these vortices. However, an accurate and refined initial analysis is considered to be a prerequisite for a correct forecast of this phenomenon.

Relevance:

20.00%

Abstract:

A method to solve a quasi-geostrophic two-layer model including the variation of static stability is presented. The divergent part of the wind is incorporated by means of an iterative procedure. The procedure is rather fast and the time of computation is only 60–70% longer than for the usual two-layer model. The method of solution is justified by the conservation of the difference between the gross static stability and the kinetic energy. To eliminate the side-boundary conditions the experiments have been performed on a zonal channel model. The investigation falls mainly into three parts. The first part (section 5) contains a discussion of the significance of some physically inconsistent approximations. It is shown that physical inconsistencies are rather serious, and for the inconsistent models studied the total kinetic energy increased faster than the gross static stability. In the next part (section 6) we study the effect of a Jacobian difference operator which conserves the total kinetic energy. The use of this operator in two-layer models gives a slight improvement but probably does not have any practical use in short-period forecasts. It is also shown that the energy-conservative operator will change the wave speed in an erroneous way if the wave number or the grid length is large in the meridional direction. In the final part (section 7) we investigate the behaviour of baroclinic waves for some different initial states and for two energy-consistent models, one with constant and one with variable static stability. According to the linear theory the waves adjust rather rapidly in such a way that the temperature wave lags behind the pressure wave independently of the initial configuration. Thus, both models give rise to a baroclinic development even if the initial state is quasi-barotropic. The effect of the variation of static stability is very small; qualitative differences in the development are only observed during the first 12 hours. For an amplifying wave we get a stabilization over the troughs and a destabilization over the ridges.
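For context, the usual fixed-stability two-layer quasi-geostrophic model conserves the potential vorticity of each layer; in a standard notation (not necessarily that of the paper),

\[
\frac{D_{i} q_{i}}{Dt} = 0, \qquad
q_{1} = \nabla^{2}\psi_{1} + f + F_{1}(\psi_{2} - \psi_{1}), \qquad
q_{2} = \nabla^{2}\psi_{2} + f + F_{2}(\psi_{1} - \psi_{2}),
\]

where \(\psi_{i}\) are the layer streamfunctions and the coupling constants \(F_{i}\) are set by the (constant) static stability. The method described above generalizes this system by letting the static stability vary and by incorporating the divergent part of the wind iteratively.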

Relevance:

20.00%

Abstract:

A resonant transmitter–receiver system is described for the wireless transmission of energy at a useful distance for grid-coordinate power and information. Experimental results are given showing the delivery of power by an unmodified Tesla resonator, contrasted with a modified version achieving improved efficiency over a 4 m range. A theoretical basis is provided to back up the experimental results obtained and to link the study with previous research in the field. A number of potential routes are suggested for further investigation and some possible applications of the technology are considered.
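For readers outside the field, a standard figure of merit from the resonant inductive coupling literature (given here for orientation; it is not necessarily the analysis used in this paper) relates the coil coupling coefficient \(k\) and quality factors \(Q_{1}, Q_{2}\) to the best achievable link efficiency:

\[
U = k\sqrt{Q_{1}Q_{2}}, \qquad
\eta_{\max} = \frac{U^{2}}{\left(1 + \sqrt{1 + U^{2}}\right)^{2}}.
\]

Because \(k\) falls off steeply with separation, raising the quality factors of the resonators is the usual route to keeping the efficiency usable at ranges of a few metres.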

Relevance:

20.00%

Abstract:

The very first numerical models, which were developed more than 20 years ago, were drastic simplifications of the real atmosphere and were mostly restricted to describing adiabatic processes. For predictions of a day or two of the mid-tropospheric flow these models often gave reasonable results, but the results deteriorated quickly when the prediction was extended further in time. The prediction of the surface flow was unsatisfactory even for short predictions. It was evident that both the energy-generating processes and the dissipative processes have to be included in numerical models in order to predict the weather patterns in the lower part of the atmosphere and to predict the atmosphere in general beyond a day or two. Present-day computers make it possible to attack the weather forecasting problem in a more comprehensive and complete way, and substantial efforts have been made, during the last decade in particular, to incorporate the non-adiabatic processes in numerical prediction models. The physics of radiative transfer, condensation of moisture, turbulent transfer of heat, momentum and moisture, and the dissipation of kinetic energy are the most important processes associated with the formation of energy sources and sinks in the atmosphere, and these have to be incorporated in numerical prediction models extended over more than a few days. The mechanisms of these processes are mainly related to small-scale disturbances in space and time or even to molecular processes. It is therefore one of the basic characteristics of numerical models that these small-scale disturbances cannot be included in an explicit way. One reason for this is the discretization of the model's atmosphere by a finite difference grid or the use of a Galerkin or spectral function representation. The second reason why we cannot explicitly introduce these processes into a numerical model is that some physical processes necessary to describe them (such as local buoyancy) are a priori eliminated by the constraint of hydrostatic adjustment. Even if this physical constraint can be relaxed by making the models non-hydrostatic, the scale problem is virtually impossible to solve, and for the foreseeable future we have to try to incorporate the ensemble or gross effect of these physical processes on the large-scale synoptic flow. The formulation of this ensemble effect in terms of grid-scale variables (the parameters of the large-scale flow) is called 'parameterization'. For short-range prediction of the synoptic flow at middle and high latitudes, very simple parameterization has proven to be rather successful.
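As a deliberately simple illustration of what 'parameterization' means in practice, a bulk aerodynamic formula expresses the sub-grid turbulent transfer of sensible heat entirely in terms of grid-scale variables. A minimal sketch; the transfer coefficient and sample values are assumed, illustrative numbers:

```python
# Minimal sketch of a bulk-aerodynamic parameterization: the sub-grid turbulent
# sensible heat flux is expressed through grid-scale (resolved) variables only.
RHO = 1.2      # air density [kg m^-3]
CP = 1004.0    # specific heat of dry air at constant pressure [J kg^-1 K^-1]

def sensible_heat_flux(wind_speed, t_surface, t_air, c_h=1.2e-3):
    """Upward flux H = rho * cp * C_H * |U| * (T_s - T_a), in W m^-2.
    c_h is an illustrative, untuned transfer coefficient."""
    return RHO * CP * c_h * wind_speed * (t_surface - t_air)

print(sensible_heat_flux(wind_speed=5.0, t_surface=300.0, t_air=295.0))  # about 36 W m^-2
```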

Relevance:

20.00%

Abstract:

Numerical forecasts of the atmosphere based on the fundamental dynamical and thermodynamical equations have now been carried out for almost 30 years. The very first models which were used were drastic simplifications of the governing equations, permitting only the prediction of the geostrophic wind in the middle of the troposphere based on the conservation of absolute vorticity. Since then we have seen a remarkable development in models predicting the large-scale synoptic flow. Verification carried out at NMC Washington indicates an improvement of about 40% in 24h forecasts for the 500mb geopotential since the end of the 1950s. The most advanced models of today use the equations of motion in their more original form (i.e. the primitive equations), which are better suited to predicting the atmosphere at low latitudes as well as small-scale systems. The model which we have developed at the Centre, for instance, will be able to predict weather systems from a scale of 500-1000 km and a vertical extension of a few hundred millibars up to global weather systems extending through the whole depth of the atmosphere. With a grid resolution of 1.5° and 15 vertical levels, and covering the whole globe, it is possible to describe rather accurately the thermodynamical processes associated with cyclone development. It is further possible to incorporate sub-grid-scale processes such as radiation, exchange of sensible heat, release of latent heat, etc. in order to predict the development of new weather systems and the decay of old ones. Later in this introduction I will exemplify this by showing some results of forecasts by the Centre's model.
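Those first models integrated the conservation of absolute vorticity at a single mid-tropospheric level; in standard notation, with streamfunction \(\psi\), relative vorticity \(\zeta = \nabla^{2}\psi\), Coriolis parameter \(f\) and Jacobian \(J\),

\[
\frac{d}{dt}\,(\zeta + f) = 0
\quad\Longleftrightarrow\quad
\frac{\partial}{\partial t}\nabla^{2}\psi + J\!\left(\psi,\, \nabla^{2}\psi + f\right) = 0 .
\]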

Relevance:

20.00%

Abstract:

With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis in meteorology, means the process of interpolating meteorological observations from unevenly distributed locations to a network of regularly spaced grid points. Necessitated by the requirement of numerical weather prediction models to solve the governing finite difference equations on such a grid lattice, objective analysis is a three-dimensional (or, mostly, two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network, with separated data-sparse and data-dense areas, four-dimensional analysis has in fact been used intensively for many years. Weather services have thus based their analyses not only on synoptic data at the time of the analysis and on climatology, but also on the fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified also for the conventional observations. We have fairly good coverage of surface observations 8 times a day, and several upper air stations make radiosonde and radiowind observations 4 times a day. If we use a 3-hour step in the analysis-forecasting cycle instead of the 12 hours most often applied, we may without any difficulty treat all observations as synoptic. No observation would then be more than 90 minutes off time, and the observations, even during strong transient motion, would fall within a horizontal mesh of 500 km × 500 km.
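A minimal sketch of objective analysis in the sense used above: scattered observations are interpolated to a regular grid with Cressman-type distance weighting. The 500 km influence radius and the toy observations are illustrative assumptions, not values from the text:

```python
# Minimal sketch of a Cressman-type objective analysis: scattered observations are
# interpolated onto a regular grid by distance-weighted averaging within a radius.
import numpy as np

def cressman_analysis(obs_xy, obs_val, grid_x, grid_y, radius=500.0):
    """Weighted average of observations within `radius` (km) of each grid point."""
    analysis = np.full((grid_y.size, grid_x.size), np.nan)
    for j, gy in enumerate(grid_y):
        for i, gx in enumerate(grid_x):
            d2 = (obs_xy[:, 0] - gx) ** 2 + (obs_xy[:, 1] - gy) ** 2
            w = np.maximum((radius**2 - d2) / (radius**2 + d2), 0.0)  # Cressman weights
            if w.sum() > 0.0:
                analysis[j, i] = np.sum(w * obs_val) / w.sum()
    return analysis

obs_xy = np.array([[100.0, 200.0], [800.0, 600.0], [400.0, 900.0]])  # positions [km]
obs_val = np.array([1012.0, 1004.0, 998.0])                          # e.g. pressure [hPa]
grid = np.arange(0.0, 1001.0, 250.0)
print(cressman_analysis(obs_xy, obs_val, grid, grid))
```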

Relevance:

20.00%

Abstract:

The Pax Americana and the grand strategy of hegemony (or "Primacy") that underpins it may be becoming unsustainable. Particularly in the wake of exhausting wars, the Global Financial Crisis, and the shift of wealth from West to East, it may no longer be possible or prudent for the United States to act as the unipolar sheriff or guardian of a world order. But how viable are the alternatives, and what difficulties will these alternatives entail in their design and execution? This monograph offers a sympathetic but critical analysis of the alternative U.S. National Security Strategies of "retrenchment" that critics of American diplomacy offer. In these strategies, the United States would anticipate the coming of a more multipolar world and organize its behavior around the dual principles of "concert" and "balance," seeking a collaborative relationship with other great powers, while being prepared to counterbalance any hostile aggressor that threatens world order. The proponents of such strategies argue that by scaling back its global military presence and its commitments, the United States can trade prestige for security, shift burdens, and attain a freer hand. To support this theory, they often look to the 19th-century Concert of Europe as a model of a successful security regime and to general theories about the natural balancing behavior of states. This monograph examines this precedent and measures its usefulness for contemporary statecraft, to identify how great power concerts are sustained and how they break down. The project also applies competing theories to how states might behave if world politics are in transition: Will they balance, bandwagon, or hedge? This demonstrates the multiple possible futures that could shape and be shaped by a new strategy. A new strategy based on an acceptance of multipolarity and the limits of power is prudent. There is scope for such a shift. The convergence of several trends—including transnational problems needing collaborative efforts, the military advantages of defenders, the reluctance of states to engage in unbridled competition, and hegemony fatigue among the American people—means that an opportunity exists internationally and at home for a shift to a new strategy. But a Concert-Balance strategy will still need to deal with several potential dilemmas. These include the difficulty of reconciling competitive balancing with cooperative concerts, the limits of balancing without a forward-reaching onshore military capability, possible unanticipated consequences such as a rise in regional power competition or the emergence of blocs (such as a Chinese East Asia or an Iranian Gulf), and the challenge of sustaining domestic political support for a strategy that voluntarily abdicates world leadership. These difficulties can be mitigated, but doing so will require pragmatic and gradual implementation as well as elegant theorizing, and care to avoid swapping one ironclad, doctrinaire grand strategy for another.

Relevance:

20.00%

Abstract:

Changes to the electroencephalogram (EEG) observed during general anesthesia are modeled with a physiological mean field theory of electrocortical activity. To this end a parametrization of the postsynaptic impulse response is introduced which takes into account pharmacological effects of anesthetic agents on neuronal ligand-gated ionic channels. Parameter sets for this improved theory are then identified which respect known anatomical constraints and predict mean firing rates and power spectra typically encountered in human subjects. Through parallelized simulations of the eight nonlinear, two-dimensional partial differential equations on a grid representing an entire human cortex, it is demonstrated that linear approximations are sufficient for the prediction of a range of quantitative EEG variables. More than 70 000 plausible parameter sets are finally selected and subjected to a simulated induction with the stereotypical inhaled general anesthetic isoflurane. Thereby 86 parameter sets are identified that exhibit a strong “biphasic” rise in total power, a feature often observed in experiments. A sensitivity study suggests that this “biphasic” behavior is distinguishable even at low agent concentrations. Finally, our results are briefly compared with previous work by other groups and an outlook on future fits to experimental data is provided.
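One common way to parametrize the postsynaptic impulse response in mean field models is an alpha function whose rate constant sets both rise and decay; slowing that rate is a crude stand-in for the prolongation of inhibitory responses by agents such as isoflurane. The functional form, the rates and the factor-of-two slowing below are illustrative assumptions, not the parametrization introduced in this work:

```python
# Minimal sketch: alpha-function postsynaptic response h(t) = A * g * t * exp(1 - g*t),
# which peaks at amplitude A at time t = 1/g. Halving g crudely mimics an anesthetic
# prolonging the response; the specific numbers are illustrative only.
import numpy as np

def psp(t, amplitude=1.0, rate=100.0):
    """Alpha-function impulse response; `rate` [1/s] sets the rise/decay time scale."""
    return amplitude * rate * t * np.exp(1.0 - rate * t)

t = np.linspace(0.0, 0.1, 1001)            # 100 ms
baseline = psp(t, rate=100.0)              # baseline response, peak at 10 ms
slowed = psp(t, rate=50.0)                 # prolonged response, peak at 20 ms
print(round(baseline.max(), 3), round(slowed.max(), 3))  # both peak near the amplitude
```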

Relevance:

20.00%

Abstract:

We investigate the role of the anthropogenic heat flux in the urban heat island of London. To do this, the time-varying anthropogenic heat flux is added to an urban surface-energy balance parametrization, the Met Office–Reading Urban Surface Exchange Scheme (MORUSES), implemented in a 1 km resolution version of the UK Met Office Unified Model. The anthropogenic heat flux is derived from energy-demand data for London and is specified on the model's 1 km grid; it includes variations on diurnal and seasonal time-scales. We contrast a spring case with a winter case, to illustrate the effects of the larger anthropogenic heat flux in winter and the different roles played by thermodynamics in the different seasons. The surface-energy balance channels the anthropogenic heat into heating the urban surface, which warms slowly because of its large heat capacity. About one third of this additional warming goes into increasing the outgoing long-wave radiation and only about two thirds goes into increasing the sensible heat flux that warms the atmosphere. The anthropogenic heat flux has a larger effect on screen-level temperatures in the winter case, partly because the anthropogenic flux is larger then and partly because the boundary layer is shallower in winter. For the specific winter case studied here, the anthropogenic heat flux maintains a well-mixed boundary layer through the whole night over London, whereas the surrounding rural boundary layer becomes strongly stably stratified. This finding is likely to have important implications for air quality in winter. On the whole, inclusion of the anthropogenic heat flux slightly improves the comparison between model simulations and measurements of screen-level temperature, and indicates that the anthropogenic heat flux is beginning to be an important factor in the London urban heat island.
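The budget being modified here can be written, in the standard notation of the urban energy-balance literature (the abstract does not spell it out), as

\[
Q^{*} + Q_{F} = Q_{H} + Q_{E} + \Delta Q_{S},
\]

where \(Q^{*}\) is the net all-wave radiation, \(Q_{F}\) the anthropogenic heat flux, \(Q_{H}\) and \(Q_{E}\) the turbulent sensible and latent heat fluxes, and \(\Delta Q_{S}\) the storage heat flux in the urban fabric. In the partitioning described above, part of \(Q_{F}\) is lost through increased outgoing long-wave radiation (reducing \(Q^{*}\)) and the remainder warms the atmosphere through \(Q_{H}\).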

Relevance:

20.00%

Abstract:

Details are given of the development and application of a 2D depth-integrated, conformal boundary-fitted, curvilinear model for predicting the depth-mean velocity field and the spatial concentration distribution in estuarine and coastal waters. A numerical method for conformal mesh generation, based on a boundary integral equation formulation, has been developed. By this method a general polygonal region with curved edges can be mapped onto a regular polygonal region with the same number of horizontal and vertical straight edges, and a multiply connected region can be mapped onto a regular region with the same connectivity. A stretching transformation on the conformally generated mesh has also been used to provide greater detail where it is needed close to the coast, with larger mesh sizes further offshore, thereby minimizing the computing effort whilst maximizing accuracy. The curvilinear hydrodynamic and solute model has been developed from a robust rectilinear model. The hydrodynamic equations are approximated using the ADI finite difference scheme on a staggered grid, and the solute transport equation is approximated using a modified QUICK scheme. Three numerical examples have been chosen to test the curvilinear model, with an emphasis placed on complex practical applications.
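For reference, the depth-integrated solute transport equation approximated by the modified QUICK scheme can be written in Cartesian form and conventional notation (the model itself solves the curvilinear-coordinate equivalent) as

\[
\frac{\partial (HC)}{\partial t}
+ \frac{\partial (HUC)}{\partial x}
+ \frac{\partial (HVC)}{\partial y}
= \frac{\partial}{\partial x}\!\left(H D_{x}\frac{\partial C}{\partial x}\right)
+ \frac{\partial}{\partial y}\!\left(H D_{y}\frac{\partial C}{\partial y}\right)
+ H S,
\]

where \(H\) is the total water depth, \((U, V)\) the depth-mean velocity components, \(C\) the depth-mean concentration, \(D_{x}\) and \(D_{y}\) dispersion coefficients, and \(S\) a source or sink term.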

Relevance:

20.00%

Abstract:

A statistical–dynamical downscaling (SDD) approach is applied to determine present-day and future high-resolution rainfall distributions in the catchment of the river Aksu on the southern slopes of the Tienshan Mountains, Central Asia. First, a circulation weather type (CWT) classification is employed to define typical lower-atmospheric flow regimes from ERA-40 reanalysis data. Selected representatives of each CWT are dynamically downscaled with the regional climate model COSMO-CLM 4.8 at a horizontal grid resolution of 0.0625°, using the ERA-40 reanalysis data as boundary conditions. Finally, the simulated representatives are recombined to obtain a high-resolution rainfall climatology for the present-day climate. The methodology is also applied to ensemble simulations of three different scenarios of the global climate model ECHAM5/MPI-OM1 to derive projections of rainfall changes until 2100. Comparisons of downscaled seasonal and annual rainfall with observational data suggest that the statistical–dynamical approach is able to capture the observed present-day precipitation climatology over the lowlands and the lower elevations of the Tienshan Mountains. On the other hand, a strong bias is found at higher altitudes, where precipitation is clearly underestimated by SDD. The application of SDD to the ECHAM5/MPI-OM1 ensemble reveals that precipitation changes by the end of the 21st century depend on the season. While for autumn an increase in seasonal precipitation is found for all simulations, a decrease in precipitation is obtained during winter for most parts of the Aksu catchment. The spread between different ECHAM5/MPI-OM1 ensemble members is strongest in spring, where trends of opposite sign are found. The largest changes in rainfall are simulated for the summer season, which also shows the most pronounced spatial heterogeneity. Most ECHAM5/MPI-OM1 realizations indicate a decrease of annual precipitation over large parts of the Tienshan, and an increase restricted to the southeast of the study area. These results provide a good basis for downscaling present-day and future rainfall distributions for hydrological purposes.
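The recombination step described above amounts to a frequency-weighted average of the downscaled representatives. A minimal sketch, in which the weather-type names, frequencies and toy rainfall fields are illustrative assumptions rather than values from the study:

```python
# Minimal sketch of the recombination step in statistical-dynamical downscaling:
# the high-resolution field simulated for each circulation weather type (CWT)
# representative is weighted by that type's frequency of occurrence.
import numpy as np

def recombine(representative_fields, cwt_frequencies):
    """Frequency-weighted mean climatology from per-CWT representative fields."""
    total = sum(cwt_frequencies.values())
    return sum(cwt_frequencies[c] / total * representative_fields[c]
               for c in representative_fields)

fields = {"west": np.full((4, 4), 3.0),        # downscaled rainfall [mm/day]
          "east": np.full((4, 4), 0.5),
          "cyclonic": np.full((4, 4), 6.0)}
freq = {"west": 120, "east": 60, "cyclonic": 30}   # days per year in each CWT
print(recombine(fields, freq))                     # about 2.7 mm/day everywhere
```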