924 results for Geophysical techniques


Abstract:

Almost all research fields in geosciences use numerical models and observations and combine these using data-assimilation techniques. With ever-increasing resolution and complexity, the numerical models tend to be highly nonlinear, and the observations too become more complicated, with an increasingly nonlinear relation to the models. Standard data-assimilation techniques like (ensemble) Kalman filters and variational methods like 4D-Var rely on linearizations and are likely to fail in one way or another. Nonlinear data-assimilation techniques are available, but are only efficient for low-dimensional problems, hampered by the so-called 'curse of dimensionality'. Here we present a fully nonlinear particle filter that can be applied to higher-dimensional problems by exploiting the freedom of the proposal density inherent in particle filtering. The method is illustrated for the three-dimensional Lorenz model using three particles and the much more complex 40-dimensional Lorenz model using 20 particles. By also applying the method to the 1000-dimensional Lorenz model, again using only 20 particles, we demonstrate the strong scale-invariance of the method, leading to the optimistic conjecture that the method is applicable to realistic geophysical problems. Copyright © 2010 Royal Meteorological Society.
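The standard (bootstrap) particle filter against which this proposal-density approach is contrasted can be sketched in a few lines. The snippet below is a minimal illustration for the three-variable Lorenz-63 model, not the authors' method: the forward-Euler integrator, noise levels, particle count, and the choice to observe only the first state component are all assumptions made for the sketch.

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the 3-variable Lorenz-63 model."""
    dx = np.empty_like(x)
    dx[..., 0] = sigma * (x[..., 1] - x[..., 0])
    dx[..., 1] = x[..., 0] * (rho - x[..., 2]) - x[..., 1]
    dx[..., 2] = x[..., 0] * x[..., 1] - beta * x[..., 2]
    return x + dt * dx

def bootstrap_pf(y_obs, n_particles=100, obs_std=1.0, model_std=0.1, seed=0):
    """Standard (bootstrap) particle filter: propagate, weight, resample."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(size=(n_particles, 3))
    means = []
    for y in y_obs:
        # Propagate each particle with model noise (pure Monte-Carlo step).
        particles = lorenz63_step(particles)
        particles += rng.normal(scale=model_std, size=particles.shape)
        # Weight by the likelihood of the observation (x-component observed).
        w = np.exp(-0.5 * ((particles[:, 0] - y) / obs_std) ** 2)
        w /= w.sum()
        means.append(w @ particles)  # weighted posterior-mean estimate
        # Resample to counter weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(means)
```

Because the particles are propagated blindly until the observation is weighted in, most of them end up with negligible weight in high dimensions, which is exactly the inefficiency the abstract addresses.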

Abstract:

New ways of combining observations with numerical models are discussed in which the size of the state space can be very large and the model can be highly nonlinear. The observations of the system can also be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We explore the choice of proposal density in a Particle Filter and show how the 'curse of dimensionality' might be beaten. In the standard Particle Filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte-Carlo method. In large-dimensional systems this is very inefficient, and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further 'ensuring almost equal weight' we avoid performing model runs that are useless in the end. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
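A minimal sketch of the steering idea, under strong simplifying assumptions: a scalar random walk stands in for the geophysical model, and a relaxation (nudging) term towards the observation plays the role of the proposal density. The weight correction by the transition-to-proposal density ratio is generic particle-filter bookkeeping, not the paper's 'almost equal weight' scheme; all parameter values are illustrative.

```python
import numpy as np

def nudged_particle_filter(y_obs, n=20, q=0.5, r=0.5, tau=0.5, seed=1):
    """Particle filter with a nudged proposal density for the scalar
    random-walk model x_k = x_{k-1} + N(0, q^2), observed as y = x + N(0, r^2).
    Each particle is steered towards the observation; the weights are then
    corrected by the ratio of transition to proposal densities, so the
    filter remains statistically consistent."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    w = np.full(n, 1.0 / n)
    means = []
    for y in y_obs:
        nudge = tau * (y - x)                       # steer towards the data
        x_new = x + nudge + rng.normal(scale=q, size=n)
        # log transition density p(x_new | x) = N(x_new; x, q^2)
        log_p = -0.5 * ((x_new - x) / q) ** 2
        # log proposal density q(x_new | x, y) = N(x_new; x + nudge, q^2)
        log_q = -0.5 * ((x_new - x - nudge) / q) ** 2
        # log likelihood p(y | x_new)
        log_l = -0.5 * ((y - x_new) / r) ** 2
        logw = np.log(w) + log_l + log_p - log_q
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(w @ x_new)                     # weighted mean estimate
        x = x_new
    return np.array(means)
```

Because every particle is pulled towards the data before weighting, far fewer runs are wasted on trajectories that would receive negligible weight.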

Abstract:

Developing high-quality scientific research will be most effective if research communities with diverse skills and interests are able to share information and knowledge, are aware of the major challenges across disciplines, and can exploit economies of scale to provide robust answers and better inform policy. We evaluate opportunities and challenges facing the development of a more interactive research environment by developing an interdisciplinary synthesis of research on a single geographic region. We focus on the Amazon as it is of enormous regional and global environmental importance and faces a highly uncertain future. To take stock of existing knowledge and provide a framework for analysis we present a set of mini-reviews from fourteen different areas of research, encompassing taxonomy, biodiversity, biogeography, vegetation dynamics, landscape ecology, earth-atmosphere interactions, ecosystem processes, fire, deforestation dynamics, hydrology, hunting, conservation planning, livelihoods, and payments for ecosystem services. Each review highlights the current state of knowledge and identifies research priorities, including major challenges and opportunities. We show that while substantial progress is being made across many areas of scientific research, our understanding of specific issues is often dependent on knowledge from other disciplines. Accelerating the acquisition of reliable and contextualized knowledge about the fate of complex pristine and modified ecosystems is partly dependent on our ability to exploit economies of scale in shared resources and technical expertise, recognise and make explicit interconnections and feedbacks among sub-disciplines, increase the temporal and spatial scale of existing studies, and improve the dissemination of scientific findings to policy makers and society at large. 
Enhancing interaction among research efforts is vital if we are to make the most of limited funds and overcome the challenges posed by addressing large-scale interdisciplinary questions. Bringing together a diverse scientific community with a single geographic focus can help increase awareness of research questions both within and among disciplines, and reveal the opportunities that may exist for advancing acquisition of reliable knowledge. This approach could be useful for a variety of globally important scientific questions.

Abstract:

The decline of bees has raised concerns regarding their conservation and the maintenance of ecosystem services they provide to bee-pollinated wild flowers and crops. Although the Mediterranean region is a hotspot for bee species richness, their status remains poorly studied. There is an urgent need for cost-effective, reliable, and unbiased sampling methods that give good bee species richness estimates. This study aims: (a) to assess bee species richness in two common Mediterranean habitat types: semi-natural scrub (phrygana) and managed olive groves; (b) to compare species richness in those systems to that of other biogeographic regions, and (c) to assess whether six different sampling methods (pan traps, variable and standardized transect walks, observation plots and trap nests), previously tested in other European biogeographic regions, are suitable in Mediterranean communities. Eight study sites, four per habitat type, were selected on the island of Lesvos, Greece. The species richness observed was high compared to other habitat types worldwide for which comparable data exist. Pan traps collected the highest proportion of the total bee species richness across all methods at the scale of a study site. Variable and standardized transect walks detected the highest total richness over all eight study sites. Trap nests and observation plots detected only a limited fraction of the bee species richness. To assess the total bee species richness in bee diversity hotspots, such as the studied habitats, we suggest a combination of transect walks conducted by trained bee collectors and pan trap sampling.
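Once per-species abundance counts are in hand from any of these methods, nonparametric estimators are a common way to turn them into a richness estimate. The bias-corrected Chao1 estimator below is one standard choice, shown purely as an illustration; the abstract does not state which estimator, if any, the study used.

```python
def chao1(abundances):
    """Bias-corrected Chao1 estimate of total species richness from
    per-species abundance counts. Singletons (species seen once) and
    doubletons (seen twice) drive the correction for unseen species."""
    counts = [a for a in abundances if a > 0]
    s_obs = len(counts)                      # observed species richness
    f1 = sum(1 for a in counts if a == 1)    # singletons
    f2 = sum(1 for a in counts if a == 2)    # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))
```

For example, a sample with abundances [1, 1, 1, 2, 2, 5] has 6 observed species, 3 singletons and 2 doubletons, giving a Chao1 estimate of 7.0 species.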

Abstract:

A Bond Graph is a graphical modelling technique that represents the energy flow between the components of a system. When Bond Graphs are used to model power electronic systems, elements must be incorporated to represent a switch. In this paper, three different methods of modelling switching devices are compared and contrasted: the Modulated Transformer with a binary modulation ratio (MTF), the ideal switch element, and the Switched Power Junction (SPJ) method. Each of the three methods is used to model a dc-dc Boost converter, which is then simulated in MATLAB/SIMULINK. To provide a reference against which to compare results, the converter is also simulated in PSPICE. Both quantitative and qualitative comparisons are made to determine the suitability of each of the three Bond Graph switch models in specific power electronics applications.
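The circuit under test can also be sketched outside a bond-graph tool. The forward-Euler simulation below of an ideal-switch Boost converter is a rough Python stand-in for the MATLAB/SIMULINK models described; the component values, switching frequency, and the crude diode clamp are all assumptions of the sketch, not values from the paper. The binary switch state plays the same role as the MTF element's 0/1 modulation ratio.

```python
def simulate_boost(vin=12.0, L=100e-6, C=470e-6, R=10.0, duty=0.5,
                   f_sw=50e3, t_end=0.05, dt=2e-7):
    """Forward-Euler simulation of an ideal-switch dc-dc Boost converter.
    States: inductor current iL and capacitor (output) voltage vC.
    Returns the output voltage at t_end; the ideal steady-state value
    is vin / (1 - duty)."""
    iL, vC = 0.0, 0.0
    T = 1.0 / f_sw
    for k in range(int(t_end / dt)):
        on = (k * dt) % T < duty * T        # binary switch state (PWM)
        if on:   # switch closed: inductor charges, capacitor feeds load
            diL = vin / L
            dvC = -vC / (R * C)
        else:    # switch open: inductor feeds capacitor and load
            diL = (vin - vC) / L
            dvC = iL / C - vC / (R * C)
        iL += dt * diL
        vC += dt * dvC
        iL = max(iL, 0.0)  # crude diode model: no negative inductor current
    return vC
```

With a 50% duty cycle the simulated output settles near the ideal conversion ratio of 12 V / (1 - 0.5) = 24 V.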

Abstract:

We apply modern synchrotron-based structural techniques to the study of serine adsorbed on the pure and Au-modified intrinsically chiral Cu{531} surface. XPS and NEXAFS data in combination with DFT show that on the pure surface both enantiomers adsorb in μ4 geometries (with de-protonated β-OH groups) at low coverage and in μ3 geometries at saturation coverage. Significantly larger enantiomeric differences are seen for the μ4 geometries, which involve substrate bonds of three side groups of the chiral center, i.e. a three-point interaction. The μ3 adsorption geometry, where only the carboxylate and amino groups form substrate bonds, leads to smaller but still significant enantiomeric differences, both in geometry and in the decomposition behavior. When Cu{531} is modified by the deposition of 1 and 2 ML Au, the orientations of serine at saturation coverage are significantly different from those on the clean surface. In all cases, however, a μ3 bond coordination is found at saturation, involving different numbers of Au atoms, which leads to relatively small enantiomeric differences.

Abstract:

Magnetic clouds (MCs) are a subset of interplanetary coronal mass ejections (ICMEs) which exhibit signatures consistent with a magnetic flux rope structure. Techniques for reconstructing flux rope orientation from single-point in situ observations typically assume the flux rope is locally cylindrical, e.g., minimum variance analysis (MVA) and force-free flux rope (FFFR) fitting. In this study, we outline a non-cylindrical magnetic flux rope model, in which the flux rope radius and axial curvature can both vary along the length of the axis. This model is not necessarily intended to represent the global structure of MCs, but it can be used to quantify the error in MC reconstruction resulting from the cylindrical approximation. When the local flux rope axis is approximately perpendicular to the heliocentric radial direction, which is also the effective spacecraft trajectory through a magnetic cloud, the error in using cylindrical reconstruction methods is relatively small (≈10°). However, as the local axis orientation becomes increasingly aligned with the radial direction, the spacecraft trajectory may pass close to the axis at two separate locations. This results in a magnetic field time series which deviates significantly from encounters with a force-free flux rope, and consequently the error in the axis orientation derived from cylindrical reconstructions can be as much as 90°. Such two-axis encounters can result in an apparent ‘double flux rope’ signature in the magnetic field time series, sometimes observed in spacecraft data. Analysing each axis encounter independently produces reasonably accurate axis orientations with MVA, but larger errors with FFFR fitting.
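MVA itself reduces to an eigen-decomposition of the magnetic variance matrix computed from the in situ field time series; for a force-free flux rope the intermediate-variance eigenvector is the usual axis estimate. The sketch below shows the generic decomposition only, with a synthetic field in the test in place of spacecraft data.

```python
import numpy as np

def minimum_variance_analysis(B):
    """Minimum variance analysis of a magnetic-field time series.
    B has shape (n_samples, 3). Returns the eigenvalues (ascending) and
    corresponding unit eigenvectors (as columns) of the magnetic variance
    matrix M_ij = <B_i B_j> - <B_i><B_j>. The columns then give the
    minimum-, intermediate-, and maximum-variance directions in order."""
    B = np.asarray(B, dtype=float)
    M = np.cov(B, rowvar=False)        # 3x3 variance matrix of the field
    vals, vecs = np.linalg.eigh(M)     # eigh returns ascending eigenvalues
    return vals, vecs
```

Degenerate eigenvalue pairs (ratios near 1) are the classic warning sign that the variance directions, and hence the inferred axis, are poorly constrained.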

Abstract:

The solubility of penciclovir (C10H15N5O3) in a novel film formulation designed for the treatment of cold sores was determined using X-ray, thermal, microscopic and release rate techniques. Solubilities of 0.15–0.23, 0.44, 0.53 and 0.42% (w/w) resulted for each procedure. Linear calibration lines were achieved for experimentally and theoretically determined differential scanning calorimetry (DSC) and X-ray powder diffractometry (XRPD) data. Intra- and inter-batch data precision values were determined; intra-batch values were more precise. Microscopy was additionally useful for examining crystal shape, size distribution and homogeneity of drug distribution within the film. Whereas DSC also determined the melting point, XRPD identified polymorphs and release data provided the relevant kinetics.

Abstract:

Optimal state estimation from given observations of a dynamical system by data assimilation is generally an ill-posed inverse problem. In order to solve the problem, a standard Tikhonov, or L2, regularization is used, based on certain statistical assumptions on the errors in the data. The regularization term constrains the estimate of the state to remain close to a prior estimate. In the presence of model error, this approach does not capture the initial state of the system accurately, as the initial state estimate is derived by minimizing the average error between the model predictions and the observations over a time window. Here we examine an alternative L1 regularization technique that has proved valuable in image processing. We show that for examples of flow with sharp fronts and shocks, the L1 regularization technique performs more accurately than standard L2 regularization.
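The qualitative difference between the two penalties is already visible in the simplest setting, where the regularized quantity is the state itself: the L2 penalty shrinks every component uniformly, while the L1 problem has a closed-form soft-thresholding solution that zeroes small values and preserves large, sharp features. The sketch below uses an illustrative spike signal, not the flow-with-fronts examples of the paper.

```python
import numpy as np

def l2_estimate(y, lam):
    """Tikhonov (L2) solution of min ||x - y||^2 + lam * ||x||^2:
    every component of y is shrunk by the same factor, blurring
    sharp features and leaving noise everywhere."""
    return y / (1.0 + lam)

def l1_estimate(y, lam):
    """L1-regularized solution of min 0.5 * ||x - y||^2 + lam * ||x||_1:
    the soft-thresholding operator, which sets small (noise-level)
    components exactly to zero while keeping large jumps intact."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```

On a sparse signal with noise, the L1 estimate recovers the zeros exactly and incurs a much smaller error than the uniform L2 shrinkage, which mirrors the paper's finding for flows with sharp fronts and shocks.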

Abstract:

Although modern control techniques such as eigenstructure assignment have been given extensive coverage in the control literature, there is a reluctance to use them in practice, as they are often believed to be less 'visible' and less simple than classical methods. A simple aircraft example is used, and it is shown that eigenstructure assignment can easily produce a more viable controller than simple classical techniques.
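Eigenstructure assignment generalizes classical pole placement by also shaping the closed-loop eigenvectors; in the single-input special case the pole-placement gain has the closed form of Ackermann's formula. The sketch below shows that special case only, with a double-integrator plant as an illustrative assumption rather than the paper's aircraft model.

```python
import numpy as np

def ackermann(A, B, poles):
    """Ackermann's formula for single-input state-feedback pole placement:
    returns the gain K such that eig(A - B K) equals the desired poles."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float).reshape(-1, 1)
    n = A.shape[0]
    # Controllability matrix [B, AB, ..., A^{n-1} B]; must be invertible.
    C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    # Desired characteristic polynomial, evaluated at the matrix A.
    coeffs = np.poly(poles)  # monic: [1, a1, ..., an]
    phiA = sum(c * np.linalg.matrix_power(A, n - k)
               for k, c in enumerate(coeffs))
    e_last = np.zeros((1, n))
    e_last[0, -1] = 1.0      # last row selector
    return e_last @ np.linalg.inv(C) @ phiA
```

For the double integrator A = [[0, 1], [0, 0]], B = [0, 1]^T with desired poles at -1 and -2, the formula yields K = [2, 3], matching the hand computation from the characteristic polynomial s^2 + 3s + 2.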