899 results for "the SIMPLE algorithm"


Relevance: 90.00%

Abstract:

The multicomponent nonideal gas lattice Boltzmann model of Shan and Chen (S-C) is used to study immiscible displacement in a sinusoidal tube. The movement of the interface and of the contact point (the contact line in three dimensions) is studied. Owing to the roughness of the boundary, the contact point exhibits "stick-slip" motion. The "stick-slip" effect decreases as the speed of the interface increases. For nonwetting fluids, the interface is almost perpendicular to the boundaries most of the time, although its shape differs considerably at different positions along the tube. Where the tube narrows, the interface becomes a complex curve rather than remaining a simple meniscus. The velocity is found to vary considerably between neighbouring nodes close to the contact point, consistent with the experimental observation that the velocity is multi-valued on the contact line. Finally, the effect of three boundary conditions is discussed. The average speed is found to differ between boundary conditions. The simple bounce-back rule makes the contact point move fastest. Both the simple bounce-back and the no-slip bounce-back rules are more sensitive to boundary roughness than the half-way bounce-back rule. The simulation results suggest that the S-C model may be a promising tool for simulating the displacement behaviour of two immiscible fluids in complex geometries.
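The three boundary rules compared above differ only in how populations that hit a solid node are returned to the fluid. The sketch below shows the "simple" (on-grid) bounce-back rule on a D2Q9 lattice with a rough bottom wall; it is a minimal illustration, not the authors' code, and the lattice size and wall geometry are arbitrary assumptions.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and the index of each opposite direction.
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
OPP = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

def stream(f):
    """Periodic streaming of populations f with shape (9, nx, ny)."""
    return np.array([np.roll(np.roll(f[q], E[q, 0], axis=0), E[q, 1], axis=1)
                     for q in range(9)])

def simple_bounce_back(f, solid):
    """On-grid ('simple') bounce-back: at solid nodes every population is
    replaced by the one travelling in the opposite direction, so that it
    re-enters the fluid on the next streaming step."""
    f_bb = f.copy()
    f_bb[:, solid] = f[OPP][:, solid]
    return f_bb

# Usage: one streaming + bounce-back pass in a channel with a bumpy wall.
nx, ny = 64, 32
rng = np.random.default_rng(0)
f = rng.random((9, nx, ny))
solid = np.zeros((nx, ny), dtype=bool)
solid[:, 0] = True          # flat bottom wall
solid[::8, 1] = True        # roughness elements on the wall
f = simple_bounce_back(stream(f), solid)
```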

Relevance: 90.00%

Abstract:

Inverse methods are widely used in various fields of atmospheric science. However, such methods are not commonly used within the boundary-layer community, where robust observations of surface fluxes are a particular concern. We present a new technique for deriving surface sensible heat fluxes from boundary-layer turbulence observations using an inverse method. Doppler lidar observations of vertical velocity variance are combined with two well-known mixed-layer scaling forward models for a convective boundary layer (CBL). The inverse method is validated using large-eddy simulations of a CBL with increasing wind speed. The majority of the estimated heat fluxes agree, within error, with the prescribed heat flux across all wind speeds tested. The method is then applied to Doppler lidar data from the Chilbolton Observatory, UK. Heat fluxes are compared with those from a mast-mounted sonic anemometer. Errors in the estimated heat fluxes are on average 18%, an improvement on previous techniques. However, a significant negative bias is observed (on average −63%) that is more pronounced in the morning. Results improve for the fully developed CBL later in the day, which suggests that the bias is largely related to the choice of forward model, which is kept deliberately simple for this study. Overall, the inverse method provides reasonable flux estimates for the simple case of a CBL. The results shown here demonstrate that the method has promise for utilizing ground-based remote sensing to derive surface fluxes. Extension of the method is relatively straightforward and could include more complex forward models or other measurements.
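As a rough illustration of the inversion idea (not the authors' implementation), the sketch below fits a single mixed-layer scaling parameter w* to a profile of vertical-velocity variance and converts it to a surface sensible heat flux. The Lenschow-type variance profile is used as an illustrative forward model, and the air density, heat capacity and virtual potential temperature are assumed values.

```python
import numpy as np
from scipy.optimize import least_squares

G, RHO, CP = 9.81, 1.2, 1005.0   # gravity, air density, specific heat (assumed)

def sigma_w2_model(z, zi, w_star):
    """Mixed-layer scaling profile of vertical-velocity variance
    (Lenschow-type form, used here as an illustrative forward model)."""
    zn = np.clip(z / zi, 1e-3, 1.0)
    return w_star**2 * 1.8 * zn**(2.0 / 3.0) * (1.0 - 0.8 * zn)**2

def invert_heat_flux(z_obs, sig2_obs, zi, theta_v=300.0):
    """Least-squares inversion: fit w* to the observed variance profile, then
    convert to a surface sensible heat flux via the definition of w*."""
    res = least_squares(lambda p: sigma_w2_model(z_obs, zi, p[0]) - sig2_obs,
                        x0=[1.0], bounds=(0.0, 10.0))
    w_star = res.x[0]
    kinematic_flux = w_star**3 * theta_v / (G * zi)   # (w'theta_v')_s
    return RHO * CP * kinematic_flux                  # W m-2

# Usage with synthetic 'observations'
rng = np.random.default_rng(0)
zi = 1000.0
z = np.linspace(100.0, 900.0, 9)
sig2 = sigma_w2_model(z, zi, w_star=1.5) + 0.02 * rng.normal(size=z.size)
print(f"estimated H = {invert_heat_flux(z, sig2, zi):.0f} W m-2")
```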

Relevance: 90.00%

Abstract:

In this paper we report coordinated multispacecraft and ground-based observations of a double substorm onset close to Scandinavia on November 17, 1996. The Wind and Geotail spacecraft, which were located in the solar wind and the subsolar magnetosheath, respectively, recorded two periods of southward-directed interplanetary magnetic field (IMF). These periods were separated by a short northward IMF excursion associated with a solar wind pressure pulse, which compressed the magnetosphere to such a degree that Geotail was briefly located outside the bow shock. The first period of southward IMF initiated a substorm growth phase, which was clearly detected by an array of ground-based instrumentation and by Interball in the northern tail lobe. A first substorm onset occurred in close relation to the solar wind pressure pulse impinging on the magnetopause and almost simultaneously with the northward turning of the IMF. However, this substorm did not fully develop. In clear association with the expansion of the magnetosphere at the end of the pressure pulse, the auroral expansion stopped and the northern sky cleared. We present evidence that the change in the solar wind dynamic pressure actively quenched the energy available for any further substorm expansion. Directly after this period, the magnetometer network detected signatures of a renewed substorm growth phase, which was initiated by the second southward turning of the IMF and which finally led to a second, and this time complete, substorm intensification. We have used our multipoint observations to understand the solar wind control of substorm onset and substorm quenching. The relative timings between the observations on the various satellites and on the ground were used to infer a possible causal relationship between the solar wind pressure variations and the consequent substorm development. Furthermore, using a relatively simple algorithm to model the tail lobe field and the total tail flux, we show that there indeed exists a close relationship between the relaxation of a solar wind pressure pulse, the reduction of the tail lobe field, and the quenching of the initial substorm.
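The "relatively simple algorithm" for the lobe field is not specified in the abstract; the sketch below shows one standard pressure-balance estimate of this kind, in which the lobe magnetic pressure balances the component of solar wind pressure normal to the flared tail boundary. The coefficient k, the flaring angle, the static pressure term and the lobe radius are illustrative assumptions rather than values from the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def tail_lobe_field(p_dyn_nPa, flaring_angle_deg, p_static_nPa=0.03, k=0.88):
    """Pressure-balance estimate of the tail lobe field: the lobe magnetic
    pressure balances the normal component of the solar wind dynamic pressure
    plus a static (thermal + magnetic) term.  k is an assumed shielding factor."""
    p_normal = (k * p_dyn_nPa * np.sin(np.radians(flaring_angle_deg))**2
                + p_static_nPa) * 1e-9              # Pa
    return np.sqrt(2.0 * MU0 * p_normal)            # Tesla

def total_tail_flux(b_lobe, lobe_radius_Re=20.0):
    """Total lobe flux = lobe field times an (assumed semicircular) lobe area."""
    r = lobe_radius_Re * 6.371e6
    return b_lobe * 0.5 * np.pi * r**2              # Weber

b = tail_lobe_field(p_dyn_nPa=3.0, flaring_angle_deg=15.0)
print(f"B_lobe ~ {b * 1e9:.1f} nT, total flux ~ {total_tail_flux(b) / 1e9:.2f} GWb")
```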

Relevance: 90.00%

Abstract:

Traditionally, the cusp has been described in terms of a time-stationary feature of the magnetosphere which allows access of magnetosheath-like plasma to low altitudes. Statistical surveys of data from low-altitude spacecraft have shown the average characteristics and position of the cusp. Recently, however, it has been suggested that the ionospheric footprint of flux transfer events (FTEs) may be identified as variations of the “cusp” on timescales of a few minutes. In this model, the cusp can vary in form between a steady-state feature in one limit and a series of discrete ionospheric FTE signatures in the other limit. If this time-dependent cusp scenario is correct, then the signatures of the transient reconnection events must be able, on average, to reproduce the statistical cusp occurrence previously determined from the satellite observations. In this paper, we predict the precipitation signatures which are associated with transient magnetopause reconnection, following recent observations of the dependence of dayside ionospheric convection on the orientation of the IMF. We then employ a simple model of the longitudinal motion of FTE signatures to show how such events can easily reproduce the local time distribution of cusp occurrence probabilities, as observed by low-altitude satellites. This is true even in the limit where the cusp is a series of discrete events. Furthermore, we investigate the existence of double cusp patches predicted by the simple model and show how these events may be identified in the data.
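A minimal Monte Carlo sketch of the kind of model described above: FTE precipitation signatures are generated near noon, drift in magnetic local time (MLT) for a finite lifetime, and their accumulated footprints build up a cusp occurrence probability versus MLT. The event rate, drift speed, lifetime and birth spread are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)

def cusp_occurrence(n_events=2000, lifetime_min=10.0, drift_deg_per_min=1.0,
                    birth_mlt=12.0, birth_spread=1.0):
    """Monte Carlo sketch: FTE signatures appear near noon, drift in MLT for a
    finite lifetime, and the occurrence probability is accumulated in MLT bins."""
    bins = np.linspace(6, 18, 49)                 # 06-18 MLT, 15-min bins
    counts = np.zeros(bins.size - 1)
    for _ in range(n_events):
        mlt0 = rng.normal(birth_mlt, birth_spread)
        direction = rng.choice([-1.0, 1.0])       # east/west, set by IMF By in reality
        t = np.arange(0.0, lifetime_min, 1.0)
        mlt = mlt0 + direction * drift_deg_per_min * t / 15.0   # 15 deg = 1 h MLT
        counts += np.histogram(mlt, bins=bins)[0]
    return bins, counts / counts.sum()

bins, prob = cusp_occurrence()
print("peak occurrence near MLT", bins[np.argmax(prob)])
```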

Relevance: 90.00%

Abstract:

A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion for the finite mixture model. Since the constraint on the mixing coefficients of the finite mixture model places them on the multinomial manifold, we use the well-known Riemannian trust-region (RTR) algorithm for solving this problem. The first- and second-order Riemannian geometry of the multinomial manifold is derived and utilized in the RTR algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with an accuracy competitive with those of existing kernel density estimators.
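The sketch below illustrates the minimum integrated square error criterion for the mixture weights of a one-dimensional Gaussian kernel density estimator. The weights live on the probability simplex (the multinomial manifold); here a softmax reparameterisation with a generic quasi-Newton solver stands in for the Riemannian trust-region algorithm used in the paper, so it illustrates the objective rather than the paper's optimiser. The bandwidth and data are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def gauss_kernel(d2, h):
    return np.exp(-d2 / (2 * h * h)) / np.sqrt(2 * np.pi * h * h)

def sparse_kde_weights(x, h):
    """Minimum integrated square error criterion for the mixing coefficients of
    a 1-D Gaussian KDE, optimised over the probability simplex.  The criterion
    is beta'Q beta - 2 beta'p with Q from kernel convolutions and p from a
    leave-one-out density estimate."""
    d2 = (x[:, None] - x[None, :])**2
    Q = gauss_kernel(d2, np.sqrt(2) * h)          # int K_h(.-x_i) K_h(.-x_j)
    p = (gauss_kernel(d2, h).sum(1) - gauss_kernel(0.0, h)) / (len(x) - 1)

    def mise(theta):                              # beta = softmax(theta), on the simplex
        e = np.exp(theta - theta.max())
        b = e / e.sum()
        return b @ Q @ b - 2.0 * b @ p

    theta = minimize(mise, np.zeros(len(x)), method="L-BFGS-B").x
    e = np.exp(theta - theta.max())
    return e / e.sum()

x = np.random.default_rng(0).normal(size=80)
beta = sparse_kde_weights(x, h=0.4)
print("non-negligible kernels:", int(np.sum(beta > 1e-3)), "of", len(x))
```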

Relevance: 90.00%

Abstract:

Background: In many experimental pipelines, clustering of multidimensional biological datasets is used to detect hidden structures in unlabelled input data. Taverna is a popular workflow management system that is used to design and execute scientific workflows and aid in silico experimentation. The availability of fast unsupervised methods for clustering and visualization in the Taverna platform is important to support data-driven scientific discovery in complex and explorative bioinformatics applications. Results: This work presents a Taverna plugin, the Biological Data Interactive Clustering Explorer (BioDICE), that performs clustering of high-dimensional biological data and provides a nonlinear, topology-preserving projection for the visualization of the input data and their similarities. The core algorithm in the BioDICE plugin is the Fast Learning Self-Organizing Map (FLSOM), an improved variant of the Self-Organizing Map (SOM) algorithm. The plugin generates an interactive 2D map that allows the visual exploration of multidimensional data and the identification of groups of similar objects. The effectiveness of the plugin is demonstrated on a case study related to chemical compounds. Conclusions: The number and variety of available tools, together with its extensibility, have made Taverna a popular choice for the development of scientific data workflows. This work presents a novel plugin, BioDICE, which adds a data-driven knowledge discovery component to Taverna. BioDICE provides an effective and powerful clustering tool, which can be adopted for the explorative analysis of biological datasets.
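For readers unfamiliar with the underlying technique, the sketch below trains a plain SOM (a simplified stand-in for FLSOM, whose specific speed-ups are not described in the abstract): each sample pulls its best-matching map unit and that unit's grid neighbours towards it, producing the kind of topology-preserving 2D map BioDICE visualises. The grid size, learning-rate schedule and data are arbitrary assumptions.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal online SOM: for each sample, find the best-matching unit (BMU)
    and pull the BMU and its grid neighbours towards the sample."""
    rng = np.random.default_rng(seed)
    gx, gy = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
    coords = np.stack([gx.ravel(), gy.ravel()], axis=1)        # unit positions on the map
    w = rng.normal(size=(coords.shape[0], data.shape[1]))      # codebook vectors
    n_steps = epochs * len(data)
    for t, x in enumerate(data[rng.integers(0, len(data), n_steps)]):
        lr = lr0 * np.exp(-t / n_steps)                        # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_steps)                  # shrinking neighbourhood
        bmu = np.argmin(((w - x)**2).sum(1))
        d2 = ((coords - coords[bmu])**2).sum(1)
        h = np.exp(-d2 / (2 * sigma**2))[:, None]              # neighbourhood weights
        w += lr * h * (x - w)
    return w, coords

data = np.random.default_rng(1).normal(size=(300, 8))
codebook, coords = train_som(data)
bmus = np.argmin(((data[:, None, :] - codebook[None])**2).sum(-1), axis=1)
print("map units occupied:", np.unique(bmus).size)
```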

Relevance: 90.00%

Abstract:

Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (2010–2013), algorithms for the production of long-term total column aerosol optical depth (AOD) datasets from European Earth Observation sensors are developed. Starting from eight existing precursor algorithms, three analysis steps are conducted to improve and qualify the algorithms: (1) a series of experiments applied to one month of global data to understand several major sensitivities to assumptions needed due to the ill-posed nature of the underlying inversion problem, (2) a round-robin exercise of the "best" version of each of these algorithms (defined using the outcome of step 1) applied to four months of global data to identify mature algorithms, and (3) a comprehensive validation exercise applied to one complete year of global data produced by the algorithms selected as mature on the basis of the round-robin exercise. The algorithms tested included four using AATSR, three using MERIS and one using PARASOL. This paper summarizes the first step. Three experiments were conducted to assess the potential impact of major assumptions in the various aerosol retrieval algorithms. In the first experiment a common set of four aerosol components was used to provide all algorithms with the same assumptions. The second experiment introduced an aerosol property climatology, derived from a combination of model and sun photometer observations, as a priori information in the retrievals on the occurrence of the common aerosol components. The third experiment assessed the impact of using a common nadir cloud mask for the AATSR and MERIS algorithms, in order to characterize the sensitivity to remaining cloud contamination in the retrievals against the baseline dataset versions. The impact of the algorithm changes was assessed for one month (September 2008) of data: qualitatively by inspection of monthly mean AOD maps and quantitatively by comparing daily gridded satellite data against daily averaged AERONET sun photometer observations for the different versions of each algorithm, globally (land and coastal) and for three regions with different aerosol regimes. The analysis allowed an assessment of the sensitivities of all algorithms, which helped define the best algorithm versions for the subsequent round-robin exercise; all algorithms (except for MERIS) showed some, in parts significant, improvement. In particular, using the common aerosol components and, in part, also the a priori aerosol-type climatology is beneficial. On the other hand, the use of an AATSR-based common cloud mask brought a clear improvement (though with a significant reduction of coverage) for the MERIS standard product, but not for the algorithms using AATSR. These observations are largely consistent across all five analyses (global land, global coastal, three regional), which is to be expected, since the set of aerosol components defined in Sect. 3.1 was explicitly designed to cover different global aerosol regimes (with low- and high-absorption fine mode, sea salt and dust).

Relevance: 90.00%

Abstract:

4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis, as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis, the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised 'inner-loop' objective function which, upon convergence, updates the solution of the non-linear 'outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter when iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to the input data, and it can also indicate the rate of convergence and the solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using the bounds, we show that the sensitivities of both formulations are related to the error variance balance, the assimilation window length and the correlation length-scales. We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
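As a small illustration of why these parameters matter for conditioning (a sketch, not an experiment from the thesis), the code below assembles the Hessian of a strong-constraint 4DVAR cost function for a linear cyclic-advection toy model, S = B^{-1} + sum_k (M^k)^T R^{-1} M^k, and shows how its condition number changes with the background-error correlation length-scale. All variances, length-scales and the model itself are assumed toy values.

```python
import numpy as np

def sc4dvar_hessian(M, B, R, n_times):
    """Hessian of a strong-constraint 4DVAR cost function for a linear model
    x_{k+1} = M x_k observed directly at every time step:
        S = B^{-1} + sum_k (M^k)^T R^{-1} M^k."""
    n = B.shape[0]
    S = np.linalg.inv(B)
    Rinv = np.linalg.inv(R)
    Mk = np.eye(n)
    for _ in range(n_times):
        S += Mk.T @ Rinv @ Mk
        Mk = M @ Mk
    return S

# Toy cyclic-advection model with Gaussian-correlated background errors.
n = 40
M = np.roll(np.eye(n), 1, axis=0)              # shift by one grid point per step
idx = np.arange(n)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, n - d)                       # periodic grid distance
for L in (1.0, 5.0, 10.0):                     # correlation length-scale (grid points)
    B = 0.5 * np.exp(-0.5 * (d / L)**2) + 0.01 * np.eye(n)   # small nugget keeps B invertible
    S = sc4dvar_hessian(M, B, R=0.1 * np.eye(n), n_times=5)
    print(f"L = {L:4.1f}   cond(Hessian) = {np.linalg.cond(S):.2e}")
```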

Relevance: 90.00%

Abstract:

Rising greenhouse gas emissions (GHGEs) have implications for health, and up to 30% of emissions globally are thought to arise from agriculture. Synergies exist between diets low in GHGEs and health; however, some foods have the opposite relationship, sugar production, for example, being a relatively low source of GHGEs. In order to address this and to further characterise a healthy sustainable diet, we model the effect on UK non-communicable disease mortality and GHGEs of internalising the social cost of carbon into the price of food alongside a 20% tax on sugar-sweetened beverages (SSBs). Developing previously published work, we simulate four tax scenarios: (A) a GHGE tax of £2.86/tonne of CO2 equivalents (tCO2e)/100 g product on all products with emissions greater than the mean across all food groups (0.36 kgCO2e/100 g); (B) scenario A but with subsidies on foods with emissions lower than 0.36 kgCO2e/100 g such that the effect is revenue neutral; (C) scenario A but with a 20% sales tax on SSBs; (D) scenario B but with a 20% sales tax on SSBs. An almost ideal demand system is used to estimate price elasticities, and a comparative risk assessment model is used to estimate changes in non-communicable disease mortality. We estimate that scenario A would lead to 300 deaths delayed or averted, 18,900 ktCO2e fewer GHGEs, and £3.0 billion tax revenue; scenario B, 90 deaths delayed or averted and 17,100 ktCO2e fewer GHGEs; scenario C, 1,200 deaths delayed or averted, 18,500 ktCO2e fewer GHGEs, and £3.4 billion revenue; and scenario D, 2,000 deaths delayed or averted and 16,500 ktCO2e fewer GHGEs. Deaths averted are mainly due to increased fibre and reduced fat consumption; an SSB tax reduces SSB and sugar consumption. Incorporating the social cost of carbon into the price of food has the potential to improve health, reduce GHGEs, and raise revenue. The simple addition of a tax on SSBs can mitigate negative health consequences arising from sugar being low in GHGEs. Further conflicts remain, including increased consumption of unhealthy foods such as cakes and of nutrients such as salt.
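As a small illustration of how such a tax rule could be applied per product (one possible reading of scenarios A and B above, not the authors' model), the sketch below prices 100 g of a product from its emission intensity. The subsidy rate for scenario B, which the paper calibrates to be revenue neutral, is left as a free parameter here.

```python
SCC_PER_TCO2E = 2.86          # GBP per tonne CO2e, value quoted above
MEAN_INTENSITY = 0.36         # mean emissions across food groups, kgCO2e per 100 g

def price_change_per_100g(kgco2e_per_100g, scenario="A", subsidy_per_tco2e=0.0):
    """Tax (positive) or subsidy (negative), in GBP, applied to 100 g of product.
    Scenario A taxes products above the mean intensity; scenario B additionally
    subsidises products below it (the revenue-neutral rate is not reproduced here)."""
    tonnes = kgco2e_per_100g / 1000.0             # kg -> tonnes CO2e
    if kgco2e_per_100g > MEAN_INTENSITY:
        return SCC_PER_TCO2E * tonnes
    if scenario == "B":
        return -subsidy_per_tco2e * tonnes
    return 0.0

print(price_change_per_100g(2.5))                                  # high-emission product, taxed
print(price_change_per_100g(0.1, scenario="B", subsidy_per_tco2e=1.0))  # low-emission, subsidised
```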

Relevance: 90.00%

Abstract:

A simple polynya flux model driven by standard atmospheric forcing is used to investigate the ice formation that took place during an exceptionally strong and consistent western New Siberian (WNS) polynya event in 2004 in the Laptev Sea. Whether formation rates are high enough to erode the stratification of the water column beneath is examined by adding the brine released during the 2004 polynya event to the average winter density stratification of a water body preconditioned by summers with cyclonic atmospheric forcing (a comparatively weakly stratified water column). Beforehand, the model performance is tested through a simulation of a well-documented event in April 2008. Neglecting the replenishment of water masses by advection into the polynya area, we find the probability of density-driven convection reaching the bottom to be low. Our findings can be explained by the distinct vertical density gradient that characterizes the area of the WNS polynya and the apparent lack of extreme events in the eastern Laptev Sea. The simple approach is expected to be sufficiently rigorous, since the simulated event is exceptionally strong and consistent, the ice production and salt rejection rates are likely to be overestimated, and the amount of salt rejected is distributed over a comparatively weakly stratified water column. We conclude that the observed erosion of the halocline and the formation of vertically mixed water layers during a WNS polynya event are therefore predominantly related to wind- and tidally driven turbulent mixing processes.
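The brine-addition check described above can be sketched as follows: the salt released by a given amount of ice growth is mixed into the upper part of a salinity profile and the resulting surface density is compared with the densities below. The ice and water densities, ice salinity, mixed-layer depth and the illustrative shelf profile are all assumed values, and a salinity-only linear equation of state replaces a full density calculation.

```python
import numpy as np

RHO_ICE, RHO_W, BETA = 910.0, 1025.0, 8e-4   # kg/m3, kg/m3, haline contraction (assumed)

def add_brine(z, S0, ice_growth_m, ice_salinity, ml_depth):
    """Mix the salt rejected by 'ice_growth_m' metres of ice growth uniformly
    into the upper 'ml_depth' metres of the salinity profile S0(z)."""
    S = S0.copy()
    ml = z <= ml_depth
    dS = RHO_ICE * ice_growth_m * (S0[ml].mean() - ice_salinity) / (RHO_W * ml_depth)
    S[ml] = S0[ml].mean() + dS            # homogenised, salt-enriched surface layer
    return S

def convection_reaches_bottom(S):
    """With a linear, salinity-only equation of state, convection penetrates to
    the bottom only if the surface is at least as dense as everything below."""
    rho = RHO_W * (1.0 + BETA * (S - 34.0))
    return rho[0] >= rho.max() - 1e-9

# Weakly stratified, Laptev-shelf-like column (illustrative numbers only).
z = np.linspace(0.0, 40.0, 41)
S0 = 32.0 + 0.05 * z                      # ~2 g/kg top-to-bottom salinity contrast
S1 = add_brine(z, S0, ice_growth_m=0.5, ice_salinity=8.0, ml_depth=10.0)
print("convection to the bottom:", convection_reaches_bottom(S1))
```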

Relevance: 90.00%

Abstract:

With the fast development of wireless communications, ZigBee and semiconductor devices, home automation networks have recently become very popular. Since typical consumer products deployed in home automation networks are often powered by tiny batteries with limited capacity, one of the most challenging research issues is reducing and balancing energy consumption across the network in order to prolong the network lifetime for consumer devices. The introduction of clustering and sink mobility techniques into home automation networks has been shown to be an efficient way to improve network performance and has received significant research attention. Taking inspiration from nature, this paper proposes an Ant Colony Optimization (ACO)-based clustering algorithm with mobile-sink support for home automation networks. In this work, the network is divided into several clusters and a cluster head is selected within each cluster. A mobile sink then communicates with each cluster head to collect data directly through short-range communications. The ACO algorithm is used to find the optimal mobility trajectory for the mobile sink. Extensive simulation results show that, in terms of energy consumption and network lifetime, the proposed algorithm with mobile sinks significantly outperforms other routing algorithms currently deployed for home automation networks.
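The trajectory-planning step lends itself to a TSP-style formulation: the mobile sink must visit every cluster head along as short a closed tour as possible. The sketch below is a generic ant colony optimisation for that sub-problem, not the paper's algorithm; the pheromone and heuristic weights, evaporation rate and cluster-head positions are illustrative assumptions.

```python
import numpy as np

def aco_sink_tour(heads, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.1, q=1.0, seed=0):
    """Ant colony optimisation sketch for the mobile-sink trajectory: find a
    short closed tour visiting every cluster head."""
    rng = np.random.default_rng(seed)
    n = len(heads)
    dist = np.linalg.norm(heads[:, None] - heads[None, :], axis=-1) + 1e-9
    eta = 1.0 / dist                          # heuristic visibility
    tau = np.ones((n, n))                     # pheromone levels
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:                  # probabilistic next-hop selection
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                w = tau[i, cand]**alpha * eta[i, cand]**beta
                tour.append(rng.choice(cand, p=w / w.sum()))
                unvisited.remove(tour[-1])
            length = dist[tour, np.roll(tour, -1)].sum()
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1.0 - rho)                    # evaporation
        for tour, length in tours:            # deposit pheromone on used edges
            tau[tour, np.roll(tour, -1)] += q / length
    return best_tour, best_len

heads = np.random.default_rng(1).random((12, 2)) * 100.0   # cluster-head positions (m)
tour, length = aco_sink_tour(heads)
print("sink tour length:", round(length, 1), "m")
```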

Relevance: 90.00%

Abstract:

The increased availability of digital elevation models and satellite image data enables the testing of morphometric relationships between sand dune variables (dune height, spacing and equivalent sand thickness) that were originally established using limited field survey data. These long-established geomorphological hypotheses can now be tested against much larger samples than were possible when the available data were limited to what could be collected by field surveys alone. This project uses ASTER Global Digital Elevation Model (GDEM) data to compare morphometric relationships between sand dune variables in the southwest Kalahari dunefield with those of the Namib Sand Sea, to test whether the relationships found in an active sand sea (Namib) also hold for the fixed dune system of the nearby southwest Kalahari. The data show significant morphometric differences between the simple linear dunes of the Namib Sand Sea and those of the southwest Kalahari; the latter do not show the expected positive relationship between dune height and spacing. The southwest Kalahari dunes show a similar range of dune spacings but are, on average, less tall than the Namib Sand Sea dunes. There is a clear spatial pattern to these morphometric data: the tallest and most closely spaced dunes are towards the southeast of the Kalahari dunefield, and this is where the highest values of equivalent sand thickness occur. We consider the possible reasons for the observed differences and highlight the need for more studies comparing sand seas and dunefields from different environmental settings.
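The morphometric comparison above rests on two simple quantities per dune transect: a height-versus-spacing regression and the equivalent sand thickness (sand volume per unit area, taken here as dune cross-sectional area divided by crest spacing). The sketch below illustrates both on synthetic data; the triangular shape factor and the synthetic positive trend are assumptions, not values from the study.

```python
import numpy as np

def dune_morphometrics(height_m, spacing_m, cross_section_m2):
    """Height-spacing regression slope/intercept and equivalent sand thickness
    (EST), computed as cross-sectional sand area per unit of crest spacing."""
    slope, intercept = np.polyfit(spacing_m, height_m, 1)
    est_m = cross_section_m2 / spacing_m
    return slope, intercept, est_m

rng = np.random.default_rng(0)
spacing = rng.uniform(200.0, 2000.0, 200)                    # crest-to-crest spacing (m)
height = 2.0 + 0.01 * spacing + rng.normal(0.0, 2.0, 200)    # synthetic positive trend
area = 0.5 * height * (0.4 * spacing)                        # triangular cross-section (assumed shape)
slope, intercept, est = dune_morphometrics(height, spacing, area)
print(f"height-spacing slope: {slope:.4f}   mean EST: {est.mean():.1f} m")
```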

Relevance: 90.00%

Abstract:

To accelerate computing the convex hull of a set of n points, a heuristic procedure is often applied to reduce the points to a subset of s points, s ≤ n, which contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in O(n) time; second, no explicit sorting of the data is required; and third, the reduced set of s points forms a simple polygonal chain and can therefore be pipelined directly into an O(n)-time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points with a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found in experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n in the dataset, the greater the speedup factor achieved.
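The abstract does not spell out the preconditioning step, but one natural O(n + p) realisation of the idea is sketched below: bucket the points by integer x-column, keep only the lowest and highest y in each occupied column (these necessarily include every hull vertex), and emit them as an x-monotone upper chain followed by the reversed lower chain, which forms a simple polygonal chain suitable for a linear-time hull pass. Treat it as an illustration of the technique, not the authors' exact algorithm.

```python
import numpy as np

def precondition(points, p):
    """Keep, per integer x-column in [0, p), only the extreme y values.
    Runs in O(n + p) with no sorting; the survivors contain all hull vertices
    and are returned as a simple polygonal chain (upper chain left-to-right,
    then lower chain right-to-left)."""
    x, y = points[:, 0], points[:, 1]
    ymin = np.full(p, np.iinfo(np.int64).max)
    ymax = np.full(p, np.iinfo(np.int64).min)
    np.minimum.at(ymin, x, y)                 # column-wise minima
    np.maximum.at(ymax, x, y)                 # column-wise maxima
    occupied = ymax >= ymin                   # columns that received a point
    xs = np.nonzero(occupied)[0]
    upper = np.column_stack([xs, ymax[xs]])
    lower = np.column_stack([xs[::-1], ymin[xs[::-1]]])
    return np.vstack([upper, lower])

pts = np.random.default_rng(0).integers(0, 1000, size=(100_000, 2))
chain = precondition(pts, p=1000)
print(len(pts), "points reduced to", len(chain), "before the final hull pass")
```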

Relevance: 90.00%

Abstract:

Causing civilian casualties during military operations has become a highly politicised topic in international relations since the Second World War. Since the last decade of the 20th century, various scholars and political analysts have claimed that human life is valued more and more by the general international community. This argument has led many researchers to assume that democratic culture and traditions, together with modern ethical and moral concerns, have created a desire for a world without war or, at least, a demand that contemporary armed conflicts, if unavoidable, be far less lethal, forcing the military to seek new technologies that can minimise civilian casualties and collateral damage. Non-Lethal Weapons (NLW) – weapons intended to minimise civilian casualties and collateral damage – are based on technology that, during the 1990s, was expected to revolutionise the conduct of warfare by making it significantly less deadly. The rapid rise of interest in NLW, ignited by the American military twenty-five years ago, sparked an entirely new military, as well as academic, discourse concerning their potential contribution to military success on the battlefields of the 21st century. It seems, however, that beyond this debate very little has been done within the military forces themselves. This research suggests that the roots of this situation lie much deeper than simple professional misconduct by the military establishment, or the poor political behaviour of the political leaders who sent them to fight. Following the story of NLW in the U.S., Russia and Israel, this research focuses on the political and cultural factors that were supposed to push the military organisations of these countries to adopt new technologies and new operational and organisational concepts regarding NLW, in an attempt to minimise enemy civilian casualties during their military operations. This research finds that while the American, Russian and Israeli national characters are, undoubtedly, products of the unique historical experience of each of these nations, all three pay very little regard to foreigners' lives. Moreover, while it is generally argued that international political pressure is a crucial factor in significantly reducing harm to civilians and civilian infrastructure, the findings of this research suggest that the American, Russian and Israeli governments are well prepared and politically equipped to fend off international criticism. As the analyses of the American, Russian and Israeli cases reveal, the political-military leaderships of these countries have very few external or domestic reasons to minimise enemy civilian casualties through a fundamental, revolutionary change in their conduct of war. In other words, this research finds that the employment of NLW has failed because political leaderships ask their militaries to reduce enemy civilian casualties to a politically acceptable level rather than to the technologically possible minimum, since in the socio-cultural-political context of each country, support for the former appears to be significantly higher than for the latter.

Relevance: 90.00%

Abstract:

In this paper, we develop a novel constrained recursive least squares algorithm for adaptively combining a set of given multiple models. With data available in an online fashion, the linear combination coefficients of the submodels are adapted via the proposed algorithm. We propose to minimize the mean square error with a forgetting factor and to apply a sum-to-one constraint to the combination parameters. Moreover, an l1-norm constraint on the combination parameters is also applied, with the aim of achieving sparsity across the multiple models so that only a subset of models is selected into the final model. A weighted l2-norm is then applied as an approximation to the l1-norm term. As such, at each time step a closed-form solution for the model combination parameters is available. The contribution of this paper is to derive the proposed constrained recursive least squares algorithm, which is made computationally efficient by exploiting matrix theory. The effectiveness of the approach has been demonstrated using both simulated and real time series examples.
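A behavioural sketch of this kind of combiner is shown below (an assumed simplification, not the paper's derivation): exponentially forgotten least squares on the sub-model outputs, a sum-to-one equality constraint handled via a Lagrange multiplier, and a weighted l2 penalty, reweighted by the previous coefficients, standing in for the l1 term. A direct linear solve per step replaces the paper's fully recursive, matrix-theory-based update.

```python
import numpy as np

def combine_models(Y_sub, y, lam=0.98, gamma=0.05, eps=1e-3):
    """Adaptively combine sub-model predictions Y_sub[t, m] to track y[t]:
    forgetting-factor least squares + sum-to-one constraint + weighted l2
    penalty (an l1 surrogate reweighted by the previous coefficients)."""
    T, m = Y_sub.shape
    R = 1e-3 * np.eye(m)                  # exponentially weighted information matrix
    r = np.zeros(m)
    w = np.full(m, 1.0 / m)
    ones = np.ones(m)
    W = np.zeros((T, m))
    for t in range(T):
        phi = Y_sub[t]
        R = lam * R + np.outer(phi, phi)
        r = lam * r + phi * y[t]
        D = np.diag(1.0 / (np.abs(w) + eps))          # weighted l2 ~ l1
        A = R + gamma * D
        Ainv_r = np.linalg.solve(A, r)
        Ainv_1 = np.linalg.solve(A, ones)
        mu = (1.0 - ones @ Ainv_r) / (ones @ Ainv_1)  # Lagrange multiplier: sum(w) = 1
        w = Ainv_r + mu * Ainv_1
        W[t] = w
    return W

# Usage: three sub-models; the second one tracks the target best and should dominate.
rng = np.random.default_rng(0)
y = np.sin(0.05 * np.arange(500))
Y_sub = np.column_stack([y + rng.normal(0, s, 500) for s in (0.5, 0.05, 0.3)])
print(np.round(combine_models(Y_sub, y)[-1], 3))
```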