50 results for "Topologies on an arbitrary set"
Abstract:
The no response test is a new scheme in inverse problems for partial differential equations which was recently proposed in [D. R. Luke and R. Potthast, SIAM J. Appl. Math., 63 (2003), pp. 1292–1312] in the framework of inverse acoustic scattering problems. The main idea of the scheme is to construct special probing waves which are small on some test domain. Then the response for these waves is constructed. If the response is small, the unknown object is assumed to be a subset of the test domain. The response is constructed from one, several, or many particular solutions of the problem under consideration. In this paper, we investigate the convergence of the no response test for the reconstruction of information about inclusions D from the Cauchy values of solutions to the Helmholtz equation on an outer surface $\partial\Omega$ with $\overline{D} \subset \Omega$. We show that the one-wave no response test provides a criterion to test the analytic extensibility of a field. In particular, we investigate the construction of approximations for the set of singular points $N(u)$ of the total fields u from one given pair of Cauchy data. Thus, the no response test solves a particular version of the classical Cauchy problem. Also, if an infinite number of fields is given, we prove that a multifield version of the no response test reconstructs the unknown inclusion D. This is the first convergence analysis that has been achieved for the no response test.
Abstract:
There are various situations in which it is natural to ask whether a given collection of k functions, ρ j (r 1,…,r j ), j=1,…,k, defined on a set X, are the first k correlation functions of a point process on X. Here we describe some necessary and sufficient conditions on the ρ j ’s for this to be true. Our primary examples are X=ℝ d , X=ℤ d , and X an arbitrary finite set. In particular, we extend a result by Ambartzumian and Sukiasian showing realizability at sufficiently small densities ρ 1(r). Typically if any realizing process exists there will be many (even an uncountable number); in this case we prove, when X is a finite set, the existence of a realizing Gibbs measure with k body potentials which maximizes the entropy among all realizing measures. We also investigate in detail a simple example in which a uniform density ρ and translation invariant ρ 2 are specified on ℤ; there is a gap between our best upper bound on possible values of ρ and the largest ρ for which realizability can be established.
Case study of the use of remotely sensed data for modeling flood inundation on the River Severn, UK.
Abstract:
A methodology for using remotely sensed data to both generate and evaluate a hydraulic model of floodplain inundation is presented for a rural case study in the United Kingdom: Upton-upon-Severn. Remotely sensed data have been processed and assembled to provide an excellent test data set for both model construction and validation. In order to assess the usefulness of the data and the issues encountered in their use, two models for floodplain inundation were constructed: one based on an industry standard one-dimensional approach and the other based on a simple two-dimensional approach. The results and their implications for the future use of remotely sensed data for predicting flood inundation are discussed. Key conclusions for the use of remotely sensed data are that care must be taken to integrate different data sources for both model construction and validation and that improvements in ground height data shift the focus in terms of model uncertainties to other sources such as boundary conditions. The differences between the two models are found to be of minor significance.
Abstract:
We consider boundary value problems posed on an interval [0,L] for an arbitrary linear evolution equation in one space dimension with spatial derivatives of order n. We characterize a class of such problems that admit a unique solution and are well posed in this sense. Such well-posed boundary value problems are obtained by prescribing N conditions at x=0 and n−N conditions at x=L, where N depends on n and on the sign of the coefficient of the highest-degree term in the dispersion relation of the equation. For the problems in this class, we give a spectrally decomposed integral representation of the solution; moreover, we show that these are the only problems that admit such a representation. These results can be used to establish the well-posedness, at least locally in time, of some physically relevant nonlinear evolution equations in one space dimension.
Abstract:
Disease-weather relationships influencing Septoria leaf blotch (SLB) preceding growth stage (GS) 31 were identified using data from 12 sites in the UK covering 8 years. Based on these relationships, an early-warning predictive model for SLB on winter wheat was formulated to predict the occurrence of a damaging epidemic (defined as disease severity of 5% or more on the top three leaf layers). The final model was based on accumulated rain > 3 mm in the 80-day period preceding GS 31 (roughly from early February to the end of April) and accumulated minimum temperature with a 0 °C base in the 50-day period starting from 120 days preceding GS 31 (approximately January and February). The model was validated on an independent data set on which the prediction accuracy was influenced by cultivar resistance. Over all observations, the model had a true positive proportion of 0.61, a true negative proportion of 0.73, a sensitivity of 0.83, and a specificity of 0.18. True negative proportion increased to 0.85 for resistant cultivars and decreased to 0.50 for susceptible cultivars. Potential fungicide savings are most likely to be made with resistant cultivars, but such benefits would need to be identified with an in-depth evaluation.
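The two weather summaries that drive the model can be sketched as follows. This is a minimal illustration, assuming daily series indexed so that GS 31 falls at `gs31_index`; the exact aggregation rule for the "> 3 mm" criterion (summing qualifying rain vs. counting qualifying days) is an assumption, not taken from the abstract:

```python
def slb_predictors(daily_rain_mm, daily_tmin_c, gs31_index):
    """Weather summaries for the early-warning SLB model:
    - rain accumulated on days with > 3 mm during the 80 days before GS 31;
    - minimum temperature accumulated above a 0 degC base over the 50-day
      window starting 120 days before GS 31.
    The '> 3 mm' handling is an illustrative assumption."""
    rain_window = daily_rain_mm[gs31_index - 80:gs31_index]
    acc_rain = sum(r for r in rain_window if r > 3.0)
    temp_window = daily_tmin_c[gs31_index - 120:gs31_index - 70]
    acc_tmin = sum(max(t, 0.0) for t in temp_window)
    return acc_rain, acc_tmin
```

A classifier built on these two numbers would then compare them against fitted thresholds to flag a damaging epidemic.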
Abstract:
In the past decade, airborne LIght Detection And Ranging (LIDAR) has been recognised by both the commercial and public sectors as a reliable and accurate source for land surveying in environmental, engineering and civil applications. Commonly, the first task in investigating LIDAR point clouds is to separate ground and object points. Skewness Balancing has been proven to be an efficient non-parametric unsupervised classification algorithm to address this challenge. Initially developed for moderate terrain, this algorithm needs to be adapted to handle sloped terrain. This paper addresses the difficulty of object and ground point separation in LIDAR data over hilly terrain. A case study has been carried out on a LIDAR data set that is diverse in terms of data provider, resolution and LIDAR echo. Several sites in urban and rural areas with man-made structures and vegetation in moderate and hilly terrain have been investigated, and three categories have been identified. An urban scene with a river bank has been selected for deeper investigation in order to extend the existing algorithm. The results show that an iterative use of Skewness Balancing is suitable for sloped terrain.
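As described in the literature, the core of Skewness Balancing is to peel off the highest returns until the elevation histogram is no longer positively skewed. A minimal pure-Python sketch on elevations alone (illustrative only; production code operates on full 3-D point clouds, and per this paper the procedure is applied iteratively on sloped terrain):

```python
import statistics

def skewness(values):
    """Sample skewness of a collection of elevations."""
    n = len(values)
    mean = sum(values) / n
    sd = statistics.pstdev(values)
    if sd == 0.0:
        return 0.0
    return sum((v - mean) ** 3 for v in values) / (n * sd ** 3)

def skewness_balancing(elevations):
    """Split LIDAR returns into (ground, objects): remove the highest
    point until the elevation distribution is no longer positively
    skewed; what remains is labelled ground."""
    ground = sorted(elevations)       # ascending elevation
    objects = []
    while len(ground) > 2 and skewness(ground) > 0.0:
        objects.append(ground.pop())  # highest remaining point -> object
    return ground, objects
```

For example, a cluster of returns near 10 m with two high outliers is separated into five ground points and two object points.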
Abstract:
The popularity of wireless local area networks (WLANs) has resulted in their dense deployments around the world. While this increases capacity and coverage, the problem of increased interference can severely degrade the performance of WLANs. However, the impact of interference on throughput in dense WLANs with multiple access points (APs) has received very little prior research attention. This is believed to be due to 1) the inaccurate assumption that throughput is always a monotonically decreasing function of interference and 2) the prohibitively high complexity of an accurate analytical model. In this work, we first provide a useful classification of commonly found interference scenarios. Second, we investigate the impact of interference on throughput for each class based on an approach that determines the possibility of parallel transmissions. Extensive packet-level simulations using OPNET have been performed to support the observations made. Interestingly, the results have shown that in some topologies increased interference can lead to higher throughput, and vice versa.
Abstract:
We agree with Duckrow and Albano [Phys. Rev. E 67, 063901 (2003)] and Quian Quiroga et al. [Phys. Rev. E 67, 063902 (2003)] that mutual information (MI) is a useful measure of dependence for electroencephalogram (EEG) data, but we show that the improvement seen in the performance of MI in extracting dependence trends from EEG depends more on the type of MI estimator than on any embedding technique used. In an independent study conducted in search of an optimal MI estimator, in particular for EEG applications, we examined the performance of a number of MI estimators on the data set used by Quian Quiroga et al. in their original study, where the performance of different dependence measures on real data was investigated [Phys. Rev. E 65, 041903 (2002)]. We show that for EEG applications the best performance among the investigated estimators is achieved by the k-nearest-neighbors estimator, which supports the conjecture by Quian Quiroga et al. in Phys. Rev. E 67, 063902 (2003) that the nearest neighbor estimator is the most precise method for estimating MI.
Abstract:
Uncertainty affects all aspects of the property market, but one area where its impact is particularly significant is within feasibility analyses. Any development is affected by differences between market conditions at the conception of the project and the market realities at the time of completion. The feasibility study needs to address the possible outcomes based on an understanding of the current market. This requires the appraiser to forecast the most likely outcome relating to the sale price of the completed development, the construction costs and the timing of both. It also requires the appraiser to understand the impact of finance on the project. All these issues are time sensitive, and analysis needs to be undertaken to show the impact of time on the viability of the project. The future is uncertain, and a full feasibility analysis should be able to model the upside and downside risk pertaining to a range of possible outcomes. Feasibility studies are extensively used in Italy to determine land value, but they tend to be single-point analyses based upon a single set of "likely" inputs. In this paper we look at the practical impact of uncertainty in variables using a simulation model (Crystal Ball ©) with an actual case study of an urban redevelopment plan for an Italian Municipality. This allows the appraiser to address the issues of uncertainty involved and thus provide the decision maker with a better understanding of the risk of development. This technique is then refined using a "two-dimensional technique" to distinguish between "uncertainty" and "variability" and thus create a more robust model.
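A simulation-based feasibility analysis of the kind described replaces single-point inputs with sampled distributions. A minimal sketch in Python, standing in for the commercial tool; every input range below is invented for illustration and is not from the case study:

```python
import random

def feasibility_simulation(trials=10000, seed=42):
    """Monte Carlo sketch of a development feasibility study: sample
    uncertain inputs from (illustrative) triangular distributions
    instead of fixing them at single 'likely' values, and return the
    full distribution of residual values."""
    random.seed(seed)
    residuals = []
    for _ in range(trials):
        gdv = random.triangular(9.0e6, 12.0e6, 10.5e6)   # gross development value
        cost = random.triangular(5.5e6, 7.5e6, 6.5e6)    # construction cost
        months = random.triangular(18.0, 30.0, 24.0)     # project duration
        # simple interest on half-drawn debt at an assumed 6% annual rate
        finance = cost * 0.5 * 0.06 * (months / 12.0)
        residuals.append(gdv - cost - finance)
    return residuals
```

Instead of one point estimate, the appraiser can now report the spread of outcomes, e.g. the probability that the residual value falls below an acquisition price.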
Abstract:
The problem of water wave scattering by a circular ice floe, floating in fluid of finite depth, is formulated and solved numerically. Unlike previous investigations of such situations, here we allow the thickness of the floe (and the fluid depth) to vary axisymmetrically and also incorporate a realistic non-zero draught. A numerical approximation to the solution of this problem is obtained to an arbitrary degree of accuracy by combining a Rayleigh–Ritz approximation of the vertical motion with an appropriate variational principle. This numerical solution procedure builds upon the work of Bennetts et al. (2007, J. Fluid Mech., 579, 413–443). As part of the numerical formulation, we utilize a Fourier cosine expansion of the azimuthal motion, resulting in a system of ordinary differential equations to solve in the radial coordinate for each azimuthal mode. The displayed results concentrate on the response of the floe rather than the scattered wave field and show that the effects of introducing the new features of varying floe thickness and a realistic draught are significant.
Abstract:
Ethnopharmacological relevance: Studies on traditional Chinese medicine (TCM), like those of other systems of traditional medicine (TM), are very variable in their quality, content and focus, resulting in issues around their acceptability to the global scientific community. In an attempt to address these issues, a European Union-funded FP7 consortium, composed of both Chinese and European scientists and named "Good practice in traditional Chinese medicine" (GP-TCM), has devised a series of guidelines and technical notes to facilitate good practice in collecting, assessing and publishing TCM literature, as well as highlighting the scope of information that should be in future publications on TMs. This paper summarises these guidelines, together with what has been learned through GP-TCM collaborations, focusing on some common problems and proposing solutions. The recommendations also provide a template for the evaluation of other types of traditional medicine such as Ayurveda, Kampo and Unani. Materials and methods: GP-TCM provided a means by which experts in different areas relating to TCM were able to collaborate in forming a literature review good practice panel, which operated through e-mail exchanges, teleconferences and focused discussions at annual meetings. The panel involved coordinators and representatives of each GP-TCM work package (WP), with the latter managing the testing and refining of such guidelines within the context of their respective WPs and providing feedback. Results: A Good Practice Handbook for Scientific Publications on TCM was drafted during the three years of the consortium, showing the value of such networks. A "deliverable – central questions – labour division" model was established to guide the literature evaluation studies of each WP. The model investigated various scoring systems and their ability to provide consistent and reliable semi-quantitative assessments of the literature, notably in respect of the botanical ingredients involved and the scientific quality of the work described. This resulted in the compilation of (i) a robust scoring system and (ii) a set of minimum standards for publishing in the herbal medicines field, based on an analysis of the main problems identified in published TCM literature.
Abstract:
A theory of available energy for axisymmetric circulations is presented. The theory is a generalization of the classical theory of available potential energy, in that it accounts for both thermal and angular momentum constraints on the circulation. The generalization relies on the Hamiltonian structure of the (conservative) dynamics, is exact at finite amplitude, and has a local form. Application of the theory is presented for the case of an axisymmetric vortex on an f-plane in the context of the Boussinesq equations.
Abstract:
Hourly data (1994–2009) of surface ozone concentrations at eight monitoring sites have been investigated to assess target level and long-term objective exceedances and their trends. The European Union (EU) ozone target value for human health (60 ppb, maximum daily 8-hour running mean) has been exceeded in a number of years at almost all sites, but the set limit of 25 exceedances in one year has never been exceeded. The second highest annual hourly and 4th highest annual 8-hourly mean ozone concentrations have shown a statistically significant negative trend for the inland sites of Cork-Glashaboy, Monaghan and Lough Navar, and no significant trend for the Mace Head site. Peak afternoon ozone concentrations averaged over the three-year period from 2007 to 2009 have been found to be lower than the corresponding values over the three-year period from 1996 to 1998 for two sites: Cork-Glashaboy and Lough Navar. The EU long-term objective value of AOT40 (Accumulated Ozone exposure over a Threshold of 40 ppb) for the protection of vegetation (3 ppm-hours, calculated from May to July) has been exceeded, on an individual-year basis, for two sites: Mace Head and Valentia. The critical level for the protection of forest (10 ppm-hours from April to September) has not been exceeded at any site except Valentia in the year 2003. AOT40-Vegetation shows a significant negative trend in the 3-year running average at the Cork-Glashaboy (−0.13±0.02 ppm-hours per year) and Lough Navar (−0.05±0.02 ppm-hours per year) sites, and a negative but not statistically significant trend at Monaghan (−0.03±0.03 ppm-hours per year). No statistically significant trend was observed for the coastal site of Mace Head. Overall, with the exception of the Mace Head and Monaghan sites, ozone measurement records at Irish sites show a downward trend in the peak values that affect human health and vegetation.
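AOT40 simply accumulates the hourly excess over 40 ppb. A minimal sketch, assuming the input series has already been restricted to the relevant daylight hours of the accumulation window (May–July for the vegetation metric, April–September for forest):

```python
def aot40_ppm_hours(hourly_ozone_ppb):
    """Accumulated Ozone exposure over a Threshold of 40 ppb,
    in ppm-hours: sum the hourly excess over 40 ppb and divide
    by 1000 to convert ppb-hours to ppm-hours."""
    return sum(max(c - 40.0, 0.0) for c in hourly_ozone_ppb) / 1000.0
```

For example, hours at 50, 30 and 45 ppb contribute 10 + 0 + 5 = 15 ppb-hours, i.e. 0.015 ppm-hours, to be compared against the 3 ppm-hours vegetation objective.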
Abstract:
In this paper, the concept of available potential energy (APE) density is extended to a multicomponent Boussinesq fluid with a nonlinear equation of state. As shown by previous studies, the APE density is naturally interpreted as the work against buoyancy forces that a parcel needs to perform to move from a notional reference position at which its buoyancy vanishes to its actual position; because buoyancy can be defined relative to an arbitrary reference state, so can APE density. The concept of APE density is therefore best viewed as defining a class of locally defined energy quantities, each tied to a different reference state, rather than as a single energy variable. An important result, for which a new proof is given, is that the volume-integrated APE density always exceeds Lorenz's globally defined APE, except when the reference state coincides with Lorenz's adiabatically re-arranged reference state of minimum potential energy. A parcel reference position is systematically defined as a level of neutral buoyancy (LNB): depending on the nature of the fluid and on how the reference state is defined, a parcel may have one, none, or multiple LNBs within the fluid. Multiple LNBs are only possible for a multicomponent fluid whose density depends on pressure. When no LNB exists within the fluid, a parcel reference position is assigned at the minimum or maximum geopotential height. The class of APE densities thus defined admits local and global balance equations, which all exhibit a conversion with kinetic energy, a production term by boundary buoyancy fluxes, and a dissipation term by internal diffusive effects. Different reference states alter the partition between APE production and dissipation, but affect neither the net conversion between kinetic energy and APE nor the difference between APE production and dissipation.
We argue that the possibility of constructing APE-like budgets based on reference states other than Lorenz's reference state is more important than has been previously assumed, and we illustrate the feasibility of doing so in the context of idealised and realistic oceanic examples, using as reference states one with constant density and another defined as the horizontal-mean density field; in the latter case, the resulting APE density is found to be a reasonable approximation of the APE density constructed from Lorenz's reference state, while being computationally cheaper.
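In symbols, the parcel interpretation above can be written as follows (a sketch, with $b$ the buoyancy of the parcel relative to the chosen reference state and $z_r$ its level of neutral buoyancy):

\[
A(\mathbf{x}) = -\int_{z_r}^{z} b(S,\theta,z')\,\mathrm{d}z', \qquad b(S,\theta,z_r) = 0,
\]

so that $A \ge 0$ for a statically stable reference stratification, and the volume integral of $A$ bounds Lorenz's APE from above as stated.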