969 results for level set method
Abstract:
This letter presents an effective approach for selecting appropriate terrain modeling methods when forming a digital elevation model (DEM). The approach achieves a balance between modeling accuracy and modeling speed. A terrain complexity index is defined to represent a terrain's complexity. A support vector machine (SVM) classifies terrain surfaces as either complex or moderate based on this index together with the terrain elevation range. The classification result recommends a terrain modeling method for a given data set in accordance with its required modeling accuracy. Sample terrain data from the lunar surface are used to construct an experimental data set. The results show that the terrain complexity index properly reflects terrain complexity, and that an SVM classifier derived from both the terrain complexity index and the terrain elevation range is more effective and generic than one designed from either the terrain complexity index or the terrain elevation range alone. The statistical results show an average SVM classification accuracy of about 84.3% ± 0.9% over the terrain types (complex or moderate). For various ratios of complex to moderate terrain types in a selected data set, the DEM modeling speed increases by up to 19.5% at a given DEM accuracy.
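A minimal sketch of the two-feature classification step described above, assuming scikit-learn; the feature values and kernel choice are hypothetical (the letter does not specify them), and the complexity index and elevation range would be computed upstream from the terrain data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical training data: one row per terrain sample, with the two
# features used in the letter: [terrain complexity index, elevation range].
X = np.array([[0.12, 340.0], [0.85, 2100.0], [0.30, 560.0], [0.91, 1800.0],
              [0.22, 410.0], [0.77, 1500.0], [0.15, 300.0], [0.88, 2300.0]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 0 = moderate terrain, 1 = complex terrain

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = SVC(kernel="rbf")  # kernel choice is an assumption
clf.fit(X_train, y_train)

# The predicted class would then drive the choice of terrain modeling
# method for the required DEM accuracy.
print(clf.predict(X_test))
```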
Abstract:
A fingerprint method for detecting anthropogenic climate change is applied to new simulations with a coupled ocean-atmosphere general circulation model (CGCM) forced by increasing concentrations of greenhouse gases and aerosols covering the years 1880 to 2050. In addition to the anthropogenic climate change signal, the space-time structure of the natural climate variability for near-surface temperatures is estimated from instrumental data over the last 134 years and two 1000 year simulations with CGCMs. The estimates are compared with paleoclimate data over 570 years. The space-time information on both the signal and the noise is used to maximize the signal-to-noise ratio of a detection variable obtained by applying an optimal filter (fingerprint) to the observed data. The inclusion of aerosols slows the predicted future warming. The probability that the observed increase in near-surface temperatures in recent decades is of natural origin is estimated to be less than 5%. However, this number is dependent on the estimated natural variability level, which is still subject to some uncertainty.
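A toy numpy sketch of the optimal fingerprint idea described above: the filter is the signal pattern rotated by the inverse noise covariance, and the detection variable is its projection onto the observations. All arrays below are hypothetical stand-ins, not the study's data:

```python
import numpy as np

def detection_variable(signal, noise_cov, observed):
    """Optimal fingerprint: f = C^{-1} g maximizes the signal-to-noise ratio
    of the detection variable d = f . x, where g is the anthropogenic signal
    pattern, C the natural-variability covariance, x the observed anomalies."""
    fingerprint = np.linalg.solve(noise_cov, signal)
    d = fingerprint @ observed
    # Normalize by the standard deviation of d under natural variability.
    snr = d / np.sqrt(fingerprint @ noise_cov @ fingerprint)
    return d, snr

rng = np.random.default_rng(0)
g = rng.normal(size=10)                  # hypothetical GHG+aerosol signal pattern
C = np.diag(rng.uniform(0.5, 2.0, 10))   # hypothetical noise covariance
x = 0.8 * g + rng.multivariate_normal(np.zeros(10), C)
print(detection_variable(g, C, x))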
Abstract:
In this paper, we develop a method, termed the Interaction Distribution (ID) method, for the analysis of quantitative ecological network data. In many cases, quantitative network data sets are under-sampled, i.e. many interactions are poorly sampled or remain unobserved. Hence, the output of statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks. The ID method can support assessment and inference of under-sampled ecological network data. In the current paper, we illustrate and discuss the ID method based on the properties of plant-animal pollination data sets of flower visitation frequencies. However, the ID method may be applied to other types of ecological networks. The method can supplement existing network analyses based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1) $p_{i,j}$, the probability that a visit made by the $i$-th pollinator species takes place on the $j$-th plant species; (2) $q_{i,j}$, the probability that a visit received by the $j$-th plant species is made by the $i$-th pollinator. The method applies the Dirichlet distribution to estimate these two probabilities from a given empirical data set. The estimated mean values for $p_{i,j}$ and $q_{i,j}$ reflect the relative differences between recorded numbers of visits for different pollinator and plant species, and the estimated uncertainty of $p_{i,j}$ and $q_{i,j}$ decreases with higher numbers of recorded visits.
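A short sketch of the estimation step under the stated Dirichlet model, assuming a symmetric prior (the concentration parameter `alpha` is an assumption; the paper defines its own choices):

```python
import numpy as np

def dirichlet_means(counts, alpha=1.0):
    """Posterior mean visit probabilities under a symmetric Dirichlet prior.

    counts[i, j] = recorded visits by pollinator species i to plant species j.
    Returns (p, q): p[i, j] estimates the probability that a visit *made by*
    pollinator i lands on plant j (rows sum to 1); q[i, j] estimates the
    probability that a visit *received by* plant j was made by pollinator i
    (columns sum to 1). Posterior spread shrinks as the totals grow, matching
    the abstract's remark about uncertainty and sample size."""
    counts = np.asarray(counts, dtype=float)
    p = (counts + alpha) / (counts.sum(axis=1, keepdims=True) + alpha * counts.shape[1])
    q = (counts + alpha) / (counts.sum(axis=0, keepdims=True) + alpha * counts.shape[0])
    return p, q

visits = np.array([[12, 0, 3],
                   [ 1, 7, 0]])  # hypothetical flower-visitation counts
p, q = dirichlet_means(visits)
```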
Abstract:
Purpose – This study aims to examine the moderating effects of external environment and organisational structure in the relationship between business-level strategy and organisational performance. Design/methodology/approach – The focus of the study is on manufacturing firms in the UK belonging to the electrical and mechanical engineering sectors, and respondents were CEOs. Both objective and subjective measures were used to assess performance. Non-response bias was assessed statistically and appropriate measures taken to minimise the impact of common method variance (CMV). Findings – The results indicate that environmental dynamism and hostility act as moderators in the relationship between business-level strategy and relative competitive performance. In low-hostility environments a cost-leadership strategy and in high-hostility environments a differentiation strategy lead to better performance compared with competitors. In highly dynamic environments a cost-leadership strategy and in low dynamism environments a differentiation strategy are more helpful in improving financial performance. Organisational structure moderates the relationship of both the strategic types with ROS. However, in the case of ROA, the moderating effect of structure was found only in its relationship with cost-leadership strategy. A mechanistic structure is helpful in improving the financial performance of organisations adopting either a cost-leadership or a differentiation strategy. Originality/value – Unlike many other empirical studies, the study makes an important contribution to the literature by examining the moderating effects of both environment and structure on the relationship between business-level strategy and performance in a detailed manner, using moderated regression analysis.
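The "moderated regression analysis" named above is conventionally implemented as a regression with interaction terms, where a significant strategy x environment product term indicates moderation. A minimal sketch with statsmodels and hypothetical variable names (not the study's measures):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level data: a differentiation-strategy score, an
# environmental dynamism score, and relative competitive performance.
df = pd.DataFrame({
    "performance": [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.8, 3.1],
    "differentiation": [2.0, 4.0, 1.5, 4.5, 3.0, 1.0, 5.0, 2.5],
    "dynamism": [1.0, 3.0, 2.0, 4.0, 2.5, 1.5, 4.5, 2.0],
})

# Moderated regression: "a * b" expands to a + b + a:b, and the
# interaction coefficient carries the moderating effect being tested.
model = smf.ols("performance ~ differentiation * dynamism", data=df).fit()
print(model.summary())
```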
Abstract:
We consider the numerical treatment of second kind integral equations on the real line of the form $\phi(s) = \psi(s) + \int_{-\infty}^{+\infty} \kappa(s-t)\,z(t)\,\phi(t)\,dt$, $s \in \mathbb{R}$ (abbreviated $\phi = \psi + K_z\phi$), in which $\kappa \in L_1(\mathbb{R})$, $z \in L_\infty(\mathbb{R})$ and $\psi \in BC(\mathbb{R})$, the space of bounded continuous functions on $\mathbb{R}$, are assumed known, and $\phi \in BC(\mathbb{R})$ is to be determined. We first derive sharp error estimates for the finite section approximation (reducing the range of integration to $[-A, A]$) via bounds on $(I - K_z)^{-1}$ as an operator on spaces of weighted continuous functions. Numerical solution by a simple discrete collocation method on a uniform grid on $\mathbb{R}$ is then analysed: in the case when $z$ is compactly supported this leads to a coefficient matrix which allows a rapid matrix-vector multiply via the FFT. To utilise this possibility we propose a modified two-grid iteration, a feature of which is that the coarse grid matrix is approximated by a banded matrix, and analyse convergence and computational cost. In cases where $z$ is not compactly supported a combined finite section and two-grid algorithm can be applied, and we extend the analysis to this case. As an application we consider acoustic scattering in the half-plane with a Robin or impedance boundary condition, which we formulate as a boundary integral equation of the class studied. Our final result is that if $z$ (related to the boundary impedance in the application) takes values in an appropriate compact subset $Q$ of the complex plane, then the difference between $\phi(s)$ and its finite section approximation computed numerically using the iterative scheme proposed is $\le C_1\left[kh\log(1/(kh)) + (1-\Theta)^{-1/2}(kA)^{-1/2}\right]$ in the interval $[-\Theta A, \Theta A]$ ($\Theta < 1$) for $kh$ sufficiently small, where $k$ is the wavenumber and $h$ the grid spacing. Moreover, this numerical approximation can be computed in $\le C_2 N \log N$ operations, where $N = 2A/h$ is the number of degrees of freedom. The values of the constants $C_1$ and $C_2$ depend only on the set $Q$ and not on the wavenumber $k$ or the support of $z$.
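The fast matrix-vector multiply mentioned above relies on the convolution structure of the kernel: on a uniform grid the collocation matrix is Toeplitz, and a Toeplitz matrix embeds in a circulant that the FFT diagonalizes. A minimal sketch of that standard trick (not the paper's full two-grid scheme; the kernel below is a hypothetical stand-in):

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply the Toeplitz matrix with given first column/row by x in
    O(N log N) via circulant embedding and the FFT."""
    n = len(x)
    # First column of a 2n x 2n circulant whose top-left n x n block is T.
    col = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
    return y[:n].real

# Hypothetical discretized convolution kernel kappa(s - t) on a uniform grid.
n, h = 8, 0.1
s = h * np.arange(n)
kappa = lambda r: np.exp(-np.abs(r))   # stand-in, not the paper's kernel
c = kappa(s - s[0])                    # first column: kappa(s_i - t_0)
r = kappa(s[0] - s)                    # first row: kappa(s_0 - t_j)
print(toeplitz_matvec(c, r, np.ones(n)))
```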
Abstract:
We propose a Nyström/product integration method for a class of second kind integral equations on the real line which arise in problems of two-dimensional scalar and elastic wave scattering by unbounded surfaces. Stability and convergence of the method are established, with convergence rates dependent on the smoothness of components of the kernel. The method is applied to the problem of acoustic scattering by a sound-soft one-dimensional surface which is the graph of a function f, and superalgebraic convergence is established in the case when f is infinitely smooth. Numerical results are presented illustrating this behavior for the case when f is periodic (the diffraction grating case). The Nyström method for this problem is stable and convergent uniformly with respect to the period of the grating, in contrast to standard integral equation methods for diffraction gratings, which fail at a countable set of grating periods.
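For orientation, a minimal Nyström discretization of a second kind equation on a finite interval with a smooth kernel; the paper's real-line, product-integration version additionally handles kernel singularities and the unbounded domain, so this sketch only shows the basic mechanism, with a hypothetical kernel and right-hand side:

```python
import numpy as np

def nystrom_solve(kernel, rhs, a, b, n):
    """Solve phi(s) - int_a^b k(s,t) phi(t) dt = psi(s) by Nystrom's method
    with Gauss-Legendre quadrature; returns nodes and nodal values of phi."""
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)   # map nodes from [-1, 1] to [a, b]
    w = 0.5 * (b - a) * w
    # Collocate at the quadrature nodes: (I - K diag(w)) phi = psi.
    A = np.eye(n) - kernel(t[:, None], t[None, :]) * w[None, :]
    return t, np.linalg.solve(A, rhs(t))

k = lambda s, t: 0.25 * np.exp(-(s - t) ** 2)   # hypothetical smooth kernel
psi = lambda s: np.cos(s)                        # hypothetical right-hand side
t, phi = nystrom_solve(k, psi, 0.0, 1.0, 32)
```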
Abstract:
Currently there are few observations of the urban wind field at heights other than rooftop level. Remote sensing instruments such as Doppler lidars provide wind speed data at many heights, which would be useful in determining wind loadings of tall buildings and predicting local air quality. Studies comparing remote sensing with traditional anemometers carried out in flat, homogeneous terrain often use scan patterns which take several minutes. In an urban context the flow changes quickly in space and time, so faster scans are required to ensure little change in the flow over the scan period. We compare 3993 h of wind speed data collected using a three-beam Doppler lidar wind-profiling method with data from a sonic anemometer (at 190 m). Both instruments are located in central London, UK, a highly built-up area. Based on wind profile measurements every 2 min, the uncertainty in the hourly mean wind speed due to the sampling frequency is 0.05–0.11 m s⁻¹. The lidar tended to overestimate the wind speed by ≈0.5 m s⁻¹ for wind speeds below 20 m s⁻¹. Accuracy may be improved by increasing the scanning frequency of the lidar. This method is considered suitable for use in urban areas.
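For background on the three-beam profiling idea: each beam measures the projection of the wind vector onto its pointing direction, so three independent beams give a 3x3 linear system for (u, v, w). The beam geometry and values below are hypothetical, not those of the instrument used in the study:

```python
import numpy as np

def beam_unit_vector(azimuth_deg, elevation_deg):
    """Unit pointing vector for a beam (x = east, y = north, z = up)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)])

# Hypothetical geometry: one vertical beam plus two slanted beams.
beams = np.array([beam_unit_vector(0, 90),
                  beam_unit_vector(0, 70),
                  beam_unit_vector(90, 70)])
radial = np.array([0.1, 2.4, 3.1])  # hypothetical line-of-sight velocities (m/s)

# Each radial velocity is the dot product of the beam vector with the wind.
u, v, w = np.linalg.solve(beams, radial)
speed = np.hypot(u, v)              # horizontal wind speed at this range gate
```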
Abstract:
Introduction: Young onset dementia (YOD) affects about 1 in 1500 people aged under 65 years in the UK. It is associated with loss of employment and independence and an increase in psychological distress. This project set out to identify the benefits of a structured 2-hour-per-week activity programme of gardening for people with YOD. Method: A mixed qualitative-quantitative study of therapeutic gardening for people with YOD, measuring outcomes for both participants with YOD and their carers. Twelve participants were recruited from a county-wide older adults mental health service, based on onset of dementia being before 65 years of age (range 43-65 years). Two dropped out and one died during the project. Measures included the Mini Mental State Examination, Bradford Well Being Profile, Large Allen Cognitive Level Screen and Pool Activity Level. Results: Over a one-year period the carers of the people with YOD found that the project had given participants a renewed sense of purpose and increased well-being, while cognitive functioning declined. Conclusions: This study suggests that a meaningful guided activity programme can maintain or improve well-being in the presence of cognitive deterioration.
Abstract:
Introduction. Feature usage is a prerequisite to realising the benefits of investments in feature-rich systems. We propose that conceptualising the dependent variable 'system use' as 'level of use' and specifying it as a formative construct has greater value for measuring the post-adoption use of feature-rich systems. We then validate the content of the construct as a first step in developing a research instrument to measure it. The context of our study is the post-adoption use of electronic medical records (EMR) by primary care physicians. Method. Initially, a literature review of the empirical context defines the scope based on prior studies. Core features identified from the literature are then further refined with the help of experts in a consensus-seeking process that follows the Delphi technique. Results. The methodology was successfully applied to EMRs, which were selected as an example of feature-rich systems. A review of EMR usage and regulatory standards provided the feature input for the first round of the Delphi process. A panel of experts then reached consensus after four rounds, identifying ten task-based features that would be indicators of level of use. Conclusions. To study why some users deploy more advanced features than others, theories of post-adoption require a rich formative dependent variable that measures level of use. We have demonstrated that a context-sensitive literature review followed by refinement through a consensus-seeking process is a suitable methodology for validating the content of this dependent variable. This is the first step of instrument development prior to statistical confirmation with a larger sample.
Abstract:
Capacity dimensioning is one of the key problems in wireless network planning. Analytical and simulation methods are usually used to pursue accurate capacity dimensioning of wireless networks. In this paper, an analytical capacity dimensioning method for WCDMA with high-speed wireless links is proposed, based on an analysis of the relations between system performance and high-speed wireless transmission technologies such as H-ARQ, AMC and fast scheduling. It evaluates system capacity in closed-form expressions at the link level and the system level. Numerical results show that the proposed method can calculate link-level and system-level capacity for a WCDMA system with HSDPA and HSUPA.
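As a loose illustration of the link-level ingredient only (the abstract does not reproduce the paper's actual closed-form expressions), adaptive modulation and coding (AMC) is often modelled by mapping an SINR distribution onto a finite set of rates; the thresholds, rates and SINR distribution below are hypothetical:

```python
import numpy as np

# Hypothetical AMC table: SINR thresholds (dB) and corresponding rates (Mbit/s).
thresholds_db = np.array([-np.inf, 0.0, 5.0, 10.0, 15.0])
rates_mbps = np.array([0.0, 0.5, 1.2, 2.4, 3.6])

def mean_amc_throughput(sinr_db_samples):
    """Average link throughput: each SINR sample gets the highest AMC rate
    whose threshold it exceeds (a crude stand-in for link-level capacity)."""
    idx = np.searchsorted(thresholds_db, sinr_db_samples, side="right") - 1
    return rates_mbps[idx].mean()

rng = np.random.default_rng(0)
sinr = rng.normal(8.0, 4.0, 10000)  # hypothetical SINR distribution (dB)
print(mean_amc_throughput(sinr))
```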
Abstract:
In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Starting from the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained, comprising a considerably smaller number of parameters than those generated by the conventional PNN algorithm. Three benchmark examples are considered, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparison with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
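A compact sketch of the OLS selection idea the abstract relies on: candidate regressors are orthogonalized against those already chosen, and the one with the largest error reduction ratio (ERR) is picked at each step. This is the generic Chen-Billings-style procedure, not the paper's exact PNN variant; the data are hypothetical:

```python
import numpy as np

def ols_select(P, y, n_terms):
    """Greedy orthogonal least squares: pick n_terms columns of the candidate
    regressor matrix P by error reduction ratio against the target y."""
    selected, W = [], []
    yy = y @ y
    for _ in range(n_terms):
        best, best_err = None, -1.0
        for j in range(P.shape[1]):
            if j in selected:
                continue
            w = P[:, j].copy()
            for wk in W:  # orthogonalize against already-chosen terms
                w -= (wk @ P[:, j]) / (wk @ wk) * wk
            err = (w @ y) ** 2 / ((w @ w) * yy)  # error reduction ratio
            if err > best_err:
                best, best_err, best_w = j, err, w
        selected.append(best)
        W.append(best_w)
    return selected

rng = np.random.default_rng(1)
P = rng.normal(size=(100, 12))  # hypothetical candidate PD regressors
y = 2.0 * P[:, 3] - 1.5 * P[:, 7] + 0.1 * rng.normal(size=100)
print(ols_select(P, y, 2))      # expected to recover columns 3 and 7
```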
The capability-affordance model: a method for analysis and modelling of capabilities and affordances
Abstract:
Existing capability models lack qualitative and quantitative means to compare business capabilities. This paper extends previous work and uses affordance theories to consistently model and analyse capabilities. We use the concept of objective and subjective affordances to model capability as a tuple of a set of resource affordance system mechanisms and action paths, dependent on one or more critical affordance factors. We identify an affordance chain of subjective affordances by which affordances work together to enable an action, and an affordance path that links action affordances to create a capability system. We define the mechanism and path underlying capability. We show how affordance modelling notation (AMN) can represent the affordances comprising a capability. We propose a method to compare capabilities quantitatively and qualitatively using efficiency, effectiveness and quality metrics. The method is demonstrated by a medical example comparing the capability of syringe and needleless anaesthetic systems.
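One possible rendering of the capability tuple described above as a data structure; every field and metric name below is illustrative, not the paper's notation or AMN itself:

```python
from dataclasses import dataclass

@dataclass
class Affordance:
    """A resource affordance: a mechanism an actor can exercise on a resource."""
    name: str
    mechanism: str

@dataclass
class Capability:
    """Capability sketched as a tuple of an action path (the affordance chain
    linking resource affordance mechanisms) and critical affordance factors,
    plus the three comparison metrics the paper proposes."""
    action_path: list[Affordance]
    critical_factors: dict[str, float]  # e.g. factor name -> required level
    efficiency: float
    effectiveness: float
    quality: float

def compare(a: Capability, b: Capability) -> dict[str, float]:
    """Quantitative comparison on the three metrics (positive favours a)."""
    return {m: getattr(a, m) - getattr(b, m)
            for m in ("efficiency", "effectiveness", "quality")}

syringe = Capability([Affordance("inject", "needle penetration")],
                     {"sterility": 1.0}, 0.7, 0.9, 0.8)
needleless = Capability([Affordance("inject", "jet injection")],
                        {"sterility": 1.0}, 0.9, 0.85, 0.9)
print(compare(needleless, syringe))
```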
Abstract:
Prism is a modular classification rule generation method based on the 'separate and conquer' approach, an alternative to the rule induction approach using decision trees, also known as 'divide and conquer'. Prism often achieves a similar level of classification accuracy to decision trees, but tends to produce a more compact, noise-tolerant set of classification rules. As with other classification rule generation methods, a principal problem arising with Prism is overfitting due to over-specialised rules. In addition, over-specialised rules increase the associated computational complexity. These problems can be addressed by pruning methods. For the Prism method, two pruning algorithms have recently been introduced for reducing overfitting of classification rules: J-pruning and Jmax-pruning. Both algorithms are based on the J-measure, an information-theoretic means of quantifying the theoretical information content of a rule. Jmax-pruning attempts to exploit the J-measure to its full potential, because J-pruning does not actually achieve this and may even lead to underfitting. A series of experiments has shown that Jmax-pruning may outperform J-pruning in reducing overfitting. However, Jmax-pruning is computationally relatively expensive and may also lead to underfitting. This paper reviews the Prism method and the two existing pruning algorithms above. It also proposes a novel pruning algorithm called Jmid-pruning. The latter is based on the J-measure and reduces overfitting to a similar level as the other two algorithms, but is better at avoiding underfitting and unnecessary computational effort. The authors conduct an experimental study of the performance of the Jmid-pruning algorithm in terms of classification accuracy and computational efficiency. The algorithm is also evaluated comparatively with the J-pruning and Jmax-pruning algorithms.
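For reference, a small sketch of the J-measure as commonly stated (Smyth and Goodman's information-theoretic rule measure) for a rule "IF antecedent THEN class"; the pruning algorithms compared above all build on this quantity, and the statistics below are hypothetical:

```python
import numpy as np

def j_measure(p_x, p_y, p_y_given_x):
    """J-measure of a rule IF X THEN Y: the rule-firing probability p(x)
    times the cross-entropy between the posterior p(y|x) and the prior p(y)."""
    def term(post, prior):
        return 0.0 if post == 0.0 else post * np.log2(post / prior)
    j = term(p_y_given_x, p_y) + term(1.0 - p_y_given_x, 1.0 - p_y)
    return p_x * j

# Hypothetical rule statistics: fires on 30% of instances, class prior 0.5,
# class frequency 0.9 among instances where the rule fires.
print(j_measure(0.3, 0.5, 0.9))
```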
Abstract:
This paper presents a new method to calculate sky view factors (SVFs) from high resolution urban digital elevation models using a shadow casting algorithm. By utilizing weighted annuli to derive the SVF from hemispherical images, the light source positions can be predefined and uniformly spread over the whole hemisphere, whereas another method applies a random set of light source positions with a cosine-weighted distribution of sun altitude angles. The two methods give similar results based on a large number of SVF images. However, when comparing variations at pixel level between an image generated using the new method presented in this paper and the image from the random method, anisotropic patterns occur. The absolute mean difference between the two methods is 0.002, ranging up to 0.040. The maximum difference can be as much as 0.122. Since the SVF is a geometrically derived parameter, the anisotropic errors created by the random method must be considered significant.
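To illustrate the annulus-weighting idea: for a horizontal surface, a fully visible annulus at zenith angle θ of width dθ contributes sin(2θ)dθ to the SVF, which integrates to 1 over the open hemisphere. A minimal sketch with hypothetical visibility fractions (the paper's exact weighting scheme may differ):

```python
import numpy as np

def sky_view_factor(visible_fraction):
    """SVF from per-annulus sky visibility fractions.

    visible_fraction[i] is the fraction of annulus i (equal zenith-angle
    bands from zenith to horizon) not blocked by buildings. Each annulus is
    weighted by sin(2*theta)*d_theta, i.e. its solid angle projected onto a
    horizontal surface; the weights sum to ~1 for a fully open sky."""
    n = len(visible_fraction)
    d_theta = (np.pi / 2) / n
    theta = d_theta * (np.arange(n) + 0.5)   # annulus mid zenith angles
    weights = np.sin(2 * theta) * d_theta
    return float(weights @ np.asarray(visible_fraction))

# Hypothetical street canyon: open sky near zenith, obstructed near horizon.
print(sky_view_factor([1.0, 1.0, 0.9, 0.6, 0.3, 0.1]))
```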
Abstract:
The assessment of age-at-death in non-adult skeletal remains is under constant review. However, in many past societies an individual's physical maturation may have been more important in social terms than their exact age, particularly during the period of adolescence. A recent article (Shapland and Lewis, Am J Phys Anthropol 151 (2013) 302–310) highlighted a set of dental and skeletal indicators that may be useful in mapping the progress of the pubertal growth spurt. This article presents a further skeletal indicator of adolescent development commonly used by modern clinicians: cervical vertebral maturation (CVM). This method is applied to a collection of 594 adolescents from the medieval cemetery of St. Mary Spital, London. Analysis reveals a potential delay in the ages of attainment of the later CVM stages compared with modern adolescents, presumably reflecting negative environmental conditions for growth and development. The data gathered on CVM are compared with other skeletal indicators of pubertal maturity and long bone growth from this site to ascertain the usefulness of this method on archaeological collections.