87 results for Level-Set Method
Abstract:
In this paper, we develop a method, termed the Interaction Distribution (ID) method, for the analysis of quantitative ecological network data. In many cases, quantitative network data sets are under-sampled, i.e. many interactions are poorly sampled or remain unobserved. Hence, the output of statistical analyses may fail to differentiate between patterns that are statistical artefacts and those that are real characteristics of ecological networks. The ID method can support assessment and inference of under-sampled ecological network data. In the current paper, we illustrate and discuss the ID method based on the properties of plant–animal pollination data sets of flower-visitation frequencies. However, the ID method may be applied to other types of ecological networks. The method can supplement existing network analyses based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1) p_ij, the probability that a visit made by the i'th pollinator species takes place on the j'th plant species; (2) q_ij, the probability that a visit received by the j'th plant species is made by the i'th pollinator species. The method applies the Dirichlet distribution to estimate these two probabilities from a given empirical data set. The estimated mean values of p_ij and q_ij reflect the relative differences between recorded numbers of visits for different pollinator and plant species, and the estimated uncertainty of p_ij and q_ij decreases with higher numbers of recorded visits.
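The Dirichlet estimation step described above can be sketched as a generic posterior-mean computation under a symmetric Dirichlet prior. The count matrix and the prior parameter below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical visit-count matrix: rows = pollinator species, cols = plant species.
counts = np.array([[12.0, 3.0, 0.0],
                   [ 1.0, 7.0, 2.0]])

alpha0 = 1.0  # symmetric Dirichlet prior parameter (an assumption)

def dirichlet_row_estimates(counts, alpha0=1.0):
    """Posterior mean and standard deviation of each row's visit probabilities
    under a symmetric Dirichlet prior: alpha = counts + alpha0."""
    alpha = counts + alpha0
    s = alpha.sum(axis=1, keepdims=True)
    mean = alpha / s
    var = alpha * (s - alpha) / (s**2 * (s + 1))
    return mean, np.sqrt(var)

p_mean, p_std = dirichlet_row_estimates(counts, alpha0)    # p_ij: rows sum to 1
q_mean, q_std = dirichlet_row_estimates(counts.T, alpha0)  # q_ij: per plant, via transpose
```

As in the abstract, the uncertainty shrinks as the number of recorded visits grows: multiplying all counts by ten leaves the means roughly unchanged but reduces every standard deviation.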
Abstract:
Purpose – This study aims to examine the moderating effects of external environment and organisational structure in the relationship between business-level strategy and organisational performance. Design/methodology/approach – The focus of the study is on manufacturing firms in the UK belonging to the electrical and mechanical engineering sectors, and respondents were CEOs. Both objective and subjective measures were used to assess performance. Non-response bias was assessed statistically and appropriate measures taken to minimise the impact of common method variance (CMV). Findings – The results indicate that environmental dynamism and hostility act as moderators in the relationship between business-level strategy and relative competitive performance. In low-hostility environments a cost-leadership strategy and in high-hostility environments a differentiation strategy lead to better performance compared with competitors. In highly dynamic environments a cost-leadership strategy and in low dynamism environments a differentiation strategy are more helpful in improving financial performance. Organisational structure moderates the relationship of both the strategic types with ROS. However, in the case of ROA, the moderating effect of structure was found only in its relationship with cost-leadership strategy. A mechanistic structure is helpful in improving the financial performance of organisations adopting either a cost-leadership or a differentiation strategy. Originality/value – Unlike many other empirical studies, the study makes an important contribution to the literature by examining the moderating effects of both environment and structure on the relationship between business-level strategy and performance in a detailed manner, using moderated regression analysis.
Abstract:
We consider the numerical treatment of second kind integral equations on the real line of the form ϕ(s) = ψ(s) + ∫_{−∞}^{+∞} κ(s − t) z(t) ϕ(t) dt, s ∈ R (abbreviated ϕ = ψ + K_z ϕ), in which κ ∈ L_1(R), z ∈ L_∞(R) and ψ ∈ BC(R), the space of bounded continuous functions on R, are assumed known and ϕ ∈ BC(R) is to be determined. We first derive sharp error estimates for the finite section approximation (reducing the range of integration to [−A, A]) via bounds on (I − K_z)^{−1} as an operator on spaces of weighted continuous functions. Numerical solution by a simple discrete collocation method on a uniform grid on R is then analysed: in the case when z is compactly supported, this leads to a coefficient matrix which allows a rapid matrix–vector multiply via the FFT. To utilise this possibility we propose a modified two-grid iteration, a feature of which is that the coarse grid matrix is approximated by a banded matrix, and analyse convergence and computational cost. In cases where z is not compactly supported, a combined finite section and two-grid algorithm can be applied, and we extend the analysis to this case. As an application we consider acoustic scattering in the half-plane with a Robin or impedance boundary condition, which we formulate as a boundary integral equation of the class studied. Our final result is that if z (related to the boundary impedance in the application) takes values in an appropriate compact subset Q of the complex plane, then the difference between ϕ(s) and its finite section approximation computed numerically using the iterative scheme proposed is ≤ C_1 [kh log(1/(kh)) + (1 − Θ)^{−1/2} (kA)^{−1/2}] in the interval [−ΘA, ΘA] (Θ < 1) for kh sufficiently small, where k is the wavenumber and h the grid spacing. Moreover, this numerical approximation can be computed in ≤ C_2 N log N operations, where N = 2A/h is the number of degrees of freedom. The values of the constants C_1 and C_2 depend only on the set Q and not on the wavenumber k or the support of z.
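On a uniform grid, a collocation matrix built from a convolution kernel κ(s − t) is Toeplitz, which is what enables the O(N log N) matrix–vector multiply mentioned above. The sketch below illustrates the standard circulant-embedding trick with a stand-in kernel and grid (assumptions, not the paper's operator):

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """O(N log N) product of a Toeplitz matrix with x via circulant embedding:
    T[i, j] = first_col[i - j] if i >= j else first_row[j - i]."""
    n = len(x)
    # First column of the 2n-by-2n circulant matrix that embeds T.
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp))[:n]

# Stand-in convolution kernel sampled on a uniform grid; for a real,
# even kernel the first row equals the first column.
n, h = 64, 0.1
kappa = lambda d: np.exp(-np.abs(d))
col = h * kappa(h * np.arange(n))
row = h * kappa(h * np.arange(n))
x = np.random.default_rng(0).standard_normal(n)
fast = toeplitz_matvec(col, row, x).real  # inputs real, so discard FFT round-off imaginary part
```

The same multiply applies unchanged to complex kernels (drop the `.real`), which is the situation in the scattering application.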
Abstract:
We propose a Nyström/product integration method for a class of second kind integral equations on the real line which arise in problems of two-dimensional scalar and elastic wave scattering by unbounded surfaces. Stability and convergence of the method are established, with convergence rates dependent on the smoothness of components of the kernel. The method is applied to the problem of acoustic scattering by a sound-soft one-dimensional surface which is the graph of a function f, and superalgebraic convergence is established in the case when f is infinitely smooth. Numerical results are presented illustrating this behavior for the case when f is periodic (the diffraction grating case). The Nyström method for this problem is stable and convergent uniformly with respect to the period of the grating, in contrast to standard integral equation methods for diffraction gratings, which fail at a countable set of grating periods.
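As a minimal illustration of a Nyström discretisation, the sketch below solves a toy second kind equation ϕ(s) − ∫₀¹ k(s,t) ϕ(t) dt = f(s) on a bounded interval with a separable kernel and a manufactured right-hand side (ϕ(s) = s, so f(s) = 2s/3); the kernel and interval are illustrative assumptions, not the scattering kernel of the paper:

```python
import numpy as np

def nystrom_solve(k, f, n):
    """Nystrom method: replace the integral by an n-point Gauss-Legendre rule
    and solve the resulting linear system at the quadrature nodes."""
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (x + 1.0)   # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * w           # rescale weights accordingly
    A = np.eye(n) - k(t[:, None], t[None, :]) * w[None, :]
    return t, np.linalg.solve(A, f(t))

# k(s,t) = s*t with exact solution phi(s) = s, hence f(s) = s - s/3 = 2s/3.
t, phi = nystrom_solve(lambda s, tt: s * tt, lambda s: 2.0 * s / 3.0, 8)
```

Because the Gauss rule integrates the quadratic integrand exactly here, the discrete solution matches the exact solution at the nodes to machine precision.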
Abstract:
Currently there are few observations of the urban wind field at heights other than rooftop level. Remote sensing instruments such as Doppler lidars provide wind speed data at many heights, which would be useful in determining wind loadings of tall buildings and predicting local air quality. Studies comparing remote sensing with traditional anemometers carried out in flat, homogeneous terrain often use scan patterns which take several minutes. In an urban context the flow changes quickly in space and time, so faster scans are required to ensure little change in the flow over the scan period. We compare 3993 h of wind speed data collected using a three-beam Doppler lidar wind profiling method with data from a sonic anemometer (190 m). Both instruments are located in central London, UK, a highly built-up area. Based on wind profile measurements every 2 min, the uncertainty in the hourly mean wind speed due to the sampling frequency is 0.05–0.11 m s⁻¹. The lidar tended to overestimate the wind speed by ≈0.5 m s⁻¹ for wind speeds below 20 m s⁻¹. Accuracy may be improved by increasing the scanning frequency of the lidar. This method is considered suitable for use in urban areas.
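The quoted sampling uncertainty is essentially a standard error of the hourly mean. A toy computation with synthetic numbers (assuming 30 independent 2-min samples per hour; the values are not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic example: 30 two-minute wind-speed samples in one hour (m/s).
samples = 8.0 + 0.4 * rng.standard_normal(30)

hourly_mean = samples.mean()
# Uncertainty in the hourly mean due to the 2-min sampling frequency,
# under an independent-sample assumption: standard error of the mean.
se = samples.std(ddof=1) / np.sqrt(len(samples))
```

With within-hour variability of a few tenths of m s⁻¹, this yields a standard error of the same order as the 0.05–0.11 m s⁻¹ range reported above.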
Abstract:
Introduction: Young onset dementia (YOD) affects about 1 in 1500 people aged under 65 years in the UK. It is associated with loss of employment and independence and an increase in psychological distress. This project set out to identify the benefits of a weekly 2-hour structured activity programme of gardening for people with YOD. Method: A mixed qualitative and quantitative study of therapeutic gardening for people with YOD, measuring outcomes for both participants with YOD and their carers. Twelve participants were recruited from a county-wide older adults mental health service, based on onset of dementia being before 65 years of age (range 43–65 years). Two dropped out and one died during the project. Measures included the Mini Mental State Examination, Bradford Well Being Profile, Large Allen Cognitive Level Screen and Pool Activity Level. Results: Over a one-year period the carers of the people with YOD found that the project had given participants a renewed sense of purpose and increased well-being, while cognitive functioning declined. Conclusions: This study suggests that a meaningful guided activity programme can maintain or improve well-being in the presence of cognitive deterioration.
Abstract:
Introduction. Feature usage is a prerequisite to realising the benefits of investments in feature-rich systems. We propose that conceptualising the dependent variable 'system use' as 'level of use' and specifying it as a formative construct has greater value for measuring the post-adoption use of feature-rich systems. We then validate the content of the construct as a first step in developing a research instrument to measure it. The context of our study is the post-adoption use of electronic medical records (EMR) by primary care physicians. Method. Initially, a literature review of the empirical context defines the scope based on prior studies. Core features identified from the literature are then refined with the help of experts in a consensus-seeking process that follows the Delphi technique. Results. The methodology was successfully applied to EMRs, which were selected as an example of feature-rich systems. A review of EMR usage and regulatory standards provided the feature input for the first round of the Delphi process. A panel of experts then reached consensus after four rounds, identifying ten task-based features that would be indicators of level of use. Conclusions. To study why some users deploy more advanced features than others, theories of post-adoption require a rich formative dependent variable that measures level of use. We have demonstrated that a context-sensitive literature review followed by refinement through a consensus-seeking process is a suitable methodology for validating the content of this dependent variable. This is the first step of instrument development prior to statistical confirmation with a larger sample.
Abstract:
Capacity dimensioning is one of the key problems in wireless network planning. Analytical and simulation methods are usually used to pursue accurate capacity dimensioning of wireless networks. In this paper, an analytical capacity dimensioning method for WCDMA with high-speed wireless links is proposed, based on an analysis of the relations between system performance and high-speed wireless transmission technologies such as H-ARQ, AMC and fast scheduling. It evaluates system capacity in closed-form expressions at both link level and system level. Numerical results show that the proposed method can calculate link-level and system-level capacity for a WCDMA system with HSDPA and HSUPA.
Abstract:
In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained, comprising a considerably smaller number of parameters than the ones generated by the conventional PNN algorithm. Three benchmark examples are elaborated, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparison with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
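The OLS selection step can be sketched generically: at each stage, pick the candidate regressor whose orthogonalised component explains the largest share of the output variance (the error-reduction ratio of the standard forward OLS scheme). The data below are a toy example, not the paper's benchmarks:

```python
import numpy as np

def ols_forward_select(P, y, n_terms):
    """Greedy forward OLS term selection: orthogonalise each candidate column
    against the already-chosen terms and keep the one with the largest
    error-reduction ratio (share of output variance explained)."""
    selected, Q = [], []
    yy = y @ y
    for _ in range(n_terms):
        best, best_err, best_w = None, -1.0, None
        for j in range(P.shape[1]):
            if j in selected:
                continue
            w = P[:, j].astype(float).copy()
            for q in Q:  # Gram-Schmidt against selected terms
                w -= (q @ P[:, j]) / (q @ q) * q
            if w @ w < 1e-12:
                continue  # candidate is linearly dependent on chosen terms
            err = (w @ y) ** 2 / ((w @ w) * yy)  # error-reduction ratio
            if err > best_err:
                best, best_err, best_w = j, err, w
        if best is None:
            break
        selected.append(best)
        Q.append(best_w)
    return selected

# Toy demo: the output depends only on columns 0 and 2 of five candidates.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.01 * rng.standard_normal(200)
chosen = ols_forward_select(X, y, 2)
```

With two terms requested, the procedure recovers exactly the two informative regressors.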
The capability-affordance model: a method for analysis and modelling of capabilities and affordances
Abstract:
Existing capability models lack qualitative and quantitative means to compare business capabilities. This paper extends previous work and uses affordance theories to consistently model and analyse capabilities. We use the concept of objective and subjective affordances to model capability as a tuple of a set of resource affordance system mechanisms and action paths, dependent on one or more critical affordance factors. We identify an affordance chain of subjective affordances, by which affordances work together to enable an action, and an affordance path that links action affordances to create a capability system. We define the mechanism and path underlying capability. We show how affordance modelling notation (AMN) can represent the affordances comprising a capability. We propose a method to quantitatively and qualitatively compare capabilities using efficiency, effectiveness and quality metrics. The method is demonstrated by a medical example comparing the capabilities of syringe-based and needle-free anaesthetic systems.
Abstract:
Prism is a modular classification rule generation method based on the 'separate and conquer' approach, an alternative to the rule induction approach using decision trees, also known as 'divide and conquer'. Prism often achieves a similar level of classification accuracy to decision trees, but tends to produce a more compact, noise-tolerant set of classification rules. As with other classification rule generation methods, a principal problem arising with Prism is overfitting due to over-specialised rules. In addition, over-specialised rules increase the associated computational complexity. These problems can be addressed by pruning methods. For the Prism method, two pruning algorithms have been introduced recently for reducing overfitting of classification rules: J-pruning and Jmax-pruning. Both algorithms are based on the J-measure, an information-theoretic means of quantifying the theoretical information content of a rule. Jmax-pruning attempts to exploit the J-measure to its full potential, because J-pruning does not actually achieve this and may even lead to underfitting. A series of experiments has shown that Jmax-pruning may outperform J-pruning in reducing overfitting. However, Jmax-pruning is computationally relatively expensive and may also lead to underfitting. This paper reviews the Prism method and the two existing pruning algorithms above. It also proposes a novel pruning algorithm called Jmid-pruning. The latter is based on the J-measure and reduces overfitting to a similar level as the other two algorithms, but is better at avoiding underfitting and unnecessary computational effort. The authors conduct an experimental study on the performance of the Jmid-pruning algorithm in terms of classification accuracy and computational efficiency. The algorithm is also evaluated comparatively with the J-pruning and Jmax-pruning algorithms.
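All three pruning algorithms rest on the J-measure of Smyth and Goodman. A minimal computation of it for a single rule IF x THEN y is shown below; the probability values are illustrative, not from the paper's experiments:

```python
import math

def j_measure(p_x, p_y_given_x, p_y):
    """J-measure of a rule IF x THEN y: the rule's coverage p(x) times the
    cross-entropy between the class distribution given x and the prior p(y)."""
    def term(p, q):
        # Contribution p * log2(p/q), with the usual 0*log(0) = 0 convention.
        return 0.0 if p == 0.0 else p * math.log2(p / q)
    return p_x * (term(p_y_given_x, p_y) + term(1.0 - p_y_given_x, 1.0 - p_y))

# A rule that fires on 30% of cases and lifts the class rate from 0.5 to 0.9.
j = j_measure(p_x=0.3, p_y_given_x=0.9, p_y=0.5)
```

A rule that leaves the class distribution unchanged carries zero information content, which is exactly why low-J rule terms are candidates for pruning.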
Abstract:
This paper presents a new method to calculate sky view factors (SVFs) from high resolution urban digital elevation models using a shadow casting algorithm. By utilising weighted annuli to derive SVF from hemispherical images, the distant light source positions can be predefined and uniformly spread over the whole hemisphere, whereas another method applies a random set of light source positions with a cosine-weighted distribution of sun altitude angles. The two methods give similar results based on a large number of SVF images. However, when comparing variations at pixel level between an image generated using the new method presented in this paper and the image from the random method, anisotropic patterns occur. The absolute mean difference between the two methods is 0.002, ranging up to 0.040. The maximum difference can be as much as 0.122. Since SVF is a geometrically derived parameter, the anisotropic errors created by the random method must be considered significant.
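The weighted-annuli idea can be illustrated with a generic cosine-weighted SVF computation over equal-zenith-angle annuli; this is a textbook-style sketch and the exact annulus weighting used by the paper's method may differ:

```python
import numpy as np

def sky_view_factor(sky_fraction, n_annuli):
    """SVF from equal-zenith-angle annuli of a hemispherical sky image.
    sky_fraction[i] is the visible-sky fraction of annulus i (i = 0 at the
    zenith). Each annulus weight sin^2(z_hi) - sin^2(z_lo) is its
    cosine-weighted solid-angle share, so the weights sum to 1."""
    z = np.linspace(0.0, np.pi / 2.0, n_annuli + 1)  # zenith-angle bounds
    weights = np.sin(z[1:])**2 - np.sin(z[:-1])**2
    return float(np.dot(sky_fraction, weights))

# Sanity checks: fully open sky gives SVF = 1; half of every annulus
# obstructed gives SVF = 0.5.
n = 36
svf_open = sky_view_factor(np.ones(n), n)
svf_half = sky_view_factor(0.5 * np.ones(n), n)
```

The predefined, uniformly spread light-source positions of the new method amount to sampling each annulus at fixed azimuths rather than at random ones, which removes the anisotropic sampling error discussed above.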
Abstract:
The assessment of age-at-death in non-adult skeletal remains is under constant review. However, in many past societies an individual's physical maturation may have been more important in social terms than their exact age, particularly during the period of adolescence. A recent article (Shapland and Lewis: Am J Phys Anthropol 151 (2013) 302–310) highlighted a set of dental and skeletal indicators that may be useful in mapping the progress of the pubertal growth spurt. This article presents a further skeletal indicator of adolescent development commonly used by modern clinicians: cervical vertebral maturation (CVM). This method is applied to a collection of 594 adolescents from the medieval cemetery of St. Mary Spital, London. Analysis reveals a potential delay in the ages of attainment of the later CVM stages compared with modern adolescents, presumably reflecting negative environmental conditions for growth and development. The data gathered on CVM are compared with other skeletal indicators of pubertal maturity and long bone growth from this site to ascertain the usefulness of this method on archaeological collections.
Abstract:
Previous research has shown that listening to stories supports vocabulary growth in preschool and school-aged children and that lexical entries for even very difficult or rare words can be established if these are defined when they are first introduced. However, little is known about the nature of the lexical representations children form for the words they encounter while listening to stories, or whether these are sufficiently robust to support the child’s own use of such ‘high-level’ vocabulary. This study explored these questions by administering multiple assessments of children’s knowledge about a set of newly-acquired vocabulary. Four- and 6-year-old children were introduced to nine difficult new words (including nouns, verbs and adjectives) through three exposures to a story read by their class teacher. The story included a definition of each new word at its first encounter. Learning of the target vocabulary was assessed by means of two tests of semantic understanding – a forced choice picture-selection task and a definition production task – and a grammaticality judgment task, which asked children to choose between a syntactically-appropriate and syntactically-inappropriate usage of the word. Children in both age groups selected the correct pictorial representation and provided an appropriate definition for the target words in all three word classes significantly more often than they did for a matched set of non-exposed control words. However, only the older group was able to identify the syntactically-appropriate sentence frames in the grammaticality judgment task. Further analyses elucidate some of the components of the lexical representations children lay down when they hear difficult new vocabulary in stories and how different tests of word knowledge might overlap in their assessment of these components.
Abstract:
An efficient two-level model identification method aiming at maximising a model's generalisation capability is proposed for a large class of linear-in-the-parameters models from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularisation parameters in the elastic net are optimised using a particle swarm optimisation (PSO) algorithm at the upper level by minimising the leave-one-out (LOO) mean square error (LOOMSE). There are two original contributions. Firstly, an elastic net cost function is defined and applied based on orthogonal decomposition, which facilitates the automatic model structure selection process with no need for a predetermined error tolerance to terminate the forward selection process. Secondly, it is shown that the LOOMSE based on the resultant ENOFR models can be analytically computed without actually splitting the data set, and the associated computation cost is small due to the ENOFR procedure. Consequently a fully automated procedure is achieved without recourse to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
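The claim that the LOOMSE can be computed analytically without splitting the data rests on the standard hat-matrix identity e_loo,i = e_i / (1 − h_ii), which holds exactly for (ridge-)regularised least squares. The sketch below demonstrates that identity on toy data with a dense hat matrix; the ENOFR algorithm itself obtains the same quantities cheaply from its orthogonal decomposition, which this generic sketch does not reproduce:

```python
import numpy as np

def loo_mse(X, y, lam=0.0):
    """Analytic leave-one-out MSE for ridge-regularised least squares:
    the i-th LOO residual equals residual_i / (1 - h_ii), so no refitting
    is needed (h_ii are the diagonal entries of the hat matrix)."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # hat matrix
    resid = y - H @ y
    return float(np.mean((resid / (1.0 - np.diag(H))) ** 2))

# Toy data: 40 samples, 3 regressors, small noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(40)
fast = loo_mse(X, y, lam=0.1)
```

Refitting the model 40 times with one sample held out each time reproduces `fast` exactly, which is the point of the analytic shortcut.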