20 results for Aiken Technical College--Statistics
Abstract:
This thesis discusses the ways in which concepts and methodology developed in evolutionary biology can be applied to the explanation and study of language change. The parallel nature of the mechanisms of biological evolution and language change is explored, along with the history of the exchange of ideas between the two disciplines. Against this background, computational methods developed in evolutionary biology are considered in terms of their applicability to the study of historical relationships between languages. Different phylogenetic methods are explained in common terminology, avoiding the technical language of statistics. The thesis is on the one hand a synthesis of earlier scientific discussion, and on the other an attempt to map out the problems of earlier approaches and, on that basis, to find new guidelines for the study of language change. The main source material consists of literature on the connections between evolutionary biology and language change, along with research articles describing applications of phylogenetic methods to language change. The thesis starts out by describing the initial development of the disciplines of evolutionary biology and historical linguistics, a process which from the very beginning involved an exchange of ideas concerning the mechanisms of language change and biological evolution. This historical discussion lays the foundation for treating the generalised account of selection developed during recent decades. This account aims to create a theoretical framework capable of explaining both biological evolution and cultural change as selection processes acting on self-replicating entities. The thesis focuses on the capacity of the generalised account of selection to describe language change as a process of this kind. In biology, the mechanisms of evolution are seen as forming populations of genetically related organisms through time. One of the central questions explored in the thesis is whether selection theory makes it possible to picture languages as forming populations of a similar kind, and what such a perspective can offer to the understanding of language in general. In historical linguistics, the comparative method and other, complementary methods have traditionally been used to study the development of languages from a common ancestral language. Computational, quantitative methods have not become a widely used part of the central methodology of historical linguistics. After the limited popularity enjoyed by the lexicostatistical method since the 1950s faded, it is only in recent years that the computational methods of phylogenetic inference used in evolutionary biology have been applied to the study of early language history. The thesis compares the possibilities offered by the traditional methodology of historical linguistics with those of the new phylogenetic methods. The methods are approached through their applications to the Indo-European languages, the language family most thoroughly investigated with both the traditional and the phylogenetic methods. The problems of these applications, along with the optimal form of the linguistic data used in these methods, are explored in the thesis.
The mechanisms of biological evolution are seen in the thesis as parallel, in a limited sense, to the mechanisms of language change, yet sufficiently so that the development of a generalised account of selection is deemed potentially fruitful for understanding language change. These similarities are also seen to support the validity of using phylogenetic methods in the study of language history, although the use of linguistic data and the models of language change employed by these methods are seen to await further development.
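As a rough illustration of the kind of computational inference the thesis surveys, the sketch below clusters four invented languages from a toy binary cognate matrix. Average-linkage clustering on Hamming distances is only a crude stand-in for the Bayesian and other phylogenetic methods actually discussed; the language names and data are made up for the example.

```python
# Minimal sketch: a distance-based tree from binary cognate data.
# Language names and the cognate matrix are invented; the thesis discusses
# far more sophisticated phylogenetic inference than this simple clustering.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Rows: languages; columns: presence/absence of cognate classes (toy data).
languages = ["LangA", "LangB", "LangC", "LangD"]
cognates = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 1],
    [0, 1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1, 0],
])

# Hamming distance = share of cognate classes in which two languages differ.
dist = pdist(cognates, metric="hamming")

# Average-linkage clustering as a crude stand-in for phylogenetic inference.
tree = linkage(dist, method="average")
print(dendrogram(tree, labels=languages, no_plot=True)["ivl"])
```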
Abstract:
Road traffic accidents are a major problem everywhere in the world. However, regional differences in traffic safety between countries are considerable. For example, traffic safety records are much worse in Southern Europe and the Middle East than in Northern and Western Europe. Despite the large regional differences in traffic safety, the factors contributing to different accident risk figures in different countries and regions have remained largely unstudied. The general aim of this study was to investigate regional differences in traffic safety between Southern European/Middle Eastern (i.e., Greece, Iran, Turkey) and Northern/Western European (i.e., Finland, Great Britain, the Netherlands) countries and to identify factors related to these differences. We conducted seven sub-studies in which a traffic culture framework, including a multi-level approach, was applied to traffic safety. We used aggregated-level data (national statistics), surveys among drivers, and data on traffic accidents and fatalities in the analyses. In the first study, we investigated the influence of macro-level factors (i.e., economic, societal, and cultural) on traffic safety across countries. The results showed that a high GNP per capita and conservatism correlated with a low number of traffic fatalities, whereas a high degree of uncertainty avoidance, neuroticism, and egalitarianism correlated with a high number of traffic fatalities. In the second, third, and fourth studies, we examined whether the conceptualisation of road user characteristics (i.e., driver behaviour and performance) varied across traffic cultures and how these factors determined overall safety and the differences between countries in traffic safety. The results showed that the factorial agreement for driver behaviour (i.e., aggressive driving) and performance (i.e., safety skills) was unsatisfactory in Greece, Iran, and Turkey, where a lack of social tolerance and interpersonal aggressive violations seem to be important characteristics of driving. In addition, we found that driver behaviour (i.e., aggressive violations and errors) mediated the relationship between culture/country and accidents. Moreover, drivers from "dangerous" Southern European countries and Iran scored higher on aggressive violations and errors than did drivers from "safe" Northern European countries. However, "speeding" appeared to be a "pan-cultural" problem in traffic. Similarly, aggressive driving seems largely to depend on road users' interactions and on drivers' interpretation (i.e., cognitive biases) of the behaviour of others in every country involved in the study. Moreover, in all countries, a risky general driving style was mostly related to being young and male. The results of the fifth and sixth studies showed that among young Turkish drivers, gender stereotypes (i.e., masculinity and femininity) greatly influenced driver behaviour and performance. Feminine drivers were safety-oriented, whereas masculine drivers were skill-oriented and risky. Since everyday driving tasks involve not only erroneous (i.e., risky or dangerous driving) or correct performance (i.e., normal habitual driving) but also "positive" driver behaviours, we developed a reliable scale for measuring "positive" driver behaviours among Turkish drivers in the seventh study. Consequently, I revised Reason's model [Reason, J. T., 1990. Human Error. Cambridge University Press, New York] of aberrant driver behaviour to represent a general driving style, including all possible intentional behaviours in traffic, while evaluating the differences between countries in traffic safety. The results emphasise the importance of economic, societal and cultural factors, general driving style and skills (which are related to exposure), cognitive biases, as well as age, sex, and gender, in explaining differences between countries in traffic safety.
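The mediation finding reported above (driver behaviour mediating the country-accident relationship) can be illustrated with a generic product-of-coefficients sketch. The data below are synthetic and the variable names are placeholders; the thesis itself relies on survey instruments and more elaborate models, so this only conveys the general logic.

```python
# Product-of-coefficients mediation sketch on synthetic data:
# country/culture group (X) -> aggressive violations (M) -> accidents (Y).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.integers(0, 2, n).astype(float)       # 0 = "safe", 1 = "dangerous" country group (toy coding)
m = 0.8 * x + rng.normal(size=n)              # aggressive violations score
y = 0.5 * m + 0.1 * x + rng.normal(size=n)    # self-reported accidents

# Path a: effect of country group on the mediator.
a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
# Path b: effect of the mediator on accidents, controlling for country group.
b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]

print("indirect (mediated) effect a*b =", a * b)
```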
Abstract:
Various reasons, such as ethical issues in maintaining blood resources, growing costs, and strict requirements for safe blood, have increased the pressure for efficient use of resources in blood banking. The competence of blood establishments can be characterized by their ability to predict blood collection volumes so that cellular blood components can be provided in a timely manner, as dictated by hospital demand. The stochastically varying clinical need for platelets (PLTs) sets a specific challenge for balancing supply with requests. Labour has been shown to be a primary cost driver and should be managed efficiently. International comparisons of blood banking could reveal inefficiencies and allow reallocation of resources. Seventeen blood centres from 10 countries in continental Europe, Great Britain, and Scandinavia participated in this study. The centres were national institutes (5), parts of the local Red Cross organisation (5), or integrated into university hospitals (7). This study focused on the centres' departments of blood component preparation. The data were obtained retrospectively by computerized questionnaires completed via the Internet for the years 2000-2002. The data were used in four original articles (numbered I through IV) that form the basis of this thesis. Non-parametric data envelopment analysis (DEA, II-IV) was applied to evaluate and compare the relative efficiency of blood component preparation. Several models were created using different input and output combinations. The focus of the comparisons was on technical efficiency (II-III) and labour efficiency (I, IV). An empirical cost model was tested to evaluate cost efficiency (IV). Purchasing power parities (PPP, IV) were used to adjust the costs of the working hours and to make the costs comparable among countries. The total annual number of whole blood (WB) collections varied from 8,880 to 290,352 among the centres (I). Significant variation was also observed in the annual volume of produced red blood cells (RBCs) and PLTs. The annual number of PLTs produced by any method varied from 2,788 to 104,622 units. In 2002, 73% of all PLTs were produced by the buffy coat (BC) method, 23% by apheresis and 4% by the platelet-rich plasma (PRP) method. The annual discard rate of PLTs varied from 3.9% to 31%. The mean discard rate (13%) remained in the same range throughout the study period and showed similar levels and variation in 2003-2004 according to a specific follow-up question (14%, range 3.8%-24%). The annual PLT discard rates were, to some extent, associated with production volumes. The mean RBC discard rate was 4.5% (range 0.2%-7.7%). Technical efficiency showed marked variation (median 60%, range 41%-100%) among the centres (II). Compared to the efficient departments, the inefficient departments used excess labour (and probably also excess production equipment) to produce RBCs and PLTs. Technical efficiency tended to be higher when the (theoretical) proportion of lost WB collections (total RBC+PLT loss) out of all collections was low (III). Labour efficiency varied remarkably, from 25% to 100% (median 47%), when working hours were the only input (IV). Using estimated total costs as the input (cost efficiency) revealed an even greater variation (13%-100%) and an overall lower efficiency level than with labour as the only input.
For cost efficiency alone, the savings potential (observed inefficiency) was more than 50% in 10 departments, whereas the labour and cost savings potentials were both more than 50% in six departments. The association between department size and efficiency (scale efficiency) could not be verified statistically in the small sample. In conclusion, international evaluation of technical efficiency in component preparation departments revealed remarkable variation. A suboptimal combination of manpower and production output levels was the major cause of inefficiency, and efficiency was not directly related to production volume. Evaluation of the reasons for discarding components may offer a novel approach to studying efficiency. DEA proved applicable to analyses that include various factors as inputs and outputs. This study suggests that analytical models can be developed to serve as indicators of technical efficiency and to promote improvements in the management of limited resources. The work also demonstrates the importance of integrating efficiency analysis into international comparisons of blood banking.
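As a sketch of the kind of analysis used in articles II-IV, the snippet below solves an input-oriented CCR data envelopment analysis model for four hypothetical departments, with working hours as the single input and RBC and PLT units as outputs. The figures are invented and this input/output combination is only one of several tested in the thesis.

```python
# Input-oriented CCR DEA, solved as one linear program per department.
# Inputs/outputs and all numbers below are invented for illustration.
import numpy as np
from scipy.optimize import linprog

X = np.array([[120.0, 90.0, 200.0, 60.0]])            # inputs: working hours (1 x n)
Y = np.array([[800.0, 700.0, 1100.0, 500.0],          # outputs: RBC units (2 x n)
              [150.0, 160.0, 180.0,  90.0]])          #          PLT units

n = X.shape[1]
for o in range(n):
    # Variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, o]])
    print(f"department {o}: efficiency = {res.x[0]:.2f}")
```

A department is rated efficient (theta = 1) when no weighted combination of the other departments produces at least its outputs with less input; theta below 1 is the observed savings potential.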
Abstract:
Efforts to combine quantum theory with general relativity have been extensive and marked by several successes. One field where progress has lately been made is the study of noncommutative quantum field theories, which arise as a low-energy limit in certain string theories. The idea of noncommutativity comes naturally when combining these two extremes and has profound implications for results widely accepted in traditional, commutative theories. In this work I review the status of one of the most important connections in physics, the spin-statistics relation. The relation is deeply ingrained in our reality in that it gives us the structure of the periodic table and is of crucial importance for the stability of all matter. The dramatic effects of the noncommutativity of space-time coordinates, mainly the loss of Lorentz invariance, call the spin-statistics relation into question. The spin-statistics theorem is first presented in its traditional setting, with a clarifying proof starting from minimal requirements. Next, the notion of noncommutativity is introduced and its implications are studied. The discussion is essentially based on twisted Poincaré symmetry, the space-time symmetry of noncommutative quantum field theory. The controversial issue of microcausality in noncommutative quantum field theory is settled by showing, for the first time, that the light-wedge microcausality condition is compatible with twisted Poincaré symmetry. The spin-statistics relation is considered both from the point of view of braided statistics and in the traditional Lagrangian formulation of Pauli, with the conclusion that Pauli's age-old theorem withstands even this test, dramatic as it is for the whole structure of space-time.
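For reference, the canonical way the noncommutativity of space-time coordinates is usually written, together with the associated Moyal star product, is shown below; this is standard textbook material rather than a result of the thesis, and the twisted Poincaré symmetry discussed above is built on this structure.

```latex
% Canonical (constant theta) noncommutativity and the Moyal star product.
\begin{align}
  [\hat{x}^{\mu}, \hat{x}^{\nu}] &= i\,\theta^{\mu\nu}, \qquad
  \theta^{\mu\nu} = -\theta^{\nu\mu} \ \text{constant},\\
  (f \star g)(x) &= f(x)\,
    \exp\!\Big(\tfrac{i}{2}\,\overleftarrow{\partial}_{\mu}\,
    \theta^{\mu\nu}\,\overrightarrow{\partial}_{\nu}\Big)\, g(x).
\end{align}
```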
Abstract:
This thesis presents novel modelling applications for environmental geospatial data using remote sensing, GIS and statistical modelling techniques. The work addresses four main themes: (i) developing advanced geospatial databases: Paper (I) demonstrates the creation of a geospatial database for the Glanville fritillary butterfly (Melitaea cinxia) in the Åland Islands, south-western Finland; (ii) analysing species diversity and distribution using GIS techniques: Paper (II) presents a diversity and geographical distribution analysis for Scopulini moths at a worldwide scale; (iii) studying spatiotemporal forest cover change: Paper (III) presents a study of exotic and indigenous tree cover change detection in the Taita Hills, Kenya, using airborne imagery and GIS analysis techniques; and (iv) exploring predictive modelling techniques using geospatial data: in Paper (IV), human population occurrence and abundance in the Taita Hills highlands were predicted using the generalized additive modelling (GAM) technique, Paper (V) presents techniques to enhance fire prediction and burned area estimation at a regional scale in East Caprivi, Namibia, and Paper (VI) compares eight state-of-the-art predictive modelling methods to improve fire prediction, burned area estimation and fire risk mapping in East Caprivi, Namibia. The results in Paper (I) showed that geospatial data can be managed effectively using advanced relational database management systems. Metapopulation data for the Melitaea cinxia butterfly were successfully combined with GPS-delimited habitat patch information and climatic data. Using the geospatial database, spatial analyses were successfully conducted at the habitat patch level or at coarser analysis scales. Moreover, this study showed that, at a large scale, spatially correlated weather conditions appear to be one of the primary causes of spatially correlated changes in Melitaea cinxia population sizes. In Paper (II), the spatiotemporal characteristics of Scopulini moth descriptions, diversity and distribution were analysed at a worldwide scale, and for the first time GIS techniques were used for Scopulini moth geographical distribution analysis. This study revealed that Scopulini moths have a cosmopolitan distribution. The majority of the species have been described from the low latitudes, with sub-Saharan Africa being the hot spot of species diversity. However, the taxonomical effort has been uneven among biogeographical regions. Paper (III) showed that forest cover change can be analysed in great detail using modern airborne imagery techniques and historical aerial photographs. However, when spatiotemporal forest cover change is studied, care has to be taken with co-registration and image interpretation when historical black-and-white aerial photography is used. In Paper (IV), human population distribution and abundance could be modelled with fairly good results using geospatial predictors and non-Gaussian predictive modelling techniques. Moreover, a land cover layer is not necessarily needed as a predictor, because first- and second-order image texture measurements derived from satellite imagery had more power to explain the variation in dwelling unit occurrence and abundance. Paper (V) showed that the generalized linear model (GLM) is a suitable technique for fire occurrence prediction and for burned area estimation. GLM-based burned area estimates were found to be superior to the existing MODIS burned area product (MCD45A1).
However, the spatial autocorrelation of fires has to be taken into account when using the GLM technique for fire occurrence prediction. Paper (VI) showed that novel statistical predictive modelling techniques can be used to improve fire prediction, burned area estimation and fire risk mapping at a regional scale. However, noticeable variation remained among the different predictive modelling techniques for fire occurrence prediction and burned area estimation.
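As a minimal sketch of the GLM approach used in Paper (V) for fire occurrence, the snippet below fits a binomial (logit) GLM on simulated data. The predictors (distance to road, a fuel proxy) and all numbers are invented; the actual study used geospatial predictors derived from satellite imagery and, as noted above, also had to account for spatial autocorrelation.

```python
# Binomial GLM (logit link) for fire occurrence on simulated data.
# Predictor names and values are hypothetical, for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
dist_to_road = rng.uniform(0, 20, n)       # km (hypothetical predictor)
dry_biomass = rng.uniform(0, 1, n)         # fuel proxy (hypothetical predictor)
logit = -2.0 + 0.15 * dist_to_road + 2.0 * dry_biomass
fire = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # simulated fire occurrence (0/1)

X = sm.add_constant(np.column_stack([dist_to_road, dry_biomass]))
model = sm.GLM(fire, X, family=sm.families.Binomial()).fit()
print(model.params)          # fitted coefficients
print(model.predict(X)[:5])  # predicted fire occurrence probabilities
```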
Abstract:
Microarrays are high-throughput biological assays that allow the screening of thousands of genes for their expression. The main idea behind microarrays is to compute for each gene a unique signal that is directly proportional to the quantity of mRNA hybridized on the chip. The large number of steps, and the errors associated with each step, make the generated expression signal noisy. As a result, microarray data need to be carefully pre-processed before their analysis can be assumed to lead to reliable and biologically relevant conclusions. This thesis focuses on developing methods for improving the gene signal and on utilizing this improved signal for higher-level analysis. To achieve this, first, approaches for designing microarray experiments using various optimality criteria, considering both biological and technical replicates, are described. A carefully designed experiment leads to a signal with low noise, as the effect of unwanted variation is minimized and the precision of the estimates of the parameters of interest is maximized. Second, a system for improving the gene signal by using three scans at varying scanner sensitivities is developed. A novel Bayesian latent intensity model is then applied to these three sets of expression values, corresponding to the three scans, to estimate the suitably calibrated true signal of the genes. Third, a novel image segmentation approach that segregates the fluorescent signal from undesired noise is developed using an additional dye, SYBR Green RNA II. This technique helped identify signal arising only from the hybridized DNA, while signal corresponding to dust, scratches, dye spills, and other noise was excluded. Fourth, an integrated statistical model is developed in which signal correction, systematic array effects, dye effects, and differential expression are modelled jointly, as opposed to a sequential application of several methods of analysis. The methods described here have been tested only on cDNA microarrays, but they can also, with some modifications, be applied to other high-throughput technologies. Keywords: high-throughput technology, microarray, cDNA, multiple scans, Bayesian hierarchical models, image analysis, experimental design, MCMC, WinBUGS.
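The multiple-scan idea can be illustrated with a deliberately simplified calibration on simulated data: three scans at different gains are censored at the scanner maximum, the relative gains are estimated from unsaturated spots, and the rescaled observations are averaged. This naive procedure only conveys the intuition; the thesis instead fits a Bayesian latent intensity model by MCMC (WinBUGS).

```python
# Simplified illustration of combining three scans taken at different
# scanner sensitivities.  All data are simulated; this is not the
# Bayesian latent intensity model used in the thesis.
import numpy as np

rng = np.random.default_rng(2)
n_genes, saturation = 2000, 65535.0
true_signal = rng.lognormal(mean=8.0, sigma=1.5, size=n_genes)   # latent spot intensities
gains = np.array([0.5, 1.0, 2.0])                                # low / medium / high sensitivity

# Observed scans: gain * signal + noise, censored at the scanner maximum.
scans = np.minimum(gains[:, None] * true_signal
                   + rng.normal(0.0, 200.0, (3, n_genes)), saturation)

# Estimate each scan's gain relative to the lowest-sensitivity scan,
# using only spots that are unsaturated in every scan.
ok = (scans < saturation).all(axis=0)
rel_gain = np.median(scans[:, ok] / scans[0, ok], axis=1)

# Calibrated signal: average of the rescaled, unsaturated observations per spot.
rescaled = np.where(scans < saturation, scans / rel_gain[:, None], np.nan)
calibrated = np.nanmean(rescaled, axis=0)

valid = np.isfinite(calibrated)
print(np.corrcoef(calibrated[valid], true_signal[valid])[0, 1])  # agreement with the latent signal
```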
Abstract:
Over the past twenty years, several indicator sets have been produced at the international, national and regional levels. Most of the work has concentrated on the selection of the indicators and on the collection of the pertinent data, but less attention has been given to the actual users and their needs. This dissertation focuses on the use of sustainable development indicator sets. It explores the reasons that have deterred the use of the indicators, discusses the role of sustainable development indicators in the policy cycle, and broadens the view of use by recognising three different types of use. The work presents two indicator development processes: the Finnish national sustainable development indicators and the socio-cultural indicators supporting the measurement of eco-efficiency in the Kymenlaakso region. The sets are compared using a framework created in this work to describe indicator process quality. It includes five principles supported by more specific criteria. The principles are high policy relevance, sound indicator quality, efficient participation, effective dissemination and long-term institutionalisation. The framework provided a way to identify the key obstacles to use. The two immediate problems with current indicator sets are that the users are unaware of them and that the indicators are often unsuitable to their needs. The reasons for these major flaws are the irrelevance of the indicators to policy needs, technical shortcomings in the context and presentation, failure to engage the users in the development process, non-existent dissemination strategies, and a lack of institutionalisation to promote and update the indicators. The importance of the different obstacles varies among users and types of use. In addition to the indicator projects, the materials used in the dissertation include 38 interviews with high-level policy-makers or civil servants close to them, download statistics for the national indicator Internet pages, citations of the national indicator publication, and the media coverage of both indicator sets. According to the results, the most likely use of a sustainable development indicator set by policy-makers is to learn about the concept. Very little evidence of direct use to support decision-making was available. Conceptual use is also common for other user groups, namely the media, civil servants, researchers, students and teachers. Decision-makers themselves consider the most obvious use for the indicators to be the promotion of their own views, which is a form of legitimising use. The sustainable development indicators have different types of use in the policy cycle, and the most commonly expected type, instrumental use, is not very likely or even desirable at all stages. The stages of persuading the public and decision-makers about new problems, as well as formulating new policies, employ legitimising use. Learning through conceptual use is also inherent to policy-making, as the people involved learn about the new situation. Instrumental use is most likely in policy formulation, implementation and evaluation. The dissertation is an article dissertation, including five papers published in scientific journals and an extensive introductory chapter that discusses and weaves together the papers.
Abstract:
The purpose of this study is to analyse the education, employment, and work-life experiences of visually impaired persons in expert jobs. The empirical data consist of 30 thematic interviews (24 visually impaired persons, 1 family member of a visually impaired person, and 5 persons working with diversity issues), supplementary articles, and statistics on the socio-economic status of the visually impaired. The interviewees' experiences of education and employment have been analysed with a qualitative method. The analysis has been deepened by reflecting it against the recent discussion on the concept of diversity. The author's methodological choice as a disability researcher has been to treat the interviewees as co-researchers rather than objects of research. Accessibility in its different forms is a prerequisite of diversity in the workplace, and this study examines what kind of accessibility is required by visually impaired professionals. Access to working life depends on the attitudes, prejudices and expectations that society has towards a minority group. Social accessibility is connected with internal relationships in the workplace, and achieving social accessibility is a bilateral process. Information technology has revolutionised visually impaired people's possibilities of accessing information and performing expert tasks. An accessible environment, good mobility skills, and transportation services enable visually impaired employees to get to their workplaces and to navigate there with ease. Integration has raised the level of education and widened the selection of career options for the visually impaired. However, even visually impaired people with academic degrees often need employment support services. Visually impaired professionals are mainly employed in the public and third sectors. Achieving diversity in the labour market is a multi-actor process. Social support services are needed, as well as courage and readiness from employers to hire people with disabilities. The organisations of the visually impaired play an important role in shaping attitudes and providing peer support. Visually impaired employees need good professional skills, blindness skills, and social courage, and they need to be comfortable with their disability. In the workplace, diversity may be actualised as diverse ways of working: the work is done by using technical aids or other means of compensating for the lack of eyesight. When an employee must find compensatory solutions for disability-related limitations at work, this also develops his or her problem-solving abilities. Keywords: visually impaired, diversity, accessibility, working life
Abstract:
The purpose of this study was to produce information and practical recommendations to support informed decision-making and capacity building for sustainable forest management (SFM) and good forest governance. This was done within the overall global framework for sustainable development, with special emphasis on the EU and African frameworks and on Southern Sudan and Ethiopia in particular. The case studies on Southern Sudan and Ethiopia focused on local, national and regional issues. Moreover, this study attempted to provide new theoretical and practical insight. The aim was to build an overall theoretical framework and to study its key contents and main implications for SFM and good forest governance at all administrative levels, in order to provide new tools for capacity building in natural resources management. The theoretical framework and research approach were based on the original research problem and the general and specific aims of the study. The key elements of the framework encompass sustainable development, global and EU governance, sustainable forest management (SFM), good forest governance, and international and EU law. The selected research approach comprised a matrix-based assessment of international, regional (EU and Africa) and national (Southern Sudan and Ethiopia) policy and legal documents. The specific case study on Southern Sudan also involved interviews and group discussions with local community members and government officials. As a whole, this study attempted to link the global, regional, national and local levels in forest-sector development and, in particular, to analyse how international policy development in environmental and forestry issues is reflected in field-level progress towards SFM and good forest governance in the specific cases of Southern Sudan and Ethiopia. The results on Southern Sudan focused on the existing situation and perceived needs in capacity building for SFM and good forest governance at all administrative levels. Specifically, the results of the case study on Southern Sudan presented the current situation in selected villages in the northern parts of Renk County in Upper Nile State, and the implications of Multilateral Environmental Agreements (MEAs) and of the new forest policy framework for capacity building actions. The results on Ethiopia focused on training, extension, research, education and new curriculum development within higher education institutions, particularly at the Wondo Genet College of Forestry and Natural Resources (WGCF-NR), which administratively lies under Hawassa University. The results suggest that, for both case studies, informed decision-making and capacity building for SFM and good forest governance require comprehensive, long-term, cross-sectoral, coherent and consistent approaches within the dynamic and evolving overall global framework, including its multiple interlinked levels. The specific priority development and focus areas comprised the establishment of SFM and good forest governance in accordance with the overall sustainable development priorities, with more focus on international trade in forest products derived from sustainable and legal sources, and with an emphasis on effective forest law enforcement and governance at all levels.
In Upper Nile State in Southern Sudan there were positive development signals, such as the willingness of the local people to plant more multipurpose trees on farmlands and rangelands, as well as the recognition of the importance of forests and trees for sustainable rural development, in which food security is a key element. In addition, it was evident that the local communities studied in Southern Sudan also wanted to establish good governance systems through partnerships with all actors and through increased local responsibilities. The results also suggest that the implementation of MEAs at the local level in Southern Sudan requires mutually supportive and coherent approaches within the agreements, as well as significantly more resources and financial and technical assistance for capacity building, training and extension. Finally, the findings confirm the importance of fully utilizing the existing local governance and management systems, together with their traditional and customary knowledge and practices, and of new development partnerships with the full participation of all stakeholders. The planned new forest law for Southern Sudan, based on an already existing new forest policy, is expected to recognize the roles of local-level actors and would thus clearly facilitate the achievement of sustainable forest management.
Abstract:
During the last few decades there has been far-reaching financial market deregulation, along with technical development, advances in information technology, and standardization of legislation across countries. As a result, one can expect that financial markets have grown more interlinked. A proper understanding of cross-market linkages has implications for investment and risk management, diversification, asset pricing, and regulation. The purpose of this research is to assess the degree of price, return, and volatility linkages both between geographic markets and between asset categories within one country, Finland. Another purpose is to analyse risk asymmetries, i.e., the tendency of equity risk to be higher after negative events than after positive events of equal magnitude. The analysis is conducted both with respect to total risk (volatility) and with respect to systematic risk (beta). The thesis consists of an introductory part and four essays. The first essay studies the extent to which international stock prices comove. The degree of comovement is low, indicating benefits from international diversification. The second essay examines the degree to which the Finnish market is linked to the "world market". The total risk is divided into two parts, one relating to world factors and one relating to domestic factors. The impact of world factors has increased over time. After 1993, when foreign investors were allowed to invest freely in Finnish assets, the risk level has been higher than previously. This was also the case during the economic recession at the beginning of the 1990s. The third essay focuses on the stock, bond, and money markets in Finland. According to a trading model, the volatility linkages should be strong. However, the results contradict this: the linkages are surprisingly weak, even negative. The stock market is the most independent, while the money market is affected by events in the two other markets. The fourth essay concentrates on volatility and beta asymmetries. Contrary to many international studies, there are only a few cases of risk asymmetry. When they occur, they tend to be driven by the market-wide component rather than the portfolio-specific element.
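A minimal sketch of the beta-asymmetry question addressed in the fourth essay: estimate a portfolio's beta separately following negative and positive market returns. The return series below are simulated and the approach is deliberately simple; the essay uses Finnish market data and more refined asymmetry tests.

```python
# Conditional-beta sketch: beta estimated after down days vs. after up days.
# All return series are simulated; an asymmetry would show up as a higher
# beta following negative market returns.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
market = rng.normal(0, 0.01, n)                       # market returns
portfolio = 1.1 * market + rng.normal(0, 0.008, n)    # portfolio returns

prev_down = np.concatenate(([False], market[:-1] < 0))  # was yesterday's market return negative?
betas = {}
for label, mask in [("after down days", prev_down), ("after up days", ~prev_down)]:
    fit = sm.OLS(portfolio[mask], sm.add_constant(market[mask])).fit()
    betas[label] = fit.params[1]
print(betas)
```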
Abstract:
Topics in Spatial Econometrics — With Applications to House Prices. Spatial effects in data occur when the geographical closeness of observations influences the relation between them. When two points on a map are close to each other, the observed values of a variable at those points tend to be similar; the further apart the two points are, the less similar the observed values tend to be. Recent technical developments, geographical information systems (GIS) and global positioning systems (GPS), have brought about renewed interest in spatial matters. For instance, it is possible to observe the exact location of an observation and combine it with other characteristics. Spatial econometrics integrates spatial aspects into econometric models and analysis. The thesis concentrates mainly on methodological issues, but the findings are illustrated by empirical studies on house price data. The thesis consists of an introductory chapter and four essays. The introductory chapter presents an overview of topics and problems in spatial econometrics. It discusses spatial effects, spatial weights matrices (especially k-nearest-neighbours weights matrices), various spatial econometric models, as well as estimation methods and inference. Further, the problem of omitted variables, a few computational and empirical aspects, the bootstrap procedure and the spatial J-test are presented. In addition, a discussion of hedonic house price models is included. In the first essay, a comparison is made between spatial econometrics and time series analysis. By restricting attention to unilateral spatial autoregressive processes, it is shown that a unilateral spatial autoregression, with properties similar to those of an autoregression in time series, can be defined. Through an empirical study on house price data, the second essay shows that it is possible to form coordinate-based, spatially autoregressive variables that are, at least to some extent, able to replace the spatial structure in a spatial econometric model. In the third essay, a strategy for specifying a k-nearest-neighbours weights matrix by applying the spatial J-test is suggested, studied and demonstrated. In the fourth and final essay, the properties of the asymptotic spatial J-test are examined further. A simulation study shows that the spatial J-test can be used for distinguishing between general spatial models with different k-nearest-neighbours weights matrices. A bootstrap spatial J-test is suggested to correct the size of the asymptotic test in small samples.
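A minimal sketch of one building block discussed throughout the thesis: constructing a row-standardised k-nearest-neighbours weights matrix from point coordinates and forming the spatial lag of a variable. Coordinates and prices are simulated; the essays go further, estimating spatial autoregressive models and selecting k via the spatial J-test.

```python
# Row-standardised k-nearest-neighbours weights matrix W and the spatial lag
# W @ y of a variable (e.g. log house price).  Data are simulated.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
n, k = 200, 5
coords = rng.uniform(0, 10, (n, 2))                 # observation locations
log_price = rng.normal(12, 0.3, n)                  # variable of interest

# k nearest neighbours of each point (the query returns the point itself first).
_, idx = cKDTree(coords).query(coords, k=k + 1)
neighbours = idx[:, 1:]

# Row-standardised weights: each of the k neighbours gets weight 1/k.
W = np.zeros((n, n))
rows = np.repeat(np.arange(n), k)
W[rows, neighbours.ravel()] = 1.0 / k

spatial_lag = W @ log_price                         # average log price of the k neighbours
print(spatial_lag[:5])
```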