894 results for NON-UNIFORM FINITE-DIFFERENCES


Relevance: 30.00%

Abstract:

The non-obese diabetic (NOD) mouse is a model for the study of insulin-dependent diabetes mellitus (IDDM). Recently, transgenic NOD mice have been derived (NOD-E) that express the major histocompatibility complex (MHC) class II I-E molecule. NOD-E mice do not become diabetic and show negligible pancreatic insulitis. One possibility was that NOD-E mice are protected from disease by a process of T-cell deletion or anergy. This paper describes our attempts to discover whether this was so by comparing NOD and NOD-E mouse T-cell receptor V beta usage. Splenocytes and lymph node cells were therefore tested for their ability to proliferate in response to monoclonal anti-V beta antibodies. We were unable to show any consistent differences between NOD and NOD-E responses to the panel of antibodies used. Previously proposed V beta families were shown to be unlikely candidates for deletion or anergy. T cells present at low frequency (V beta 5+) in both NOD and NOD-E mice were shown to be as capable of expansion in response to antigenic stimulation as were more frequently expressed V beta families. Our data therefore do not support deletion or anergy as mechanisms which could account for the observed disease protection in NOD-E mice.

Relevance: 30.00%

Abstract:

For a wide range of environmental, hydrological, and engineering applications, there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove to be inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based rather than ray-based approaches is about one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it remains comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes on synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. To this end, the general motivation of my thesis is the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data in order to apply such schemes to a wide range of real-world problems.

One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in practice. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity as well as significant ambient noise in the recorded data.
Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem. Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to the evaluation of the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
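
For concreteness, the following is a minimal sketch of the standard frequency-domain least-squares source estimate that deconvolution-based schemes of this kind iterate on (in the spirit of Pratt, 1999); the array layout, the water-level regularization `eps`, and the use of NumPy are my assumptions, not the thesis' actual implementation.

```python
import numpy as np

def estimate_source_wavelet(observed, synthetic, eps=1e-3):
    """Least-squares frequency-domain source wavelet estimate.

    observed  : (n_traces, n_samples) recorded georadar traces
    synthetic : (n_traces, n_samples) traces simulated with a
                delta-like (unit) source in the current model
    eps       : water-level regularization, relative to peak power

    Returns the time-domain wavelet that, convolved with the
    synthetics, best fits the observations in a least-squares sense.
    """
    D = np.fft.rfft(observed, axis=1)    # spectra of recorded data
    S = np.fft.rfft(synthetic, axis=1)   # spectra of simulated Green's functions
    num = np.sum(np.conj(S) * D, axis=0)  # cross-correlation term
    den = np.sum(np.abs(S) ** 2, axis=0)  # autocorrelation term
    W = num / (den + eps * den.max())     # regularized spectral division
    return np.fft.irfft(W, n=observed.shape[1])
```

In an iterative scheme, this update would alternate with the tomographic model update: re-simulate with the current model, re-estimate the wavelet, and repeat until both stabilize.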

Relevance: 30.00%

Abstract:

General Introduction This thesis can be divided into two main parts: the first one, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part -the fourth chapter- is concerned with Anti-Dumping (AD) measures. Despite wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements -whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA-, it has been claimed that the benefits from improved market access keep falling short of the full potential benefits. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access of PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to a statistically significant and quantitatively large extent. Part I In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Pr. Olivier Cadot, Céline Carrère and Pr. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying it and extending it to the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows. First, it shows, in the case of PANEURO, that the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs. The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also contribute to moving regionalism toward more openness and hence making it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore, changing all instruments into an MFC would bring improved transparency, pretty much like the "tariffication" of NTBs.
The methodology for this exercise is as follows: In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs. In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests. The third chapter of the thesis considers whether the Europe Agreements of the EU, with the current sets of RoOs, could be the potential model for future EU-centered PTAs. First, I have studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs -used before 1997- and the "Single List" RoOs -used since 1997. Second, using a Constant Elasticity of Transformation function, in which CEEC exporters smoothly mix sales between the EU and the rest of the world by comparing producer prices on each market, I have estimated the trade effects of the EU RoOs. The estimates suggest that much of the market access conferred by the EAs -outside sensitive sectors- was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession. Part II The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset-review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there was a serious risk of resumption of injurious dumping. The last chapter, written with Pr. Olivier Cadot and Pr. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis.
First, using Poisson and Negative Binomial regressions, the count of AD measures' revocations is regressed on (inter alia) the count of "initiations" lagged five years. The analysis yields a coefficient on measures' initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases. Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate for AD measures covered by the Agreement than for other measures.
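
As an illustration of the count-data step, a minimal sketch follows, assuming a pandas DataFrame `df` of yearly AD initiation and revocation counts per imposing country; the variable names and the illustrative year-2000 cutoff are mine, not the chapter's actual code.

```python
import statsmodels.api as sm

# df: one row per (country, year) with columns
#   'revocations' - count of AD measures revoked that year
#   'initiations' - count of AD measures initiated that year
df = df.sort_values(['country', 'year'])
df['initiations_lag5'] = df.groupby('country')['initiations'].shift(5)
# Measures initiated after the 1995 ADA first come up for sunset
# review around 2000 (illustrative regime cutoff).
df['post_agreement'] = (df['year'] >= 2000).astype(int)
df['lag5_x_post'] = df['initiations_lag5'] * df['post_agreement']

model = sm.GLM(
    df['revocations'],
    sm.add_constant(df[['initiations_lag5', 'post_agreement', 'lag5_x_post']]),
    family=sm.families.Poisson(),
    missing='drop',
)
# A positive, precisely estimated coefficient on lag5_x_post would
# suggest the five-year cycle tightened after the agreement; a
# one-for-one cycle would push the lag-5 effect toward unity.
print(model.fit().summary())
```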

Relevance: 30.00%

Abstract:

OBJECTIVES: To compare the use of guideline-recommended medical and interventional therapies in older and younger patients with acute coronary syndromes (ACSs). DESIGN: Prospective cohort study. SETTING: Fifty-five hospitals in Switzerland. PARTICIPANTS: Eleven thousand nine hundred thirty-two patients with ACS enrolled between March 1, 2001, and June 30, 2006. ACS definition included ST-segment elevation myocardial infarction (STEMI), non-ST-segment elevation myocardial infarction (NSTEMI), and unstable angina pectoris (UA). MEASUREMENTS: Use of medical and interventional therapies was determined after exclusion of patients with contraindications and after adjustment for comorbidities. Multivariate logistic regression models were used to calculate odds ratios (ORs) per year increase in age. RESULTS: Elderly patients were less likely to receive acetylsalicylic acid (OR=0.976, 95% confidence interval (CI)=0.969-0.980) or beta-blockers (OR=0.985, 95% CI=0.981-0.989). No age-dependent difference was found for heparin use. Elderly patients with STEMI were less likely to receive percutaneous coronary intervention (PCI) or thrombolysis (OR=0.955, 95% CI=0.949-0.961). Elderly patients with NSTEMI or UA less often underwent PCI (OR=0.943, 95% CI=0.937-0.949). CONCLUSION: Elderly patients across the whole spectrum of ACS were less likely to receive guideline-recommended therapies, even after adequate adjustment for comorbidities. Prognosis of elderly patients with ACS may be improved by increasing adherence to guideline-recommended medical and interventional therapies.
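
The "OR per year increase in age" reported above is the exponentiated age coefficient of a logistic regression. Below is a minimal sketch of such a model; the DataFrame `df`, the column names, and the single comorbidity covariate are illustrative assumptions, not the study's actual specification.

```python
import numpy as np
import statsmodels.formula.api as smf

# df: one row per patient with
#   'aspirin'  - 1 if acetylsalicylic acid was given, else 0
#   'age'      - age in years
#   'charlson' - comorbidity score (stand-in for comorbidity adjustment)
fit = smf.logit('aspirin ~ age + charlson', data=df).fit()

# The OR per one-year increase in age is exp(beta_age);
# values below 1 mean older patients are less likely to be treated.
or_age = np.exp(fit.params['age'])
ci_low, ci_high = np.exp(fit.conf_int().loc['age'])
print(f'OR per year of age: {or_age:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})')
```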

Relevance: 30.00%

Abstract:

The presence of three water channels (aquaporins, AQP), AQP1, AQP4 and AQP9, has been observed in normal brain and in several rodent models of brain pathologies. Little is known about AQP distribution in the primate brain, and such knowledge will be useful for future testing of drugs aimed at preventing brain edema formation. We studied the expression and cellular distribution of AQP1, 4 and 9 in the non-human primate brain. The distribution of AQP4 in the non-human primate brain was observed in perivascular astrocytes, comparable to the observations made in the rodent brain. In contrast with rodent, primate AQP1 is expressed in the processes and perivascular endfeet of a subtype of astrocytes mainly located in the white matter and the glia limitans, possibly involved in water homeostasis. AQP1 was also observed in neurons innervating the pial blood vessels, suggesting a possible role in cerebral blood flow regulation. As described in rodent, AQP9 mRNA and protein were detected in astrocytes and in catecholaminergic neurons. However, additional locations were observed for AQP9 in populations of neurons located in several cortical areas of primate brains. This report describes a detailed study of the distributions of AQP1, 4 and 9 in the non-human primate brain, which adds to the data already published on rodent brains. These relevant species differences have to be considered carefully when assessing, in non-human primate models, potential drugs acting on AQPs before they enter human clinical trials.

Relevance: 30.00%

Abstract:

This paper explores the relationships between noncooperative bargaining games and the consistent value for non-transferable utility (NTU) cooperative games. A dynamic approach to the consistent value for NTU games is introduced: the consistent vector field. The main contribution of the paper is to show that the consistent field is intimately related to the concept of subgame perfection for finite-horizon noncooperative bargaining games, as the horizon goes to infinity and the cost of delay goes to zero. The solutions of the dynamical system associated with the consistent field characterize the subgame perfect equilibrium payoffs of the noncooperative bargaining games. We show that for transferable utility, hyperplane and pure bargaining games, the dynamics of the consistent field converge globally to the unique consistent value. However, in the general NTU case, the dynamics of the consistent field can be complex. An example is constructed in which the consistent field has cyclic solutions; moreover, the finite-horizon subgame perfect equilibria do not approach the consistent value.

Relevance: 30.00%

Abstract:

We study the issue of income convergence across countries and regions with a Bayesian estimator which allows us to use information in an efficient and flexible way. We argue that the very slow convergence rates to a common level of per-capita income found, e.g., by Barro and Xavier Sala-i-Martin are due to a 'fixed effect bias' that their cross-sectional analysis introduces in the results. Our approach permits the estimation of different convergence rates to different steady states for each cross-sectional unit. When this diversity is allowed, we find that convergence of each unit to (its own) steady-state income level is much faster than previously estimated, but that cross-sectional differences persist: inequalities will only be reduced by a small amount with the passage of time. The cross-country distribution of the steady state is largely explained by the cross-country distribution of initial conditions.
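
For context, a sketch in my own notation (not necessarily the paper's) of the growth regressions at issue: the pooled cross-section forces one common convergence rate, while the heterogeneous specification that the Bayesian estimator permits gives each unit its own rate and steady state.

```latex
% Pooled cross-sectional specification (one common rate \beta):
\Delta \log y_{it} = \alpha + \beta \log y_{i,t-1} + \varepsilon_{it}
% Heterogeneous specification (unit-specific intercepts and rates,
% shrunk toward a common prior by the Bayesian estimator):
\Delta \log y_{it} = \alpha_i + \beta_i \log y_{i,t-1} + \varepsilon_{it}
% Implied speed of convergence of unit i to its own steady state,
% and the steady-state income level itself (set \Delta \log y = 0):
\lambda_i = -\log(1 + \beta_i), \qquad \log y_i^{*} = -\alpha_i / \beta_i
```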

Relevance: 30.00%

Abstract:

A new parametric minimum distance time-domain estimator for ARFIMA processes is introduced in this paper. The proposed estimator minimizes the sum of squared autocorrelations of the residuals obtained after filtering a series through the ARFIMA parameters. The estimator is easy to compute and is consistent and asymptotically normally distributed for fractionally integrated (FI) processes with an integration order d strictly greater than -0.75. Therefore, it can be applied to both stationary and non-stationary processes. Deterministic components are also allowed in the DGP. Furthermore, as a by-product, the estimation procedure provides an immediate check on the adequacy of the specified model, because the criterion function, when evaluated at the estimated values, coincides with the Box-Pierce goodness-of-fit statistic. Empirical applications and Monte Carlo simulations supporting the analytical results and showing the good performance of the estimator in finite samples are also provided.
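
A minimal sketch of this idea for the pure FI(d) case (no ARMA part and no deterministic components), under my own naming; the paper's estimator covers the general ARFIMA case. Note that n times the minimized criterion is exactly the Box-Pierce statistic, which is the model-adequacy check mentioned above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def frac_diff(x, d):
    """Apply the fractional difference filter (1-L)^d via its binomial
    expansion, truncated at the sample length."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k  # recursion for (1-L)^d weights
    return np.array([w[: t + 1][::-1] @ x[: t + 1] for t in range(n)])

def criterion(d, x, m):
    """Sum of squared residual autocorrelations up to lag m;
    n * criterion is the Box-Pierce goodness-of-fit statistic."""
    e = frac_diff(x - x.mean(), d)
    e -= e.mean()
    r = np.array([np.sum(e[k:] * e[:-k]) for k in range(1, m + 1)])
    r /= np.sum(e * e)
    return np.sum(r ** 2)

def estimate_d(x, m=20):
    # Bounded search over d; the theory requires d > -0.75.
    res = minimize_scalar(criterion, bounds=(-0.74, 1.5),
                          args=(x, m), method='bounded')
    return res.x
```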

Relevance: 30.00%

Abstract:

BACKGROUND: Non-response is a major concern among substance use epidemiologists. When differences exist between respondents and non-respondents, survey estimates may be biased. Therefore, researchers have developed time-consuming strategies to convert non-respondents into respondents. The present study examines whether late respondents (converted former non-participants) differ from early respondents, non-consenters or silent refusers (consent givers but non-participants) in a cohort study, and whether non-response bias can be reduced by converting former non-respondents. METHODS: 6099 French- and 5720 German-speaking Swiss 20-year-old males (more than 94% of the source population) completed a short questionnaire on substance use outcomes and socio-demographics, independent of any further participation in a cohort study. Early respondents were those participating in the cohort study after standard recruitment procedures. Late respondents were non-respondents who were converted through an individual, encouraging telephone contact. Early respondents, non-consenters and silent refusers were compared to late respondents using logistic regressions. Relative non-response biases for early respondents only, for respondents only (early and late) and for consenters (respondents and silent refusers) were also computed. RESULTS: Late respondents showed generally higher patterns of substance use than did early respondents, but lower patterns than did non-consenters and silent refusers. Converting initial non-respondents to respondents reduced the non-response bias, which might be further reduced if silent refusers were converted to respondents. CONCLUSION: Efforts to convert refusers are effective in reducing non-response bias. However, converted late respondents cannot be seen as proxies of non-respondents, and are at best only indicative of the existing response bias due to persistent non-respondents.

Relevance: 30.00%

Abstract:

BACKGROUND: Sphingomonas wittichii strain RW1 can completely oxidize dibenzo-p-dioxins and dibenzofurans, which are persistent contaminants of soils and sediments. For successful application in soil bioremediation systems, strain RW1 must cope with fluctuations in water availability, or water potential. Thus far, however, little is known about the adaptive strategies used by Sphingomonas bacteria to respond to changes in water potential. To improve our understanding, strain RW1 was perturbed with either the cell-permeating solute sodium chloride or the non-permeating solute polyethylene glycol with a molecular weight of 8000 (PEG8000). These solutes are assumed to simulate the solute and matric components of the total water potential, respectively. The responses to these perturbations were then assessed and compared using a combination of growth assays, transcriptome profiling, and membrane fatty acid analyses. RESULTS: Under conditions producing a similar decrease in water potential but without effect on growth rate, there was only a limited shared response to perturbation with sodium chloride or PEG8000. This shared response included the increased expression of genes involved in trehalose and exopolysaccharide biosynthesis and the reduced expression of genes involved in flagella biosynthesis. Mostly, the responses to perturbation with sodium chloride or PEG8000 were very different. Only sodium chloride triggered the increased expression of two ECF-type RNA polymerase sigma factors and the differential expression of many genes involved in outer membrane and amino acid metabolism. In contrast, only PEG8000 triggered the increased expression of a heat shock-type RNA polymerase sigma factor along with many genes involved in protein turnover and repair. Membrane fatty acid analyses further corroborated these differences: the degree of saturation of membrane fatty acids increased after perturbation with sodium chloride but decreased after perturbation with PEG8000. CONCLUSIONS: A combination of growth assays, transcriptome profiling, and membrane fatty acid analyses revealed that permeating and non-permeating solutes trigger different adaptive responses in strain RW1, suggesting that these solutes affect cells in fundamentally different ways. Future work is now needed to connect these responses with the responses observed in more realistic scenarios of soil desiccation.

Relevance: 30.00%

Abstract:

Many dynamic revenue management models divide the sales period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of the demand in the model, but researchers and practitioners have been willing to overlook this for the benefit of tractability. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds an additional layer of indeterminacy. To counter this, we first propose an alternative finite-population model that avoids the problem of fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating the market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial-logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising in obtaining unbiased estimates of the population size and the model parameters.
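
For intuition on why this is akin to the classical binomial problem with unknown population size and success probability, here is a minimal sketch of that classical problem itself, with illustrative data and parameterization; it is not the paper's RM heuristic, which additionally exploits the MNL structure and offer-set variety.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# Observed purchase counts in repeated, comparable selling seasons
# (illustrative numbers): each is Binomial(N, p) with N and p unknown.
counts = np.array([18, 22, 25, 19, 21])

def neg_log_lik(theta):
    # Parameterize N on a log scale and p via a logit so the optimizer
    # can work unconstrained; round N when evaluating the pmf.
    N = np.exp(theta[0])
    p = 1.0 / (1.0 + np.exp(-theta[1]))
    if N < counts.max():
        return np.inf  # N cannot be below the largest observed count
    return -np.sum(binom.logpmf(counts, np.round(N), p))

res = minimize(neg_log_lik, x0=[np.log(2 * counts.max()), 0.0],
               method='Nelder-Mead')
N_hat = np.exp(res.x[0])
p_hat = 1.0 / (1.0 + np.exp(-res.x[1]))
# The likelihood surface is notoriously flat along N*p = constant,
# which is exactly the indeterminacy a practical heuristic must confront.
```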

Relevance: 30.00%

Abstract:

In this paper, differences in return autocorrelation across weekdays have been investigated. Our research provides strong evidence of the importance of non-trading periods, not only weekends and holidays but also overnight closings, in explaining return autocorrelation anomalies. While stock returns are highly autocorrelated, especially on Mondays, they do not exhibit any significant level of autocorrelation when daily returns are computed on an open-to-close basis. Our results are compatible with the information-processing hypothesis as an explanation of the weekend effect.
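
A minimal sketch of the kind of comparison involved, assuming a pandas DataFrame `px` of daily opening and closing prices with a DatetimeIndex; the column names are mine.

```python
# px: DataFrame indexed by trading date with 'open' and 'close' columns.
r_cc = px['close'].pct_change()        # close-to-close returns (include overnight)
r_oc = px['close'] / px['open'] - 1.0  # open-to-close returns (trading hours only)

def monday_autocorr(r):
    # Correlate each Monday's return with the previous trading day's return.
    prev = r.shift(1)
    mon = r.index.dayofweek == 0
    return r[mon].corr(prev[mon])

print('close-to-close Monday autocorr:', monday_autocorr(r_cc))
print('open-to-close  Monday autocorr:', monday_autocorr(r_oc))
```

Under the pattern reported above, the first figure would be clearly positive while the second would be statistically indistinguishable from zero, isolating the non-trading period as the source of the anomaly.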

Relevance: 30.00%

Abstract:

Over the last decades, a decline in motor skills and in physical activity and an increase in obesity have been observed in children. However, there is a lack of data on young children. We tested whether differences in motor skills and in physical activity according to weight or gender were already present in 2- to 4-year-old children. Fifty-eight child care centers in the French-speaking part of Switzerland were randomly selected for the Youp'là bouge study. Motor skills were assessed by an obstacle course including 5 motor skills, derived from the Zurich Neuromotor Assessment test. Physical activity was measured with accelerometers (GT1M, Actigraph, Florida, USA) using age-adapted cut-offs. Weight status was assessed using the International Obesity Task Force criteria (healthy weight vs overweight) for body mass index (BMI). Of the 529 children (49% girls, 3.4 ± 0.6 years, BMI 16.2 ± 1.2 kg/m2), 13% were overweight. There were no significant weight status-related differences in the single skills of the obstacle course, but there was a trend (p = 0.059) toward a lower performance of overweight children in the overall motor skills score. No significant weight status-related differences in child care-based physical activity were observed. No gender-related differences were found in the overall motor skills score, but boys performed better than girls in 2 of the 5 motor skills (p ≤ 0.04). Total physical activity as well as time spent in moderate-vigorous and in vigorous activity during child care were 12-25% higher, and sedentary activity 5% lower, in boys compared to girls (all p < 0.01). At this early age, there were no significant weight status- or gender-related differences in global motor skills. However, in accordance with data on older children, child care-based physical activity was higher in boys than in girls. These results are important to consider when establishing physical activity recommendations or targeting health promotion interventions in young children.

Relevance: 30.00%

Abstract:

We present a unified geometric framework for describing both the Lagrangian and Hamiltonian formalisms of regular and non-regular time-dependent mechanical systems, which is based on the approach of Skinner and Rusk (1983). The dynamical equations of motion and their compatibility and consistency are carefully studied, making clear that all the characteristics of the Lagrangian and the Hamiltonian formalisms are recovered in this formulation. As an example, a semidiscretization of the nonlinear wave equation is studied, demonstrating the applicability of the proposed formalism.
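
For orientation, a brief sketch of the autonomous Skinner-Rusk construction that such unified formalisms build on; the notation is mine, and the time-dependent case treated in the paper extends these bundles over the time axis.

```latex
% Skinner-Rusk unified space: the Whitney sum of the velocity and
% momentum phase spaces over the configuration manifold Q,
W = TQ \oplus_Q T^{*}Q, \qquad \text{coordinates } (q^{i}, v^{i}, p_{i}),
% carrying the presymplectic form \Omega pulled back from T^{*}Q and the
% Hamiltonian coupling momenta to velocities through the Lagrangian:
H(q, v, p) = p_{i} v^{i} - L(q, v), \qquad \iota_{X}\,\Omega = \mathrm{d}H.
% Compatibility of this equation enforces the Legendre relation
% p_{i} = \partial L / \partial v^{i}, and its solutions project onto
% the Euler-Lagrange equations on TQ and Hamilton's equations on T^{*}Q.
```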

Relevance: 30.00%

Abstract:

The development of statistical models for forensic fingerprint identification purposes has been the subject of increasing research attention in recent years. This can be seen partly as a response to a number of commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. In addition, key forensic identification bodies such as ENFSI [1] and IAI [2] have recently endorsed and acknowledged the potential benefits of using statistical models as an important tool in support of the fingerprint identification process within the ACE-V framework. In this paper, we introduce a new Likelihood Ratio (LR) model based on Support Vector Machines (SVMs) trained with features discovered via morphometric and spatial analyses of corresponding minutiae configurations for both match and close non-match populations often found in AFIS candidate lists. Computed LR values are derived from a probabilistic framework based on SVMs that discover the intrinsic spatial differences of match and close non-match populations. Lastly, experimentation performed on a set of over 120,000 publicly available fingerprint images (mostly sourced from the National Institute of Standards and Technology (NIST) datasets) and a distortion set of approximately 40,000 images is presented, illustrating that the proposed LR model reliably guides towards the right proposition in the identification assessment of match and close non-match populations. Results further indicate that the proposed model is a promising tool for fingerprint practitioners to use for analysing the spatial consistency of corresponding minutiae configurations.
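
As an illustration of how an SVM score can be turned into a likelihood ratio, here is a minimal sketch using class-conditional score densities; the variable names (`X_train`, `y_train`, `X_cal`, `y_cal`) and the kernel-density calibration step are my assumptions, not necessarily the paper's exact construction.

```python
from scipy.stats import gaussian_kde
from sklearn.svm import SVC

# X_*: (n, d) feature vectors from corresponding minutiae configurations;
# y_*: 1 for true matches, 0 for close non-matches (e.g. AFIS candidates).
svm = SVC(kernel='rbf').fit(X_train, y_train)

# Fit class-conditional densities of the SVM decision score on a
# held-out calibration set, so a score s maps to
# LR = f(s | match) / f(s | close non-match).
s_cal = svm.decision_function(X_cal)
f_match = gaussian_kde(s_cal[y_cal == 1])
f_nonmatch = gaussian_kde(s_cal[y_cal == 0])

def likelihood_ratio(X_new):
    s = svm.decision_function(X_new)
    return f_match(s) / f_nonmatch(s)

# LR > 1 supports the same-source proposition; LR < 1 the different-source one.
```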