886 results for two sector model


Relevance: 80.00%

Abstract:

The uncertainty associated with how projected climate change will affect global C cycling could have a large impact on predictions of soil C stocks. The purpose of our study was to determine how various soil decomposition and chemistry characteristics relate to soil organic matter (SOM) temperature sensitivity. We accomplished this objective using long-term soil incubations at three temperatures (15, 25, and 35°C) and pyrolysis molecular beam mass spectrometry (py-MBMS) on 12 soils from 6 sites along a mean annual temperature (MAT) gradient (2–25.6°C). Q10 values calculated from the CO2 respired during the long-term incubation using the Q10-q method showed decomposition of the more resistant fraction to be more temperature sensitive, with a Q10-q of 1.95 ± 0.08 for the labile fraction and 3.33 ± 0.04 for the more resistant fraction. We compared the fit of the soil respiration data between a two-pool model (active and slow) with first-order kinetics and a three-pool model, and found that the two models fit the data equally well statistically; the three-pool model changed the size and rate constant of the more resistant pool. The size of the active pool in these soils, calculated using the two-pool model, increased with incubation temperature and ranged from 0.1 to 14.0% of initial soil organic C. Sites with an intermediate MAT and the lowest C/N ratio had the largest active pool. Pyrolysis molecular beam mass spectrometry showed declines in carbohydrates with conversion from grassland to wheat cultivation and a greater amount of protected carbohydrates in allophanic soils, which may have led to the differences found between soils in the total amount of CO2 respired, the size of the active pool, and their Q10-q values.
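For orientation, the two-pool kinetics and the Q10-q statistic referred to above can be written in one common formulation (an assumption here, not necessarily the study's exact parameterization):

$$C_{\mathrm{resp}}(t) = C_a\,\big(1 - e^{-k_a t}\big) + C_s\,\big(1 - e^{-k_s t}\big), \qquad C_a + C_s = C_0,$$

$$Q_{10\text{-}q} = \frac{t_q(T)}{t_q(T+10)},$$

where $C_a$ and $C_s$ are the active and slow pool sizes, $k_a$ and $k_s$ their first-order rate constants, and $t_q(T)$ the time needed to respire a fixed quantity $q$ of C at incubation temperature $T$ (the incubation temperatures above are spaced 10°C apart).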

Relevance: 80.00%

Abstract:

The business value of IT has mostly been studied in developed countries, but because most IT investment in developing countries is derived from external sources, its influence on business value is likely to differ. We test this notion using a two-layer model: we examine the impact of IT investments on firm processes, and the relationship of those processes to firm performance, in a developing country. Our findings suggest that investment in different areas of IT is positively related to improvements in intermediate business processes, and that these intermediate business processes are positively related to the overall financial performance of firms in a developing country.
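A minimal sketch of such a two-layer (investment to process to performance) estimation is shown below. Variable names and the data file are illustrative assumptions, not the study's actual measures:

```python
# Hypothetical two-layer estimation: IT investment -> intermediate processes -> performance.
import pandas as pd
import statsmodels.formula.api as smf

firms = pd.read_csv("firms.csv")  # illustrative firm-level dataset

# Layer 1: IT investment categories explain intermediate business processes.
layer1 = smf.ols(
    "process_improvement ~ it_infrastructure + it_transactional + it_strategic",
    data=firms,
).fit()

# Layer 2: intermediate processes explain overall financial performance.
layer2 = smf.ols("financial_performance ~ process_improvement", data=firms).fit()

print(layer1.summary())
print(layer2.summary())
```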

Relevance: 80.00%

Abstract:

This book provides a much-needed international dimension on the payoffs of information technology investments. The bulk of the research on the impact of information technology investments has been undertaken in developed economies, mainly the United States; this research provides an alternative, developing-country perspective on how information technology investments affect organizations. Second, there has been much debate and controversy over how to measure information technology investment payoffs. This research uses an innovative two-stage model, proposing that information technology investments first affect processes, and that improvement in those processes then affects performance. In doing so, it considers sectors of information technology investment rather than treating IT investment as a single aggregate. Finally, almost all prior studies in this area have considered only the tangible impact of information technology investments. This research proposes that the benefits can only be properly understood by looking at both the tangible and the intangible benefits.

Relevance: 80.00%

Abstract:

Extreme cold and heat waves, characterised by a number of cold or hot days in succession, place a strain on people's cardiovascular and respiratory systems. The increase in deaths due to these waves may be greater than that predicted by extreme temperatures alone. We examined cold and heat waves in 99 US cities over 14 years (1987–2000) and investigated how the risk of death depended on the temperature threshold used to define a wave, and on a wave's timing, duration and intensity. We defined cold and heat waves as two or more consecutive days with temperatures below a cold threshold or above a heat threshold, respectively. We tried five cold thresholds, using the first to fifth percentiles of temperature, and five heat thresholds, using the ninety-fifth to ninety-ninth percentiles. The extra wave effects were estimated using a two-stage model, to ensure that they were estimated after removing the general effects of temperature. The increases in deaths associated with cold waves were generally small and not statistically significant, and there was even evidence of a decreased risk during the coldest waves. Heat waves generally increased the risk of death, particularly for the hottest heat threshold. Cold waves of a colder intensity or longer duration were not more dangerous. Cold waves earlier in the cool season were more dangerous, as were heat waves earlier in the warm season. In general there was no increased risk of death during cold waves beyond the known increased risk associated with cold temperatures. Cold or heat waves earlier in the cool or warm season may be more dangerous because of a build-up in the susceptible pool or a lack of preparedness for cold or hot temperatures.
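The wave definition above (two or more consecutive days beyond a percentile threshold) can be sketched as follows; the percentile choice and helper name are illustrative, not the authors' actual code:

```python
# Identify heat waves: runs of >= min_days consecutive days above a percentile threshold.
import numpy as np
import pandas as pd

def find_heat_waves(temps: pd.Series, percentile: float = 97, min_days: int = 2):
    """Return the threshold and (start, end) index pairs of qualifying hot runs."""
    threshold = np.percentile(temps, percentile)
    hot = (temps > threshold).to_numpy()
    waves, start = [], None
    for i, flag in enumerate(hot):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_days:
                waves.append((temps.index[start], temps.index[i - 1]))
            start = None
    if start is not None and len(hot) - start >= min_days:
        waves.append((temps.index[start], temps.index[-1]))
    return threshold, waves

# Cold waves would use a low percentile (1st-5th) and temps < threshold instead.
```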

Relevance: 80.00%

Abstract:

The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968). In its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision involving a choice of the extent of their income to report to tax authorities, given a certain institutional environment represented by parameters such as the probability of detection and the penalties imposed if the agent is caught. While this basic framework yields important insights on tax compliance behaviour, it has a critical limitation: it indicates a level of compliance that is significantly below what is observed in the data. This thesis revisits the original framework with a view to addressing this issue and to examining the political economy implications of tax evasion for progressivity in the tax structure. The approach involves building a macroeconomic, dynamic equilibrium model to examine these issues, using a step-wise model-building procedure that starts with some very simple variations of the basic Allingham and Sandmo construct and eventually integrates them into a dynamic general equilibrium overlapping generations framework with heterogeneous agents.

One of the variations incorporates the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation allows agents to first decide whether to evade taxes at all; only if they decide to evade do they then choose the extent of income or wealth to under-report. We find that the 'evade or not' assumption has strikingly different and more realistic implications for the extent of evasion, and demonstrate that it is a more appropriate modelling strategy in the context of macroeconomic models, which are essentially dynamic in nature and involve consumption smoothing across time and across states of nature. Specifically, since deciding to evade affects the agent's ability to smooth consumption by creating two states of nature in which the agent is 'caught' or 'not caught', the agent's utility under certainty, when choosing not to evade, may exceed the expected utility obtained from evading. Furthermore, the simple two-period model with an 'evade or not' choice has strikingly different political economy implications from its Allingham and Sandmo counterpart. In variations of the two models that allow voting on the tax parameter, agents typically vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them; there is, however, a small range of inequality levels for which agents in the 'evade or not' model vote for a relatively low tax rate.

The final steps in the model-building procedure graft the two-period models with a political economy choice onto a dynamic overlapping generations setting with more general, non-linear tax schedules and a 'cost-of-evasion' function that is increasing in the extent of evasion. Results based on numerical simulations of these models show a further improvement in the model's ability to match empirically plausible levels of tax evasion. In addition, the differences between the political economy implications of the 'evade or not' version of the model and its Allingham and Sandmo counterpart are now very striking: there is a large range of values of the inequality parameter for which agents in the 'evade or not' model vote for a low degree of progressivity. This is because, in the 'evade or not' version of the model, low tax rates encourage a large number of agents to choose the 'not evade' option, so that the redistributive mechanism is more 'efficient' than in situations in which tax rates are high. Further implications of the models concern whether variations in the level of inequality, and in parameters such as the probability of detection and the penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate at a given level of inequality is conditional on whether the extent of evasion in the economy is large or small. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion; the models of this thesis provide a necessary first step in that direction.
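For readers unfamiliar with the baseline framework, one standard statement of the Allingham-Sandmo problem and of the 'evade or not' extension discussed above is the following (the notation is generic, not necessarily that of the thesis):

$$\max_{0 \le x \le y}\; (1-p)\,U\big(y - \tau x\big) + p\,U\big(y - \tau x - \pi\,(y - x)\big),$$

where $y$ is true income, $x$ reported income, $\tau$ the tax rate, $p$ the probability of detection and $\pi$ the penalty rate on undeclared income. Under the 'evade or not' formulation the agent first compares the maximized expected utility above with the certain utility of full compliance, $U\big((1-\tau)\,y\big)$, and chooses to evade only if the former is larger.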

Relevance: 80.00%

Abstract:

Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics, and when used in conjunction with comparative genomics they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors grouped by their regulatory role and the corresponding promoter strength. Our study of E.coli σ70 promoters found support, at the 0.1 significance level, for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigations when more experimentally confirmed data are available. Some of the features uncovered in this preliminary exploration also proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E.coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters whose corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available.

Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using SVMs and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E.coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving the inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance, in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees are constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival.

Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y.pestis and P.aeruginosa respectively, but were not present in E.coli or B.subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
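To make the spectrum-kernel idea concrete: the spectrum kernel is equivalent to a linear kernel over k-mer count vectors, so a minimal sketch can be built with scikit-learn. The sequences, labels and function name below are illustrative assumptions, not the thesis' actual pipeline:

```python
# Spectrum-kernel SVM sketch: k-mer counts + linear kernel for binding-site classification.
from itertools import product
import numpy as np
from sklearn.svm import SVC

def spectrum_features(seqs, k=3):
    """k-mer count vectors; a linear kernel on these counts is the spectrum kernel."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    X = np.zeros((len(seqs), len(kmers)))
    for row, s in enumerate(seqs):
        for i in range(len(s) - k + 1):
            j = index.get(s[i:i + k].upper())
            if j is not None:          # skip k-mers containing ambiguous bases
                X[row, j] += 1
    return X

# seqs: candidate binding-site sequences; y: 1 = accepted site, 0 = false positive
seqs = ["ACGTGCAACGT", "TTGACATTGAC"]  # illustrative placeholders
y = np.array([1, 0])
X = spectrum_features(seqs, k=3)
clf = SVC(kernel="linear").fit(X, y)
```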

Relevance: 80.00%

Abstract:

This study compared the performance of a local and three robust optimality criteria in terms of the standard error for a one-parameter and a two-parameter nonlinear model with uncertainty in the parameter values. The designs were also compared under misspecification of the prior parameter distribution. The impact of different correlations between the parameters on the optimal design was examined in the two-parameter model. The designs and standard errors were derived analytically whenever possible and numerically otherwise.
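As background (a standard formulation, not taken from the paper itself): for a nonlinear model $y = \eta(x,\theta) + \varepsilon$ with $\varepsilon \sim N(0,\sigma^2)$, a design $\xi$ with support points $x_i$ and weights $w_i$ has information matrix

$$M(\xi,\theta) = \sum_i w_i\, f(x_i,\theta)\, f(x_i,\theta)^{\mathsf T}, \qquad f(x,\theta) = \frac{\partial \eta(x,\theta)}{\partial \theta},$$

and the asymptotic standard error of $\hat\theta_j$ from $n$ observations is approximately $\sigma\sqrt{[M(\xi,\theta)^{-1}]_{jj}/n}$. A locally optimal design evaluates $M$ at a single prior guess $\theta_0$, whereas robust criteria average (or take the worst case) over a prior distribution for $\theta$.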

Relevance: 80.00%

Abstract:

Differential parental treatment of girls and boys within households is a primary cause of the gender gap in school enrolment and educational attainment in developing countries, particularly in Sub-Saharan Africa and South Asia. While a number of studies have focused on the inequality of educational opportunities in South Asia, little is known about Bhutan. This study uses recent household expenditure data from the Bhutan Living Standard Survey to evaluate the gender gap in the allocation of resources for schooling. The findings, based on cross-sectional as well as household fixed-effect approaches, suggest that girls are less likely to enrol in school but are not allocated fewer resources once they are enrolled.
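A hedged sketch of the household fixed-effect comparison is shown below; the file, variable names and 0/1 indicator coding are hypothetical, not the actual survey extract:

```python
# Within-household comparison of girls and boys: C(household_id) absorbs all
# household-level characteristics, so the 'girl' coefficient is identified off
# households that contain both girls and boys.
import pandas as pd
import statsmodels.formula.api as smf

children = pd.read_csv("bls_children.csv")          # one row per school-age child
in_school = children[children["enrolled"] == 1]

# Enrolment gap (linear probability model with household fixed effects).
m_enrol = smf.ols("enrolled ~ girl + age + C(household_id)", data=children).fit()

# Conditional on enrolment: is spending per enrolled child lower for girls?
m_spend = smf.ols("log_school_spend ~ girl + age + C(household_id)", data=in_school).fit()

print(m_enrol.params["girl"], m_spend.params["girl"])
```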

Relevance: 80.00%

Abstract:

The resection of DNA double-strand breaks (DSBs) to generate ssDNA tails is a pivotal event in the cellular response to these breaks. In the two-step model of resection, primarily elucidated in yeast, initial resection by Mre11-CtIP is followed by extensive resection by two distinct pathways involving Exo1 or BLM/WRN-Dna2. However, resection pathways and their exact contributions in humans in vivo are not as clearly worked out as in yeast. Here, we examined the contribution of Exo1 to DNA end resection in humans in vivo in response to ionizing radiation (IR) and its relationship with other resection pathways (Mre11-CtIP or BLM/WRN). We find that Exo1 plays a predominant role in resection in human cells along with an alternate pathway dependent on WRN. While Mre11 and CtIP stimulate resection in human cells, they are not absolutely required for this process and Exo1 can function in resection even in the absence of Mre11-CtIP. Interestingly, the recruitment of Exo1 to DNA breaks appears to be inhibited by the NHEJ protein Ku80, and the higher level of resection that occurs upon siRNA-mediated depletion of Ku80 is dependent on Exo1. In addition, Exo1 may be regulated by 53BP1 and Brca1, and the restoration of resection in BRCA1-deficient cells upon depletion of 53BP1 is dependent on Exo1. Finally, we find that Exo1-mediated resection facilitates a transition from ATM- to ATR-mediated cell cycle checkpoint signaling. Our results identify Exo1 as a key mediator of DNA end resection and DSB repair and damage signaling decisions in human cells.

Relevance: 80.00%

Abstract:

LiFePO4 is a commercially available battery material with good theoretical discharge capacity, excellent cycle life and increased safety compared with competing Li-ion chemistries. It has been the focus of considerable experimental and theoretical scrutiny in the past decade, resulting in LiFePO4 cathodes that perform well at high discharge rates. This scrutiny has raised several questions about the behaviour of LiFePO4 material during charge and discharge. In contrast to many other battery chemistries that intercalate homogeneously, LiFePO4 can phase-separate into highly and lowly lithiated phases, with intercalation proceeding by advancing an interface between these two phases. The main objective of this thesis is to construct mathematical models of LiFePO4 cathodes that can be validated against experimental discharge curves. This is in an attempt to understand some of the multi-scale dynamics of LiFePO4 cathodes that can be difficult to determine experimentally. The first section of this thesis constructs a three-scale mathematical model of LiFePO4 cathodes that uses a simple Stefan problem (which has been used previously in the literature) to describe the assumed phase-change. LiFePO4 crystals have been observed agglomerating in cathodes to form a porous collection of crystals and this morphology motivates the use of three size-scales in the model. The multi-scale model developed validates well against experimental data and this validated model is then used to examine the role of manufacturing parameters (including the agglomerate radius) on battery performance. The remainder of the thesis is concerned with investigating phase-field models as a replacement for the aforementioned Stefan problem. Phase-field models have recently been used in LiFePO4 and are a far more accurate representation of experimentally observed crystal-scale behaviour. They are based around the Cahn-Hilliard-reaction (CHR) IBVP, a fourth-order PDE with electrochemical (flux) boundary conditions that is very stiff and possesses multiple time and space scales. Numerical solutions to the CHR IBVP can be difficult to compute and hence a least-squares based Finite Volume Method (FVM) is developed for discretising both the full CHR IBVP and the more traditional Cahn-Hilliard IBVP. Phase-field models are subject to two main physicality constraints and the numerical scheme presented performs well under these constraints. This least-squares based FVM is then used to simulate the discharge of individual crystals of LiFePO4 in two dimensions. This discharge is subject to isotropic Li+ diffusion, based on experimental evidence that suggests the normally orthotropic transport of Li+ in LiFePO4 may become more isotropic in the presence of lattice defects. Numerical investigation shows that two-dimensional Li+ transport results in crystals that phase-separate, even at very high discharge rates. This is very different from results shown in the literature, where phase-separation in LiFePO4 crystals is suppressed during discharge with orthotropic Li+ transport. Finally, the three-scale cathodic model used at the beginning of the thesis is modified to simulate modern, high-rate LiFePO4 cathodes. High-rate cathodes typically do not contain (large) agglomerates and therefore a two-scale model is developed. The Stefan problem used previously is also replaced with the phase-field models examined in earlier chapters. 
The results from this model are then compared with experimental data and fit poorly, though a significant parameter regime could not be investigated numerically. Many-particle effects, however, are evident in the simulated discharges, which match the conclusions of recent literature. These effects result in crystals that are subject to local currents very different from the discharge rate applied to the cathode, which affects the phase-separating behaviour of the crystals and raises questions about the validity of using cathode-scale experimental measurements to determine crystal-scale behaviour.
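For orientation, the phase-field (Cahn-Hilliard) formulation referred to above can be sketched in one common form (an assumption; the thesis' exact free energy, non-dimensionalization and boundary conditions may differ):

$$\frac{\partial c}{\partial t} = \nabla\cdot\big(M\,\nabla\mu\big), \qquad \mu = \frac{\partial f}{\partial c} - \kappa\,\nabla^2 c,$$

where $c$ is the normalized Li concentration, $f(c)$ a double-well homogeneous free energy, $\kappa$ the gradient-energy coefficient and $M$ the mobility; in the Cahn-Hilliard-reaction problem the intercalation reaction enters through a flux boundary condition on $M\,\nabla\mu\cdot\mathbf{n}$ at the crystal surface.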

Relevance: 80.00%

Abstract:

Background: Accelerometers have become one of the most common methods of measuring physical activity (PA). Thus, the validity of accelerometer data reduction approaches remains an important research area. Yet, few studies directly compare data reduction approaches and other PA measures in free-living samples.

Objective: To compare PA estimates provided by 3 accelerometer data reduction approaches, steps, and 2 self-reported estimates: Crouter's 2-regression model, Crouter's refined 2-regression model, the weighted cut-point method adopted in the National Health and Nutrition Examination Survey (NHANES; 2003-2004 and 2005-2006 cycles), steps, IPAQ, and 7-day PA recall.

Methods: A worksite sample (N = 87) completed online surveys and wore ActiGraph GT1M accelerometers and pedometers (SW-200) during waking hours for 7 consecutive days. Daily time spent in sedentary, light, moderate, and vigorous intensity activity and the percentage of participants meeting PA recommendations were calculated and compared.

Results: Crouter's 2-regression (161.8 ± 52.3 minutes/day) and refined 2-regression (137.6 ± 40.3 minutes/day) models provided significantly higher estimates of moderate and vigorous PA and proportions of those meeting PA recommendations (91% and 92%, respectively) as compared with the NHANES weighted cut-point method (39.5 ± 20.2 minutes/day, 18%). Differences between other measures were also significant.

Conclusions: When comparing 3 accelerometer cut-point methods, steps, and self-report measures, estimates of PA participation vary substantially.
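A cut-point approach like the one referenced above amounts to classifying each minute's activity count against intensity thresholds. The sketch below uses the commonly cited Troiano/NHANES thresholds as assumed illustrative values and does not reproduce this paper's exact weighting scheme:

```python
# Classify minute-by-minute accelerometer counts into intensity bands (illustrative thresholds).
import numpy as np

SEDENTARY_MAX = 99     # counts/min
MODERATE_MIN = 2020    # counts/min
VIGOROUS_MIN = 5999    # counts/min

def classify_minutes(counts: np.ndarray) -> dict:
    """Return minutes in each intensity band for one day of 1-minute epochs."""
    return {
        "sedentary": int(np.sum(counts <= SEDENTARY_MAX)),
        "light": int(np.sum((counts > SEDENTARY_MAX) & (counts < MODERATE_MIN))),
        "moderate": int(np.sum((counts >= MODERATE_MIN) & (counts < VIGOROUS_MIN))),
        "vigorous": int(np.sum(counts >= VIGOROUS_MIN)),
    }

day = np.random.default_rng(0).integers(0, 8000, size=1440)  # simulated day of counts
print(classify_minutes(day))
```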

Relevance: 80.00%

Abstract:

Commodity price modeling is normally approached in terms of structural time-series models, in which the different components (states) have a financial interpretation. The parameters of these models can be estimated using maximum likelihood. This approach results in a non-linear parameter estimation problem, and thus a key issue is how to obtain reliable initial estimates. In this paper, we focus on the initial parameter estimation problem for the Schwartz-Smith two-factor model commonly used in asset valuation. We propose a two-step method. The first step considers a univariate model based only on the spot price and uses a transfer function model to obtain initial estimates of the fundamental parameters. The second step uses these estimates to initialize a re-parameterized, innovations-based state-space estimator that incorporates information from futures prices; it refines the estimates obtained in the first step and also yields estimates of the remaining parameters of the model. The paper is partly tutorial in nature and gives an introduction to aspects of commodity price modeling and the associated parameter estimation problem.
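In the usual notation, the Schwartz-Smith model decomposes the log spot price into a short-term mean-reverting factor and a long-term equilibrium factor:

$$\ln S_t = \chi_t + \xi_t, \qquad d\chi_t = -\kappa\,\chi_t\,dt + \sigma_\chi\,dW_\chi, \qquad d\xi_t = \mu_\xi\,dt + \sigma_\xi\,dW_\xi,$$

with $dW_\chi\,dW_\xi = \rho\,dt$. The two-step procedure described above targets initial values for $\kappa$, $\mu_\xi$, $\sigma_\chi$, $\sigma_\xi$ and $\rho$ before the innovations-based state-space estimation that also draws on futures prices.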

Relevance: 80.00%

Abstract:

Dietitians have reported a lack of confidence in counselling clients with mental health issues, and standardised tools are needed to evaluate programs aiming to improve this confidence. The Dietetic Confidence Scale (DCS) was developed to assess dietitians' perceived capability when working with clients experiencing depression. Exploratory research revealed a 13-item, two-factor model, in which dietetic confidence was associated with: 1) Confidence using the Nutrition Care Process; and 2) Confidence in Advocacy for Self-care and Client-care. This study aimed to validate the DCS using this two-factor model. The DCS was administered to 458 dietitians. Confirmatory factor analysis (CFA) assessed the scale's psychometric validity, and reliability was measured using Cronbach's alpha (α) coefficient. The CFA results supported the hypothesised two-factor, 13-item model, with the Goodness-of-Fit Index (GFI = 0.95) indicating a strong fit. Item-factor correlations ranged from r = 0.50 to 0.89, and the overall scale and subscales showed good reliability (α = 0.76 to 0.93). This is the first study to validate an instrument that measures dietetic confidence about working with clients experiencing depression. The DCS can be used to measure changes in perceived confidence and to identify where further training, mentoring or experience is needed. The findings also suggest that initiatives aimed at building dietitians' confidence about working with clients experiencing depression should focus on improving client-focused nutrition care, and should foster advocacy, reflective practice, mentoring and professional support networks. Avenues for future research include further validity and reliability testing to broaden the generalisability of the results, and modifying the scale for other diseases or client populations.
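Cronbach's alpha, the reliability coefficient reported above, can be computed directly from an item-response matrix. A minimal sketch with simulated responses (illustrative data only, not the study's):

```python
# Cronbach's alpha from a (respondents x items) score matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(458, 13))  # simulated 458 dietitians x 13 Likert items
print(cronbach_alpha(responses))
```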

Relevance: 80.00%

Abstract:

Purposes: The first objective was to propose a new model representing the balance level of adults with intellectual and developmental disabilities (IDD) using principal components analysis (PCA); the second was to use the PCA scores, obtained by the regression method, to construct and validate summative scales of the standardized values of the index, which may facilitate balance assessment in adults with IDD.

Methods: A total of 801 individuals with IDD (509 males), mean age 33.1 ± 8.5 years, were recruited from Special Olympics Games held in Spain between 2009 and 2012. The participants performed the following tests: the timed-stand test, the single-leg stance test with open and closed eyes, the Functional Reach Test, and the Expanded Timed Get-up-and-Go Test. Data were analyzed using PCA with Oblimin rotation and Kaiser normalization. We examined the construct validity of our proposed two-factor model underlying balance for adults with IDD. The scores from the PCA were obtained by the regression method and standardized.

Results: The component plot in rotated space indicated that a two-factor solution (dynamic and static balance components) was optimal. The PCA with direct Oblimin rotation revealed a satisfactory percentage of total variance explained by the two factors: 51.6% and 21.4%, respectively. The median standardized scores for the dynamic and static components of the balance index for adults with IDD are presented as reference values.

Conclusions: Our study may lead to improvements in the understanding and assessment of balance in adults with IDD. First, it confirms that a two-factor model may underlie the balance construct; second, it provides an index that may be useful for identifying the balance level of adults with IDD.
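A hedged sketch of the component-extraction and standardized-score steps is shown below. scikit-learn's PCA does not provide Oblimin rotation, so this unrotated two-component version only illustrates the scoring idea (an oblique rotation would need a separate package such as factor_analyzer); the file and column names are hypothetical:

```python
# Two-component PCA on the balance tests, followed by standardized component scores.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

balance = pd.read_csv("balance_tests.csv")  # hypothetical table of the test scores
X = StandardScaler().fit_transform(balance[[
    "timed_stand", "single_leg_open", "single_leg_closed",
    "functional_reach", "expanded_tgug",
]])

pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)                              # component scores per participant
scores_std = StandardScaler().fit_transform(scores)    # standardized index values
print(pca.explained_variance_ratio_)                   # cf. 51.6% and 21.4% reported above
```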

Relevance: 80.00%

Abstract:

Background: The purpose of this study was the development of a valid and reliable “Mechanical and Inflammatory Low Back Pain Index” (MIL) for the assessment of non-specific low back pain (NSLBP). This 7-item tool assists practitioners in determining whether symptoms are predominantly mechanical or inflammatory.

Methods: Participants (n = 170, 96 females, age 38 ± 14 years) with NSLBP were referred to two Spanish physiotherapy clinics and completed the MIL and the following measures: the Roland Morris Questionnaire (RMQ), the SF-12 and the “Backache Index” (BAI) physical assessment test. For test-retest reliability, 37 consecutive patients were assessed at baseline and three days later during a non-treatment period. Face and content validity, practical characteristics, factor analysis, internal consistency, discriminant validity and convergent validity were assessed in the full sample.

Results: A total of 27 potential items identified for inclusion were reduced to 11 by an expert panel. Four items were then removed due to cross-loading in confirmatory factor analysis, where a two-factor model yielded a good fit to the data (χ2 = 14.80, df = 13, p = 0.37, CFI = 0.98, RMSEA = 0.029). Internal consistency was moderate (α = 0.68 for MLBP; 0.72 for ILBP), test-retest reliability was high (ICC = 0.91; 95% CI = 0.88-0.93) and discriminant validity was good for both MLBP (AUC = 0.74) and ILBP (AUC = 0.92). Convergent validity was demonstrated through similar but weak correlations between the ILBP and both the RMQ and the BAI (r = 0.34, p < 0.001), and between the MLBP and the BAI (r = 0.38, p < 0.001).

Conclusions: The MIL is a valid and reliable clinical tool for patients with NSLBP that discriminates between mechanical and inflammatory LBP.

Keywords: Low back pain; Psychometric properties; Pain measurement; Screening tool; Inflammatory; Mechanical