42 results for Predictability
Predicting intentions and behaviours in populations with or at-risk of diabetes: A systematic review
Abstract:
Purpose: To systematically review Theory of Planned Behaviour (TPB) studies predicting self-care intentions and behaviours in populations with, or at risk of, diabetes. Methods: A systematic review using six electronic databases was conducted in 2013. A standardised protocol was used for appraisal. Eligibility criteria were: (i) a measure of behaviour for healthy eating, physical activity, glucose monitoring or medication use; (ii) the TPB variables; and (iii) the TPB tested in populations with or at risk of diabetes. Results: Sixteen studies testing the utility of the TPB were appraised. Studies included cross-sectional (n=7), prospective (n=5) and randomised controlled trial (n=4) designs. Intention (18-76%) was the most predictive construct for all behaviours. Explained variance for intentions was similar across cross-sectional (28-76%), prospective (28-73%) and RCT studies (18-63%). RCTs (18-43%) provided slightly stronger evidence for predicting behaviour. Conclusions: Few studies have tested the predictability of the TPB in populations with or at risk of diabetes. This review highlighted differences in the predictive utility of the TPB, suggesting that the model is behaviour- and population-specific. Findings on key determinants of specific behaviours contribute to a better understanding of the mechanisms of behaviour change and are useful in designing targeted behavioural interventions for different diabetes populations.
Abstract:
CoMFA and CoMSIA analyses were used in this investigation to define the important interacting regions in the paclitaxel/tubulin binding site and to develop selective paclitaxel-like active compounds. The starting geometry of the paclitaxel analogs was taken from the crystal structure of docetaxel. A total of 28 paclitaxel derivatives were divided into two groups: a training set of 19 compounds and a test set of nine compounds. They were constructed and geometrically optimized using SYBYL v6.6. The CoMFA model showed good predictability (q2 = 0.699, r2 = 0.991, PC = 6, S.E.E. = 0.343 and F = 185.910). It identified steric and electrostatic properties as the major interacting forces, with the lipophilic property a minor contributor to the recognition forces of the binding site. These results were in agreement with the experimental binding activities of these compounds. Five fields in the CoMSIA analysis (steric, electrostatic, hydrophobic, and hydrogen-bond acceptor and donor properties) were considered contributors to the ligand–receptor interactions. The CoMSIA results were: q2 = 0.535, r2 = 0.983, PC = 5, S.E.E. = 0.452 and F = 127.884. The data obtained from both the CoMFA and CoMSIA studies were interpreted with respect to the paclitaxel/tubulin binding site, suggesting where the most significant anchoring points for binding affinity are located. This information could be used to develop new compounds with paclitaxel-like activity as new chemical entities, overcoming the existing pharmaceutical barriers and the economic problems associated with synthesizing paclitaxel analogs. This would broaden the use of this valuable class of compounds, for example in brain tumors, since most of the presently active compounds cross the blood–brain barrier poorly and various tubulin isotypes have shown resistance to taxanes and other antimitotic agents.
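The cross-validated q2 statistic reported for both models can be illustrated with a minimal leave-one-out computation (q2 = 1 - PRESS/SS). This sketch substitutes an ordinary least-squares fit for the PLS models actually employed, and all names are illustrative:

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2 = 1 - PRESS/SS.
    Each observation is held out in turn, the model refitted,
    and the squared prediction error accumulated (PRESS)."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept column
    press = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        press += (y[i] - X[i] @ beta) ** 2
    ss = ((y - y.mean()) ** 2).sum()
    return 1.0 - press / ss
```

A q2 near 1 (as with the CoMFA value of 0.699 relative to its scale) indicates that held-out compounds are predicted well, whereas r2 alone only measures fit to the training set.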
Abstract:
Over the years, bioelectrical impedance analysis (BIA) has gained popularity in the assessment of body composition. However, equations for the prediction of whole-body composition use whole-body BIA. This study evaluates the usefulness of segmental BIA in the assessment of whole-body composition. A cross-sectional descriptive study was conducted at the Professorial Paediatric Unit of Lady Ridgeway Hospital, Colombo, involving 259 healthy children (144 male, 115 female) aged 5 to 15 years. Height, weight, and total and segmental BIA were measured, and impedance indices and specific resistivity for the whole body and segments were calculated. Segmental BIA indices showed a significant association with whole-body composition measures assessed by total body water (TBW) using the isotope dilution method (D2O). The impedance index was better related to TBW and fat-free mass (FFM), while specific resistivity was better related to the fat mass of the body. Regression equations with different combinations of variables showed high predictability of whole-body composition. The results of this study show that segmental BIA can be used as an alternative approach to predicting whole-body composition in Sri Lankan children.
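A regression of the kind described, predicting TBW from the impedance index height^2/Z, might be sketched as follows. The single-predictor form, variable names and units are assumptions for illustration; the study's actual equations combine several segmental predictors:

```python
import numpy as np

def fit_tbw_model(height_cm, impedance_ohm, tbw_litres):
    """Fit TBW ~ a + b * (height^2 / Z), where height^2/Z is the
    impedance index. Returns the intercept and slope."""
    index = np.asarray(height_cm, float) ** 2 / np.asarray(impedance_ohm, float)
    X = np.column_stack([np.ones(len(index)), index])
    beta, *_ = np.linalg.lstsq(X, np.asarray(tbw_litres, float), rcond=None)
    return beta

def predict_tbw(beta, height_cm, impedance_ohm):
    """Apply the fitted equation to new height/impedance measurements."""
    index = np.asarray(height_cm, float) ** 2 / np.asarray(impedance_ohm, float)
    return beta[0] + beta[1] * index
```

The same template extends to segmental predictors by adding further columns (e.g. limb-specific indices) to the design matrix.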
Abstract:
The construction industry is a knowledge-based industry in which various actors with diverse expertise create unique information within different phases of a project. The industry has been criticized by researchers and practitioners as being unable to apply newly created knowledge effectively to innovate. The fragmented nature of the construction industry reduces the opportunity for project participants to learn from each other and absorb knowledge. Building Information Modelling (BIM), referring to digital representations of constructed facilities, is a promising technological advance that has been proposed to assist in the sharing of knowledge and the creation of linkages between firms. Previous studies have mainly focused on the technical attributes of BIM, and there is little evidence on its capability to enhance learning in construction firms. This conceptual paper identifies six 'functional attributes' of BIM that act as triggers to stimulate learning: (1) comprehensibility; (2) predictability; (3) accuracy; (4) transparency; (5) mutual understanding; and (6) integration.
Abstract:
Interest in collaborative Unmanned Aerial Vehicles (UAVs) in Multi-Agent Systems is growing, to complement the strengths and weaknesses of the human-machine relationship. To achieve effective management of multiple heterogeneous UAVs, the agents' status models must be communicated to each other. This paper presents the effects on operator Cognitive Workload (CW), Situation Awareness (SA), trust and performance of increasing autonomy-capability transparency through text-based communication from the UAVs to the human agents. The results revealed a reduction in CW, an increase in SA, increases in the Competence, Predictability and Reliability dimensions of trust, and improved operator performance.
Abstract:
Japan is in the midst of massive law reform. Mired in ongoing recession since the early 1990s, Japan has been implementing a new regulatory blueprint to kickstart a sluggish economy through structural change. A key element of this reform process is a rethink of corporate governance and its stakeholder relations. With a patchwork of legislative initiatives in areas as diverse as corporate law, finance, labour relations, consumer protection, public administration and civil justice, this new model is beginning to take shape. But to what extent does this model represent a break from the past? Some commentators are breathlessly predicting the "Americanisation" of Japanese law. They see the triumph of Western-style capitalism - the "End of History", to borrow the words of Francis Fukuyama - with its emphasis on market-based, arm's-length transactions. Others are more cautious, advancing the view that these new reforms are merely "creative twists" on what is a uniquely (although slowly evolving) strand of Japanese capitalism. This paper takes issue with both interpretations. It argues that the new reforms merely follow Japan's long tradition of 'adopting and adapting' foreign models to suit domestic purposes. They are neither the wholesale importation of "Anglo-Saxon" regulatory principles nor a thin veneer over a 'uniquely unique' form of Confucian cultural capitalism. Rather, they represent a specific and largely political solution (conservative reformism) to a current economic problem (recession). The larger themes of this paper are 'change' and 'continuity'. 'Change' suggests evolution to something identifiable; 'continuity' suggests adhering to an existing state of affairs. Although notionally opposites, 'change' and 'continuity' have something in common - they both suggest some form of predictability and coherence in regulatory reform.
Our paper, by contrast, submits that Japanese corporate governance reform - or, indeed, law reform more generally in Japan - is context-specific, multi-layered (with different dimensions not necessarily all pulling in the same direction; for example, in relations with key outside suppliers), and therefore more random or 'chaotic'.
Abstract:
We consider the development of statistical models for predicting the constituent concentration of riverine pollutants, a key step in load estimation from frequent flow-rate data and less frequently collected concentration data. We consider how to capture the impacts of past flow patterns via the average discounted flow (ADF), which discounts past flux based on the time elapsed: more recent fluxes are given more weight. However, the effectiveness of the ADF depends critically on the choice of the discount factor, which reflects the unknown environmental accumulation process of the concentration compounds. We propose to choose the discount factor by maximizing the adjusted R^2 value or the Nash-Sutcliffe model efficiency coefficient; the R^2 values are adjusted to account for the number of parameters in the model fit. The resulting optimal discount factor can be interpreted as a measure of the constituent exhaustion rate during flood events. To evaluate the performance of the proposed regression estimators, we examine two different sampling scenarios by resampling fortnightly and opportunistically from two real daily datasets from two United States Geological Survey (USGS) gaging stations, located in the Des Plaines River and Illinois River basins. The generalized rating-curve approach produces biased estimates of the total sediment loads of -30% to 83%, whereas the new approaches produce much lower biases, ranging from -24% to 35%. This substantial improvement in the estimates of the total load is due to the fact that the predictability of concentration is greatly improved by the additional predictors.
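The ADF construction and the grid search over the discount factor can be sketched as follows. This is a minimal illustration assuming a log-log regression of concentration on flow and ADF; the function names and regression form are assumptions, not the paper's specification:

```python
import numpy as np

def average_discounted_flow(flow, delta):
    """ADF at time t: past fluxes weighted by delta**lag, so more
    recent flow gets more weight (0 < delta < 1), normalised by the
    total weight."""
    adf = np.empty(len(flow))
    acc, weight = 0.0, 0.0
    for t, q in enumerate(flow):
        acc = delta * acc + q
        weight = delta * weight + 1.0
        adf[t] = acc / weight
    return adf

def choose_discount(flow, conc, deltas):
    """Pick the delta maximising adjusted R^2 of a log-log regression
    of concentration on flow and ADF (two predictors + intercept)."""
    n, p = len(conc), 2
    logc = np.log(conc)
    best = (None, -np.inf)
    for d in deltas:
        X = np.column_stack(
            [np.ones(n), np.log(flow), np.log(average_discounted_flow(flow, d))]
        )
        beta, *_ = np.linalg.lstsq(X, logc, rcond=None)
        resid = logc - X @ beta
        r2 = 1 - resid @ resid / ((logc - logc.mean()) ** 2).sum()
        adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
        if adj_r2 > best[1]:
            best = (d, adj_r2)
    return best  # (optimal discount factor, adjusted R^2)
```

A delta near 1 spreads weight over a long flow history (slow exhaustion), while a small delta makes the ADF track recent flow closely (rapid exhaustion during events).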
Abstract:
We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and the estimation of trends or determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One dataset, from the Burdekin River, consists of total suspended sediment (TSS) and nitrogen oxide (NOx) concentrations and gauged flow for 1997. The other dataset is from the Tully River, for the period July 2000 to June 2008.
For NOx in the Burdekin dataset, the new estimates are very similar to the ratio estimates, even when there is no relationship between concentration and flow. However, for the Tully dataset, incorporating the additional predictive variables, namely the discounted flow and the flow phase (rising or receding), substantially improved the model fit, and thus the certainty with which the load is estimated.
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and the estimation of trends or determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized in the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, where the corresponding flow rates were not collected;
- (iii) establish a predictive model for the concentration data that incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) sum the products of the predicted flow and the predicted concentration over the regular time intervals to obtain an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors resulting from intensive sampling during floods.
Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxide (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from a factor of 2 to 10, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
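The final aggregation step of the procedure, summing the products of predicted flow and predicted concentration over the regular time intervals, can be sketched as follows (the function name and units are illustrative, not from the paper):

```python
import numpy as np

def estimate_load(pred_flow, pred_conc, dt_seconds):
    """Load estimate = sum over regular intervals of
    (predicted flow) x (predicted concentration) x (interval length).
    Illustrative units: flow in m^3/s, concentration in mg/L
    (= g/m^3), interval in seconds, giving the load in grams."""
    pred_flow = np.asarray(pred_flow, float)
    pred_conc = np.asarray(pred_conc, float)
    return float(np.sum(pred_flow * pred_conc) * dt_seconds)
```

Both inputs come from the fitted models of steps (i)-(iii), evaluated on the same regular time grid, so the product is well defined at every interval.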
Abstract:
This Article analyzes the recognition and enforcement of cross-border insolvency judgments from the United States, United Kingdom, and Australia to determine whether the UNCITRAL Model Law’s goal of modified universalism is currently being practiced, and subjects the Model Law to analysis through the lens of international relations theories to elaborate a way forward. We posit that courts could use the express language of the Model Law text to confer recognition and enforcement of foreign insolvency judgments. The adoption of our proposal will reduce costs, maximize recovery for creditors, and ensure predictability for all parties.
Abstract:
- Background: Expressed emotion (EE) captures the affective quality of the relationship between family caregivers and their care recipients and is known to increase the risk of poor health outcomes for caregiving dyads. Little is known about expressed emotion in the context of caregiving for persons with dementia, especially in non-Western cultures. The Family Attitude Scale (FAS) is a psychometrically sound self-report measure of EE. Its use in the examination of caregiving for patients with dementia has not yet been explored.
- Objectives: This study examined the psychometric properties of the Chinese version of the FAS (FAS-C) in Chinese caregivers of relatives with dementia, and its validity in predicting severe depressive symptoms among the caregivers.
- Methods: The FAS was translated into Chinese using Brislin's model. Two expert panels evaluated the semantic equivalence and content validity of this Chinese version (FAS-C). A total of 123 Chinese primary caregivers of relatives with dementia were recruited from three elderly community care centers in Hong Kong. The FAS-C was administered together with the Chinese versions of the 5-item Mental Health Inventory (MHI-5), the Zarit Burden Interview (ZBI) and the Revised Memory and Behavioral Problem Checklist (RMBPC).
- Results: The FAS-C had excellent semantic equivalence with the original version and a content validity index of 0.92. Exploratory factor analysis identified a three-factor structure for the FAS-C (hostile acts, criticism and distancing). Cronbach's alpha of the FAS-C was 0.92. Pearson's correlations indicated significant associations between a higher FAS-C score and greater caregiver burden (r = 0.66, p < 0.001), poorer caregiver mental health (r = −0.65, p < 0.001) and a higher level of dementia-related symptoms (frequency of symptoms: r = 0.45, p < 0.001; symptom disturbance: r = 0.51, p < 0.001), supporting its construct validity.
For detecting severe depressive symptoms in the family caregivers, the receiver operating characteristic (ROC) curve had an area under the curve of 0.78 (95% confidence interval (CI) = 0.69–0.87, p < 0.0001). The optimal cut-off score was >47, with a sensitivity of 0.720 (95% CI = 0.506–0.879) and a specificity of 0.742 (95% CI = 0.643–0.826).
- Conclusions: The FAS-C is a reliable and valid measure for assessing the affective quality of the relationship between Chinese caregivers and their relatives with dementia. It also has acceptable predictability in identifying family caregivers with severe depressive symptoms.
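The abstract does not state the criterion used to select the >47 cut-off; a common default is Youden's J (sensitivity + specificity - 1), sketched here with hypothetical names:

```python
import numpy as np

def optimal_cutoff(scores, labels):
    """Scan candidate cut-offs and return the one maximising
    Youden's J = sensitivity + specificity - 1, where a case is
    classified positive when its score exceeds the cut-off."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    best = (None, -1.0, 0.0, 0.0)
    for c in np.unique(scores):
        pred = scores > c  # ">47"-style rule: positive above cut-off
        sens = (pred & labels).sum() / labels.sum()
        spec = (~pred & ~labels).sum() / (~labels).sum()
        j = sens + spec - 1
        if j > best[1]:
            best = (c, j, sens, spec)
    return best  # (cut-off, J, sensitivity, specificity)
```

The reported sensitivity (0.720) and specificity (0.742) are the coordinates of the chosen point on the ROC curve, whatever selection criterion the authors actually applied.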
Abstract:
One of the objectives of general-purpose financial reporting is to provide information about the financial position, financial performance and cash flows of an entity that is useful to a wide range of users in making economic decisions. The current focus on the potentially increased relevance of fair value accounting, weighed against issues of reliability, has failed to consider the potential impact on the predictive ability of accounting. Based on a sample of international (non-U.S.) banks from 24 countries during 2009-2012, we test the usefulness of fair values in improving the predictive ability of earnings. First, we find that the increasing use of fair values for balance-sheet financial instruments enhances the ability of current earnings to predict future earnings and cash flows. Second, we provide evidence that fair value hierarchy classification choices affect the ability of earnings to predict future cash flows and future earnings. More precisely, we find that the non-discretionary fair value component (Level 1 assets) improves the predictability of current earnings, whereas the discretionary fair value components (Level 2 and Level 3 assets) weaken the predictive power of earnings. Third, we find a consistent and strong association between factors reflecting country-wide institutional structures and the predictive power of fair values based on discretionary measurement inputs (Level 2 and Level 3 assets and liabilities). Our study is timely and relevant. The findings have important implications for standard setters and contribute to the debate on the use of fair value accounting.