931 results for predictive factors


Relevance: 20.00%

Abstract:

Introduction: Injuries in the lower extremity are considered to have multifactorial causes, and heel pain is among the most frequent reasons for visits to health professionals. Managing these patients can be very difficult. The purpose of this research was to identify key variables that can influence foot health in patients with heel pain. Materials and method: A cross-sectional observational study was carried out with a sample of sixty-two participants recruited from the Educational Welfare Unit of the University of Malaga. Therapists, blinded to the study, recorded anthropometric data and the Foot Posture Index (FPI), while participants completed the Foot Health Status Questionnaire (FHSQ). The most significant results reveal a moderate relation between the clinical variables and the FHSQ domains. The strongest contribution to the FHSQ scores was that of BMI. Conclusion: The variables that can help manage clinical subjects with heel pain are age, BMI, footwear and FPI (left foot).

Relevance: 20.00%

Abstract:

The effects of life events, social support and the emotional well-being of the partner on the emotional well-being of the mother during pregnancy were examined within the cultural contexts of Britain and Greece. It was proposed that social support, the impact of life events and the relationship of the mother with her partner would be affected by the different social structures of each culture and would influence emotional well-being. A sample of 200 Greek and 156 British mothers and their partners completed questionnaires which included a life event inventory, a measure of social support and a measure of emotional well-being (the Crown-Crisp Experiential Index). Greek mothers were found to score significantly higher on measures of depression, anxiety and somatic symptoms, to experience more stressful life events (most relating to family issues) and to report feeling less supported than British mothers. Life events, particularly those relating to family stresses, were found to predict poor emotional well-being among Greek mothers. For British mothers, social support was the strongest predictor of emotional well-being. Findings were discussed in the light of differences in social structure, and it was suggested that future research might focus on the disruption of established social support structures rather than differences in the availability of social support per se when considering maternal emotional well-being.

Relevance: 20.00%

Abstract:

The effects of psychosocial factors on the emotional well-being of mothers following childbirth were examined within the cultural contexts of Britain and Greece. These mothers had already completed questionnaires during pregnancy and were contacted a second time in the postpartum period. At 4–6 weeks postpartum, a sample of 165 Greek mothers and 101 British mothers and their partners completed the Edinburgh Postnatal Depression Scale (EPDS). The relationship between mothers' EPDS scores and measures of emotional well-being in pregnancy (CCEI), social support, life events, fathers' EPDS scores, and fathers' perception of change in their partner was examined in each culture. No difference in the distribution of EPDS scores between the cultures was found. Social support and life events were found to predict postnatal depression in both cultures. Additionally, in Greece, emotional well-being in pregnancy made a separate contribution to prediction. The major difference between the two cultures was in the relationship between mothers and their partners. Greek fathers were more emotionally and physically distanced from their partners during pregnancy, birth and early parenthood and perceived their partners as being more changed by the transition to parenthood. These differences were not reflected in differences in emotional well-being, possibly because they accord with social expectations in each culture.

Relevance: 20.00%

Abstract:

Six species of line-caught coral reef fish (Plectropomus spp., Lethrinus miniatus, Lethrinus laticaudis, Lutjanus sebae, Lutjanus malabaricus and Lutjanus erythropterus) were tagged by members of the Australian National Sportsfishing Association (ANSA) in Queensland between 1986 and 2003. Of the 14,757 fish tagged, 1,607 were recaptured, and we analysed these data to describe movement and determine factors likely to impact release survival. All species were classified as residents, since over 80% of recaptures for each species occurred within 1 km of the release site. Few individuals (range 0.8-5%) were recaptured more than 20 km from their release point. L. sebae had a higher recapture rate (19.9%) than the other species studied (range 2.1-11.7%). Venting swimbladder gases, regardless of whether or not fish appeared to be suffering from barotrauma, significantly enhanced (P < 0.05) the survival of L. sebae and L. malabaricus but had no significant effect (P > 0.05) on L. erythropterus. The condition of fish on release, subjectively assessed by anglers, had a significant effect on recapture rate only for L. sebae, where fish in "fair" condition had less than half the recapture rate of those assessed as in "excellent" or "good" condition. The recapture rate of L. sebae and L. laticaudis was significantly (P < 0.05) affected by depth, with recapture rate declining in depths exceeding 30 m. Overall, the results showed that depth of capture, release condition and treatment for barotrauma influenced recapture rate for some species, but these effects were not consistent across all species studied. Recommendations were made to the ANSA tagging clubs to record additional information, such as injury, hooking location and hook type, to enable a more comprehensive future assessment of the factors influencing release survival.

Relevance: 20.00%

Abstract:

To examine healthy slaughter-age cattle and sheep on-farm for the excretion of Salmonella serovars in faeces and to identify possible risk factors using a questionnaire. The study involved 215 herds and flocks in the four eastern states of Australia, 56 with a prior history of salmonellosis. Production systems examined included pasture beef cattle, feedlot beef cattle, dairy cattle, prime lambs and mutton sheep; all animals were at slaughter age. From each herd or flock, 25 animals were sampled and the samples pooled for Salmonella culture. All Salmonella isolated were serotyped and any Salmonella Typhimurium isolates were phage typed. Questionnaires on each production system, prepared in Epi Info 6.04, were designed to identify risk factors associated with Salmonella spp excretion, with separate questionnaires designed for each production system. Salmonellae were identified in all production systems and were more commonly isolated from dairies and beef feedlots than from other systems. Statistical analysis revealed that dairy cattle were significantly more likely to shed Salmonella in faeces than pasture beef cattle, mutton sheep and prime lambs (P < 0.05). A wide diversity of Salmonella serovars, all of which have been isolated from humans in Australia, was identified in both cattle and sheep. Analysis of the questionnaires showed that access to new arrivals was a significant risk factor for Salmonella excretion on dairy properties. For beef feedlots, the presence of large numbers of flies in the feedlot pens or around stored manure was a significant risk factor for Salmonella excretion. Dairy cattle pose the highest risk of all the slaughter-age animals tested. Some of the identified risk factors can be overcome by improved management practices, especially in relation to hygiene.

Relevance: 20.00%

Abstract:

Juvenile idiopathic arthritis (JIA) is a severe childhood disease usually characterized by long-term morbidity, an unpredictable course, pain, and limitations in daily activities and social participation. The disease affects not only the child but also the whole family. The family is expected to adhere to an often very laborious regimen over a long period of time. However, the parental role is incoherently conceptualized in the research field. Pain in JIA is of somatic origin, but psychosocial factors, such as mood and self-efficacy, are critical in the perception of pain and in its impact on functioning. This study examined the factors correlating with and possibly explaining pain in JIA, with a special emphasis on the mutual relations between parent- and patient-driven variables. In this patient series, pain was not associated with disease activity. The degree of pain was on average fairly low in children with JIA. When the children were clustered according to age, anxiety and depression, four distinguishable cluster groups significantly associated with pain emerged. One of the groups was described by the concept of vulnerability because of unfavorable variable associations. Parental depressive and anxiety symptoms, accompanied by illness management, had predictive power in discriminating groups of children with varying distress levels. The parent's and child's perception of the child's functional capability, distress, and somatic self-efficacy had independent explanatory power in predicting the child's pain. Of special interest in the current study was self-efficacy, which refers to the belief of an individual that he/she has the ability to engage in the behavior required for tackling the disease. In children with JIA, strong self-efficacy was related to lower levels of pain, depressive symptoms and trait anxiety. This suggests strengthening a child's sense of self-efficacy when helping the child to cope with his or her disease.
Pain experienced by a child with JIA needs to be viewed in a multidimensional bio-psycho-social context that covers biological, environmental and cognitive behavioral mechanisms. The relations between the parent-child variables are complex and affect pain both directly and indirectly. Developing pain-treatment modalities that recognize the family as a system is also warranted.

Relevance: 20.00%

Abstract:

Seat belts are effective safety devices used to protect car occupants from severe injuries and fatalities during road vehicle accidents. Despite the proven effectiveness of seat belts, seat belt use rates are quite low, especially in developing countries, such as Turkey. The general aim of the present study was to investigate a large variety of factors related to seat belt use among Turkish car occupants using different perspectives and methods and, therefore, to contribute to the design of effective seat belt use interventions for increasing seat belt use rates in Turkey. Five sub-studies were conducted within the present study. In the first sub-study, environmental (e.g., road type) and psycho-social factors (e.g., belt use by other car occupants) related to the seat belt use of front-seat occupants were investigated using observation techniques. Being male, of a young age, and traveling on city roads were the main factors negatively related to seat belt use. Furthermore, seat belt use by the drivers and front-seat passengers was highly correlated, and each was a significant predictor of the other. In the second sub-study, the motivations of the car occupants for seat belt use and non-use were investigated using interview techniques. Situational conditions, such as traveling on city roads and for short distances, and not believing in the effectiveness and relevance of seat belt use for safety, were the most frequently reported reasons for not using a seat belt. Safety, habit and avoiding punishment were among the most frequently reported reasons for using a seat belt. In the third sub-study, the Theory of Planned Behavior (TPB) and the Health Belief Model (HBM) were applied to seat belt use using Structural Equation Modeling techniques. The TPB model showed a good fit to the data, whereas the HBM showed a poor fit to the data. Within the TPB model, attitude and subjective norm were significant predictors of intentions to use a seat belt on both urban and rural roads.
In the fourth sub-study, seat belt use frequency and motivations for seat belt use among taxi drivers were investigated and compared between free-time and work-time driving using a survey. The results showed that taxi drivers used seat belts more when driving a private car in their free time than when driving a taxi during their work time. The lack of a legal obligation to use a seat belt in city traffic and fear of being attacked or robbed by passengers were identified as two specific reasons for not using a seat belt when driving a taxi. Lastly, in the fifth sub-study, the relationship of seat belt use to driver and health behaviors was investigated using a survey. Although seat belt use was related both to health and driver behaviors, factor analysis results showed that it grouped with driver behaviors. Based on the results of the sub-studies, a tentative empirical model showing different predictors of seat belt use was proposed. According to the model, safety and normative motivations and perceived physical barriers related to seat belt use are the three important predictors of seat belt use. Keywords: Seat belt use; environmental factors; psycho-social factors; safety and normative motivations; the Theory of Planned Behavior; the Health Belief Model; health behaviors; driver behaviors; front-seat occupants; taxi drivers; Turkey.

Relevance: 20.00%

Abstract:

Fatigue and sleepiness are major causes of road traffic accidents. However, precise data are often lacking because a validated and reliable device for detecting the level of sleepiness (cf. the breathalyzer for alcohol levels) does not exist, nor do criteria for the unambiguous detection of fatigue/sleepiness as a contributing factor in accident causation. Therefore, identification of risk factors and groups might not always be easy. Furthermore, it is extremely difficult to incorporate fatigue in operationalized terms into either traffic or criminal law. The main aims of this thesis were to estimate the prevalence of fatigue problems while driving among the Finnish driving population, to explore how VALT multidisciplinary investigation teams, the Finnish police, and courts recognize (and prosecute) fatigue in traffic, to identify risk factors and groups, and finally to explore the application of the Finnish Road Traffic Act (RTA), which explicitly forbids driving while tired in Article 63. Several different sources of data were used: a computerized database and the original folders of multidisciplinary teams investigating fatal accidents (VALT), the driver records database (AKE), prosecutor and court decisions, a survey of young male military conscripts, and a survey of a representative sample of the Finnish active driving population. The results show that 8-15% of fatal accidents during 1991-2001 were fatigue related, that every fifth Finnish driver has fallen asleep while driving at some point during his/her driving career, and that the Finnish police and courts punish on average one driver per day for fatigued driving (based on the data from the years 2004-2005). The main finding regarding risk factors and risk groups is that during the summer months, especially in the afternoon, the risk of falling asleep while driving is increased.
Furthermore, the results indicate that those with a higher risk of falling asleep while driving are men in general, but especially young male drivers including military conscripts, and the elderly during the afternoon hours and the summer in particular; professional drivers breaking the rules about duty and rest hours; and drivers with a tendency to fall asleep easily. A time-of-day pattern of sleep-related incidents was repeatedly found. It was found that VALT teams can be considered relatively reliable when assessing the role of fatigue and sleepiness in accident causation; thus, similar experts might be valuable in the court process as expert witnesses when fatigue or sleepiness is suspected to have a role in an accident's origins. However, the application of Article 63 of the RTA, which forbids, among other things, fatigued driving, will continue to be an issue that deserves further attention. This should be done in the context of a needed attitude change towards driving while in a state of extreme tiredness (e.g., after being awake for more than 24 hours), which produces performance deterioration comparable to illegal intoxication (BAC around 0.1%). Given the well-known interactive effect of increased sleepiness and even small alcohol levels, the relatively high proportion (up to 14.5%) of Finnish drivers owning and using a breathalyzer raises some concern. This concern exists because these drivers are obviously more focused on not crossing the "magic" line of 0.05% BAC than on driving impairment itself, which might be much worse than they realize because of the interactive effects of increased sleepiness and even low alcohol consumption. In conclusion, there is no doubt that fatigue and sleepiness problems while driving are common among the Finnish driving population.
While we wait for the invention of reliable devices for fatigue/sleepiness detection, we should invest more effort in raising public awareness about the dangerousness of fatigued driving and educate drivers about how to recognize and deal with fatigue and sleepiness when they ultimately occur.

Relevance: 20.00%

Abstract:

According to the models conceptualizing work stress, an increased risk of health problems arises when high job demands co-occur with low job control (the demand-control model) or when the efforts invested by the employee are disproportionately high compared to the rewards received (the effort-reward imbalance model). This study examined the association between work stress and early atherosclerosis, with particular attention to the role of pre-employment risk factors and genetic background in this association. The subjects were young healthy adults aged 24-39 who were participating in the 21-year follow-up of the ongoing prospective "Cardiovascular Risk in Young Finns" study in 2001-2002. Work stress was evaluated with questionnaires on the demand-control model and on the effort-reward model. Atherosclerosis was assessed with ultrasound of carotid artery intima-media thickness (IMT). In addition, risk for an enhanced atherosclerotic process was assessed by measuring heart rate variability and heart rate. Pre-employment risk factors, measured at age 12 to 18, included body mass index, blood lipids, family history of coronary heart disease, and parental socioeconomic position. Variants of neuregulin-1 were determined using genomic DNA. The results showed that higher work stress was associated with higher IMT in men. This association was not attenuated by traditional risk factors of atherosclerosis and coronary heart disease or by pre-employment risk factors measured in adolescence. The neuregulin-1 gene moderated the association between work stress and IMT in men. A significant association between work stress and IMT was found only for the T/T genotype of the neuregulin-1 gene but not for other genotypes. Among women, an association was found between higher work stress and lower heart rate variability, suggesting a higher risk for developing atherosclerosis. These associations could not be explained by demographic characteristics or coronary risk factors.
The present findings provide evidence for an association between work stress and atherosclerosis in a relatively young population. This association seems to be modified by genetic influences, but it does not appear to be confounded by pre-employment adolescent risk factors.

Relevance: 20.00%

Abstract:

Overprocessing waste occurs in a business process when effort is spent in a way that does not add value to the customer or to the business. Previous studies have identified a recurrent overprocessing pattern in business processes with so-called "knockout checks", meaning activities that classify a case into "accepted" or "rejected", such that if the case is accepted it proceeds forward, while if rejected, it is cancelled and all work performed in the case is considered unnecessary. Thus, when a knockout check rejects a case, the effort spent in other (previous) checks becomes overprocessing waste. Traditional process redesign methods propose to order knockout checks according to their mean effort and rejection rate. This paper presents a more fine-grained approach where knockout checks are ordered at runtime based on predictive machine learning models. Experiments on two real-life processes show that this predictive approach outperforms traditional methods while incurring minimal runtime overhead.
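
The contrast between the traditional and the predictive ordering of knockout checks can be illustrated with a minimal Python sketch. This is not the paper's implementation: the check names, efforts and rejection rates are invented, and a plain dictionary of per-case probabilities stands in for the machine learning model.

```python
def order_checks_static(checks):
    """Traditional redesign heuristic: execute knockout checks in
    decreasing rejection-rate-to-effort ratio, so cheap and highly
    selective checks run first, minimising expected wasted effort."""
    return sorted(checks, key=lambda c: c["reject_rate"] / c["effort"], reverse=True)

def order_checks_runtime(checks, predicted_reject):
    """Predictive variant: replace the historical mean rejection rate
    with a per-case rejection probability estimated by a model
    (here a plain dict standing in for that model)."""
    return sorted(checks, key=lambda c: predicted_reject[c["name"]] / c["effort"], reverse=True)

# Illustrative checks (names, efforts and rates are invented):
checks = [
    {"name": "credit_check",   "effort": 10.0, "reject_rate": 0.30},
    {"name": "identity_check", "effort": 2.0,  "reject_rate": 0.10},
    {"name": "fraud_check",    "effort": 5.0,  "reject_rate": 0.40},
]
ordered = order_checks_static(checks)  # fraud, identity, credit

# For one specific case, a model may predict a very different ordering:
case_probs = {"credit_check": 0.90, "identity_check": 0.01, "fraud_check": 0.05}
ordered_for_case = order_checks_runtime(checks, case_probs)  # credit, fraud, credit-last order changes
```

The static ordering is fixed for all cases; the runtime ordering can move a normally "safe" but expensive check to the front whenever the model predicts this particular case is likely to fail it.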

Relevance: 20.00%

Abstract:

This paper addresses the following predictive business process monitoring problem: Given the execution trace of an ongoing case, and given a set of traces of historical (completed) cases, predict the most likely outcome of the ongoing case. In this context, a trace refers to a sequence of events with corresponding payloads, where a payload consists of a set of attribute-value pairs. Meanwhile, an outcome refers to a label associated with completed cases, for example, a label indicating that a given case completed "on time" (with respect to a given desired duration) or "late", or a label indicating that a given case led to a customer complaint or not. The paper tackles this problem via a two-phased approach. In the first phase, prefixes of historical cases are encoded using complex symbolic sequences and clustered. In the second phase, a classifier is built for each of the clusters. To predict the outcome of an ongoing case at runtime given its (uncompleted) trace, we select the closest cluster(s) to the trace in question and apply the respective classifier(s), taking into account the Euclidean distance of the trace from the center of the clusters. We consider two families of clustering algorithms – hierarchical clustering and k-medoids – and use random forests for classification. The approach was evaluated on four real-life datasets.
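
The two-phase scheme can be sketched in a deliberately tiny form. Everything below is illustrative: the prefix encodings, cluster centres and outcome labels are toy values, hand-picked clusters stand in for hierarchical/k-medoids clustering, and a majority vote stands in for the random forest classifier per cluster.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two encoded traces."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Phase 1: historical prefixes, already encoded as fixed-length vectors,
# grouped into clusters (hand-picked here for brevity).
clusters = {
    "fast": {"center": (1.0, 0.0), "labels": ["on_time", "on_time", "late"]},
    "slow": {"center": (5.0, 4.0), "labels": ["late", "late", "on_time"]},
}

# Phase 2: one classifier per cluster; a majority vote over the
# cluster's historical outcomes replaces the random forest.
def majority(labels):
    return max(set(labels), key=labels.count)

def predict_outcome(encoded_trace):
    """Select the closest cluster by Euclidean distance and apply
    that cluster's classifier to the ongoing case."""
    nearest = min(clusters.values(), key=lambda c: euclidean(encoded_trace, c["center"]))
    return majority(nearest["labels"])
```

A runtime query such as `predict_outcome((1.2, 0.3))` falls near the "fast" cluster and inherits its majority outcome; a trace near (5, 4) inherits the "slow" cluster's.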

Relevance: 20.00%

Abstract:

The ability to predict phenology and canopy development is critical in crop models used for simulating likely consequences of alternative crop management and cultivar choice strategies. Here we quantify and contrast the temperature and photoperiod responses for phenology and canopy development of a diverse range of elite Indian and Australian sorghum genotypes (hybrid and landrace). Detailed field experiments were undertaken in Australia and India using a range of genotypes, sowing dates, and photoperiod extension treatments. Measurements of timing of developmental stages and leaf appearance were taken. The generality of photo-thermal approaches to modelling phenological and canopy development was tested. Environmental and genotypic effects on rate of progression from emergence to floral initiation (E-FI) were explained well using a multiplicative model, which combined the intrinsic development rate (Ropt), with responses to temperature and photoperiod. Differences in Ropt and extent of the photoperiod response explained most genotypic effects. Average leaf initiation rate (LIR), leaf appearance rate and duration of the phase from anthesis to physiological maturity differed among genotypes. The association of total leaf number (TLN) with photoperiod found for all genotypes could not be fully explained by effects on development and LIRs. While a putative effect of photoperiod on LIR would explain the observations, other possible confounding factors, such as air-soil temperature differential and the nature of model structure were considered and discussed. This study found a generally robust predictive capacity of photo-thermal development models across diverse ranges of both genotypes and environments. Hence, they remain the most appropriate models for simulation analysis of genotype-by-management scenarios in environments varying broadly in temperature and photoperiod.
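
The multiplicative structure described above (an intrinsic rate Ropt combined with temperature and photoperiod responses) can be sketched as follows. This is an illustrative sketch only, not the study's fitted model: the cardinal temperatures, critical photoperiod, sensitivity and Ropt are invented placeholder values.

```python
def temp_response(t, t_base=11.0, t_opt=30.0, t_max=42.0):
    """Broken-stick temperature response scaled to [0, 1]:
    zero at the base and maximum temperatures, one at the optimum."""
    if t <= t_base or t >= t_max:
        return 0.0
    if t <= t_opt:
        return (t - t_base) / (t_opt - t_base)
    return (t_max - t) / (t_max - t_opt)

def photoperiod_response(p, p_crit=13.5, sensitivity=0.2):
    """Sorghum is a short-day plant: development rate declines
    linearly for photoperiods beyond the critical value."""
    return max(0.0, 1.0 - sensitivity * max(0.0, p - p_crit))

def development_rate(t, p, r_opt=0.05):
    """Fraction of the emergence-to-floral-initiation (E-FI) phase
    completed per day: Ropt multiplied by both responses."""
    return r_opt * temp_response(t) * photoperiod_response(p)
```

At the optimum temperature and a short photoperiod the rate equals Ropt; longer days or sub-optimal temperatures scale it down multiplicatively, which is how genotypic differences in Ropt and photoperiod sensitivity translate into differences in time to floral initiation.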

Relevance: 20.00%

Abstract:

OBJECTIVE This study determined if deficits in corneal nerve fiber length (CNFL) assessed using corneal confocal microscopy (CCM) can predict future onset of diabetic peripheral neuropathy (DPN). RESEARCH DESIGN AND METHODS CNFL and a range of other baseline measures were compared between 90 nonneuropathic patients with type 1 diabetes who did or did not develop DPN after 4 years. The receiver operator characteristic (ROC) curve was used to determine the capability of single and combined measures of neuropathy to predict DPN. RESULTS DPN developed in 16 participants (18%) after 4 years. Factors predictive of 4-year incident DPN were lower CNFL (P = 0.041); longer duration of diabetes (P = 0.002); higher triglycerides (P = 0.023); retinopathy (higher on the Early Treatment of Diabetic Retinopathy Study scale) (P = 0.008); nephropathy (higher albumin-to-creatinine ratio) (P = 0.001); higher neuropathy disability score (P = 0.037); lower cold sensation (P = 0.001) and cold pain (P = 0.027) thresholds; higher warm sensation (P = 0.008), warm pain (P = 0.024), and vibration (P = 0.003) thresholds; impaired monofilament response (P = 0.003); and slower peroneal (P = 0.013) and sural (P = 0.002) nerve conduction velocity. CCM could predict the 4-year incident DPN with 63% sensitivity and 74% specificity for a CNFL threshold cutoff of 14.1 mm/mm2 (area under ROC curve = 0.66, P = 0.041). Combining neuropathy measures did not improve predictive capability. CONCLUSIONS DPN can be predicted by various demographic, metabolic, and conventional neuropathy measures. The ability of CCM to predict DPN broadens the already impressive diagnostic capabilities of this novel ophthalmic marker.
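
To illustrate how a single-cutoff predictor like the reported CNFL threshold is evaluated, the sketch below computes sensitivity and specificity on a synthetic cohort. The patient values and outcome labels are invented, so the resulting figures will not match the study's 63%/74%.

```python
def classify(cnfl, cutoff=14.1):
    """Predict incident DPN when CNFL falls below the cutoff (mm/mm^2)."""
    return cnfl < cutoff

def sens_spec(values, outcomes, cutoff=14.1):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for v, o in zip(values, outcomes) if classify(v, cutoff) and o)
    fn = sum(1 for v, o in zip(values, outcomes) if not classify(v, cutoff) and o)
    tn = sum(1 for v, o in zip(values, outcomes) if not classify(v, cutoff) and not o)
    fp = sum(1 for v, o in zip(values, outcomes) if classify(v, cutoff) and not o)
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic cohort: True = developed DPN within 4 years (invented data).
cnfl_values = [10.2, 15.0, 13.5, 18.3, 12.0, 16.7]
dpn = [True, True, False, False, True, False]
sensitivity, specificity = sens_spec(cnfl_values, dpn)
```

Sweeping the cutoff over its range and plotting sensitivity against (1 - specificity) would yield the ROC curve whose area the study reports as 0.66.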

Relevance: 20.00%

Abstract:

Deficiencies in sardine post-harvest handling methods were seen as major impediments to the development of a value-adding sector supplying Australian bait and human consumption markets. Factors affecting sardine deterioration rates in the immediate post-harvest period were investigated, and recommendations were made for alternative handling procedures to optimise sardine quality. Net-to-factory sampling showed that post-mortem autolysis, probably caused by digestive enzyme activity, contributed to the observed temporal increase in sardine Quality Index. Belly burst was not an issue. Sardine quality could be maintained by reducing tank loading and by rapid temperature reduction using dedicated, on-board value-adding tanks. Fish should be iced between the jetty and the processing factory, and transport bins chilled using an efficient cooling medium such as flow ice.

Relevance: 20.00%

Abstract:

Premature birth and associated small body size are known to affect health over the life course. Moreover, compelling evidence suggests that birth size, throughout its whole range of variation, is inversely associated with risk for cardiovascular disease and type 2 diabetes in subsequent life. To explain these findings, the Developmental Origins of Health and Disease (DOHaD) model has been introduced. Within this framework, restricted physical growth is, to a large extent, considered either a product of harmful environmental influences, such as suboptimal nutrition and alterations in the foetal hormonal milieu, or an adaptive reaction to the environment. Whether inverse associations exist between body size at birth and psychological vulnerability factors for mental disorders is poorly known. Thus, the aim of this thesis was to study in three large prospective cohorts whether prenatal and postnatal physical growth, across the whole range of variation, is associated with subsequent temperament/personality traits and psychological symptoms that are considered vulnerability factors for mental disorders. Weight and length at birth in full-term infants showed quadratic associations with the temperamental trait of harm avoidance (Study I). The highest scores were characteristic of the smallest individuals, followed by the heaviest/longest. Linear associations between birth size and psychological outcomes were found such that lower weight and thinness at birth predicted more pronounced trait anxiety in late adulthood (Study II); lower birth weight, placental size, and head circumference at 12 months predicted a more pronounced positive schizotypal trait in women (Study III); and thinness and smaller head circumference at birth were associated with symptoms of attention-deficit hyperactivity disorder (ADHD) in children who were born at term (Study IV). These associations occurred across the whole variation in birth size and after adjusting for several confounders.
With respect to growth after birth, individuals with high trait anxiety scores in late adulthood were lighter in weight and thinner in infancy, and gained weight more rapidly between 7 and 11 years of age, but weighed less and were shorter in late adulthood in relation to weight and height measured at 11 years of age (Study II). These results suggest that a suboptimal prenatal environment reflected in smaller birth size may affect a variety of psychological vulnerability factors for mental disorders, such as the temperamental trait of harm avoidance, trait anxiety, schizotypal traits, and symptoms of ADHD. The smaller the birth size across the whole range of variation, the more pronounced were these psychological vulnerability factors. Moreover, some of these outcomes, such as trait anxiety, were also predicted by patterns of growth after birth. The findings are concordant with the DOHaD model, and emphasise the importance of prenatal factors in the aetiology of not only mental disorders but also their psychological vulnerability factors.