923 results for mathematical equation correction approach
Abstract:
The main problem of cone-beam computed tomography (CT) systems for industrial applications employing 450 kV X-ray tubes is the large amount of scattered radiation that is added to the primary radiation (signal). This stray radiation leads to a significant degradation of image quality. A better understanding of the scattering, and methods to reduce its effects, are therefore necessary to improve image quality. Several studies have been carried out in the medical field at lower energies, whereas studies in industrial CT, especially for energies up to 450 kV, are lacking. Moreover, the studies reported in the literature do not consider the scattered radiation generated by the CT system structure and the walls of the X-ray room (environmental scatter). In order to investigate the scattering in CT projections, a GEANT4-based Monte Carlo (MC) model was developed. The model, which has been validated against experimental data, has enabled the calculation of the scattering including the environmental scatter, the optimization of an anti-scatter grid suitable for the CT system, and the optimization of the hardware components of the CT system. The investigation of multiple scattering in the CT projections showed that its contribution is 2.3 times that of the primary radiation for certain objects. The results on environmental scatter showed that it is the major component of the scattering for aluminum box objects with a front size of 70 × 70 mm² and that it depends strongly on the thickness of the object and therefore on the projection. For that reason, its correction is one of the key factors for achieving high-quality images. The anti-scatter grid optimized by means of the developed MC model was found to reduce the scatter-to-primary ratio in the reconstructed images by 20%. The object and environmental scatter calculated by means of the simulation were used to improve the scatter correction algorithm, which Empa was able to patent. The results showed that the cupping effect in the corrected image is strongly reduced. The developed CT simulation is a powerful tool for optimizing the design of the CT system and for evaluating the contribution of the scattered radiation to the image. In addition, it has provided the basis for a new scatter correction approach with which it has been possible to achieve images with the same spatial resolution as state-of-the-art, well-collimated fan-beam CT while reducing reconstruction time by a factor of 10. This result has a high economic impact in non-destructive testing and evaluation, and in reverse engineering.
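A minimal sketch of the kind of projection-domain correction the abstract describes: subtract Monte Carlo estimates of object and environmental scatter from the measured projection before reconstruction. The function names and the clipping floor are illustrative assumptions, not the patented Empa algorithm.

```python
import numpy as np

def correct_projection(measured, object_scatter, env_scatter):
    """Subtract simulated object and environmental scatter from a measured
    projection; clip to a small positive floor so the subsequent log
    transform used for reconstruction stays defined."""
    primary = measured - object_scatter - env_scatter
    return np.clip(primary, 1e-6, None)

def scatter_to_primary_ratio(scatter, primary):
    """The SPR figure of merit quoted above (reduced by 20% with the
    optimized anti-scatter grid)."""
    return scatter.sum() / primary.sum()
```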
Abstract:
Background. One of the phenomena observed in human aging is the progressive increase of a systemic inflammatory state, a condition referred to as “inflammaging”, negatively correlated with longevity. A prominent mediator of inflammation is the transcription factor NF-kB, which acts as a key transcriptional regulator of many genes coding for pro-inflammatory cytokines. Many different signaling pathways activated by very diverse stimuli converge on NF-kB, resulting in a regulatory network characterized by high complexity. NF-kB signaling has been proposed to be responsible for inflammaging. The scope of this analysis is to provide a wider, systemic picture of this intricate signaling and interaction network: the NF-kB pathway interactome. Methods. The study was carried out following a workflow for gathering information from the literature as well as from several pathway and protein-interaction databases, and for integrating and analyzing the existing data and the reconstructed representations using the available computational tools. Substantial manual intervention was necessary to integrate data from multiple sources into mathematically analyzable networks. The reconstruction of the NF-kB interactome pursued with this approach provides a starting point for a general view of the architecture and for a deeper analysis and understanding of this complex regulatory system. Results. A “core” and a “wider” NF-kB pathway interactome, consisting of 140 and 3146 proteins respectively, were reconstructed and analyzed through a mathematical, graph-theoretical approach. Among other interesting features, the topological characterization of the interactomes shows that a relevant number of interacting proteins are in turn products of genes whose expression is controlled and regulated precisely by NF-kB transcription factors. These “feedback loops”, not all of which are well known, deserve deeper investigation, since they may have a role in tuning the response and output following NF-kB pathway initiation, in regulating the intensity of the response, or in maintaining its homeostasis and balance so as to make the functioning of this critical system more robust and reliable. This integrated view sheds light on the functional structure and on some of the crucial nodes of the NF-kB transcription factor interactome. Conclusion. Framing the structure and dynamics of the NF-kB interactome in a wider, systemic picture would be a significant step toward a better understanding of how NF-kB globally regulates diverse gene programs and phenotypes. This study represents a step towards a more complete and integrated view of the NF-kB signaling system.
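A toy illustration, on assumed data, of the feedback-loop detection the topological analysis describes: interactors of the NF-kB subunits whose genes are themselves NF-kB transcriptional targets. The miniature interaction list is hypothetical; NFKBIA (IκBα) and TNFAIP3 (A20), however, are classic NF-kB-induced negative regulators.

```python
import networkx as nx

# Hypothetical miniature of the "core" interactome: undirected
# protein-protein interactions plus a set of NF-kB target genes.
ppi = nx.Graph([("NFKB1", "IKBKB"), ("NFKB1", "NFKBIA"),
                ("NFKB1", "TNFAIP3"), ("RELA", "NFKBIA")])
nfkb_targets = {"NFKBIA", "TNFAIP3"}  # genes transcribed by NF-kB

# Feedback loops in the sense used above: interactors of the NF-kB
# subunits whose genes are themselves NF-kB-regulated.
interactors = set(ppi.neighbors("NFKB1")) | set(ppi.neighbors("RELA"))
feedback = interactors & nfkb_targets
print(sorted(feedback))  # ['NFKBIA', 'TNFAIP3']
```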
Abstract:
Surface tension induced convection in a liquid bridge held between two parallel, coaxial, solid disks is considered. The surface tension gradient is produced by a small temperature gradient parallel to the undisturbed surface. The study is performed using a regular perturbation approach based on a small parameter, ε, which measures the deviation of the imposed temperature field from its mean value. The first-order velocity field is given by a Stokes-type problem (viscous terms are dominant) with relatively simple boundary conditions. The first-order temperature field is that imposed from the end disks on a liquid bridge immersed in a non-conductive fluid. Radiative effects are assumed to be negligible. The second-order temperature field, which accounts for convective effects, is split into three components: one due to the bulk motion, and the other two due to the distortion of the free surface. The relative importance of these components in terms of the heat transfer to or from the end disks is assessed.
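In LaTeX, the expansion structure the abstract implies (my reconstruction from the description, not the paper's notation): fields are expanded in powers of ε, the first-order velocity solves a Stokes problem, and convection first enters the temperature balance at second order.

```latex
\begin{align}
  T &= T_0 + \varepsilon\, T_1 + \varepsilon^2\, T_2 + O(\varepsilon^3), &
  \mathbf{v} &= \varepsilon\, \mathbf{v}_1 + \varepsilon^2\, \mathbf{v}_2 + O(\varepsilon^3),\\
  O(\varepsilon)&:\; 0 = -\nabla p_1 + \mu \nabla^2 \mathbf{v}_1, &
  O(\varepsilon^2)&:\; \mathbf{v}_1 \cdot \nabla T_1 = \kappa \nabla^2 T_2 .
\end{align}
```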
Abstract:
Background: There are 600,000 new malaria cases daily worldwide. The gold standard for estimating the parasite burden and the corresponding severity of the disease consists of manually counting the number of parasites in blood smears through a microscope, a process that can take more than 20 minutes of an expert microscopist’s time. Objective: This research tests the feasibility of a crowdsourced approach to malaria image analysis. In particular, we investigated whether anonymous volunteers with no prior experience would be able to count malaria parasites in digitized images of thick blood smears by playing a Web-based game. Methods: The experimental system consisted of a Web-based game in which online volunteers were tasked with detecting parasites in digitized blood sample images, coupled with a decision algorithm that combined the analyses from several players to produce an improved collective detection outcome. Data were collected through the MalariaSpot website. Random images of thick blood films containing Plasmodium falciparum at medium to low parasitemias, acquired by conventional optical microscopy, were presented to players. In the game, players had to find and tag as many parasites as possible in 1 minute. If players found all the parasites present in the image, they were presented with a new image. To combine the choices of different players into a single crowd decision, we implemented an image processing pipeline and a quorum algorithm that judged a parasite to be tagged when a group of players agreed on its position. Results: Over 1 month, anonymous players from 95 countries played more than 12,000 games and generated a database of more than 270,000 clicks on the test images. Results revealed that combining 22 games from nonexpert players achieved a parasite counting accuracy higher than 99%. This performance could also be obtained by combining 13 games from players trained for 1 minute. Exhaustive computations measured the parasite counting accuracy for all players as a function of the number of games considered and the experience of the players. In addition, we propose a mathematical equation that accurately models the collective parasite counting performance. Conclusions: This research validates the online gaming approach for crowdsourced counting of malaria parasites in images of thick blood films. The findings support the conclusion that nonexperts are able to rapidly learn how to identify the typical features of malaria parasites in digitized thick blood samples, and that combining the analyses of several users provides parasite counting accuracy comparable to that of expert microscopists. This experiment illustrates the potential of the crowdsourced gaming approach for performing routine malaria parasite quantification, and more generally for solving biomedical image analysis problems, with future potential for telediagnosis related to global health challenges.
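A sketch of a quorum rule of the kind described in Methods, assuming clicks are pooled as (x, y) pixel coordinates across games; the radius and quorum threshold are illustrative parameters, not the values used by MalariaSpot.

```python
import numpy as np
from scipy.spatial import cKDTree

def quorum_detections(clicks, radius=15.0, quorum=5):
    """Cluster pooled player clicks and accept a parasite detection when
    at least `quorum` players tagged positions within `radius` pixels."""
    clicks = np.asarray(clicks, dtype=float)
    tree = cKDTree(clicks)
    neighborhoods = tree.query_ball_point(clicks, r=radius)
    detections, used = [], set()
    for members in sorted(neighborhoods, key=len, reverse=True):
        members = [m for m in members if m not in used]
        if len(members) >= quorum:
            detections.append(clicks[members].mean(axis=0))
            used.update(members)
    return detections

# Example: 6 players agree near (100, 120); one stray click is rejected.
clicks = [(100, 120), (102, 119), (99, 121), (101, 122), (98, 118),
          (103, 120), (400, 300)]
print(quorum_detections(clicks, quorum=5))
```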
Abstract:
We provide a derivation of a more accurate version of the stochastic Gross-Pitaevskii equation, as introduced by Gardiner et al (2002 J. Phys. B: At. Mol. Opt. Phys. 35 1555). This derivation does not rely on the concept of local energy and momentum conservation, and is based on a quasiclassical Wigner function representation of a 'high temperature' master equation for a Bose gas, which includes only modes below an energy cutoff E_R that are sufficiently highly occupied (the condensate band). The modes above this cutoff (the non-condensate band) are treated as being essentially thermalized. The interaction between these two bands, through growth and scattering processes, provides noise and damping terms in the equation of motion for the condensate band, which we call the stochastic Gross-Pitaevskii equation. This approach is distinguished by the control of the approximations made in its derivation and by the feasibility of its numerical implementation.
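For orientation, the simple-growth form of the stochastic GPE commonly quoted in this literature (the full equation derived in the paper also carries scattering terms); P_C projects onto the condensate band below E_R, and dW is a complex Gaussian noise:

```latex
\begin{align}
  d\psi &= \mathcal{P}_{\mathcal{C}}\!\left\{
      -\frac{i}{\hbar}\,\mathcal{L}_{\mathrm{GP}}\,\psi\,dt
      + \frac{\gamma}{k_B T}\left(\mu - \mathcal{L}_{\mathrm{GP}}\right)\psi\,dt
      + dW_\gamma(\mathbf{x},t)\right\},\\
  \mathcal{L}_{\mathrm{GP}}\,\psi &=
      \left(-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{x}) + g\,|\psi|^2\right)\psi .
\end{align}
```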
Abstract:
Recent intervention efforts in promoting positive identity in troubled adolescents have begun to draw on the potential for an integration of the self-construction and self-discovery perspectives in conceptualizing identity processes, as well as the integration of quantitative and qualitative data analytic strategies. This study reports an investigation of the Changing Lives Program (CLP), using an Outcome Mediation (OM) evaluation model, an integrated model for evaluating targets of intervention, that theoretically incorporates a Self-Transformative Model of Identity Development (STM), a proposed integration of self-discovery and self-construction identity processes. This study also used a Relational Data Analysis (RDA) integration of quantitative and qualitative analysis strategies and a structural equation modeling (SEM) approach to construct and evaluate the hypothesized OM/STM model. The CLP is a community-supported positive youth development intervention targeting multi-problem youth in alternative high schools in the Miami-Dade County Public Schools (M-DCPS). The 259 participants for this study were drawn from the CLP’s archival data file. The model evaluated in this study utilized three indices of core identity processes: (1) personal expressiveness, (2) identity conflict resolution, and (3) informational identity style, conceptualized as mediators of the effects of participation in the CLP on change in two qualitative outcome indices of participants’ sense of self and identity. Findings indicated that the model fit the data (χ²(10) = 3.638, p = .96; RMSEA = .00; CFI = 1.00; WRMR = .299). The pattern of findings supported the utilization of the STM in conceptualizing identity processes and provided support for the OM design. The findings also suggested the need for methods capable of detecting and rendering unique sample-specific free response data, to increase the likelihood of identifying emergent core developmental research concepts and constructs in studies of intervention/developmental change over time, in ways not possible using fixed response methods alone.
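As a quick consistency check on the reported fit, RMSEA can be recomputed from the chi-square, its degrees of freedom, and the sample size using the standard point-estimate formula (a generic formula, not tied to this study's software):

```python
import math

def rmsea(chi2, df, n):
    """Standard RMSEA point estimate from a model chi-square, its
    degrees of freedom, and the sample size."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(rmsea(3.638, 10, 259))  # 0.0, matching the reported RMSEA = .00
```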
Abstract:
Genes, which encode the biological functions of living organisms, form the basic molecular unit of heredity. To explain the diversity of species observable today, it is essential to understand how genes evolve. To do so, one must reconstruct the past by inferring their phylogeny, that is, a gene tree representing the kinship relations of the coding regions of living organisms. Classical phylogenetic inference methods were developed mainly for building species trees and rely only on DNA sequences. Genes, however, are rich in information, and reconstruction methods that exploit their specific properties are only beginning to appear. In particular, the history of a gene family in terms of duplications and losses, obtained by reconciling a gene tree with a species tree, can help us detect weaknesses in a tree and improve it. In this thesis, reconciliation is applied to the construction and correction of gene trees from three different angles: 1) We address the problem of resolving a non-binary gene tree. In particular, we present a linear-time algorithm that resolves a polytomy based on reconciliation. 2) We propose a new approach to gene tree correction based on orthology and paralogy relations. Polynomial-time algorithms are presented for the following problems: correcting a gene tree so that it contains a given set of orthologs, and validating a set of partial orthology and paralogy relations. 3) We show how reconciliation can be used to "combine" several gene trees. More precisely, we study the problem of choosing a gene supertree according to its reconciliation cost.
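A minimal sketch of the LCA reconciliation on which all three contributions rest: each internal gene-tree node is mapped to the lowest common ancestor of its descendants' species, and a node is a duplication when its mapping coincides with that of one of its children. The trees, species names, and tuple representation below are hypothetical simplifications.

```python
# Hypothetical species tree as a child -> parent map.
SPECIES_PARENT = {"human": "primate", "chimp": "primate",
                  "primate": "root", "mouse": "root"}

def ancestors(s):
    path = [s]
    while s in SPECIES_PARENT:
        s = SPECIES_PARENT[s]
        path.append(s)
    return path

def lca(a, b):
    anc = set(ancestors(a))
    for x in ancestors(b):
        if x in anc:
            return x

# Gene tree as nested tuples; leaves carry the species they come from.
gene_tree = (("human", "chimp"), ("human", "mouse"))

def reconcile(node):
    """Return (species mapping, duplication events) for a gene tree."""
    if isinstance(node, str):
        return node, []
    (m_left, ev_left), (m_right, ev_right) = map(reconcile, node)
    m = lca(m_left, m_right)
    events = ev_left + ev_right
    if m in (m_left, m_right):   # LCA coincides with a child: duplication
        events.append((node, m))
    return m, events

print(reconcile(gene_tree))  # the root is inferred as a duplication
```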
Abstract:
Phase change problems arise in many practical applications such as air-conditioning and refrigeration, thermal energy storage systems, and thermal management of electronic devices. The physical phenomena in such applications are complex and are often difficult to study in detail with experimental techniques alone. Efforts to improve computational techniques for analyzing two-phase flow problems with phase change are therefore gaining momentum. The development of numerical methods for multiphase flow has generally been motivated by the need to account more accurately for (a) large topological changes such as phase breakup and merging, (b) sharp representation of the interface and its discontinuous properties, and (c) accurate and mass-conserving motion of the interface. In addition to these considerations, numerical simulation of multiphase flow with phase change introduces further challenges related to discontinuities in the velocity and temperature fields. Moreover, the velocity field is no longer divergence free. For phase change problems, the focus of developmental efforts has thus been on numerically attaining proper conservation of energy across the interface, in addition to the accurate treatment of mass and momentum fluxes and the associated interface advection. Among the initial efforts related to the simulation of bubble growth in film boiling applications, the work in \cite{Welch1995} was based on an interface tracking method using a moving unstructured mesh. That study considered moderate interfacial deformations. A similar problem was subsequently studied using moving, boundary-fitted grids \cite{Son1997}, again for regimes of relatively small topological changes. A hybrid interface tracking method with a moving interface grid overlapping a static Eulerian grid was developed \cite{Juric1998} for the computation of a range of phase change problems including three-dimensional film boiling \cite{esmaeeli2004computations}, multimode two-dimensional pool boiling \cite{Esmaeeli2004}, and film boiling on horizontal cylinders \cite{Esmaeeli2004a}. The handling of interface merging and pinch-off, however, remains a challenge for methods that explicitly track the interface. As large topological changes are crucial for phase change problems, attention has turned in recent years to front capturing methods utilizing implicit interfaces, which are more effective in treating complex interface deformations. The VOF (Volume of Fluid) method was adopted in \cite{Welch2000} to simulate the one-dimensional Stefan problem and the two-dimensional film boiling problem. The approach employed a specific model for mass transfer across the interface involving a mass source term within cells containing the interface. This VOF-based approach was further coupled with the level set method in \cite{Son1998}, employing a smeared-out Heaviside function to avoid the numerical instability related to the source term. A coupled level set and volume of fluid method with a diffused interface approach was used for film boiling with water and R134a at near-critical pressure conditions \cite{Tomar2005}. The effects of superheat and saturation pressure on the frequency of bubble formation were analyzed with this approach. The work in \cite{Gibou2007} used the ghost fluid and level set methods for phase change simulations.
A similar approach was adopted in \cite{Son2008} to study various boiling problems including three-dimensional film boiling on a horizontal cylinder, nucleate boiling in a microcavity \cite{lee2010numerical}, and flow boiling in a finned microchannel \cite{lee2012direct}. The work in \cite{tanguy2007level} also used the ghost fluid method and proposed an improved algorithm based on enforcing continuity and a divergence-free condition for the extended velocity field. The work in \cite{sato2013sharp} employed a multiphase model based on volume fraction with an interface sharpening scheme and derived a phase change model based on the local interface area and mass flux. Among the front capturing methods, sharp interface methods have been found to be particularly effective both for implementing sharp jumps and for resolving the interfacial velocity field. However, sharp velocity jumps render the solution susceptible to erroneous oscillations in pressure and also lead to spurious interface velocities. To implement phase change, the work in \cite{Hardt2008} employed point mass source terms derived from a physical basis for the evaporating mass flux. To avoid numerical instability, the authors smeared the mass source by solving a pseudo-time-step diffusion equation. This measure, however, led to mass conservation issues due to non-symmetric integration over the distributed mass source region. The problem of spurious pressure oscillations related to point mass sources was also investigated by \cite{Schlottke2008}. Although their method is based on VOF, the large pressure peaks associated with a sharp mass source were observed to be similar to those of the interface tracking method. Such spurious pressure fluctuations are particularly undesirable because their effect is globally transmitted in incompressible flow. Hence, the pressure field arising from phase change needs to be computed with greater accuracy than is reported in the current literature. The accuracy of interface advection in the presence of an interfacial mass flux (mass flux conservation) has been discussed in \cite{tanguy2007level,tanguy2014benchmarks}. The authors found that the method of extending one phase's velocity to the entire domain, suggested by Nguyen et al. in \cite{nguyen2001boundary}, suffers from a lack of mass flux conservation when the density difference is high. To improve the solution, the authors impose a divergence-free condition on the extended velocity field by solving a constant-coefficient Poisson equation. The approach has shown good results for enclosed bubbles or droplets, but it is not general for more complex flows and requires the additional solution of a linear system of equations. In the current thesis, an improved approach that addresses both the numerical oscillation of pressure and the spurious interface velocity field is presented, featuring (i) continuous velocity and density fields within a thin interfacial region and (ii) temporal velocity correction steps to avoid an unphysical pressure source term. I also propose a general (iii) mass flux projection correction for improved mass flux conservation. The pressure and temperature-gradient jump conditions are treated sharply. A series of one-dimensional and two-dimensional problems are solved to verify the performance of the new algorithm. Two-dimensional and cylindrical film boiling problems are also demonstrated and show good qualitative agreement with experimental observations and heat transfer correlations.
Finally, a study of Taylor bubble flow with heat transfer and phase change in a small vertical tube in axisymmetric coordinates is carried out using the new multiphase phase-change method.
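As a concrete anchor for the verification cases mentioned above, here is a minimal explicit finite-difference sketch of the one-dimensional Stefan problem (the benchmark attributed to \cite{Welch2000}): a vapor layer next to a superheated wall conducts heat to the front, whose speed follows the Stefan condition rho * h_fg * dX/dt = -k * dT/dx at the interface. All property values and discretization choices are illustrative assumptions.

```python
import numpy as np

alpha, k, rho, h_fg = 1e-5, 0.025, 0.6, 2.26e6   # illustrative vapor properties
T_wall, T_sat = 10.0, 0.0                        # 10 K wall superheat
nx, L = 200, 0.01
dx = L / nx
dt = 0.4 * dx**2 / alpha                         # explicit stability limit
T = np.full(nx, T_sat)
X = 2 * dx                                       # initial front position

for _ in range(20000):
    i_f = int(X / dx)                            # cell containing the front
    T[0] = T_wall                                # Dirichlet wall condition
    T[i_f:] = T_sat                              # liquid held at saturation
    # explicit conduction update in the vapor layer behind the front
    T[1:i_f] += alpha * dt / dx**2 * (T[2:i_f+1] - 2*T[1:i_f] + T[0:i_f-1])
    grad = (T_sat - T[i_f-1]) / dx               # one-sided gradient at front
    X += dt * (-k * grad) / (rho * h_fg)         # Stefan condition

print(f"front position after simulation: {X*1000:.3f} mm")
```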
Abstract:
Structured abstract Purpose: To deepen understanding, in the grocery retail context, of the roles of consumer perceived value and consumer satisfaction as antecedent dimensions of customer loyalty intentions. Design/Methodology/Approach: Employing a short version (12 items) of the original 19-item PERVAL scale of Sweeney & Soutar (2001), a structural equation modeling approach was applied to investigate the statistical properties of the indirect influence on loyalty of a reflective second-order customer perceived value model. The performance of three alternative estimation methods was compared through bootstrapping techniques. Findings: Results provided i) support for the use of the short form of the PERVAL scale in measuring consumer perceived value; ii) evidence that the influence of the four highly correlated independent latent predictors on satisfaction was well summarized by a higher-order reflective specification of consumer perceived value; iii) evidence that the emotional and functional dimensions were determinants of the relationship with the retailer; iv) evidence that parameter bias with the three estimation methods was only significant for small bootstrap sample sizes. Research limitations/implications: Future research is needed to explore the use of the short form of the PERVAL scale in more homogeneous groups of consumers. Originality/value: Firstly, a recent short form of the PERVAL scale and a second-order reflective conceptualization of value were adopted to indirectly explain customer loyalty as mediated by customer satisfaction. Secondly, three alternative estimation methods were used and compared through bootstrapping and simulation procedures.
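A generic sketch of the bootstrap bias comparison the Findings refer to, under assumed data (the study's actual estimators are SEM-based; the sample and the use of the mean below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_bias(estimator, sample, n_boot=2000):
    """Bootstrap bias estimate: mean of the resampled estimates minus
    the full-sample estimate."""
    theta_hat = estimator(sample)
    resampled = [estimator(rng.choice(sample, size=len(sample), replace=True))
                 for _ in range(n_boot)]
    return float(np.mean(resampled) - theta_hat)

scores = rng.normal(loc=5.0, scale=2.0, size=50)  # hypothetical item scores
print(bootstrap_bias(np.mean, scores))            # close to 0 for the mean
```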
Abstract:
In vitro evaluation of cardiovascular device performance in a mock circulation loop (MCL) is a necessary step prior to in vivo testing. An MCL that accurately represents the physiology of the cardiovascular system accelerates the assessment of a device’s ability to treat pathological conditions. To serve this purpose, a compact MCL measuring 600 × 600 × 600 mm (L × W × H) was constructed in conjunction with a computer-based mathematical simulation. This approach allowed the effective selection of physical loop characteristics, such as pneumatic drive parameters to create pressure and flow, and pipe dimensions to replicate the resistance, compliance, and fluid inertia of the native cardiovascular system. The resulting five-element MCL reproduced the physiological hemodynamics of a healthy and a failing heart by altering ventricular contractility, vascular resistance/compliance, heart rate, and vascular volume. The effects of inter-patient anatomical variability, such as septal defects and valvular disease, were also assessed. Cardiovascular hemodynamic pressures (arterial, venous, atrial, ventricular), flows (systemic, bronchial, pulmonary), and volumes (ventricular, stroke) were analyzed in real time. The objective of this study is to describe the developmental stages of the compact MCL and demonstrate its value as a research tool for the accelerated development of cardiovascular devices.
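A sketch of the kind of lumped-parameter simulation used to size such a loop, under assumed parameter values: a half-sine ventricular outflow drives a resistance-compliance-inertance (Windkessel-type) network. The element values and waveform are hypothetical, not those of the described MCL.

```python
import numpy as np
from scipy.integrate import solve_ivp

R, C, L_in = 1.0, 1.3, 0.02          # mmHg·s/mL, mL/mmHg, mmHg·s²/mL
T = 1.0                              # cycle length (s), i.e. 60 bpm

def q_in(t):
    tc = t % T                       # half-sine ejection over 0.3 s
    return 300 * np.sin(np.pi * tc / 0.3) if tc < 0.3 else 0.0  # mL/s

def rhs(t, y):
    p, q = y                         # arterial pressure, outflow through R-L
    dp = (q_in(t) - q) / C           # compliance charges with net inflow
    dq = (p - R * q) / L_in          # inertance delays flow through resistance
    return [dp, dq]

sol = solve_ivp(rhs, (0, 10), [80.0, 80.0], max_step=1e-3)
last = sol.y[0][-1000:]              # roughly the final beat
print(f"pressure range over last beat: {last.min():.0f}-{last.max():.0f} mmHg")
```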
Abstract:
This paper investigates the energy-saving potential of commercial buildings with living wall and green façade systems using the Envelope Thermal Transfer Value (ETTV) equation in the sub-tropical climate of Australia. The energy saving of four commercial buildings was quantified by applying living wall and green façade systems to the west-facing wall. A field experimental facility, from which temperature data of the living wall system were collected, was used to quantify wall temperatures and heat gain under controlled conditions. The experimental parameters were combined with extensive data on existing commercial buildings to quantify the energy saving. Based on temperature data of a living wall system comprising Australian native plants, the equivalent temperature of the living wall system was computed. The shading coefficient of the plants in the green façade system was then included in the mathematical equation and in the graphical analysis. To minimize the air-conditioning load of a commercial building, and therefore its heat gain, an analysis of building heat gain reduction by living wall and green façade systems was performed. Overall, the cooling energy performance of commercial buildings before and after the application of living wall and green façade systems was examined. The quantified energy saving showed that a living wall system alone on the opaque part of the west-facing wall can save 8-13% of cooling energy consumption, whereas a green façade system alone on the opaque part of the west-facing wall can save 9.5-18% of the cooling energy consumption of a commercial building. Furthermore, a green façade system on the fenestration of the west-facing wall can save 28-35% of cooling energy consumption, whereas the combination of a living wall on the opaque part and a green façade on the fenestration of the west-facing wall can save 35-40% of the cooling energy consumption of a commercial building in the sub-tropical climate of Australia.
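For reference, one widely cited form of the ETTV equation (the Singapore BCA formulation; the coefficients used in the paper's Australian adaptation may differ) shows where the vegetation enters: the green façade lowers the shading coefficient SC of the fenestration, while the living wall lowers the effective wall term.

```latex
% WWR: window-to-wall ratio; U_w, U_f: wall and fenestration U-values
% (W/m^2 K); CF: solar correction factor for orientation; SC: shading
% coefficient of the fenestration.
\begin{equation}
  \mathrm{ETTV} \;=\; 12\,(1-\mathrm{WWR})\,U_w
               \;+\; 3.4\,\mathrm{WWR}\,U_f
               \;+\; 211\,\mathrm{WWR}\cdot\mathrm{CF}\cdot\mathrm{SC}
  \quad [\mathrm{W/m^2}].
\end{equation}
```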
Abstract:
Background and significance: Older adults with chronic diseases are at increasing risk of hospital admission and readmission. Approximately 75% of adults have at least one chronic condition, and the odds of developing a chronic condition increase with age. Chronic diseases consume about 70% of the total Australian health expenditure, and about 59% of hospital events for chronic conditions are potentially preventable. These figures highlight the importance of managing chronic disease in the growing older population. Many studies have endeavoured to develop effective chronic disease management programs by applying social cognitive theory. However, few studies have focused on chronic disease self-management in older adults at high risk of hospital readmission. Moreover, although the majority of studies have covered wide and valuable outcome measures, there is scant evidence examining fundamental health outcomes such as nutritional status, functional status, and health-related quality of life. Aim: The aim of this research was to test social cognitive theory in relation to self-efficacy in managing chronic disease and three health outcomes, namely nutritional status, functional status, and health-related quality of life, in older adults at high risk of hospital readmission. Methods: A cross-sectional study design was employed for this research. Three studies were undertaken. Study One examined nutritional status and the validation of a nutritional screening tool; Study Two explored the relationships between participants' characteristics, self-efficacy beliefs, and health outcomes based on the study's hypothesized model; Study Three tested a theoretical model based on social cognitive theory, examining potential mechanisms of the mediation effects of social support and self-efficacy beliefs. One hundred and fifty-seven patients aged 65 years and older, with a medical admission and at least one risk factor for readmission, were recruited. Data were collected from medical records on demographics and medical history, and from self-report questionnaires. The nutrition data were collected by two registered nurses. For Study One, a contingency table and the kappa statistic were used to determine the validity of the Malnutrition Screening Tool. In Study Two, standard multiple regression, hierarchical multiple regression, and logistic regression were undertaken to determine the significant predictors for the three health outcome measures. For Study Three, a structural equation modelling approach was taken to test the hypothesized self-efficacy model. Results: The findings of Study One suggested that a high prevalence of malnutrition continues to be a concern in older adults, as the prevalence of malnutrition was 20.6% according to the Subjective Global Assessment. Additionally, the findings confirmed that the Malnutrition Screening Tool is a valid nutritional screening tool for hospitalized older adults at risk of readmission when compared to the Subjective Global Assessment, with high sensitivity (94%) and specificity (89%) and substantial agreement between the two methods (k = .74, p < .001; 95% CI .62-.86). Analysis of the data for Study Two found that depressive symptoms and perceived social support were the two strongest influential factors for self-efficacy in managing chronic disease in a hierarchical multiple regression.
Results of multivariable regression models suggested that advancing age, depressive symptoms, and less tangible support were three important predictors of malnutrition. In terms of functional status, a standard regression model found that social support was the strongest predictor of the Instrumental Activities of Daily Living, followed by self-efficacy in managing chronic disease. The results of standard multiple regression revealed that the number of hospital readmission risk factors adversely affected the physical component score, while depressive symptoms and self-efficacy beliefs were two significant predictors of the mental component score. In Study Three, the results of the structural equation modelling showed that self-efficacy partially mediated the effect of health characteristics and depression on health-related quality of life. Health characteristics had strong direct effects on functional status and body mass index. The results also indicated that social support partially mediated the relationship between health characteristics and functional status. With regard to the joint effects of social support and self-efficacy, social support fully mediated the effect of health characteristics on self-efficacy, and self-efficacy partially mediated the effect of social support on functional status and health-related quality of life. The results also demonstrated that the models fitted the data well, with relatively high variance explained, implying that the hypothesized constructs were highly relevant and hence supporting the application of social cognitive theory in this context. Conclusion: This thesis highlights the applicability of social cognitive theory to chronic disease self-management in older adults at risk of hospital readmission. Further studies are recommended to validate and extend the development of social cognitive theory on chronic disease self-management in older adults, to improve their nutritional and functional status and health-related quality of life.
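The agreement statistics reported for Study One follow from a standard 2×2 screening-versus-reference table; the counts below are hypothetical values chosen only to reproduce figures of the reported magnitude, not the study's data.

```python
# Screening tool (MST) vs reference standard (Subjective Global Assessment).
tp, fn, fp, tn = 30, 2, 14, 111          # hypothetical 2x2 cell counts

sens = tp / (tp + fn)                    # sensitivity
spec = tn / (tn + fp)                    # specificity
n = tp + fn + fp + tn
po = (tp + tn) / n                       # observed agreement
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
kappa = (po - pe) / (1 - pe)             # Cohen's kappa
print(f"sens={sens:.2f} spec={spec:.2f} kappa={kappa:.2f}")
```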
Abstract:
This paper presents new schemes for recursive estimation of the state transition probabilities of hidden Markov models (HMMs) via extended least squares (ELS) and recursive state prediction error (RSPE) methods. Local convergence analysis for the proposed RSPE algorithm is shown using the ordinary differential equation (ODE) approach developed for the more familiar recursive output prediction error (RPE) methods. The presented scheme converges and is relatively well conditioned compared with the ...
Abstract:
Despite significant research on drivers’ speeding behavior in work zones, little is known about how well drivers’ judgments of appropriate speeds match their actual speeds, and what factors influence these judgments. This study aims to fill these two important gaps in the literature by comparing observed speeds in two work zones with drivers’ self-nominated speeds for the same work zones. In an online survey, drivers nominated speeds for the two work zones based on photographs in which the actual posted speed limits were not revealed. A simultaneous equation modelling approach was employed to examine the effects of driver characteristics on self-nominated speeds. The results showed that survey participants nominated lower speeds (corresponding to higher compliance rates) than those that were observed. Higher speeds were nominated by males than by females, by young and middle-aged drivers than by older drivers, and by drivers with truck driving experience than by those who drive only cars. Larger differences between nominated and observed speeds were found among car drivers than truck drivers. These differences suggest that self-nominated speeds may not be valid indicators of observed work zone speeds and therefore should not be used as an alternative to observed speed data.