836 results for factor analytic model
Abstract:
Published as an article in: Studies in Nonlinear Dynamics & Econometrics, 2004, vol. 8, issue 3, article 6.
Abstract:
This work seeks to understand past and present surface conditions on the Moon using two different but complementary approaches: topographic analysis using high-resolution elevation data from recent spacecraft missions and forward modeling of the dominant agent of lunar surface modification, impact cratering. The first investigation focuses on global surface roughness of the Moon, using a variety of statistical parameters to explore slopes at different scales and their relation to competing geological processes. We find that highlands topography behaves as a nearly self-similar fractal system on scales of order 100 meters, and there is a distinct change in this behavior above and below approximately 1 km. Chapter 2 focuses this analysis on two localized regions: the lunar south pole, including Shackleton crater, and the large mare-filled basins on the nearside of the Moon. In particular, we find that differential slope, a statistical measure of roughness related to the curvature of a topographic profile, is extremely useful in distinguishing between geologic units. Chapter 3 introduces a numerical model that simulates a cratered terrain by emplacing features of characteristic shape geometrically, allowing for tracking of both the topography and surviving rim fragments over time. The power spectral density of cratered terrains is estimated numerically from model results and benchmarked against a 1-dimensional analytic model. The power spectral slope is observed to vary predictably with the size-frequency distribution of craters, as well as the crater shape. The final chapter employs the rim-tracking feature of the cratered terrain model to analyze the evolving size-frequency distribution of craters under different criteria for identifying "visible" craters from surviving rim fragments. A geometric bias exists that systematically overcounts large or small craters, depending on the rim fraction required to count a given feature as either visible or erased.
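The scale-dependent roughness analysis described here can be illustrated numerically. The sketch below (illustrative only, not the thesis code) synthesizes a self-affine profile with a known Hurst exponent and recovers it from the RMS height difference at several baselines, the kind of slope-versus-scale statistic the abstract refers to.

```python
import numpy as np

def rms_deviation(profile, dx, baselines):
    """RMS height difference nu(L) at each baseline L.

    For a self-affine (fractal) surface, nu(L) scales as L**H,
    where H is the Hurst exponent."""
    out = []
    for L in baselines:
        k = int(round(L / dx))
        diffs = profile[k:] - profile[:-k]
        out.append(np.sqrt(np.mean(diffs**2)))
    return np.array(out)

# Synthesize a self-affine profile by spectral synthesis with H = 0.8:
# a 1-D profile with PSD ~ f**-(2H+1) has Fourier amplitudes ~ f**-(H+0.5).
rng = np.random.default_rng(0)
n, dx, H = 4096, 1.0, 0.8
freqs = np.fft.rfftfreq(n, d=dx)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** -(H + 0.5)
phase = np.exp(2j * np.pi * rng.random(freqs.size))
profile = np.fft.irfft(amp * phase, n=n)

baselines = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
nu = rms_deviation(profile, dx, baselines)
H_est = np.polyfit(np.log(baselines), np.log(nu), 1)[0]  # log-log slope
print(f"estimated Hurst exponent: {H_est:.2f}")
```

On a well-sampled synthetic profile the fitted slope sits near the input H; on real lunar topography the interesting signal is precisely where this scaling breaks down.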
Abstract:
Advances in nano-scale mechanical testing have brought about progress in the understanding of physical phenomena in materials and a measure of control in the fabrication of novel materials. In contrast to bulk materials that display size-invariant mechanical properties, sub-micron metallic samples show a critical dependence on sample size. The strength of nano-scale single crystalline metals is well-described by a power-law function, σ ∝ D^(-n), where D is a critical sample size and n is an experimentally fitted positive exponent. This relationship is attributed to source-driven plasticity and demonstrates a strengthening as the decreasing sample size begins to limit the size and number of dislocation sources. A full understanding of this size-dependence is complicated by the presence of microstructural features such as interfaces that can compete with the dominant dislocation-based deformation mechanisms. In this thesis, the effects of microstructural features such as grain boundaries and anisotropic crystallinity on nano-scale metals are investigated through uniaxial compression testing. We find that nano-sized Cu covered by a hard coating displays a Bauschinger effect, and the emergence of this behavior can be explained through a simple dislocation-based analytic model. Al nano-pillars containing a single vertically-oriented coincident site lattice grain boundary are found to show similar deformation to single-crystalline nano-pillars, with slip traces passing through the grain boundary. With increasing tilt angle of the grain boundary from the pillar axis, we observe a transition from dislocation-dominated deformation to grain boundary sliding. Crystallites are observed to shear along the grain boundary, and molecular dynamics simulations reveal a mechanism of atomic migration that accommodates boundary sliding. We conclude with an analysis of the effects of inherent crystal anisotropy and alloying on the mechanical behavior of the Mg alloy AZ31.
Through comparison to pure Mg, we show that the size effect dominates the strength of samples below 10 μm and that differences in the size effect between hexagonal slip systems are due to the inherent crystal anisotropy, suggesting that the fundamental mechanism of the size effect in these slip systems is the same.
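The strength-size relation σ ∝ D^(-n) is usually extracted as a straight-line fit in log-log space. A minimal sketch with invented data (the numbers are illustrative, not measurements from the thesis):

```python
import numpy as np

# Hypothetical pillar strengths (MPa) at several diameters (um), generated
# from a power law with exponent 0.6 plus a little multiplicative scatter.
D = np.array([0.2, 0.4, 0.8, 1.6, 3.2])
sigma = 400.0 * D ** -0.6 * np.array([1.05, 0.97, 1.02, 0.99, 1.01])

# log(sigma) = log(A) - n * log(D): a line whose slope gives the exponent n
slope, intercept = np.polyfit(np.log(D), np.log(sigma), 1)
n_exp, A = -slope, np.exp(intercept)
print(f"size-effect exponent n = {n_exp:.2f}, prefactor A = {A:.0f} MPa")
```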
Abstract:
The evoked response, a signal present in the electro-encephalogram when specific sense modalities are stimulated with brief sensory inputs, has not yet revealed as much about brain function as it apparently promised when first recorded in the late 1940s. One of the problems has been to record the responses at a large number of points on the surface of the head; thus in order to achieve greater spatial resolution than previously attained, a 50-channel recording system was designed to monitor experiments with human visually evoked responses.
Conventional voltage versus time plots of the responses were found inadequate as a means of making qualitative studies of such a large data space. This problem was solved by creating a graphical display of the responses in the form of equipotential maps of the activity at successive instants during the complete response. In order to ascertain the necessary complexity of any models of the responses, factor analytic procedures were used to show that models characterized by only five or six independent parameters could adequately represent the variability in all recording channels.
One type of equivalent source for the responses which meets these specifications is the electrostatic dipole. Two different dipole models were studied: the dipole in a homogeneous sphere and the dipole in a sphere comprised of two spherical shells (of different conductivities) concentric with and enclosing a homogeneous sphere of a third conductivity. These models were used to determine nonlinear least squares fits of dipole parameters to a given potential distribution on the surface of a spherical approximation to the head. Numerous tests of the procedures were conducted with problems having known solutions. After these theoretical studies demonstrated the applicability of the technique, the models were used to determine inverse solutions for the evoked response potentials at various times throughout the responses. It was found that reliable estimates of the location and strength of cortical activity were obtained, and that the two models differed only slightly in their inverse solutions. These techniques enabled information flow in the brain, as indicated by locations and strengths of active sites, to be followed throughout the evoked response.
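The inverse dipole fit described above can be sketched as follows. This is a hedged illustration only: it uses a current dipole in an unbounded homogeneous conductor (a much simpler forward model than the homogeneous-sphere and concentric-shell models of the thesis) and recovers the source by scanning candidate positions, with a linear least-squares solve for the moment at each position.

```python
import numpy as np

def dipole_potential_matrix(pos, electrodes, cond=0.33):
    """Forward matrix G such that V = G @ p for a current dipole with
    moment p at position pos, in an unbounded homogeneous conductor."""
    r = electrodes - pos
    dist = np.linalg.norm(r, axis=1, keepdims=True)
    return r / (4 * np.pi * cond * dist**3)

# 50 electrode sites scattered over the upper half of a unit sphere,
# a stand-in for the 50-channel montage described in the abstract.
rng = np.random.default_rng(1)
theta = rng.uniform(0, np.pi / 2, 50)
phi = rng.uniform(0, 2 * np.pi, 50)
electrodes = np.column_stack([np.sin(theta) * np.cos(phi),
                              np.sin(theta) * np.sin(phi),
                              np.cos(theta)])

true_pos, true_p = np.array([0.0, 0.1, 0.5]), np.array([0.0, 0.0, 1.0])
v_meas = dipole_potential_matrix(true_pos, electrodes) @ true_p

# Inverse solution: scan candidate source positions; at each one the best
# moment is a linear least-squares solve, and we keep the smallest residual.
best = (np.inf, None, None)
for z in np.linspace(0.1, 0.8, 15):
    for y in np.linspace(-0.3, 0.3, 13):
        G = dipole_potential_matrix(np.array([0.0, y, z]), electrodes)
        p, *_ = np.linalg.lstsq(G, v_meas, rcond=None)
        misfit = np.linalg.norm(G @ p - v_meas)
        if misfit < best[0]:
            best = (misfit, np.array([0.0, y, z]), p)
print("recovered position:", best[1], "moment:", best[2].round(3))
```

With noiseless synthetic data the scan recovers the planted source; the thesis's nonlinear least-squares fits play the same role over continuous parameters.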
Abstract:
Background: Elective repeat caesarean delivery (ERCD) rates have been increasing worldwide, thus prompting obstetric discourse on the risks and benefits for the mother and infant. Yet, these increasing rates also have major economic implications for the health care system. Given the dearth of information on the cost-effectiveness related to mode of delivery, the aim of this paper was to perform an economic evaluation of the costs and short-term maternal health consequences associated with a trial of labour after one previous caesarean delivery compared with ERCD for low-risk women in Ireland. Methods: Using a decision analytic model, a cost-effectiveness analysis (CEA) was performed where the measure of health gain was quality-adjusted life years (QALYs) over a six-week time horizon. A review of international literature was conducted to derive representative estimates of adverse maternal health outcomes following a trial of labour after caesarean (TOLAC) and ERCD. Delivery/procedure costs were derived from primary data collection and combined both "bottom-up" and "top-down" costing estimations. Results: Maternal morbidities emerged in twice as many cases in the TOLAC group as in the ERCD group. However, a TOLAC was found to be the most cost-effective method of delivery because it was substantially less expensive than ERCD (€1,835.06 versus €4,039.87 per woman, respectively), and QALYs were modestly higher (0.84 versus 0.70). Our findings were supported by probabilistic sensitivity analysis. Conclusions: Clinicians need to be well informed of the benefits and risks of TOLAC among low-risk women. Ideally, clinician-patient discourse would address differences in length of hospital stay and postpartum recovery time. While it is premature to advocate a policy of TOLAC across maternity units, the results of the study prompt further analysis and repeat iterations, encouraging future studies to synthesise previous research and new and relevant evidence under a single comprehensive decision model.
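In this study TOLAC was both cheaper and more effective, so it dominates ERCD and no incremental cost-effectiveness ratio (ICER) trade-off arises. A small sketch of that dominance check, using the figures quoted in the abstract (the helper function is illustrative, not from the paper):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio; None when one option dominates."""
    d_cost, d_qaly = cost_new - cost_old, qaly_new - qaly_old
    if d_cost <= 0 and d_qaly >= 0:
        return None  # new option dominates: cheaper and at least as effective
    if d_cost >= 0 and d_qaly <= 0:
        return None  # new option is dominated
    return d_cost / d_qaly  # cost per QALY gained

# TOLAC vs ERCD, costs in EUR and QALYs from the abstract
result = icer(1835.06, 0.84, 4039.87, 0.70)
print("TOLAC vs ERCD:", "dominant" if result is None else f"{result:.0f} EUR/QALY")
```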
Abstract:
Despite the substantial organisational benefits of integrated IT, the implementation of such systems – and particularly Enterprise Resource Planning (ERP) systems – has tended to be problematic, stimulating an extensive body of research into ERP implementation. This research has remained largely separate from the main IT implementation literature. At the same time, studies of IT implementation have generally adopted either a factor or a process approach; both have major limitations. To address these limitations, factor and process perspectives are combined here in a unique model of IT implementation. We argue that:
• the organisational factors which determine successful implementation differ for integrated and traditional, discrete IT;
• failure to manage these differences is a major source of integrated IT failure.
The factor/process model is used as a framework for proposing differences between discrete and integrated IT.
Abstract:
We present the results of the one-year-long observational campaign of the type II plateau SN 2005cs, which exploded in the nearby spiral galaxy M51 (the Whirlpool galaxy). This extensive data set makes SN 2005cs the best observed low-luminosity, Ni-56-poor type II plateau event so far and one of the best core-collapse supernovae ever. The optical and near-infrared spectra show narrow P-Cygni lines characteristic of this SN family, which are indicative of a very low expansion velocity (about 1000 km s^-1) of the ejected material. The optical light curves cover both the plateau phase and the late-time radioactive tail, until about 380 d after core-collapse. Numerous unfiltered observations obtained by amateur astronomers give us the rare opportunity to monitor the fast rise to maximum light, lasting about 2 d. In addition to optical observations, we also present near-infrared light curves that (together with already published ultraviolet observations) allow us to construct for the first time a reliable bolometric light curve for an object of this class. Finally, comparing the observed data with those derived from a semi-analytic model, we infer for SN 2005cs a Ni-56 mass of about 3 x 10^-3 M⊙, a total ejected mass of 8-13 M⊙ and an explosion energy of about 3 x 10^50 erg.
Abstract:
We present a sample of normal Type Ia supernovae (SNe Ia) from the Nearby Supernova Factory data set with spectrophotometry at sufficiently late phases to estimate the ejected mass using the bolometric light curve. We measure Ni-56 masses from the peak bolometric luminosity, then compare the luminosity in the Co-56-decay tail to the expected rate of radioactive energy release from ejecta of a given mass. We infer the ejected mass in a Bayesian context using a semi-analytic model of the ejecta, incorporating constraints from contemporary numerical models as priors on the density structure and distribution of Ni-56 throughout the ejecta. We find a strong correlation between ejected mass and light-curve decline rate, and consequently Ni-56 mass, with ejected masses in our data ranging from 0.9 to 1.4 M⊙. Most fast-declining (SALT2 x1 < -1) normal SNe Ia have significantly sub-Chandrasekhar ejected masses in our fiducial analysis.
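As a hedged illustration of the first step, converting a peak bolometric luminosity into a Ni-56 mass, the sketch below applies Arnett's rule with the widely used Nadyozhin (1994) decay coefficients. The input numbers are hypothetical; the paper's actual Bayesian semi-analytic inference is considerably more elaborate.

```python
import numpy as np

def decay_luminosity(t_days, m_ni):
    """Radioactive power (erg/s) from the Ni-56 -> Co-56 -> Fe-56 chain for
    m_ni solar masses of Ni-56, using the Nadyozhin (1994) coefficients."""
    return m_ni * (6.45e43 * np.exp(-t_days / 8.8)
                   + 1.45e43 * np.exp(-t_days / 111.3))

# Arnett's rule: peak bolometric luminosity roughly equals the instantaneous
# radioactive power at the time of peak.
L_peak, t_peak = 1.0e43, 18.0   # hypothetical peak luminosity (erg/s), rise time (d)
m_ni = L_peak / decay_luminosity(t_peak, 1.0)
print(f"inferred Ni-56 mass: {m_ni:.2f} solar masses")
```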
Abstract:
BACKGROUND: Despite vaccines and improved medical intensive care, clinicians must continue to be vigilant of possible Meningococcal Disease in children. The objective was to establish whether the procalcitonin test was a cost-effective adjunct for detecting prodromal Meningococcal Disease in children presenting at the emergency department with fever without source.
METHODS AND FINDINGS: Data to evaluate procalcitonin, C-reactive protein and white cell count tests as indicators of Meningococcal Disease were collected from six independent studies identified through a systematic literature search, applying PRISMA guidelines. The data included 881 children with fever without source in developed countries. The optimal cut-off value for the procalcitonin, C-reactive protein and white cell count tests, each as an indicator of Meningococcal Disease, was determined. Summary Receiver Operating Characteristic curve analysis determined the overall diagnostic performance of each test with 95% confidence intervals. A decision analytic model was designed to reflect realistic clinical pathways for a child presenting with fever without source by comparing two diagnostic strategies: standard testing using combined C-reactive protein and white cell count tests compared to standard testing plus procalcitonin test. The costs of each of the four diagnosis groups (true positive, false negative, true negative and false positive) were assessed from a National Health Service payer perspective. The procalcitonin test was more accurate (sensitivity = 0.89, 95% CI = 0.76-0.96; specificity = 0.74, 95% CI = 0.4-0.92) for early Meningococcal Disease compared to standard testing alone (sensitivity = 0.47, 95% CI = 0.32-0.62; specificity = 0.8, 95% CI = 0.64-0.9). Decision analytic model outcomes indicated that the incremental cost-effectiveness ratio for the base case was -£8,137.25 (-US$13,371.94) per correctly treated patient.
CONCLUSIONS: Procalcitonin plus standard recommended tests improved the discriminatory ability for fatal Meningococcal Disease and was more cost-effective; it was also a superior biomarker in infants. Further research is recommended for point-of-care procalcitonin testing and Markov modelling to incorporate cost per QALY with a life-time model.
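The sensitivity and specificity quoted above follow directly from the four diagnosis groups in the decision model. A minimal sketch; the counts are hypothetical, chosen only to reproduce values of the same order as the abstract's procalcitonin estimates:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity and specificity from true/false positive/negative counts."""
    sensitivity = tp / (tp + fn)   # fraction of diseased children correctly flagged
    specificity = tn / (tn + fp)   # fraction of healthy children correctly cleared
    return sensitivity, specificity

# Hypothetical 2x2 counts (not the study's data)
sens, spec = diagnostic_metrics(tp=25, fn=3, tn=630, fp=223)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```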
Abstract:
OBJECTIVES: Regular use of nonsteroidal anti-inflammatory drugs (NSAIDs) is associated with a reduced risk of esophageal adenocarcinoma. Epidemiological studies examining the association between NSAID use and the risk of the precursor lesion, Barrett’s esophagus, have been inconclusive.
METHODS: We analyzed pooled individual-level participant data from six case-control studies of Barrett’s esophagus in the Barrett’s and Esophageal Adenocarcinoma Consortium (BEACON). We compared medication use from 1474 patients with Barrett’s esophagus separately with two control groups: 2256 population-based controls and 2018 gastroesophageal reflux disease (GERD) controls. Study-specific odds ratios (OR) and 95% confidence intervals (CI) were estimated using multivariable logistic regression models and were combined using a random effects meta-analytic model.
RESULTS: Regular (at least once weekly) use of any NSAIDs was not associated with the risk of Barrett’s esophagus (vs. population-based controls, adjusted OR = 1.00, 95% CI = 0.76–1.32; I2=61%; vs. GERD controls, adjusted OR = 0.99, 95% CI = 0.82–1.19; I2=19%). Similar null findings were observed among individuals who took aspirin or non-aspirin NSAIDs. We also found no association with highest levels of frequency (at least daily use) and duration (≥5 years) of NSAID use. There was evidence of moderate between-study heterogeneity; however, associations with NSAID use remained non-significant in “leave-one-out” sensitivity analyses.
CONCLUSIONS: Use of NSAIDs was not associated with the risk of Barrett’s esophagus. The previously reported inverse association between NSAID use and esophageal adenocarcinoma may be through reducing the risk of neoplastic progression in patients with Barrett’s esophagus.
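The pooling step described in the methods, combining study-specific log-odds-ratios under a random-effects model, can be sketched with the DerSimonian-Laird estimator. The per-study values below are invented for illustration; they are not BEACON data.

```python
import numpy as np

def random_effects_pool(log_or, se):
    """DerSimonian-Laird random-effects pooling of study log-odds-ratios."""
    w = 1.0 / se**2                                   # fixed-effect weights
    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed) ** 2)             # Cochran's Q
    df = len(log_or) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
    pooled = np.sum(w_re * log_or) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se_pooled, i2

# Hypothetical per-study odds ratios and standard errors of log(OR)
ors = np.array([0.8, 1.1, 0.95, 1.3, 0.9, 1.05])
ses = np.array([0.20, 0.15, 0.25, 0.30, 0.18, 0.22])
pooled, se_p, i2 = random_effects_pool(np.log(ors), ses)
lo, hi = np.exp(pooled - 1.96 * se_p), np.exp(pooled + 1.96 * se_p)
print(f"pooled OR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.0f}%")
```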
Abstract:
DESIGN: We will address our research objectives by searching the published and unpublished literature and conducting an evidence synthesis of i) studies of the effectiveness of psychosocial interventions provided for children and adolescents who have suffered maltreatment, ii) economic evaluations of these interventions and iii) studies of their acceptability to children, adolescents and their carers. SEARCH STRATEGY: Evidence will be identified via electronic databases for health and allied health literature, social sciences and social welfare, education and other evidence-based repositories, and economic databases. We will identify material generated by user-led, voluntary sector enquiry by searching the internet and browsing the websites of relevant UK government departments and charities. Additionally, studies will be identified via the bibliographies of retrieved articles/reviews; targeted author searches; and forward citation searching. We will also use our extensive professional networks, and our planned consultations with key stakeholders and our study steering committee. Databases will be searched from inception to time of search. REVIEW STRATEGY: Inclusion criteria: 1) Infants, children or adolescents who have experienced maltreatment between the ages of 0 and 17 years. 2) All psychosocial interventions available for maltreated children and adolescents, by any provider and in any setting, aiming to address the sequelae of any form of maltreatment, including fabricated illness. 3) For synthesis of evidence of effectiveness: all controlled studies in which psychosocial interventions are compared with no-treatment, treatment-as-usual, waitlist or other-treated controls. For a synthesis of evidence of acceptability we will include any design that asks participants for their views or provides data on non-participation. For decision-analytic modelling we may include uncontrolled studies. Primary and secondary outcomes will be confirmed in consultation with stakeholders.
Provisional primary outcomes are i) psychological distress/mental health (particularly PTSD, depression and anxiety, self-harm); ii) behaviour; iii) social functioning; iv) cognitive/academic attainment; v) quality of life; and vi) costs. After studies that meet the inclusion criteria have been identified (independently by two reviewers), data will be extracted and risk of bias (RoB) assessed (independently by two reviewers) using the Cochrane Collaboration RoB Tool (effectiveness), quality hierarchies of data sources for economic analyses (cost-effectiveness) and the CASP tool for qualitative research (acceptability). Where interventions are similar and appropriate data are available (or can be obtained), evidence synthesis will be performed to pool the results. Where possible, we will explore the extent to which age, maltreatment history (including whether intra- or extra-familial), time since maltreatment, care setting (family / out-of-home care including foster care/residential), care history, and characteristics of intervention (type, setting, provider, duration) moderate the effects of psychosocial interventions. A synthesis of acceptability data will be undertaken using a narrative approach. A decision-analytic model will be constructed to compare the expected cost-effectiveness of the different types of intervention identified in the systematic review. We will also conduct a value-of-information analysis if the data permit. EXPECTED OUTPUTS: A synthesis of the effectiveness and cost-effectiveness of psychosocial interventions for maltreated children (taking into account age, maltreatment profile and setting) and their acceptability to key stakeholders.
Abstract:
Although extended secondary prophylaxis with low-molecular-weight heparin was recently shown to be more effective than warfarin for cancer-related venous thromboembolism, its cost-effectiveness compared to traditional prophylaxis with warfarin is uncertain. We built a decision analytic model to evaluate the clinical and economic outcomes of a 6-month course of low-molecular-weight heparin or warfarin therapy in 65-year-old patients with cancer-related venous thromboembolism. We used probability estimates and utilities reported in the literature and published cost data. Using a US societal perspective, we compared strategies based on quality-adjusted life-years (QALYs) and lifetime costs. The incremental cost-effectiveness ratio of low-molecular-weight heparin compared with warfarin was 149,865 dollars/QALY. Low-molecular-weight heparin yielded a quality-adjusted life expectancy of 1.097 QALYs at the cost of 15,329 dollars. Overall, 46% (7,108 dollars) of the total costs associated with low-molecular-weight heparin were attributable to pharmacy costs. Although the low-molecular-weight heparin strategy achieved a higher incremental quality-adjusted life expectancy than the warfarin strategy (difference of 0.051 QALYs), this clinical benefit was offset by a substantial cost increment of 7,609 dollars. Cost-effectiveness results were sensitive to variation of the early mortality risks associated with low-molecular-weight heparin and warfarin and the pharmacy costs for low-molecular-weight heparin. Based on the best available evidence, secondary prophylaxis with low-molecular-weight heparin is more effective than warfarin for cancer-related venous thromboembolism. However, because of the substantial pharmacy costs of extended low-molecular-weight heparin prophylaxis in the US, this treatment is relatively expensive compared with warfarin.
Abstract:
INTRODUCTION: Primary care in Quebec has undergone a major reorganization in recent years. GMFs, network clinics, CSSSs and local service networks are only a few examples of the new organizational models currently emerging. Interprofessional collaboration lies at the heart of these changes. METHODOLOGY: This is a single case study, conducted in a second-wave GMF. Data were collected through semi-structured interviews with the physician in charge of the GMF, the GMF's physicians and nurses, and the manager responsible for the nurses at the CSSS. Interviews continued until empirical saturation was reached. Documents concerning clinical tools and communication tools were also consulted. RESULTS: Through an iterative process involving interactional and organizational elements, and through evolution toward a different culture, mutual adjustments were achieved and clinical practices genuinely changed within the GMF studied. Participants reported an improvement in their clinical results. They note that patients have better accessibility, but the effect on workload and on the capacity to follow more patients is assessed variably. CONCLUSION: The proposed conceptual model makes it possible to observe empirically the dimensions that bring out the added value of developing interprofessional collaboration within GMFs, as well as its impact on professional practices.
Abstract:
Erythrocyte aggregation is the main factor responsible for the non-Newtonian properties of blood under low-shear flow conditions. When red blood cells aggregate, they form rouleaux and entangled three-dimensional structures that raise blood viscosity from a few mPa.s to around a hundred mPa.s. This erythrocyte microstructural organization is maintained by low-energy inter-cellular bonds, which are broken by an increase in shear. These macroscopic properties are well known. However, the etiological links between these general rheological properties and their pathophysiological effects remain difficult to assess in vivo, since blood properties are dynamic and strongly dependent on flow conditions. Thus, from rheological properties measured in vitro under controlled conditions, it is difficult to extrapolate their values in a physiological environment. Yet thrombophlebitis develops systematically at particular loci of the cardiovascular system. Moreover, several clinical studies have established that disturbed hemorheological conditions constitute risk factors for venous thrombosis, but their etiological contributions remain hypothetical or correlative. Consequently, a hemorheological characterization tool applicable in vivo and in situ should make it possible to better identify and understand these implications. Ultrasound, which propagates in biological tissues, is sensitive to erythrocyte aggregation. Being non-invasive, ultrasound imaging allows the blood microstructure to be characterized in vivo and in situ under physiological flow conditions. The backscattered ultrasound signals carry information on the blood microstructure that directly reflects local hemorheological disturbances.
In vivo mapping of erythrocyte aggregation, unique to ultrasound, should make it possible to investigate the etiological implications of hemorheology in vascular thrombotic disease. This thesis completes a series of studies carried out at the Laboratoire de Biorhéologie et d'Ultrasonographie Médicale (LBUM) of the research centre of the Centre hospitalier de l'Université de Montréal on erythrocyte ultrasound backscattering, leading to an in vivo application of the method. It follows modeling work that highlighted the relevance of a particulate model taking into account the density of red blood cells, the backscattering cross-section of a single cell and the structure factor. This model establishes the link between the blood microstructure and the frequency spectrum of the ultrasound backscatter coefficient. A second-order approximation in frequency of the structure factor is proposed in this work to describe the blood microstructure. This approach is first presented and validated in a homogeneous shear flow field. A 2D extension of the method then allows mapping of the structural properties of blood in tube flow through parametric images that highlight the time-dependent character of aggregation and the sensitivity of ultrasound to these phenomena. An extrapolation leading to a relation between erythrocyte aggregate size and blood viscosity enables local viscosity maps to be produced. Finally, it is shown, using an animal model, that a sudden increase in erythrocyte aggregation causes the formation of a venous thrombus. The aggregation level, the presence of the thrombus and the flow variations were characterized in this study by ultrasound imaging.
Our results suggest that hemorheological parameters, preferably measured in vivo and in situ, should be part of the thrombotic risk profile.
Abstract:
This paper decomposes the term structure of interest rates of US and Colombian sovereign bonds. A four-factor affine model is used, in which the first factor is a return-forecasting factor and the remaining three are the first three principal components of the variance-covariance matrix of interest rates. For the decomposition of Colombian rates, the US forecasting factor is used to capture spillover effects. We conclude that US rates do not have an effect on the level of rates in Colombia, but they do influence expected bond excess returns, and there are also effects on the local factors, although the determining factor in the dynamics of local rates is the "level". The decomposition yields expectations of the short rate and the term premium. We observe that the value of the term premium and its volatility increase with maturity, and that this value has been declining over time.
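The principal-component step of such an affine model, extracting the "level", "slope" and "curvature" factors from the covariance matrix of interest rates, can be sketched as follows; the yield panel is simulated, standing in for actual US or Colombian data.

```python
import numpy as np

# Hypothetical panel of zero-coupon yields: T days x 5 maturities (years)
rng = np.random.default_rng(2)
T, mats = 500, np.array([0.25, 1.0, 2.0, 5.0, 10.0])
level = np.cumsum(rng.normal(0, 0.03, T))            # common random-walk factor
slope = np.cumsum(rng.normal(0, 0.01, T))
yields = (5.0 + level[:, None]
          + slope[:, None] * (mats / 10.0)           # slope loads with maturity
          + rng.normal(0, 0.02, (T, len(mats))))     # idiosyncratic noise

# Principal components: eigenvectors of the yield covariance matrix,
# ordered by explained variance ("level" comes first by construction).
centered = yields - yields.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
explained = eigval[order] / eigval.sum()
factors = centered @ eigvec[:, order]                # PC time series
print("variance explained by first three PCs:", explained[:3].round(3))
```

In real yield data the first three components typically account for nearly all of the variance, which is what justifies truncating the factor set at three in the model above.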