82 results for Flynn and Wall kinetic model
Abstract:
RATIONALE: An objective and simple prognostic model for patients with pulmonary embolism could be helpful in guiding initial intensity of treatment. OBJECTIVES: To develop a clinical prediction rule that accurately classifies patients with pulmonary embolism into categories of increasing risk of mortality and other adverse medical outcomes. METHODS: We randomly allocated 15,531 inpatient discharges with pulmonary embolism from 186 Pennsylvania hospitals to derivation (67%) and internal validation (33%) samples. We derived our prediction rule using logistic regression with 30-day mortality as the primary outcome, and patient demographic and clinical data routinely available at presentation as potential predictor variables. We externally validated the rule in 221 inpatients with pulmonary embolism from Switzerland and France. MEASUREMENTS: We compared mortality and nonfatal adverse medical outcomes across the derivation and two validation samples. MAIN RESULTS: The prediction rule is based on 11 simple patient characteristics that were independently associated with mortality and stratifies patients with pulmonary embolism into five severity classes, with 30-day mortality rates of 0-1.6% in class I, 1.7-3.5% in class II, 3.2-7.1% in class III, 4.0-11.4% in class IV, and 10.0-24.5% in class V across the derivation and validation samples. Inpatient death and nonfatal complications were ≤1.1% among patients in class I and ≤1.9% among patients in class II. CONCLUSIONS: Our rule accurately classifies patients with pulmonary embolism into classes of increasing risk of mortality and other adverse medical outcomes. Further validation of the rule is important before its implementation as a decision aid to guide the initial management of patients with pulmonary embolism.
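As an illustration of how such an 11-variable point score can be applied, the sketch below scores a patient and maps the total to a severity class. The point values and class cut-offs follow the commonly published Pulmonary Embolism Severity Index; the abstract itself does not list them, so the values, as well as the function names `pesi_score` and `pesi_class`, should be read as illustrative assumptions rather than the derived rule.

```python
# Sketch of an 11-variable severity score of the kind the abstract describes.
# Point values and class cut-offs follow the commonly published PESI rule;
# they are not given in the abstract itself, so treat them as assumptions.

def pesi_score(age, male, cancer, heart_failure, chronic_lung_disease,
               pulse_ge_110, sbp_lt_100, resp_rate_ge_30, temp_lt_36,
               altered_mental_status, sao2_lt_90):
    score = age                      # age in years contributes its own value
    score += 10 if male else 0
    score += 30 if cancer else 0
    score += 10 if heart_failure else 0
    score += 10 if chronic_lung_disease else 0
    score += 20 if pulse_ge_110 else 0
    score += 30 if sbp_lt_100 else 0
    score += 20 if resp_rate_ge_30 else 0
    score += 20 if temp_lt_36 else 0
    score += 60 if altered_mental_status else 0
    score += 20 if sao2_lt_90 else 0
    return score

def pesi_class(score):
    # Class cut-points as commonly published (assumed, not from the abstract).
    if score <= 65:   return "I"
    if score <= 85:   return "II"
    if score <= 105:  return "III"
    if score <= 125:  return "IV"
    return "V"

# Example: a 72-year-old woman with tachycardia and low oxygen saturation.
s = pesi_score(72, False, False, False, False, True, False, False, False, False, True)
print(s, pesi_class(s))  # 112 -> class IV
```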
Abstract:
The geologic structures and metamorphic zonation of the northwestern Indian Himalaya contrast significantly with those in the central and eastern parts of the range, where the high-grade metamorphic rocks of the High Himalayan Crystalline (HHC) thrust southward over the weakly metamorphosed sediments of the Lesser Himalaya along the Main Central Thrust (MCT). Indeed, the hanging wall of the MCT in the NW Himalaya mainly consists of the greenschist facies metasediments of the Chamba zone, whereas HHC high-grade rocks are exposed more internally in the range as a large-scale dome called the Gianbul dome. This Gianbul dome is bounded by two oppositely directed shear zones: the NE-dipping Zanskar Shear Zone (ZSZ) on the northern flank and the SW-dipping Miyar Shear Zone (MSZ) on the southern limb. Current models for the emplacement of the HHC in NW India as a dome structure differ mainly in terms of the roles played by both the ZSZ and the MSZ during the tectonothermal evolution of the HHC. In both the channel flow model and the wedge extrusion model, the ZSZ acts as a backstop normal fault along which the high-grade metamorphic rocks of the HHC of Zanskar are exhumed. In contrast, the recently proposed tectonic wedging model argues that the ZSZ and the MSZ correspond to one single detachment system that operates as a subhorizontal backthrust off the MCT. Thus, the kinematic evolution of the two shear zones, the ZSZ and the MSZ, and their structural, metamorphic and chronological relations appear to be diagnostic features for discriminating between the different models. In this paper, structural, metamorphic and geochronological data demonstrate that the MSZ and the ZSZ experienced two distinct kinematic evolutions. As such, the data presented in this paper rule out the hypothesis that the MSZ and the ZSZ constitute one single detachment system, as postulated by the tectonic wedging model. Structural, metamorphic and geochronological data are used to present an alternative tectonic model for the large-scale doming in the NW Indian Himalaya involving early NE-directed tectonics, weakness in the upper crust, reduced erosion at the orogenic front and rapid exhumation along both the ZSZ and the MSZ.
Abstract:
Forensic intelligence is a distinct dimension of forensic science. Forensic intelligence processes have mostly been developed to address either a specific type of trace or a specific problem. Even though these empirical developments have led to successes, they are trace-specific in nature and contribute to the generation of silos which hamper the establishment of a more general and transversal model. Forensic intelligence has shown some important perspectives, but more general developments are required to address persistent challenges. This will ensure the progress of the discipline as well as its widespread implementation in the future. This paper demonstrates that the description of forensic intelligence processes, their architectures, and the methods for building them can, at a certain level, be abstracted from the type of traces considered. A comparative analysis is made between two forensic intelligence approaches developed independently in Australia and in Europe regarding the monitoring of apparently very different kinds of problems: illicit drugs and false identity documents. An inductive effort is pursued to identify similarities and to outline a general model. Besides breaking barriers between apparently separate fields of study in forensic science and intelligence, this transversal model would assist in defining forensic intelligence, its role and place in policing, and in identifying its contributions and limitations. The model will facilitate the paradigm shift from the current case-by-case reactive attitude towards a proactive approach by serving as a guideline for the use of forensic case data in an intelligence-led perspective. A follow-up article will specifically address issues related to comparison processes, decision points and organisational issues regarding forensic intelligence (part II).
Abstract:
Gender inequalities remain an issue in our society, and particularly in the workplace. Several factors, such as career ambitions but also biases against women, can explain this gender difference in top-level managerial positions. In our chapter, we propose a model explaining why gender inequalities, and particularly discrimination against women, are still present in our societies despite social norms and existing legislation on gender equality. To this purpose, we review research on discrimination through two different approaches: (a) a prejudice approach through the justification-suppression model developed by Crandall and Eshleman (2003), and (b) a power approach through social dominance theory (Pratto, Sidanius, Stallworth, & Malle, 1994; Sidanius & Pratto, 1999). In our work, we integrate these two approaches and propose a model of gender prejudice, power and discrimination. The integration of these two approaches contributes to a better understanding of how discrimination against women is formed and maintained over time.
Abstract:
INTRODUCTION: Hip fractures are responsible for excessive mortality, decreasing the 5-year survival rate by about 20%. From an economic perspective, they represent a major source of expense, with direct costs in hospitalization, rehabilitation, and institutionalization. The incidence rate sharply increases after the age of 70, but it can be reduced in women aged 70-80 years by therapeutic interventions. Recent analyses suggest that the most efficient strategy is to implement such interventions in women at the age of 70 years. As several guidelines recommend bone mineral density (BMD) screening of postmenopausal women with clinical risk factors, our objective was to assess the cost-effectiveness of two screening strategies applied to elderly women aged 70 years and older. METHODS: A cost-effectiveness analysis was performed using decision-tree analysis and a Markov model. Two alternative strategies, one measuring BMD of all women, and one measuring BMD only of those having at least one risk factor, were compared with the reference strategy "no screening". Cost-effectiveness ratios were measured as cost per year gained without hip fracture. Most probabilities were based on data observed in the EPIDOS, SEMOF and OFELY cohorts. RESULTS: In this model, which is mostly based on observed data, the strategy "screen all" was more cost-effective than "screen women at risk". For one woman screened at the age of 70 and followed for 10 years, the incremental (additional) cost-effectiveness ratios of these two strategies compared with the reference were 4,235 euros and 8,290 euros, respectively. CONCLUSION: The results of this model, under the assumptions described in the paper, suggest that in women aged 70-80 years, screening all women with dual-energy X-ray absorptiometry (DXA) would be more effective than no screening or screening only women with at least one risk factor. Cost-effectiveness studies based on decision-analysis trees may be useful tools for helping decision makers, and further models based on different assumptions should be performed to improve the level of evidence on the cost-effectiveness ratios of the usual screening strategies for osteoporosis.
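The incremental cost-effectiveness ratio (ICER) computation underlying such a comparison is simple once each strategy's expected cost and effectiveness are known. In the sketch below, the per-woman costs and hip-fracture-free years are hypothetical placeholders (the abstract reports only the resulting ratios); they were chosen so that the example reproduces the reported 4,235 and 8,290 euro figures.

```python
# Minimal sketch of the incremental cost-effectiveness ratio (ICER) used to
# compare the two screening strategies with the reference "no screening".
# Costs and effectiveness figures are hypothetical placeholders chosen to
# reproduce the ratios reported in the abstract.

def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost per unit of incremental effectiveness."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Hypothetical 10-year per-woman costs (EUR) and hip-fracture-free years.
no_screening   = dict(cost=1000.0, effect=9.200)
screen_all     = dict(cost=1254.1, effect=9.260)   # DXA for every woman
screen_at_risk = dict(cost=1165.8, effect=9.220)   # DXA only if >=1 risk factor

for name, s in [("screen all", screen_all), ("screen women at risk", screen_at_risk)]:
    r = icer(s["cost"], s["effect"], no_screening["cost"], no_screening["effect"])
    print(f"{name}: {r:,.0f} EUR per hip-fracture-free year gained")
# screen all: 4,235 EUR ...  /  screen women at risk: 8,290 EUR ...
```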
Abstract:
Background and Aims: IL28B polymorphisms, interferon (IFN)-gamma inducible protein-10 (IP-10) levels and the homeostasis model assessment of insulin resistance (HOMA-IR) score have been reported to predict rapid (RVR) and sustained (SVR) virological response in chronic hepatitis C (CHC), but it is not known whether these factors represent independent, clinically useful predictors. The aim of the study was to assess factors (including IL28B polymorphisms, IP-10 levels and HOMA-IR score) independently predicting response to therapy in CHC under real-life conditions. Methods: Multivariate analysis of factors predicting RVR and SVR in 280 consecutive, treatment-naive CHC patients treated with pegylated IFN alpha and ribavirin in a prospective multicenter study. Results: Independent predictors of RVR were HCV RNA < 400,000 IU/ml (OR 11.37; 95% CI 3.03-42.6), rs12980275 AA (vs. AG/GG) (OR 7.09; 1.97-25.56) and IP-10 (OR 0.04; 0.003-0.56) in HCV genotype 1 patients, and lower baseline γ-glutamyl-transferase levels (OR 0.02; 0.0009-0.31) in HCV genotype 3 patients. Independent predictors of SVR were rs12980275 AA (OR 9.68; 3.44-27.18), age < 40 yrs (OR 4.79; 1.50-15.34) and HCV RNA < 400,000 IU/ml (OR 2.74; 1.03-7.27) in HCV genotype 1 patients, and rs12980275 AA (OR 6.26; 1.98-19.74) and age < 40 yrs (OR 5.37; 1.54-18.75) in the 88 HCV genotype 1 patients without a RVR. RVR was by itself predictive of SVR in HCV genotype 1 patients (32 of 33, 97%; OR 33.0; 4.06-268.32) and the only independent predictor of SVR in HCV genotype 2 (OR 9.0; 1.72-46.99; p = 0.009) or 3 patients (OR 7.8; 1.43-42.67; p = 0.01). Conclusions: In HCV genotype 1 patients, IL28B polymorphisms, HCV RNA load and IP-10 independently predict RVR. The combination of IL28B polymorphisms, HCV RNA level and age may yield more accurate pretreatment prediction of SVR. HOMA-IR score is not associated with viral response.
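To make concrete how independently reported odds ratios combine into a pretreatment prediction, the sketch below sums their logarithms on the log-odds scale. Only the genotype 1 SVR predictors and their odds ratios come from the abstract; the intercept (baseline log-odds) is a hypothetical placeholder, since a multivariate model's intercept is not reported in an abstract.

```python
import math

# Sketch: turning the reported multivariate odds ratios into a predicted
# probability of SVR for an HCV genotype 1 patient. Odds ratios are from the
# abstract; the intercept is a hypothetical placeholder.

odds_ratios = {
    "rs12980275_AA":   9.68,  # favourable IL28B-linked genotype
    "age_lt_40":       4.79,
    "hcv_rna_lt_400k": 2.74,
}
intercept = -1.5  # hypothetical baseline log-odds of SVR

def predicted_probability(patient):
    log_odds = intercept + sum(
        math.log(or_) for name, or_ in odds_ratios.items() if patient.get(name)
    )
    return 1.0 / (1.0 + math.exp(-log_odds))

# A 35-year-old patient with the AA genotype but a high viral load:
p = predicted_probability({"rs12980275_AA": True, "age_lt_40": True})
print(f"predicted SVR probability: {p:.2f}")  # about 0.91 under these assumptions
```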
Abstract:
Phototropism is an adaptive response allowing plants to optimize photosynthetic light capture. This is achieved by asymmetric growth between the shaded and lit sides of the stimulated organ. In grass seedlings, the site of phototropin-mediated light perception is distinct from the site of bending; however, in dicotyledonous plants (e.g., Arabidopsis), spatial aspects of perception remain debatable. We use morphological studies and genetics to show that phototropism can occur in the absence of the root, lower hypocotyl, hypocotyl apex, and cotyledons. Tissue-specific expression of the phototropin1 (phot1) photoreceptor demonstrates that light sensing occurs in the upper hypocotyl and that expression of phot1 in the hypocotyl elongation zone is sufficient to enable a normal phototropic response. Moreover, we show that efficient phototropism occurs when phot1 is expressed from endodermal, cortical, or epidermal cells and that its local activation rapidly leads to a global response throughout the seedling. We propose that spatial aspects in the steps leading from light perception to growth reorientation during phototropism differ between grasses and dicots. These results are important to properly interpret genetic experiments and establish a model connecting light perception to the growth response, including cellular and morphological aspects.
Abstract:
Background: Individual signs and symptoms are of limited value for the diagnosis of influenza. Objective: To develop a decision tree for the diagnosis of influenza based on a classification and regression tree (CART) analysis. Methods: Data from two previous similar cohort studies were assembled into a single dataset. The data were randomly divided into a development set (70%) and a validation set (30%). We used CART analysis to develop three models that maximize the number of patients who do not require diagnostic testing prior to treatment decisions. The validation set was used to evaluate overfitting of the model to the training set. Results: Model 1 has seven terminal nodes based on temperature, the onset of symptoms and the presence of chills, cough and myalgia. Model 2 was a simpler tree with only two splits, based on temperature and the presence of chills. Model 3 was developed with temperature as a dichotomous variable (≥38°C) and had only two splits, based on the presence of fever and myalgia. The areas under the receiver operating characteristic curve (AUROC) for the development and validation sets, respectively, were 0.82 and 0.80 for Model 1, 0.75 and 0.76 for Model 2, and 0.76 and 0.77 for Model 3. Model 2 classified 67% of patients in the validation group into a high- or low-risk group, compared with only 38% for Model 1 and 54% for Model 3. Conclusions: A simple decision tree (Model 2) classified two-thirds of patients as low or high risk and had an AUROC of 0.76. After further validation in an independent population, this CART model could support clinical decision making regarding influenza, with low-risk patients requiring no further evaluation for influenza and high-risk patients being candidates for empiric symptomatic or drug therapy.
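A two-split tree like Model 2 is compact enough to write down as a single function. In the sketch below, the 38°C cut-point and the risk labels attached to the leaves are assumptions for illustration; the abstract does not report the fitted split values or leaf assignments.

```python
# Sketch of a Model 2-like classifier: two splits, on temperature and on the
# presence of chills. The 38 degC cut-point and the leaf labels are assumed;
# the abstract does not report the fitted values.

def model2_risk(temperature_c: float, chills: bool) -> str:
    if temperature_c >= 38.0:            # first split: fever
        # second split, within the febrile branch: chills
        return "high risk" if chills else "intermediate risk"
    return "low risk"                    # afebrile patients

print(model2_risk(38.6, True))   # -> high risk
print(model2_risk(37.2, False))  # -> low risk
```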
Abstract:
Background: Experimental data have suggested that adoptive transfer of CD4+CD25+Foxp3+ regulatory T cells (Tregs), capable of controlling immune responses to specific auto- or alloantigens, could be used as a therapeutic strategy to promote specific tolerance in T-cell mediated diseases and in organ transplantation (Tx). However, before advocating the application of immunotherapy with Tregs in Tx, we need to improve our understanding of their in vivo homeostasis, trafficking pattern and effector function in response to alloantigens. Methods: Donor-antigen specific murine Tregs were generated and characterized in vitro following our described protocols. Using an adoptive transfer and skin allotransplantation model, we have analyzed the in vivo expansion and homing of fluorescent-labeled effector T cells (Teff) and Tregs, at different time-points after Tx, using flow-cytometry as well as fluorescence microscopy techniques. Results: Tregs expressed CD62L, CCR7 and CD103, allowing their homing into lymphoid and non-lymphoid tissues (gut, skin) after intravenous injection. While hyporesponsive to TCR stimulation in vitro, transferred Tregs survived, migrated to secondary lymphoid organs and preferentially expanded within the allograft draining lymph nodes. Furthermore, Foxp3+ cells could be detected inside the allograft as early as day 3-5 after Tx. At a much later time-point (day 60 after Tx), graft-infiltrating Foxp3+ cells were also detectable in tolerant recipients. When transferred alone, CD4+CD25- Teff cells expanded within secondary lymphoid organs and infiltrated the allograft by day 3-5 after Tx. The co-transfer of Tregs limited the expansion of alloreactive Teff cells as well as their recruitment into the allograft. The promotion of graft survival observed in the presence of Tregs was in part mediated by the inhibition of the production of effector cytokines by CD4+CD25- T cells. Conclusion: Taken together, our results suggest that the suppression of allograft rejection and the induction of Tx tolerance are in part dependent on the alloantigen-driven homing and expansion of Tregs. Thus, the appropriate localization of Tregs may be critical for their suppressive function in vivo.
Abstract:
An antagonistic effect of voriconazole on the fungicidal activity of sequential doses of amphotericin B has previously been demonstrated in Candida albicans strains susceptible to voriconazole. Because treatment failure and the need to switch to other antifungals are expected to occur more often in infections that are caused by resistant strains, it was of interest to study whether the antagonistic effect was still seen in Candida strains with reduced susceptibility to voriconazole. With the hypothesis that antagonism will not occur in voriconazole-resistant strains, C. albicans strains with characterized mechanisms of resistance against voriconazole, as well as Candida glabrata and Candida krusei strains with differences in their degrees of susceptibility to voriconazole, were exposed to voriconazole or amphotericin B alone, to both drugs simultaneously, or to voriconazole followed by amphotericin B in an in vitro kinetic model. Amphotericin B administered alone or simultaneously with voriconazole resulted in fungicidal activity. When amphotericin B was administered after voriconazole, its activity was reduced (median reduction, 61%; range, 9 to 94%). Levels of voriconazole-dependent inhibition of amphotericin B activity differed significantly among the strains but were not correlated with the MIC values (correlation coefficient, -0.19; P = 0.65). Inhibition was found in C. albicans strains with increases in CDR1 and CDR2 expression but not in the strain with an increase in MDR1 expression. In summary, decreased susceptibility to voriconazole does not abolish voriconazole-dependent inhibition of the fungicidal activity of amphotericin B in voriconazole-resistant Candida strains. The degree of interaction could not be predicted by the MIC value alone.
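A minimal sketch of the two summary computations reported above (the percentage reduction in amphotericin B activity after voriconazole pre-exposure, and its rank correlation with the voriconazole MIC) might look as follows; all per-strain numbers are invented placeholders, not data from the study.

```python
from scipy.stats import spearmanr

# Sketch of the summary statistics in the abstract: percent reduction in
# amphotericin B killing after voriconazole pre-exposure, and its rank
# correlation with the voriconazole MIC. All numbers are placeholders.

def percent_reduction(kill_amb_alone, kill_amb_after_vrc):
    """Reduction in fungicidal activity (e.g. log10 CFU/ml decline), in %."""
    return 100.0 * (1.0 - kill_amb_after_vrc / kill_amb_alone)

# Hypothetical per-strain data: (VRC MIC in mg/L, kill alone, kill after VRC).
strains = [
    (0.03, 3.1, 1.2),
    (0.25, 2.8, 0.9),
    (1.0,  3.0, 2.7),
    (8.0,  2.5, 0.4),
    (16.0, 2.9, 1.5),
]

mics = [s[0] for s in strains]
reductions = [percent_reduction(s[1], s[2]) for s in strains]
rho, p = spearmanr(mics, reductions)
print([f"{r:.0f}%" for r in reductions])
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")  # weak, as in the abstract
```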
Abstract:
This dissertation focuses on the practice of regulatory governance, through the study of the functioning of formally independent regulatory agencies (IRAs), with special attention to their de facto independence. The research goals are grounded in a "neo-positivist" (or "reconstructed positivist") position (Hawkesworth 1992; Radaelli 2000b; Sabatier 2000). This perspective starts from the ontological assumption that even if subjective perceptions are constitutive elements of political phenomena, a real world exists beyond any social construction and can, however imperfectly, become the object of scientific inquiry. Epistemologically, it follows that hypothetical-deductive theories with explanatory aims can be tested by employing a proper methodology and set of analytical techniques. It is thus possible to make scientific inferences and general conclusions to a certain extent, according to a Bayesian conception of knowledge, in order to update the prior scientific beliefs in the truth of the related hypotheses (Howson 1998), while acknowledging the fact that the conditions of truth are at least partially subjective and historically determined (Foucault 1988; Kuhn 1970). At the same time, a sceptical position is adopted towards the supposed disjunction between facts and values and the possibility of discovering abstract universal laws in social science. It has been observed that the current version of capitalism corresponds to the golden age of regulation, and that since the 1980s no government activity in OECD countries has grown faster than regulatory functions (Jacobs 1999). Following an apparent paradox, the ongoing dynamics of liberalisation, privatisation, decartelisation, internationalisation, and regional integration hardly led to the crumbling of the state, but instead promoted a wave of regulatory growth in the face of new risks and new opportunities (Vogel 1996). Accordingly, a new order of regulatory capitalism is rising, implying a new division of labour between state and society and entailing the expansion and intensification of regulation (Levi-Faur 2005). The previous order, relying on public ownership and public intervention and/or on sectoral self-regulation by private actors, is being replaced by a more formalised, expert-based, open, and independently regulated model of governance. Independent regulatory agencies (IRAs), that is, formally independent administrative agencies with regulatory powers that benefit from public authority delegated from political decision makers, represent the main institutional feature of regulatory governance (Gilardi 2008). IRAs constitute a relatively new technology of regulation in western Europe, at least for certain domains, but they are increasingly widespread across countries and sectors. For instance, independent regulators have been set up for regulating very diverse issues, such as general competition, banking and finance, telecommunications, civil aviation, railway services, food safety, the pharmaceutical industry, electricity, environmental protection, and personal data privacy. Two attributes of IRAs deserve a special mention. On the one hand, they are formally separated from democratic institutions and elected politicians, thus raising normative and empirical concerns about their accountability and legitimacy. On the other hand, hard questions about their role as political actors and about their performance remain unaddressed, even though, together with regulatory competencies, IRAs often accumulate executive, (quasi-)legislative, and adjudicatory functions.
Abstract:
The paper presents an approach for mapping of precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce simultaneously both the spatial patterns and the extreme values. This objective is best reached by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies have been considered: first, a 2-day accumulation of heavy precipitation and second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) of a neural network for the reproduction of extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of radar Doppler imagery when used as external drift or as input in the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors as well as analyzing data patterns.
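For readers unfamiliar with the geostatistical baseline, the sketch below implements ordinary kriging with an assumed exponential covariance. The function names, the sill and range values, and the gauge data are illustrative assumptions; the fitted variogram, the external-drift variant and the neural-network and hybrid models of the paper are not reproduced here.

```python
import numpy as np

# Minimal ordinary kriging sketch of the kind used as a geostatistical
# baseline. The exponential covariance and its parameters (sill, range)
# are illustrative assumptions, not the model fitted in the paper.

def exp_cov(d, sill=1.0, rng=50.0):
    return sill * np.exp(-d / rng)

def ordinary_kriging(xy_obs, z_obs, xy_new, sill=1.0, rng=50.0):
    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # Kriging system with a Lagrange multiplier enforcing unbiased weights.
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = exp_cov(d_obs, sill, rng)
    K[n, n] = 0.0
    d_new = np.linalg.norm(xy_obs - xy_new, axis=-1)
    k = np.append(exp_cov(d_new, sill, rng), 1.0)
    w = np.linalg.solve(K, k)
    return w[:n] @ z_obs  # weighted average of the observations

# Example: three rain gauges (km coordinates, mm of precipitation).
xy = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 40.0]])
z = np.array([12.0, 30.0, 18.0])
print(ordinary_kriging(xy, z, np.array([10.0, 10.0])))
```

Kriging with external drift extends this system with covariates (here, radar imagery) in the unbiasedness constraints, while the hybrid models of the paper use machine learning to capture the nonlinear trend and kriging for the residuals.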
Abstract:
Evidence has emerged that the initiation and growth of gliomas is sustained by a subpopulation of cancer-initiating cells (CICs). Because of the difficulty of using markers to tag CICs in gliomas, we have previously exploited more robust phenotypic characteristics, including a specific morphology and intrinsic autofluorescence, to identify and isolate a subpopulation of glioma CICs, called FL1(+). The objective of this study was to further validate our method in a large cohort of human gliomas and in a mouse model of glioma. Seventy-four human gliomas of all grades and the GFAP-V(12)HA-ras B8 mouse model were analyzed for in vitro self-renewal capacity and their content of FL1(+) cells. Nonneoplastic brain tissue and embryonic mouse brain were used as controls. Genetic traceability along passages was assessed with microsatellite analysis. We found that FL1(+) cells from low-grade gliomas and from control nonneoplastic brain tissue show a lower level of autofluorescence and undergo a restricted number of cell divisions before dying in culture. In contrast, we found that FL1(+) cells derived from many but not all high-grade gliomas acquire high levels of autofluorescence and can be propagated in long-term cultures. Moreover, FL1(+) cells show a remarkable traceability over time in vitro and in vivo. Our results show that FL1(+) cells can be found in all specimens of a large cohort of human gliomas of different grades and in a model of genetically induced mouse glioma, as well as in nonneoplastic brain. However, their self-renewal capacity is variable and seems to be dependent on the tumor grade.
Abstract:
Järvholm and co-workers (2009) proposed a conceptual model for research on working life. Models are powerful communication and decision tools. Their model, however, is strongly unidirectional and does not cover the interactions mentioned in their own arguments. With the help of a genealogy of work and of health, it is shown that work and health are interactive and have to be analysed against the background of society. Key words: research model, work, health, occupational health, society, interaction, discussion paper. Introduction: After an interesting introduction about the objectives of research on working life, Järvholm and co-workers (2009) manage to define a conceptual model for working life research out of a small survey of Occupational Safety and Health (OSH) definitions. The strong point of their model is the entity 'working life', including personal development as well as career paths and aging. Yet the model Järvholm et al. (2009) propose is strangely unidirectional; the arrows point from the population to working life, from there to health and to disease, as well as to productivity and economic resources. The diagram shows only one feedback loop: between economic resources and health. We all know that having a chronic disease influences work and working capacity. Economic resources have a strong influence on work, too: having personal economic resources will influence the kind of work someone accepts and facilitate access to continuing professional education. A third observation is that society is not present in the model, although this is less the case in the arguments. In fact, there is an incomprehensible gap between the arguments brought forth by Järvholm and co-workers and their reductionist model. Switzerland has a very low coverage of occupational health specialists and is a long way from fulfilling the WHO's recommendations on workers' access to OSH services as described in its Global Plan of Action. The Institute for Work and Health (IST) in Lausanne is the only organisation which covers the major domains of OSH research, namely occupational medicine, occupational hygiene, ergonomics and psychosocial research. As the country's sole occupational health institution, we are forced to reflect on the objectives of working life research so as not to waste the scarce resources available. I will set out below a much shortened genealogy of work and of health, with the aim of extending Järvholm et al.'s (2009) analyses of the perspectives of working life research in two directions: firstly towards the interactive nature of work and health and the integration of society, and secondly towards the question of what working life means or where working life could be situated. Work as we know it today - paid work regulated by a contract, as the basis for sustaining life and as a base for social rights - was born in the modern era. Therefore I will start my genealogy in the pre-modern era, focus on the important changes that occurred during the industrial revolution and the modern era, and end in 2010, taking into account the enormous transformations of the past 20-30 years. I will put aside some 810 years of advances in science and technology that have expanded the world's limits and human understanding, and restrict my genealogy to work and to health/the body, implicating also the societal realm. [Author]
Abstract:
Natural selection is typically exerted at some specific life stages. If natural selection takes place before a trait can be measured, using conventional models can lead to incorrect inferences about population parameters. When the missing-data process is related to the trait of interest, valid inference requires explicit modeling of the missingness process. We propose a joint modeling approach, a shared parameter model, to account for nonrandom missing data. It consists of an animal model for the phenotypic data and a logistic model for the missingness process, linked by the additive genetic effects. A Bayesian approach is taken, and inference is made using integrated nested Laplace approximations. From a simulation study we find that wrongly assuming that missing data are missing at random can result in severely biased estimates of additive genetic variance. Using real data from a wild population of Swiss barn owls Tyto alba, our model indicates that the missing individuals would display large black spots, and we conclude that genes affecting this trait are already under selection before it is expressed. Our model is a tool to correctly estimate the magnitude of both natural selection and additive genetic variance.
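In equations, the shared parameter structure described here can be sketched as follows, with an animal model for the phenotype and a logistic model for the probability of observing it, linked through the additive genetic effects; the notation is assumed, not taken from the paper.

```latex
% Sketch of the shared parameter model (notation assumed, not the paper's).
\begin{align*}
  y_i &= \mu + a_i + e_i,
  \qquad \mathbf{a} \sim \mathcal{N}\!\bigl(\mathbf{0},\, \sigma_a^2 \mathbf{A}\bigr),
  \qquad e_i \sim \mathcal{N}\!\bigl(0,\, \sigma_e^2\bigr), \\
  \operatorname{logit} \Pr(y_i \text{ is observed}) &= \alpha + \beta\, a_i,
\end{align*}
```

where \(\mathbf{A}\) is the additive genetic relationship matrix derived from the pedigree. A nonzero \(\beta\) means that missingness depends on an individual's additive genetic value, i.e., the phenotypes are missing not at random; fitting the two submodels jointly (here via integrated nested Laplace approximations) is what protects the estimate of \(\sigma_a^2\) from the bias described above.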