969 results for duration model
Factors affecting hospital admission and recovery stay duration of in-patient motor victims in Spain
Abstract:
Hospital expenses are a major cost driver of healthcare systems in Europe, with motor injuries being the leading mechanism of hospitalizations. This paper investigates the injury characteristics that explain the hospitalization of victims of traffic accidents that took place in Spain. Using a motor insurance database with 16,081 observations, a generalized Tobit regression model is applied to analyse the factors that influence both the likelihood of being admitted to hospital after a motor collision and the length of hospital stay in the event of admission. The consistency of Tobit estimates relies on the normality of the disturbance terms; here a semi-parametric regression model was fitted to test the consistency of the estimates, and a normal distribution of errors could not be rejected. Among other results, it was found that older men with fractures and injuries located in the head and lower torso are more likely to be hospitalized after the collision, and that they also have a longer expected length of hospital recovery stay.
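As an illustration of the two-part structure described above, the following is a minimal Python sketch of a type II (generalized) Tobit estimated with Heckman's two-step procedure rather than the full maximum-likelihood approach a complete analysis would use; all column names (admitted, los_days, and the covariate lists) are hypothetical placeholders, not the paper's variables.

```python
# Minimal two-step sketch of a generalized (type II) Tobit / sample-selection
# model: a probit for hospital admission, then a length-of-stay regression on
# admitted victims corrected with the inverse Mills ratio.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(df, selection_cols, outcome_cols,
                     selection_y="admitted", outcome_y="los_days"):
    # Step 1: probit for the probability of being admitted to hospital.
    Xs = sm.add_constant(df[selection_cols])
    probit = sm.Probit(df[selection_y], Xs).fit(disp=False)
    xb = np.asarray(Xs) @ np.asarray(probit.params)
    # Inverse Mills ratio corrects for selection into the admitted subsample.
    imr = norm.pdf(xb) / norm.cdf(xb)
    # Step 2: length-of-stay regression on admitted victims only.
    mask = df[selection_y].to_numpy() == 1
    sub = df.loc[mask, outcome_cols].copy()
    sub["imr"] = imr[mask]
    Xo = sm.add_constant(sub)
    ols = sm.OLS(df.loc[mask, outcome_y], Xo).fit()
    return probit, ols
```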
Abstract:
To date, the scientific community has reached a consensus on the function of most biological and physiological phenomena, with the exception of sleep, whose function remains undetermined and, indeed, mysterious. To further our understanding of sleep function(s), we first focused on the level of complexity at which a sleep-like phenomenon can be observed. This led to the development of an in vitro model. The second approach was to understand the molecular and cellular pathways regulating sleep and wakefulness, using both our in vitro and in vivo models. The third approach (ongoing) is to look across evolution to establish when sleep or wakefulness appears. (1) To address the question of whether sleep is a cellular property and how this is linked to the functioning of the entire brain, we developed a model of sleep in vitro using dissociated primary cortical cultures, aiming to reproduce the major characteristics of sleep and wakefulness in vitro. We have shown that mature cortical cultures display spontaneous electrical activity similar to sleep. When these cultures are stimulated by waking neurotransmitters, they show a tonic firing activity similar to wakefulness, but return spontaneously to the "sleep-like" state 24 h after stimulation. We have also shown that transcriptional, electrophysiological, and metabolic correlates of sleep and wakefulness can be reliably detected in dissociated cortical cultures. (2) To further understand at which molecular and cellular levels changes between sleep and wakefulness occur, we used a pharmacological and systematic gene-transcription approach in vitro and discovered a major role played by the Erk pathway. Indeed, pharmacological inhibition of this pathway in living animals decreased sleep by 2 hours per day and consolidated both sleep and wakefulness by reducing their fragmentation. (3) Finally, we evaluated the presence of sleep in one of the most primitive species with a neural network, establishing Hydra as a model organism. We hypothesized that sleep as a cellular (neuronal) property may appear with the most primitive nervous system, and we were able to show that Hydra have periodic rest phases amounting to up to 5 hours per day. In conclusion, our work established an in vitro model to study sleep, discovered one of the major signaling pathways regulating vigilance states, and strongly suggests that sleep is a cellular property highly conserved at the molecular level during evolution.
Abstract:
In the present study, the influence of temperature (15, 20, 25, 30 and 35°C) and leaf wetness period (6, 12, 24 and 48 hours) on the severity of Cercospora leaf spot of beet, caused by Cercospora beticola, was studied under controlled conditions. Lesion density was influenced by temperature and leaf wetness duration (P<0.05). Data were subjected to nonlinear regression analysis. The generalized beta function was used to fit the disease severity and temperature data, while a logistic function was chosen to represent the effect of leaf wetness on the severity of Cercospora leaf spot. The response surface resulting from the product of the two functions was expressed as ES = 0.0001105 * ((x-8)^2.294387 * (36-x)^0.955017) * (0.39219/(1 + 25.93072 * exp(-0.16704*y))), where ES represents the estimated severity value; x, the temperature (°C); and y, the leaf wetness duration (hours). This model should be validated under field conditions to assess its use as a computational forecast system for Cercospora leaf spot of beet.
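For readers who want to evaluate the fitted response surface, the sketch below transcribes the equation above directly into Python; the coefficients are those reported for Cercospora leaf spot of beet, and the Botrytis model in the following abstract has the same beta-times-logistic form with its own coefficients.

```python
import numpy as np

def estimated_severity(temp_c, wetness_h):
    """Beta(temperature) x logistic(leaf wetness) response surface (ES)."""
    # Valid for temperatures between 8 and 36 degrees C (the beta's support).
    beta_part = 0.0001105 * (temp_c - 8.0) ** 2.294387 * (36.0 - temp_c) ** 0.955017
    logistic_part = 0.39219 / (1.0 + 25.93072 * np.exp(-0.16704 * wetness_h))
    return beta_part * logistic_part

# Example: estimated severity at 25 degrees C and 24 h of leaf wetness.
print(estimated_severity(25.0, 24.0))
```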
Abstract:
In the present study, onion plants were tested under controlled conditions to develop a climate model based on the influence of temperature (10, 15, 20 and 25°C) and leaf wetness duration (6, 12, 24 and 48 hours) on the severity of Botrytis leaf blight of onion caused by Botrytis squamosa. The relative lesion density was influenced by temperature and leaf wetness duration (P<0.05). The disease was most severe at 20°C. Data were subjected to nonlinear regression analysis. The generalized beta function was used to fit the severity and temperature data, while a logistic function was chosen to represent the effect of leaf wetness on the severity of Botrytis leaf blight. The response surface obtained as the product of the two functions was expressed as ES = 0.008192 * ((x-5)^1.01089 * (30-x)^1.19052) * (0.33859/(1 + 3.77989 * exp(-0.10923*y))), where ES represents the estimated severity value; x, the temperature (°C); and y, the leaf wetness duration (hours). This climate model should be validated under field conditions to verify its use as a computational system for forecasting Botrytis leaf blight in onion.
Abstract:
The objective of this work was to develop and validate a mathematical model to estimate the duration of the cotton (Gossypium hirsutum L. r. latifolium Hutch.) cycle in the State of Goiás, Brazil, by applying the growing degree-days (GD) method while accounting simultaneously for its variation in time and space. The model was developed as a linear combination of elevation, latitude, longitude, and a Fourier series describing the time variation. The model parameters were fitted by multiple linear regression to the observed GD accumulated with air temperature in the range of 15°C to 40°C. The minimum and maximum temperature records used to calculate GD were obtained from 21 meteorological stations, with data ranging from 8 to 20 years of observation. The coefficient of determination from the comparison between estimated and calculated GD over the year was 0.84. Model validation was done by comparing the estimated and measured crop cycle from cotton germination to the stage when 90 percent of bolls were open in commercial crop fields. The comparison showed that the model performed very well, as indicated by a Pearson correlation coefficient of 0.90 and a Willmott agreement index of 0.94, resulting in a performance index of 0.85.
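A minimal sketch of the regression idea described above is given below: accumulated growing degree-days are regressed on elevation, latitude, longitude and a low-order Fourier series in day of year by ordinary least squares. The number of harmonics and the variable names are assumptions for illustration; the abstract does not state the order of the Fourier series.

```python
import numpy as np

def design_matrix(elev, lat, lon, doy, n_harmonics=1):
    """Intercept, spatial terms and a Fourier series of day of year (doy)."""
    cols = [np.ones_like(doy, dtype=float), elev, lat, lon]
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2.0 * np.pi * k * doy / 365.0))
        cols.append(np.cos(2.0 * np.pi * k * doy / 365.0))
    return np.column_stack(cols)

def fit_gdd_model(gd_accumulated, elev, lat, lon, doy):
    # Multiple linear regression of accumulated growing degree-days (GD).
    X = design_matrix(elev, lat, lon, doy)
    coef, *_ = np.linalg.lstsq(X, gd_accumulated, rcond=None)
    return coef
```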
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis, including survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data: the Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last-wave weights displayed the largest bias. Using all the available data, including the spells of attriters up to the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
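The sketch below illustrates the IPCW idea evaluated in the simulation study: model the probability of remaining in the panel, use its inverse as an observation weight, and fit a weighted Cox model. The single logistic drop-out model and all column names are simplifying assumptions; in practice the weights are usually built wave by wave.

```python
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def ipcw_cox(df, covariates, dropout_covariates,
             duration_col="duration", event_col="event"):
    # Probability of not attriting, given characteristics observed at baseline.
    stayed = (df["attrited"] == 0).astype(int)
    p_stay = LogisticRegression(max_iter=1000).fit(
        df[dropout_covariates], stayed).predict_proba(df[dropout_covariates])[:, 1]
    data = df[covariates + [duration_col, event_col]].copy()
    data["ipcw"] = 1.0 / p_stay
    # Weighted Cox proportional hazards fit; robust standard errors are advisable.
    cph = CoxPHFitter()
    cph.fit(data, duration_col=duration_col, event_col=event_col,
            weights_col="ipcw", robust=True)
    return cph
```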
Abstract:
A pulsatile pressure-flow model was developed for in vitro quantitative color Doppler flow mapping studies of valvular regurgitation. The flow through the system was generated by a piston which was driven by stepper motors controlled by a computer. The piston was connected to acrylic chambers designed to simulate "ventricular" and "atrial" heart chambers. Inside the "ventricular" chamber, a prosthetic heart valve was placed at the inflow connection with the "atrial" chamber while another prosthetic valve was positioned at the outflow connection with flexible tubes, elastic balloons and a reservoir arranged to mimic the peripheral circulation. The flow model was filled with a 0.25% corn starch/water suspension to improve Doppler imaging. A continuous flow pump transferred the liquid from the peripheral reservoir to another one connected to the "atrial" chamber. The dimensions of the flow model were designed to permit adequate imaging by Doppler echocardiography. Acoustic windows allowed placement of transducers distal and perpendicular to the valves, so that the ultrasound beam could be positioned parallel to the valvular flow. Strain-gauge and electromagnetic transducers were used for measurements of pressure and flow in different segments of the system. The flow model was also designed to fit different sizes and types of prosthetic valves. This pulsatile flow model was able to generate pressure and flow in the physiological human range, with independent adjustment of pulse duration and rate as well as of stroke volume. This model mimics flow profiles observed in patients with regurgitant prosthetic valves.
Abstract:
Sublethal ischemic preconditioning (IPC) is a powerful inducer of ischemic brain tolerance. However, its underlying mechanisms are still not well understood. In this study, we chose four different IPC paradigms, namely 5 min (5 min duration), 5×5 min (5 min duration, 2 episodes, 15-min interval), 5×5×5 min (5 min duration, 3 episodes, 15-min intervals), and 15 min (15 min duration), and demonstrated that three episodes of 5 min IPC activated autophagy to the greatest extent 24 h after IPC, as evidenced by Beclin expression and LC3-I/II conversion. Autophagic activation was mediated by the tuberous sclerosis type 1 (TSC1)–mTOR signaling pathway, as IPC increased TSC1 but decreased mTOR phosphorylation. Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) and hematoxylin and eosin staining confirmed that IPC protected against cerebral ischemia/reperfusion (I/R) injury. Critically, 3-methyladenine, an inhibitor of autophagy, abolished the neuroprotection of IPC and, by contrast, rapamycin, an autophagy inducer, potentiated it. Cleaved caspase-3 expression, neurological scores, and infarct volume in the different groups further confirmed the protection of IPC against I/R injury. Taken together, our data indicate that autophagy activation might underlie the protection of IPC against ischemic injury by inhibiting apoptosis.
Abstract:
This study aimed to analyze the agreement between unloaded and peak oxygen uptake estimated from the equations proposed by Wasserman and the real measurements obtained directly with the ergospirometry system. An incremental cardiopulmonary exercise test (CPET) was applied to two groups of sedentary male subjects: an apparently healthy group (HG, n=12) and a group with stable coronary artery disease (CG, n=16). The mean age was 47±4 years in the HG and 57±8 years in the CG. Both groups performed CPET on a cycle ergometer with a ramp-type protocol at an intensity calculated according to the Wasserman equation. In the HG, there was no significant difference between the measurements predicted by the formula and the real measurements obtained in CPET in the unloaded condition. However, at peak effort, a significant difference was observed between V̇O2peak(predicted) and V̇O2peak(real) (nonparametric Wilcoxon test). In the CG, there was a significant difference of 116.26 mL/min between the values predicted by the formula and the real values obtained in the unloaded condition. A significant difference was also found at peak effort, where V̇O2peak(real) was 40% lower than V̇O2peak(predicted) (nonparametric Wilcoxon test). There was no agreement between the real and predicted measurements as analyzed by Lin's coefficient or the Bland and Altman method. The Wasserman formula does not appear to be appropriate for predicting the functional capacity of these volunteers and therefore cannot precisely predict the increase in power in incremental CPET on a cycle ergometer.
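For reference, the two agreement measures named above can be computed as in the short sketch below (Bland-Altman bias with 95% limits of agreement, and Lin's concordance correlation coefficient); the inputs are simply paired arrays of predicted and measured V̇O2 values.

```python
import numpy as np

def bland_altman(predicted, measured):
    """Mean difference (bias) and 95% limits of agreement."""
    diff = np.asarray(predicted, float) - np.asarray(measured, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

def lin_ccc(predicted, measured):
    """Lin's concordance correlation coefficient."""
    x = np.asarray(predicted, float)
    y = np.asarray(measured, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2.0 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)
```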
Abstract:
The present thesis examines the determinants of the duration of bankruptcy protection for Canadian firms. Using a sample of Canadian firms that filed for bankruptcy protection between 1992 and 2009, we find that firm age, the industry-adjusted operating margin, the default spread, the industrial production growth rate and the interest rate are influential factors in determining the length of the protection period. Older firms tend to stay longer under protection from creditors: as older firms have more complicated structures and issues to settle, their risk of exiting protection soon (the hazard rate) is small. We also find that firms that perform better than their industry benchmark tend to leave the bankruptcy protection state quickly. We conclude that the fate of relatively successful companies is determined faster. Moreover, we report that it takes less time to reach a final resolution for firms under bankruptcy protection when the default spread is low or when the appetite for risk is high. Conversely, during periods of high default spreads and flight to quality, it takes longer to resolve the bankruptcy issue. This last finding may suggest that troubled firms should place themselves under protection when spreads are low; however, this ignores the endogeneity issue: a high default spread may cause, and incidentally reflect, higher bankruptcy rates in the economy. Indeed, we find that bankruptcy protection lasts longer during economic downturns. We explain this relation by the natural increase in default rates among firms (and individuals) during economically troubled times; default spreads are usually larger during these harsh periods as investors become more risk averse when their wealth shrinks. Using a log-logistic hazard model, we also find that firms that file under the Companies' Creditors Arrangement Act (CCAA) spend more time restructuring than firms that file under the Bankruptcy and Insolvency Act (BIA). As the BIA is more statutory and less flexible, solutions can be reached faster by court orders.
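As a sketch of the duration analysis described above, the snippet below fits a log-logistic accelerated failure time model with the lifelines library; the data frame layout and the covariate names (firm_age, op_margin_adj, default_spread, ccaa_filing) are hypothetical stand-ins for the thesis's variables.

```python
from lifelines import LogLogisticAFTFitter

def fit_protection_duration(df):
    # Columns: months under protection, an exit indicator, and covariates.
    cols = ["months_under_protection", "exited", "firm_age",
            "op_margin_adj", "default_spread", "ccaa_filing"]
    aft = LogLogisticAFTFitter()
    aft.fit(df[cols], duration_col="months_under_protection", event_col="exited")
    return aft
```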
Abstract:
Apical leaf necrosis is a physiological process related to nitrogen (N) dynamics in the leaf. Pathogens use leaf nutrients and can thus accelerate this physiological apical necrosis. This process differs from the necrosis occurring around pathogen lesions (lesion-induced necrosis), which is a direct result of the interaction between pathogen hyphae and leaf cells. This paper concentrates primarily on apical necrosis, incorporating lesion-induced necrosis only where necessary. The relationship between pathogen dynamics and physiological apical leaf necrosis is modelled through leaf nitrogen dynamics, for the specific case of Puccinia triticina infections on Triticum aestivum flag leaves. In the model, conversion of indirectly available N, for example in the form of leaf cell proteins (N2(t)), into directly available N (N1(t), i.e. the form of N that can be used directly by either the pathogen or plant sinks) results in apical necrosis. The model reproduces the observed trends of disease severity, apical necrosis, green leaf area (GLA) and leaf N dynamics of uninfected and infected leaves. Decreasing the initial amount of directly available N results in earlier necrosis onset and a longer necrosis duration, whereas decreasing the initial amount of indirectly available N has no effect on necrosis onset and shortens the necrosis duration. The model could be used to develop hypotheses on how the disease-GLA relation affects yield loss, which can be tested experimentally. Upon incorporation into crop simulation models, it might provide a tool to estimate more accurately crop yield and the effects of disease management strategies in crops sensitive to fungal pathogens.
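The abstract does not reproduce the model equations, so the following is only a toy two-pool sketch of the bookkeeping it describes: indirectly available N (N2) is remobilised into directly available N (N1), which is drawn down by plant and pathogen sinks. All rate constants and functional forms are invented for illustration and are not the paper's equations.

```python
from scipy.integrate import solve_ivp

def n_dynamics(t, y, k_conv=0.05, plant_demand=0.8, pathogen_demand=0.4):
    """Toy dynamics for directly (N1) and indirectly (N2) available nitrogen."""
    n1, n2 = y
    conversion = k_conv * n2                                     # remobilisation N2 -> N1
    uptake = (plant_demand + pathogen_demand) * n1 / (n1 + 1.0)  # saturating sinks
    return [conversion - uptake, -conversion]

# Integrate over 60 days from arbitrary initial pools; in a full model, apical
# necrosis would be read off from the depletion of N1.
sol = solve_ivp(n_dynamics, (0.0, 60.0), [5.0, 40.0], dense_output=True)
```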
Abstract:
Temperature is one of the most prominent environmental factors that determine plant growth, development, and yield. Cool and moist conditions are most favorable for wheat. Wheat is likely to be highly vulnerable to further warming because currently the temperature is already close to or above optimum. In this study, the impacts of warming and extreme high temperature stress on wheat yield over China were investigated by using the general large area model (GLAM) for annual crops. The results showed that each 1°C rise in daily mean temperature would reduce the average wheat yield in China by about 4.6%–5.7%, mainly due to the shorter growth duration, except for a small increase in yield at some grid cells. When the maximum temperature exceeded 30.5°C, the simulated grain-set fraction declined from 1 at 30.5°C to close to 0 at about 36°C. When the total grain-set was lower than the critical fractional grain-set (0.575–0.6), harvest index and potential grain yield were reduced. In order to reduce the negative impacts of warming, it is crucial to take serious actions to adapt to the climate change, for example, by shifting sowing date, adjusting crop distribution and structure, breeding heat-resistant varieties, and improving the monitoring, forecasting, and early warning of extreme climate events.
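The following is a hedged sketch (not GLAM source code) of the temperature response just described: the grain-set fraction is assumed to fall linearly from 1 at 30.5°C to 0 at about 36°C, and the harvest index is scaled down once grain-set drops below the critical fraction.

```python
import numpy as np

def grain_set_fraction(tmax_c, t_onset=30.5, t_zero=36.0):
    """Fraction of grains set, declining with daily maximum temperature."""
    return float(np.clip((t_zero - tmax_c) / (t_zero - t_onset), 0.0, 1.0))

def scaled_harvest_index(hi_potential, grain_set, critical=0.6):
    # Below the critical fractional grain-set the harvest index is reduced
    # proportionally; above it the potential value is retained.
    return hi_potential if grain_set >= critical else hi_potential * grain_set / critical

# Example: a day with a 33 degree C maximum temperature.
gs = grain_set_fraction(33.0)
print(gs, scaled_harvest_index(0.45, gs))
```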
Abstract:
The presence of resident Langerhans cells (LCs) in the epidermis makes the skin an attractive target for DNA vaccination. However, reliable animal models for cutaneous vaccination studies are limited. We demonstrate an ex vivo human skin model for cutaneous DNA vaccination which can potentially bridge the gap between pre-clinical in vivo animal models and clinical studies. Cutaneous transgene expression was utilised to demonstrate epidermal tissue viability in culture. LC response to the culture environment was monitored by immunohistochemistry. Full-thickness and split-thickness skin remained genetically viable in culture for at least 72 h in both phosphate-buffered saline (PBS) and full organ culture medium (OCM). The epidermis of explants cultured in OCM remained morphologically intact throughout the culture duration. LCs in full-thickness skin exhibited a delayed response (reduction in cell number and increase in cell size) to the culture conditions compared with split-thickness skin, whose response was immediate. In conclusion, excised human skin can be cultured for a minimum of 72 h for analysis of gene expression and immune cell activation. However, the use of split-thickness skin for vaccine formulation studies may not be appropriate because of the nature of the activation. Full-thickness skin explants are a more suitable model to assess cutaneous vaccination ex vivo.
Abstract:
During winter the ocean surface in polar regions freezes over to form sea ice. In summer the upper layers of sea ice and snow melt, producing meltwater that accumulates in melt ponds on the surface of Arctic sea ice. An accurate estimate of the fraction of the sea ice surface covered in melt ponds is essential for a realistic estimate of the albedo in global climate models. We present a melt-pond–sea-ice model that simulates the three-dimensional evolution of melt ponds on an Arctic sea ice surface. The advancements of this model compared to previous models are the inclusion of snow topography; the calculation of meltwater transport rates from hydraulic gradients and ice permeability; and the incorporation of a detailed one-dimensional, thermodynamic radiative balance. Results of model runs simulating first-year and multiyear sea ice are presented. Model results show good agreement with observations, with duration of pond coverage, pond area, and ice ablation comparing well for both the first-year and multiyear ice cases. We investigate the sensitivity of the melt pond cover to changes in ice topography, snow topography, and vertical ice permeability. Snow was found to have an important impact mainly at the start of the melt season, whereas initial ice topography strongly controlled pond size and pond fraction throughout the melt season. A reduction in ice permeability allowed surface flooding of relatively flat first-year ice but had little impact on the pond coverage of rougher multiyear ice. We discuss our results, including model shortcomings and areas of experimental uncertainty.
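To make the transport rule concrete, here is a minimal Darcy-type sketch of vertical meltwater drainage driven by a hydraulic gradient and limited by ice permeability; the formula and the parameter values are illustrative assumptions, not the model's actual discretisation.

```python
def darcy_drainage_flux(pond_depth_m, ice_permeability_m2, ice_thickness_m=1.5,
                        viscosity_pa_s=1.79e-3, rho_water=1000.0, g=9.81):
    """Vertical Darcy flux (m/s): q = (Pi / mu) * rho * g * hydraulic gradient."""
    hydraulic_gradient = pond_depth_m / ice_thickness_m
    return (ice_permeability_m2 / viscosity_pa_s) * rho_water * g * hydraulic_gradient

# Example: a 0.2 m deep pond over 1.5 m of ice with permeability 1e-11 m^2.
print(darcy_drainage_flux(0.2, 1e-11))
```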
Abstract:
We compare the characteristics of synthetic European droughts generated by the HiGEM [1] coupled climate model run with present-day atmospheric composition with observed drought events extracted from the CRU TS3 data set. The results demonstrate consistency in both the rate of drought occurrence and the spatiotemporal structure of the events. Estimates of the probability density functions for event area, duration and severity are shown to be similar with confidence > 90%. Encouragingly, HiGEM is shown to replicate the extreme tails of the observed distributions and thus the most damaging European drought events. The soil moisture state is shown to play an important role in drought development: once a large-scale drought has been initiated, it is found to be 50% more likely to continue if the local soil moisture is below the 40th percentile. In response to increased concentrations of atmospheric CO2, the modelled droughts are found to increase in duration, area and severity. The drought response can be largely attributed to temperature-driven changes in relative humidity. [1] HiGEM is based on the latest climate configuration of the Met Office Hadley Centre Unified Model (HadGEM1), with the horizontal resolution increased to 1.25 x 0.83 degrees in longitude and latitude in the atmosphere and 1/3 x 1/3 degrees in the ocean.