940 results for Complex network. Optimal path. Optimal path cracks


Relevance: 40.00%

Abstract:

OBJECTIVES: We sought to develop an automated methodology for the continuous updating of optimal cerebral perfusion pressure (CPPopt) for patients after severe traumatic head injury, using continuous monitoring of cerebrovascular pressure reactivity. We then validated the CPPopt algorithm by determining the association between outcome and the deviation of actual CPP from CPPopt. DESIGN: Retrospective analysis of prospectively collected data. SETTING: Neurosciences critical care unit of a university hospital. PATIENTS: A total of 327 traumatic head-injury patients admitted between 2003 and 2009 with continuous monitoring of arterial blood pressure and intracranial pressure. MEASUREMENTS AND MAIN RESULTS: Arterial blood pressure, intracranial pressure, and CPP were continuously recorded, and the pressure reactivity index was calculated online. Outcome was assessed at 6 months. An automated curve-fitting method was applied to determine the CPP at which the pressure reactivity index is minimal (CPPopt). A time trend of CPPopt was created using a moving 4-hr window, updated every minute. Identification of CPPopt was, on average, feasible during 55% of the whole recording period. Patient outcome correlated with the continuously updated difference between median CPP and CPPopt (chi-square = 45, p < .001; outcome dichotomized into fatal and nonfatal). Mortality was associated with relative "hypoperfusion" (CPP < CPPopt), severe disability with "hyperperfusion" (CPP > CPPopt), and favorable outcome with smaller deviations of CPP from the individualized CPPopt. While deviations from global target CPP values of 60 mm Hg and 70 mm Hg were also related to outcome, these relationships were less robust. CONCLUSIONS: Real-time CPPopt could be identified during the recording time of the majority of the patients. Patients with a median CPP close to CPPopt were more likely to have a favorable outcome than those whose median CPP deviated widely from CPPopt. Deviations from individualized CPPopt were more predictive of outcome than deviations from a common target CPP. CPP management to optimize cerebrovascular pressure reactivity should be the subject of a future clinical trial in severe traumatic head-injury patients.
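For illustration, here is a minimal sketch of the kind of windowed curve-fitting step the abstract describes: PRx samples from a 4-hr window are binned by CPP, a parabola is fitted to the bin means, and the CPP at its minimum is returned as CPPopt. The function name, bin width, CPP range, and acceptance checks below are assumptions of the sketch, not the study's published algorithm.

```python
import numpy as np

def cppopt_from_window(cpp, prx, bin_width=5.0):
    """Estimate CPPopt from one 4-hr window of CPP and PRx samples.

    Bins PRx by CPP, fits a parabola to the bin means, and returns
    the CPP at the parabola's minimum (None if no U-shape is found).
    Bin width, CPP range, and sample thresholds are illustrative.
    """
    edges = np.arange(40.0, 125.0, bin_width)    # assumed CPP range, mm Hg
    centers, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (cpp >= lo) & (cpp < hi)
        if mask.sum() >= 10:                     # require enough samples per bin
            centers.append((lo + hi) / 2.0)
            means.append(prx[mask].mean())
    if len(centers) < 3:
        return None                              # too few bins to fit a parabola
    a, b, c = np.polyfit(centers, means, 2)
    if a <= 0:
        return None                              # no minimum: curve not U-shaped
    return -b / (2.0 * a)                        # vertex = CPP of minimal PRx
```

A continuously updated trend would call such a function once per minute on the trailing 4-hr window; windows where the fit fails would correspond to the periods (roughly 45% of the recording time here) in which CPPopt could not be identified.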

Relevance: 40.00%

Abstract:

Summary: The effect of policy changes on the optimal feeding of beef cattle and the timing of slaughter.

Relevance: 40.00%

Abstract:

The purpose of the present article is to take stock of a recent exchange in Organizational Research Methods between critics (Rönkkö & Evermann, 2013) and proponents (Henseler et al., 2014) of partial least squares path modeling (PLS-PM). The two target articles centered on six principal issues, namely whether PLS-PM: (1) can be truly characterized as a technique for structural equation modeling (SEM); (2) is able to correct for measurement error; (3) can be used to validate measurement models; (4) accommodates small sample sizes; (5) is able to provide null hypothesis tests for path coefficients; and (6) can be employed in an exploratory, model-building fashion. We summarize and elaborate further on the key arguments underlying the exchange, drawing from the broader methodological and statistical literature in order to offer additional thoughts concerning the utility of PLS-PM and ways in which the technique might be improved. We conclude with recommendations as to whether and how PLS-PM serves as a viable contender to SEM approaches for estimating and evaluating theoretical models.
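To make the object of the debate concrete, here is a deliberately minimal sketch of the core PLS-PM iteration for a single two-construct model, using mode A outer estimation and the centroid inner weighting scheme; the function, toy data, and defaults are illustrative and omit everything (bootstrap inference, model evaluation, consistency corrections) that the exchange actually turns on.

```python
import numpy as np

def pls_pm_two_blocks(X1, X2, tol=1e-6, max_iter=300):
    """Two-block PLS path model: mode A outer, centroid inner scheme.

    X1, X2: (n, p) indicator matrices of an exogenous and an endogenous
    construct. Returns latent variable scores and the path coefficient.
    """
    std = lambda M: (M - M.mean(axis=0)) / M.std(axis=0, ddof=1)
    X1, X2 = std(X1), std(X2)
    w1, w2 = np.ones(X1.shape[1]), np.ones(X2.shape[1])
    for _ in range(max_iter):
        y1, y2 = std(X1 @ w1), std(X2 @ w2)      # outer: weighted composites
        s = np.sign(np.corrcoef(y1, y2)[0, 1])   # centroid scheme: sign of corr
        z1, z2 = s * y2, s * y1                  # inner proxies from neighbors
        w1n = X1.T @ z1 / len(z1); w1n /= np.linalg.norm(w1n)  # mode A weights
        w2n = X2.T @ z2 / len(z2); w2n /= np.linalg.norm(w2n)
        done = max(np.abs(w1n - w1).max(), np.abs(w2n - w2).max()) < tol
        w1, w2 = w1n, w2n
        if done:
            break
    y1, y2 = std(X1 @ w1), std(X2 @ w2)
    return y1, y2, float(np.corrcoef(y1, y2)[0, 1])  # standardized path coef

# Toy data: true structural effect 0.6, noisy indicators
rng = np.random.default_rng(0)
xi = rng.normal(size=500)
eta = 0.6 * xi + 0.8 * rng.normal(size=500)
X1 = xi[:, None] + 0.5 * rng.normal(size=(500, 3))
X2 = eta[:, None] + 0.5 * rng.normal(size=(500, 3))
print(pls_pm_two_blocks(X1, X2)[2])   # below 0.6: attenuated, cf. issue (2)
```

The printed coefficient falls short of the true 0.6 because the composites still carry measurement error, which is the attenuation at stake in issue (2) of the exchange.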

Relevance: 40.00%

Abstract:

BACKGROUND: Scientists have long been trying to understand the molecular mechanisms of diseases in order to design preventive and therapeutic strategies. For some diseases, it has become evident that it is not enough to obtain a catalogue of the disease-related genes; one must also uncover how disruptions of molecular networks in the cell give rise to disease phenotypes. Moreover, with the unprecedented wealth of information available, even obtaining such a catalogue is extremely difficult. PRINCIPAL FINDINGS: We developed a comprehensive gene-disease association database by integrating associations from several sources that cover different biomedical aspects of diseases. In particular, we focus on the current knowledge of human genetic diseases, including Mendelian, complex, and environmental diseases. To assess the concept of modularity of human diseases, we performed a systematic study of the emergent properties of human gene-disease networks by means of network topology and functional annotation analysis. The results indicate a highly shared genetic origin of human diseases and show that for most diseases, including Mendelian, complex, and environmental diseases, functional modules exist. Moreover, a core set of biological pathways is found to be associated with most human diseases. We obtained similar results when studying clusters of diseases, suggesting that related diseases might arise from dysfunction of common biological processes in the cell. CONCLUSIONS: For the first time, we include Mendelian, complex, and environmental diseases in an integrated gene-disease association database and show that the concept of modularity applies to all of them. We furthermore provide a functional analysis of disease-related modules, yielding important new biological insights that might not be discovered when considering each of the gene-disease association repositories independently. Hence, we present a suitable framework for the study of how genetic and environmental factors, such as drugs, contribute to diseases. AVAILABILITY: The gene-disease networks used in this study and part of the analysis are available at http://ibi.imim.es/DisGeNET/DisGeNETweb.html#Download
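As a toy illustration of the modularity idea (not the paper's actual pipeline), the sketch below builds a bipartite gene-disease graph from association pairs, projects it onto diseases with edges weighted by the number of shared genes, and extracts modules by greedy modularity optimization; the handful of example pairs are well-known associations chosen only for brevity.

```python
import networkx as nx
from networkx.algorithms import bipartite, community

def disease_modules(gene_disease_pairs):
    """Project a bipartite gene-disease graph onto diseases and
    return modules found by greedy modularity maximization."""
    genes = {g for g, _ in gene_disease_pairs}
    diseases = {d for _, d in gene_disease_pairs}
    B = nx.Graph()
    B.add_nodes_from(genes, bipartite=0)
    B.add_nodes_from(diseases, bipartite=1)
    B.add_edges_from(gene_disease_pairs)
    # Diseases sharing genes become linked; weight = shared-gene count
    D = bipartite.weighted_projected_graph(B, diseases)
    return list(community.greedy_modularity_communities(D, weight="weight"))

pairs = [("BRCA1", "breast cancer"), ("BRCA2", "breast cancer"),
         ("BRCA1", "ovarian cancer"),
         ("APP", "Alzheimer disease"), ("PSEN1", "Alzheimer disease"),
         ("APP", "cerebral amyloid angiopathy")]
for module in disease_modules(pairs):
    print(sorted(module))   # two modules: cancers vs. neurodegeneration
```

On a full download of the gene-disease network, the same projection and clustering steps would expose the disease modules and shared pathways discussed above.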

Relevance: 40.00%

Abstract:

Due to advances in sensor networks and remote sensing technologies, the acquisition and storage rates of meteorological and climatological data increase every day and call for novel and efficient processing algorithms. A fundamental problem of data analysis and modeling is the spatial prediction of meteorological variables in complex orography, which serves, among other purposes, extended climatological analyses, the assimilation of data into numerical weather prediction models, the preparation of inputs to hydrological models, and the real-time monitoring and short-term forecasting of weather.

In this thesis, a new framework for spatial estimation is proposed by taking advantage of a class of algorithms emerging from statistical learning theory. Nonparametric kernel-based methods for nonlinear data classification, regression, and target detection, known as support vector machines (SVM), are adapted for the mapping of meteorological variables in complex orography.

With the advent of high-resolution digital elevation models, the field of spatial prediction met new horizons. In fact, by exploiting image processing tools along with physical heuristics, a large number of terrain features which account for the topographic conditions at multiple spatial scales can be extracted. Such features are highly relevant for the mapping of meteorological variables because they control a considerable part of the spatial variability of meteorological fields in the complex Alpine orography. For instance, patterns of orographic rainfall, wind speed, and cold air pools are known to be correlated with particular terrain forms, e.g., convex/concave surfaces and upwind sides of mountain slopes.

Kernel-based methods are employed to learn the nonlinear statistical dependence which links the multidimensional space of geographical and topographic explanatory variables to the variable of interest, that is, the wind speed as measured at the weather stations or the occurrence of orographic rainfall patterns as extracted from sequences of radar images. Compared to low-dimensional models integrating only the geographical coordinates, the proposed framework opens a way to regionalize meteorological variables which are multidimensional in nature and rarely show spatial auto-correlation in the original space, which makes the use of classical geostatistics cumbersome.

The challenges explored in the thesis are manifold. First, the complexity of the models is optimized to impose appropriate smoothness properties and reduce the impact of noisy measurements. Secondly, a multiple-kernel extension of SVM is considered to select the multiscale features which explain most of the spatial variability of wind speed. Then, SVM target detection methods are implemented to describe the orographic conditions which cause persistent and stationary rainfall patterns. Finally, the optimal splitting of the data is studied to estimate realistic performances and confidence intervals characterizing the uncertainty of the predictions.

The resulting maps of average wind speeds find applications in renewable resource assessment and open a route to decreasing the temporal scale of analysis to meet hydrological requirements. Furthermore, the maps depicting the susceptibility to orographic rainfall enhancement can be used to improve current radar-based quantitative precipitation estimation and forecasting systems and to generate stochastic ensembles of precipitation fields conditioned upon the orography.
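As a rough illustration of the regression setting (not the thesis code), the sketch below fits a kernel SVR from DEM-derived terrain features to station wind speeds, with model complexity selected by cross-validation, echoing the first challenge above; the feature names and synthetic data are placeholders.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Placeholder training set: one row per weather station, columns standing
# in for coordinates and multiscale DEM features (elevation, slope, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))   # [x, y, elevation, slope, curvature, exposure]
y = 3.0 + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=200)   # wind speed, m/s

# Smoothness vs. noise trade-off handled by cross-validated C, gamma, epsilon
model = GridSearchCV(
    make_pipeline(StandardScaler(), SVR(kernel="rbf")),
    param_grid={"svr__C": [1, 10, 100],
                "svr__gamma": [0.01, 0.1, 1.0],
                "svr__epsilon": [0.05, 0.1, 0.2]},
    cv=5,
)
model.fit(X, y)
print(model.best_params_)
# A wind map is then obtained by predicting on the DEM feature grid:
print(model.predict(X[:3]))
```

The multiple-kernel feature selection and the target detection steps mentioned above would replace the single RBF kernel and the regression loss, respectively.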

Relevance: 40.00%

Abstract:

Accurate diagnosis of orthopedic device-associated infections can be challenging. Culture of tissue biopsy specimens is often considered the gold standard; however, there is currently no consensus on the ideal incubation time for specimens. The aim of our study was to assess the yield of a 14-day incubation protocol for tissue biopsy specimens from revision surgery (joint replacements and internal fixation devices) in a general orthopedic and trauma surgery setting. Medical records were reviewed retrospectively in order to identify cases of infection according to predefined diagnostic criteria. From August 2009 to March 2012, 499 tissue biopsy specimens were sampled from 117 cases. In 70 cases (59.8%), at least one sample showed microbiological growth. Among them, 58 cases (82.9%) were considered infections and 12 cases (17.1%) were classified as contaminations. The median time to positivity in the cases of infection was 1 day (range, 1 to 10 days), compared to 6 days (range, 1 to 11 days) in the cases of contamination (P < 0.001). Fifty-six (96.6%) of the infection cases were diagnosed within 7 days of incubation. In conclusion, the results of our study show that incubation of tissue biopsy specimens beyond 7 days is not productive in a general orthopedic and trauma surgery setting. Prolonged 14-day incubation might, however, be of interest in particular settings in which the prevalence of slow-growing microorganisms and anaerobes is higher.

Relevance: 40.00%

Abstract:

Protective immune responses rely on TCR-mediated recognition of antigens presented by MHC molecules. T cells directed against tumor antigens are thought to express TCRs of lower affinity/avidity than pathogen-specific T lymphocytes. An attractive strategy to improve anti-tumor T cell responses is to adoptively transfer CD8+ T cells engineered with TCRs of optimized affinity. However, the mechanisms that control optimal T cell activation and responsiveness remain poorly defined. We aim at characterizing TCR-pMHC binding parameters and downstream signaling events that regulate T cell functionality by using an in silico designed panel of tumor antigen-specific TCRs of incremental affinity for pMHC (KD 100 μM to 15 nM). We found that optimal T cell responses (cytokine secretion and target cell killing) occurred within a well-defined window of TCR-pMHC binding affinity (5 μM to 1 μM), while a drastic functional decline was detected in T cells expressing very low and very high TCR affinities, which was not caused by any increase in apoptosis. Whole-genome microarray analysis revealed that T cells with optimal TCR affinities highly up-regulated transcription of genes typical of T cell activation (e.g., IFN-γ, NF-κB and TNFR), while reduced expression was detected in T cells of very low or very high TCR affinity. Strikingly, hierarchical clustering showed that the latter two variants clustered together with the unstimulated control T cells. Yet, despite common clustering, several genes seemed to be differentially expressed, suggesting that the mechanisms involved in this "unresponsiveness state" may differ between those two variants. Finally, calcium influx assays also demonstrated attenuated responses in T cells of very high TCR affinity. Our results indicate that optimal T cell function is tightly controlled within a defined TCR affinity window through very proximal TCR-mediated mechanisms, possibly at the TCR-pMHC binding interface. Uncovering the mechanisms regulating optimal/maximal T cell function is essential to understand and promote therapeutic designs such as adoptive T cell therapy.

Relevance: 40.00%

Abstract:

In the context of a severe economic recession, the Library is compelled to adapt to this changing environment in order to meet the requirements and demands of users with very specific needs. Taking the pillars of sustainable development as a reference point, and extrapolating them to our domain, we establish the following main goals

Relevance: 40.00%

Abstract:

Stop-loss reinsurance is one of the most important reinsurance contracts in the insurance market. From the insurer's point of view, it has an interesting property: it is optimal under the criterion of minimizing the variance of the insurer's cost. The aim of this paper is to contribute to the analysis of the one-period stop-loss contract from the points of view of both the insurer and the reinsurer. Firstly, the influence of the parameters of the reinsurance contract on the correlation coefficient between the cost of the insurer and the cost of the reinsurer is studied. Secondly, the optimal stop-loss contract is obtained when the criterion used is the maximization of the joint survival probability of the insurer and the reinsurer over one period.
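A minimal Monte Carlo sketch of the first question, under an assumed compound Poisson aggregate loss: it shows how the retention M of the stop-loss contract drives the correlation between the insurer's cost min(S, M) and the reinsurer's cost (S - M)+. All distributions and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def stop_loss_split(S, retention):
    """Stop-loss with retention M: the insurer pays min(S, M),
    the reinsurer pays the excess (S - M)+."""
    return np.minimum(S, retention), np.maximum(S - retention, 0.0)

rng = np.random.default_rng(1)
# Assumed aggregate loss: compound Poisson with exponential claim sizes
N = rng.poisson(lam=5.0, size=50_000)
S = np.array([rng.exponential(scale=10.0, size=n).sum() for n in N])

for M in (40.0, 60.0, 80.0):
    retained, ceded = stop_loss_split(S, M)
    rho = np.corrcoef(retained, ceded)[0, 1]
    print(f"M={M:5.1f}  E[insurer]={retained.mean():6.2f}  "
          f"E[reinsurer]={ceded.mean():5.2f}  corr={rho:.3f}")
```

Raising the retention shifts expected cost from the reinsurer to the insurer and changes how strongly the two costs move together, which is what the first part of the paper quantifies analytically.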

Relevance: 40.00%

Abstract:

Summary report. This thesis consists of three essays on optimal dividend strategies; each essay corresponds to a chapter. The first two essays were written in collaboration with Professors Hans Ulrich Gerber and Elias S. W. Shiu and have been published; see Gerber et al. (2006b) and Gerber et al. (2008). The third essay was written in collaboration with Professor Hans Ulrich Gerber. The problem of optimal dividend strategies goes back to de Finetti (1957) and is posed as follows: given the surplus of a company, determine the optimal strategy for distributing dividends. The criterion used is to maximize the sum of the discounted dividends paid to the shareholders until the company's ruin. Since de Finetti (1957), the problem has taken several forms and has been solved for various models. In the classical model of ruin theory, the problem was solved by Gerber (1969) and, more recently, using a different approach, by Azcue and Muler (2005) and Schmidli (2008). In the classical model, money flows in continuously at a constant rate, while the outflows are random: they follow a jump process, namely a compound Poisson process. A good example of such a model is the surplus of an insurance company, for which the inflows and outflows are, respectively, the premiums and the claims. The first panel of Figure 1 illustrates an example. In this thesis, only barrier strategies are considered: whenever the surplus exceeds the barrier level b, the excess is distributed to the shareholders as dividends. The second panel of Figure 1 shows the same surplus path when a barrier at level b is introduced, and the third panel shows the cumulative dividends.

Chapter 1: "Maximizing dividends without bankruptcy". In this first essay, the optimal barriers are computed for different claim-size distributions according to two criteria: (I) the optimal barrier is computed using the usual criterion, which maximizes the expectation of the discounted dividends until ruin; (II) the optimal barrier is computed using a second criterion, which maximizes the expectation of the difference between the discounted dividends until ruin and the deficit at ruin. This essay is inspired by Dickson and Waters (2004), whose idea is to let the shareholders bear the deficit at ruin. This is all the more appropriate for an insurance company, whose ruin must be avoided. In the example of Figure 1, the deficit at ruin is denoted R. Numerical examples allow us to compare the optimal barrier levels in situations I and II. The idea of adding a penalty at ruin was generalized in Gerber et al. (2006a).

Chapter 2: "Methods for estimating the optimal dividend barrier and the probability of ruin". In this second essay, since in practice one never has complete information on the claim-size distribution, only its first few moments are assumed to be known. The essay develops and examines methods for approximating, in this situation, the optimal barrier level according to the usual criterion (case I above). The "de Vylder" and "diffusion" approximations are explained and examined; some of these approximations use two, three, or four of the first moments. Numerical examples allow us to compare the approximations of the optimal barrier level, not only with the exact values but also with one another.

Chapter 3: "Optimal dividends with incomplete information". In this third and final essay, we return to approximation methods for the optimal barrier level when only the first moments of the jump-size distribution are known. This time, the dual model is considered. As in the classical model, there is a continuous flow in one direction and a jump process in the other; unlike the classical model, however, the gains follow a compound Poisson process while the losses are constant and continuous; see Figure 2. Such a model is suitable for a pension fund or a company specializing in discoveries or inventions. The "de Vylder" and "diffusion" approximations, as well as the new "gamma" and "gamma process" approximations, are explained and analyzed. These new approximations seem to give better results in certain cases.
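As an illustration of the barrier strategy in the classical model (not the essays' analytical or approximation methods), the following sketch estimates the expected discounted dividends until ruin by simulation, so that candidate barrier levels b can be compared numerically; the parameters, and the finite horizon that truncates the discounted stream, are assumptions of the sketch.

```python
import numpy as np

def discounted_dividends(u, b, c, lam, claim_mean, delta,
                         horizon=150.0, n_paths=5_000, seed=2):
    """Monte Carlo value of a dividend barrier at level b: surplus starts
    at u, premiums arrive at rate c, claims occur at Poisson rate lam
    with exponential(claim_mean) sizes, dividends are discounted at delta.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        x, t, pv = min(u, b), 0.0, max(u - b, 0.0)   # lump dividend if u > b
        while t < horizon:
            w = rng.exponential(1.0 / lam)           # time until next claim
            t_hit = (b - x) / c                      # time to reach the barrier
            if t_hit < w:
                # At the barrier, premiums are paid out continuously as
                # dividends; discounted value of the stream on [t+t_hit, t+w]:
                pv += (c / delta) * (np.exp(-delta * (t + t_hit))
                                     - np.exp(-delta * (t + w)))
                x = b
            else:
                x += c * w
            t += w
            x -= rng.exponential(claim_mean)         # claim hits the surplus
            if x < 0:
                break                                # ruin: dividends stop
        total += pv
    return total / n_paths

# Compare candidate barriers for one illustrative parameter set
for b in (5.0, 10.0, 15.0):
    v = discounted_dividends(u=5.0, b=b, c=1.2, lam=1.0,
                             claim_mean=1.0, delta=0.03)
    print(b, round(v, 3))
```

Scanning b in this way gives a numerical stand-in for the optimal barrier that Chapters 2 and 3 approximate from the first moments of the claim-size distribution.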