970 results for Écart de Temps d’Échantillonnage (sampling time deviation)


Relevance:

100.00%

Publisher:

Abstract:

Therapeutic drug monitoring is recommended for dose adjustment of immunosuppressive agents. A growing number of studies support the relevance of the area under the concentration-time curve (AUC) as a biomarker for the therapeutic monitoring of cyclosporine (CsA) in hematopoietic stem cell transplantation. However, for reasons intrinsic to the way the AUC is calculated, its use in the clinical setting is impractical. Limited sampling strategies, based on regression approaches (R-LSS) or Bayesian approaches (B-LSS), are practical alternatives for satisfactory estimation of the AUC. For these methodologies to be applied effectively, however, their design must accommodate clinical reality, notably by requiring a minimal number of concentrations collected over a short sampling window. Particular attention should also be paid to ensuring their proper development and validation. It is also worth noting that irregularity in the timing of blood sample collection can have a non-negligible impact on the predictive performance of R-LSS; yet, to date, this impact had not been studied. This doctoral thesis addresses these issues in order to allow accurate and practical estimation of the AUC. The studies were carried out in the context of CsA use in pediatric patients who had undergone hematopoietic stem cell transplantation. First, multiple regression and population pharmacokinetic (Pop-PK) approaches were used constructively to develop and properly validate LSS. Next, several Pop-PK models were evaluated, keeping in mind their intended use for AUC estimation.
The performance of B-LSS targeting different versions of the AUC was also studied. Finally, the impact of deviations between actual blood sampling times and the planned nominal times on the predictive performance of R-LSS was quantified using a simulation approach covering diverse, realistic scenarios of potential errors in the blood sampling schedule. This work first led to the development of R-LSS and B-LSS with satisfactory clinical performance that are also practical, since they involve 4 or fewer sampling points obtained within 4 hours post-dose. The Pop-PK analysis retained a two-compartment structural model with a lag time. However, the final model (notably with covariates) did not improve B-LSS performance compared with the structural models (without covariates). Furthermore, we showed that B-LSS perform better for the AUC derived from simulated concentrations that exclude residual error, which we termed the "underlying AUC", than for the observed AUC calculated directly from measured concentrations. Finally, our results demonstrated that irregularity in blood sampling times has a substantial impact on the predictive performance of R-LSS; this impact depends on the number of samples required, but even more on the duration of the sampling process involved. We also showed that sampling-time errors committed when the concentration is changing rapidly are those that most affect the predictive power of R-LSS. More interestingly, we highlighted that even though different R-LSS may perform similarly when based on nominal times, their tolerance to sampling-time errors can differ widely.
Indeed, proper consideration of the impact of these errors can lead to more reliable selection and use of R-LSS. Through an in-depth investigation of different aspects underlying limited sampling strategies, this thesis has provided notable methodological improvements and proposed new avenues to ensure their reliable and informed use, while fostering their fit with clinical practice.
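The regression-based limited sampling idea can be sketched in a few lines: fit a linear model that predicts the full AUC from a handful of early concentrations. The sketch below is illustrative only, using a hypothetical one-compartment profile and invented parameter ranges, not the thesis's pediatric CsA data or its validated coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-compartment profiles: C(t) = C0 * exp(-k*t).
t_lss = np.array([0.5, 1.0, 2.0, 4.0])   # 4 samples within 4 h post-dose
n = 60                                    # simulated "patients"
k = rng.uniform(0.15, 0.45, n)            # elimination rate constant (1/h)
c0 = rng.uniform(800.0, 1600.0, n)        # concentration at t=0 (ng/mL)

# Reference AUC over 0-12 h, exact for an exponential profile.
auc = c0 / k * (1.0 - np.exp(-12.0 * k))

# Measured concentrations at the limited sampling times, with 5% noise.
conc = c0[:, None] * np.exp(-k[:, None] * t_lss)
conc *= rng.normal(1.0, 0.05, conc.shape)

# R-LSS: multiple linear regression AUC ~ b0 + b1*C(0.5) + ... + b4*C(4).
X = np.column_stack([np.ones(n), conc])
beta, *_ = np.linalg.lstsq(X, auc, rcond=None)
pred = X @ beta

# Predictive performance as mean absolute percentage error.
mape = float(np.mean(np.abs(pred - auc) / auc) * 100.0)
print(f"4-point R-LSS, MAPE vs reference AUC: {mape:.1f}%")
```

A real R-LSS would of course be developed on observed profiles and validated on an independent cohort, as the thesis stresses; this only shows the mechanics of the regression step.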

Relevance:

30.00%

Publisher:

Abstract:

The only true book, for Proust, is the translation of lost impressions whose trace survives in our sensory memory. Characters enter the text of the Recherche by striking the hero's sensibility. Yet, "always already there," the grandmother, like the mother, belongs to a reality that was never imprinted, a reality prior to the narrator's consciousness and thus prior to the text. Nevertheless, the grandmother is a mother who grows old and dies. She therefore appears to the narrator by following the reverse path of alterity. From an immediate presence for the hero, she must become other, an unfamiliar old woman, indefinite in her movement toward death, so that the text can restore a first impression of her. It is precisely in this distance to be traveled, this itinerary between the immediacy of the starting point and the first impression, that the specificity of the grandmother's character touches what Proust himself would call the "neuralgia" of his text. For maternal reality to become an object of literary style, it must bend to the writer's stroke. Yet the mother figure, as elaborated in the Recherche, resists this "bending." The grandmother figure allows Proust to express the reality of the mother who declines and dies, a mother whom the hand of the son becoming a writer renders malleable.

Relevance:

20.00%

Publisher:

Abstract:

This research assesses the potential impact of weekly weather variability on the incidence of cryptosporidiosis using time series zero-inflated Poisson (ZIP) and classification and regression tree (CART) models. Data on weather variables, notified cryptosporidiosis cases and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively. Both the time series ZIP and CART models show a clear association between weather variables (maximum temperature, relative humidity, rainfall and wind speed) and cryptosporidiosis. The time series CART models indicated that, when weekly maximum temperature exceeded 31°C and relative humidity was less than 63%, the relative risk of cryptosporidiosis was 13.64 (expected morbidity: 39.4; 95% confidence interval: 30.9–47.9). These findings may have applications as a decision support tool in planning disease control and risk management programs for cryptosporidiosis.
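The CART side of such an analysis can be sketched with a shallow regression tree on synthetic weekly data, in which elevated risk is hard-coded to mimic the reported split (temperature above 31°C together with humidity below 63%). The data-generating rule, rates, and tree settings below are assumptions for illustration, not the study's.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 520  # roughly ten years of weeks (illustrative)

temp = rng.uniform(18.0, 36.0, n)   # weekly max temperature (deg C)
humid = rng.uniform(40.0, 90.0, n)  # relative humidity (%)
rain = rng.gamma(2.0, 10.0, n)      # rainfall (mm)
wind = rng.uniform(5.0, 30.0, n)    # wind speed (km/h)

# Hard-coded risk rule echoing the reported split: elevated weekly
# incidence when temp > 31 C and relative humidity < 63 %.
rate = np.where((temp > 31.0) & (humid < 63.0), 30.0, 3.0)
cases = rng.poisson(rate)

X = np.column_stack([temp, humid, rain, wind])
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, cases)

# Predicted weekly counts for a hot/dry week vs a mild/humid week.
high = tree.predict(np.array([[34.0, 55.0, 10.0, 15.0]]))[0]
low = tree.predict(np.array([[25.0, 75.0, 10.0, 15.0]]))[0]
print(f"predicted weekly cases: high-risk {high:.1f}, low-risk {low:.1f}")
```

A depth-2 tree suffices here because the simulated risk rule is exactly a two-variable threshold interaction, which is the kind of structure CART recovers naturally.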

Relevance:

20.00%

Publisher:

Abstract:

Background: It remains unclear whether it is possible to develop a spatiotemporal epidemic prediction model for cryptosporidiosis. This paper examined the impact of socioeconomic and weather factors on cryptosporidiosis and explored the possibility of developing such a model using socioeconomic and weather data in Queensland, Australia.
Methods: Data on weather variables, notified cryptosporidiosis cases and socioeconomic factors in Queensland were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively. Three-stage spatiotemporal classification and regression tree (CART) models were developed to examine the association between socioeconomic and weather factors and the monthly incidence of cryptosporidiosis in Queensland, Australia. The spatiotemporal CART model was used for predicting outbreaks of cryptosporidiosis in Queensland, Australia.
Results: The classification tree model (with incidence rates defined as binary presence/absence) showed that there was an 87% chance of an occurrence of cryptosporidiosis in a local government area (LGA) if the socio-economic index for the area (SEIFA) exceeded 1021, while the regression tree model (based on non-zero incidence rates) showed that when SEIFA was between 892 and 945 and temperature exceeded 32°C, the relative risk (RR) of cryptosporidiosis was 3.9 (mean morbidity: 390.6/100,000, standard deviation (SD): 310.5) compared to the monthly average incidence of cryptosporidiosis. When SEIFA was less than 892, the RR of cryptosporidiosis was 4.3 (mean morbidity: 426.8/100,000, SD: 319.2). A prediction map for cryptosporidiosis outbreaks was produced from the outputs of the spatiotemporal CART models.
Conclusions: The results of this study suggest that spatiotemporal CART models based on socioeconomic and weather variables can be used for predicting outbreaks of cryptosporidiosis in Queensland, Australia.
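The classification-stage result (an 87% occurrence chance above a SEIFA threshold) can be illustrated with a depth-1 classification tree on invented LGA-month data; the simulated probabilities and the 1021 cutoff are taken from the reported split, but everything else below is assumed.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 400  # hypothetical LGA-months

seifa = rng.uniform(800.0, 1100.0, n)  # socio-economic index for the area
temp = rng.uniform(15.0, 38.0, n)      # monthly max temperature (deg C)

# Simulated occurrence rule mimicking the reported split:
# 87% chance of at least one case when SEIFA > 1021, else much lower.
present = rng.random(n) < np.where(seifa > 1021.0, 0.87, 0.25)

X = np.column_stack([seifa, temp])
clf = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X, present)

# Estimated occurrence probability for a high- vs low-SEIFA area.
p_hi = clf.predict_proba(np.array([[1050.0, 25.0]]))[0, 1]
p_lo = clf.predict_proba(np.array([[900.0, 25.0]]))[0, 1]
print(f"P(occurrence): SEIFA 1050 -> {p_hi:.2f}, SEIFA 900 -> {p_lo:.2f}")
```

A single split recovers the SEIFA threshold because it is the only source of signal in the simulation; the study's three-stage model would, of course, chain such trees with regression stages for non-zero incidence.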

Relevance:

20.00%

Publisher:

Abstract:

Despite the surge in online retail sales in recent years, consumers remain reluctant to complete the online shopping process. A number of authors have attributed this reluctance to purchase online to apparent barriers. However, such barriers have not yet been fully examined within a theoretical context. This research explores the application of the perceived risk theoretical framework. Specifically, the influence that perceived performance risk has on the phenomenon of Internet Abandoned Cart Syndrome (ACS) is evaluated. To explore this phenomenon, a number of extrinsic cues are identified as playing a major role in the performance evaluation process of online purchases. The results of this study suggest that the extrinsic cues of brand, reputation, design and price have an overall impact on the performance evaluation process just prior to an online purchase. Varying these cues either positively or negatively had a strong impact on performance evaluation. Further, positive or negative reputation was found to be heavily associated with shopping cart abandonment.

Relevance:

20.00%

Publisher:

Abstract:

It is now widely accepted that first year students benefit from pedagogies which mediate and support their transitions to university, and assist them to develop an adaptive student identity. We present an initiative which takes an alternative and additional approach to this way of viewing the first year experience. Based on research into creative industries career trajectories, this initiative focuses on the establishment of nascent career identity and professional self-concept amongst 600 first semester Bachelor of Creative Industries (BCI) students at QUT. The BCI is offered as a three year undergraduate program involving self-selection of majors, minors and electives, and also as a four year double degree with Business and Law faculties. Students engage in a scaffolded process of initial career visioning and reflective course planning, based on their own industry and careers research, guided by industry-active academic and careers staff, and drawing upon the experiences of final year students.

Relevance:

20.00%

Publisher:

Abstract:

The benefits of applying tree-based methods to the purpose of modelling financial assets as opposed to linear factor analysis are increasingly being understood by market practitioners. Tree-based models such as CART (classification and regression trees) are particularly well suited to analysing stock market data, which is noisy and often contains non-linear relationships and high-order interactions. CART was originally developed in the 1980s by medical researchers disheartened by the stringent assumptions applied by traditional regression analysis (Breiman et al. [1984]). In the intervening years, CART has been successfully applied to many areas of finance such as the classification of financial distress of firms (see Frydman, Altman and Kao [1985]), asset allocation (see Sorensen, Mezrich and Miller [1996]), equity style timing (see Kao and Shumaker [1999]) and stock selection (see Sorensen, Miller and Ooi [2000])...
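The point about non-linear relationships and high-order interactions can be made concrete with a toy comparison: synthetic cross-sectional returns driven purely by a two-factor interaction, fitted by a linear factor model and by a shallow CART. All factor names, numbers, and thresholds below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
n = 1000  # cross-section of hypothetical stocks

value = rng.normal(size=n)      # standardized value score
momentum = rng.normal(size=n)   # standardized momentum score

# Returns driven by a high-order interaction: a premium only when a
# stock scores high on BOTH factors, plus idiosyncratic noise.
ret = np.where((value > 0.0) & (momentum > 0.0), 0.03, -0.01)
ret = ret + rng.normal(0.0, 0.005, n)

X = np.column_stack([value, momentum])
r2_linear = LinearRegression().fit(X, ret).score(X, ret)
r2_cart = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, ret).score(X, ret)
print(f"in-sample R^2 -- linear factor model: {r2_linear:.2f}, CART: {r2_cart:.2f}")
```

The linear model can only capture the marginal effect of each factor, while the two-level tree recovers the threshold interaction almost exactly, which is the structural advantage the passage describes.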

Relevance:

20.00%

Publisher:

Abstract:

Top lists of and praise for the economy's fastest growing firms abound in business media around the world. Similarly, in academic research there has been a tendency to equate firm growth with business success. This tendency appears to be particularly pronounced in, but not confined to, entrepreneurship research. In this study we critically examine this tendency to portray firm growth as more or less universally favorable. While several theories suggest that growth drives profitability, we first show that the available empirical evidence does not support the existence of a general, positive relationship between growth and profitability. Using the theoretical lens of the Resource-Based View (RBV), we then argue that sound growth usually starts with achieving sufficient levels of profitability. In summary, our theoretical argument is as follows: in a population of SMEs, superior profitability is likely to be indicative of having built a resource-based competitive advantage. Building such a valuable and hard-to-copy advantage may at first constrain growth. However, the underlying advantage itself and the financial resources generated through high profitability make it possible for firms in this situation to then achieve sound and sustainable growth, which may require building a series of temporary advantages, without having to sacrifice profitability. By contrast, when firms strive for high growth starting from low profitability, the latter often indicates a lack of competitive advantage. Growth must therefore be achieved in head-to-head competition with equally attractive alternatives, leading to profitability deterioration rather than improvement. In addition, these low-profitability firms are unlikely to be able to finance strategies toward building valuable and difficult-to-imitate advantages while growing.

Relevance:

20.00%

Publisher:

Abstract:

Background and Aims: The objective of the study was to compare data obtained from the Cosmed K4 b2 and the Deltatrac II™ metabolic cart for the purpose of determining the validity of the Cosmed K4 b2 in measuring resting energy expenditure. Methods: Nine adult subjects (four male, five female) were measured. Resting energy expenditure was measured in consecutive sessions using the Cosmed K4 b2 and the Deltatrac II™ metabolic cart separately, and then using both devices simultaneously, performed in random order. Resting energy expenditure (REE) data from both devices were then compared with values obtained from predictive equations. Results: Bland and Altman analysis revealed a mean bias between data obtained from the Cosmed K4 b2 and the Deltatrac II™ metabolic cart for the four variables REE, respiratory quotient (RQ), VCO2 and VO2 of 268±702 kcal/day, -0.0±0.2, 26.4±118.2 ml/min and 51.6±126.5 ml/min, respectively. The corresponding limits of agreement for the same four variables were all large. Bland and Altman analysis also revealed a larger mean bias between predicted REE and measured REE using Cosmed K4 b2 data (-194±603 kcal/day) than using Deltatrac II™ metabolic cart data (73±197 kcal/day). Conclusions: Variability between the two devices was very high and a degree of measurement error was detected. Data from the Cosmed K4 b2 provided variable results on comparison with predicted values, and the device would thus seem invalid for measuring resting energy expenditure in adults.
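The Bland and Altman statistics reported above (mean bias and 95% limits of agreement) reduce to a short calculation on the paired differences. The paired values below are invented for illustration, not the study's nine-subject data.

```python
import numpy as np

# Invented paired REE measurements (kcal/day) for nine subjects.
cosmed = np.array([1850.0, 1620.0, 1540.0, 2100.0, 1480.0,
                   1700.0, 1990.0, 1560.0, 1640.0])
deltatrac = np.array([1602.0, 1575.0, 1300.0, 1804.0, 1210.0,
                      1488.0, 1650.0, 1295.0, 1400.0])

diff = cosmed - deltatrac
bias = diff.mean()              # mean bias between the two devices
sd = diff.std(ddof=1)           # sample SD of the differences
loa_low = bias - 1.96 * sd      # lower 95% limit of agreement
loa_high = bias + 1.96 * sd     # upper 95% limit of agreement

print(f"bias = {bias:.0f} kcal/day, "
      f"limits of agreement = [{loa_low:.0f}, {loa_high:.0f}]")
```

As in the study, the width of the limits of agreement, not the bias alone, is what decides whether two devices can be used interchangeably.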

Relevance:

20.00%

Publisher:

Abstract:

Species distribution modelling (SDM) typically analyses species’ presence together with some form of absence information. Ideally absences comprise observations or are inferred from comprehensive sampling. When such information is not available, then pseudo-absences are often generated from the background locations within the study region of interest containing the presences, or else absence is implied through the comparison of presences to the whole study region, e.g. as is the case in Maximum Entropy (MaxEnt) or Poisson point process modelling. However, the choice of which absence information to include can be both challenging and highly influential on SDM predictions (e.g. Oksanen and Minchin, 2002). In practice, the use of pseudo- or implied absences often leads to an imbalance where absences far outnumber presences. This leaves analysis highly susceptible to ‘naughty-noughts’: absences that occur beyond the envelope of the species, which can exert strong influence on the model and its predictions (Austin and Meyers, 1996). Also known as ‘excess zeros’, naughty noughts can be estimated via an overall proportion in simple hurdle or mixture models (Martin et al., 2005). However, absences, especially those that occur beyond the species envelope, can often be more diverse than presences. Here we consider an extension to excess zero models. The two-staged approach first exploits the compartmentalisation provided by classification trees (CTs) (as in O’Leary, 2008) to identify multiple sources of naughty noughts and simultaneously delineate several species envelopes. Then SDMs can be fit separately within each envelope, and for this stage, we examine both CTs (as in Falk et al., 2014) and the popular MaxEnt (Elith et al., 2006). We introduce a wider range of model performance measures to improve treatment of naughty noughts in SDM. 
We retain an overall measure of model performance, the area under the receiver operating characteristic (ROC) curve (AUC), but focus on its constituent measures of false negative rate (FNR) and false positive rate (FPR), and how these relate to the threshold in the predicted probability of presence that delimits predicted presence from absence. We also propose error rates more relevant to users of predictions: the false omission rate (FOR), the chance that a predicted absence corresponds to (and hence wastes) an observed presence, and the false discovery rate (FDR), reflecting those predicted (or potential) presences that correspond to absence. A high FDR may be desirable since it could help target future search efforts, whereas zero or low FOR is desirable since it indicates none of the (often valuable) presences have been ignored in the SDM. For illustration, we chose Bradypus variegatus, a species previously published as an exemplar species for MaxEnt by Phillips et al. (2006). We used CTs to increasingly refine the species envelope, starting with the whole study region (E0) and eliminating more and more potential naughty noughts (E1–E3). When combined with an SDM fit within the species envelope, the best CT SDM had similar AUC and FPR to the best MaxEnt SDM, but otherwise performed better. The FNR and FOR were greatly reduced, suggesting that CTs handle absences better. Interestingly, MaxEnt predictions showed low discriminatory performance, with the most common predicted probability of presence lying in the same range (0.00–0.20) for both true absences and presences. In summary, this example shows that SDMs can be improved by introducing an initial hurdle to identify naughty noughts and partition the envelope before applying SDMs. This improvement was barely detectable via AUC and FPR, yet visible in FOR, FNR, and the comparison of the predicted probability of presence distributions for presence and absence.
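The four error rates discussed (FPR, FNR, FDR, FOR) all follow directly from the confusion matrix of thresholded presence/absence predictions. A minimal sketch with an invented ten-site example:

```python
import numpy as np

def sdm_error_rates(observed, predicted):
    """Error rates for binary presence/absence predictions from an SDM."""
    obs = np.asarray(observed, dtype=bool)
    pred = np.asarray(predicted, dtype=bool)
    tp = int(np.sum(pred & obs))    # predicted present, observed present
    fp = int(np.sum(pred & ~obs))   # predicted present, observed absent
    fn = int(np.sum(~pred & obs))   # predicted absent, observed present
    tn = int(np.sum(~pred & ~obs))  # predicted absent, observed absent
    return {
        "FPR": fp / (fp + tn),  # false positive rate
        "FNR": fn / (fn + tp),  # false negative rate
        "FDR": fp / (fp + tp),  # predicted presences that were absences
        "FOR": fn / (fn + tn),  # predicted absences hiding a real presence
    }

# Invented example: 10 sites, with absences outnumbering presences.
observed  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 0, 1, 1, 0, 0, 0, 0, 0]
rates = sdm_error_rates(observed, predicted)
print(rates)  # FDR = 0.5, FOR ~ 0.17 for this split
```

Note that FDR and FOR condition on the prediction rather than the observation, which is why they speak to a user deciding where to search next, while FPR and FNR speak to model fit.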