Abstract:
Protecting native biodiversity against alien invasive species requires powerful methods to anticipate these invasions and to protect native species assumed to be at risk. Here, we describe how species distribution models (SDMs) can be used to identify areas predicted as suitable for rare native species and also predicted as highly susceptible to invasion by alien species, at present and under future climate and land-use scenarios. To assess the condition and dynamics of such conflicts, we developed a combined predictive modelling (CPM) approach, which predicts species distributions by combining two SDMs fitted using subsets of predictors classified as acting at either regional or local scales. We illustrate the CPM approach for an alien invader and a rare species associated with similar habitats in northwest Portugal. Combined models predict a wider variety of potential species responses, providing more informative projections of species distributions and future dynamics than traditional, non-combined models. They also provide more informative insight regarding current and future rare-invasive conflict areas. For our studied species, conflict areas of highest conservation relevance are predicted to decrease over the next decade, supporting previous reports that some invasive species may contract their geographic range and impact due to climate change. More generally, our results highlight the added value of the combined approach for addressing practical issues in conservation and management programs, especially those aimed at mitigating the impact of invasive plants, land-use and climate changes in sensitive regions.
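As an orientation to the kind of combination the CPM approach relies on, the sketch below fits two habitat-suitability models on separate regional-scale and local-scale predictor subsets and merges their predictions; the dummy data, predictor names, and the geometric-mean combination rule are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch only: combining two habitat-suitability models fitted on
# regional-scale vs. local-scale predictors. The predictor names and the
# multiplicative combination rule are assumptions for demonstration, not the
# exact CPM formulation used in the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictors: regional (climate) vs. local (land use) scale
X_regional = rng.normal(size=(n, 2))   # e.g. temperature, precipitation
X_local = rng.normal(size=(n, 2))      # e.g. land cover, soil type
y = rng.integers(0, 2, size=n)         # presence/absence (dummy data)

# Fit one SDM per predictor subset
sdm_regional = LogisticRegression().fit(X_regional, y)
sdm_local = LogisticRegression().fit(X_local, y)

# Combine the two suitability surfaces (here: geometric mean of probabilities)
p_reg = sdm_regional.predict_proba(X_regional)[:, 1]
p_loc = sdm_local.predict_proba(X_local)[:, 1]
p_combined = np.sqrt(p_reg * p_loc)

# Cells flagged as suitable; a second, analogous combined model fitted for the
# other species would then be intersected with this one to map conflict areas.
suitable = p_combined > 0.5
print(f"Cells flagged as suitable: {suitable.sum()} of {n}")
```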
Abstract:
RATIONALE AND OBJECTIVES: Dose reduction may compromise patient care because of a decrease in image quality. Therefore, the amount of dose savings achievable with new dose-reduction techniques needs to be thoroughly assessed. To avoid repeated studies in one patient, chest computed tomography (CT) scans with different dose levels were performed in corpses, comparing model-based iterative reconstruction (MBIR) as a tool to enhance image quality with current standard full-dose imaging. MATERIALS AND METHODS: Twenty-five human cadavers were scanned (CT HD750) after contrast medium injection at different, decreasing dose levels D0-D5 and respectively reconstructed with MBIR. The data at full-dose level, D0, were additionally reconstructed with standard adaptive statistical iterative reconstruction (ASIR), which represented the full-dose baseline reference (FDBR). Two radiologists independently compared image quality (IQ) in 3-mm multiplanar reformations for soft-tissue evaluation of D0-D5 to FDBR (-2, diagnostically inferior; -1, inferior; 0, equal; +1, superior; and +2, diagnostically superior). For statistical analysis, the intraclass correlation coefficient (ICC) and the Wilcoxon test were used. RESULTS: Mean CT dose index values (mGy) were as follows: D0/FDBR = 10.1 ± 1.7, D1 = 6.2 ± 2.8, D2 = 5.7 ± 2.7, D3 = 3.5 ± 1.9, D4 = 1.8 ± 1.0, and D5 = 0.9 ± 0.5. Mean IQ ratings were as follows: D0 = +1.8 ± 0.2, D1 = +1.5 ± 0.3, D2 = +1.1 ± 0.3, D3 = +0.7 ± 0.5, D4 = +0.1 ± 0.5, and D5 = -1.2 ± 0.5. All values demonstrated a significant difference to baseline (P < .05), except mean IQ for D4 (P = .61). ICC was 0.91. CONCLUSIONS: Compared to ASIR, MBIR allowed for a significant dose reduction of 82% without impairment of IQ. This resulted in a calculated mean effective dose below 1 mSv.
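For readers who want to reproduce the style of analysis reported (inter-reader agreement by ICC, per-dose-level comparison against baseline by a Wilcoxon signed-rank test), the hedged sketch below runs both on hypothetical rating data; the `pingouin` package is assumed for the ICC and none of the numbers correspond to the study.

```python
# Illustrative sketch of the reported statistics (inter-reader ICC and a
# Wilcoxon signed-rank test per dose level) on hypothetical rating data.
import numpy as np
import pandas as pd
import pingouin as pg            # assumed available for the ICC
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_cases = 25

# Hypothetical -2..+2 image-quality scores from two readers for one dose level
reader1 = rng.integers(-1, 3, size=n_cases)
reader2 = np.clip(reader1 + rng.integers(-1, 2, size=n_cases), -2, 2)

# Intraclass correlation coefficient (long format expected by pingouin)
df = pd.DataFrame({
    "case": np.tile(np.arange(n_cases), 2),
    "reader": ["R1"] * n_cases + ["R2"] * n_cases,
    "score": np.concatenate([reader1, reader2]),
})
icc = pg.intraclass_corr(data=df, targets="case", raters="reader", ratings="score")
print(icc[["Type", "ICC"]])

# Wilcoxon signed-rank test of the mean scores against the baseline (score 0)
mean_scores = (reader1 + reader2) / 2
stat, p = wilcoxon(mean_scores)   # tests median difference from 0
print(f"Wilcoxon p-value: {p:.3f}")
```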
Abstract:
Methods like Event History Analysis can show the existence of diffusion and part of its nature, but do not study the process itself. Nowadays, thanks to the increasing performance of computers, processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion mainly inspired by the model developed by Braun and Gilardi (2006). I first develop a theoretical framework of policy diffusion that presents the main internal drivers of policy diffusion - such as the preference for the policy, the effectiveness of the policy, the institutional constraints, and the ideology - and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed by these interdependencies, is therefore a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies the development of an algorithm and its programming. Once the algorithm has been programmed, the different agents are left to interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on the choices made by its neighbors. As a result, learning follows an S-shaped curve, which leads to partial convergence - global divergence and local convergence - that triggers the emergence of political clusters, i.e. the creation of regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning that not only does a policy need time to deploy its effects, but a country also needs time to find the best-suited policy. To conclude, diffusion is a phenomenon that emerges from complex interactions, and the outcomes of my model are in line with both theoretical expectations and empirical evidence.
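As a flavour of the mechanism that produces the S-shaped adoption curve, the sketch below implements a minimal learning-driven diffusion on a grid of agents whose adoption probability rises with the share of adopting neighbours; the grid size, update rule, and parameters are illustrative assumptions rather than the Braun and Gilardi (2006) specification.

```python
# Minimal sketch of a learning-driven diffusion dynamic on a grid of agents:
# each agent adopts the policy with a probability that grows with the share of
# adopting neighbours. Grid size, update rule, and parameters are illustrative
# assumptions, not the Braun and Gilardi (2006) specification.
import numpy as np

rng = np.random.default_rng(42)
size, steps = 30, 60
policy = (rng.random((size, size)) < 0.02).astype(int)  # a few early adopters

adoption_curve = []
for _ in range(steps):
    # Share of adopting neighbours (von Neumann neighbourhood, wrap-around)
    neigh = (np.roll(policy, 1, 0) + np.roll(policy, -1, 0) +
             np.roll(policy, 1, 1) + np.roll(policy, -1, 1)) / 4.0
    # Learning: adoption probability increases with neighbours' adoption
    p_adopt = 0.05 + 0.6 * neigh
    policy = np.maximum(policy, (rng.random((size, size)) < p_adopt).astype(int))
    adoption_curve.append(policy.mean())

# The cumulative adoption share traces the S-shaped diffusion curve,
# and contiguous adopting regions form the "political clusters" described.
print([round(x, 2) for x in adoption_curve[::10]])
```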
Abstract:
BACKGROUND AND OBJECTIVES: The suprascapular nerve (SSN) block is frequently performed for different shoulder pain conditions and for perioperative and postoperative pain control after shoulder surgery. Blind and image-guided techniques have been described, all of which target the nerve within the supraspinous fossa or at the suprascapular notch. This classic target point is not always ideal when ultrasound (US) is used because it is located deep under the muscles, and hence the nerve is not always visible. Blocking the nerve in the supraclavicular region, where it passes underneath the omohyoid muscle, could be an attractive alternative. METHODS: In the first step, 60 volunteers were scanned with US in both the supraclavicular and the classic target areas. The visibility of the SSN in both regions was compared. In the second step, 20 needles were placed into or immediately next to the SSN in the supraclavicular region of 10 cadavers. The accuracy of needle placement was determined by injection of dye and subsequent dissection. RESULTS: In the supraclavicular region of volunteers, the nerve was identified in 81% of examinations (95% confidence interval [CI], 74%-88%) and located at a median depth of 8 mm (interquartile range, 6-9 mm). Near the suprascapular notch (supraspinous fossa), the nerve was unambiguously identified in 36% of examinations (95% CI, 28%-44%) (P < 0.001) and located at a median depth of 35 mm (interquartile range, 31-38 mm; P < 0.001). In the cadaver investigation, the rate of correct needle placement with the supraclavicular approach was 95% (95% CI, 86%-100%). CONCLUSIONS: Visualization of the SSN with US is better in the supraclavicular region as compared with the supraspinous fossa. The anatomic dissections confirmed that our novel supraclavicular SSN block technique is accurate.
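The interval estimates quoted above are binomial confidence intervals for proportions; as a hedged illustration of how such an interval can be reproduced, the snippet below uses `statsmodels` on hypothetical counts (the abstract does not report the exact number of examinations or the CI method used).

```python
# Illustrative computation of a 95% confidence interval for a visibility
# proportion, using hypothetical counts (the abstract does not report the
# exact number of examinations or the CI method used).
from statsmodels.stats.proportion import proportion_confint

n_exams = 120          # hypothetical number of examinations
n_visible = 97         # hypothetical number with the nerve identified

rate = n_visible / n_exams
low, high = proportion_confint(n_visible, n_exams, alpha=0.05, method="wilson")
print(f"Visibility {rate:.0%} (95% CI {low:.0%}-{high:.0%})")
```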
Abstract:
Background: The imatinib trough plasma concentration (C(min)) correlates with clinical response in cancer patients. Therapeutic drug monitoring (TDM) of plasma C(min) is therefore suggested. In practice, however, blood sampling for TDM is often not performed at trough. The corresponding measurement is thus only remotely informative about C(min) exposure. Objectives: The objectives of this study were to improve the interpretation of randomly measured concentrations by using a Bayesian approach for the prediction of C(min), incorporating correlation between pharmacokinetic parameters, and to compare the predictive performance of this method with alternative approaches, by comparing predictions with actual measured trough levels, and with predictions obtained by a reference method, respectively. Methods: A Bayesian maximum a posteriori (MAP) estimation method accounting for correlation (MAP-ρ) between pharmacokinetic parameters was developed on the basis of a population pharmacokinetic model, which was validated on external data. Thirty-one paired random and trough levels, observed in gastrointestinal stromal tumour patients, were then used for the evaluation of the Bayesian MAP-ρ method: individual C(min) predictions, derived from single random observations, were compared with actual measured trough levels for assessment of predictive performance (accuracy and precision). The method was also compared with alternative approaches: classical Bayesian MAP estimation assuming uncorrelated pharmacokinetic parameters, linear extrapolation along the typical elimination constant of imatinib, and non-linear mixed-effects modelling (NONMEM) first-order conditional estimation (FOCE) with interaction. Predictions of all methods were finally compared with 'best-possible' predictions obtained by a reference method (NONMEM FOCE, using both random and trough observations for individual C(min) prediction). Results: The developed Bayesian MAP-ρ method accounting for correlation between pharmacokinetic parameters allowed unbiased prediction of imatinib C(min) with a precision of ±30.7%. This predictive performance was similar for the alternative methods that were applied. The range of relative prediction errors was, however, smallest for the Bayesian MAP-ρ method and largest for the linear extrapolation method. When compared with the reference method, predictive performance was comparable for all methods. The time interval between random and trough sampling did not influence the precision of Bayesian MAP-ρ predictions. Conclusion: Clinical interpretation of randomly measured imatinib plasma concentrations can be assisted by Bayesian TDM. Classical Bayesian MAP estimation can be applied even without consideration of the correlation between pharmacokinetic parameters. Individual C(min) predictions are expected to vary less with Bayesian TDM than with linear extrapolation. Bayesian TDM could be developed in the future for other targeted anticancer drugs and for the prediction of other pharmacokinetic parameters that have been correlated with clinical outcomes.
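To make the MAP-ρ idea concrete, the sketch below performs maximum a posteriori estimation of individual clearance and volume from a single randomly timed concentration, using a prior covariance that correlates the two log-parameters, and then extrapolates the trough; the one-compartment steady-state model, parameter values, and error model are illustrative placeholders rather than the published imatinib model.

```python
# Sketch of Bayesian MAP estimation of individual PK parameters from a single
# randomly timed concentration, with correlation between CL and V in the prior
# (the "MAP-rho" idea). Population values, variances and the error model are
# illustrative placeholders, not the published imatinib model.
import numpy as np
from scipy.optimize import minimize

dose, tau = 400.0, 24.0                 # mg, once-daily dosing interval (h)
ka = 0.6                                # 1/h, fixed absorption rate constant
theta_pop = np.log([14.0, 350.0])       # population log CL (L/h), log V (L)
omega = np.array([[0.09, 0.05],         # prior covariance of log CL, log V
                  [0.05, 0.10]])
sigma_prop = 0.25                       # proportional residual error

def conc(t, cl, v):
    """One-compartment model, first-order absorption, steady state."""
    ke = cl / v
    return (dose * ka / (v * (ka - ke))) * (
        np.exp(-ke * t) / (1 - np.exp(-ke * tau))
        - np.exp(-ka * t) / (1 - np.exp(-ka * tau)))

t_obs, c_obs = 6.0, 1.8                 # random sample: 6 h post dose, 1.8 mg/L

def neg_log_post(theta):
    cl, v = np.exp(theta)
    c_pred = conc(t_obs, cl, v)
    resid = ((c_obs - c_pred) / (sigma_prop * c_pred)) ** 2
    prior = (theta - theta_pop) @ np.linalg.inv(omega) @ (theta - theta_pop)
    return resid + prior

theta_map = minimize(neg_log_post, theta_pop).x
cl_i, v_i = np.exp(theta_map)
c_min = conc(tau, cl_i, v_i)            # predicted trough at end of interval
print(f"MAP CL={cl_i:.1f} L/h, V={v_i:.0f} L, predicted Cmin={c_min:.2f} mg/L")
```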
Abstract:
Recent advances in CT technology have significantly improved the clinical utility of cardiac CT. Major efforts have been made to optimize image quality, standardize protocols and limit radiation exposure. Rapid progress in post-processing tools dedicated not only to coronary artery assessment but also to the cardiac cavities, valves and veins has extended the applications of cardiac CT. This potential can, however, only be exploited optimally by considering the current appropriate indications for use as well as the current technical limitations. Coronary artery disease and related ischemic cardiomyopathy remain the major applications of cardiac CT and, at the same time, the most complex ones. Integration of specific knowledge is mandatory for optimal use in this area, for asymptomatic as well as symptomatic patients, with specific regard to patients with acute chest pain. This review aims to propose a practical approach to implementing appropriate indications in routine practice. Emerging indications and future directions are also discussed. Adequate preparation of the patient, training of physicians, and multidisciplinary interaction between the actors involved are the keys to successful implementation of cardiac CT in daily practice.
Abstract:
This paper provides a new and accessible approach to establishing certain results concerning the discounted penalty function. The direct approach consists of two steps. In the first step, closed-form expressions are obtained in the special case in which the claim amount distribution is a combination of exponential distributions. A rational function is useful in this context. For the second step, one observes that the family of combinations of exponential distributions is dense. Hence, it suffices to reformulate the results of the first step to obtain general results. The surplus process has downward and upward jumps, modeled by two independent compound Poisson processes. If the distribution of the upward jumps is exponential, a series of new results can be obtained with ease. Subsequently, certain results of Gerber and Shiu [H. U. Gerber and E. S. W. Shiu, North American Actuarial Journal 2(1): 48–78 (1998)] can be reproduced. The two-step approach is also applied when an independent Wiener process is added to the surplus process. Certain results are related to Zhang et al. [Z. Zhang, H. Yang, and S. Li, Journal of Computational and Applied Mathematics 233: 1773–1784 (2010)], which uses different methods.
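For orientation, the central object and the first-step claim-size family can be written down explicitly; the following are the standard definitions from the Gerber–Shiu framework, restated here rather than copied from the paper's notation.

```latex
% Expected discounted penalty function (Gerber-Shiu), with T the time of ruin,
% U(T^-) the surplus immediately before ruin, |U(T)| the deficit at ruin,
% \delta \ge 0 a force of interest, and w a penalty function:
\phi(u) = \mathbb{E}\left[ e^{-\delta T}\, w\bigl(U(T^-), |U(T)|\bigr)\,
          \mathbf{1}_{\{T < \infty\}} \;\middle|\; U(0) = u \right]

% Step 1 of the two-step approach: claim amounts with a density that is a
% combination of exponentials (the coefficients A_i may be negative),
f(x) = \sum_{i=1}^{n} A_i\, \beta_i e^{-\beta_i x}, \qquad x > 0, \qquad
\sum_{i=1}^{n} A_i = 1 .
% Step 2 exploits the fact that this family is dense in the distributions on
% (0, \infty), so results proved for it carry over to general claim amounts.
```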
Abstract:
Background. Obesity is nowadays considered a major public health issue in most developed countries. This paper provides an overview of the population data currently available in Spain and of the approach used to develop preventive strategies in the country. Methods. The review of available population data is based on individually measured weight and height as well as on determinants. On this basis, the approach used in the country to develop preventive strategies is discussed. Results. According to the DORICA study, the prevalence of obesity (BMI ≥30 kg m−2) is 15.5% in Spanish adults aged 25–60 years (13.2% in men and 17.5% in women). Obesity rates are higher among women aged 45 years and older, of low social class, and living in semi-urban areas. Population estimates for the prevalence of obesity in Spanish children and young people, based on the enKid study, are 13.9% for the whole group. In this study, overweight and obesity are related to the absence of breastfeeding, low consumption of fruit and vegetables, high consumption of cakes, buns, soft drinks and butchery products, and low physical activity levels, with a positive association with time spent watching TV. In 2005, the Spanish Ministry of Health, jointly with the Spanish Agency for Food Safety and Nutrition, launched the multifaceted NAOS strategy for nutrition, physical activity and the prevention of obesity. The important role of the family and the school setting, as well as the responsibility of the Health Administration and Pediatric Care in the prevention of obesity, is highlighted in the document. The need for environmental actions is recognised. The PERSEO programme, a multicomponent school-based intervention project, is part of the strategy currently in place. Conclusion. Obesity is a public health issue in Spain. A national multifaceted strategy was launched to counteract the problem. Environmental and policy actions are a priority. Young children and their families are among the main target groups.
Abstract:
Most cases of emphysema are managed conservatively. However, in severe symptomatic emphysema associated with hyperinflation, lung volume reduction (LVR) may be proposed to improve dyspnea, exercise capacity, pulmonary function and walking distance, and to decrease long-term mortality. LVR may be achieved either surgically (LVRS) or endoscopically (EVLR, with valves or coils), according to specific clinical criteria. Currently, the optimal approach is discussed in a multidisciplinary setting, which permits a personalized evaluation of the patient's clinical status and allows the best possible therapeutic intervention to be proposed to the patient.
Abstract:
Background The 'database search problem', that is, the strengthening of a case - in terms of probative value - against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and centrally opposing conclusions. This represents a challenging obstacle in teaching but also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view on the main debated issues, along with further clarity. Methods As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular, Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches. Conclusions The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional, purely formulaic expressions. The method's graphical environment, along with its computational and probabilistic architectures, represents a rich package that offers analysts and discussants additional modes of interaction, concise representation, and coherent communication.
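As a deliberately simplified numerical companion to the graphical models discussed, the snippet below carries out the single Bayes update at the heart of the source-attribution question, using only a flat prior over a hypothetical population and a random match probability; it does not reproduce the database-search correction itself or the authors' network structures.

```python
# Deliberately simplified numerical illustration of the probabilistic update
# that the Bayesian networks in the paper formalize. Only three ingredients are
# used - a population of potential sources, a random match probability, and a
# flat prior - so this is NOT the full database-search argument, merely the
# Bayes step at its core.
N = 1_000_000        # hypothetical size of the population of potential sources
gamma = 1e-6         # hypothetical random match probability of the profile
prior = 1.0 / N      # flat prior that a given individual is the source

# P(match | source) = 1, P(match | not source) = gamma
posterior = (1.0 * prior) / (1.0 * prior + gamma * (1.0 - prior))
likelihood_ratio = 1.0 / gamma

print(f"Likelihood ratio of the match: {likelihood_ratio:.0f}")
print(f"Posterior probability that this individual is the source: {posterior:.3f}")
```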
Abstract:
By using suitable parameters, we present a unified approach for describing four methods for representing categorical data in a contingency table. These methods include: correspondence analysis (CA), the alternative approach using Hellinger distance (HD), the log-ratio (LR) alternative, which is appropriate for compositional data, and the so-called non-symmetrical correspondence analysis (NSCA). We then make an appropriate comparison among these four methods and some illustrative examples are given. Some approaches based on cumulative frequencies are also linked and studied using matrices. Key words: correspondence analysis, Hellinger distance, non-symmetrical correspondence analysis, log-ratio analysis, Taguchi inertia
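The common computational core of the four methods is a weighted decomposition of a transformed contingency table; as a hedged illustration, the sketch below performs classical CA on an invented table via the SVD of the standardized residuals (the HD, LR and NSCA variants modify how the table is transformed and weighted before the decomposition).

```python
# Compact sketch of classical correspondence analysis on a toy contingency
# table, via the SVD of the matrix of standardized residuals. This is only the
# CA member of the family the paper unifies.
import numpy as np

table = np.array([[30., 10., 5.],       # invented contingency table
                  [10., 40., 15.],
                  [ 5., 15., 20.]])

P = table / table.sum()                  # correspondence matrix
r = P.sum(axis=1)                        # row masses
c = P.sum(axis=0)                        # column masses

# Standardized residuals: D_r^{-1/2} (P - r c^T) D_c^{-1/2}
S = np.diag(1 / np.sqrt(r)) @ (P - np.outer(r, c)) @ np.diag(1 / np.sqrt(c))

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
print("Total inertia (chi-square / n):", np.sum(sv**2))

# Principal row coordinates on the first two dimensions
F = np.diag(1 / np.sqrt(r)) @ U * sv
print(np.round(F[:, :2], 3))
```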
Abstract:
BACKGROUND: Nineteen patients were evaluated after closure of intrathoracic esophageal leaks by a pediculated muscle flap onlay repair in the presence of mediastinal and systemic sepsis. METHODS: Intrathoracic esophageal leaks with mediastinitis and systemic sepsis occurred after delayed spontaneous perforations (n = 7) or surgical and endoscopic interventions (n = 12). Six patients presented with fulminant anastomotic leaks. Seven patients had previous attempts to close the leak by surgery (n = 4), stenting (n = 2), or both (n = 1). The debrided defects measured up to 2 x 12 cm or involved three quarters of the anastomotic circumference and were closed either by a full thickness diaphragmatic flap (n = 13) or a pediculated intrathoracically transposed extrathoracic muscle flap (n = 6). All patients had postoperative contrast esophagography between days 7 and 10 and an endoscopic evaluation 4 to 6 months after surgery. RESULTS: There was no 30-day mortality. During follow-up (4 to 42 months), 16 patients (84%) revealed functional and morphological restoration of the esophagointestinal integrity without further interventions. One patient required serial dilatations for a stricture, and 1 underwent temporary stenting for a persistent fistula; both patients had normal control endoscopy during follow-up. A third patient requiring permanent stenting for stenosis died from gastrointestinal bleeding due to stent erosion during follow-up. CONCLUSIONS: Intrathoracic esophageal leaks may be closed efficiently by a muscle flap onlay approach in the presence of mediastinitis and where a primary repair seems risky. The same holds true for fulminant intrathoracic anastomotic leaks after esophagectomy or other surgical interventions at the gastroesophageal junction.
Abstract:
Objectives: The study objective was to derive reference pharmacokinetic curves of antiretroviral drugs (ART) based on available population pharmacokinetic (Pop-PK) studies that can be used to optimize therapeutic drug monitoring guided dosage adjustment. Methods: A systematic search of Pop-PK studies of 8 ART in adults was performed in PubMed. To simulate reference PK curves, a summary of the PK parameters was obtained for each drug based on a meta-analysis approach. Most studies used a one-compartment model, which was thus chosen as the reference model. Models using bi-exponential disposition were simplified to one compartment, since the first distribution phase was rapid and not determinant for the description of the terminal elimination phase, which is most relevant for this project. Different absorption models were standardized to first-order absorption processes. Apparent clearance (CL), apparent volume of distribution of the terminal phase (Vz) and the absorption rate constant (ka), together with their inter-individual variability, were pooled into summary mean values weighted by the number of plasma levels; intra-individual variability was weighted by the number of individuals in each study. Simulations based on the summary PK parameters served to construct concentration PK percentiles (NONMEM®). Concordance between individual and summary parameters was assessed graphically using forest plots. To test robustness, the difference in simulated curves based on published and summary parameters was calculated using efavirenz as a probe drug. Results: CL was readily accessible from all studies. For one-compartment studies, Vz was the central volume of distribution; for two-compartment studies, Vz was CL/λz. ka was used directly or, for more complicated absorption models, derived from the mean absorption time (MAT), assuming MAT = 1/ka. The value of CL for each drug was in excellent agreement across all Pop-PK models, suggesting that the minimal concentration derived from the summary models was adequately characterized. The comparison of the concentration vs. time profile for efavirenz between published and summary PK parameters revealed no more than a 20% difference. Although our approach appears adequate for estimation of the elimination phase, the simplification of the absorption phase might lead to a small bias shortly after drug intake. Conclusions: Simulated reference percentile curves based on such an approach represent a useful tool for interpreting drug concentrations. This Pop-PK meta-analysis approach should be further validated and could be extended to elaborate more sophisticated computerized tools for the Bayesian TDM of ART.
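To illustrate the last two steps in miniature (weight-pooling the study-level parameters and simulating percentile curves), the sketch below pools hypothetical CL, Vz and ka estimates and simulates a one-compartment first-order-absorption profile with log-normal inter-individual variability directly in NumPy; all numbers are placeholders and the original simulations were performed in NONMEM.

```python
# Sketch of the pooling and simulation steps: summary PK parameters are built
# as weighted means across hypothetical Pop-PK studies, then concentration
# percentiles are simulated from a one-compartment first-order-absorption
# model. Values are placeholders; the paper performed the simulations in NONMEM.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical study-level estimates: (CL L/h, Vz L, ka 1/h, n plasma levels)
studies = np.array([[9.5, 250.0, 0.55, 400],
                    [10.8, 280.0, 0.65, 250],
                    [9.9, 260.0, 0.50, 600]])
w = studies[:, 3] / studies[:, 3].sum()
cl, vz, ka = (studies[:, :3] * w[:, None]).sum(axis=0)   # weighted means

dose, omega_cl, omega_v = 600.0, 0.3, 0.25   # mg; inter-individual SD (log scale)
t = np.linspace(0.25, 24.0, 96)

# Simulate 1000 individuals with log-normal inter-individual variability
n_sim = 1000
cl_i = cl * np.exp(rng.normal(0, omega_cl, n_sim))
v_i = vz * np.exp(rng.normal(0, omega_v, n_sim))
ke_i = cl_i / v_i
conc = (dose * ka / (v_i[:, None] * (ka - ke_i[:, None]))) * (
    np.exp(-ke_i[:, None] * t) - np.exp(-ka * t))

# Reference percentile curves (2.5th, 50th, 97.5th) across individuals
p2_5, p50, p97_5 = np.percentile(conc, [2.5, 50, 97.5], axis=0)
print(f"Median concentration at 12 h: {p50[t.searchsorted(12.0)]:.2f} mg/L")
```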
Abstract:
A project recently launched by the Faculty of Biology and Medicine of Lausanne introduces the approach of facing death both during dissection and in the course of clinical activities. Existential questions relating to mortality are bound to arise sooner or later during the course of study. For the sake of humanized clinical practice, these questions must be confronted. In response to a request by a student association, an accompanying curriculum with active student contribution, built around encounters with death in anatomy and in clinical situations, was created in Lausanne. Students will benefit from this new program throughout their curriculum. This program is the first of its kind in Switzerland.
Abstract:
Background and Objectives: Photodynamic therapy of superficial bladder cancer may cause damage to the normal surrounding bladder wall. Prevention of such damage is important for bladder healing. We studied the influence of photosensitizer concentration, irradiation parameters and production of reactive oxygen species (ROS) on the photodynamically induced damage in the porcine urothelium in vitro. The aim was to determine the threshold conditions for cell survival. Methods: Living porcine bladder mucosae were incubated with a solution of the hexyl ester of 5-aminolevulinic acid (HAL). The mucosae were irradiated with increasing light doses, and cell alterations were evaluated by scanning electron microscopy and by Sytox green fluorescence. The urothelial survival score was correlated with Protoporphyrin IX (PpIX) photobleaching and with the intracellular fluorescence of Rhodamine 123, reflecting ROS production. Results: The mortality ratio was dependent on PpIX concentration. After 3 hours of incubation, the threshold radiant exposures for blue light were 0.15 and 0.75 J/cm2 (irradiance 30 and 75 mW/cm2, respectively) and for white light 0.55 J/cm2 (irradiance 30 mW/cm2). The photobleaching rate increased with decreasing irradiance. Interestingly, the DHR123/R123 reporter system correlated well with the threshold exposures under all conditions used. Conclusions: We have determined radiant exposures sparing half of the normal urothelial cells. We propose that the use of low irradiance, combined with systems reporting ROS production in the irradiated tissue, could improve in vivo dosimetry and optimize PDT.