874 results for Filmic approach methods
Abstract:
Objective: Vascular access steal syndrome is a complication occurring in 1-6% of native arterio-venous (AV) fistulas, often due to a large vein diameter. This results in very high flow, which can also cause cardiac overload. The aim of this study is to evaluate the efficacy of a new approach to treating this pathology using an open-pore external scaffolding prosthesis. Methods: This is a retrospective review of all patients presenting with symptomatic high flow after native AV fistula between January 2007 and December 2009 in 3 vascular centers. A pre-operative duplex exam confirmed the diagnosis of high flow. The operation consisted of preparation of the whole fistula, measurement of the flow, and section on the venous side. The vein was wrapped with a 6 to 8 mm open-pore external scaffolding prosthesis (ProVena, BBraun, Germany), sized according to its diameter and to the flow, and then sutured. Measurement of the flow was repeated. Patients were followed by duplex exam at 1 week and at 1, 3, 6 and 12 months. Procedural success was defined as complete implantation of the prosthesis and reduction of the flow. Primary outcomes were reduction of the flow and resolution of the symptoms; the secondary endpoint was patency of the fistula. Results: During the study period, 14 patients, with a mean age of 65.8 years, were operated on with this technique. There were 2 native forearm fistulas and 12 arm fistulas, with a mean pre-operative flow of 2600 ml/min (1800-3800). The mode of presentation was pain in 6 patients, neurological disorders in 10 and necrosis in 4. Moreover, 3 patients had cardiac insufficiency due to high flow in the fistula. The procedure was technically successful in 100% of cases. Re-intervention was necessary in 2 patients due to hematoma. Resolution of the initial symptoms occurred in 13 patients (93%). The mean flow reduction was 1200 ml/min (600-2000). In 1 patient, a persistent steal syndrome despite flow reduction to 1400 ml/min resulted in fistula closure 2 months later. At a mean follow-up of 22 months (4-35), all remaining patients (13/14) presented a patent fistula without recurrence. Conclusion: This new approach appears safe and effective in the treatment of symptomatic high-flow native AV fistulas, significantly reducing the flow while avoiding closure of the vascular access. Longer follow-up with more patients is necessary to evaluate the risk of recurrence.
Abstract:
Using data from the Spanish household budget survey, we investigate life-cycle effects on several product expenditures. A latent-variable model approach is adopted to evaluate the impact of income on expenditures, controlling for the number of members in the family. Two latent factors underlying repeated measures of monetary and non-monetary income are used as explanatory variables in the expenditure regression equations, thus avoiding possible bias associated with measurement error in income. The proposed methodology also handles the case in which product expenditures exhibit a pattern of infrequent purchases. Multiple-group analysis is used to assess the variation of key parameters of the model across various household life-cycle typologies. The analysis discloses significant life-cycle effects on the mean levels of expenditures; it also detects significant life-cycle effects on the way expenditures are affected by income and family size. Asymptotically robust methods are used to account for possible non-normality of the data.
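A minimal LaTeX sketch of the measurement-error logic, assuming a single latent income factor with two observed indicators and one expenditure equation (the paper itself uses two latent income factors and corrections for infrequent purchases, which are omitted here):

```latex
% Measurement model: repeated income measures y_1, y_2 load on latent income \xi
y_1 = \lambda_1 \xi + \varepsilon_1, \qquad y_2 = \lambda_2 \xi + \varepsilon_2
% Structural (expenditure) equation: expenditure e depends on latent income \xi and family size n
e = \alpha + \beta \xi + \gamma n + u
% Regressing e on \xi rather than on an error-contaminated income measure avoids the
% attenuation bias that measurement error in income would otherwise induce in \beta.
```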
Abstract:
The achievable region approach seeks solutions to stochastic optimisation problems by: (i) characterising the space of all possible performances (the achievable region) of the system of interest, and (ii) optimising the overall system-wide performance objective over this space. This is radically different from conventional formulations based on dynamic programming. The approach is explained with reference to a simple two-class queueing system. Powerful new methodologies due to the authors and co-workers are deployed to analyse a general multiclass queueing system with parallel servers and then to develop an approach to optimal load distribution across a network of interconnected stations. Finally, the approach is used for the first time to analyse a class of intensity control problems.
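As background for the two-class example, here is a minimal LaTeX sketch of the achievable region for a two-class M/G/1 queue under non-preemptive, work-conserving scheduling; this is standard conservation-law material, stated as context rather than the authors' general construction:

```latex
% Loads \rho_i = \lambda_i E[S_i], mean waiting times W_1, W_2; Kleinrock's conservation law:
\rho_1 W_1 + \rho_2 W_2 \;=\; \frac{(\rho_1+\rho_2)\,W_0}{1-\rho_1-\rho_2},
\qquad W_0 = \tfrac12\bigl(\lambda_1 E[S_1^2] + \lambda_2 E[S_2^2]\bigr).
% The achievable region is the segment between the two strict-priority performance vectors;
% minimizing a linear cost c_1\lambda_1 W_1 + c_2\lambda_2 W_2 over this region is a linear
% programme whose optimum sits at a vertex, i.e. at a priority (c\mu) rule.
```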
Abstract:
In recent years, research into the potential of type-2 fuzzy sets for managing high levels of uncertainty in the subjective knowledge of experts or in numerical information has focused on control and pattern classification systems. One of the main challenges in designing a type-2 fuzzy logic system is how to estimate the parameters of the type-2 fuzzy membership function (T2MF) and the Footprint of Uncertainty (FOU) from imperfect and noisy datasets. This paper presents an automatic approach for learning and tuning Gaussian interval type-2 membership functions (IT2MFs), with application to multi-dimensional pattern classification problems. T2MFs and their FOUs are tuned according to the uncertainties in the training dataset by a combination of genetic algorithm (GA) and cross-validation techniques. In our GA-based approach, the structure of the chromosome has fewer genes than in other GA methods and chromosome initialization is more precise. The proposed approach addresses the application of the interval type-2 fuzzy logic system (IT2FLS) to the problem of nodule classification in a lung Computer Aided Detection (CAD) system. The designed IT2FLS is compared with its type-1 fuzzy logic system (T1FLS) counterpart. The results demonstrate that the IT2FLS outperforms the T1FLS by more than 30% in terms of classification accuracy.
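To make the object being tuned concrete, here is a short Python sketch of a Gaussian interval type-2 membership function with an uncertain mean; the interval [m1, m2] and the fixed sigma are illustrative placeholders, and the paper's GA and cross-validation tuning of these parameters is not reproduced:

```python
import numpy as np

def gaussian(x, m, sigma):
    """Ordinary Gaussian membership grade."""
    return np.exp(-0.5 * ((x - m) / sigma) ** 2)

def it2_gaussian_membership(x, m1, m2, sigma):
    """Gaussian interval type-2 MF with uncertain mean in [m1, m2] (m1 <= m2).
    Returns (lower, upper) membership grades; the band between them is the FOU."""
    x = np.asarray(x, dtype=float)
    # Upper MF: plateau of 1 between m1 and m2, Gaussian shoulders outside.
    upper = np.where(x < m1, gaussian(x, m1, sigma),
                     np.where(x > m2, gaussian(x, m2, sigma), 1.0))
    # Lower MF: pointwise minimum of the two shoulder Gaussians.
    lower = np.minimum(gaussian(x, m1, sigma), gaussian(x, m2, sigma))
    return lower, upper

# Example with placeholder parameters: evaluate the FOU on a grid.
lo, up = it2_gaussian_membership(np.linspace(0.0, 10.0, 101), m1=4.0, m2=6.0, sigma=1.5)
```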
Abstract:
In this paper we propose a subsampling estimator for the distribution of statistics diverging at either known or unknown rates when the underlying time series is strictly stationary and strong mixing. Based on our results we provide a detailed discussion of how to estimate extreme order statistics with dependent data and present two applications to assessing financial market risk. Our method performs well in estimating Value at Risk and provides a superior alternative to Hill's estimator in operationalizing Safety First portfolio selection.
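As a rough illustration of the two ingredients in the financial application, here is a Python sketch of the Hill estimator and of a block-based subsampling of an empirical Value-at-Risk quantile; the block length, quantile level, and helper names are illustrative, and the authors' estimator (with its handling of divergence rates and mixing conditions) is considerably more refined:

```python
import numpy as np

def hill_gamma(losses, k):
    """Hill estimate of the extreme-value index gamma = 1/alpha,
    computed from the k+1 largest (assumed positive) losses."""
    x = np.sort(np.asarray(losses, dtype=float))[::-1]
    return float(np.mean(np.log(x[:k] / x[k])))

def subsample_var(losses, alpha=0.99, block=250, n_blocks=500, seed=0):
    """Crude subsampling of the empirical Value-at-Risk quantile: VaR is recomputed
    on blocks of consecutive observations so that serial dependence is preserved.
    Returns the full-sample estimate and the block estimates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(losses, dtype=float)
    starts = rng.integers(0, len(x) - block + 1, size=n_blocks)
    sub = np.array([np.quantile(x[s:s + block], alpha) for s in starts])
    return float(np.quantile(x, alpha)), sub
```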
Abstract:
We introduce a variation of the proof for weak approximations that is suitable for studying the densities of stochastic processes which are evaluations of the flow generated by a stochastic differential equation on a random variable that may be anticipating. Our main assumption is that the process and the initial random variable have to be smooth in the Malliavin sense. Furthermore, if the inverse of the Malliavin covariance matrix associated with the process under consideration is sufficiently integrable, then approximations for densities and distributions can also be achieved. We apply these ideas to the case of stochastic differential equations with boundary conditions and the composition of two diffusions.
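For background, the role of the integrable inverse Malliavin covariance can be seen in the classical one-dimensional integration-by-parts representation of a density; the identity below is standard material stated only as context, not the paper's construction:

```latex
% F scalar and Malliavin differentiable, with Malliavin covariance \gamma_F = \|DF\|_H^2.
% If \gamma_F^{-1} is sufficiently integrable, F admits a density that can be written as
p_F(x) \;=\; \mathbb{E}\!\left[\,\mathbf{1}_{\{F > x\}}\;\delta\!\left(\gamma_F^{-1}\,DF\right)\right],
% where \delta denotes the Skorohod integral (divergence operator); approximating F within
% this framework then yields approximations of the density p_F as well.
```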
Abstract:
OBJECTIVE: To determine the risks of prosthesis dislocation, postoperative Trendelenburg gait, and sciatic nerve palsy after a posterior approach compared to a direct lateral approach for adult patients undergoing total hip arthroplasty (THA) for primary osteoarthritis (OA). METHODS: Medline, Embase, CINAHL, and Cochrane databases were searched until August 2003. All published trials comparing posterior and direct lateral surgical approaches to THA in adults with a diagnosis of primary hip osteoarthritis were collected. Retrieved articles were assessed independently for their methodological quality. RESULTS: Four prospective cohort studies involving 241 participants met the inclusion criteria. Regarding dislocation rate, no significant difference between the posterior and direct lateral surgical approaches was found (relative risk 0.35). The presence of postoperative Trendelenburg gait was not significantly different between surgical approaches. The risk of nerve palsy or injury was significantly higher with the direct lateral approach (relative risk 0.16). However, there were no significant differences when comparing this risk nerve by nerve, in particular for the sciatic nerve. Of the other outcomes considered, only the average range of internal rotation in extension of the hip was significantly higher (weighted mean difference 16 degrees) in the posterior approach group (mean 35 degrees, SD 13 degrees) compared to the direct lateral approach (mean 19 degrees, SD 13 degrees). CONCLUSION: The quality and quantity of information extracted from the trials performed to date are insufficient to reach a firm conclusion on the optimal choice of surgical approach for adult patients undergoing primary THA for OA.
Abstract:
Understanding the evolution of intraspecific variance is a major research question in evolutionary biology. While its importance to processes operating at the individual and population levels is well documented, much less is known about its role in macroevolutionary patterns. Nevertheless, both experimental and theoretical evidence suggests that intraspecific variance is susceptible to selection, can transform into interspecific variation and, therefore, is crucial for macroevolutionary processes. The main objectives of this thesis were: (1) to investigate which factors impact the evolution of intraspecific variation in Polygonaceae and to determine whether the evolution of intraspecific variation influences species diversification; and (2) to develop a novel comparative phylogenetic method to model the evolution of intraspecific variation. Using the buckwheat family, Polygonaceae, as a study system, I demonstrated which life-history and ecological traits are relevant to the evolution of intraspecific variation. I analyzed how differential intraspecific variation drives species diversification patterns. I showed with computer simulations the shortcomings of existing comparative methods with respect to intraspecific variation. I developed a novel comparative model that readily incorporates intraspecific variance into phylogenetic comparative methods. The obtained results are complementary, because they affect both empirical and methodological aspects of comparative analysis. Overall, I highlight that intraspecific variation is an important contributor to macroevolutionary patterns and should be explicitly considered in comparative phylogenetic analyses.
Abstract:
Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. The paper describes the use of kernel methods to approach the processing of large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (pollution of soil by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are also discussed.
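As a toy version of the continuous-mapping task, here is a short Python sketch using RBF-kernel support vector regression to interpolate measurements from scattered monitoring stations onto a grid; the coordinates and measurements are synthetic placeholders, and the paper's datasets, kernels, and model selection are not reproduced:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic monitoring data: (x, y) station coordinates and a measured contamination level.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))                        # station locations (km)
values = np.sin(coords[:, 0] / 15.0) + 0.1 * rng.normal(size=200)  # invented measurements

# RBF-kernel Support Vector Regression used as a spatial mapping model.
model = SVR(kernel="rbf", C=10.0, gamma=0.05, epsilon=0.05).fit(coords, values)

# Predict on a regular grid to produce a pollution map.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
pollution_map = model.predict(grid).reshape(gx.shape)
```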
Spanning tests in return and stochastic discount factor mean-variance frontiers: A unifying approach
Abstract:
We propose new spanning tests that assess whether the initial and additional assets share the economically meaningful cost and mean representing portfolios. We prove their asymptotic equivalence to existing tests under local alternatives. We also show that, unlike two-step or iterated procedures, single-step methods such as continuously updated GMM yield numerically identical overidentifying restrictions tests, so there is arguably a single spanning test. To prove these results, we extend optimal GMM inference to deal with singularities in the long-run second moment matrix of the influence functions. Finally, we test for spanning using size and book-to-market sorted US stock portfolios.
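For readers less familiar with the estimator behind these equivalence results, a minimal LaTeX sketch of the continuously updated GMM criterion and the associated overidentifying restrictions test, in standard notation (the paper's extension to a singular long-run second-moment matrix is not shown):

```latex
% Sample moments \bar g_T(\theta) = T^{-1}\sum_{t=1}^{T} g(x_t,\theta); long-run variance estimate \hat S_T(\theta).
\hat\theta_{CU} \;=\; \arg\min_{\theta}\; T\,\bar g_T(\theta)'\,\hat S_T(\theta)^{-1}\,\bar g_T(\theta)
% Because the weighting matrix is re-evaluated at every \theta, the minimized criterion is
% invariant to how the moments are normalized, which underlies the numerical-equivalence
% results; under standard regularity it serves as the overidentifying restrictions (J) test,
% asymptotically \chi^2 with degrees of freedom = (number of moments) - (number of parameters).
```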
Abstract:
Two main approaches are commonly used to empirically evaluate linear factor pricing models: regression and SDF methods, with centred and uncentred versions of the latter. We show that unlike standard two-step or iterated GMM procedures, single-step estimators such as continuously updated GMM yield numerically identical values for prices of risk, pricing errors, Jensen's alphas and overidentifying restrictions tests irrespective of the model validity. Therefore, there is arguably a single approach regardless of the factors being traded or not, or the use of excess or gross returns. We illustrate our results by revisiting Lustig and Verdelhan's (2007) empirical analysis of currency returns.
Abstract:
INTRODUCTION: The management of large lesions of the skull base, such as vestibular schwannomas (VS), is challenging. Microsurgery remains the main treatment option. Combined approaches (planned subtotal resection followed by gamma knife surgery (GKS) for long-term control of the residual tumor) are being increasingly considered to reduce the risk of neurological deficits following complete resection. The current study aims to prospectively evaluate the safety and efficacy of the combined approach in patients with large VS. MATERIALS AND METHODS: We present our experience with planned subtotal resection followed by gamma knife surgery (GKS) in a consecutive series of 20 patients with large vestibular schwannomas, treated between 2009 and 2014 at Lausanne University Hospital, Switzerland. Clinical and radiological data and audiograms were prospectively collected for all patients, before and after surgery and before and after GKS, at regular intervals, in dedicated case-report forms. Additionally, for GKS, dose-planning parameters were registered. RESULTS: Twenty patients (6 males and 14 females) with large VS were treated by this approach. The mean age at the time of surgery was 51.6 years (range 34.4-73.4). The mean presurgical diameter was 36.7 mm (range 26.1-45). The mean presurgical tumor volume was 15.9 cm³ (range 5-34.9). Three patients (15%) needed a second surgical intervention because the volume of the tumor remnant was considered too large for safe GKS. The mean follow-up after surgery was 27.2 months (range 6-61.3). The timing of GKS was decided on the basis of the residual tumor shape and size following surgery. The mean interval between surgery and GKS was 7.6 months (range 4-13.9, median 6 months). The mean tumor volume at the time of GKS was 4.1 cm³ (range 0.5-12.8). The mean prescription isodose volume was 6.3 cm³ (range 0.8-15.5). The mean number of isocenters was 20.4 (range 11-31) and the mean marginal prescription dose was 11.7 Gy (range 11-12). We did not have any major complications in our series. Postoperative status showed normal facial nerve function (House-Brackmann grade I) in all patients. Six patients with useful pre-operative hearing (GR class 1) underwent surgery with the aim of preserving cochlear nerve function; of these patients, 5 (83.3%) remained in GR class 1 and one (16.7%) lost hearing (GR class 5). Two patients with GR class 3 at baseline remained in the same GR class, but tonal audiometry improved in one of them during follow-up. Eleven patients (57.8%) were in GR class 5 preoperatively; one patient's hearing improved after surgery, passing to GR class 3 postoperatively. Following GKS, there were no new neurological deficits, with facial and hearing function remaining identical to that after surgery. CONCLUSION: Our data suggest that planned subtotal resection followed by GKS has an excellent clinical outcome with respect to retaining facial and cochlear nerve function. This represents a paradigm shift in treatment goals, from complete tumor excision to surgery designed to preserve neural function. As long-term results emerge, this combined treatment approach (microsurgery and GKS) will most probably become the standard of care in the management of large vestibular schwannomas.
Abstract:
The Generalized Assignment Problem consists in assigning a set of tasks to a set of agents at minimum cost. Each agent has a limited amount of a single resource, and each task must be assigned to one and only one agent, requiring a certain amount of that agent's resource. We present new metaheuristics for the generalized assignment problem based on hybrid approaches. One metaheuristic is a MAX-MIN Ant System (MMAS), an improved version of the Ant System recently proposed by Stutzle and Hoos for combinatorial optimization problems; it can be seen as an adaptive sampling algorithm that takes into consideration the experience gathered in earlier iterations of the algorithm. Moreover, this heuristic is combined with local search and tabu search heuristics to improve the search. A greedy randomized adaptive search procedure (GRASP) is also proposed. Several neighborhoods are studied, including one based on ejection chains that produces good moves without increasing the computational effort. We present computational results of the comparative performance, followed by concluding remarks and ideas on future research in generalized assignment related problems.
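To fix ideas, here is a minimal Python sketch of a GRASP-style construction with a single-reassignment local search for the generalized assignment problem; the function name and parameters are illustrative, and the MMAS, tabu search, and ejection-chain components described above are omitted:

```python
import random

def grasp_gap(cost, res, cap, iters=200, alpha=0.3, seed=0):
    """Illustrative GRASP for the (minimization) Generalized Assignment Problem.
    cost[i][j], res[i][j]: cost / resource use of giving task j to agent i; cap[i]: capacity of agent i.
    Returns the best assignment found (task -> agent) and its total cost, or (None, inf) if none was feasible."""
    rng = random.Random(seed)
    m, n = len(cost), len(cost[0])
    best, best_val = None, float("inf")

    for _ in range(iters):
        load, assign, feasible = [0.0] * m, [-1] * n, True
        for j in rng.sample(range(n), n):                    # greedy randomized construction
            cands = [i for i in range(m) if load[i] + res[i][j] <= cap[i]]
            if not cands:
                feasible = False
                break
            cands.sort(key=lambda i: cost[i][j])
            rcl = cands[: max(1, int(alpha * len(cands)))]   # restricted candidate list
            i = rng.choice(rcl)
            assign[j], load[i] = i, load[i] + res[i][j]
        if not feasible:
            continue
        improved = True                                      # local search: single-task reassignments
        while improved:
            improved = False
            for j in range(n):
                i0 = assign[j]
                for i in range(m):
                    if i != i0 and load[i] + res[i][j] <= cap[i] and cost[i][j] < cost[i0][j]:
                        load[i0] -= res[i0][j]
                        load[i] += res[i][j]
                        assign[j] = i
                        improved = True
                        break
        val = sum(cost[assign[j]][j] for j in range(n))
        if val < best_val:
            best, best_val = assign[:], val
    return best, best_val
```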
Abstract:
BACKGROUND: The main objective of our study was to assess the impact of a board game on smoking status and smoking-related variables in current smokers. To accomplish this objective, we conducted a randomized controlled trial comparing the game group with a psychoeducation group and a waiting-list control group. METHODS: The following measures were performed at participant inclusion, as well as after a 2-week and a 3-month follow-up period: "Attitudes Towards Smoking Scale" (ATS-18), "Smoking Self-Efficacy Questionnaire" (SEQ-12), "Attitudes Towards Nicotine Replacement Therapy" scale (ANRT-12), number of cigarettes smoked per day, stages of change, quit attempts, and smoking status. Furthermore, participants were assessed for concurrent psychiatric disorders and for the severity of nicotine dependence with the Fagerström Test for Nicotine Dependence (FTND). RESULTS: A time × group effect was observed for subscales of the ANRT-12, ATS-18 and SEQ-12, as well as for the number of cigarettes smoked per day. At the three-month follow-up, compared to participants allocated to the waiting-list group, those in the Pick-Klop game group were less likely to remain smokers. Outcomes at 3 months were not predicted by gender, age, FTND, stage of change, or psychiatric disorders at inclusion. CONCLUSIONS: The board game seems to be a good option for smokers. The game led to improvements in variables known to predict quitting in smokers. Furthermore, it increased smoking-cessation rates at the 3-month follow-up. The game is also an interesting alternative for smokers in the precontemplation stage.
Abstract:
The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based solely on two GB/SA parameters, was tested on two different external sets of molecules. On the Martel druglike test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to that of six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach, together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and three-dimensional molecular graphics capability, lays the foundation for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
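As an illustration of the general model form (a multiple linear regression of experimental log Po/w on a small number of computed solvation descriptors), here is a tiny Python sketch; the descriptor values and experimental log P numbers are invented, and the actual iLOGP parameterization is not reproduced:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical descriptors in the spirit of a two-parameter GB/SA model: for each molecule,
# an electrostatic solvation free-energy term and a nonpolar/surface-area term (values invented).
X = np.array([[-8.2, 310.0],
              [-3.1, 205.0],
              [-12.7, 410.0],
              [-1.5, 150.0]])
log_p_exp = np.array([1.9, 2.4, 0.3, 1.1])   # made-up experimental log Po/w values

# Fit the multiple linear model and report its in-sample mean absolute error.
model = LinearRegression().fit(X, log_p_exp)
mae = np.mean(np.abs(model.predict(X) - log_p_exp))
print("coefficients:", model.coef_, "intercept:", model.intercept_, "MAE:", round(mae, 2))
```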