921 results for systems approach
Abstract:
Objectives: Photodynamic therapy aims at the selective destruction of neoplastic tissue through the interaction of light, oxygen and a photosensitizing agent (Protoporphyrin IX in our study). Despite the selective accumulation of the photosensitizer in tumour tissue, photodynamic therapy of urothelial carcinoma of the bladder can damage normal cells of the urinary epithelium. Preventing these lesions is important for the regeneration of the mucosa. Our study, based on an in vitro model of porcine urothelium, examines the influence of photosensitizer concentration, irradiation parameters and the production of reactive oxygen species (ROS) on the photodynamic effects. The aim was to determine the threshold conditions for sparing healthy urothelium. Methods: In a transparent two-compartment culture chamber, living porcine bladder mucosae were incubated with a solution of hexyl aminolevulinate (HAL), the precursor of Protoporphyrin IX. The mucosae were then irradiated with increasing doses of blue and white light, and cell alterations were assessed by scanning electron microscopy and with a fluorescent dye, Sytox green. We also assessed the production of reactive oxygen species by measuring the intracellular fluorescence of Rhodamine 123 (R123), the oxidation product of the non-fluorescent Dihydrorhodamine 123 (DHR123). These values were correlated with PpIX photobleaching. Results: The cell mortality rate depended on the PpIX concentration. After 3 hours of incubation, the threshold light dose was 0.15 and 0.75 J/cm2 for blue light (irradiance 30 and 75 mW/cm2, respectively) and 0.55 J/cm2 for white light (irradiance 30 mW/cm2). The photobleaching rate was inversely proportional to the irradiance. The DHR123/R123 reactive oxygen species detection system correlated well with the threshold values under all irradiation conditions used. Conclusions: We determined the light doses that spare 50% of healthy urothelial cells. The use of a low irradiance, combined with systems measuring the production of reactive oxygen species in the irradiated tissue, could improve in vivo dosimetry and the efficacy of photodynamic therapy.
Background and Objectives: Photodynamic therapy of superficial bladder cancer may damage the normal surrounding bladder wall. Prevention of this damage is important for bladder healing. We studied the influence of photosensitizer concentration, irradiation parameters and the production of reactive oxygen species (ROS) on the photodynamically induced damage in porcine urothelium in vitro. The aim was to determine the threshold conditions for cell survival. Methods: Living porcine bladder mucosae were incubated with a solution of the hexyl ester of 5-aminolevulinic acid (HAL). The mucosae were irradiated with increasing light doses and cell alterations were evaluated by scanning electron microscopy and by Sytox green fluorescence. The urothelial survival score was correlated with Protoporphyrin IX (PpIX) photobleaching and with the intracellular fluorescence of Rhodamine 123, reflecting ROS production. Results: The mortality ratio depended on the PpIX concentration. After 3 hours of incubation, the threshold radiant exposures for blue light were 0.15 and 0.75 J/cm2 (irradiance 30 and 75 mW/cm2, respectively) and for white light 0.55 J/cm2 (irradiance 30 mW/cm2). The photobleaching rate increased with decreasing irradiance. Interestingly, the DHR123/R123 reporter system correlated well with the threshold exposures under all conditions used. Conclusions: We have determined radiant exposures that spare half of the normal urothelial cells. We propose that using a low irradiance, combined with systems reporting ROS production in the irradiated tissue, could improve in vivo dosimetry and optimize PDT.
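For orientation, the exposure times implied by these thresholds follow from radiant exposure = irradiance × time; a minimal sketch using only the values quoted in the abstract above (the function name is illustrative):

```python
def exposure_time_s(threshold_J_per_cm2: float, irradiance_mW_per_cm2: float) -> float:
    """Time (seconds) needed to deliver a radiant exposure at a given irradiance."""
    return threshold_J_per_cm2 / (irradiance_mW_per_cm2 / 1000.0)  # mW/cm2 -> W/cm2

# Thresholds reported above: blue light 0.15 J/cm2 at 30 mW/cm2 and 0.75 J/cm2 at 75 mW/cm2,
# white light 0.55 J/cm2 at 30 mW/cm2.
for dose, irr in [(0.15, 30.0), (0.75, 75.0), (0.55, 30.0)]:
    print(f"{dose} J/cm2 at {irr} mW/cm2 -> {exposure_time_s(dose, irr):.1f} s")
```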
Abstract:
This paper presents an application of the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) approach to the estimation of Gross Value Added (GVA) for economic entities defined at different scales of study. The method first estimates benchmark values of the pace of GVA generation per hour of labour across economic sectors. These values are estimated as intensive variables (e.g. €/hour) by dividing each sector's GVA for the country (expressed in € per year) by the hours of paid work in that same sector per year. This assessment is obtained from national statistics (top-down information at the national level). The approach then uses bottom-up information (the number of hours of paid work in the various economic sectors of an economic entity, e.g. a city or a province, operating within the country) to estimate the amount of GVA produced by that entity. This estimate is obtained by multiplying the number of hours of work in each sector of the economic entity by the benchmark value of GVA generation per hour of work of that particular sector (national average). The method is applied and tested on two different socio-economic systems: (i) Catalonia (level n) and Barcelona (level n-1); and (ii) the region of Lima (level n) and the Lima Metropolitan Area (level n-1). In both cases, the GVA per year of the local economic entity (Barcelona and the Lima Metropolitan Area) is estimated and the resulting value is compared with the GVA data provided by statistical offices. The empirical analysis seems to validate the approach, although the case of the Lima Metropolitan Area indicates a need for additional care when estimating GVA in primary sectors (agriculture and mining).
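The estimation step described above reduces to multiplying bottom-up hours by top-down GVA-per-hour benchmarks; a minimal sketch, with sector labels and all numbers purely illustrative rather than taken from the study:

```python
def estimate_gva(national_gva_eur: dict, national_hours: dict, entity_hours: dict) -> float:
    """Downscale GVA: national GVA-per-hour benchmarks (top-down) applied to an
    entity's hours of paid work per sector (bottom-up)."""
    total = 0.0
    for sector, hours in entity_hours.items():
        benchmark_eur_per_hour = national_gva_eur[sector] / national_hours[sector]
        total += hours * benchmark_eur_per_hour
    return total

# Illustrative numbers only (EUR of GVA per year and hours of paid work per year).
national_gva = {"agriculture": 5.0e9, "industry": 60.0e9, "services": 140.0e9}
national_hrs = {"agriculture": 4.0e8, "industry": 1.5e9, "services": 4.5e9}
city_hrs     = {"agriculture": 1.0e6, "industry": 2.0e8, "services": 9.0e8}

print(f"Estimated GVA of the entity: {estimate_gva(national_gva, national_hrs, city_hrs):.3e} EUR/year")
```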
Abstract:
This paper focuses on one of the methods for bandwidth allocation in an ATM network: the convolution approach. The convolution approach permits an accurate statistical study of the system load through accumulated calculations, since probabilistic results of the bandwidth allocation can be obtained. Nevertheless, the convolution approach has a high cost in terms of calculation and storage requirements. This makes real-time calculation difficult, so many authors do not consider this approach. To reduce this cost, we propose using the multinomial distribution function: the enhanced convolution approach (ECA). This permits direct computation of the probabilities associated with the instantaneous bandwidth requirements and makes a simple deconvolution process possible. The ECA is used in connection acceptance control, and some results are presented.
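To illustrate the basic convolution idea (not the multinomial ECA refinement itself), the distribution of the aggregate bandwidth demand of independent on/off sources can be obtained by repeatedly convolving the per-source demand distributions; a sketch with made-up source parameters:

```python
import numpy as np

def source_distribution(peak_rate_units: int, activity_prob: float) -> np.ndarray:
    """Distribution of the bandwidth demand of one on/off source, indexed in rate units."""
    dist = np.zeros(peak_rate_units + 1)
    dist[0] = 1.0 - activity_prob
    dist[peak_rate_units] = activity_prob
    return dist

# Illustrative traffic mix: 10 sources of 2 units (p=0.3) and 5 sources of 5 units (p=0.1).
aggregate = np.array([1.0])
for peak, p, count in [(2, 0.3, 10), (5, 0.1, 5)]:
    for _ in range(count):
        aggregate = np.convolve(aggregate, source_distribution(peak, p))

link_capacity = 20  # units
overflow_prob = aggregate[link_capacity + 1:].sum()
print(f"P(demand > {link_capacity} units) = {overflow_prob:.4f}")
```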
Abstract:
Larger and larger deformable mirrors, with more and more actuators, are currently being used in adaptive optics applications. Controlling mirrors with hundreds of actuators is a topic of great interest, since classical control techniques based on the pseudoinverse of the system control matrix become too slow when dealing with matrices of such large dimensions. This doctoral thesis proposes a method for accelerating and parallelizing the control algorithms of these mirrors, through the application of a control technique based on zeroing the smallest components of the control matrix (sparsification), followed by optimization of the actuator ordering according to the shape of the matrix, and finally by its division into small tridiagonal blocks. These blocks are much smaller and easier to use in the computations, which allows much higher computation speeds because the null components of the control matrix are eliminated. Moreover, this approach allows the computation to be parallelized, giving the system an additional speed gain. Even without parallelization, an increase of almost 40% in the convergence speed of mirrors with only 37 actuators has been obtained with the proposed technique. To validate this, a complete new experimental setup has been implemented, including a programmable phase modulator for generating turbulence by means of phase screens, and a complete model of the control loop has been developed to investigate the performance of the proposed algorithm. The results, both in simulation and experimentally, show full equivalence of the residual deviation values after compensation of the different types of aberrations for the different algorithms used, although the method proposed here entails a much lower computational load. The procedure is expected to be very successful when applied to very large mirrors.
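A minimal sketch of the sparsification step described above, zeroing the smallest components of a control matrix and exploiting the resulting sparsity in the reconstruction product; the matrix sizes and threshold are illustrative, and the reordering into tridiagonal blocks is not shown:

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)

# Illustrative dense control (reconstructor) matrix: actuator commands = C @ sensor_slopes.
n_actuators, n_slopes = 37, 72
C = rng.normal(size=(n_actuators, n_slopes)) * rng.random((n_actuators, n_slopes)) ** 4

# Sparsification: zero the components whose magnitude is below a chosen threshold.
threshold = 0.05 * np.abs(C).max()
C_sparse = csr_matrix(np.where(np.abs(C) >= threshold, C, 0.0))
print(f"kept {C_sparse.nnz}/{C.size} entries ({100.0 * C_sparse.nnz / C.size:.1f}%)")

# The sparse matrix-vector product skips the zeroed entries.
slopes = rng.normal(size=n_slopes)
commands_full = C @ slopes
commands_sparse = C_sparse @ slopes
print("max command deviation:", np.abs(commands_full - commands_sparse).max())
```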
Abstract:
Piecewise linear systems arise as mathematical models in many practical applications, often from the linearization of nonlinear systems. There are two main approaches to dealing with these systems, according to their continuous-time or discrete-time nature. We propose an approach based on state transformation, more particularly on partitioning the phase portrait into regions, where each subregion is modeled as a two-dimensional linear time-invariant system. The Takagi-Sugeno model, which is a combination of local models, is then computed. The simulation results show that the Alpha partition is well-suited for dealing with such a system.
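A minimal sketch of how a Takagi-Sugeno model blends local linear models through membership weights; the two local models, membership shapes and operating points below are illustrative, not those of the paper:

```python
import numpy as np

# Two illustrative local linear models, x_dot = A_i @ x, each valid near an operating region.
A1 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A2 = np.array([[0.0, 1.0], [-8.0, -0.5]])

def memberships(x1: float) -> np.ndarray:
    """Normalized membership weights over the scheduling variable x1 (illustrative shapes)."""
    w = np.array([np.exp(-((x1 + 1.0) ** 2)), np.exp(-((x1 - 1.0) ** 2))])
    return w / w.sum()

def ts_dynamics(x: np.ndarray) -> np.ndarray:
    """Takagi-Sugeno model: membership-weighted combination of the local linear dynamics."""
    h = memberships(x[0])
    return h[0] * (A1 @ x) + h[1] * (A2 @ x)

# Evaluate the blended vector field at an illustrative state.
x = np.array([0.5, 0.0])
print("x_dot at x =", x, "is", ts_dynamics(x))
```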
Abstract:
Executive summary: The increasing prevalence of chronic diseases is one of the major causes of rising health expenditure, as stated by the WHO. Not only are chronic diseases very costly, they are also by far the leading cause of mortality in the world, accounting for 60% of all deaths. Diabetes in particular is becoming a major burden of disease. In Switzerland around 5% of the population suffer from type 2 diabetes, and 5 to 10% of the annual health care budget is attributable to diabetes. If the WHO predictions materialize, the prevalence of diabetes will double by 2030, and the attributable health expenditure is expected to do the same. The objective of this thesis is to provide policy recommendations for slowing down the disease progression and its costly complications. We study the factors that influence diabetes dynamics and the interventions that improve health outcomes while decreasing costs over different time horizons, using systems thinking and system dynamics. Our results show that managing diabetes requires integrated care interventions that are effective on three fronts: (1) delaying the onset of complications, (2) slowing down the disease progression and (3) shortening the time to diagnosis of diabetes and its complications. We recommend implementing first the interventions targeted at changing patients' behaviour, which are also less expensive but require a change in the delivery of care and in medical practices. Policies targeted at an earlier diagnosis of diabetes, its prevention and the diagnosis of complications should then be considered. This sequence of interventions saves money, as total costs decrease even when the costs of the interventions are included, and results in a longer life expectancy for diabetics in the long term. In diabetes management there is therefore a trade-off between medical costs and patients' benefits on the one hand, and between obtaining results in the short term or in the long term on the other. Decision makers need to deliver acceptable outcomes in the short term. Against this criterion, the preferred policy may be to focus only on diagnosed diabetics, attempting to slow down the progression of their disease, rather than an integrated care approach addressing all aspects of the disease. Such a policy also yields desirable results in terms of costs and patients' benefits.
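A minimal stock-and-flow sketch in the spirit of the system dynamics approach used in the thesis; the stocks, flows and every numerical rate are hypothetical placeholders, not values from the study:

```python
# Hypothetical three-stock model: healthy -> undiagnosed diabetic -> diagnosed diabetic.
# Euler integration of the stock-and-flow structure; all rates are illustrative.
healthy, undiagnosed, diagnosed = 1_000_000.0, 50_000.0, 40_000.0
onset_rate, diagnosis_rate, care_cost = 0.004, 0.20, 8_000.0  # per year; CHF per patient-year

dt, years, total_cost = 0.25, 20, 0.0
for _ in range(int(years / dt)):
    new_cases = onset_rate * healthy * dt
    newly_diagnosed = diagnosis_rate * undiagnosed * dt
    healthy -= new_cases
    undiagnosed += new_cases - newly_diagnosed
    diagnosed += newly_diagnosed
    total_cost += care_cost * diagnosed * dt

print(f"after {years} years: {diagnosed:,.0f} diagnosed diabetics, "
      f"cumulative care cost approx. CHF {total_cost:,.0f}")
```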
Abstract:
Uncertainties that are not considered in the analytical model of the plant always dramatically decrease the performance of fault detection in practice. To cope with this prevalent problem, in this paper we develop a methodology using Modal Interval Analysis which takes those uncertainties in the plant model into account. A fault detection method is developed based on this model; it is quite robust to uncertainty and produces no false alarms. As soon as a fault is detected, an ANFIS model is trained online to capture the major behavior of the fault that occurred, which can then be used for fault accommodation. The simulation results demonstrate the capability of the proposed method to accomplish both tasks appropriately.
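A minimal sketch of interval-based fault detection in the spirit described above, flagging measurements that fall outside the output envelope predicted by an uncertain-parameter model; the plant, bounds and data are illustrative, and this is not the Modal Interval Analysis formulation itself:

```python
# Illustrative first-order plant x[k+1] = a*x[k] + b*u[k] with uncertain parameters
# a in [0.78, 0.82], b in [0.95, 1.05]. A measurement outside the predicted output
# envelope is flagged as a fault.
def predicted_envelope(x: float, u: float) -> tuple[float, float]:
    a_lo, a_hi, b_lo, b_hi = 0.78, 0.82, 0.95, 1.05
    candidates = [a * x + b * u for a in (a_lo, a_hi) for b in (b_lo, b_hi)]
    return min(candidates), max(candidates)

x, u = 1.0, 0.5
measurements = [1.30, 1.52, 1.72, 2.40]  # illustrative; the last one is inconsistent
for k, y in enumerate(measurements):
    lo, hi = predicted_envelope(x, u)
    status = "consistent" if lo <= y <= hi else "FAULT detected"
    print(f"k={k}: y={y:.2f}, envelope=[{lo:.2f}, {hi:.2f}] -> {status}")
    x = y  # use the measurement as the next state (simple simulation)
```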
Abstract:
Given a set of images of scenes containing different object categories (e.g. grass, roads), our objective is to discover these objects in each image and to use the object occurrences to perform scene classification (e.g. beach scene, mountain scene). We achieve this by using a supervised learning algorithm able to learn from few images, in order to facilitate the user's task. We use a probabilistic model to recognise the objects and then classify the scene based on the object occurrences. Experimental results are shown and evaluated to demonstrate the validity of our proposal. Object recognition performance is compared to the approaches of He et al. (2004) and Marti et al. (2001) using their own datasets. Furthermore, an unsupervised method is implemented in order to evaluate the advantages and disadvantages of our supervised classification approach versus an unsupervised one.
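A minimal sketch of classifying a scene from its object-occurrence vector; the object categories, training vectors and nearest-centroid rule are illustrative stand-ins for the probabilistic model used in the paper:

```python
import numpy as np

objects = ["grass", "road", "sand", "water"]

# Illustrative per-scene object-occurrence vectors (fraction of image area per object).
train = {
    "mountain": np.array([[0.6, 0.1, 0.0, 0.1], [0.7, 0.0, 0.0, 0.2]]),
    "beach":    np.array([[0.1, 0.0, 0.5, 0.4], [0.0, 0.1, 0.6, 0.3]]),
}
centroids = {scene: x.mean(axis=0) for scene, x in train.items()}

def classify(occurrences: np.ndarray) -> str:
    """Assign the scene whose centroid is closest to the object-occurrence vector."""
    return min(centroids, key=lambda s: np.linalg.norm(occurrences - centroids[s]))

test_image = np.array([0.05, 0.05, 0.55, 0.35])  # mostly sand and water
print("predicted scene:", classify(test_image))
```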
Abstract:
The speed of fault isolation is crucial for the design and reconfiguration of fault tolerant control (FTC). In this paper the fault isolation problem is stated as a constraint satisfaction problem (CSP) and solved using constraint propagation techniques. The proposed method is based on constraint satisfaction techniques and on refining the uncertainty space of interval parameters. In comparison with other approaches based on adaptive observers, the major advantage of the presented method is that isolation is fast even when uncertainty in parameters, measurements and model errors is taken into account, and without requiring the monotonicity assumption. In order to illustrate the proposed approach, a case study of a nonlinear dynamic system is presented.
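A minimal sketch of isolating a fault by propagating measurement constraints to refine interval parameters, discarding hypotheses whose intervals become empty; the model y = a*u, the candidate intervals and the data are illustrative:

```python
# Each fault hypothesis constrains parameter a to an interval; measurements y = a*u (+/- noise)
# are propagated to shrink the interval. A hypothesis is discarded when its interval empties.
hypotheses = {"nominal": (0.95, 1.05), "fault_1": (0.60, 0.75), "fault_2": (1.30, 1.50)}
data = [(1.0, 0.70), (2.0, 1.36), (1.5, 1.02)]  # (u, y) pairs, illustrative
noise = 0.05

for u, y in data:
    a_meas = ((y - noise) / u, (y + noise) / u)  # interval for a implied by this sample
    for name, (lo, hi) in list(hypotheses.items()):
        new_lo, new_hi = max(lo, a_meas[0]), min(hi, a_meas[1])  # interval intersection
        if new_lo > new_hi:
            del hypotheses[name]       # inconsistent hypothesis eliminated
        else:
            hypotheses[name] = (new_lo, new_hi)

print("remaining hypotheses:", hypotheses)
```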
Abstract:
A comparison of the local effects of the basis set superposition error (BSSE) on the electron densities and energy components of three representative H-bonded complexes was carried out. The electron densities were obtained with Hartree-Fock and density functional theory versions of the chemical Hamiltonian approach (CHA) methodology. It was shown that the effects of the BSSE were common to all complexes studied. The electron density difference maps and the chemical energy component analysis (CECA) confirmed that the local effects of the BSSE were different when diffuse functions were present in the calculations.
Abstract:
Comparison of donor-acceptor electronic couplings calculated within two-state and three-state models suggests that the two-state treatment can provide unreliable estimates of V_da because it neglects multistate effects. We show that in most cases accurate values of the electronic coupling in a π stack, where donor and acceptor are separated by a bridging unit, can be obtained as Ṽ_da = (E2 − E1) μ12 / R_da + (2E3 − E1 − E2) · 2 μ13 μ23 / R_da², where E1, E2, and E3 are the adiabatic energies of the ground, charge-transfer, and bridge states, respectively, μij is the transition dipole moment between states i and j, and R_da is the distance between the planes of the donor and acceptor. In this expression, based on the generalized Mulliken-Hush approach, the first term corresponds to the coupling derived within a two-state model, whereas the second term is the superexchange correction accounting for the bridge effect. The formula is extended to bridges consisting of several subunits. The influence of the donor-acceptor energy mismatch on the excess charge distribution, adiabatic dipole and transition moments, and electronic couplings is examined. A diagnostic is developed to determine whether the two-state approach can be applied. Based on numerical results, we show that the superexchange correction considerably improves estimates of the donor-acceptor coupling derived within a two-state approach. In most cases where the two-state scheme fails, the formula gives reliable results which are in good agreement (within 5%) with the data of the three-state generalized Mulliken-Hush model.
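A minimal sketch evaluating the corrected coupling as written above; consistent units are assumed and all numerical values are placeholders:

```python
def coupling_corrected(E1: float, E2: float, E3: float,
                       mu12: float, mu13: float, mu23: float, R_da: float) -> float:
    """Two-state GMH term plus the superexchange (bridge) correction, as in the formula above."""
    two_state = (E2 - E1) * mu12 / R_da
    superexchange = (2.0 * E3 - E1 - E2) * 2.0 * mu13 * mu23 / R_da ** 2
    return two_state + superexchange

# Placeholder values in consistent units (e.g. energies in eV, dipoles in e*Angstrom, R_da in Angstrom).
print(coupling_corrected(E1=0.00, E2=0.25, E3=1.10,
                         mu12=0.40, mu13=0.90, mu23=0.85, R_da=6.8))
```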
Abstract:
Dialogic learning and interactive groups have proved to be a useful methodological approach applied in educational situations for lifelong adult learners. The principles of this approach stress the importance of dialogue and equal participation also when designing the training activities. This paper adopts these principles as the basis for a configurable template that can be integrated in runtime systems. The template is formulated as a meta-UoL which can be interpreted by IMS Learning Design players. This template serves as a guide to flexibly select and edit the activities at runtime (on the fly). The meta-UoL has been used successfully by a practitioner so as to create a real-life example, with positive and encouraging results.
Abstract:
In recent years, the potential of type-2 fuzzy sets for managing high levels of uncertainty in the subjective knowledge of experts or in numerical information has been explored mainly in control and pattern classification systems. One of the main challenges in designing a type-2 fuzzy logic system is how to estimate the parameters of the type-2 fuzzy membership functions (T2MFs) and the Footprint of Uncertainty (FOU) from imperfect and noisy datasets. This paper presents an automatic approach for learning and tuning Gaussian interval type-2 membership functions (IT2MFs) with application to multi-dimensional pattern classification problems. T2MFs and their FOUs are tuned according to the uncertainties in the training dataset by a combination of genetic algorithm (GA) and cross-validation techniques. In our GA-based approach, the structure of the chromosome has fewer genes than in other GA methods and chromosome initialization is more precise. The proposed approach addresses the application of an interval type-2 fuzzy logic system (IT2FLS) to the problem of nodule classification in a lung Computer Aided Detection (CAD) system. The designed IT2FLS is compared with its type-1 fuzzy logic system (T1FLS) counterpart. The results demonstrate that the IT2FLS outperforms the T1FLS by more than 30% in terms of classification accuracy.
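A minimal sketch of a Gaussian interval type-2 membership function with an uncertain mean, whose upper and lower membership functions bound the footprint of uncertainty; the parameter values are illustrative:

```python
import numpy as np

def gaussian(x: np.ndarray, mean: float, sigma: float) -> np.ndarray:
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def it2mf_gaussian_uncertain_mean(x: np.ndarray, m1: float, m2: float, sigma: float):
    """Upper/lower membership of a Gaussian IT2MF whose mean lies in [m1, m2]."""
    upper = np.where(x < m1, gaussian(x, m1, sigma),
             np.where(x > m2, gaussian(x, m2, sigma), 1.0))
    lower = np.minimum(gaussian(x, m1, sigma), gaussian(x, m2, sigma))
    return lower, upper

x = np.linspace(0.0, 10.0, 11)
lo, hi = it2mf_gaussian_uncertain_mean(x, m1=4.5, m2=5.5, sigma=1.0)  # illustrative parameters
for xi, l, u in zip(x, lo, hi):
    print(f"x={xi:4.1f}  membership in [{l:.3f}, {u:.3f}]")  # the band between them is the FOU
```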
Abstract:
The Generalized Assignment Problem consists in assigning a set of tasks to a set of agents with minimum cost. Each agent has a limited amount of a single resource and each task must be assigned to one and only one agent, requiring a certain amount of the agent's resource. We present new metaheuristics for the generalized assignment problem based on hybrid approaches. One metaheuristic is a MAX-MIN Ant System (MMAS), an improved version of the Ant System recently proposed by Stutzle and Hoos for combinatorial optimization problems; it can be seen as an adaptive sampling algorithm that takes into consideration the experience gathered in earlier iterations of the algorithm. This heuristic is combined with local search and tabu search heuristics to improve the search. A greedy randomized adaptive search procedure (GRASP) is also proposed. Several neighborhoods are studied, including one based on ejection chains that produces good moves without increasing the computational effort. We present computational results of the comparative performance, followed by concluding remarks and ideas on future research in generalized assignment related problems.
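A minimal sketch of a greedy randomized construction for the generalized assignment problem, the kind of step a GRASP iterates before applying local search; the instance data and the restricted-candidate-list rule are illustrative:

```python
import random

# Illustrative instance: cost[a][t] and resource[a][t] of assigning task t to agent a.
cost     = [[13, 21, 20, 12], [18, 12, 25, 15], [17, 16, 14, 23]]
resource = [[ 6,  7,  9,  5], [ 8,  5, 10,  6], [ 7,  6,  5,  9]]
capacity = [13, 11, 12]

def grasp_construct(alpha: float = 0.3, seed: int = 1) -> dict:
    """Assign each task to one agent, choosing randomly from a restricted candidate list (RCL)."""
    rng = random.Random(seed)
    remaining = capacity[:]
    assignment = {}
    for t in range(len(cost[0])):
        feasible = [a for a in range(len(capacity)) if resource[a][t] <= remaining[a]]
        if not feasible:
            raise ValueError("construction failed; a repair or ejection step would be needed")
        c_min = min(cost[a][t] for a in feasible)
        c_max = max(cost[a][t] for a in feasible)
        rcl = [a for a in feasible if cost[a][t] <= c_min + alpha * (c_max - c_min)]
        a = rng.choice(rcl)
        assignment[t] = a
        remaining[a] -= resource[a][t]
    return assignment

sol = grasp_construct()
print("assignment (task -> agent):", sol,
      "total cost:", sum(cost[a][t] for t, a in sol.items()))
```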
Abstract:
We address the performance optimization problem in a single-station multiclass queueing network with changeover times by means of the achievable region approach. This approach seeks to obtain performance bounds and scheduling policies from the solution of a mathematical program over a relaxation of the system's performance region. Relaxed formulations (including linear, convex, nonconvex and positive semidefinite constraints) of this region are developed by formulating equilibrium relations satisfied by the system, with the help of Palm calculus. Our contributions include: (1) new constraints formulating equilibrium relations on server dynamics; (2) a flow conservation interpretation of the constraints previously derived by the potential function method; (3) new positive semidefinite constraints; (4) new work decomposition laws for single-station multiclass queueing networks, which yield new convex constraints; (5) a unified buffer occupancy method of performance analysis obtained from the constraints; (6) heuristic scheduling policies from the solution of the relaxations.
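For orientation, a textbook instance of the achievable-region idea for a multiclass M/G/1 queue without changeover times (not the relaxations with changeovers developed here): minimizing a holding cost over the polytope defined by Kleinrock-type conservation-law constraints gives a lower bound valid for any non-idling policy. All data below are illustrative:

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

# Textbook achievable-region LP for a nonpreemptive multiclass M/G/1 queue (illustrative data).
lam = np.array([0.2, 0.3, 0.1])   # arrival rates
ES  = np.array([1.0, 0.8, 2.0])   # mean service times
ES2 = np.array([2.0, 1.0, 6.0])   # second moments of service times
c   = np.array([3.0, 1.0, 2.0])   # per-class holding costs
rho = lam * ES
W0  = 0.5 * float(lam @ ES2)

n = len(lam)
A_ub, b_ub = [], []
for k in range(1, n + 1):
    for S in combinations(range(n), k):
        row = np.zeros(n)
        row[list(S)] = -rho[list(S)]          # -sum_{i in S} rho_i W_i <= -b(S)
        A_ub.append(row)
        b_ub.append(-W0 * rho[list(S)].sum() / (1.0 - rho[list(S)].sum()))

A_eq = [rho]                                   # Kleinrock's conservation law (S = all classes)
b_eq = [W0 * rho.sum() / (1.0 - rho.sum())]

res = linprog(c * lam, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
print("lower bound on mean holding cost:", res.fun, "at W =", res.x)
```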