Abstract:
The optimal dosing schedule for melphalan therapy of recurrent malignant melanoma in isolated limb perfusion has been examined using a physiological pharmacokinetic model with data from isolated rat hindlimb perfusions (IRHP). The study included a comparison of melphalan distribution in IRHP under hyperthermic and normothermic conditions. Rat hindlimbs were perfused with Krebs-Henseleit buffer containing 4.7% bovine serum albumin at 37 or 41.5 degrees C at a flow rate of 4 ml/min. Concentrations of melphalan in perfusate and tissues were determined by high performance liquid chromatography with fluorescence detection. The concentration of melphalan in perfusate and tissues was linearly related to the input concentration. The rate and amount of melphalan uptake into the different tissues were higher at 41.5 degrees C than at 37 degrees C. A physiological pharmacokinetic model was validated against the tissue and perfusate time courses of melphalan after perfusion. Application of the model showed that the extent of melphalan exposure in muscle, skin and fat in a recirculation system was related to the method of melphalan administration: single bolus > divided bolus > infusion. The peak concentration of melphalan in the perfusate was related to the method of administration in the same order. Infusing the total dose of melphalan over 20 min during a 60 min perfusion optimized the exposure of tissues to melphalan whilst minimizing the peak perfusate concentration. It is suggested that this method of administration may be preferable to other methods in terms of optimizing the efficacy of melphalan whilst minimizing the limb toxicity associated with its use in isolated limb perfusion.
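A minimal sketch of the kind of two-compartment recirculation model this abstract describes, contrasting a single bolus with a 20-min infusion. All rate constants, volumes and the dose below are illustrative assumptions, not values from the study.

```python
import numpy as np  # not strictly needed here, but typical for extending this sketch

K_PT, K_TP = 0.10, 0.05   # assumed perfusate<->tissue rate constants (1/min)
V_PERFUSATE = 50.0        # assumed perfusate volume (ml)
DOSE = 1000.0             # assumed total melphalan dose (ug)

def simulate(input_rate, t_end=60.0, dt=0.01):
    """Euler integration of drug amounts in perfusate and tissue."""
    a_perf, a_tis, peak = 0.0, 0.0, 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        da_p = (input_rate(t) - K_PT * a_perf + K_TP * a_tis) * dt
        da_t = (K_PT * a_perf - K_TP * a_tis) * dt
        a_perf += da_p
        a_tis += da_t
        peak = max(peak, a_perf / V_PERFUSATE)
    return a_tis, peak  # tissue exposure proxy, peak perfusate concentration

bolus    = lambda t: DOSE / 0.01 if t < 0.01 else 0.0   # ~instantaneous input
infusion = lambda t: DOSE / 20.0 if t < 20.0 else 0.0   # spread over 20 min

for name, u in [("bolus", bolus), ("20-min infusion", infusion)]:
    tissue, peak = simulate(u)
    print(f"{name}: tissue amount {tissue:.1f} ug, peak perfusate {peak:.2f} ug/ml")
```

Running this reproduces the qualitative ordering the abstract reports: the infusion reaches a similar tissue amount with a much lower peak perfusate concentration.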
Abstract:
Incremental parsing has long been recognized as a technique of great utility in the construction of language-based editors, and correspondingly, the area currently enjoys a mature theory. Unfortunately, many practical considerations have been largely overlooked in previously published algorithms. Many user requirements for an editing system necessarily impact on the design of its incremental parser, but most approaches focus only on one: response time. This paper details an incremental parser based on LR parsing techniques and designed for use in a modeless syntax recognition editor. The nature of this editor places significant demands on the structure and quality of the document representation it uses, and hence, on the parser. The strategy presented here is novel in that both the parser and the representation it constructs are tolerant of the inevitable and frequent syntax errors that arise during editing. This is achieved by a method that differs from conventional error repair techniques, and that is more appropriate for use in an interactive context. Furthermore, the parser aims to minimize disturbance to this representation, not only to ensure other system components can operate incrementally, but also to avoid unfortunate consequences for certain user-oriented services. The algorithm is augmented with a limited form of predictive tree-building, and a technique is presented for the determination of valid symbols for menu-based insertion. Copyright (C) 2001 John Wiley & Sons, Ltd.
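A schematic sketch (not the paper's algorithm) of the central reuse test in incremental LR parsing: an unmodified subtree whose root symbol is legal in the current parse state can be shifted wholesale, so only edited regions are reparsed token by token. The Node type and the `shiftable` table lookup are illustrative assumptions, and the fact that the parse state evolves as each input is consumed is glossed over here.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    symbol: str                              # terminal or nonterminal name
    children: list = field(default_factory=list)
    modified: bool = False                   # set on nodes touched by the last edit

def breakdown(node, state, shiftable):
    """Return the next inputs for the parser: reusable subtrees or tokens.

    `shiftable(state, symbol)` asks the LR table whether `symbol` can be
    shifted in `state`.
    """
    if not node.modified and shiftable(state, node.symbol):
        return [node]                        # reuse the whole subtree
    if node.children:
        out = []
        for child in node.children:
            out.extend(breakdown(child, state, shiftable))
        return out
    return [node]                            # a bare token: reparse it

# Toy use: an edit marks the spine from the changed token up to the root as
# modified, so only that region is broken down; the untouched "term"
# subtree is offered for wholesale reuse.
edited_id = Node("id", modified=True)
tree = Node("expr",
            [Node("term", [edited_id], modified=True),
             Node("+"),
             Node("term", [Node("id")])],
            modified=True)
print([n.symbol for n in breakdown(tree, 0, lambda s, sym: sym == "term")])
```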
Abstract:
Cyclic peptides are appealing targets in the drug-discovery process. Unfortunately, no robust solid-phase strategies currently exist that allow the synthesis of large arrays of discrete cyclic peptides. Existing strategies are complicated, when synthesizing large libraries, by the extensive workup required to extract the cyclic product from the deprotection/cleavage mixture. To overcome this, we have developed a new safety-catch linker. The safety-catch concept described here involves the use of a protected catechol derivative in which one of the hydroxyls is masked with a benzyl group during peptide synthesis, rendering the linker inactive toward aminolysis. This masked form of the linker allows Boc solid-phase peptide assembly of the linear precursor. Prior to cyclization, the linker is activated and the linear peptide deprotected using commonly employed conditions (TFMSA), leaving the deprotected peptide attached to the activated form of the linker. Scavengers and deprotection adducts are removed by simple washing and filtration. Upon neutralization of the N-terminal amine, cyclization with concomitant cleavage from the resin yields the cyclic peptide in DMF solution; workup is simple solvent removal. To exemplify this strategy, several cyclic peptides targeted toward the somatostatin and integrin receptors were synthesized. To show the strength of this method, we were able to synthesize a cyclic-peptide library containing over 400 members. This linker technology provides a new solid-phase avenue to access large arrays of cyclic peptides.
Abstract:
The binary diffusivities of water in low molecular weight sugars (fructose, sucrose) and in a high molecular weight carbohydrate (maltodextrin, DE 11), together with the effective diffusivities of water in mixtures of these sugars (sucrose, glucose, fructose) and maltodextrin (DE 11), were determined using a simplified procedure based on the Regular Regime Approach. The effective diffusivity of these mixtures exhibited both concentration and molecular weight dependence. Surface stickiness was observed in all samples during desorption, with fructose exhibiting the highest stickiness and maltodextrin the lowest. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
We present global and regional rates of brain atrophy measured on serially acquired T1-weighted brain MR images for a group of Alzheimer's disease (AD) patients and age-matched normal control (NC) subjects, using the analysis procedure described in Part I. Three rates of brain atrophy (the rate of atrophy in the cerebrum, the rate of lateral ventricular enlargement, and the rate of atrophy in the region of the temporal lobes) were evaluated for 14 AD patients and 14 age-matched NC subjects. All three rates showed significant differences between the two groups. However, the greatest separation of the two groups was obtained when the regional rates were combined. This application demonstrates that rates of brain atrophy based on MR images, especially in specific regions of the brain, can provide sensitive measures for evaluating the progression of AD. These measures will be useful for evaluating the therapeutic effects of novel therapies for AD. (C) 2002 Elsevier Science Inc. All rights reserved.
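A minimal sketch of how regional atrophy rates might be combined to separate the AD and NC groups, as the abstract reports that combined regional rates gave the greatest separation. The toy data and the use of a linear discriminant are illustrative assumptions, not the study's statistical method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def annualized_rate(vol_t0, vol_t1, years):
    """Percent volume change per year between two serial MR scans."""
    return 100.0 * (vol_t1 - vol_t0) / (vol_t0 * years)

# e.g. a cerebrum shrinking from 1000 ml to 980 ml over one year:
print("example rate:", annualized_rate(1000.0, 980.0, 1.0), "%/yr")

# Toy rates (%/yr) for cerebrum, ventricles, temporal region: 14 AD, 14 NC.
rng = np.random.default_rng(0)
ad = np.column_stack([rng.normal(-2.0, 0.6, 14),   # cerebral atrophy
                      rng.normal(+8.0, 2.5, 14),   # ventricular enlargement
                      rng.normal(-3.5, 1.0, 14)])  # temporal atrophy
nc = np.column_stack([rng.normal(-0.4, 0.4, 14),
                      rng.normal(+2.0, 1.2, 14),
                      rng.normal(-0.8, 0.5, 14)])
X = np.vstack([ad, nc])
y = np.array([1] * 14 + [0] * 14)

# Combining the three regional rates in one linear discriminant typically
# separates the toy groups better than any single rate would.
lda = LinearDiscriminantAnalysis().fit(X, y)
print("combined-rate separation (training accuracy):", lda.score(X, y))
```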
Abstract:
In microarray studies, clustering techniques are often applied to derive meaningful insights into the data. In the past, hierarchical methods have been the primary clustering tool employed for this task, and they have mainly been applied heuristically to these cluster analysis problems. Further, a major limitation of these methods is their inability to determine the number of clusters. There is thus a need for a model-based approach to these clustering problems. To this end, McLachlan et al. [7] developed a mixture model-based algorithm (EMMIX-GENE) for the clustering of tissue samples. To further investigate the EMMIX-GENE procedure as a model-based approach, we present a case study involving the application of EMMIX-GENE to the breast cancer data studied recently in van 't Veer et al. [10]. Our analysis considers the problem of clustering the tissue samples on the basis of the genes, which is a non-standard problem because the number of genes greatly exceeds the number of tissue samples. We demonstrate how EMMIX-GENE can be used to reduce the initial set of genes down to a more computationally manageable size. The results from this analysis also emphasise the difficulty associated with the task of separating two tissue groups on the basis of a particular subset of genes, and they shed light on why supervised methods have such a high misallocation error rate for the breast cancer data.
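A simplified sketch of the EMMIX-GENE idea: first screen the genes down to a manageable subset, then cluster the tissue samples with a normal mixture model fitted by EM. The variance screen below is a crude stand-in for EMMIX-GENE's likelihood-ratio gene selection, and the toy matrix dimensions are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
expr = rng.normal(size=(78, 5000))      # tissues x genes (toy data)

# Step 1: gene screening -- keep the most variable genes so the number of
# features no longer swamps the number of samples.
top = np.argsort(expr.var(axis=0))[-50:]
reduced = expr[:, top]

# Step 2: model-based clustering of the tissues via a two-component
# Gaussian mixture; unlike hierarchical heuristics, the fitted model also
# offers likelihood-based guidance on the number of clusters (e.g. BIC).
gmm = GaussianMixture(n_components=2, covariance_type="diag",
                      random_state=0).fit(reduced)
labels = gmm.predict(reduced)
print("cluster sizes:", np.bincount(labels), " BIC:", gmm.bic(reduced))
```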
Abstract:
This paper describes a process-based metapopulation dynamics and phenology model of prickly acacia, Acacia nilotica, an invasive alien species in Australia. The model, SPAnDX, describes the interactions between riparian and upland sub-populations of A. nilotica within livestock paddocks, including the effects of extrinsic factors such as temperature, soil moisture availability and atmospheric concentrations of carbon dioxide. The model includes the effects of management events such as changing the livestock species or stocking rate, applying fire, and herbicide application. The predicted population behaviour of A. nilotica was sensitive to climate. Using 35-year daily weather datasets for five representative sites spanning the range of conditions in which A. nilotica is found in Australia, the model predicted biomass levels that closely accord with expected values at each site. SPAnDX can be used as a decision-support tool in integrated weed management, and to explore the sensitivity of cultural management practices to climate change throughout the range of A. nilotica. The cohort-based DYMEX modelling package used to build and run SPAnDX provided several advantages over more traditional population modelling approaches, e.g. an appropriate specific formalism (discrete-time, cohort-based, process-oriented), a user-friendly graphical environment, an extensible library of reusable components, and a useful and flexible input/output support framework. (C) 2003 Published by Elsevier Science B.V.
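A minimal sketch of the cohort-based, discrete-time bookkeeping that packages such as DYMEX use: the population is a list of cohorts that age together each time step, with recruitment and survival driven by an external driver (here a single soil-moisture factor). All rates are illustrative assumptions, not SPAnDX parameters.

```python
from dataclasses import dataclass

@dataclass
class Cohort:
    age: int        # time steps since recruitment
    count: float    # individuals remaining in the cohort

def step(cohorts, moisture, recruit_per_unit=100.0, base_mortality=0.02):
    """Advance one time step: survive, age, then recruit a new cohort."""
    survived = []
    for c in cohorts:
        # Drier conditions raise mortality, and young cohorts are assumed
        # to be hit hardest -- both effects are purely illustrative.
        mort = base_mortality + (1.0 - moisture) * (0.2 if c.age < 5 else 0.05)
        remaining = c.count * (1.0 - mort)
        if remaining > 0.5:                  # drop effectively empty cohorts
            survived.append(Cohort(c.age + 1, remaining))
    survived.append(Cohort(0, recruit_per_unit * moisture))  # new recruits
    return survived

population = []
for moisture in [0.8, 0.6, 0.2, 0.1, 0.7]:   # toy wet-to-dry-to-wet sequence
    population = step(population, moisture)
print("cohorts:", len(population), "total:", sum(c.count for c in population))
```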
Abstract:
Reclaimed water from small wastewater treatment facilities in the rural areas of the Beira Interior region (Portugal) may constitute an alternative water source for aquifer recharge. A 21-month monitoring period in a constructed wetland treatment system has shown that 21,500 m(3) year(-1) of treated wastewater (reclaimed water) could be used for aquifer recharge. A GIS-based multi-criteria analysis was performed, combining ten thematic maps and economic, environmental and technical criteria, in order to produce a suitability map for the location of sites for reclaimed water infiltration. The areas chosen for aquifer recharge with infiltration basins are mainly composed of anthrosols more than 1 m deep with a fine sand texture, which allows an average infiltration velocity of up to 1 m d(-1). These characteristics provide a final polishing treatment of the reclaimed water after infiltration (soil aquifer treatment (SAT)), suitable for the removal of the residual load (trace organics, nutrients, heavy metals and pathogens). The risk of groundwater contamination is low, since the water table in the anthrosol areas ranges from 10 m to 50 m. On the other hand, these depths guarantee an unsaturated zone suitable for SAT. An area of 13,944 ha was selected for study, but only 1607 ha are suitable for reclaimed water infiltration. Approximately 1280 m(2) were considered enough to set up 4 infiltration basins operating in flooding and drying cycles.
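A minimal sketch of a GIS-style weighted multi-criteria overlay of the kind used to build such a suitability map: thematic raster layers are normalized, combined with weights, and masked by exclusion criteria. The layer names, weights and thresholds are illustrative assumptions, not those of the study's ten thematic maps.

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (100, 100)                            # toy raster grid

infiltration = rng.uniform(0.0, 1.5, shape)   # m/d
soil_depth   = rng.uniform(0.0, 3.0, shape)   # m
water_table  = rng.uniform(5.0, 60.0, shape)  # m below surface
dist_to_wwtp = rng.uniform(0.0, 10.0, shape)  # km (economic criterion)

def normalize(layer):
    return (layer - layer.min()) / (layer.max() - layer.min())

# Weighted overlay: higher is more suitable; distance counts negatively.
suitability = (0.35 * normalize(infiltration)
               + 0.25 * normalize(soil_depth)
               + 0.25 * normalize(water_table)
               + 0.15 * (1.0 - normalize(dist_to_wwtp)))

# Hard constraints: enough soil for SAT polishing, deep enough water table.
mask = (soil_depth > 1.0) & (water_table > 10.0)
suitability = np.where(mask, suitability, 0.0)
print(f"suitable fraction of the study area: {mask.mean():.1%}")
```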
Abstract:
This paper presents a methodology that aims to increase the probability of delivering power to any load point of the electrical distribution system by identifying new investments in distribution components. The methodology is based on statistical failure and repair data of the distribution power system components, and it uses fuzzy-probabilistic modelling for system component outage parameters. Fuzzy membership functions of the system component outage parameters are obtained from statistical records. A mixed integer non-linear optimization technique is developed to identify adequate investments in distribution network components that increase the availability level for any customer in the distribution system at minimum cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a real distribution network.
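A minimal sketch of deriving a triangular fuzzy membership function for a component's outage rate from statistical failure records, as the methodology builds fuzzy models of outage parameters from history. The record values and the min/median/max construction are illustrative assumptions.

```python
import numpy as np

records = np.array([0.8, 1.1, 0.9, 1.4, 1.0, 1.2])  # failures/yr (toy)
low, mode, high = records.min(), np.median(records), records.max()

def membership(x):
    """Triangular membership of an outage-rate value x."""
    if x <= low or x >= high:
        return 0.0
    if x <= mode:
        return (x - low) / (mode - low)
    return (high - x) / (high - mode)

# An alpha-cut gives the interval of outage rates considered plausible at
# level alpha -- the form in which fuzzy parameters typically enter the
# optimization.
alpha = 0.5
cut = (low + alpha * (mode - low), high - alpha * (high - mode))
print(f"alpha={alpha} cut: [{cut[0]:.2f}, {cut[1]:.2f}] failures/yr")
```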
Abstract:
This paper presents a methodology supported by the knowledge discovery in databases (KDD) process to determine the failure probability of electrical equipment belonging to a real high-voltage network. Data Mining (DM) techniques are used to discover a set of outcome failure probabilities and, therefore, to extract knowledge concerning the unavailability of electrical equipment such as power transformers and high-voltage power lines. The framework includes several steps: the analysis of the real database, data pre-processing, the application of DM algorithms and, finally, the interpretation of the discovered knowledge. To validate the proposed methodology, a case study based on real databases is used. These data carry heavy uncertainty due to climate conditions; for this reason, fuzzy logic was used to determine the failure probabilities of the electrical components involved in re-establishing service. The results reflect the interesting potential of this approach and encourage further research on the topic.
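A minimal sketch of the KDD-style pipeline the abstract outlines: pre-process raw outage records, then mine per-equipment-class failure probabilities as empirical rates. The column names and toy records are illustrative assumptions about the utility's database schema.

```python
import pandas as pd

raw = pd.DataFrame({
    "equipment": ["transformer", "transformer", "line", "line", "line"],
    "years_observed": [30, 25, 40, 35, 38],
    "failures": [2, 1, 5, 3, 4],
})

# Pre-processing step: drop implausible records (non-positive exposure).
clean = raw[raw["years_observed"] > 0]

# DM step: empirical failure rate per equipment class, aggregated over
# the fleet (failures per equipment-year).
agg = clean.groupby("equipment")[["failures", "years_observed"]].sum()
agg["failure_rate"] = agg["failures"] / agg["years_observed"]
print(agg["failure_rate"])
```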
Abstract:
Electric vehicles (EVs) offer great potential to support the integration of renewable energy sources (RES) in the power grid, and thus to reduce the dependence on oil as well as greenhouse gas (GHG) emissions. The high share of wind energy in the Portuguese energy mix expected for 2020 can lead to wind curtailment, especially during the winter when high levels of hydro generation occur. In this paper a methodology based on unit commitment and economic dispatch is implemented, and a hydro-thermal dispatch is performed in order to evaluate the impact of EV integration into the grid. Results show that the considered 10 % penetration of EVs in the Portuguese fleet would increase load by 3 % and would not integrate a significant amount of wind energy, because curtailment is already low in the absence of EVs. According to the results, the EVs are charged mostly with thermal generation, and the associated emissions are much higher than if they were calculated based on the average generation mix.
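A minimal sketch of the merit-order reasoning behind such a hydro-thermal dispatch: for each hour, available wind is taken first (and spilled if load is already met), then hydro, then thermal, with EV charging simply raising the hourly load. Capacities and the toy profiles are illustrative assumptions, not the Portuguese system data.

```python
def dispatch(load, wind, hydro_cap, thermal_cap):
    """Return (wind_used, hydro, thermal, curtailed) for one hour, in MW."""
    wind_used = min(wind, load)
    curtailed = wind - wind_used
    hydro = min(hydro_cap, load - wind_used)
    thermal = min(thermal_cap, load - wind_used - hydro)
    return wind_used, hydro, thermal, curtailed

hours = [  # (base load, wind available) in MW -- toy winter profile
    (5000, 3000), (4500, 3500), (6000, 2500), (7000, 2000),
]
EV_LOAD = 150  # MW of overnight EV charging added to every hour (assumed)

for base, wind in hours:
    w, h, t, spill = dispatch(base + EV_LOAD, wind,
                              hydro_cap=3000, thermal_cap=4000)
    print(f"load {base + EV_LOAD}: wind {w}, hydro {h}, thermal {t}, "
          f"curtailed {spill}")
```

Because wind and hydro are already fully absorbed in these toy hours, the extra EV load is served at the margin by thermal units, mirroring the abstract's conclusion about EV charging emissions.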
Abstract:
This thesis presents the Fuzzy Monte Carlo Model for Transmission Power Systems Reliability based studies (FMC-TRel) methodology, which is based on statistical failure and repair data of the transmission power system components and uses fuzzy-probabilistic modeling for the system component outage parameters. Statistical records are used to develop the fuzzy membership functions of the system component outage parameters. The proposed hybrid method of fuzzy sets and Monte Carlo simulation based on these fuzzy-probabilistic models captures both the randomness and the fuzziness of component outage parameters. Once the system states are obtained, a network contingency analysis is performed to identify any overloading or voltage violation in the network. This is followed by a remedial action algorithm, based on Optimal Power Flow, to reschedule generation and alleviate constraint violations and, at the same time, to avoid any load curtailment if possible or, otherwise, to minimize the total load curtailment for the states identified by the contingency analysis. For the system states that cause load curtailment, an optimization approach is applied to reduce the probability of occurrence of these states while minimizing the costs of achieving that reduction. This methodology is of great importance for supporting the transmission system operator's decision making, namely in the identification of critical components and in the planning of future investments in the transmission power system. A case study based on the IEEE 24-bus Reliability Test System (RTS) 1996 is presented to illustrate in detail the application of the proposed methodology.
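A minimal sketch of the hybrid fuzzy/Monte Carlo state sampling at the core of such a method: each component's unavailability is a triangular fuzzy number, so a simulation draw first resolves the fuzzy parameter (here by sampling within its support) and then samples the component state. The parameter values are illustrative assumptions, not RTS-96 data.

```python
import random

random.seed(0)

# (low, mode, high) triangular fuzzy unavailability per component (toy)
components = {"line-1": (0.01, 0.02, 0.05),
              "line-2": (0.02, 0.04, 0.08),
              "gen-1":  (0.05, 0.08, 0.12)}

def sample_state():
    """One Monte Carlo system state: True means the component is up."""
    state = {}
    for name, (lo, mode, hi) in components.items():
        u = random.triangular(lo, hi, mode)   # resolve the fuzzy parameter
        state[name] = random.random() > u     # then sample up/down
    return state

# Each sampled state would next go through contingency analysis and, if
# needed, the OPF-based remedial action step described above.
N = 100_000
down_counts = {name: 0 for name in components}
for _ in range(N):
    for name, up in sample_state().items():
        down_counts[name] += not up
print({k: v / N for k, v in down_counts.items()})
```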
Abstract:
Conference: 39th Annual Conference of the IEEE Industrial-Electronics-Society (IECON), Vienna, Austria, Nov 10-14, 2013
Abstract:
Due to usage conditions, hazardous environments or intentional causes, physical and virtual systems are subject to faults in their components, which may affect their overall behaviour. In a ‘black-box’ agent modelled by a set of propositional logic rules, in which only a subset of components is externally visible, such faults may only be recognised by examining some output function of the agent. A (fault-free) model of the agent’s system provides the expected output for a given input; if the real output differs from the predicted output, then the system is faulty. However, some faults may only become apparent in the system output when appropriate inputs are given. A number of problems regarding both testing and diagnosis thus arise, such as testing a fault, testing the whole system, finding possible faults and differentiating them to locate the correct one. The corresponding optimisation problems of finding solutions that require minimum resources are also very relevant in industry, as is minimal diagnosis. In this dissertation we use a well-established set of benchmark circuits to address such diagnosis-related problems, and we propose and develop models with different logics that we formalise and generalise as much as possible. We also prove that all techniques generalise to agents and to multiple faults. The developed multi-valued logics extend the usual Boolean logic (suitable for fault-free models) by encoding values with some dependency (usually on faults); such logics thus allow modelling an arbitrary number of diagnostic theories. Each problem is subsequently solved with CLP solvers that we implement and discuss, together with a new efficient search technique that we present. We compare our results with other approaches such as SAT (which require substantial duplication of circuits), showing the effectiveness of constraints over multi-valued logics, and also the adequacy of a general set constraint solver (with special inferences over set functions such as cardinality) on other problems. In addition, for an optimisation problem, we integrate local search with a constructive approach (branch-and-bound) using a variety of logics to improve an existing efficient tool based on SAT and ILP.
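A minimal sketch of model-based diagnosis on a toy gate circuit: enumerate single stuck-at fault hypotheses and keep those consistent with an observed input/output pair. The dissertation's CLP and multi-valued-logic encodings are far more general; this brute-force version only conveys the problem setup, and the tiny circuit is an assumption.

```python
from itertools import product

GATES = {            # name: (function, input wires), in topological order
    "g1": (lambda a, b: a and b, ("x1", "x2")),
    "g2": (lambda a, b: a or b,  ("g1", "x3")),
}
OUTPUT = "g2"

def simulate(inputs, fault=None):
    """Evaluate the circuit; `fault` is (wire, stuck_value) or None."""
    values = dict(inputs)
    if fault and fault[0] in values:          # stuck-at on a primary input
        values[fault[0]] = fault[1]
    for name, (fn, wires) in GATES.items():
        out = fn(*(values[w] for w in wires))
        if fault and fault[0] == name:        # stuck-at on a gate output
            out = fault[1]
        values[name] = out
    return values[OUTPUT]

observed_in = {"x1": True, "x2": True, "x3": False}
observed_out = False          # the fault-free model predicts True -> faulty

wires = list(observed_in) + list(GATES)
candidates = [(w, v) for w, v in product(wires, [False, True])
              if simulate(observed_in, fault=(w, v)) == observed_out]
print("consistent single stuck-at faults:", candidates)
```

Several hypotheses survive here, which is exactly why fault differentiation (finding inputs that tell the candidates apart) matters as a follow-up problem.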
Abstract:
Many-core platforms based on Network-on-Chip (NoC [Benini and De Micheli 2002]) present an emerging technology in the real-time embedded domain. Although the idea of grouping applications previously executed on separate single-core devices and accommodating them on an individual many-core chip offers various options for power savings and cost reductions, and contributes to overall system flexibility, its implementation is a non-trivial task. In this paper we address the issue of application mapping onto a NoC-based many-core platform, considering the fundamentals and trends of current many-core operating systems; specifically, we elaborate on a limited migrative application model encompassing a message-passing paradigm as a communication primitive. As the main contribution, we formulate the problem of real-time application mapping and propose a three-stage process to solve it efficiently. Through analysis it is assured that the derived solutions guarantee the fulfilment of the posed time constraints regarding worst-case communication latencies, and at the same time provide an environment in which load balancing can be performed for, e.g., thermal, energy, fault tolerance or performance reasons. We also propose several constraints regarding the topological structure of the application mapping, as well as the inter- and intra-application communication patterns, which efficiently resolve the issues of pessimism and/or intractability when performing the analysis.
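A minimal sketch of the mapping question such work formalizes: place communicating applications on a small NoC mesh so that the worst-case hop distance of any message stays within its bound (Manhattan distance, as under XY routing). The exhaustive search and the toy task graph are illustrative assumptions; the paper's three-stage process is designed to scale far beyond this.

```python
from itertools import permutations

MESH = [(x, y) for x in range(2) for y in range(2)]   # 2x2 mesh of cores
TASKS = ["A", "B", "C", "D"]
# (src, dst, max allowed hops) -- toy communication constraints
FLOWS = [("A", "B", 1), ("B", "C", 2), ("A", "D", 1)]

def hops(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])        # XY-routing distance

def feasible(mapping):
    return all(hops(mapping[s], mapping[d]) <= bound
               for s, d, bound in FLOWS)

solutions = [dict(zip(TASKS, placement))
             for placement in permutations(MESH)
             if feasible(dict(zip(TASKS, placement)))]
print(f"{len(solutions)} feasible mappings; first:", solutions[0])
```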