Abstract:
Data are provided to CJJP through statistical summary forms completed by the JCSLs. Because forms are completed only when meaningful contact between a student and a liaison takes place, only a portion of the total population served is reported. Meaningful contact is defined as having at least five contacts within a 60-day period (at any point during the academic year) regarding at least one of the referral reasons supplied on the form. Data are entered into a web-based application by the liaisons and retrieved electronically by CJJP via the internet. Service information is submitted and uploaded only at the end of the academic year.
Abstract:
Background: Microarray techniques have become an important tool for the investigation of genetic relationships and the assignment of different phenotypes. Since microarrays are still very expensive, most experiments are performed with small samples. This paper introduces a method to quantify dependency between data series composed of few sample points. The method is used to construct gene co-expression subnetworks of highly significant edges. Results: The results shown here are for an adapted subset of a Saccharomyces cerevisiae gene expression data set with low temporal resolution and poor statistics. The method reveals common transcription factors with a high confidence level and allows the construction of subnetworks with high biological relevance that reveal characteristic features of the processes driving the organism's adaptations to specific environmental conditions. Conclusion: Our method allows a reliable and sophisticated analysis of microarray data even under severe constraints. The use of systems biology improves the biologists' ability to elucidate the mechanisms underlying cellular processes and to formulate new hypotheses.
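The abstract does not specify the dependency measure, so the sketch below is only an analogy: it scores every gene pair with Spearman correlation and keeps edges whose p-value clears a strict cutoff, which is the general shape of building a co-expression subnetwork of highly significant edges from few samples. The expression matrix, gene names and threshold are all hypothetical.

# Illustrative sketch only: Spearman correlation with a p-value cutoff stands
# in for the paper's (unspecified) dependency measure.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
expression = rng.normal(size=(6, 8))        # 6 genes x 8 samples (hypothetical data)
genes = [f"gene_{i}" for i in range(expression.shape[0])]

edges = []
for i in range(len(genes)):
    for j in range(i + 1, len(genes)):
        rho, p = spearmanr(expression[i], expression[j])
        if p < 0.01:                        # keep only highly significant edges
            edges.append((genes[i], genes[j], rho))

print(edges)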
Abstract:
This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships, derived for different raining-type systems, between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of the TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely, the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both PR and S-Pol ground truth datasets and a mean error of 0.244 mm h⁻¹ (PR) and -0.157 mm h⁻¹ (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain area comparisons, the results showed that the proposed formulation is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. The performance of the other algorithms showed that GSCAT presented low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed a rainfall distribution similar to that of the PR and S-Pol but with a bimodal distribution. Last, the five algorithms were evaluated during the TRMM-Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but it underestimated for the westerly period for rainfall rates above 5 mm h⁻¹. NESDIS(1) overestimated for both wind regimes but presented the best westerly representation. NESDIS(2), GSCAT, and GPROF underestimated in both regimes, but GPROF was closer to the observations during the easterly flow.
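As a rough illustration of the retrieval structure described above (screening, system-type classification, then an empirical 85-GHz relationship), the sketch below maps a polarization-corrected brightness temperature to a rain rate. The threshold and the linear coefficients are placeholders, not the fits derived in the paper.

# Hedged illustration: the threshold and linear fits below are made up and
# only show the overall structure (screening -> classification -> empirical
# Tb85/rain-rate relationship).
def estimate_rain_rate(pct85_k: float, convective: bool) -> float:
    """Return a surface rain rate (mm/h) from 85-GHz polarization-corrected Tb."""
    if pct85_k > 255.0:                 # screening: warm Tb -> no rain (placeholder threshold)
        return 0.0
    slope, intercept = (-0.18, 48.0) if convective else (-0.07, 19.0)  # hypothetical fits
    return max(0.0, slope * pct85_k + intercept)

print(estimate_rain_rate(230.0, convective=True))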
Abstract:
This paper presents a compact embedded fuzzy system for three-phase induction-motor scalar speed control. The control strategy consists of keeping the voltage-frequency ratio of the induction-motor supply source constant. A fuzzy-control system is built on a digital signal processor, which uses the speed error and the speed-error variation to change both the fundamental voltage amplitude and the frequency of a sinusoidal pulsewidth-modulation inverter. An alternative optimized method for embedded fuzzy-system design is also proposed. The controller performance, with respect to reference and load-torque variations, is evaluated through experimental results. A comparative analysis with a conventional proportional-integral controller is also presented.
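A minimal sketch of the scalar V/f idea, not the authors' DSP implementation: a small rule base maps the speed error and its variation to a frequency increment, and the voltage amplitude follows the frequency so the V/f ratio stays constant. Membership ranges, rule consequents and the rated V/f ratio are assumptions.

# Toy fuzzy V/f controller sketch; all numeric values are assumptions.
def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_df(error, d_error):
    labels = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}
    # rule consequents: frequency increment (Hz) for each (error, d_error) pair
    rules = {("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
             ("Z", "N"): -0.5, ("Z", "Z"): 0.0,  ("Z", "P"): 0.5,
             ("P", "N"): 0.0,  ("P", "Z"): 0.5,  ("P", "P"): 1.0}
    num = den = 0.0
    for le, pe in labels.items():
        for ld, pd_ in labels.items():
            w = min(tri(error, *pe), tri(d_error, *pd_))   # rule firing strength
            num += w * rules[(le, ld)]
            den += w
    return num / den if den else 0.0

v_per_hz = 380.0 / 50.0           # assumed rated V/f ratio
f = 40.0
f += fuzzy_df(error=0.6, d_error=-0.2)
print(f, f * v_per_hz)            # new frequency and the matching voltage amplitude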
Abstract:
An enantioselective liquid chromatographic method using two-phase hollow-fiber liquid-phase microextraction (HF-LPME-HPLC) was developed for the determination of isradipine (ISR) enantiomers and its main metabolite (the pyridine derivative of isradipine, PDI) in microsomal fractions isolated from rat liver. The analytes were extracted from 1 mL of microsomal medium using a two-phase HF-LPME procedure with hexyl acetate as the acceptor phase, 30 min of extraction, and sample agitation at 1,500 rpm. For the first time, ISR enantiomers and PDI were resolved. For this separation, a Chiralpak® AD column with hexane/2-propanol/ethanol (94:04:02, v/v/v) as the mobile phase at a flow rate of 1.5 mL min⁻¹ was used. The column was kept at 23 ± 2 °C. Drug and metabolite detection was performed at 325 nm, and the internal standard oxybutynin was detected at 225 nm. The recoveries were 23% for PDI and 19% for each ISR enantiomer. The method presented a quantification limit (LOQ) of 50 ng mL⁻¹ and was linear over the concentration ranges of 50-5,000 and 50-2,500 ng mL⁻¹ for PDI and each ISR enantiomer, respectively. The validated method was applied to an in vitro biotransformation study of ISR using rat liver microsomal fraction, showing that (+)-(S)-ISR is preferentially biotransformed.
Abstract:
The optimal dosing schedule for melphalan therapy of recurrent malignant melanoma in isolated limb perfusions has been examined using a physiological pharmacokinetic model with data from isolated rat hindlimb perfusions (IRHP). The study included a comparison of melphalan distribution in IRHP under hyperthermia and normothermia conditions. Rat hindlimbs were perfused with Krebs-Henseleit buffer containing 4.7% bovine serum albumin at 37 or 41.5 degrees C at a flow rate of 4 ml/min. Concentrations of melphalan in perfusate and tissues were determined by high-performance liquid chromatography with fluorescence detection. The concentration of melphalan in perfusate and tissues was linearly related to the input concentration. The rate and amount of melphalan uptake into the different tissues were higher at 41.5 degrees C than at 37 degrees C. A physiological pharmacokinetic model was validated from the tissue and perfusate time course of melphalan after melphalan perfusion. Application of the model showed that the melphalan exposure of muscle, skin and fat in a recirculation system was related to the method of melphalan administration: single bolus > divided bolus > infusion. The peak concentration of melphalan in the perfusate was also related to the method of administration in the same order. Infusing the total dose of melphalan over 20 min during a 60 min perfusion optimized the exposure of tissues to melphalan whilst minimizing the peak perfusate concentration of melphalan. It is suggested that this method of melphalan administration may be preferable to other methods in terms of optimizing the efficacy of melphalan whilst minimizing the limb toxicity associated with its use in isolated limb perfusion.
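A toy version of the comparison described above, under assumed rate constants: a two-compartment (perfusate/tissue) model integrated by forward Euler contrasts a single bolus with a 20-min infusion of the same dose over a 60-min perfusion, tracking the peak perfusate level and the tissue exposure. It is not the fitted physiological model from the study.

# Toy two-compartment model with made-up rate constants; not the paper's model.
def simulate(dose_mg, infusion_min, total_min=60.0, dt=0.01, k_in=0.05, k_out=0.02):
    perfusate, tissue, t = 0.0, 0.0, 0.0
    peak, exposure = 0.0, 0.0                     # peak perfusate level and tissue AUC
    rate = dose_mg / infusion_min if infusion_min > 0 else 0.0
    if infusion_min == 0:
        perfusate = dose_mg                       # single bolus at t = 0
    while t < total_min:
        dosing = rate if t < infusion_min else 0.0
        d_perf = dosing - k_in * perfusate + k_out * tissue
        d_tis = k_in * perfusate - k_out * tissue
        perfusate += d_perf * dt
        tissue += d_tis * dt
        peak = max(peak, perfusate)
        exposure += tissue * dt
        t += dt
    return peak, exposure

print(simulate(10.0, infusion_min=0))     # bolus
print(simulate(10.0, infusion_min=20.0))  # 20-min infusion over a 60-min perfusion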
Abstract:
Cyclic peptides are appealing targets in the drug-discovery process. Unfortunately, there currently exist no robust solid-phase strategies that allow the synthesis of large arrays of discrete cyclic peptides. Existing strategies are complicated, when synthesizing large libraries, by the extensive workup that is required to extract the cyclic product from the deprotection/cleavage mixture. To overcome this, we have developed a new safety-catch linker. The safety-catch concept described here involves the use of a protected catechol derivative in which one of the hydroxyls is masked with a benzyl group during peptide synthesis, thus rendering the linker inactive toward aminolysis. This masked derivative of the linker allows Boc solid-phase peptide assembly of the linear precursor. Prior to cyclization, the linker is activated and the linear peptide deprotected using commonly employed conditions (TFMSA), resulting in deprotected peptide attached to the activated form of the linker. Scavengers and deprotection adducts are removed by simple washing and filtration. Upon neutralization of the N-terminal amine, cyclization with concomitant cleavage from the resin yields the cyclic peptide in DMF solution. Workup consists of simple solvent removal. To exemplify this strategy, several cyclic peptides were synthesized targeted toward the somatostatin and integrin receptors. Building on this initial study, and to demonstrate the strength of the method, we were able to synthesize a cyclic-peptide library containing over 400 members. This linker technology provides a new solid-phase avenue to access large arrays of cyclic peptides.
Abstract:
The binary diffusivities of water in low molecular weight sugars (fructose, sucrose) and in a high molecular weight carbohydrate (maltodextrin, DE 11), and the effective diffusivities of water in mixtures of these sugars (sucrose, glucose, fructose) and maltodextrin (DE 11), were determined using a simplified procedure based on the Regular Regime Approach. The effective diffusivity of these mixtures exhibited both concentration and molecular weight dependence. Surface stickiness was observed in all samples during desorption, with fructose exhibiting the highest and maltodextrin the lowest. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
We present global and regional rates of brain atrophy measured on serially acquired T1-weighted brain MR images for a group of Alzheimer's disease (AD) patients and age-matched normal control (NC) subjects, using the analysis procedure described in Part I. Three rates of brain atrophy were evaluated for 14 AD patients and 14 age-matched NC subjects: the rate of atrophy in the cerebrum, the rate of lateral ventricular enlargement, and the rate of atrophy in the region of the temporal lobes. All three rates showed significant differences between the two groups. However, the greatest separation of the two groups was obtained when the regional rates were combined. This application has demonstrated that rates of brain atrophy based on MR images, especially in specific regions of the brain, can provide sensitive measures for evaluating the progression of AD. These measures will be useful for the evaluation of therapeutic effects of novel therapies for AD. (C) 2002 Elsevier Science Inc. All rights reserved.
Abstract:
In microarray studies, clustering techniques are often applied to derive meaningful insights into the data. In the past, hierarchical methods have been the primary clustering tool employed to perform this task. The hierarchical algorithms have been applied mainly heuristically to these cluster analysis problems. Further, a major limitation of these methods is their inability to determine the number of clusters. Thus there is a need for a model-based approach to these clustering problems. To this end, McLachlan et al. [7] developed a mixture model-based algorithm (EMMIX-GENE) for the clustering of tissue samples. To further investigate the EMMIX-GENE procedure as a model-based approach, we present a case study involving the application of EMMIX-GENE to the breast cancer data studied recently in van 't Veer et al. [10]. Our analysis considers the problem of clustering the tissue samples on the basis of the genes, which is a non-standard problem because the number of genes greatly exceeds the number of tissue samples. We demonstrate how EMMIX-GENE can be useful in reducing the initial set of genes down to a more computationally manageable size. The results from this analysis also emphasise the difficulty associated with the task of separating two tissue groups on the basis of a particular subset of genes. These results also shed light on why supervised methods have such a high misallocation error rate for the breast cancer data.
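EMMIX-GENE itself is not reproduced here; as a loose analogy to its mixture-model reduction step, the sketch below fits scikit-learn's GaussianMixture to a synthetic gene-by-sample matrix and keeps one representative gene per component.

# Hedged analogy, not EMMIX-GENE: a Gaussian mixture groups genes and one
# representative per group is retained. Data are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))              # 500 genes x 20 tissue samples (synthetic)

gm = GaussianMixture(n_components=10, random_state=1).fit(X)
labels = gm.predict(X)

representatives = []
for k in range(10):
    members = np.flatnonzero(labels == k)
    if members.size == 0:
        continue                             # skip empty components, if any
    centre = gm.means_[k]
    best = members[np.argmin(np.linalg.norm(X[members] - centre, axis=1))]
    representatives.append(int(best))        # gene index closest to the component mean

print(sorted(representatives))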
Abstract:
Reclaimed water from small wastewater treatment facilities in the rural areas of the Beira Interior region (Portugal) may constitute an alternative water source for aquifer recharge. A 21-month monitoring period in a constructed wetland treatment system has shown that 21,500 m³ year⁻¹ of treated wastewater (reclaimed water) could be used for aquifer recharge. A GIS-based multi-criteria analysis was performed, combining ten thematic maps and economic, environmental and technical criteria, in order to produce a suitability map for the location of sites for reclaimed water infiltration. The areas chosen for aquifer recharge with infiltration basins are mainly composed of anthrosols more than 1 m deep with a fine sand texture, which allows an average infiltration velocity of up to 1 m d⁻¹. These characteristics will provide a final polishing treatment of the reclaimed water after infiltration (soil aquifer treatment, SAT), suitable for the removal of the residual load (trace organics, nutrients, heavy metals and pathogens). The risk of groundwater contamination is low since the water table in the anthrosol areas ranges from 10 m to 50 m. On the other hand, these depths guarantee an unsaturated zone suitable for SAT. An area of 13,944 ha was selected for study, but only 1607 ha are suitable for reclaimed water infiltration. Approximately 1280 m² were considered enough to set up 4 infiltration basins working in flooding and drying cycles.
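The sketch below shows the kind of weighted-overlay step a GIS-based multi-criteria analysis performs; the ten thematic layers, scores and weights used in the paper are not reproduced, so the arrays, weights and threshold are placeholders.

# Weighted-overlay sketch with placeholder layers, weights and threshold.
import numpy as np

rng = np.random.default_rng(2)
shape = (100, 100)                                   # hypothetical raster grid
soil_depth_score = rng.uniform(0, 1, shape)          # deeper anthrosol -> higher score
infiltration_score = rng.uniform(0, 1, shape)        # fine sand texture -> higher score
water_table_score = rng.uniform(0, 1, shape)         # 10-50 m water table -> higher score

weights = {"soil": 0.4, "infiltration": 0.35, "water_table": 0.25}
suitability = (weights["soil"] * soil_depth_score
               + weights["infiltration"] * infiltration_score
               + weights["water_table"] * water_table_score)

suitable_mask = suitability > 0.7                    # threshold for candidate basin sites
print(f"{suitable_mask.mean():.1%} of the study area flagged as suitable")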
Abstract:
Many-core platforms based on a Network-on-Chip (NoC [Benini and De Micheli 2002]) represent an emerging technology in the real-time embedded domain. Although the idea of grouping applications previously executed on separate single-core devices and accommodating them on an individual many-core chip offers various options for power savings and cost reductions, and contributes to overall system flexibility, its implementation is a non-trivial task. In this paper we address the issue of application mapping onto a NoC-based many-core platform, considering the fundamentals and trends of current many-core operating systems; specifically, we elaborate on a limited migrative application model encompassing a message-passing paradigm as a communication primitive. As the main contribution, we formulate the problem of real-time application mapping and propose a three-stage process to solve it efficiently. Through analysis it is ensured that the derived solutions guarantee the fulfilment of the posed time constraints regarding worst-case communication latencies and, at the same time, provide an environment in which to perform load balancing for, e.g., thermal, energy, fault-tolerance or performance reasons. We also propose several constraints regarding the topological structure of the application mapping, as well as the inter- and intra-application communication patterns, which efficiently resolve the issues of pessimism and/or intractability when performing the analysis.
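The paper's three-stage process is not reproduced here; as a simple illustration of the mapping problem, the sketch below greedily places communicating tasks on a 2D NoC mesh so that heavily communicating pairs end up on nearby cores (Manhattan-distance hops). The task graph and traffic volumes are hypothetical.

# Greedy mapping sketch; not the paper's method. Task graph is made up.
mesh = [(x, y) for x in range(3) for y in range(3)]          # 3x3 mesh of cores
traffic = {("t0", "t1"): 50, ("t1", "t2"): 30, ("t0", "t3"): 20, ("t2", "t3"): 10}
tasks = sorted({t for pair in traffic for t in pair})

def hops(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

placement, free_cores = {}, list(mesh)
for task in sorted(tasks, key=lambda t: -sum(v for k, v in traffic.items() if t in k)):
    # place each task on the free core minimising weighted hops to already-placed peers
    def cost(core):
        total = 0
        for (a, b), v in traffic.items():
            if task in (a, b):
                other = b if a == task else a
                if other in placement:
                    total += v * hops(core, placement[other])
        return total
    best = min(free_cores, key=cost)
    placement[task] = best
    free_cores.remove(best)

print(placement)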
Abstract:
The advent of Wireless Sensor Network (WSN) technologies is paving the way for a panoply of new ubiquitous computing applications, some of them with critical requirements. In the ART-WiSe framework, we are designing a two-tiered communication architecture for supporting real-time and reliable communications in WSNs. Within this context, we have been developing a test-bed application for testing, validating and demonstrating our theoretical findings: a search-and-rescue/pursuit-evasion application. Basically, a WSN deployment is used to detect, localize and track a target robot, and a station controls a rescuer/pursuer robot until it gets close enough to the target robot. This paper describes how this application was engineered, focusing particularly on the implementation of the localization mechanism.
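The abstract does not detail the localization mechanism, so the sketch below shows one common stand-in: a weighted-centroid estimate in which each anchor node that hears the target pulls the position estimate towards itself in proportion to its received signal strength. Anchor positions and weights are made up.

# Weighted-centroid localization sketch with hypothetical anchor readings.
def weighted_centroid(readings):
    """readings: list of ((x, y) anchor position, rssi_weight) tuples."""
    total = sum(w for _, w in readings)
    x = sum(p[0] * w for p, w in readings) / total
    y = sum(p[1] * w for p, w in readings) / total
    return x, y

print(weighted_centroid([((0, 0), 0.8), ((10, 0), 0.3), ((0, 10), 0.5)]))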
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Master in Geographic Information Systems and Science
Abstract:
Demand response programs and models have been developed and implemented to improve the performance of electricity markets, taking full advantage of smart grids. Studying and addressing consumers' flexibility and network operation scenarios makes it possible to design improved demand response models and programs. The methodology proposed in the present paper addresses the definition of demand response programs that consider demand shifting between periods, regarding the occurrence of multi-period demand response events. The optimization model focuses on minimizing the network and resource operation costs for a Virtual Power Player. Quantum Particle Swarm Optimization is used to obtain solutions for the optimization model, which is applied to a large set of operation scenarios. The implemented case study illustrates the use of the proposed methodology to support the decisions of the Virtual Power Player concerning the duration of each demand response event.
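As a compact illustration of the optimization step, the sketch below runs a standard quantum-behaved PSO update (mean-best attractor) against a placeholder quadratic cost over per-period shifted demand; the Virtual Power Player cost model and scenario data from the paper are not reproduced.

# Compact quantum-behaved PSO sketch; the cost function is a placeholder.
import numpy as np

rng = np.random.default_rng(3)

def cost(x):
    # placeholder: deviation from a target shifted-demand profile plus an energy term
    target = np.linspace(0.2, 1.0, x.size)
    return np.sum((x - target) ** 2) + 0.1 * np.sum(x)

dim, n_particles, iters, beta = 6, 20, 200, 0.75
X = rng.uniform(0, 1, (n_particles, dim))
pbest = X.copy()
pbest_cost = np.array([cost(x) for x in X])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(iters):
    mbest = pbest.mean(axis=0)                       # mean of personal bests
    for i in range(n_particles):
        phi = rng.uniform(size=dim)
        p = phi * pbest[i] + (1 - phi) * gbest       # local attractor
        u = rng.uniform(size=dim)
        sign = np.where(rng.uniform(size=dim) < 0.5, 1.0, -1.0)
        X[i] = np.clip(p + sign * beta * np.abs(mbest - X[i]) * np.log(1.0 / u), 0, 1)
        c = cost(X[i])
        if c < pbest_cost[i]:
            pbest[i], pbest_cost[i] = X[i].copy(), c
    gbest = pbest[np.argmin(pbest_cost)].copy()

print(gbest, cost(gbest))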