Abstract:
GPR (Ground Penetrating Radar) results are shown for perpendicular broadside and parallel broadside antenna orientations. Performance in the detection and localization of concrete tubes and steel tanks is compared as a function of acquisition configuration. The comparison is done using 100 MHz and 200 MHz center-frequency antennas. All tubes and tanks are buried at the geophysical test site of IAG/USP in Sao Paulo city, Brazil. The results show that the long steel pipe with a 38-mm diameter was well detected with the perpendicular broadside configuration. The concrete tubes were better detected with the parallel broadside configuration, which clearly showed hyperbolic diffraction events from all targets down to 2-m depth. Steel tanks were detected with both configurations; however, the parallel broadside configuration generated, to a much lesser extent, an apparent hyperbolic reflection corresponding to the constructive interference of the diffraction hyperbolas of adjacent targets placed at the same depth. Vertical concrete tubes and steel tanks were better localized with parallel broadside antennas, where the apexes of the diffraction hyperbolas corresponded more closely to the horizontal positions of the buried targets. The two configurations provide complementary details about buried targets, emphasizing that GPR multi-component configurations have the potential to improve subsurface image quality as well as to discriminate different buried targets, showing their applicability in geotechnical and geoscientific studies. (C) 2009 Elsevier B.V. All rights reserved.
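The hyperbolic diffraction events described above arise because the two-way travel time from a surface antenna to a point target traces a hyperbola as the antenna moves along the survey line. A minimal sketch of that relation (the soil permittivity and target geometry below are illustrative assumptions, not values from the study):

```python
import math

def travel_time(x, x0, depth, eps_r):
    """Two-way travel time (s) from a surface antenna at position x to a
    point target buried at (x0, depth), in soil of relative permittivity eps_r."""
    c = 3.0e8                      # speed of light in vacuum (m/s)
    v = c / math.sqrt(eps_r)       # EM wave velocity in the soil
    return 2.0 * math.sqrt((x - x0) ** 2 + depth ** 2) / v

# Hypothetical target: 2 m deep at x0 = 5 m, soil eps_r = 9 (so v = 1e8 m/s)
times = [travel_time(float(x), 5.0, 2.0, 9.0) for x in range(11)]
apex = min(times)                  # earliest arrival occurs directly above the target
print(f"apex two-way time: {apex * 1e9:.0f} ns")  # 2 * 2 m / 1e8 m/s = 40 ns
```

The apex of the hyperbola marks the target's horizontal position, which is why the parallel broadside results with well-defined apexes localized the targets better.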
Abstract:
Tuberculosis (TB) remains the leading cause of mortality due to a bacterial pathogen, Mycobacterium tuberculosis. However, no new classes of drugs for TB have been developed in the past 30 years. There is therefore an urgent need to develop faster-acting and effective new antitubercular agents, preferably belonging to new structural classes, to better combat TB, including MDR-TB, to shorten the duration of current treatment so as to improve patient compliance, and to provide effective treatment of latent tuberculosis infection. The enzymes of the shikimate pathway are potential targets for the development of a new generation of antitubercular drugs. The shikimate pathway has been shown, by disruption of the aroK gene, to be essential for the viability of Mycobacterium tuberculosis. Shikimate kinase (SK) catalyses the phosphorylation of the 3-hydroxyl group of shikimic acid (shikimate), using ATP as a co-substrate. SK belongs to the family of nucleoside monophosphate (NMP) kinases. The enzyme is an alpha/beta protein consisting of a central sheet of five parallel beta-strands flanked by alpha-helices. Shikimate kinases are composed of three domains: the core domain, the lid domain, and the shikimate-binding domain. The lid and shikimate-binding domains are responsible for large conformational changes during catalysis. More recently, the precise interactions between SK and its substrate have been elucidated, showing the binding of shikimate to three charged residues conserved among SK sequences. The elucidation of the interactions between MtSK and its substrates is crucial for the development of a new generation of drugs against tuberculosis through rational drug design.
Abstract:
Research on the influence of multiple representations in mathematics education gained new momentum when personal computers and software started to become available in the mid-1980s. It became much easier for students who were not fond of algebraic representations to work with concepts such as function using graphs or tables. Research on how students use such software showed that they shaped the tools to their own needs, resulting in an intershaping relationship in which the tools shape the way students come to know while, at the same time, the students shape the tools and influence the design of the next generation of tools. This kind of research led to the theoretical perspective presented in this paper: knowledge is constructed by collectives of humans-with-media. In this paper, I will discuss how media have shaped the notions of problem and knowledge, and a parallel will be drawn between the way software has brought new possibilities to mathematics education and the changes that the Internet may bring to mathematics education. This paper is, therefore, a discussion about the future of mathematics education. Potential scenarios for the future of mathematics education, should the Internet become accepted in the classroom, will be discussed. © FIZ Karlsruhe 2009.
Abstract:
As in the case of most small organic molecules, the electro-oxidation of methanol to CO2 is believed to proceed through a so-called dual-pathway mechanism. The direct pathway proceeds via reactive intermediates such as formaldehyde or formic acid, whereas the indirect pathway occurs in parallel and proceeds via the formation of adsorbed carbon monoxide (COad). Despite the extensive literature on the electro-oxidation of methanol, no study to date has distinguished the production of CO2 from the direct and indirect pathways. Working under far-from-equilibrium, oscillatory conditions, we were able to decouple, for the first time, the direct and indirect pathways that lead to CO2 during the oscillatory electro-oxidation of methanol on platinum. The CO2 production was followed by differential electrochemical mass spectrometry, and the individual contributions of the parallel pathways were identified by a combination of experiments and numerical simulations. We believe that our report opens new perspectives, particularly as a methodology for identifying the role played by surface modifiers in the relative weight of the two pathways, a key issue for the effective development of catalysts for low-temperature fuel cells.
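The decoupling idea can be illustrated with a toy dual-pathway model, far simpler than the kinetics actually simulated in the study: CO2 forms directly from methanol on free sites and indirectly via oxidation of adsorbed CO. All rate constants below are invented for illustration.

```python
# Toy dual-pathway model: CO2 is produced directly from methanol on free
# sites and indirectly via oxidation of adsorbed CO (COad).
def simulate(k_direct=0.2, k_ads=0.1, k_ox=0.4, dt=0.01, steps=5000):
    theta = 0.0                               # COad coverage (fraction of sites)
    co2_direct = co2_indirect = 0.0
    for _ in range(steps):                    # forward-Euler integration
        r_direct = k_direct * (1.0 - theta)   # direct pathway rate
        r_ox = k_ox * theta                   # indirect pathway (COad oxidation)
        r_ads = k_ads * (1.0 - theta)         # COad formation
        theta += dt * (r_ads - r_ox)
        co2_direct += dt * r_direct           # cumulative CO2 per pathway
        co2_indirect += dt * r_ox
    return theta, co2_direct, co2_indirect

theta, d, i = simulate()
print(f"steady COad coverage ~ {theta:.2f}")  # analytic: k_ads/(k_ads+k_ox) = 0.2
```

Tracking the two cumulative CO2 terms separately is the numerical analogue of attributing the mass-spectrometry CO2 signal to each pathway.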
Abstract:
Background: Thyroid hormones (THs) are known to regulate protein synthesis by acting at the transcriptional level and inducing the expression of many genes. However, little is known about their role in protein expression at the post-transcriptional level, even though studies have shown enhancement of protein synthesis associated with mTOR/p70S6K activation after triiodo-l-thyronine (T3) administration. On the other hand, the effects of TH on translation initiation and polypeptidic chain elongation factors, being essential for activating protein synthesis, have been poorly explored. Therefore, considering that preliminary studies from our laboratory have demonstrated an increase in insulin content in INS-1E cells in response to T3 treatment, the aim of the present study was to investigate if proteins of translational nature might be involved in this effect. Methods: INS-1E cells were maintained in the presence or absence of T3 (10^-6 or 10^-8 M) for 12 hours. Thereafter, insulin concentration in the culture medium was determined by radioimmunoassay, and the cells were processed for Western blot detection of insulin, eukaryotic initiation factor 2 (eIF2), p-eIF2, eIF5A, EF1A, eIF4E binding protein (4E-BP), p-4E-BP, p70S6K, and p-p70S6K. Results: It was found that, in parallel with increased insulin generation, T3 induced p70S6K phosphorylation and the expression of the translational factors eIF2, eIF5A, and eukaryotic elongation factor 1 alpha (eEF1A). In contrast, total and phosphorylated 4E-BP, as well as total p70S6K and p-eIF2 content, remained unchanged after T3 treatment.
Conclusions: Considering that (i) p70S6K induces phosphorylation of the S6 protein of the 40S ribosomal subunit, an essential condition for protein synthesis; (ii) eIF2 is essential for the initiation of messenger RNA translation; and (iii) eIF5A and eEF1A play a central role in the elongation of the polypeptide chain during transcript decoding, the data presented here lead us to suppose that part of the T3-induced insulin expression in INS-1E cells depends on activation of protein synthesis at the post-transcriptional level, as these proteins of the translational machinery were shown to be regulated by T3.
Abstract:
Medulloblastoma (MB) is the most common malignant brain tumor in children and occurs mainly in the cerebellum. Important intracellular signaling molecules, such as those present in the Sonic Hedgehog and Wnt pathways, are involved in its development and can also be employed to determine tumor grade and prognosis. Ectonucleotidases, particularly ecto-5'NT/CD73, are important enzymes in the malignant process of different tumor types, regulating extracellular ATP and adenosine levels. Here, we investigated the activity of ectonucleotidases in three malignant human cell lines: Daoy and ONS76, representative of primary MB, and the D283 cell line, derived from a metastatic MB. All cell lines secreted ATP into the extracellular medium while hydrolyzing this nucleotide poorly, which is in agreement with the low expression and activity of pyrophosphatase/phosphodiesterase, NTPDases, and alkaline phosphatase. The analysis of AMP hydrolysis showed that Daoy and ONS76 completely hydrolyzed AMP, with parallel adenosine production (Daoy) and inosine accumulation (ONS76). On the other hand, the D283 cell line did not hydrolyze AMP. Moreover, the primary MB tumor cells Daoy and ONS76 express ecto-5'NT/CD73, while D283, representative of a metastatic tumor, revealed poor expression of this enzyme; ecto-adenosine deaminase, in turn, showed higher expression in D283 compared to Daoy and ONS76 cells. Nuclear beta-catenin has been suggested as a marker for MB prognosis; furthermore, it can promote the expression of ecto-5'NT/CD73 and the suppression of adenosine deaminase. It was observed that Daoy and ONS76 showed greater nuclear beta-catenin immunoreactivity than D283, which presented mainly cytoplasmic immunoreactivity. In summary, the absence of ecto-5'NT/CD73 in the D283 cell line, a metastatic MB phenotype, suggests that high expression levels of this ectonucleotidase could be correlated with a poor prognosis in patients with MB.
Abstract:
Today, science is difficult to pursue because funding is so tenuous. In such a financial climate, researchers need to consider parallel alternatives to ensure that scientific research can continue. Based on this thinking, we created BIOCEANSolutions, a company born of a research group. A great variety of environmental regulations and standards have emerged over recent years with the purpose of protecting natural ecosystems. These have enabled us to link our research to the market of environmental management. Marine activities can alter environmental conditions, resulting in changes in physiological states, species diversity, abundance, and biomass in the local biological communities. In this way, we can apply our knowledge of plankton ecophysiology and biochemical oceanography. We measure enzyme activities as bio-indicators of energy metabolism and of other physiological rates and biologic-oceanographic processes in marine organisms. This information provides insight into the health of marine communities, the stress levels of individual organisms, and potential anomalies that may be affecting them. In the process of verifying standards and complying with regulations, we can apply our analytic capability and knowledge. The main analyses that we offer are: (1) the activity of the electron transport system (ETS), or potential respiration (Φ); (2) the physiological measurement of respiration (oxygen consumption); (3) the activity of isocitrate dehydrogenase (IDH); (4) the respiratory CO2 production; (5) the activity of glutamate dehydrogenase (GDH); and (6) the physiological measurement of ammonium excretion. In addition, our experience in a productive research group allows us to pursue and develop technical-experimental activities such as marine and freshwater aquaculture and oceanographic field sampling, as well as providing guidance, counseling, and academic services.
In summary, this new company will permit us to create a symbiosis between the public and private sectors that serves clients and will allow us to grow and expand as a research team.
Abstract:
The term "Brain Imaging" identifies a set of techniques to analyze the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are largely used in the study of brain activity. In addition to clinical usage, the analysis of brain activity is gaining popularity in other recent fields, e.g. Brain Computer Interfaces (BCI) and the study of cognitive processes. In this context, the usage of classical solutions (e.g. fMRI, PET-CT) can be unfeasible, due to their low temporal resolution, high cost, and limited portability. For these reasons, alternative low-cost techniques are an object of research, typically based on simple recording hardware and on an intensive data elaboration process. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where the electric potential at the patient's scalp is recorded by high-impedance electrodes. In EEG the potentials are directly generated by neuronal activity, while in EIT they result from the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from the measurements, EIT and EEG rely on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeoff between physical accuracy and technical feasibility, which currently severely limits the capabilities of these techniques. Moreover, the elaboration of the recorded data requires computationally intensive regularization techniques, which hampers applications with tight temporal constraints (such as BCI). This work focuses on the parallel implementation of a workflow for EEG and EIT data processing.
The resulting software is accelerated using multi-core GPUs, in order to provide solutions in reasonable times and address the requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
Abstract:
Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors: memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely undermine the potential computation capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and to validate design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on a hybrid HW/SW synchronization mechanism. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and it mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology are severely limiting the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation: the greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture.
By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
Abstract:
Treatment for cancer often involves combination therapies, used both in medical practice and in clinical trials. Korn and Simon listed three reasons for the utility of combinations: 1) biochemical synergism, 2) differential susceptibility of tumor cells to different agents, and 3) higher achievable dose intensity by exploiting non-overlapping toxicities to the host. Even if the toxicity profile of each agent of a given combination is known, the toxicity profile of the agents used in combination must still be established. Thus, caution is required when designing and evaluating trials with combination therapies. Traditional clinical design is based on the consideration of a single drug; however, a trial of drugs in combination requires a dose-selection procedure that is vastly different from that needed for a single-drug trial. When two drugs are combined in a phase I trial, an important trial objective is to determine the maximum tolerated dose (MTD). The MTD is defined as the dose level below the dose at which two of six patients experience drug-related dose-limiting toxicity (DLT). In phase I trials that combine two agents, more than one MTD generally exists, although all are rarely determined. For example, there may be an MTD that includes high doses of drug A with lower doses of drug B, another for high doses of drug B with lower doses of drug A, and yet another for intermediate doses of both drugs administered together. With classic phase I trial designs, only one MTD is identified. Our new trial design allows identification of more than one MTD efficiently, within the context of a single protocol. The two drugs combined in our phase I trial are temsirolimus and bevacizumab. Bevacizumab is a monoclonal antibody targeting the vascular endothelial growth factor (VEGF) pathway, which is fundamental for tumor growth and metastasis.
One mechanism of tumor resistance to antiangiogenic therapy is upregulation of hypoxia inducible factor 1α (HIF-1α), which mediates responses to hypoxic conditions. Temsirolimus has resulted in reduced levels of HIF-1α, making this an ideal combination therapy. Dr. Donald Berry developed a trial design schema for evaluating low, intermediate, and high dose levels of two drugs given in combination, as illustrated in a recently published paper in Biometrics entitled “A Parallel Phase I/II Clinical Trial Design for Combination Therapies.” His trial design utilized cytotoxic chemotherapy. We adapted this design schema by incorporating greater numbers of dose levels for each drug. Additional dose levels are being examined because it has been the experience of phase I trials that targeted agents, when given in combination, are often effective at dose levels lower than the FDA-approved doses of those drugs. A total of thirteen dose levels, including representative high, intermediate, and low dose levels of temsirolimus combined with representative high, intermediate, and low dose levels of bevacizumab, will be evaluated. We hypothesize that our new trial design will facilitate identification of more than one MTD, if they exist, efficiently and within the context of a single protocol. Doses gleaned from this approach could potentially allow for a more personalized approach to dose selection from among the MTDs obtained, based upon a patient’s specific co-morbid conditions or anticipated toxicities.
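The idea of multiple MTDs along a dose-combination grid can be illustrated with a toy search over a hypothetical matrix of DLT probabilities. The probabilities, target rate, and grid below are invented for illustration and are unrelated to the actual trial:

```python
# Rows = doses of drug A (low -> high), columns = doses of drug B (low -> high).
# Each entry is an assumed probability of dose-limiting toxicity (DLT).
p_dlt = [
    [0.05, 0.10, 0.20, 0.35],
    [0.10, 0.20, 0.33, 0.50],
    [0.25, 0.33, 0.45, 0.60],
]

def mtd_contour(p, target=0.33):
    """For each dose level of drug A, find the highest dose level of drug B
    whose DLT probability does not exceed the target rate."""
    contour = []
    for a, row in enumerate(p):
        tolerable = [b for b, prob in enumerate(row) if prob <= target]
        if tolerable:
            contour.append((a, max(tolerable)))
    return contour

print(mtd_contour(p_dlt))  # one candidate MTD pair per drug-A dose level
```

Because toxicity rises along both axes, the tolerable region is bounded by a contour rather than a single point, which is why a combination trial can legitimately yield several MTDs.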
Abstract:
The PROPELLER (Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction) magnetic resonance imaging (MRI) technique has inherent advantages over other fast imaging methods, including robust motion correction, reduced image distortion, and resistance to off-resonance effects. These features make PROPELLER highly desirable for T2*-sensitive imaging, high-resolution diffusion imaging, and many other applications. However, PROPELLER has been predominantly implemented as a fast spin-echo (FSE) technique, which is insensitive to T2* contrast and requires time-inefficient signal averaging to achieve adequate signal-to-noise ratio (SNR) for many applications. These issues presently constrain the potential clinical utility of FSE-based PROPELLER. In this research, our aim was to extend and enhance the potential applications of PROPELLER MRI by developing a novel multiple gradient echo PROPELLER (MGREP) technique that can overcome the aforementioned limitations. The MGREP pulse sequence was designed to acquire multiple gradient-echo images simultaneously, without any increase in total scan time or RF energy deposition relative to FSE-based PROPELLER. A new parameter was also introduced for direct user control over gradient echo spacing, to allow variable sensitivity to T2* contrast. In parallel with pulse sequence development, an improved algorithm for motion correction was also developed and evaluated against the established method through extensive simulations. The potential advantages of MGREP over FSE-based PROPELLER were illustrated via three specific applications: (1) quantitative T2* measurement, (2) time-efficient signal averaging, and (3) high-resolution diffusion imaging. Relative to the FSE-PROPELLER method, the MGREP sequence was found to yield quantitative T2* values, increase SNR by ∼40% without any increase in acquisition time or RF energy deposition, and noticeably improve image quality in high-resolution diffusion maps. In addition, the new motion algorithm was found to considerably improve motion-artifact reduction. Overall, this work demonstrated a number of enhancements and extensions to existing PROPELLER techniques. The new technical capabilities of PROPELLER imaging developed in this thesis research are expected to serve as the foundation for further expanding the scope of PROPELLER applications.
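Quantitative T2* measurement from multi-echo data such as MGREP produces is, at its core, a mono-exponential fit of signal versus echo time. A minimal sketch with synthetic noiseless data (the echo times and T2* value are invented for illustration):

```python
import math

def fit_t2star(echo_times_ms, signals):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-TE / T2*).
    Returns T2* in the same units as the echo times."""
    xs = echo_times_ms
    ys = [math.log(s) for s in signals]       # linearize the exponential decay
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -1.0 / slope                       # slope of log-signal is -1/T2*

# Synthetic multi-echo signal: S0 = 100, T2* = 30 ms, echoes every 5 ms
tes = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
sig = [100.0 * math.exp(-te / 30.0) for te in tes]
print(f"fitted T2* = {fit_t2star(tes, sig):.1f} ms")  # recovers 30.0 ms
```

The user-controllable echo spacing mentioned in the abstract corresponds to the spacing of `tes` here: wider spacing increases sensitivity to the T2* decay being fitted.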
Abstract:
Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this article is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The article describes the major techniques used for shared memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory management, compile-time analysis, and execution visualization.
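Or-parallelism, one of the forms surveyed in the article, explores the alternative clauses of a nondeterministic goal concurrently. A minimal Python sketch standing in for a real Prolog engine (the "clauses" here are just thunks returning the candidate solutions found down each branch; a real system must also manage per-branch binding environments):

```python
from concurrent.futures import ThreadPoolExecutor

# Each "clause" explores one alternative of a nondeterministic goal and
# returns the solutions found down that branch (possibly none).
def clause_a():
    return [1]          # first alternative succeeds with X = 1

def clause_b():
    return []           # second alternative fails (no solutions)

def clause_c():
    return [2, 3]       # third alternative yields two solutions

def or_parallel(clauses):
    """Run every alternative in parallel and merge their solutions."""
    with ThreadPoolExecutor() as pool:
        branches = list(pool.map(lambda c: c(), clauses))
    return [sol for branch in branches for sol in branch]

print(sorted(or_parallel([clause_a, clause_b, clause_c])))  # [1, 2, 3]
```

And-parallelism would instead run the conjunctive subgoals of a single clause body concurrently, which is why the two forms raise different binding-management and memory-management issues, as the survey discusses.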