Abstract:
Background: Aortic aneurysm and dissection are important causes of death in older people. Ruptured aneurysms show catastrophic fatality rates approaching 80%. Few population-based mortality studies have been published worldwide and none in Brazil. The objective of the present study was to use multiple-cause-of-death methodology to analyze mortality trends related to aortic aneurysm and dissection in the state of São Paulo between 1985 and 2009. Methods: We analyzed mortality data from the São Paulo State Data Analysis System, selecting all death certificates on which aortic aneurysm and dissection were listed as a cause of death. The variables sex, age, season of the year, and underlying, associated or total mentions of causes of death were studied using standardized mortality rates, proportions and historical trends. Statistical analyses were performed using the chi-square goodness-of-fit test, the Kruskal-Wallis H test, and analysis of variance. The joinpoint regression model was used to evaluate changes in trends of age-standardized rates. A p value less than 0.05 was regarded as significant. Results: Over the 25-year period, there were 42,615 deaths related to aortic aneurysm and dissection, of which 36,088 (84.7%) were identified as the underlying cause and 6,527 (15.3%) as an associated cause of death. Dissection and ruptured aneurysms were recorded as the underlying cause of death in 93% of the deaths. For the entire period, a significant upward trend in age-standardized death rates was observed in men and women, while non-significant decreases occurred from 1996/2004 until 2009. Abdominal aortic aneurysms and aortic dissections prevailed among men, and aortic dissections and aortic aneurysms of unspecified site among women. In 1985 and 2009, male-to-female death rate ratios were 2.86 and 2.19, respectively, corresponding to a 23.4% decrease in the difference between rates. For aortic dissection, ruptured aneurysms and non-ruptured aneurysms, the overall mean ages at death were 63.2, 68.4 and 71.6 years, respectively. When listed as the underlying cause, the main associated causes of death were hemorrhages (in 43.8%/40.5%/13.9%), hypertensive diseases (in 49.2%/22.43%/24.5%) and atherosclerosis (in 14.8%/25.5%/15.3%); when listed as an associated cause, the principal underlying causes of death were diseases of the circulatory (55.7%) and respiratory (13.8%) systems and neoplasms (7.8%). A significant seasonal variation, with the highest frequency in winter, occurred in deaths identified as the underlying cause for aortic dissection, ruptured and non-ruptured aneurysms. Conclusions: This study introduces multiple-cause-of-death methodology to enhance epidemiologic knowledge of aortic aneurysm and dissection in São Paulo, Brazil. The results highlight the importance of mortality statistics and the need for epidemiologic studies to understand trends unique to our own population.
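The age-standardized rates above come from direct standardization: age-specific rates are weighted by a standard population so that rates from populations with different age structures become comparable. A minimal Python sketch of the direct method follows; the age strata, death counts, populations, and standard-population weights are all hypothetical, not the study's data.

```python
# A minimal sketch of direct age standardization; all numbers below are
# hypothetical illustrations, not values from the study.

def age_standardized_rate(deaths, population, std_weights):
    """Directly standardized rate per 100,000: age-specific rates
    weighted by standard-population proportions."""
    assert len(deaths) == len(population) == len(std_weights)
    total_weight = sum(std_weights)
    rate = sum(d / p * w for d, p, w in zip(deaths, population, std_weights))
    return rate / total_weight * 100_000

# Hypothetical data for three broad age strata (45-59, 60-74, 75+).
deaths     = [120, 540, 890]
population = [2_500_000, 1_400_000, 600_000]
weights    = [0.15, 0.09, 0.04]   # standard-population proportions (example)

print(f"ASR: {age_standardized_rate(deaths, population, weights):.1f} per 100,000")
```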
Abstract:
The objective of the present study was to determine whether there is a relationship between serum levels of brain-derived neurotrophic factor (BDNF) and the number of T2/fluid-attenuated inversion recovery (T2/FLAIR) lesions in multiple sclerosis (MS). The use of magnetic resonance imaging (MRI) has revolutionized the study of MS. However, MRI has limitations, and other biomarkers such as BDNF may be useful for clinical assessment and for the study of the disease. Serum was obtained from 28 MS patients (18-50 years old, median 38; 21 women; disease duration 0.5-10 years, median 5; EDSS 1-4, median 1.5) and 28 healthy controls (19-49 years old, median 33; 19 women). BDNF levels were measured by ELISA. T1, T2/FLAIR and gadolinium-enhanced lesions were quantified by a trained radiologist. BDNF was reduced in MS patients (median [range], 1160 [352.6-2640] pg/mL) compared to healthy controls (1640 [632.4-4268] pg/mL; P = 0.03, Mann-Whitney test) and was negatively correlated (Spearman correlation test, r = -0.41; P = 0.02) with the number of T2/FLAIR lesions (11-81, median 42). We found that serum BDNF levels were inversely correlated with the number of T2/FLAIR lesions in patients with MS. BDNF may be a promising biomarker of MS.
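For readers wishing to reproduce this kind of analysis, a minimal sketch of the two reported tests (Mann-Whitney for the group comparison, Spearman for the lesion correlation) using scipy is shown below, with synthetic data in place of the study's serum measurements.

```python
# A minimal sketch of the reported statistics on hypothetical data;
# the study's actual serum values and lesion counts are not reproduced.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bdnf_ms       = rng.normal(1200, 300, 28)    # hypothetical MS patients (pg/mL)
bdnf_controls = rng.normal(1700, 400, 28)    # hypothetical healthy controls
t2_flair_lesions = rng.integers(11, 82, 28)  # hypothetical lesion counts

u, p_group = stats.mannwhitneyu(bdnf_ms, bdnf_controls, alternative="two-sided")
rho, p_corr = stats.spearmanr(bdnf_ms, t2_flair_lesions)

print(f"Mann-Whitney U={u:.0f}, p={p_group:.3f}")
print(f"Spearman rho={rho:.2f}, p={p_corr:.3f}")
```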
Abstract:
Brazil is expected to have 19.6 million patients with diabetes by the year 2030. A key concept in the treatment of type 2 diabetes mellitus (T2DM) is establishing individualized glycemic goals based on each patient's clinical characteristics, which affect the choice of antihyperglycemic therapy. Targets for glycemic control, including fasting blood glucose, postprandial blood glucose, and glycated hemoglobin (A1C), are often not reached with antihyperglycemic therapy alone, and insulin therapy is often required. Basal insulin is considered an initial strategy; however, premixed insulins are convenient and equally or more effective, especially for patients who require both basal and prandial control but prefer a simpler strategy involving fewer daily injections than a basal-bolus regimen. Most physicians are reluctant to transition patients to insulin treatment because of inappropriate assumptions and insufficient information. We conducted a nonsystematic PubMed review and identified the most relevant recently published articles comparing premixed insulin with basal insulin analogues used alone or in combination with rapid-acting insulin analogues before meals in patients with T2DM. These studies suggest that premixed insulin analogues are equally or more effective in reducing A1C than basal insulin analogues alone, despite a small increase in the risk of nonsevere hypoglycemic events and clinically nonsignificant weight gain. Premixed insulin analogues can be used in insulin-naïve patients, in patients already on basal insulin therapy, and in patients on basal-bolus therapy who are noncompliant with blood glucose self-monitoring and titration of multiple insulin doses. We additionally provide practical guidance on titration for the specific premixed insulin analogue formulations commercially available in Brazil.
Abstract:
The principal capsular component of Cryptococcus neoformans, glucuronoxylomannan (GXM), interacts with surface glycans, including chitin-like oligomers. Although the role of GXM in cryptococcal infection has been well explored, there is no information on how chitooligomers affect fungal pathogenesis. In this study, surface chitooligomers of C. neoformans were blocked with wheat germ agglutinin (WGA), and the effects on animal pathogenesis, interaction with host cells, fungal growth and capsule formation were analyzed. Treatment of C. neoformans cells with WGA followed by infection of mice delayed mortality relative to animals infected with untreated fungal cells. This observation was associated with reduced brain colonization by lectin-treated cryptococci. Blocking chitooligomers also rendered yeast cells less efficient in their ability to associate with phagocytes. WGA did not affect fungal viability, but it inhibited GXM release to the extracellular space and capsule formation. In WGA-treated yeast cells, genes involved in capsule formation and GXM traffic showed decreased transcription levels compared with untreated cells. Our results suggest that blocking chitin-derived structures at the cell surface of C. neoformans affects cellular pathways required for capsule formation and pathogenic mechanisms. Targeting chitooligomers with specific ligands may reveal new therapeutic alternatives for controlling cryptococcosis.
Abstract:
Network reconfiguration for service restoration (SR) in distribution systems is a complex optimization problem. For large-scale distribution systems, it is computationally hard to find adequate SR plans in real time, since the problem is combinatorial and non-linear and involves several constraints and objectives. Two multi-objective evolutionary algorithms that use Node-Depth Encoding (NDE) have proved able to efficiently generate adequate SR plans for large distribution systems: (i) a hybridization of the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) with NDE, named NSGA-N; and (ii) a multi-objective evolutionary algorithm based on subpopulation tables that uses NDE, named MEAN. Two further challenges remain: designing SR plans for larger systems that are as good as those for relatively smaller ones, and handling multiple faults as effectively as a single fault. To tackle both challenges, this paper proposes a method combining NSGA-N, MEAN and a new heuristic. The heuristic focuses the application of NDE operators on alarming network zones according to technical constraints. The method generates SR plans of similar quality in distribution systems of significantly different sizes (from 3,860 to 30,880 buses). Moreover, the number of switching operations required to implement the SR plans generated by the proposed method increases only moderately with the number of faults.
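As an illustration of the machinery involved, the sketch below implements the fast non-dominated sorting step at the core of NSGA-II, which partitions candidate SR plans into Pareto fronts. The actual SR objectives, constraints, and Node-Depth Encoding operators are not reproduced here, and the objective values are hypothetical.

```python
# A minimal sketch of NSGA-II's fast non-dominated sorting; the SR-specific
# objectives and the Node-Depth Encoding itself are not reproduced here.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Partition solutions (lists of objective vectors) into Pareto fronts."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]   # indices that solution i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Hypothetical objective vectors: (switching operations, out-of-service load).
plans = [(4, 0.10), (6, 0.05), (4, 0.05), (8, 0.20)]
print(non_dominated_sort(plans))   # [[2], [0, 1], [3]]
```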
Abstract:
The purpose of this thesis is to analyse interactions between freshwater flows, terrestrial ecosystems and human well-being. Freshwater management and policy have mainly focused on the liquid water part (surface and groundwater runoff) of the hydrological cycle, including aquatic ecosystems. Although such a focus is of great significance, this thesis shows that it will not be sufficient for coping with freshwater-related social-ecological vulnerability. The thesis illustrates that the terrestrial component of the hydrological cycle, reflected in vapour flows (or evapotranspiration), serves multiple functions in the human life-support system. A broader understanding of the interactions between terrestrial systems and freshwater flows is particularly important in light of the present widespread land cover change in terrestrial ecosystems. The water vapour flows from continental ecosystems were quantified at a global scale in Paper I of the thesis. It was estimated that, in order to sustain the majority of the global terrestrial ecosystem services on which humanity depends, an annual water vapour flow of 63,000 km3/yr is needed, including 6,800 km3/yr for crop production. In comparison, the annual human withdrawal of liquid water amounts to roughly 4,000 km3/yr. A potential conflict between freshwater for future food production and for terrestrial ecosystem services was identified. Human redistribution of water vapour flows as a consequence of long-term land cover change was addressed at both the continental (Australia) (Paper II) and global scales (Paper III). It was estimated that the annual vapour flow in Australia had decreased by 10% during the last 200 years, due to the clearing of woody vegetation for agricultural production. The reduction in vapour flows has caused severe soil and river salinity problems. The human-induced alteration of vapour flows was estimated at more than 15 times the volume of the human-induced change in liquid water (Paper II).
Abstract:
In recent years we have developed several methods for 3D reconstruction. We began with the problem of reconstructing a 3D scene from a stereoscopic pair of images, developing methods based on energy functionals that produce dense disparity maps while preserving discontinuities at image boundaries. We then moved to the problem of reconstructing a 3D scene from multiple views (more than two). The multiple-view reconstruction method relies on the stereoscopic method: for every pair of consecutive images we estimate a disparity map, and then apply a robust method that searches for good correspondences through the sequence of images. Recently we have proposed several methods for 3D surface regularization, a postprocessing step necessary for smoothing the final surface, which may be affected by noise or mismatched correspondences. These regularization methods are interesting because they use information from the reconstruction process and not only from the 3D surface. We have tackled all these problems with an energy minimization approach: we derive the associated Euler-Lagrange equation of the energy functional, and we approach the solution of the underlying partial differential equation (PDE) using a gradient descent method.
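A minimal sketch of the described scheme, reduced to a simple quadratic energy: the Euler-Lagrange equation of E(u) = ∫ (u - f)^2 + λ|∇u|^2 dx is u - f - λΔu = 0, which the code below solves by explicit gradient descent. The thesis' actual functionals are discontinuity-preserving and more elaborate; only the structure of the approach is illustrated, on synthetic data.

```python
# A minimal sketch: gradient descent on the quadratic regularization energy
# E(u) = sum (u - f)^2 + lam * |grad u|^2, whose Euler-Lagrange equation is
# u - f - lam * laplacian(u) = 0. Input data are synthetic.
import numpy as np

def laplacian(u):
    """5-point discrete Laplacian with replicated (Neumann) boundaries."""
    p = np.pad(u, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u

def minimize_energy(f, lam=2.0, tau=0.1, iters=200):
    """Explicit gradient descent; tau * lam kept below 0.25 for stability."""
    u = f.copy()
    for _ in range(iters):
        grad = (u - f) - lam * laplacian(u)   # gradient of the energy
        u -= tau * grad                        # descent step
    return u

f = np.random.default_rng(1).normal(size=(64, 64))   # noisy input (synthetic)
u = minimize_energy(f)
print(f"mean |laplacian|: {np.abs(laplacian(u)).mean():.4f} "
      f"(input: {np.abs(laplacian(f)).mean():.4f})")
```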
Abstract:
Indoor position estimation has become an attractive research topic due to growing interest in location-aware services. Nevertheless, no existing solution satisfies both accuracy and system-complexity requirements. From the perspective of lightweight mobile devices, these characteristics are extremely important, because both processor power and energy availability are limited; an indoor localization system with high computational complexity can completely drain the battery within a few hours. In our research, we use a data mining technique named boosting to develop a localization system based on multiple weighted decision trees to predict the device location, since this approach offers high accuracy with low computational complexity.
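A minimal sketch of this idea is shown below, assuming a fingerprint of received signal strengths (RSSI) from a few access points as features; scikit-learn's AdaBoost over shallow decision trees stands in for the system described above, and all data are synthetic.

```python
# A minimal sketch of boosted decision trees for indoor localization;
# the feature layout (RSSI from 4 access points) and data are hypothetical.
# scikit-learn >= 1.2 uses `estimator`; older versions call it `base_estimator`.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 300
X = rng.uniform(-90, -30, size=(n, 4))   # RSSI (dBm) from 4 APs, synthetic
y = (X[:, 0] > X[:, 1]).astype(int)      # toy labeling rule: two rooms

clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),  # shallow, cheap trees
    n_estimators=50,
)
clf.fit(X[:250], y[:250])
print("held-out accuracy:", clf.score(X[250:], y[250:]))
```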
Abstract:
The progress of electron device integration has continued for more than 40 years following the well-known Moore's law, which states that transistor density on chip doubles every 24 months. This trend has been made possible by the downsizing of MOSFET dimensions (scaling); however, new issues and challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. To overcome the limitations of conventional structures, the research community is preparing different solutions that need to be assessed. Possible solutions currently under scrutiny include: devices incorporating materials with properties different from those of silicon for the channel and the source/drain regions; and new architectures such as Silicon-On-Insulator (SOI) transistors, where the body thickness of Ultra-Thin-Body SOI devices is a new design parameter that makes it possible to keep short-channel effects under control without adopting high doping levels in the channel. Among the solutions proposed to overcome the difficulties related to scaling, heterojunctions at the channel edge stand out; these are obtained by adopting, for the source/drain regions, materials with a band gap different from that of the channel material. This solution increases the injection velocity of the carriers travelling from the source into the channel, and therefore improves transistor performance in terms of delivered drain current. The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; it also describes the modifications introduced in the Monte Carlo code to simulate conduction-band discontinuities, together with the simulations performed on simplified one-dimensional structures to validate them. Chapter 4 presents the results of Monte Carlo simulations of double-gate SOI transistors featuring conduction-band offsets between the source/drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered. The scaling of device dimensions and the adoption of innovative architectures also have consequences for power dissipation. In SOI technologies the channel is thermally insulated from the underlying substrate by a SiO2 buried-oxide layer; this SiO2 layer has a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects (SHE), which detrimentally affect carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated-circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices and provides a brief overview of the methods proposed to model these phenomena.
To understand how this problem affects the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors as well as FinFETs featuring the same isothermal electrical characteristics. In chapter 6 the same simulation approach is employed extensively to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperatures reached inside the device, and the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, have been analyzed. Furthermore, the consequences for self-heating of technological solutions such as raised S/D extension regions or reduced fin height are explored. Finally, conclusions are drawn in chapter 7.
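As a rough illustration of why the buried oxide matters, a back-of-the-envelope 1D estimate of the thermal resistance across the BOX, R_th = t / (k A), and the resulting temperature rise ΔT = P R_th, is sketched below. All geometry and power figures are illustrative assumptions; real analysis requires the 3D electro-thermal simulations used in the thesis.

```python
# A rough 1D thermal-resistance estimate across the buried oxide (BOX);
# all geometry and power numbers are illustrative, not device data.

k_sio2 = 1.4          # W/(m*K), thermal conductivity of SiO2
k_si   = 148.0        # W/(m*K), bulk silicon (about two orders higher)

t_box  = 100e-9       # m, buried-oxide thickness (example)
area   = 1e-12        # m^2, heated footprint ~ 1 um x 1 um (example)
power  = 50e-6        # W, dissipated in the active region (example)

r_th_box = t_box / (k_sio2 * area)   # K/W through the BOX slab
delta_t  = power * r_th_box          # temperature-rise estimate

print(f"R_th(BOX) = {r_th_box:.3g} K/W, deltaT ~ {delta_t:.1f} K")
# The same slab of silicon would give a ~100x smaller resistance,
# illustrating why SOI devices self-heat more than bulk ones.
```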
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from collected disease counts and expected disease counts calculated by means of reference-population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method that focuses on multiple testing control, without however abandoning the preliminary-study perspective that an analysis of SMR indicators calls for. We implement control of the False Discovery Rate (FDR), a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just the single observation. This improves the test power in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (denoted FDR-hat) can be calculated for any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. The FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the FDR-hat does not exceed a prefixed value; we call these FDR-hat-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model retains the interesting feature of providing an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results, we always consider the FDR estimate in sets constituted by all b_i's lower than a threshold t. We show graphs of FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold, we can learn which FDR values can be accurately estimated by a practitioner applying the model (from the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against FDR-hat, we can check the sensitivity and specificity of the corresponding FDR-hat-based decision rules. To investigate the over-smoothing of the relative risk estimates, we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools were worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat-based decision rules is generally low, but specificity is high; in such scenarios, selection rules based on FDR-hat = 0.05 or FDR-hat = 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on FDR-hat = 0.15 gain power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels, the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a decision rule based on FDR-hat = 0.05. In such scenarios, decision rules based on FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 cannot be recommended because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
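A minimal sketch of the FDR-hat selection rule described above: given posterior null probabilities b_i, areas with b_i below a threshold t are declared at high risk, and the FDR of that selection is estimated by averaging the selected b_i. The values below are hypothetical, not MCMC output.

```python
# A minimal sketch of FDR estimation from posterior null probabilities;
# the b_i values are hypothetical, not output of the hierarchical model.
import numpy as np

def fdr_selection(b, t):
    """Select areas with posterior null probability below t; the estimated
    FDR of the selection is the average selected null probability."""
    selected = np.flatnonzero(b < t)
    fdr_hat = b[selected].mean() if selected.size else 0.0
    return selected, fdr_hat

b = np.array([0.01, 0.03, 0.40, 0.08, 0.75, 0.02, 0.90])  # hypothetical b_i
for t in (0.05, 0.10, 0.50):
    sel, fdr_hat = fdr_selection(b, t)
    print(f"t={t:.2f}: areas {sel.tolist()}, estimated FDR={fdr_hat:.3f}")
```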
Abstract:
The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to higher accuracy directly translates into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process at higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the tools necessary to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops, it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
Abstract:
Images of a scene, static or dynamic, are generally acquired at different epochs from different viewpoints. They potentially gather information about the whole scene and its relative motion with respect to the acquisition device. Data from different visual sources (in the spatial or temporal domain) can be fused together to provide a unique, consistent representation of the whole scene, even recovering the third dimension, permitting a more complete understanding of the scene content. Moreover, the pose of the acquisition device can be obtained by estimating the relative motion parameters linking different views, thus providing localization information for automatic guidance purposes. Image registration is based on the use of pattern recognition techniques to match corresponding parts of different views of the acquired scene. Depending on hypotheses or prior information about the sensor model, the motion model and/or the scene model, this information can be used to estimate global or local geometrical mapping functions between different images or different parts of them. These mapping functions contain the relative motion parameters between the scene and the sensor(s) and can be used to integrate information coming from the different sources to build a wider or even augmented representation of the scene. For their scene reconstruction and pose estimation capabilities, multiple-view image registration techniques are nowadays increasingly attracting the interest of the scientific and industrial community. Depending on the application domain, the accuracy, robustness, and computational load of the algorithms are important issues to be addressed, and generally a trade-off among them has to be reached. Moreover, on-line performance is desirable in order to guarantee the direct interaction of the vision device with human actors or control systems. This thesis follows a general research approach to cope with these issues, almost independently of the scene content, under the constraint of rigid motions. This approach was motivated by portability to very different domains, a highly desirable property. A general image registration approach suitable for on-line applications has been devised and assessed through two challenging case studies in different application domains. The first case study regards scene reconstruction through on-line mosaicing of optical microscopy cell images acquired with non-automated equipment, while the microscope holder is moved manually. By registering the images, the field of view of the microscope can be widened, preserving the resolution while reconstructing the whole cell culture and permitting the microscopist to explore the cell culture interactively. In the second case study, the registration of terrestrial satellite images acquired by a camera integral with the satellite is used to estimate its three-dimensional orientation from visual data, for automatic guidance purposes. Critical aspects of these applications are emphasized and the choices adopted are motivated accordingly. Results are discussed in view of promising future developments.
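A minimal sketch of feature-based registration between two views, in the spirit of the mosaicing case study: local features are matched and a global mapping (here a homography) is fitted with a robust RANSAC estimator. OpenCV is used purely for illustration; the file names are placeholders, and the thesis' own algorithms are not reproduced.

```python
# A minimal sketch of feature-based registration between two views;
# file names are placeholders and OpenCV stands in for the thesis' methods.
import cv2
import numpy as np

img1 = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-check for reliable correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)

src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robustly estimate the global mapping; the inlier mask flags good matches.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
print("estimated homography:\n", H)

# Warp one view into the other's frame to extend the mosaic canvas.
warped = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
```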
Abstract:
The aim of the present thesis was to better understand the physiological role of the phytohormones jasmonates (JAs) and abscisic acid (ABA) during fruit ripening, in view of a possible field application of JAs and ABA to improve fruit yield and quality. In particular, the effects of exogenous application of these substances at different fruit developmental stages and under different experimental conditions were evaluated. Some aspects of water relations upon ABA treatment were also analysed. Three fruit species, peach (Prunus persica L. Batsch), golden kiwifruit (Actinidia chinensis) and green kiwifruit (Actinidia deliciosa), and several of their cultivars (cvs) were used for the trials. Different experimental models were adopted: fruit in planta, detached fruit, detached branches with fruit, girdled branches and micropropagated plants. The work was structured into four sets of experiments as follows: (i) Pre-harvest methyl jasmonate (MJ) application was performed at the S3/S4 transition under field conditions in Redhaven peach; ethylene production, ripening index, fruit quality and shelf-life were assessed, showing that MJ-treated fruit were firmer, and thus less ripe, than controls, as confirmed by the Index of Absorbance Difference (IAD), but exhibited a shorter shelf-life due to increased ethylene production. Moreover, the time course of the expression of ethylene-, auxin- and other ripening-related genes was determined. Ripening-related ACO1 and ACS1 transcript accumulation was transiently inhibited by MJ, and the expression of the ethylene receptor gene ETR2 and of the ethylene-related transcription factor gene ERF2 was also altered. The time course of the expression of several auxin-related genes was strongly affected by MJ, suggesting an increase in auxin biosynthesis and altered auxin conjugation and release, as well as perception and transport; the need for a correct ethylene/auxin balance during ripening was confirmed. (ii) Pre- and post-harvest ABA applications were carried out under field conditions in Flaminia and O'Henry peach and Stark Red Gold nectarine fruit; ethylene production, ripening index, fruit quality and shelf-life were assessed. Results show that pre-harvest ABA applications increase fruit size and skin color intensity. Post-harvest ABA treatments also alter ripening-related parameters; in particular, while ethylene production is impaired in ABA-treated fruit, soluble solids concentration (SSC) is enhanced. Following field ABA applications, stem water potential was modified, since ABA-treated peach trees retain more water. (iii) Pre- and post-harvest ABA and PDJ treatments were carried out in both kiwifruit species under field conditions at different fruit developmental stages and in post-harvest. Ripening index, fruit quality, plant transpiration, photosynthesis and stomatal conductance were assessed. Pre-harvest treatments enhance SSC in the two cvs and flesh color development in golden kiwifruit. Post-harvest applications of either ABA or ABA plus PDJ lead to increased SSC. In addition, ABA reduces gas exchange in A. deliciosa. (iv) Spray, drench and dipping ABA treatments were performed on micropropagated peach plants and on peach and nectarine detached branches; plant water use and transpiration, biomass production and fruit dehydration were determined. In both plants and branches, ABA significantly reduces water use and fruit dehydration. No negative effects on biomass production were detected.
The present information, arising mainly from plant growth regulator application in a field environment, where plants have to cope with multiple biotic and abiotic stresses, may broaden the prospects for the use of these substances in the control of fruit ripening.
Abstract:
Understanding the biology of multiple myeloma (MM) is of primary importance in the struggle to achieve a cure for this as yet incurable neoplasm. Better knowledge of the mechanisms underlying the development of MM can guide the development of new treatment strategies. Studies on both solid and haematological tumours have shown that cancer comprises a collection of related but subtly different clones, a feature termed "intra-clonal heterogeneity". From a "Darwinian" natural-selection perspective, this intra-clonal heterogeneity is likely the essential substrate for cancer evolution, disease progression and relapse. In this context, the critical mechanism for tumour progression is competition between individual clones (and cancer stem cells) for the same microenvironmental "niche", combined with the processes of adaptation and natural selection. The Darwinian behavioural characteristics of cancer stem cells are applicable to MM. The knowledge that intra-clonal heterogeneity is an important feature of tumour biology has changed our way of addressing cancer, now considered a composite mixture of clones rather than a linearly evolving disease. In this variable therapeutic landscape, it is important for clinicians and researchers to consider the impact that evolutionary biology and intra-clonal heterogeneity have on the treatment of myeloma and the emergence of treatment resistance. It is clear that if we want to cure myeloma effectively, it is of primary importance to understand disease biology and evolution. Only by doing so will we be able to use all of the new tools at our disposal to cure myeloma in the most effective way possible. The aim of the present research project was to investigate the presence of intra-clonal heterogeneity in MM patients at different levels, and to evaluate the impact of treatment on clonal evolution and on patients' outcomes.