983 results for best linear unbiased predictor
Abstract:
Sugarcane cultivation currently plays an important role in Brazil, especially with regard to alternative energy sources. In 2009, in the municipality of Suzanapolis (SP), in the Brazilian Cerrado, an experiment was conducted with sugarcane on a eutrophic Red soil, with the aim of selecting, by means of Pearson correlation coefficients, simple and multiple linear regression modelling, and spatial correlation, the technological and yield components that best explain the variability of sugarcane productivity. A geostatistical grid with 120 sampling points was installed to collect the data over an area of 14.53 ha. Among the simple linear regressions, plant population is the yield component with the best quadratic relation to sugarcane productivity, given by: PRO = -0.553**POP² + 16.14*POP - 15.77. For multiple linear regression, however, the equation PRO = -21.11 + 4.92POP** + 0.76PUR** is the one that best estimates that productivity. Spatially, the best correlation with sugarcane yield is likewise given by the plant population component.
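The two regression forms quoted above can be illustrated with a short sketch. This is not the paper's code: it fits the same two model shapes (a quadratic in plant population, POP, and a multiple linear model in POP and juice purity, PUR) on synthetic data with hypothetical variable ranges, using statsmodels.

```python
# Minimal sketch of the two regression forms from the abstract, on synthetic
# data (the original field measurements are not available here).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
pop = rng.uniform(8, 18, 120)            # plant population (hypothetical units)
pur = rng.uniform(75, 90, 120)           # juice purity, % (hypothetical)
pro = -0.553 * pop**2 + 16.14 * pop - 15.77 + rng.normal(0, 3, 120)

# Simple quadratic regression: PRO on POP and POP^2
X_quad = sm.add_constant(np.column_stack([pop, pop**2]))
quad = sm.OLS(pro, X_quad).fit()
print(quad.params)                       # intercept, linear, quadratic terms

# Multiple linear regression: PRO on POP and PUR
X_mult = sm.add_constant(np.column_stack([pop, pur]))
mult = sm.OLS(pro, X_mult).fit()
print(mult.summary().tables[1])
```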
Abstract:
No reliable ultrasound (US) parameters have been established to differentiate the causes of graft dysfunction. This study aimed to define US parameters and identify predictors of normal graft evolution, delayed graft function (DGF), and rejection in the early period after kidney transplantation. Between June 2012 and August 2013, 79 renal transplant recipients underwent US examination 1-3 days posttransplantation. Resistive index (RI), power Doppler (PD), and RI + PD (quantified PD) were assessed. Patients were allocated into three groups: normal graft evolution, DGF, and rejection. The resistive index of the upper and middle segments and PD were higher in the DGF group than in the normal group. ROC curve analysis revealed that RI + PD was the index that best correlated with DGF (cutoff = 0.84). In the high RI + PD group, time to renal function recovery (6.33 ± 6.5 days) and number of dialysis sessions (2.81 ± 2.8) were greater than in the low RI + PD group (2.11 ± 5.3 days and 0.69 ± 1.5 sessions, respectively; p = 0.0001). Multivariate analysis showed that high donor final creatinine, with a relative risk (RR) of 19.7 (2.01-184.7, p = 0.009), and older donor age (RR = 1.17 (1.04-1.32), p = 0.007) were associated with a risk of DGF. Quantified PD (RI + PD) was the best DGF predictor. PD quantification has not been previously reported.
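A minimal sketch of the ROC-based cutoff selection described above, on synthetic RI + PD scores (the clinical data and the paper's exact procedure are not reproduced; the Youden index used here is one common way to pick a cutoff):

```python
# Hedged sketch: choosing an RI + PD cutoff for DGF via a ROC curve, on
# synthetic scores with invented group means and spreads.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
y = np.concatenate([np.zeros(50), np.ones(29)])          # 0 = normal, 1 = DGF
ri_pd = np.concatenate([rng.normal(0.75, 0.07, 50),      # hypothetical RI+PD
                        rng.normal(0.90, 0.07, 29)])

fpr, tpr, thresholds = roc_curve(y, ri_pd)
youden = tpr - fpr                                       # Youden's J statistic
best = thresholds[np.argmax(youden)]
print(f"AUC = {roc_auc_score(y, ri_pd):.2f}, cutoff = {best:.2f}")
```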
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Osteoporosis is well recognized as a complication of cirrhosis; however, most studies assessing this condition included only patients on liver transplantation waiting lists, a group with an elevated rate of bone disease. Although general population studies show that handgrip strength is clearly associated with bone mineral density, this tool had not previously been applied to the bone condition of cirrhotic patients. This study aimed to evaluate whether handgrip strength and bone and liver tests may be useful predictors of bone disease in cirrhotic outpatients. 129 subjects were included (77 men and 52 women). Dual-energy X-ray absorptiometry was used to evaluate lumbar-spine and femoral-neck T-scores. Osteoporosis/osteopenia rates were 26.3%/35.6% in the lumbar spine and 6.9%/41.8% in the femoral neck, respectively. Model selection was based on backward procedures to find the best predictors of low T-scores. For the lumbar spine, only low handgrip strength and high parathyroid hormone levels were clearly related to low T-scores. For the femoral neck, only age was associated with low T-scores. Handgrip strength may serve as an effective predictor of a low lumbar-spine T-score among cirrhotic outpatients. As cirrhosis affects the lumbar spine more than the femoral neck, these results suggest that handgrip strength should be tested in all cirrhotic patients as a first indicator of bone health.
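The backward selection procedure mentioned above can be sketched as follows; the predictor names and data are hypothetical, and the p-value-based elimination rule is an assumption, since the abstract does not state the exact criterion used:

```python
# Sketch of a backward-elimination loop for the T-score models: drop the
# least significant predictor until all remaining p-values pass a threshold.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 129
df = pd.DataFrame({
    "handgrip": rng.normal(30, 8, n),      # hypothetical units (kgf)
    "pth": rng.normal(55, 20, n),          # parathyroid hormone
    "age": rng.normal(55, 10, n),
})
t_score = -0.04 * df["handgrip"] + 0.01 * df["pth"] + rng.normal(0, 1, n)

def backward_select(y, X, alpha=0.05):
    X = sm.add_constant(X)
    while True:
        fit = sm.OLS(y, X).fit()
        pvals = fit.pvalues.drop("const")
        if pvals.max() <= alpha or len(pvals) == 1:
            return fit
        X = X.drop(columns=pvals.idxmax())  # drop the weakest predictor

print(backward_select(t_score, df).params)
```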
Abstract:
Graduate Program in Architecture and Urbanism - FAAC
Abstract:
Background: Although linear growth during childhood may be affected by early-life exposures, few studies have examined whether the effects of these exposures linger into school age, particularly in low- and middle-income countries. Methods: We conducted a population-based longitudinal study of 256 children living in the Brazilian Amazon, aged 0.1 y to 5.5 y in 2003. Data regarding socioeconomic and maternal characteristics, infant feeding practices, morbidities, and birth weight and length were collected at the study baseline (2003). Child body length/height was measured at baseline and at follow-up visits (in 2007 and 2009). Restricted cubic splines were used to construct average height-for-age Z score (HAZ) growth curves, yielding estimated HAZ differences among exposure categories at ages 0.5 y, 1 y, 2 y, 5 y, 7 y, and 10 y. Results: At baseline, median age was 2.6 y (interquartile range, 1.4 y-3.8 y), and mean HAZ was -0.53 (standard deviation, 1.15); 10.2% of children were stunted. In multivariable analysis, children in households above the household wealth index median were 0.30 Z taller at age 5 y (P = 0.017), and children whose families owned land were 0.34 Z taller by age 10 y (P = 0.023), compared with poorer children. Mothers in the highest tertile for height had children whose HAZ was significantly higher than that of children from mothers in the lowest height tertile at all ages. Birth weight and length were positively related to linear growth throughout childhood; by age 10 y, children weighing >3500 g at birth were 0.31 Z taller than those weighing 2501 g to 3500 g at birth (P = 0.022), and children measuring >= 51 cm at birth were 0.51 Z taller than those measuring <= 48 cm (P = 0.005). Conclusions: These results suggest that socioeconomic background is a potentially modifiable predictor of linear growth during the school-aged years. Maternal height and the child's anthropometric characteristics at birth are positively associated with HAZ up to age 10 y.
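A minimal sketch of the restricted-cubic-spline growth-curve idea, assuming patsy's natural cubic spline basis cr() as a stand-in for the authors' spline implementation, on synthetic age/HAZ data:

```python
# HAZ modelled as a smooth function of age using a natural (restricted)
# cubic spline basis; data and knot placement are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
age = rng.uniform(0.1, 10, 500)
haz = -0.8 + 0.15 * np.log1p(age) + rng.normal(0, 0.5, 500)
df = pd.DataFrame({"age": age, "haz": haz})

# cr() builds a natural cubic spline basis via patsy inside the formula
fit = smf.ols("haz ~ cr(age, df=4)", data=df).fit()
grid = pd.DataFrame({"age": [0.5, 1, 2, 5, 7, 10]})
print(fit.predict(grid))                 # estimated mean HAZ at target ages
```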
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best-known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC_sum and GC_max, are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (the identity ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q is not enough to deduce that). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. This concentrates on comparing the actual (as opposed to provable worst-case) running times, as well as the influence of the choice of seeds on the output.
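The max-min optimization at the heart of GC_max/IRFC can be sketched with a Dijkstra-style propagation. This toy version uses a simple intensity-similarity affinity and omits the paper's exact affinity definition and tie-handling; it only illustrates the bottleneck (max over paths of the min affinity) computation:

```python
# Rough sketch of max-min (bottleneck) connectivity propagation on a toy
# 2-D image, the core operation behind fuzzy-connectedness segmentation.
import heapq
import numpy as np

def connectivity_map(img, seeds):
    """Max over paths of the min affinity along the path, from any seed."""
    h, w = img.shape
    conn = np.zeros((h, w))
    heap = []
    for (i, j) in seeds:
        conn[i, j] = 1.0
        heapq.heappush(heap, (-1.0, i, j))
    while heap:
        neg, i, j = heapq.heappop(heap)
        if -neg < conn[i, j]:
            continue                     # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                aff = 1.0 / (1.0 + abs(float(img[i, j]) - float(img[ni, nj])))
                cand = min(conn[i, j], aff)   # bottleneck along the path
                if cand > conn[ni, nj]:
                    conn[ni, nj] = cand
                    heapq.heappush(heap, (-cand, ni, nj))
    return conn

img = np.array([[10, 10, 10, 80], [10, 12, 80, 80],
                [10, 10, 10, 80], [10, 10, 80, 85]], dtype=float)
obj = connectivity_map(img, [(0, 0)])        # object seed
bkg = connectivity_map(img, [(0, 3)])        # background seed
print((obj > bkg).astype(int))               # RFC-style labelling
```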
Abstract:
Further advances in magnetic hyperthermia might be limited by biological constraints, such as the need for sufficiently low frequencies and low field amplitudes to inhibit harmful eddy currents inside the patient's body. These constraints incite the need to optimize the heating efficiency of the nanoparticles, referred to as the specific absorption rate (SAR). Among the several properties currently under research, one of particular importance is the transition from the linear to the non-linear regime that takes place as the field amplitude is increased, an aspect in which the magnetic anisotropy is expected to play a fundamental role. In this paper we investigate the heating properties of cobalt ferrite and maghemite nanoparticles under the influence of a 500 kHz sinusoidal magnetic field with varying amplitude, up to 134 Oe. The particles were characterized by TEM, XRD, FMR and VSM, from which the most relevant morphological, structural and magnetic properties were inferred. Both materials have similar size distributions and saturation magnetization, but strikingly different magnetic anisotropies. From magnetic hyperthermia experiments we found that, while at low fields maghemite is the best nanomaterial for hyperthermia applications, above a critical field, close to the transition from the linear to the non-linear regime, cobalt ferrite becomes more efficient. The results were also analyzed with respect to the energy conversion efficiency and compared with dynamic hysteresis simulations. Additional analysis with nickel, zinc and copper-ferrite nanoparticles of similar sizes confirmed the importance of the magnetic anisotropy and the damping factor. Furthermore, the analysis of the characterization parameters suggested core-shell nanostructures, probably due to a surface passivation process during nanoparticle synthesis. Finally, we discuss the effect of particle-particle interactions and their consequences, in particular regarding discrepancies between estimated parameters and expected theoretical predictions. [http://dx.doi.org/10.1063/1.4739533]
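For the linear-response regime mentioned above, a common reference point is Rosensweig's expression for the volumetric power dissipated by single-domain nanoparticles in an AC field. The sketch below evaluates it with purely illustrative parameter values; it is not the authors' model, and it breaks down precisely in the non-linear regime the paper focuses on:

```python
# Rosensweig's linear-response power dissipation, with illustrative values.
import numpy as np

MU0 = 4 * np.pi * 1e-7          # vacuum permeability (T*m/A)

def rosensweig_power(chi0, H0, f, tau):
    """Volumetric power (W/m^3) in the linear-response regime."""
    wt = 2 * np.pi * f * tau
    return np.pi * MU0 * chi0 * H0**2 * f * wt / (1 + wt**2)

f = 500e3                        # 500 kHz drive, as in the abstract
H0 = 134 * 1e3 / (4 * np.pi)     # 134 Oe converted to A/m
for tau in (1e-9, 1e-8, 1e-7):   # illustrative relaxation times
    print(tau, rosensweig_power(chi0=5.0, H0=H0, f=f, tau=tau))
```

By construction the dissipated power scales as H0², so departures from that scaling in measured SAR data are one signature of leaving the linear regime.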
Abstract:
We analyzed the effectiveness of linear short- and long-term variability time-domain parameters, an index of sympatho-vagal balance (SDNN/RMSSD), and entropy in differentiating fetal heart rate patterns (fHRPs) on fetal heart rate (fHR) series of 5, 3 and 2 min duration reconstructed from 46 fetal magnetocardiograms. Gestational age (GA) varied from 21 to 38 weeks. fHRPs were classified based on the fHR standard deviation. In sleep states, we observed that vagal influence increased with GA, and that entropy significantly increased (and SDNN/RMSSD decreased) with GA, demonstrating that a prevalence of vagal activity with autonomic nervous system maturation may be associated with increased sleep state complexity. In active wakefulness, we observed a significant negative (positive) correlation of short-term (long-term) variability parameters with SDNN/RMSSD. ANOVA statistics demonstrated that long-term irregularity and the standard deviation of normal-to-normal beat intervals (SDNN) best differentiated among fHRPs. Our results confirm that short- and long-term variability parameters are useful to differentiate between quiet and active states, and that entropy improves the characterization of sleep states. All measures differentiated fHRPs more effectively on very short HR series, as a result of the high temporal resolution of fMCG and of the intrinsic timescales of the events that originate the different fHRPs.
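The time-domain indices named above are straightforward to compute; here is a sketch on a synthetic NN-interval series (units and dynamics are illustrative only):

```python
# SDNN, RMSSD and their ratio computed from synthetic beat-to-beat intervals.
import numpy as np

rng = np.random.default_rng(4)
nn = 430 + np.cumsum(rng.normal(0, 2, 300)) + rng.normal(0, 5, 300)  # ms

sdnn = np.std(nn, ddof=1)                       # long-term variability
rmssd = np.sqrt(np.mean(np.diff(nn) ** 2))      # short-term (vagal) variability
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, "
      f"SDNN/RMSSD = {sdnn / rmssd:.2f}")       # sympatho-vagal balance index
```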
Abstract:
Spatial linear models have been applied in numerous fields, such as agriculture, geoscience and the environmental sciences, among many others. Modelling the spatial dependence structure, using a geostatistical approach, is an indispensable tool for estimating the parameters that define this structure. However, this estimation may be greatly affected by the presence of atypical observations in the sampled data. The purpose of this paper is to use diagnostic techniques to assess the sensitivity of the maximum-likelihood estimators, covariance functions and linear predictor to small perturbations in the data and/or in the spatial linear model assumptions. The methodology is illustrated with two real data sets. The results allowed us to conclude that the presence of atypical values in the sample data has a strong influence on thematic maps, changing the spatial dependence structure.
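A minimal sketch of the geostatistical estimation step, assuming an exponential covariance with a nugget and a constant mean (the paper's data and its specific diagnostic techniques are not reproduced here):

```python
# Maximum-likelihood fit of (sill, range, nugget) for an exponential
# covariance on synthetic spatial data; the mean is crudely profiled out
# as the sample average for brevity.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(5)
coords = rng.uniform(0, 10, (80, 2))
d = cdist(coords, coords)
true_cov = 2.0 * np.exp(-d / 3.0) + 0.3 * np.eye(80)
z = rng.multivariate_normal(np.full(80, 5.0), true_cov)

def neg_loglik(theta):
    sill, rng_par, nugget = np.exp(theta)        # keep parameters positive
    C = sill * np.exp(-d / rng_par) + nugget * np.eye(len(z))
    sign, logdet = np.linalg.slogdet(C)
    r = z - z.mean()
    return 0.5 * (logdet + r @ np.linalg.solve(C, r))

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0, 0.1]), method="Nelder-Mead")
print("sill, range, nugget =", np.exp(fit.x))
```

Refitting after deleting or perturbing single observations, and comparing the resulting parameter estimates, is the simplest form of the sensitivity analysis the paper develops.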
Abstract:
Abstract Background Decreased heart rate variability (HRV) is related to higher morbidity and mortality. In this study we evaluated the linear and nonlinear indices of HRV in stable angina patients submitted to coronary angiography. Methods We studied 77 unselected patients scheduled for elective coronary angiography, who were divided into two groups: coronary artery disease (CAD) and non-CAD groups. For analysis of the HRV indices, HRV was recorded beat by beat with the volunteers in the supine position for 40 minutes. We analyzed the linear indices in the time domain (SDNN [standard deviation of normal-to-normal intervals], NN50 [total number of adjacent RR intervals differing by more than 50 ms] and RMSSD [root mean square of successive differences]) and in the frequency domain: ultra-low frequency (ULF, ≤ 0.003 Hz), very low frequency (VLF, 0.003-0.04 Hz), low frequency (LF, 0.04-0.15 Hz), and high frequency (HF, 0.15-0.40 Hz), as well as the ratio between the LF and HF components (LF/HF). Regarding the nonlinear indices, we evaluated SD1, SD2, SD1/SD2, approximate entropy (ApEn), α1, α2, the Lyapunov exponent, the Hurst exponent, autocorrelation and the correlation dimension. The cutoff points of the variables for the predictive tests were defined by the receiver operating characteristic (ROC) curve. The area under the ROC curve was calculated by the extended trapezoidal rule, considering areas under the curve ≥ 0.650 as relevant. Results Coronary artery disease patients presented reduced values of SDNN, RMSSD, NN50, HF, SD1, SD2 and ApEn. HF ≤ 66 ms², RMSSD ≤ 23.9 ms, ApEn ≤ -0.296 and NN50 ≤ 16 presented the best discriminatory power for the presence of significant coronary obstruction. Conclusion We suggest the use of heart rate variability analysis in the linear and nonlinear domains for prognostic purposes in patients with stable angina pectoris, in view of their overall impairment.
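Of the nonlinear indices listed, SD1 and SD2 from the Poincaré plot are the simplest to reproduce; a sketch on synthetic RR intervals, using the standard rotated-axes definitions:

```python
# Poincaré-plot indices: SD1 (short-term) and SD2 (long-term) dispersion
# of successive RR intervals, on a synthetic series.
import numpy as np

rng = np.random.default_rng(6)
rr = 800 + np.cumsum(rng.normal(0, 3, 2400)) * 0.1 + rng.normal(0, 20, 2400)

x, y = rr[:-1], rr[1:]                      # Poincaré plot coordinates
sd1 = np.std((y - x) / np.sqrt(2), ddof=1)  # spread across the identity line
sd2 = np.std((y + x) / np.sqrt(2), ddof=1)  # spread along the identity line
print(f"SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms, SD1/SD2 = {sd1/sd2:.2f}")
```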
Abstract:
Dasyatis guttata has been the target of artisanal fisheries on the coast of Bahia (Northeast Brazil), mainly by "arraieira" (gillnet) and "grozeira" (bottom long-line), but no stock assessment study exists to date. Reliable indices of abundance are an important input for such an assessment. The aims of the present work were to: (1) estimate the best predictor of relative abundance (catch-per-unit-of-effort, CPUE), examining whether catch (production, kg) was related to soak time of the gear, size of the gillnet, or number of hooks, applying a generalized linear model (GLM); (2) estimate the annual CPUE (kg/hooks and kg/m) averaged by gear; and (3) assess the temporal variance of CPUE. Based on monthly sampling between January 2012 and January 2013, 222 landings by grozeira and 76 by arraieira were recorded at the two landing sites in Todos os Santos Bay, Bahia. A total of 14,550 kg (average = 44 kg/month) of D. guttata was captured. Models for both gears were highly significant (P < 0.0001). The analysis indicated that the most appropriate variables for the CPUE analysis were the size of the gillnet (P < 0.001) and the number of hooks (P < 0.0001). Soak time of the gear was not significant for either gear (P = 0.4). The high residual deviance expresses the complexity of the relations between ecosystem factors and other fishery factors affecting relative abundance, which were not considered in this study. The average CPUE over the year was 6.39 kg/100 hooks ± 8.89 for grozeira and 1.47 kg/100 m ± 1.66 for arraieira. A Kruskal-Wallis test showed an effect of month on the mean grozeira CPUE (P < 0.001), but no effect (P = 0.096) on the mean arraieira CPUE. Grozeira CPUE values were highest in December and March, and lowest from May to August.
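A hedged sketch of the GLM step on synthetic landings data; the Gamma family with a log link is an assumption here, as the abstract does not state the error structure actually used:

```python
# Catch modelled against effort (number of hooks), soak time and month
# with a log-link GLM, on invented landings data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 222
df = pd.DataFrame({
    "hooks": rng.integers(100, 600, n),
    "month": rng.integers(1, 13, n),
    "soak_h": rng.uniform(6, 24, n),
})
df["catch_kg"] = rng.gamma(2.0, 0.03 * df["hooks"])   # synthetic catches

fit = smf.glm("catch_kg ~ hooks + soak_h + C(month)", data=df,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(fit.summary().tables[1])
```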
Abstract:
Background: Few data exist on simple, robust parameters to predict image noise in cardiac computed tomography (CT). Objectives: To evaluate the value of a simple measure of subcutaneous tissue as a predictor of image noise in cardiac CT. Methods: 86 patients underwent prospective ECG-gated coronary computed tomographic angiography (CTA) and coronary calcium scoring (CAC) at 120 kV and 150 mA. Image quality was objectively measured by the image noise in the aorta in the cardiac CTA, and low noise was defined as noise < 30 HU. The chest anteroposterior diameter and lateral width, the image noise in the aorta in the non-contrast scan, and the skin-sternum (SS) thickness were evaluated as predictors of cardiac CTA noise. The association between the predictors and image noise was assessed using Pearson correlation. Results: The mean radiation dose was 3.5 ± 1.5 mSv. The mean image noise in the cardiac CTA was 36.3 ± 8.5 HU, and the mean image noise in the non-contrast scan was 17.7 ± 4.4 HU. All predictors were independently associated with cardiac CTA noise. The best predictors were SS thickness, with a correlation of 0.70 (p < 0.001), and noise in the non-contrast images, with a correlation of 0.73 (p < 0.001). When evaluating the ability to predict low image noise, the areas under the ROC curve for the non-contrast noise and for the SS thickness were 0.837 and 0.864, respectively. Conclusion: Both SS thickness and CAC scan noise are simple, accurate predictors of cardiac CTA image noise. These parameters can be incorporated into standard CT protocols to adequately adjust radiation exposure.
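Both analyses named in the conclusion are short to reproduce; a sketch on synthetic values (the regression slope and spreads below are invented):

```python
# Pearson correlation of skin-sternum (SS) thickness with CTA noise, plus
# an ROC AUC for predicting low noise (< 30 HU), on synthetic data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
ss = rng.normal(20, 5, 86)                       # SS thickness, mm (synthetic)
noise = 15 + 1.1 * ss + rng.normal(0, 4, 86)     # CTA noise, HU (synthetic)

r, p = pearsonr(ss, noise)
low_noise = (noise < 30).astype(int)
auc = roc_auc_score(low_noise, -ss)              # thinner SS -> lower noise
print(f"r = {r:.2f} (p = {p:.1e}), AUC = {auc:.3f}")
```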
Abstract:
Primary stability of stems in cementless total hip replacement is recognized to play a critical role in long-term survival and thus in the success of the overall surgical procedure. In the literature, several studies have addressed this important issue, exploring different approaches to evaluate the extent of stability achieved during surgery. Some of these are in-vitro protocols, while other tools are conceived for the post-operative assessment of prosthesis migration relative to the host bone. The in-vitro protocols reported in the literature are not exportable to the operating room, although most of them show good overall accuracy. RSA, EBRA and radiographic analysis are currently used to check the healing process of the implanted femur at different follow-ups, evaluating implant migration and the occurrence of bone resorption or osteolysis at the interface. These methods are important for follow-up and clinical studies but do not assist the surgeon during implantation. At the time I started my Ph.D. study in Bioengineering, only one study had been undertaken to measure stability intra-operatively, and no follow-up had been presented describing further results obtained with that device. In this scenario, it was believed that an instrument that could measure intra-operatively the stability achieved by an implanted stem would considerably improve the rate of success. Such an instrument should be accurate and should give the surgeon, during implantation, a quick answer concerning the stability of the implanted stem. With this aim, an intra-operative device was designed, developed and validated. The device is meant to help the surgeon decide how much to press-fit the implant. It essentially consists of a torsional load cell, able to measure the extent of torque applied by the surgeon to test primary stability; an angular sensor that measures the relative angular displacement between stem and femur; a rigid connector that enables attaching the device to the stem; and all the electronics for signal conditioning. The device was successfully validated in-vitro, showing good overall accuracy in discriminating stable from unstable implants. Repeatability tests showed that the device was reliable. A calibration procedure was then performed in order to convert the angular readout into a linear displacement measurement, which is clinically relevant information that is simple for the surgeon to read in real time. The second study reported in my thesis concerns the evaluation of the possibility of obtaining predictive information regarding the primary stability of a cementless stem by measuring the micromotion of the last rasp used by the surgeon to prepare the femoral canal. This information would be very useful to the surgeon, who could check, prior to implantation, whether the planned stem size can achieve a sufficient degree of primary stability under optimal press-fitting conditions. An intra-operative tool was developed to this aim. It was derived from the previously validated device, adapted for this specific purpose. The device is able to measure the relative micromotion between the femur and the rasp when a torsional load is applied. An in-vitro protocol was developed and validated on both composite and cadaveric specimens. High correlation was observed between one of the parameters extracted from the acquisitions made on the rasp and the stability of the corresponding stem when optimally press-fitted by the surgeon.
After tuning the protocol in-vitro in a closed loop, verification was performed on two hip patients, confirming the results obtained in-vitro and highlighting the independence of the rasp indicator from the bone quality, anatomy and preservation conditions of the tested specimens, and from the sharpness of the rasp blades. The third study relates to an approach that has recently been explored in the orthopaedic community but was already in use in other scientific fields: vibration analysis. This method has been successfully used to investigate the mechanical properties of bone, and its application to evaluating the extent of fixation of dental implants has been explored, even if its validity in that field is still under discussion. Several studies have recently been published on the stability assessment of hip implants by vibration analysis. The aim of the reported study was to develop and validate a prototype device based on the vibration analysis technique to measure the extent of implant stability intra-operatively. The expected advantages of a vibration-based device are easier clinical use, smaller dimensions and lower overall cost with respect to devices based on direct micromotion measurement. The prototype developed consists of a piezoelectric exciter connected to the stem and an accelerometer attached to the femur. Preliminary tests were performed on four composite femurs implanted with a conventional stem. The results showed that the input signal was repeatable and the output could be recorded accurately. The fourth study concerns the application of the vibration-analysis device to several cases, considering both composite and cadaveric specimens. Different degrees of bone quality were tested, as well as different femur anatomies, and several levels of press-fitting were considered. The aim of the study was to verify whether it is possible to discriminate between stable and quasi-stable implants, because this is the most challenging assessment for the surgeon in the operating room. Moreover, it was possible to validate the measurement protocol by comparing the results of the acquisitions made with the vibration-based tool against two reference measurements made by means of a validated technique and a validated device. The results highlighted that the parameter most sensitive to stability is the shift in resonance frequency of the stem-bone system, which showed high correlation with residual micromotion on all the tested specimens. Thus, it seems possible to discriminate between many levels of stability, from the grossly loosened implant, through quasi-stable implants, to the definitely stable one. Finally, an additional study was performed on a different type of hip prosthesis, which has recently gained great interest and has become fairly popular in some countries in the last few years: the hip resurfacing prosthesis. The study was motivated by the following rationale: although bone-prosthesis micromotion is known to influence the stability of total hip replacements, its effect on the outcome of resurfacing implants had been investigated only clinically, not in-vitro. The work was therefore aimed at verifying whether one of the intra-operative devices just validated could be applied to measure micromotion in resurfacing implants.
To that end, a preliminary study was performed to evaluate the extent of migration and the typical elastic movement of an epiphyseal prosthesis. An in-vitro procedure was developed to measure the micromotion of resurfacing implants. This included a set of in-vitro loading scenarios covering the range of directions of the hip resultant force in the most typical motor tasks. The applicability of the protocol was assessed on two different commercial designs and on different head sizes. The repeatability and reproducibility were excellent (comparable to the best previously published protocols for standard cemented hip stems). Results showed that the procedure is accurate enough to detect micromotions of the order of a few microns. The proposed protocol was thus completely validated. The results of the study demonstrated that applying an intra-operative device to resurfacing implants is not necessary, as the typical micromovement associated with this type of prosthesis can be considered negligible and thus not critical for the stabilization process. In conclusion, four intra-operative tools were developed and fully validated during these three years of research activity. Use in the clinical setting was tested for one of the devices, which could be used right now by the surgeon to evaluate the degree of stability achieved through the press-fitting procedure. The tool adapted for use on the rasp was a good predictor of the stability of the stem, and could thus help the surgeon check whether the pre-operative planning was correct. The device based on the vibration technique showed great accuracy and small dimensions, and thus has great potential to become an instrument appreciated by surgeons; it still needs clinical evaluation and must be industrialized as well. The in-vitro tool worked very well and can be applied for assessing resurfacing implants pre-clinically.
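As an illustration of the vibration-analysis idea described above, the resonance frequency whose shift the fourth study tracks can be estimated from an accelerometer trace with an FFT. Everything below (sampling rate, damping, resonance value) is synthetic, not data from the thesis:

```python
# Estimate the stem-bone resonance frequency from a (synthetic) accelerometer
# trace via the FFT, so a shift between press-fit steps could be tracked.
import numpy as np

fs = 20000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
f_res = 2400.0                               # hypothetical resonance, Hz
signal = np.exp(-20 * t) * np.sin(2 * np.pi * f_res * t) \
         + 0.05 * np.random.default_rng(9).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"estimated resonance: {freqs[np.argmax(spectrum)]:.0f} Hz")
```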
Abstract:
The relation between intercepted light and orchard productivity has been considered linear, although this dependence seems to be governed more by the planting system than by light intensity. At the whole-plant level, an increase in irradiance does not always translate into higher productivity. One of the reasons can be the plant's intrinsic inefficiency in using energy. Generally, in full light only 5-10% of the total incoming energy is allocated to net photosynthesis, so preserving or improving this efficiency becomes pivotal for scientists and fruit growers. Even though a considerable amount of energy is reflected or transmitted, plants cannot avoid absorbing photons in excess. Chlorophyll over-excitation promotes the production of reactive species, increasing the risk of photoinhibition. The dangerous consequences of photoinhibition forced plants to evolve complex, multilevel machinery able to dissipate excess energy as heat (non-photochemical quenching), move electrons (the water-water cycle, cyclic transport around PSI, the glutathione-ascorbate cycle and photorespiration) and scavenge the reactive species generated. The price plants must pay for this equipment is the use of CO2 and reducing power, with a consequent decrease in photosynthetic efficiency, both because some photons are not used for carboxylation and because an effective loss of CO2 and reducing power occurs. Net photosynthesis increases with light up to the saturation point; additional PPFD does not improve carboxylation but raises the contribution of the alternative energy-dissipation pathways, as well as ROS production and photoinhibition risks. The broad photo-protective apparatus, however, is not always able to cope with the excess incoming energy, and photodamage occurs. Any event that increases the photon pressure and/or decreases the efficiency of the described photo-protective mechanisms (e.g. thermal stress, water or nutritional deficiency) can exacerbate photoinhibition. In nature, only a small fraction of damaged photosystems is typically found, because of the effective, efficient, yet energy-consuming recovery system. Since damaged PSII is quickly repaired at an energy expense, it would be interesting to investigate how much PSII recovery costs in terms of plant productivity. This Ph.D. dissertation aims to improve knowledge of the several strategies used to manage incoming energy, and of the implications of excess light for photodamage, in peach. The thesis is organized in three scientific units. In the first section, a new rapid, non-intrusive, whole-tissue and universal technique for functional PSII determination was implemented and validated on different kinds of plants: C3 and C4 species, woody and herbaceous plants, wild type and a chlorophyll b-less mutant, and monocot and dicot plants. In the second unit, using a singular experimental orchard named the "Asymmetric orchard", the relation between light environment and photosynthetic performance, water use and photoinhibition was investigated in peach at the whole-plant level; furthermore, the effect of variation in photon pressure on energy management was considered at the single-leaf level. In the third section, the quenching analysis method suggested by Kornyeyev and Hendrickson (2007) was validated on peach. It was then applied in the field, where the influence of moderate light and water reduction on peach photosynthetic performance, water requirements, energy management and photoinhibition was studied.
Using solar energy as the fuel for life is intrinsically risky for plants, given the constant high risk of photodamage. This dissertation seeks to highlight the complex relation between plants, peach in particular, and light, analysing the principal strategies plants have developed to manage incoming light so as to derive the maximum possible benefit while minimizing the risks. First, the new method proposed for functional PSII determination, based on P700 redox kinetics, appears to be a valid, non-intrusive, universal and field-applicable technique, not least because it probes the whole leaf tissue in depth rather than only the first leaf layers, as fluorescence does. The fluorescence Fv/Fm parameter gives a good estimate of functional PSII, but only when data obtained from the adaxial and abaxial leaf surfaces are averaged. In addition to this method, the energy quenching analysis proposed by Kornyeyev and Hendrickson (2007), combined with the photosynthesis model proposed by von Caemmerer (2000), is a powerful tool for analysing, even in the field, the relation between the plant and environmental factors such as water, temperature and, above all, light. The "Asymmetric" training system is a good way to study light energy, photosynthetic performance and water-use relations in the field. At the whole-plant level, net carboxylation increases with PPFD up to a saturation point. Excess light, rather than improving photosynthesis, may intensify water and thermal stress, leading to stomatal limitation. Furthermore, too much light does not promote an improvement in net carboxylation but rather PSII damage; in the most light-exposed plants, about 50-60% of the total PSII is inactivated. At the single-leaf level, net carboxylation increases up to the saturation point (1000-1200 μmol m⁻² s⁻¹), and excess light is dissipated by non-photochemical quenching and non-net-carboxylative transports. The latter follow a pattern quite similar to the Pn/PPFD curve, reaching saturation at almost the same photon flux density. At middle-low irradiance, NPQ seems to be lumen-pH limited, because the incoming photon pressure is not enough to generate the optimum lumen pH for full activation of violaxanthin de-epoxidase (VDE). Peach leaves cope with excess light by increasing the non-net-carboxylative transports. As PPFD rises, the xanthophyll cycle is increasingly activated and the rate of non-net-carboxylative transports is reduced. Some of these alternative transports, such as the water-water cycle, cyclic transport around PSI and the glutathione-ascorbate cycle, are able to generate additional H+ in the lumen to support VDE activation when light is limiting. Moreover, the alternative transports seem to act as an important dissipative route when high temperature and sub-optimal conductance heighten photoinhibition risks. In peach, a moderate reduction of water and light does not decrease net carboxylation; by diminishing the incoming light and the environmental evapo-transpirative demand, stomatal conductance decreases, improving water-use efficiency. Therefore, by lowering light intensity to levels that are still non-limiting, water could be saved without compromising net photosynthesis. The quenching analysis is able to partition absorbed energy among the several utilization, photoprotection and photo-oxidation pathways. When recovery is permitted, only a few PSII remain unrepaired, although more net PSII damage is recorded in plants kept in full light.
Even in this experiment, in over-saturating light the main dissipation pathway is non-photochemical quenching; at middle-low irradiance it seems to be pH limited, and other transports, such as photorespiration and the alternative transports, are used to support photoprotection and to contribute to creating the optimal trans-thylakoidal ΔpH for violaxanthin de-epoxidase. These alternative pathways become the main quenching mechanisms in very low-light environments. Another aspect pointed out by this study is the role of NPQ as a dissipative pathway when conductance becomes severely limiting. The observation that in nature only a small amount of damaged PSII is seen indicates the presence of an effective and efficient recovery mechanism that masks the real photodamage occurring during the day. At the single-leaf level, when repair is not allowed, leaves in full light are twofold more photoinhibited than shaded ones. Therefore, light in excess of the photosynthetic optimum does not promote net carboxylation but increases water loss and PSII damage. The greater the photoinhibition, the more photosystems must be repaired, and consequently the more energy and dry matter must be allocated to this essential activity. Since above the saturation point net photosynthesis is constant while photoinhibition increases, it would be interesting to investigate what photodamage costs in terms of tree productivity. Another aspect of pivotal importance to be explored further is the combined influence of light and other environmental parameters, such as water status, temperature and nutrition, on peach light, water and photosynthate management.
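The saturating Pn/PPFD behaviour discussed throughout can be made concrete by fitting a light-response curve. The rectangular hyperbola below is a generic textbook form, not the von Caemmerer (2000) model used in the thesis, and the data are synthetic:

```python
# Fit a saturating light-response curve (rectangular hyperbola with dark
# respiration) to synthetic Pn/PPFD data to locate the saturation region.
import numpy as np
from scipy.optimize import curve_fit

def light_response(ppfd, pn_max, alpha, rd):
    """Rectangular hyperbola: Pn rises with PPFD and saturates at Pn_max."""
    return pn_max * alpha * ppfd / (pn_max + alpha * ppfd) - rd

ppfd = np.linspace(0, 2000, 30)
rng = np.random.default_rng(10)
pn = light_response(ppfd, 22.0, 0.05, 1.5) + rng.normal(0, 0.5, ppfd.size)

(pn_max, alpha, rd), _ = curve_fit(light_response, ppfd, pn,
                                   p0=[20, 0.04, 1.0])
print(f"Pn_max = {pn_max:.1f}, apparent quantum yield = {alpha:.3f}, "
      f"Rd = {rd:.1f}")
```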