942 results for Close to Convex Function


Relevance:

100.00%

Publisher:

Abstract:

Field-Programmable Gate Arrays (FPGAs) are becoming increasingly important in embedded and high-performance computing systems. They allow performance levels close to the ones obtained with Application-Specific Integrated Circuits, while still keeping design and implementation flexibility. However, to program FPGAs efficiently one needs the expertise of hardware developers in order to master hardware description languages (HDLs) such as VHDL or Verilog. Attempts to furnish a high-level compilation flow (e.g., from C programs) still have to address open issues before broadly efficient results can be obtained. Bearing in mind the resources available on an FPGA, LALP (Language for Aggressive Loop Pipelining), a novel language for programming FPGA-based accelerators, has been developed together with its compilation framework, including mapping capabilities. The main ideas behind LALP are to provide a higher abstraction level than HDLs, to exploit the intrinsic parallelism of hardware resources, and to allow the programmer to control execution stages whenever the compiler techniques are unable to generate efficient implementations. These features are particularly useful for implementing loop pipelining, a well-regarded technique used to accelerate computations in several application domains. This paper describes LALP and shows how it can be used to achieve high-performance computing solutions.
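The abstract does not show LALP syntax, so the sketch below is not LALP code; it is a minimal Python illustration, under simple assumptions (a hypothetical 8-stage loop body and a new iteration issued every `initiation_interval` cycles), of the cycle-count arithmetic that makes loop pipelining attractive on an FPGA.

```python
# Minimal sketch (not LALP syntax): the cycle-count model behind loop pipelining.
# `depth` is the number of pipeline stages of the loop body and
# `initiation_interval` the number of cycles between successive iterations.

def sequential_cycles(iterations: int, depth: int) -> int:
    """Cycles if each iteration completes before the next one starts."""
    return iterations * depth

def pipelined_cycles(iterations: int, depth: int, initiation_interval: int) -> int:
    """Cycles if iterations overlap: fill the pipeline once, then retire one
    iteration every `initiation_interval` cycles."""
    if iterations == 0:
        return 0
    return depth + (iterations - 1) * initiation_interval

if __name__ == "__main__":
    n, depth, ii = 1024, 8, 1  # hypothetical loop: 1024 iterations, 8-stage body, II = 1
    seq = sequential_cycles(n, depth)        # 8192 cycles
    pipe = pipelined_cycles(n, depth, ii)    # 1031 cycles
    print("sequential:", seq, "pipelined:", pipe, "speed-up:", round(seq / pipe, 2))
```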

Relevance:

100.00%

Publisher:

Abstract:

Background: Some breeds of sheep are highly seasonal in terms of reproductive capability, and these changes are regulated by photoperiod and melatonin secretion. These changes affect the reproductive performance of rams, impairing semen quality and modifying hormonal profiles. The antioxidant defence systems also seem to be modulated by melatonin secretion and show seasonal variations. The aim of this study was to investigate the presence of melatonin and testosterone in ram seminal plasma and their variations between the breeding and non-breeding seasons. In addition, we analyzed the possible correlations between these hormones and the activity of the antioxidant enzyme defence system. Methods: Seminal plasma from nine Rasa Aragonesa rams was collected for one year, and the levels of melatonin, testosterone, superoxide dismutase (SOD), glutathione reductase (GRD), glutathione peroxidase (GPX) and catalase (CAT) were measured. Results: All samples presented measurable quantities of hormones and antioxidant enzymes. Both hormones showed monthly variations, with a decrease after the winter solstice and a rise after the summer solstice that reached maximum levels in October-November, and a marked seasonal variation (P < 0.01) with higher levels in the breeding season. The yearly pattern of GRD and catalase was close to that of melatonin, and GRD showed a significant seasonal variation (P < 0.01) with higher activity during the breeding season. Linear regression analysis between the studied hormones and the antioxidant enzymes showed a significant correlation between melatonin and testosterone, GRD, SOD and catalase. Conclusions: These results show the presence of melatonin and testosterone in ram seminal plasma, show that both hormones have seasonal variations, and support the idea that seasonal variations of fertility in the ram involve an interplay between melatonin and the antioxidant defence system.
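As a rough illustration of the "linear regression analysis between the studied hormones and antioxidant enzymes" mentioned above, here is a minimal Python sketch; the monthly values, units and variable names are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of a hormone-enzyme correlation/regression analysis.
# The arrays below are hypothetical monthly means, NOT the study's data.
import numpy as np
from scipy import stats

melatonin = np.array([12.0, 10.5, 8.0, 6.5, 5.0, 4.5, 6.0, 9.0, 14.0, 18.0, 19.5, 15.0])  # placeholder values
grd_activity = np.array([8.0, 7.5, 6.0, 5.5, 4.8, 4.5, 5.2, 6.5, 8.5, 10.0, 10.5, 9.0])   # placeholder values

# Pearson correlation and ordinary least-squares regression.
r, p_value = stats.pearsonr(melatonin, grd_activity)
fit = stats.linregress(melatonin, grd_activity)

print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")
print(f"GRD ~ {fit.slope:.2f} * melatonin + {fit.intercept:.2f}")
```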

Relevance:

100.00%

Publisher:

Abstract:

The timing of larval release may greatly affect the survivorship and distribution of pelagic stages and reveal important aspects of life-history tactics in marine invertebrates. Endogenous rhythms of breeding individuals and populations are valuable indicators of selected strategies because they are free of the neutral effect of stochastic environmental variation. The high-shore intertidal barnacle Chthamalus bisinuatus exhibits endogenous tidal and tidal-amplitude rhythms such that larval release would most likely occur during fortnightly neap periods at high tide. Such timing would minimize larval loss due to stranding and promote larval retention close to shore. This fully explains the temporal patterns in populations facing the open sea and inhabiting eutrophic areas. However, rhythmic activity breaks down to an irregular pattern in a population within the São Sebastião Channel subjected to large variation of food supply around a mesotrophic average. Peaks of chl a concentration precede release events by 6 d, suggesting resource limitation for egg production within the channel. Also, extreme daily temperatures imposing a mortality risk correlate with the release rate just 1 d ahead, suggesting a terminal reproductive strategy. Oceanographic conditions apparently dictate whether barnacles follow a rhythmic trend of larval release supported by endogenous timing or, alternatively, respond to the stochastic variation of key environmental factors, resulting in an erratic temporal pattern.
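A minimal Python sketch of the kind of lagged-correlation check implied by "peaks of chl a concentration precede release events by 6 d"; the daily series below are synthetic placeholders, not the study's data.

```python
# Minimal sketch of a lagged-correlation check between a chl a proxy and
# larval release rate. The daily series are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_days, true_lag = 120, 6
chla = rng.gamma(shape=2.0, scale=1.0, size=n_days)              # daily chl a proxy
release = np.roll(chla, true_lag) + rng.normal(0, 0.3, n_days)   # release tracks chl a 6 d later

def lagged_r(x, y, lag):
    """Pearson r between x shifted `lag` days earlier and y."""
    return np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1]

lags = range(0, 15)
best = max(lags, key=lambda L: lagged_r(chla, release, L))
print("best lag (days):", best)  # expected to recover ~6 with these synthetic series
```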

Relevance:

100.00%

Publisher:

Abstract:

The thesis consists of three independent parts. Part I: Polynomial amoebas. We study the amoeba of a polynomial, as defined by Gelfand, Kapranov and Zelevinsky. A central role in the treatment is played by a certain convex function which is linear in each complement component of the amoeba, which we call the Ronkin function. This function is used in two different ways. First, we use it to construct a polyhedral complex, which we call a spine, approximating the amoeba. Second, the Monge-Ampère measure of the Ronkin function has interesting properties which we explore. This measure can be used to derive an upper bound on the area of an amoeba in two dimensions. We also obtain results on the number of complement components of an amoeba, and consider possible extensions of the theory to varieties of codimension higher than 1. Part II: Differential equations in the complex plane. We consider polynomials in one complex variable arising as eigenfunctions of certain differential operators, and obtain results on the distribution of their zeros. We show that in the limit when the degree of the polynomial approaches infinity, its zeros are distributed according to a certain probability measure. This measure has its support on the union of finitely many curve segments, and can be characterized by a simple condition on its Cauchy transform. Part III: Radon transforms and tomography. This part is concerned with different weighted Radon transforms in two dimensions, in particular the problem of inverting such transforms. We obtain stability results for this inverse problem for rather general classes of weights, including weights of attenuation type with data acquisition limited to a 180-degree range of angles. We also derive an inversion formula for the exponential Radon transform, with the same restriction on the angle.
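For reference, the standard definitions behind the objects named above (the amoeba and the Ronkin function); this is a sketch in commonly used notation, which may differ from the thesis itself.

```latex
% Amoeba of a Laurent polynomial f in n variables and its Ronkin function
% (standard definitions; notation may differ from the thesis):
\mathcal{A}_f = \operatorname{Log}\bigl(f^{-1}(0)\bigr),
\qquad
\operatorname{Log}(z_1,\dots,z_n) = (\log|z_1|,\dots,\log|z_n|).

% The Ronkin function: convex on R^n and affine linear on each connected
% component of the amoeba complement, which is what allows the spine
% (a polyhedral complex) to be built from its linear pieces.
N_f(x) = \frac{1}{(2\pi i)^n}\int_{\operatorname{Log}^{-1}(x)}
          \log\bigl|f(z_1,\dots,z_n)\bigr|\,
          \frac{dz_1}{z_1}\cdots\frac{dz_n}{z_n}.
```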

Relevance:

100.00%

Publisher:

Abstract:

Programmed cell death (PCD) is a widespread phenomenon among multicellular organisms. Without the deletion of cells no longer needed, the organism will not be able to develop in a predicted way. It is now believed that all cells have the capacity to self-destruct and that the survival of the cells depends on the repression of this suicidal programme. PCD has turned out to show similarities in many different species, and there are strong indications that the mechanisms running the programme might, at least in some parts, be evolutionarily conserved. PCD is a generic term for different programmes of cell destruction, such as apoptosis and autophagic PCD. An important tool to determine whether a cell is undergoing PCD is the transmission electron microscope. The aims of my study were to find out if, and in what way, the suspensor and endosperm in Vicia faba (broad bean), which are short-lived structures, undergo PCD. The endosperm degradation precedes the suspensor cell death, and the two differ to some extent ultrastructurally. The cell death occurs in both tissues about 13-14 days after pollination, when the embryo proper is mature enough to support itself. It was found that both tissues are committed to autophagic PCD, a cell death characterized by conspicuous formation of autophagic vacuoles. It was shown by histochemical staining that acid phosphatases are accumulated in these vacuoles but are also present in the cytoplasm. These vacuoles are similar to autophagic vacuoles formed in rat liver cells, indicating that autophagy is a widespread phenomenon. DNA fragmentation is the first visible sign of PCD in both tissues, as demonstrated by a labelling technique (TUNEL). In the endosperm nuclei the heterochromatin subsequently appears in the form of a network, while in the suspensor it is more conspicuous, with heterochromatin that forms large electron-dense aggregates located close to the nuclear envelope. In the suspensor, the plastids develop into chromoplasts with lycopene crystals at the same time as or shortly after DNA fragmentation. This is probably due to the fact that the suspensor plastids function as hormone-producing organelles and supply the embryo proper with indispensable growth factors. Later the embryo will be able to produce its own growth factors, and the synthesis of these, in particular gibberellins, might be suppressed in the suspensor. The precursors can then be used for the synthesis of lycopene instead. Both the suspensor and the endosperm go through autophagic PCD, but the process differs in some respects. This is probably due to the different functions of the two tissues, and to the fact that the signals that trigger the process are presumably different. The embryo proper is probably the source of the death signal affecting the suspensor. The endosperm, which has a different origin and function, might be controlling the death signal within its own cells. The death might in this case be related to the age of the cell.

Relevance:

100.00%

Publisher:

Abstract:

Labile Fe(II) distributions were investigated in the Sub-Tropical South Atlantic and the Southern Ocean during the BONUS-GoodHope cruise from 34° to 57° S (February-March 2008). Concentrations ranged from below the detection limit (0.009 nM) to values as high as 0.125 nM. In the surface mixed layer, labile Fe(II) concentrations were always higher than the detection limit, with values higher than 0.060 nM south of 47° S, representing between 39% and 63% of dissolved Fe (DFe). Biological production of Fe(II) was indicated. At intermediate depth, local maxima were observed, with the highest values in the Sub-Tropical domain at around 200 m, representing more than 70% of DFe. Remineralization processes were likely responsible for those sub-surface maxima. Below 1500 m, concentrations were close to or below the detection limit, except at two stations (in the vicinity of the Agulhas Ridge and in the north of the Weddell Sea Gyre) where values remained as high as ~0.030-0.050 nM. Hydrothermal or sediment inputs may provide Fe(II) to these deep waters. Fe(II) half-life times (t1/2) at 4 °C were measured in the upper and deep waters and ranged from 2.9 to 11.3 min and from 10.0 to 72.3 min, respectively. Measured values compared quite well with theoretical values from two published models in the upper waters, but not in the deep waters. This may be due to the lack of knowledge of some parameters in the models and/or to organic complexation of Fe(II) that impacts its oxidation rates. This study helped to considerably increase the Fe(II) data set in the ocean and to better understand the Fe redox cycle.
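The half-life values quoted above follow from the pseudo-first-order treatment commonly used for Fe(II) oxidation in seawater; the relation below is that standard textbook form (a sketch, with k' denoting the apparent first-order rate constant at the in-situ O2, pH and temperature), not a formula quoted from the study.

```latex
% Pseudo-first-order treatment behind the quoted Fe(II) half-life times
% (k' is the apparent first-order oxidation rate constant):
\frac{d[\mathrm{Fe(II)}]}{dt} = -k'\,[\mathrm{Fe(II)}]
\quad\Longrightarrow\quad
[\mathrm{Fe(II)}](t) = [\mathrm{Fe(II)}]_0\, e^{-k' t},
\qquad
t_{1/2} = \frac{\ln 2}{k'}.
```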

Relevance:

100.00%

Publisher:

Abstract:

To determine central and peripheral hemodynamic responses to upright leg cycling exercise, nine physically active men underwent measurements of arterial blood pressure and gases, as well as femoral and subclavian vein blood flows and gases, during incremental exercise to exhaustion (Wmax). Cardiac output (CO) and leg blood flow (BF) increased in parallel with exercise intensity. In contrast, arm BF remained at 0.8 l/min during submaximal exercise, increasing to 1.2 ± 0.2 l/min at maximal exercise (P < 0.05), when arm O2 extraction reached 73 ± 3%. The leg received a greater percentage of the CO with increasing exercise intensity, reaching a value close to 70% at 64% of Wmax, which was maintained until exhaustion. The percentage of CO perfusing the trunk decreased with exercise intensity to 21% at Wmax, i.e., to approximately 5.5 l/min. For a given local VO2, leg vascular conductance (VC) was five- to sixfold higher than arm VC, despite marked hemoglobin deoxygenation in the subclavian vein. At peak exercise, arm VC was not significantly different from that at rest. Leg VO2 represented approximately 84% of the whole-body VO2 at intensities ranging from 38 to 100% of Wmax. Arm VO2 contributed between 7 and 10% to the whole-body VO2. From 20 to 100% of Wmax, the trunk VO2 (including the gluteus muscles) represented between 14 and 15% of the whole-body VO2. In summary, vasoconstrictor signals efficiently oppose the vasodilatory metabolites in the arms, suggesting that during whole-body exercise in the upright position blood flow is differentially regulated in the upper and lower extremities.
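The regional VO2 shares reported above are, in studies of this kind, obtained from the Fick principle (regional VO2 = blood flow × arteriovenous O2 content difference); the Python sketch below illustrates that bookkeeping with hypothetical numbers, not the study's measurements.

```python
# Minimal sketch of the Fick-principle bookkeeping behind regional VO2 shares.
# All numbers are hypothetical placeholders, not the study's measurements.

def vo2(blood_flow_l_min: float, cao2_ml_l: float, cvo2_ml_l: float) -> float:
    """Regional O2 uptake (ml/min) = blood flow x arteriovenous O2 content difference."""
    return blood_flow_l_min * (cao2_ml_l - cvo2_ml_l)

# Hypothetical maximal-exercise values (two legs, two arms).
cao2 = 200.0                      # arterial O2 content, ml O2 per litre of blood
leg_bf, leg_cvo2 = 2 * 9.0, 30.0  # total leg blood flow (l/min) and femoral venous O2 content
arm_bf, arm_cvo2 = 2 * 1.2, 55.0  # total arm blood flow (l/min) and subclavian venous O2 content
whole_body_vo2 = 3600.0           # ml/min, hypothetical

leg_vo2 = vo2(leg_bf, cao2, leg_cvo2)
arm_vo2 = vo2(arm_bf, cao2, arm_cvo2)
print(f"leg VO2 = {leg_vo2:.0f} ml/min ({100 * leg_vo2 / whole_body_vo2:.0f}% of whole body)")
print(f"arm VO2 = {arm_vo2:.0f} ml/min ({100 * arm_vo2 / whole_body_vo2:.0f}% of whole body)")
```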

Relevance:

100.00%

Publisher:

Abstract:

Hypoxia-induced hyperventilation is critical to improve blood oxygenation, particularly when the arterial PO2 lies in the steep region of the O2 dissociation curve of hemoglobin (ODC). Hyperventilation increases alveolar PO2 and, by increasing pH, left-shifts the ODC, increasing arterial saturation (SaO2) by 6 to 12 percentage units. Pulmonary gas exchange (PGE) is efficient at rest and, hence, the alveolar-arterial PO2 difference (PAO2 - PaO2) remains close to 0-5 mm Hg. The (PAO2 - PaO2) increases with exercise duration and intensity and with the level of hypoxia. During exercise in hypoxia, diffusion limitation explains most of the additional (PAO2 - PaO2). With altitude acclimatization, the exercise (PAO2 - PaO2) is reduced, but does not reach the low values observed in high-altitude natives, who possess an exceptionally high DLO2. Convective O2 transport depends on arterial O2 content (CaO2), cardiac output (Q), and muscle blood flow (LBF). During whole-body exercise in severe acute hypoxia and in chronic hypoxia, peak Q and LBF are blunted, contributing to the limitation of maximal oxygen uptake (VO2max). During small-muscle exercise in hypoxia, PGE is less perturbed, CaO2 is higher, and peak Q and LBF achieve values similar to normoxia. Although the PO2 gradient driving O2 diffusion into the muscles is reduced in hypoxia, similar levels of muscle O2 diffusion are observed during small-mass exercise in chronic hypoxia and in normoxia, indicating that humans have a functional reserve in muscle O2 diffusing capacity, which is likely utilized during exercise in hypoxia. In summary, hypoxia reduces VO2max because it limits O2 diffusion in the lung.
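For reference, the alveolar-arterial difference discussed above together with the standard alveolar gas equation that links hyperventilation (a lower PaCO2) to a higher alveolar PO2; these are textbook forms, not formulas quoted from the study.

```latex
% Standard alveolar gas equation and alveolar-arterial PO2 difference.
% Symbols: PB barometric pressure, PH2O water vapour pressure (~47 mmHg at 37 C),
% FIO2 inspired O2 fraction, R the respiratory exchange ratio.
P_{A\mathrm{O}_2} = F_{I\mathrm{O}_2}\,(P_B - P_{\mathrm{H_2O}}) - \frac{P_{a\mathrm{CO}_2}}{R},
\qquad
\Delta_{A\text{-}a} = P_{A\mathrm{O}_2} - P_{a\mathrm{O}_2}.
```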

Relevance:

100.00%

Publisher:

Abstract:

Advances in stem cell biology have challenged the notion that infarcted myocardium is irreparable. The pluripotent ability of stem cells to differentiate into specialized cell lines began to garner intense interest within cardiology when it was shown in animal models that intramyocardial injection of bone marrow stem cells (MSCs), or the mobilization of bone marrow stem cells with spontaneous homing to myocardium, could improve cardiac function and survival after induced myocardial infarction (MI) [1, 2]. Furthermore, stem cells have been identified in the myocardium of animal hearts [3, 4], and intense research is under way in an attempt to clarify their potential clinical application for patients with myocardial infarction. To date, in order to identify the best one, different kinds of stem cells have been studied, derived from embryonic or adult tissues (i.e. bone marrow, heart, peripheral blood, etc.). Currently, three different biologic therapies for cardiovascular diseases are under investigation: cell therapy, gene therapy and the more recent "tissue-engineering" therapy. During my Ph.D. course, I first focused my study on the isolation and characterization of Cardiac Stem Cells (CSCs) in wild-type and transgenic mice, and for this purpose I attended, for more than one year, the Cardiovascular Research Institute of the New York Medical College, in Valhalla (NY, USA), under the direction of Doctor Piero Anversa. During this period I learnt different immunohistochemical and biomolecular techniques useful for investigating the regenerative potential of stem cells. Then, during the next two years, I studied the new approach of cardiac regenerative medicine based on tissue engineering, in order to investigate a new strategy to regenerate the infarcted myocardium. Tissue engineering is a promising approach that makes possible the creation of new functional tissue to replace lost or failing tissue. This new discipline combines isolated functioning cells and biodegradable 3-dimensional (3D) polymeric scaffolds. The scaffold temporarily provides the biomechanical support for the cells until they produce their own extracellular matrix. Because tissue-engineering constructs contain living cells, they may have the potential for growth and cellular self-repair and remodeling. In the present study, I examined whether the tissue-engineering strategy with hyaluronan-based scaffolds would result in the formation of alternative cardiac tissue that could replace the scar and improve cardiac function after MI in syngeneic heterotopic rat hearts. Rat hearts were explanted, subjected to left descending coronary artery occlusion, and then grafted into the abdomen (aorta-aorta anastomosis) of a receiving syngeneic rat. After 2 weeks, a pouch of 3 mm2 was made in the thickness of the ventricular wall at the level of the post-infarction scar. The hyaluronic scaffold, previously engineered for 3 weeks with rat MSCs, was introduced into the pouch and the myocardial edges were sutured with a few stitches. Two weeks later we evaluated cardiac function by M-mode echocardiography and myocardial morphology by microscope analysis. We chose bone marrow-derived mesenchymal stem cells (MSCs) because they have shown great signaling and regenerative properties when delivered to heart tissue following a myocardial infarction (MI).
However, while the object of cell transplantation is to improve ventricular function, cardiac cell transplantation has had limited success because of poor graft viability and low cell retention, which is why we decided to combine MSCs with a biopolymeric scaffold. At the end of the experiments we observed that the hyaluronan fibres had not been substantially degraded 2 weeks after heart transplantation. Most MSCs had migrated to the surrounding infarcted area, where they were especially found close to small-sized vessels. Scar tissue was moderate in the engrafted region and the thickness of the corresponding ventricular wall was comparable to that of the non-infarcted remote area. Also, the left ventricular shortening fraction, evaluated by M-mode echocardiography, was slightly increased compared with that measured just before construct transplantation. Therefore, this study suggests that post-infarction myocardial remodelling can be favourably affected by the grafting of MSCs delivered through a hyaluronan-based scaffold.
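The left ventricular shortening fraction mentioned above is conventionally obtained from M-mode end-diastolic and end-systolic diameters; the formula below is the standard echocardiographic definition, not one quoted from the study.

```latex
% Conventional M-mode left ventricular fractional shortening (FS):
% LVEDD / LVESD are the left ventricular end-diastolic / end-systolic diameters.
\mathrm{FS}\,(\%) = \frac{\mathrm{LVEDD} - \mathrm{LVESD}}{\mathrm{LVEDD}} \times 100.
```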

Relevance:

100.00%

Publisher:

Abstract:

In this thesis we focused on the characterization of the reaction center (RC) protein purified from the photosynthetic bacterium Rhodobacter sphaeroides. In particular, we discussed the effects of native and artificial environments on the light-induced electron transfer processes. The native environment consists of the inner antenna LH1 complex, which copurifies with the RC forming the so-called core complex, and the lipid phase tightly associated with it. In parallel, we analyzed the role of saccharidic glassy matrices in the interplay between electron transfer processes and internal protein dynamics. As a different artificial matrix, we incorporated the RC protein in a layer-by-layer structure with a twofold aim: to check the behaviour of the protein in such an unusual environment and to test the response of the system to herbicides. By examining the RC in its native environment, we found that the light-induced charge-separated state P+QB- is markedly stabilized (by about 40 meV) in the core complex as compared to the RC-only system over a physiological pH range. We also verified that, as compared to the average composition of the membrane, the core complex copurifies with a tightly bound lipid complement of about 90 phospholipid molecules per RC, which is strongly enriched in cardiolipin. In parallel, a large ubiquinone pool was found in association with the core complex, giving rise to a quinone concentration about ten times larger than the average one in the membrane. Moreover, this quinone pool is fully functional, i.e. it is promptly available at the QB site during multiple-turnover excitation of the RC. The latter two observations suggest important heterogeneities and anisotropies in the native membranes which can in principle account for the stabilization of the charge-separated state in the core complex. The thermodynamic and kinetic parameters obtained in the RC-LH1 complex are very close to those measured in intact membranes, indicating that the electron transfer properties of the RC in vivo are essentially determined by its local environment. The studies performed by incorporating the RC into saccharidic matrices evidenced the relevance of solvent-protein interactions and dynamical coupling in determining the kinetics of electron transfer processes. The usual approach when studying the interplay between internal motions and protein function consists in freezing the degrees of freedom of the protein at cryogenic temperature. We proved that the "trehalose approach" offers distinct advantages with respect to this traditional methodology. We showed, in fact, that the RC conformational dynamics, coupled to specific electron transfer processes, can be modulated by varying the hydration level of the trehalose matrix at room temperature, thus allowing solvent effects to be disentangled from temperature effects. The comparison between different saccharidic matrices has revealed that the structural and dynamical protein-matrix coupling depends strongly upon the sugar. The analyses performed on RCs embedded in polyelectrolyte multilayer (PEM) structures have shown that the electron transfer from QA- to QB, a conformationally gated process extremely sensitive to the RC environment, can be strongly modulated by the hydration level of the matrix, confirming analogous results obtained for this electron transfer reaction in sugar matrices. We found that PEM-RCs are a very stable system, particularly suitable for studying the thermodynamics and kinetics of herbicide binding to the QB site.
These features make PEM-RC structures quite promising in the development of herbicide biosensors. The studies discussed in the present thesis have shown that, although the effects on electron transfer induced by the native and artificial environments tested are markedly different, they can be described on the basis of a common kinetic model which takes into account the static conformational heterogeneity of the RC and the interconversion between conformational substates. Interestingly, the same distribution of rate constants (i.e. a Gamma distribution function) can describe charge recombination processes in solutions of purified RC, in RC-LH1 complexes, in wet and dry RC-PEM structures and in glassy saccharidic matrices over a wide range of hydration levels. In conclusion, the results obtained for RCs in different physico-chemical environments emphasize the relevance of the structure/dynamics solvent/protein coupling in determining the energetics and the kinetics of electron transfer processes in a membrane protein complex.
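To make the "same distribution of rate constants" statement concrete: if the recombination rate constant is Gamma-distributed over conformational substates, the ensemble-averaged decay of the charge-separated state is a power law rather than a single exponential. This is the standard result sketched below; the shape and rate parameter names are illustrative, not the thesis's notation.

```latex
% Sketch of the distributed-kinetics picture: rate constant k drawn from a
% Gamma density with shape alpha and rate lambda (illustrative parameters);
% averaging exponential decays over this density gives a power-law survival.
p(k) = \frac{\lambda^{\alpha}}{\Gamma(\alpha)}\,k^{\alpha-1}e^{-\lambda k},
\qquad
\langle N(t)\rangle = \int_0^{\infty} p(k)\,e^{-k t}\,dk
                    = \Bigl(1 + \frac{t}{\lambda}\Bigr)^{-\alpha}.
```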

Relevance:

100.00%

Publisher:

Abstract:

In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology. It allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization from a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes which are strongly correlated with the group of individuals. Even though methods to analyse the data are today well developed and close to reaching a standard organization (through the effort of proposed international projects like the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to stumble on a clinician's question for which no compelling statistical method is available to answer it. The contribution of this dissertation to deciphering disease regards the development of new approaches aimed at handling open problems posed by clinicians in specific experimental designs. Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array, through quality controls, to the preprocessing steps that will be used in the data analysis in the rest of the dissertation. Chapter 2 provides a critical review of standard analysis methods, stressing most of their open problems. Chapter 3 introduces a method to address the issue of unbalanced design of microarray experiments. In microarray experiments, experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each single probe is given a "score" ranging from 0 to 1,000 based on its recurrence in the 1,000 lists as differentially expressed. The performance of MultiSAM was compared to the performance of SAM and LIMMA [3] over two simulated data sets generated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe, SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score >300 and separates the data into two clusters by hierarchical clustering.
We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. Chapter 4 describes a method to address the evaluation of similarities in a three-class problem by means of the Relevance Vector Machine [4]. In fact, looking at microarray data in a prognostic and diagnostic clinical framework, not only differences can have a crucial role. In some cases similarities can give useful and sometimes even more important information. The goal, given three classes, could be to establish, with a certain level of confidence, whether the third one is similar to the first or to the second one. In this work we show that the Relevance Vector Machine (RVM) [2] could be a possible solution to the limitations of standard supervised classification. In fact, RVM offers many advantages compared, for example, with its well-known precursor (the Support Vector Machine, SVM [3]). Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, option of any practical pattern recognition system. We focused on a three-class tumor-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then to evaluate the third class G2 as a test set to obtain the probability for samples of G2 to be members of class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to breast cancer samples of grade 1. Looking at the literature, this result had been conjectured, but no measure of significance had been given before.
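A minimal Python sketch of the MultiSAM resampling scheme described above: a plain Welch t-test stands in for the full SAM statistic, and the expression matrices, class sizes and score threshold are illustrative placeholders only.

```python
# Minimal sketch of the MultiSAM resampling scheme. A Welch t-test stands in
# for the SAM statistic; the toy matrices are random placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_probes = 500
lpc = rng.normal(0.0, 1.0, size=(n_probes, 8))    # less populated class: 8 samples
mpc = rng.normal(0.0, 1.0, size=(n_probes, 60))   # more populated class: 60 samples
lpc[:20] += 2.0                                    # 20 probes truly differentially expressed

def differentially_expressed(a, b, alpha=0.01):
    """Stand-in for one SAM run: probes called significant by a Welch t-test."""
    _, p = stats.ttest_ind(a, b, axis=1, equal_var=False)
    return p < alpha

n_iter = 1000
score = np.zeros(n_probes, dtype=int)
for _ in range(n_iter):
    # compare the LPC with a random subsample of the MPC of the same size
    cols = rng.choice(mpc.shape[1], size=lpc.shape[1], replace=False)
    score += differentially_expressed(lpc, mpc[:, cols])

# each probe ends with a score in 0..1000: its recurrence among the 1000 lists
selected = np.where(score > 300)[0]
print(f"probes with score > 300: {len(selected)}")
```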

Relevance:

100.00%

Publisher:

Abstract:

Participation appeared in development discourses for the first time in the 1970s, as a generic call for the involvement of the poor in development initiatives. Over the last three decades, the initial perspectives on participation, intended as a project method for poverty reduction, have evolved into a coherent and articulated theoretical elaboration, in which participation figures among the paraphernalia of good-governance promotion: participation has acquired the status of a "new orthodoxy". Nevertheless, the experience of the implementation of participatory approaches in development projects seemed in the majority of cases rather disappointing, since the transformative potential of 'participation in development' depends on a series of factors in which every project can actually differ from the others: the ultimate aim of the approach promoted, its forms and contents and, last but not least, the socio-political context in which the participatory initiative is embedded. In Egypt, the signature of a project agreement between the Arab Republic of Egypt and the Federal Republic of Germany in 1998 inaugurated a Participatory Urban Management Programme (PUMP) to be implemented in Greater Cairo by the German Technical Cooperation (Deutsche Gesellschaft für Technische Zusammenarbeit, GTZ), with the Ministry of Planning (now Ministry of Local Development) and the Governorates of Giza and Cairo as the main counterparts. Now, ten years after the beginning of the PUMP/PDP and close to its end (December 2010), it is possible to draw some conclusions about the scope, the significance and the effects of the participatory approach adopted by GTZ and appropriated by the Egyptian counterparts in dealing with the issue of informal areas and, more generally, of urban development. Our analysis follows three sets of questions: the first set regards the way 'participation' has been interpreted and concretised by PUMP and PDP. The second is about the emancipating potential of the 'participatory approach' and its ability to 'empower' the 'marginalised'. The third focuses, on the one hand, on the efficacy of the GTZ strategy in leading to an improvement of service delivery in informal areas (especially in terms of planning and policies) and, on the other hand, on the potential of GTZ development interventions to trigger an incremental process of 'democratisation' from below.

Relevance:

100.00%

Publisher:

Abstract:

Abstract. This thesis presents a discussion on a few specific topics regarding the low velocity impact behaviour of laminated composites. These topics were chosen because of their significance as well as the relatively limited attention received so far by the scientific community. The first issue considered is the comparison between the effects induced by a low velocity impact and by a quasi-static indentation experimental test. An analysis of both test conditions is presented, based on the results of experiments carried out on carbon fibre laminates and on numerical computations by a finite element model. It is shown that both quasi-static and dynamic tests led to qualitatively similar failure patterns; three characteristic contact force thresholds, corresponding to the main steps of damage progression, were identified and found to be equal for impact and indentation. On the other hand, an equal energy absorption resulted in a larger delaminated area in quasi-static than in dynamic tests, while the maximum displacement of the impactor (or indentor) was higher in the case of impact, suggesting a probably more severe fibre damage than in indentation. Secondly, the effect of different specimen dimensions and boundary conditions on its impact response was examined. Experimental testing showed that the relationships of delaminated area with two significant impact parameters, the absorbed energy and the maximum contact force, did not depend on the in-plane dimensions and on the support condition of the coupons. The possibility of predicting, by means of a simplified numerical computation, the occurrence of delaminations during a specific impact event is also discussed. A study about the compressive behaviour of impact damaged laminates is also presented. Unlike most of the contributions available about this subject, the results of compression after impact tests on thin laminates are described in which the global specimen buckling was not prevented. Two different quasi-isotropic stacking sequences, as well as two specimen geometries, were considered. It is shown that in the case of rectangular coupons the lay-up can significantly affect the damage induced by impact. Different buckling shapes were observed in laminates with different stacking sequences, in agreement with the results of numerical analysis. In addition, the experiments showed that impact damage can alter the buckling mode of the laminates in certain situations, whereas it did not affect the compressive strength in every case, depending on the buckling shape. Some considerations about the significance of the test method employed are also proposed. Finally, a comprehensive study is presented regarding the influence of pre-existing in-plane loads on the impact response of laminates. Impact events in several conditions, including both tensile and compressive preloads, both uniaxial and biaxial, were analysed by means of numerical finite element simulations; the case of laminates impacted in postbuckling conditions was also considered. The study focused on how the effect of preload varies with the span-to-thickness ratio of the specimen, which was found to be a key parameter. It is shown that a tensile preload has the strongest effect on the peak stresses at low span-to-thickness ratios, leading to a reduction of the minimum impact energy required to initiate damage, whereas this effect tends to disappear as the span-to-thickness ratio increases. 
On the other hand, a compression preload exhibits the most detrimental effects at medium span-to-thickness ratios, at which the laminate compressive strength and the critical instability load are close to each other, while the influence of preload can be negligible for thin plates or even beneficial for very thick plates. The possibility of obtaining a better explanation of the experimental results described in the literature, in view of the present findings, is highlighted. Throughout the thesis the capabilities and limitations of the finite element model, which was implemented in an in-house program, are discussed. The program did not include any damage model of the material. It is shown that, although this kind of analysis can yield accurate results as long as damage has little effect on the overall mechanical properties of a laminate, it can be helpful in explaining some phenomena and also in distinguishing between what can be modelled without taking into account the material degradation and what requires an appropriate simulation of damage.

Relevance:

100.00%

Publisher:

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was thought to be optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was thought to be optimized to compare the properties of low-luminosity sources to those of higher luminosity and, thus, it was also used to test the emission mechanism models; finally, the XMM-Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample of objects to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (between ~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies and in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron line equivalent width significantly differ between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM-Newton and X-CfA samples I also deduced from measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration. These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured using for the first time the 20-100 keV luminosity (EW proportional to L(20-100)^(-0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, thereby suggesting a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been recorded in both type I and II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, naturally emerges supposing that the accretion disk penetrates the central corona at different depths, depending on the accretion rate (Merloni et al. 2006): the higher-accreting systems host disks extending down to the last stable orbit, while the lower-accreting systems host truncated disks. On the contrary, the study of the well-defined X-CfA sample of Seyfert galaxies has proved that the intrinsic X-ray luminosity of nearby Seyfert galaxies can span values between 10^38 and 10^43 erg s^-1, i.e. covering a huge range of accretion rates. The less efficient systems have been supposed to host ADAF systems without an accretion disk.
However, the study of the X-CfA sample has also proved the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to the ones obtained if high-L objects are considered. Thus the emission mechanism must be similar in luminous and weak systems. A possible scenario to reconcile these somewhat opposite indications is to assume that the ADAF and the two-phase mechanism co-exist with different relative importance moving from low- to high-accretion systems (as suggested by the Gamma vs. R relation). The present data require that no abrupt transition between the two regimes is present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to supermassive black holes, three case-study objects for which sufficient count statistics are available have been analysed using deep X-ray observations taken with XMM-Newton. The results obtained have shown that the accretion flow can significantly differ between the objects when it is analyzed with the appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk seems to be formed spiraling in the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recursive modulations have been measured both in the continuum emission and in the broad emission line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission line component. Moreover, blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around supermassive black holes, there is matter which is not confined to the accretion disk and moves along the line of sight with velocities as large as v~0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes for the dynamics of the innermost regions of accretion flows, to tackle the formation of ejecta/jets and to place constraints on the rate of kinetic energy injected by AGNs into the ISM and IGM. Future high-energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest-velocity outflows.

Relevance:

100.00%

Publisher:

Abstract:

Allergies are a complex of symptoms derived from altered IgE-mediated reactions of the immune system towards substances known as allergens. Allergic sensitization can be of food or respiratory origin and, in particular, apple and hazelnut allergens have been identified in pollens or fruits. Allergic cross-reactivity can occur in a patient reacting to similar allergens from different origins, justifying research in both systems, since in Europe a great number of people suffer from apple fruit allergy, but little evidence exists about the pollen. Apple fruit allergies are due to four different classes of allergens (Mal d 1, 2, 3, 4), whose allergenicity is related both to genotype and to tissue specificity; therefore I have investigated their presence also in pollen at different times of germination, to clarify the allergenic potential of apple pollen. I have observed that the same four classes of allergens found in fruit are expressed at different levels also in pollen, and their presence might support the idea that apple pollen, like the fruit, can be considered allergenic, suggesting that apple allergy could also be indirectly caused by sensitization to pollen. Climate changes resulting from increases in temperature and air pollution influence pollen allergenicity, responsible for the dramatic rise in respiratory allergies (hay fever, bronchial asthma, conjunctivitis). Although the link between climate change and pollen allergenicity is proven, the underlying mechanism is little understood. Transglutaminases (TGases), a class of enzymes able to post-translationally modify proteins, are activated under stress and involved in some inflammatory responses, enhancing the activity of pro-inflammatory phospholipase A2 and thus suggesting a role in allergies. Recently, a calcium-dependent TGase activity has been identified in the pollen cell wall, raising the possibility that pollen TGase may have a role in the modification of the pollen allergens reported above, thus stabilizing them against proteases. This enzyme can also be involved in the transamidation of proteins present in the human mucosa interacting with surface pollen or, finally, the enzyme itself can represent an allergen, as suggested by studies on celiac disease. I have hypothesized that this pollen enzyme can be affected by climate changes and be involved in exacerbating the allergic response. The data presented in this thesis represent a scientific basis for the future development of studies devoted to verifying the hypothesis set out here. First, I have demonstrated by laser confocal microscopy the presence of an extracellular TGase on the surface of the grain, observed at both the apical and the proximal parts of the pollen tube (Iorio et al., 2008), which plays an essential role in apple pollen-tube growth, as suggested by the arrest of tube elongation by TGase inhibitors, such as EGTA or R281. Its involvement in pollen tube growth is mainly confirmed by the activity and gene expression data, because TGase showed a peak between 15 min and 30 min of germination, when this process is well established, and an optimal pH around 6.5, which is close to that recorded for the germination medium.
Moreover, the data show that pollen TGase can be a glycoprotein, as the glycosylation profile is linked both with the activation of the enzyme and with its localization at the pollen cell wall during germination; the data presented suggest that the active form of TGase involved in pollen tube growth and pollen-stylar interaction is more exposed and more weakly bound to the cell wall. Interestingly, TGase interacts with fibronectin (FN), a putative SAM or psECM component, possibly inducing intracellular signal transduction during the pollen-stylar interaction occurring in the germination process, since a protein immunorecognised by an anti-FN antibody is also present in pollen, in particular at the level of the pollen grain cell wall in a punctate pattern, but also along the shank of the pollen tube wall, in a pattern that recalls the signal obtained with the anti-TGase antibody. FN represents a good substrate for the enzyme activity, better than DMC, which is usually used as a standard substrate for animal TGase. Thus, this pollen enzyme, necessary for germination, is exposed on the pollen surface and consequently can easily interact with mucosal proteins, as germinated pollen has been found in studies conducted on human mucus (Forlani, personal communication). I have obtained data showing that TGase activity increases in a very remarkable way when pollen is exposed to stressful conditions, such as climate changes and environmental pollution. I have used two different species of pollen, an aeroallergenic pollen (hazelnut, Corylus avellana), whose allergenicity is well documented, and an entomophilous pollen (apple, Malus domestica), which is not yet well characterized, to compare data on their mechanisms of action in response to stressors. The two pollens were exposed to climate changes (different temperatures, relative humidity (rH), acid rain at pH 5.6 and copper pollution (3.10 µg/l)) and showed an increase in pollen-surface TGase activity that is not accompanied by an induced expression of the TGase protein immunoreactive with AtPNG1p. Probably, climate change induces an alteration of, or damage to, the pollen cell wall that leads the pollen grains to release their content, including the TGase enzyme, into the medium, where it is free to carry out its function, as confirmed by the immunolocalisation and by the in situ TGase activity assay data; morphological examination indicated pollen damage, significantly reduced viability and, under acid rain conditions, early germination of apple pollen, thus possibly enhancing TGase exposure on the pollen surface. Several pollen proteins were post-translationally modified, as well as mammalian sPLA2, especially with Corylus pollen, which results in its activation, potentially altering pollen allergenicity and inflammation. Pollen TGase activity mimicked the behaviour of gpl TGase and AtPNG1p in the stimulation of sPLA2, even if the regulatory mechanism seems different from that of gpl TGase, because pollen TGase favours intermolecular cross-linking between various molecules of sPLA2, giving rise to high-molecular-weight protein networks that are normally more stable. In general, the pollens exhibited a significant endogenous phospholipase activity, and differences were observed according to the allergenic (Corylus) or not-well-characterized (Malus) attitude of the pollen.
However, even if with a different intensity of activation, the pollen enzymes share the ability to activate sPLA2, thus suggesting an important regulatory role in the activation of a key enzyme of the inflammatory response, which I have addressed here in the context of pollen allergy. In conclusion, from all the data presented (mainly the presence of allergens, the presence of an extracellular TGase, the increase in its activity following exposure to environmental pollution, and the activation of PLA2), I can conclude that Malus pollen can also behave as potentially allergenic. The mechanisms described here that could affect the allergenicity of pollen may be the same occurring in fruit, paving the way for future studies on the identification of hyper- and hypo-allergenic cultivars, on preventing the effects of environmental stressors and, possibly, on the production of transgenic plants.