879 results for Next-generation sequencing


Abstract:

Graphene has received great attention due to its exceptional properties, which include charge carriers with zero effective mass and extremely large mobilities; these could render it the template for the next generation of electronic devices. Furthermore, its weak spin-orbit interaction, a consequence of the low atomic number of the carbon atom, results in long spin-coherence lengths. Graphene is therefore also a promising material for future applications in spintronic devices, which exploit the electron's spin degree of freedom instead of its charge. Graphene can be engineered into a number of different structures. In particular, by appropriately cutting it one can obtain a quasi-1D system, only a few nanometers in width, known as a graphene nanoribbon (GNR); such ribbons owe their properties largely to their width and to the atomic structure along their edges. GNR-based systems have shown great potential, especially as connectors for integrated circuits. Impurities and defects can play an important role in the coherence of these systems. In particular, the presence of transition-metal atoms can lead to significant spin-flip processes of conduction electrons; understanding this effect is of utmost importance for applied spintronics design. In this work, we focus on the electronic transport properties of armchair graphene nanoribbons with adsorbed transition-metal atoms as impurities, taking the spin-orbit effect into account. Our calculations were performed using a combination of density functional theory and non-equilibrium Green's functions. Employing a recursive method, we also consider a large number of impurities randomly distributed along the nanoribbon in order to infer the spin-coherence length for different defect concentrations.
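
As a hedged illustration of the transport formalism this abstract invokes, the sketch below evaluates the Landauer transmission T(E) = Tr[Γ_L G Γ_R G†] for a toy one-dimensional tight-binding chain with wide-band-limit leads. The chain length, hopping and lead coupling are invented parameters; the DFT Hamiltonian, spin-orbit terms and disorder averaging of the actual work are omitted.

```python
import numpy as np

# Toy NEGF sketch (not the thesis code): transmission through a 1-D tight-binding
# chain, T(E) = Tr[Gamma_L G Gamma_R G^dagger], with wide-band-limit lead
# self-energies. All parameters are illustrative.
N, t, gamma = 20, -1.0, 0.5                    # sites, hopping, lead coupling

H = np.diag(np.full(N - 1, t), 1) + np.diag(np.full(N - 1, t), -1)

def transmission(E, eta=1e-9):
    sigma_L = np.zeros((N, N), complex); sigma_L[0, 0] = -0.5j * gamma
    sigma_R = np.zeros((N, N), complex); sigma_R[-1, -1] = -0.5j * gamma
    G = np.linalg.inv((E + 1j * eta) * np.eye(N) - H - sigma_L - sigma_R)
    gam_L = 1j * (sigma_L - sigma_L.conj().T)  # lead broadening matrices
    gam_R = 1j * (sigma_R - sigma_R.conj().T)
    return np.trace(gam_L @ G @ gam_R @ G.conj().T).real

for E in (-1.5, 0.0, 1.5):                     # energies inside the band |E| < 2|t|
    print(f"T({E:+.1f}) = {transmission(E):.3f}")
```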

Abstract:

Master's in Tourism, Transport and Environmental Economics

Abstract:

It is well known that theories of the firm have evolved along a path paved by an increasing awareness of the importance of organizational structure. The spectrum runs from the early "neoclassical" conceptualizations, which viewed the firm as a rational actor aiming to produce, given the inputs at its disposal and subject to technological or environmental constraints, the amount of output that maximizes revenue (see Boulding, 1942 for a mid-century state-of-the-art discussion), to the knowledge-based theory of the firm (Nonaka & Takeuchi, 1995; Nonaka & Toyama, 2005), which recognizes in the firm a knowledge-creating entity with specific organizational capabilities (Teece, 1996; Teece & Pisano, 1998) that allow it to sustain competitive advantages. Tracing a map of the evolution of the theory of the firm, taking into account the several perspectives adopted in the history of thought, would take the length of many books. A more fruitful strategy is therefore to circumscribe the description of the literature to one strand connected to a crucial question about the nature of the firm's behaviour and the determinants of competitive advantage. In doing so I adopt a perspective that treats the organizational structure of the firm as the element according to which the different theories can be discriminated. The approach starts from the drawbacks of the standard neoclassical theory of the firm; after discussing the most influential theoretical approaches, I end with a close examination of the knowledge-based perspective, within which the firm is considered a knowledge-creating entity that produces and manages knowledge (Nonaka, Toyama, & Nagata, 2000; Nonaka & Toyama, 2005). In a knowledge-intensive organization, knowledge is embedded for the most part in the human capital of the individuals who compose it. In such an organization the management, in order to cope with knowledge-intensive production, ought to develop and accumulate capabilities that shape the organizational forms in ways that rely on "cross-functional processes, extensive delayering and empowerment" (Foss 2005, p.12). This mechanism contributes to determining the absorptive capacity of the firm towards specific technologies and, in so doing, also shapes the technological trajectories along which the firm moves. Having recognized the growing importance of the firm's organizational structure in the theoretical literature, the analysis then provides an overview of the changes that have occurred at the micro level in the firm's organization of production. Economic actors must deal with the challenges posed by internationalisation and globalization, the increased and increasing competitive pressure of less developed countries on low-value-added production activities, changes in technologies, and greater environmental turbulence and volatility. As a consequence, it has been widely recognized that the main organizational models of production that fitted well in the 20th century are now partially inadequate, and processes aiming to reorganize production activities have spread across several economies in recent years.
Recently, the emergence of a "new" form of production organization has been proposed by scholars, practitioners and institutions alike; the most prominent characteristic of this model is the importance it assigns to employees' commitment and involvement. It is accordingly characterized by a strong emphasis on human resource management and on practices that widen workers' autonomy and responsibility as well as increasing their commitment to the organization (Osterman, 1994; 2000; Lynch, 2007). This "model" of production organization is often labelled a High Performance Work System (HPWS). Despite the increasing diffusion of workplace practices that fall under the HPWS heading in western companies, it is to some extent hazardous to speak of the emergence of a "new organizational paradigm". A discussion of organizational changes and the diffusion of high performance work practices (HPWP) cannot abstract from the industrial relations system, and in particular from employment relationships, because these matter, just as production organization does, for two major outcomes of the firm: innovation and economic performance. The argument is developed starting from the issue of Social Dialogue at the macro level, from both a European and an Italian perspective. The model of interaction between the social partners has repercussions, at the micro level, on employment relationships, that is, on the relations between union delegates and management or between workers and management. Finding economic and social policies capable of sustaining growth and employment in a knowledge-based scenario is likely to constitute the major challenge for the next generation of social pacts, which are the main outcome of social dialogue. As Acocella and Leoni (2007) argue, social pacts may constitute an instrument for trading wage moderation for high-intensity investment in ICT, organization and human capital. Empirical evidence, especially at the micro level, of a positive relation between economic growth and new organizational designs coupled with ICT adoption and non-adversarial industrial relations is growing. Partnership among the social partners may thus become an instrument for enhancing firm competitiveness. The outcome of the discussion is the integration of organizational change and industrial relations within a unified framework, the HPWS; this choice helps in disentangling the potential complementarities between these two aspects of the firm's internal structure in their effects on economic and innovative performance. The third chapter opens the more original part of the thesis. The data used to disentangle the relations between HPWS practices, innovation and economic performance refer to manufacturing firms of the Reggio Emilia province with more than 50 employees, and were collected through face-to-face interviews with both management (199 respondents) and union representatives (181 respondents). The cross-section datasets are complemented by longitudinal balance sheets (1994-2004). Collecting reliable data that in turn yield reliable results always demands great effort, with uncertain returns.
Micro-level data are often subject to a trade-off: the wider the geographical context to which the surveyed population belongs, the smaller the amount of information usually collected (low resolution); the narrower the focus on a specific geographical context, the larger the amount of information usually collected (high resolution). For the Italian case, evidence on the diffusion of HPWP and their effects on firm performance is still scanty and usually limited to local-level studies (Cristini, et al., 2003). The thesis also deepens an argument of particular interest: the existence of complementarities between HPWS practices. Empirical evidence has widely shown that HPWP are more likely to affect firm performance when adopted in bundles than when adopted in isolation (Ichniowski, Prennushi, Shaw, 1997). Is this also true for the local production system of Reggio Emilia? The empirical analysis aims precisely at providing evidence on the relations between the HPWS dimensions and the innovative and economic performance of the firm. As far as the first line of analysis is concerned, the fundamental role that innovation plays in the economy must be stressed (Geroski & Machin, 1993; Stoneman & Kwon 1994, 1996; OECD, 2005; EC, 2002). Here the evidence ranges from traditional innovation, usually approximated by R&D expenditure or patent counts, to the more recent introduction and adoption of ICT (Brynjolfsson & Hitt, 2000). If innovation is important, then it is critical to analyse its determinants. This work hypothesises that the organizational changes and the firm-level industrial/employment relations aspects that can be put under the HPWS heading influence the firm's propensity to innovate in product, process and quality. The general argument goes as follows: changes in production management and work organization reconfigure the absorptive capacity of the firm towards specific technologies and, in so doing, shape the technological trajectories along which the firm moves; cooperative industrial relations may lead to smoother adoption of innovations, because unions do not oppose them. The first empirical chapter shows that the different types of innovation respond in different ways to the HPWS variables; the underlying processes of product, process and quality innovation are likely to answer to different firm strategies and needs. Nevertheless, some general results can be extracted about the HPWS factors that most influence innovative performance. The main three are training coverage, employee involvement and the diffusion of bonuses: these variables show persistent and significant relations with all three innovation types, as do the composite components built around them. In sum, aspects of the HPWS influence the firm's propensity to innovate. At the same time, quite neat (although not always strong) evidence emerges of complementarities between HPWS practices. In terms of the complementarity issue, some specific complementarities exist: training activities, when adopted and managed in bundles, are related to the propensity to innovate. A sound skill base may be an element that enhances the firm's capacity to innovate.
It may enhance both the capacity to absorb exogenous innovation and the capacity to develop innovations endogenously. The presence and diffusion of bonuses and employee involvement also spur innovative propensity: the former because of their incentive nature, the latter because direct worker participation may increase workers' commitment to the organization and thus their willingness to support and suggest innovations. The other line of analysis provides results on the relation between HPWS and the economic performance of the firm. There is a large body of international empirical studies on the relation between organizational change and economic performance (Black & Lynch 2001; Zwick 2004; Janod & Saint-Martin 2004; Huselid 1995; Huselid & Becker 1996; Cappelli & Neumark 2001), while works aiming to capture the relations between economic performance and unions or industrial relations are quite scant (Addison & Belfield, 2001; Pencavel, 2003; Machin & Stewart, 1990; Addison, 2005). In the empirical analysis, the integration of the two main areas of the HPWS represents a scarcely exploited approach in both national and international empirical studies. As remarked by Addison, "although most analysis of workers representation and employee involvement/high performance work practices have been conducted in isolation – while sometimes including the other as controls – research is beginning to consider their interactions" (Addison, 2005, p.407). The analysis, which exploits temporal lags between the dependent variable and the covariates, a possibility afforded by merging the cross-section and panel data, provides evidence that HPWS practices affect the firm's economic performance, however measured. Although no robust evidence emerges of complementarities among HPWS aspects with respect to performance, there is evidence of a generally positive influence of the single practices. The results are quite sensitive to the time lags, suggesting that time-varying heterogeneity is an important factor in determining the impact of organizational changes on economic performance. The implications of the analysis can be of help both to management and to local policy makers. Although the results cannot simply be extended to other local production systems, it may be argued that in contexts similar to the Reggio Emilia province, characterized by small and medium enterprises organized in districts and by deeply rooted unionism with strong supporting institutions, the results and implications obtained here may also fit well. A hope for future research on this subject is to collect good-quality information over wider geographical areas, possibly at the national level, repeated over time. Only in this way can the Gordian knot of the linkages between innovation, performance, high performance work practices and industrial relations be untangled.
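
As a hedged sketch of the kind of complementarity test described above (not the thesis' actual specification or data), one can interact two HPWS practice indicators in a performance regression and inspect the interaction coefficient. All variable names and the simulated data below are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated illustration of a complementarity test between two HPWS practices:
# if the practices are complements, the interaction term should be positive and
# significant. Variables and data are invented, not the Reggio Emilia survey.
rng = np.random.default_rng(0)
n = 199
df = pd.DataFrame({
    "training": rng.integers(0, 2, n),      # 1 = broad training coverage
    "involvement": rng.integers(0, 2, n),   # 1 = employee-involvement practices
    "size": rng.normal(100, 30, n),         # control: firm size (employees)
})
df["innovation"] = (0.2 * df.training + 0.2 * df.involvement
                    + 0.3 * df.training * df.involvement  # built-in complementarity
                    + 0.002 * df["size"] + rng.normal(0, 0.5, n))

fit = smf.ols("innovation ~ training * involvement + size", data=df).fit()
print(fit.params["training:involvement"])   # the complementarity coefficient
```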

Abstract:

The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called a system-on-chip (SoC) or multi-processor system-on-chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet-switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features, and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
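
As a textbook-level illustration of the packet-switching paradigm discussed above (and not of the actual ×pipes routing logic), the sketch below implements dimension-ordered XY routing, a common deterministic, deadlock-free policy on 2-D mesh NoC topologies.

```python
# Dimension-ordered (XY) routing on a 2-D mesh: a packet first travels along X,
# then along Y. A generic textbook policy, assumed here for illustration only.
def xy_route(src, dst):
    """Return the list of (x, y) router coordinates visited from src to dst."""
    (x, y), (dx, dy) = src, dst
    path = [(x, y)]
    while x != dx:                     # X dimension first...
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                     # ...then Y, which avoids routing cycles
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 3)))        # [(0,0), (1,0), (2,0), (2,1), (2,2), (2,3)]
```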

Abstract:

The aim of this Doctoral Thesis is to develop a genetic-algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts with the aircraft specifications, and then the turbofan or turboprop best suited to the specific application is chosen. In the aeronautical piston engine field, which lay dormant for several decades as interest shifted towards turbine aircraft, new materials with increased performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very high, with several non-linearities needed to describe the behaviour of the phenomena. In this case the objective function has many local extrema, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, may offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a mono-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration of minimum mass that satisfies the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller shaft speeds and the maximum pressure in the combustion chamber. The bounds of the design variables, which describe the solution domain from the geometrical point of view, are introduced as well. In the Matlab® Optimization environment, the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step an elitist method is applied, in order to save the fittest individuals from disruption by mutation and recombination, making them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, in order to build an automatic 3D CAD model of each component of the propulsive system, giving a direct pre-visualization of the final product while still in the engine's preliminary design phase. To show the performance of the algorithm and to validate the optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a 4-cylinder, four-stroke Diesel. Many verifications are made on the mechanical components of the engine in order to test their feasibility and decide their survival through the generations. A system of inequalities describes the non-linear relations between the design variables and is used to check the components under static and dynamic load configurations.
The geometrical bounds of the design variables are taken from actual engine data and similar design cases. Among the many simulations run to test the algorithm, twelve have been chosen as representative of the distribution of the individuals. Then, as an example, the corresponding 3D models of the crankshaft and the connecting rod have been built automatically for each simulation. In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) compared with the original configuration, and an acceptable robustness of the method has been demonstrated. The algorithm developed here is shown to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfil the feasibility specifications. This optimization algorithm could boost aeronautical piston engine development, speeding up the production rate and joining modern computational performance and technological awareness to long-standing traditional design experience.
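
A minimal sketch of the mono-objective, elitist GA loop described above, assuming placeholder design variables, a placeholder mass model, and a single made-up feasibility inequality (the real algorithm evaluates structural and performance checks for every engine component; infeasibility is handled here via a penalty rather than outright rejection):

```python
import random

# Toy elitist GA minimizing total mass under a feasibility penalty.
# Bounds, the mass model and the constraint are placeholders, not the
# actual aero-engine design variables.
BOUNDS = [(0.02, 0.08), (0.10, 0.25), (0.5, 2.0)]    # invented design variables

def mass(ind):
    return sum(10.0 * v for v in ind)                # placeholder mass model

def feasible(ind):
    return ind[1] / ind[0] < 8.0                     # placeholder inequality check

def fitness(ind):                                    # infeasible designs are
    return mass(ind) + (0.0 if feasible(ind) else 1e6)   # penalized heavily

def evolve(pop_size=40, gens=200, pm=0.1):
    rnd = lambda: [random.uniform(lo, hi) for lo, hi in BOUNDS]
    pop = [rnd() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:2]                              # elitism: best pass unchanged
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(pop[: pop_size // 2], 2)   # select fitter parents
            cut = random.randrange(1, len(BOUNDS))
            child = a[:cut] + b[cut:]                # one-point crossover
            if random.random() < pm:                 # mutation within bounds
                i = random.randrange(len(BOUNDS))
                child[i] = random.uniform(*BOUNDS[i])
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print(best, mass(best), feasible(best))
```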

Abstract:

Next-generation electronic devices have to guarantee high performance while being less power-consuming and highly reliable, for application domains ranging from entertainment to business. In this context, multicore platforms have proven the most efficient design choice, but new challenges have to be faced. The ever-increasing miniaturization of components produces unexpected variations in technological parameters and wear-out, characterized by soft and hard errors. Even though hardware techniques, which lend themselves to application at design time, have been studied with the objective of mitigating these effects, they are not sufficient; software adaptive techniques are therefore necessary. In this thesis we focus on multicore task allocation strategies that minimize energy consumption while meeting performance constraints. We first devise a technique based on an Integer Linear Programming (ILP) formulation which provides the optimal solution but cannot be applied on-line, since the algorithm it requires is too time-consuming; we then propose a sub-optimal two-step technique which can be applied on-line. We demonstrate the effectiveness of the latter solution through an exhaustive comparison against the optimal solution, state-of-the-art policies, and variability-agnostic task allocations, by running multimedia applications on the virtual prototype of a next-generation industrial multicore platform. We also face the problem of performance and lifetime degradation. We first focus on embedded multicore platforms and propose an idleness distribution policy that increases the cores' expected lifetimes by duty-cycling their activity; we then investigate the use of micro thermoelectric coolers in general-purpose multicore processors to control the temperature of the cores at runtime, with the objective of meeting lifetime constraints without performance loss.
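
A hedged sketch of the flavour of ILP formulation described above, assuming the open-source PuLP modeller and invented per-task energy and utilization numbers; the thesis' actual model (variability effects, platform specifics) is richer than this toy:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Assign each task to exactly one core, minimizing total energy while capping
# each core's utilization. The numbers are made up for illustration.
tasks, cores = range(4), range(2)
energy = [[3, 5], [4, 2], [6, 6], [2, 4]]        # energy[t][c]: task t on core c
util   = [[0.4, 0.5], [0.3, 0.2], [0.5, 0.6], [0.2, 0.3]]

prob = LpProblem("task_allocation", LpMinimize)
x = [[LpVariable(f"x_{t}_{c}", cat=LpBinary) for c in cores] for t in tasks]
prob += lpSum(energy[t][c] * x[t][c] for t in tasks for c in cores)
for t in tasks:                                   # each task on exactly one core
    prob += lpSum(x[t][c] for c in cores) == 1
for c in cores:                                   # per-core utilization budget
    prob += lpSum(util[t][c] * x[t][c] for t in tasks) <= 1.0

prob.solve()
print([[int(x[t][c].value()) for c in cores] for t in tasks])
```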

Abstract:

This degree thesis stems from an internship at VM Motori S.p.A., in the CRM (Centro Ricerca Motori) engineering office, a division of the R&D (Research and Development) department located in Cento, near Ferrara. During this experience, the problems inherent to the automotive sector regarding the research and development of diesel internal-combustion engines were addressed. After an introductory overview defining the business context in which VM Motori S.p.A. operates and the subject of the thesis, the work moves on to the definition and calibration of a low-pressure EGR circuit for an automotive diesel engine rated at 200 HP @ 3800 rpm and 500 Nm @ 1600 rpm, with the aid of the software tools AδαMO, INCA, Controldesk Next Generation, DoE, DIAdem and INDICOM, used for calibration studies at the development test bench. The aspects that distinguish VM Motori S.p.A. from other companies specialized in the automotive sector are analysed with reference to the production fields in which mechanical engineering is applied, examining their technical, methodological and managerial aspects. Noteworthy is the fundamental role of the automotive industry in the world economy with reference to the production of the main diesel engines, VM's primary products.

Abstract:

The next generation of vaccine adjuvants is represented by a wide-ranging set of molecules called Toll-like receptor (TLR) agonists. Although many of these molecules are complex structures extracted from microorganisms, small-molecule TLR agonists have also been identified. However, delivery systems have not been optimized to allow their effective delivery in conjunction with antigens. Here we describe a novel approach in which a small-molecule TLR agonist is conjugated directly to antigens to ensure effective co-delivery. We describe the conjugation of a relevant protein, a recombinant protective antigen from S. pneumoniae (RrgB), linked to a TLR7 agonist. After thorough characterization to ensure there was no aggregation, the conjugate was evaluated in a murine infection model. Results showed that the conjugate extended the animals' survival after lethal challenge with S. pneumoniae. Comparable results were obtained with a 10-fold lower dose than that of the native unconjugated antigen. Notably, animals immunized with the same dose of unconjugated TLR7 agonist and antigen showed no adjuvant effect. The increased immunogenicity was likely a consequence of the co-localization of the TLR7 agonist and the antigen through chemical binding, and was more effective than simple co-administration. This approach could likely be adopted to reduce the dose of antigen required to induce protective immunity, and potentially to increase the safety of a broad variety of vaccine candidates.

Abstract:

MultiProcessor Systems-on-Chip (MPSoCs) are the core of today's and next-generation computing platforms. Their relevance in the global market continuously increases, as they occupy an important role both in everyday products (e.g. smartphones, tablets, laptops, cars) and in strategic market sectors such as aviation, defense, robotics and medicine. Despite the incredible performance improvements of recent years, processor manufacturers have had to deal with issues, commonly called "walls", that have hindered processor development. After the famous "Power Wall", which limited the maximum frequency of a single core and marked the birth of the modern multiprocessor system-on-chip, the "Thermal Wall" and the "Utilization Wall" are the current key limiters of performance improvement. The former concerns the damaging effects on the chip of the high temperatures caused by large power-density dissipation, whereas the latter refers to the impossibility of fully exploiting the computing power of the processor due to limits on the power and temperature budgets. In this thesis we face these challenges by developing efficient and reliable solutions able to maximize performance while keeping the maximum temperature below a fixed critical threshold and saving energy. This has been made possible by exploiting the Model Predictive Control (MPC) paradigm, which solves an optimization problem subject to constraints in order to find the optimal control decisions for the future time interval. A fully-distributed MPC-based thermal controller with far lower complexity than a centralized one has been developed. Control feasibility, and properties useful for simplifying the control design, have been proved by studying a partial-differential-equation thermal model. Finally, the controller has been efficiently included in more complex control schemes able to minimize energy consumption and deal with mixed-criticality tasks.
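
A minimal sketch of the receding-horizon MPC idea, assuming the cvxpy modelling library and an invented linear two-core thermal model; the thesis' distributed controller and PDE-based analysis go far beyond this toy:

```python
import cvxpy as cp
import numpy as np

# Pick core speeds u over a horizon to maximize performance while the predicted
# temperatures x stay below a critical threshold. The thermal model (A, B),
# ambient coupling and limits are all invented for illustration.
A = np.array([[0.95, 0.02], [0.02, 0.95]])   # thermal coupling between 2 cores
B = np.array([[0.4, 0.0], [0.0, 0.4]])       # heating per unit of core speed
T_amb, T_crit, N = 40.0, 80.0, 10            # ambient, threshold, horizon

x0 = np.array([55.0, 60.0])                  # current core temperatures
x = cp.Variable((2, N + 1))
u = cp.Variable((2, N))

cost = -cp.sum(u)                            # maximize total delivered speed
cons = [x[:, 0] == x0, u >= 0, u <= 1]
for k in range(N):
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k]
             + (1 - A.sum(axis=1)) * T_amb]  # leakage toward ambient
    cons += [x[:, k + 1] <= T_crit]          # thermal constraint

cp.Problem(cp.Minimize(cost), cons).solve()
print(np.round(u.value, 2))                  # apply u[:,0], then re-solve (receding horizon)
```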

Abstract:

In this report a new automated optical test for the next generation of photonic integrated circuits (PICs) is presented through the design and assessment of a test bed. After a brief analysis of the critical problems of current optical tests, the main test features are defined: automation and flexibility, a relaxed alignment procedure, speed-up of the entire test, and data reliability. After studying various solutions, the test-bed components are chosen to be a lens array, a photo-detector array, and a software controller. Each device is studied and calibrated, and the spatial resolution and robustness against interference at the photo-detector array are characterized. The software is programmed to manage both the PIC input and the photo-detector array output, as well as the data analysis. The test is validated by analysing a state-of-the-art 16-port PIC: the waveguide locations, current-versus-power curves, and time-spatial power distribution are measured, as well as the optical continuity of an entire path through the PIC. Complexity, alignment tolerance and measurement time are also discussed.
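
As an illustrative sketch of one data-analysis step such a test bed performs, the snippet below locates a waveguide output from the power profile read across the photo-detector array. The readout is simulated here as a noisy Gaussian spot; a real run would read the detector driver instead.

```python
import numpy as np

# Simulated 16-pixel detector-array readout: a Gaussian spot plus noise.
pixels = np.arange(16)
true_pos, width = 9.3, 1.2
power = np.exp(-((pixels - true_pos) / width) ** 2) + 0.01 * np.random.rand(16)

peak = int(np.argmax(power))                  # coarse waveguide location
w = power[max(0, peak - 2):peak + 3]          # window around the peak
p = pixels[max(0, peak - 2):peak + 3]
centroid = float(np.sum(w * p) / np.sum(w))   # sub-pixel estimate via centroid
print(peak, round(centroid, 2))
```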

Abstract:

In many plant species, the genetic template of early life stages is formed by animal-mediated pollination and seed dispersal and has a profound impact on further recruitment and population dynamics. Understanding the impact of pollination and seed dispersal on genetic patterns is a central issue in plant population biology. In my thesis, I investigated (i) contemporary dispersal and gene-flow distances as well as (ii) genetic diversity and spatial genetic structure (SGS) across subsequent recruitment stages in a population of the animal-pollinated and animal-dispersed tree Prunus africana in Kakamega Forest, western Kenya. Using microsatellite markers and parentage analyses, I inferred distances of pollen dispersal (father-to-mother), seed dispersal/maternal gene flow (mother-to-offspring) and paternal gene flow (father-to-offspring) for four early life stages of the species (seeds and fruits, current-year seedlings, seedlings ≤ 3 yr, seedlings > 3 yr). Distances of pollen and seed dispersal as well as paternal gene flow were significantly shorter than expected from the spatial arrangement of trees and sampling plots, and they were not affected by the density of conspecific trees in the surroundings. At the propagule stage, mean pollen dispersal distances were considerably (23-fold) longer than seed dispersal distances, and paternal gene-flow distances exceeded maternal gene flow by a factor of 25. Seed dispersal distances were remarkably restricted, potentially leading to a strong initial SGS. The initial genetic template created by pollination and seed dispersal was extensively altered during later recruitment stages. Potential Janzen-Connell effects led to markedly increasing distances between offspring and both parental trees in older life stages, showing that distance- and density-dependent mortality factors are not exclusively related to the mother tree but also to the father. Across subsequent recruitment stages, the pollen-to-seed dispersal ratio and the paternal-to-maternal gene flow ratio dropped to 2.1 and 3.4, respectively, in seedlings > 3 yr. The relative changes in effective pollen dispersal, seed dispersal, and paternal gene-flow distances across recruitment stages elucidate the mechanisms affecting the contributions of the two processes, pollen and seed dispersal, to overall gene flow. Using the same six microsatellite loci, I analyzed genetic diversity and SGS across five life stages, from seed rain to adults. Levels of genetic diversity within the studied P. africana population were comparable to those of other Prunus species and did not vary across life stages. In congruence with the short seed dispersal distances, I found significant SGS in all life stages. SGS decreased from the seed and early seedling stages to older juvenile stages, and it was higher in adults than in late juveniles of the next generation. A comparison of the data with direct assessments of contemporary gene-flow patterns indicates that distance- or density-dependent mortality, potentially due to Janzen-Connell effects, led to the initial decrease in SGS. Intergenerational variation in SGS could have been driven by variation in demographic processes, the effect of overlapping generations, and local selection processes. Overall, my study showed that complex sequential processes during recruitment contribute to the spatial genetic structure of tree populations.
It highlights the importance of a multistage perspective for a comprehensive understanding of the impact of animal-mediated pollen and seed dispersal on spatial population dynamics and genetic patterns of trees.

Abstract:

Chemotherapy is a mainstay of cancer treatment. Due to increasing drug resistance and the severe side effects of currently used therapeutics, new candidate compounds are required to improve therapeutic success. Shikonin, a natural naphthoquinone, has been used in traditional Chinese medicine for the treatment of different inflammatory diseases, and recent studies have revealed its anticancer activities. We found that shikonin has strong cytotoxic effects on 15 cancer cell lines, including multidrug-resistant cell lines. Transcriptome-wide mRNA expression studies showed that shikonin induced genetic pathways regulating the cell cycle, mitochondrial function, levels of reactive oxygen species (ROS), and cytoskeletal formation. Taking advantage of the inherent fluorescence of shikonin, we analyzed its uptake and distribution in live cells with high spatial and temporal resolution using flow cytometry and confocal microscopy. Shikonin specifically accumulated in the mitochondria, and this accumulation was associated with a shikonin-dependent deregulation of cellular Ca(2+) and ROS levels. This deregulation led to a breakdown of the mitochondrial membrane potential, dysfunction of microtubules, cell-cycle arrest, and ultimately induction of apoptosis. Given that both the metabolism and the structure of mitochondria differ markedly between cancer cells and normal cells, shikonin is a promising candidate for the next generation of chemotherapy.

Abstract:

A permanent electric dipole moment of the neutron would violate time-reversal as well as parity symmetry. It would thus also violate the combined symmetry of charge conjugation and parity, provided the combination of all three symmetries is a symmetry of nature. The violation of these symmetries could help to explain the observed baryon content of the Universe. The Standard Model of particle physics predicts a neutron electric dipole moment of only about 10^-32 e·cm; at the same time, the combined violation of charge conjugation and parity symmetry in the Standard Model is insufficient to explain the observed baryon asymmetry of the Universe. Several extensions of the Standard Model can explain the observed baryon asymmetry and also predict values of the neutron electric dipole moment just below the current best experimental limit of |d_n| < 2.9×10^-26 e·cm (90% C.L.), obtained by the Sussex-RAL-ILL collaboration in 2006. The very experiment that set this limit has been upgraded and moved to the Paul Scherrer Institute, where an international collaboration is now aiming at increasing the sensitivity to an electric dipole moment by more than an order of magnitude. This thesis took place within the framework of this experiment and accompanied its commissioning up to the first data taking. After a short outline of the theoretical background in chapter 1, the experiment with all its subsystems and their performance is described in detail in chapter 2. To reach the target sensitivity, the control of systematic errors is as important as an increase in statistical sensitivity; known systematic effects are described and evaluated in chapter 3. During about ten days in 2012, a first set of data was taken with the experiment at the Paul Scherrer Institute. An analysis of these data is presented in chapter 4, together with general tools developed for future analysis efforts. The result for the upper limit of the electric dipole moment of the neutron is |d_n| ≤ 6.4×10^-25 e·cm (95% C.L.). Chapter 5 presents investigations for a next-generation experiment into building electrodes made partly of insulating material. Among other advantages, such electrodes would reduce the magnetic noise generated by the thermal movement of charge carriers. The last chapter summarizes this work and gives an outlook.
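
For context, the standard relation (not specific to this thesis' analysis chain) by which such experiments extract d_n from Ramsey-type precession-frequency measurements, with the electric field parallel and then antiparallel to the magnetic field, can be sketched as:

```latex
% Up to sign conventions for \mu_n and d_n:
h\nu_{\uparrow\uparrow} = |2\mu_n B + 2 d_n E|, \qquad
h\nu_{\uparrow\downarrow} = |2\mu_n B - 2 d_n E|
\quad\Longrightarrow\quad
d_n = \frac{h\,(\nu_{\uparrow\uparrow} - \nu_{\uparrow\downarrow})}{4E}.
```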

Abstract:

The development of next-generation microwave technology for backhaul systems is driven by ever-increasing capacity demand. In order to provide higher data rates and throughput over a point-to-point link, a cost-effective performance improvement is enabled by enhanced energy efficiency of the transmit power-amplification stage, whereas the combination of spectrally efficient modulation formats and wider bandwidths must be supported by amplifiers that fulfil strict linearity constraints. An optimal trade-off between these conflicting requirements can be achieved by resorting to flexible digital signal processing techniques at baseband. In such a scenario, adaptive digital pre-distortion is a well-known linearization method and a potentially widespread solution, since it can be easily integrated into base stations. Its operation can effectively compensate for the inter-modulation distortion introduced by the power amplifier, keeping up with the frequency-dependent, time-varying behaviour of its nonlinear characteristic. In particular, the impact of memory effects becomes more relevant, and their equalisation more challenging, as the input discrete signal features a wider bandwidth and a faster envelope to pre-distort. This thesis project involves the research, design and simulation of a pre-distorter implementation at RTL based on a novel polyphase architecture, which makes it capable of operating on very wideband signals at a sampling rate that complies with the clock speeds actually available in current digital devices. The motivation behind this structure is to enable feasible pre-distortion of the multi-band, spectrally efficient complex signals carrying multiple channels that are going to be transmitted in the high-capacity, high-reliability microwave backhaul links of the near future.
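
As a hedged, behavioural-level sketch of what a pre-distorter computes, the snippet below uses the classic memory-polynomial model with indirect-learning identification on a toy power amplifier; the thesis' actual contribution, the polyphase RTL architecture, is not reproduced here, and the model orders and PA are invented.

```python
import numpy as np

# Memory-polynomial basis: phi_{k,m}[n] = x[n-m] * |x[n-m]|^(k-1), odd k only.
K, M = 5, 3                                  # nonlinearity order, memory taps

def basis(x, K, M):
    N = len(x)
    cols = []
    for m in range(M):
        xm = np.concatenate([np.zeros(m, complex), x[:N - m]])  # delayed copy
        for k in range(1, K + 1, 2):
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

def pa(x):                                   # toy PA: odd-order compression + memory
    y = x - 0.15 * x * np.abs(x) ** 2
    return y + 0.05 * np.concatenate([[0], y[:-1]])

rng = np.random.default_rng(1)
x = (rng.standard_normal(4000) + 1j * rng.standard_normal(4000)) * 0.3

# Indirect learning: fit a post-inverse of the PA, then reuse it as pre-distorter.
y = pa(x)
coef, *_ = np.linalg.lstsq(basis(y, K, M), x, rcond=None)
x_pd = basis(x, K, M) @ coef                 # pre-distorted drive signal
err = np.linalg.norm(pa(x_pd) - x) / np.linalg.norm(x)
print(f"relative linearization error: {err:.3f}")
```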

Abstract:

Over the last 10 years, blooms attributable to the benthic dinoflagellate Ostreopsis cf. ovata have increased in frequency and intensity along Mediterranean coasts, with negative repercussions on human health and strong impacts on benthic marine communities, as a consequence of the potent toxins (palytoxin-like compounds) produced by the microalga. Among the ecological factors that trigger or regulate toxic bloom dynamics, the interactions between microalgae and bacteria are an increasingly active subject of research. In this study, the phylogenetic structure of the bacterial community associated with O. cf. ovata was analyzed in batch cultures, and its successional dynamics were evaluated in relation to the different growth phases of the microalga (as well as in relation to the dynamics of viral abundance). The phylogenetic study was carried out using next-generation molecular sequencing methods (Ion Torrent). Bacterial and viral-particle abundances were determined by epifluorescence microscopy; algal cell abundance was estimated with the Utermöhl method. The contribution of the bacterial fraction with high respiratory activity was determined by double staining with the DAPI and CTC dyes. The data show that the bacterial community goes through two distinct growth phases, one more marked and concomitant with the exponential phase of O. cf. ovata, the other when the microalga is in its mid-stationary phase. Regarding the phylogenetic composition of the community, 12 phyla, 17 classes and 150 genera were detected, although the data revealed a strong dominance of the phylum Proteobacteria, with the class Alphaproteobacteria, followed by the phylum Bacteroidetes with the class Sphingobacteria. Variations in the phylogenetic structure of the bacterial community at the genus level across the different growth phases of the microalga made it possible to highlight and hypothesize particular mutualistic and competitive interactions.
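
As a small, hypothetical illustration of the final step of such a community analysis (collapsing classified read counts into relative abundances per growth phase), the sketch below uses invented counts, not the actual Ion Torrent output:

```python
import pandas as pd

# Invented read counts per phylum and algal growth phase.
counts = pd.DataFrame({
    "phase": ["exponential"] * 3 + ["stationary"] * 3,
    "phylum": ["Proteobacteria", "Bacteroidetes", "Other"] * 2,
    "reads": [8200, 1100, 700, 6400, 2300, 1300],
})
# Normalize within each phase to get relative abundances.
rel = (counts.groupby(["phase", "phylum"])["reads"].sum()
             .groupby(level="phase").transform(lambda s: s / s.sum()))
print(rel.round(3))
```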