Abstract:
Background: RNA interference (RNAi) is a post-transcriptional gene silencing process in which double-stranded RNA (dsRNA) directs the degradation of a specific corresponding target mRNA. The mediators of this process are small dsRNAs of approximately 21 to 23 bp in length, called small interfering RNAs (siRNAs), which can be prepared in vitro and used to direct the degradation of specific mRNAs inside cells. Hence, siRNAs represent a powerful tool to study and control gene and cell function. Rapid progress has been made in the use of siRNA as a means to attenuate the expression of any protein for which the cDNA sequence is known. Individual siRNAs can be chemically synthesized, in vitro-transcribed, or expressed in cells from siRNA expression vectors. However, screening for the most efficient siRNAs for post-transcriptional gene silencing in cultured cells is a laborious and expensive process. In this study, the effectiveness of two siRNA production strategies for the attenuation of abundant DNA repair proteins was compared in human cells: (a) the in vitro production of siRNA mixtures by the Dicer enzyme (Diced siRNAs); and (b) the chemical synthesis of very specific and unique siRNA sequences (Stealth RNAi (TM)). Materials, Methods & Results: For in vitro-produced siRNAs, two segments of the human Ku70 (167 bp in exon 5; and 249 bp in exon 13; NM001469) and Xrcc4 (172 bp in exon 2; and 108 bp in exon 6; NM003401) genes were chosen to generate dsRNA for subsequent "Dicing" to create mixtures of siRNAs. The Diced siRNA fragments for each gene sequence were pooled and stored at -80 degrees C. Alternatively, chemically synthesized Stealth siRNAs were designed and generated to match two very specific gene sequence regions for each target gene of interest (Ku70 and Xrcc4). HCT116 cells were plated at 30% confluence in 24- or 6-well culture plates.
The next day, cells were transfected by lipofection with either Diced or Stealth siRNAs against Ku70 or Xrcc4, in duplicate, at various doses, with blank and sham transfections used as controls. Cells were harvested at 0, 24, 48, 72 and 96 h post-transfection for protein determination. The knockdown of specific targeted gene products was quantified by Western blot using GAPDH as a control. Transfection of gene-specific siRNA against either Ku70 or Xrcc4, with both Diced and Stealth siRNAs, resulted in downregulation of the targeted proteins to approximately 10 to 20% of control levels 48 h after transfection, with recovery to pre-treatment levels by 96 h. Discussion: By transfecting cells with Diced or chemically synthesized Stealth siRNAs, Ku70 and Xrcc4, two highly expressed proteins, were effectively attenuated, demonstrating the great potential of both siRNA production strategies as tools for loss-of-function experiments in mammalian cells. In fact, downregulation of Ku70 and Xrcc4 has been shown to reduce the activity of the non-homologous end joining DNA repair pathway, a very desirable approach for the use of homologous recombination technology in gene targeting or knockout studies. Stealth RNAi (TM) was developed to achieve high specificity and greater stability compared with mixtures of enzymatically produced (Diced) siRNA fragments. In this study, both siRNA approaches inhibited the expression of Ku70 and Xrcc4 gene products, with no detectable toxic effects on the cells in culture. However, similar knockdown effects using Diced siRNAs were only attained at concentrations 10-fold higher than with Stealth siRNAs. The application of RNAi technology will continue to expand and to provide new insights into gene regulation, with potential applications in new therapies, transgenic animal production and basic research.
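The quantification described above (target band normalized to the GAPDH control and expressed as a percentage of the sham-transfected control) can be sketched as follows; the densitometry values are hypothetical, chosen only to mirror the reported 10 to 20% residual levels at 48 h:

```python
def percent_of_control(target, gapdh, control_target, control_gapdh):
    """Normalize a target band to its GAPDH loading control and
    express it as a percentage of the sham-transfected control."""
    norm = target / gapdh
    control_norm = control_target / control_gapdh
    return 100.0 * norm / control_norm

# Hypothetical densitometry values (arbitrary units) at 48 h post-transfection
ku70_sham, gapdh_sham = 1000.0, 500.0
ku70_sirna, gapdh_sirna = 180.0, 450.0

remaining = percent_of_control(ku70_sirna, gapdh_sirna, ku70_sham, gapdh_sham)
print(f"Ku70 remaining: {remaining:.0f}% of control")  # → Ku70 remaining: 20% of control
```

Normalizing both the treated and control lanes to GAPDH before taking the ratio corrects for unequal protein loading between lanes.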
Abstract:
Background. Previous knowledge of cervical lymph node involvement may be crucial in choosing the best treatment strategy in oral squamous cell carcinoma (OSCC). Here we propose a set of four genes whose mRNA expression in the primary tumor predicts nodal status in OSCC, excluding tongue. Material and methods. We identified differentially expressed genes in OSCC with and without compromised lymph nodes using Differential Display RT-PCR. Known genes were chosen for validation by means of Northern blotting or real-time RT-PCR (qRT-PCR). Thereafter we constructed a Nodal Index (NI) using discriminant analysis in a learning set of 35 patients, which was further validated in a second, independent group of 20 patients. Results. Of the 63 differentially expressed known genes identified by comparing three lymph node-positive (pN+) and three node-negative (pN0) primary tumors, 23 were analyzed by Northern analysis or RT-PCR in 49 primary tumors. Six genes confirmed as differentially expressed were used to construct the NI as the best set predictive of lymph nodal status, with the final model including four genes. The NI correctly classified 32 of the 35 patients comprising the learning group (88.6%; p = 0.009). Casein kinase 1alpha1 and scavenger receptor class B, member 2 were found to be upregulated in the pN+ group, in contrast to small proline-rich protein 2B and Ras-GTPase activating protein SH3 domain-binding protein 2, which were upregulated in the pN0 group. We further validated our NI in an independent set of 20 primary tumors, 11 of them pN0 and nine pN+, with an accuracy of 80.0% (p = 0.012). Conclusions. The NI was an independent predictor of compromised lymph nodes, taking into consideration tumor size and histological grade. The genes identified here that integrate our "Nodal Index" model are predictive of lymph node metastasis in OSCC.
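The abstract does not give the discriminant coefficients, so the sketch below only illustrates the general technique: a two-class Fisher linear discriminant fitted to synthetic four-gene expression profiles (all values hypothetical), with two genes up in each group to mimic the pattern reported for the Nodal Index:

```python
import numpy as np

def fisher_discriminant(X, y):
    """Two-class Fisher linear discriminant.

    Returns weights w and threshold c; a sample x is assigned to
    class 1 when w @ x > c.
    """
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)
    c = w @ (m0 + m1) / 2.0  # midpoint of the projected class means
    return w, c

# Synthetic 4-gene expression profiles for a 35-patient learning set:
# pN0 tumors (class 0) and pN+ tumors (class 1); values are hypothetical.
rng = np.random.default_rng(42)
pn0 = rng.normal([1.0, 1.0, 5.0, 5.0], 0.5, size=(18, 4))
pnp = rng.normal([5.0, 5.0, 1.0, 1.0], 0.5, size=(17, 4))
X = np.vstack([pn0, pnp])
y = np.array([0] * 18 + [1] * 17)

w, c = fisher_discriminant(X, y)
pred = (X @ w > c).astype(int)
print(f"training accuracy: {(pred == y).mean():.2f}")  # → training accuracy: 1.00
```

A real Nodal Index would of course be fitted to measured qRT-PCR values and validated on an independent set, as the study describes.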
Abstract:
During the last century, great improvements have been made in rectal cancer management regarding preoperative staging, pathologic assessment, surgical technique, and multimodal therapies. Surgically, there was a move from a strategy characterized by simple perineal excision to complex procedures performed by means of a laparoscopic approach, and more recently with the aid of robotic systems. Perhaps the most important advance is that rectal cancer is no longer a fatal disease as it was at the beginning of the 20th century. This achievement is due in part to Ernest Miles' contribution regarding the lymphatic spread of tumor cells, which helped clarify the natural history of the disease and the proper treatment alternatives. He advocated a combined approach with the rationale of clearing "the zone of upward spread." The aim of the present paper is to present a brief review of the evolution of rectal cancer surgery, focusing on Miles' abdominoperineal excision of the rectum (APR) and its controversies and refinements over time. Although APR is currently restricted to a small proportion of patients with low rectal cancer, recent proposals to excise the rectum with a wider perineal and a proper pelvic floor resection have renewed interest in this procedure, confirming that Ernest Miles' original ideas still influence rectal cancer management after more than 100 years.
Abstract:
Vortex-induced motion (VIM) is the term used for vortex-induced vibration (VIV) acting on floating units. The VIM phenomenon can occur in monocolumn production, storage and offloading systems (MPSO) and spar platforms, structures with aspect ratios lower than 4 and unity mass ratio, i.e., structural mass equal to the displaced fluid mass. These platforms can experience motion amplitudes on the order of their characteristic diameters, and therefore the fatigue life of mooring lines and risers can be greatly affected. Two-degrees-of-freedom VIV model tests based on cylinders with low aspect ratio and small mass ratio have been carried out at the recirculating water channel facility available at NDF-EPUSP in order to better understand this hydro-elastic phenomenon. The tests considered three circular cylinders of mass ratio equal to one and different aspect ratios, respectively L/D = 1.0, 1.7, and 2.0, as well as a fourth cylinder of mass ratio equal to 2.62 and aspect ratio of 2.0. The Reynolds number covered the range from 10 000 to 50 000, corresponding to reduced velocities from 1 to approximately 12. The results of amplitude and frequency in the transverse and in-line directions were analyzed by means of the Hilbert-Huang transform method (HHT) and then compared to those obtained from works found in the literature. The comparisons showed similar maximum amplitudes for all aspect ratios and small mass ratio, featuring a decrease as the aspect ratio decreases. Moreover, some changes in the Strouhal number were indirectly observed as a consequence of the decrease in the aspect ratio. In conclusion, the comparison of results from small-scale platforms with those from bare cylinders, all with low aspect ratio and small mass ratio, shows that laboratory experiments may well be used in practical investigations, including those concerning the VIM phenomenon acting on platforms. [DOI: 10.1115/1.4006755]
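The two dimensionless parameters quoted above are straightforward to compute. A minimal sketch, with a hypothetical model-scale diameter and natural frequency chosen so that the outputs span the reported ranges (Re from 10 000 to 50 000, reduced velocities from about 1 to 12):

```python
def reynolds_number(U, D, nu=1.0e-6):
    """Re = U * D / nu; nu defaults to water at roughly 20 C, in m^2/s."""
    return U * D / nu

def reduced_velocity(U, fn, D):
    """Vr = U / (fn * D): flow speed per natural frequency and diameter."""
    return U / (fn * D)

# Hypothetical model-scale values (not from the paper)
D = 0.125   # cylinder diameter [m]
fn = 0.35   # natural frequency in still water [Hz]
for U in (0.08, 0.4):  # towing/flow speeds [m/s]
    print(f"U={U} m/s  Re={reynolds_number(U, D):.0f}  "
          f"Vr={reduced_velocity(U, fn, D):.1f}")
```

With these assumed values, U = 0.08 m/s gives Re = 10 000 and Vr near 1.8, while U = 0.4 m/s gives Re = 50 000 and Vr near 9, i.e., the same order as the test matrix described in the abstract.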
Abstract:
This study reports on the successful use of magnetic albumin nanospheres (MAN), consisting of maghemite nanoparticles hosted by albumin-based nanospheres, to target different sites within the central nervous system (CNS). Ultrastructural analysis by transmission electron microscopy (TEM) of material collected from the mice was performed in a time window from 30 minutes up to 30 days after administration. The evidence indicates that the administered MAN was initially internalized and transported by erythrocytes across the blood-brain barrier (BBB) and transferred to glial cells and neuropils before internalization by neurons, mainly in the cerebellum. We hypothesize that the efficiency of MAN in crossing the BBB with no pathological alterations is due to the synergistic effect of its two main components, the iron-based nanosized particles and the hosting albumin-based nanospheres. The success of MAN in targeting the CNS represents an important step towards the design of nanosized materials for clinical and diagnostic applications.
Abstract:
Composites formed of a polymer-embedded layer of sub-10 nm gold nanoclusters were fabricated by very low energy (49 eV) gold ion implantation into polymethylmethacrylate. We used small angle x-ray scattering to investigate the structural properties of these metal-polymer composite layers that were fabricated at three different ion doses, both in their original form (as-implanted) and after annealing for 6 h well above the polymer glass transition temperature (150 degrees C). We show that annealing provides a simple means for modification of the structure of the composite by coarsening mechanisms, and thereby changes its properties. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4720464]
Abstract:
The seminal work of Horn and Schunck [8] is the first variational method for optical flow estimation. It introduced a novel framework where the optical flow is computed as the solution of a minimization problem. From the assumption that pixel intensities do not change over time, the optical flow constraint equation is derived. This equation relates the optical flow with the derivatives of the image. There are infinitely many vector fields that satisfy the optical flow constraint, thus the problem is ill-posed. To overcome this problem, Horn and Schunck introduced an additional regularity condition that restricts the possible solutions. Their method minimizes both the optical flow constraint and the magnitude of the variations of the flow field, producing smooth vector fields. One of the limitations of this method is that, typically, it can only estimate small motions. In the presence of large displacements, this method fails when the gradient of the image is not smooth enough. In this work, we describe an implementation of the original Horn and Schunck method and also introduce a multi-scale strategy in order to deal with larger displacements. For this multi-scale strategy, we create a pyramidal structure of downsampled images and replace the optical flow constraint equation with a nonlinear formulation. In order to tackle this nonlinear formula, we linearize it and solve the method iteratively at each scale. In this sense, there are two common approaches: one that computes the motion increment in the iterations, and the one we follow, which computes the full flow during the iterations. The solutions are incrementally refined over the scales. This pyramidal structure is a standard tool in many optical flow methods.
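The single-scale scheme described above can be sketched in a few lines of Python. This is a minimal illustration under assumed parameter values, not the authors' implementation, and it omits the multi-scale pyramid:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Single-scale Horn-Schunck: minimize the optical flow constraint
    plus alpha^2 times a smoothness term, via Jacobi-style iterations."""
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    # Image derivatives (simple finite differences)
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def avg(f):
        # 4-neighbour average with replicated borders
        p = np.pad(f, 1, mode='edge')
        return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0

    denom = alpha**2 + Ix**2 + Iy**2
    for _ in range(n_iter):
        ubar, vbar = avg(u), avg(v)
        t = (Ix * ubar + Iy * vbar + It) / denom
        u = ubar - Ix * t
        v = vbar - Iy * t
    return u, v

# Demo: a small Gaussian blob translated one pixel to the right
y, x = np.mgrid[0:32, 0:32]
I1 = np.exp(-((x - 15)**2 + (y - 15)**2) / 18.0)
I2 = np.exp(-((x - 16)**2 + (y - 15)**2) / 18.0)
u, v = horn_schunck(I1, I2, alpha=0.5, n_iter=200)
print(f"mean u near the blob: {u[12:19, 12:19].mean():.2f}")  # positive: rightward flow
```

With a strong regularization weight the recovered flow underestimates the true one-pixel shift, which is the usual trade-off between the data term and the smoothness term in this model.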
Abstract:
Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a “revision” of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high-energy particles via shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM.
The physics involved in this scenario is very complex and model details are difficult to test; however, this model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, which is due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions: • Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters? • How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency? • How many Radio Halos are expected to form in the Universe?
At which redshift is the bulk of these sources expected? • Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters? • Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations? Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For these reasons we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are of interest for achieving our goals. In Chapt. 1 we discuss the physics of galaxy clusters, and in particular the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8.
In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that, during a merger, a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halo) and inverse Compton (IC, hard X-ray) emission of merging clusters are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the most massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift.
The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power ~10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ~100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency and allows future radio surveys to be designed. Based on the results of Chapt. 6, in Chapt. 7 we present work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ~ 0.05-0.2) with our ongoing GMRT Radio Halo Pointed Observations of 50 X-ray luminous galaxy clusters (at z ~ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ~ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an “average” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is however well beyond the aim of this PhD thesis.
On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last “geometrical” MH − RH correlation allows us to “observationally” overcome the limitation of the “average” size of Radio Halos. Thus in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (PR − RH, PR − MH, PR − T, PR − LX, ...) now become well understood in the context of the re-acceleration model. In addition, we find that, observationally, the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio-emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
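The spectral cut-off argument running through this abstract can be illustrated with the standard synchrotron critical-frequency estimate (a textbook relation, not a result of this work); the Lorentz factor and field strength below are purely illustrative:

```python
def synchrotron_critical_freq(gamma, B_uG):
    """Characteristic synchrotron frequency nu_c ~ (3/2) * gamma^2 * nu_g,
    where nu_g = e B / (2 pi m_e c) ~ 2.8 MHz per Gauss is the electron
    gyrofrequency; pitch angle taken as 90 degrees. Returns MHz."""
    nu_g_MHz = 2.8 * B_uG * 1e-6   # gyrofrequency in MHz, for B in microGauss
    return 1.5 * gamma**2 * nu_g_MHz

# Electrons with gamma ~ 1e4 in a ~1 microGauss field emit near 420 MHz,
# so a cut-off in the electron spectrum at some maximum gamma maps directly
# onto a cut-off in the observable radio frequency.
print(f"{synchrotron_critical_freq(1.0e4, 1.0):.0f} MHz")  # → 420 MHz
```

This is why, as argued above, high-frequency surveys select only the most efficiently accelerating systems, while low-frequency instruments such as LOFAR should detect halos with lower spectral cut-offs.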
Abstract:
In a global and increasingly competitive fresh produce market, more attention is being given to fruit quality traits and consumer satisfaction. Kiwifruit occupies a niche position in the worldwide market when compared to apples, oranges or bananas. It is a fruit with extraordinarily good nutritional traits, and its benefits to human health have been widely described. Until recently, international trade in kiwifruit was restricted to a single cultivar, but different types of kiwifruit are now becoming available in the market. Effective programmes of kiwifruit improvement start by considering the requirements of consumers, and recent surveys indicate that sweeter fruit with better flavour are generally preferred. There is a strong correlation between dry matter and starch content at harvest, on the one hand, and soluble solids concentration and flavour when fruit are eating-ripe, on the other. This suggests that carbon accumulation strongly influences the development of kiwifruit taste. The overall aim of the present study was to determine what factors affect carbon accumulation during Actinidia deliciosa berry development. One way of doing this is by comparing kiwifruit genotypes that differ greatly in their ability to accumulate dry matter in their fruit. Starch is the major component of dry matter content. It was hypothesized that the genotypes differed in sink strength. Sink strength, by definition, is the combined effect of sink size and sink activity. Chapter 1 reviews fruit growth, kiwifruit growth and development, and carbon metabolism. Chapter 2 describes the materials and methods used. Chapters 3, 4, 5 and 6 describe different types of experimental work. Chapter 7 contains the final discussion and the conclusions. Three Actinidia deliciosa breeding populations were analysed in detail to confirm that observed differences in dry matter content were genetically determined.
Fruit of the different genotypes differed in dry matter content mainly because of differences in starch concentrations and dry weight accumulation rates, irrespective of fruit size. More detailed experiments were therefore carried out on the genotypes which varied most in fruit starch concentrations, to determine why sink strengths were so different. The kiwifruit berry comprises three tissues which differ in dry matter content. It was initially hypothesised that the observed differences in starch content could be due to a larger proportion of one or other of these tissues, for example of the central core, which is highest in dry matter content. The study results showed that this was not the case. Sink size, understood as cell number or cell size, was then investigated. The outer pericarp makes up about 60% of berry weight in ‘Hayward’ kiwifruit. The outer pericarp contains two types of parenchyma cells: large cells with low starch concentration, and small cells with high starch concentration. Large-cell, small-cell and total cell densities in the outer pericarp were shown not to be correlated with either dry matter content or fruit size, but further investigation of the volume proportions of the cell types seemed justified. It was then shown that genotypes with fruit having higher dry matter contents also had a higher proportion of small cells. However, the higher proportion of small-cell volume could only explain half of the observed differences in starch content. So sink activity, understood as sucrose-to-starch metabolism, was investigated. In transiently starch-storing sinks, such as tomato fruit and potato tubers, a pivotal role in carbon metabolism has been attributed to sucrose-cleaving enzymes (mainly sucrose synthase and cell wall invertase) and to ADP-glucose pyrophosphorylase (the committed step in starch synthesis).
Studies on tomato and potato genotypes differing in starch content or in final fruit soluble solids concentrations have demonstrated a strong link with either sucrose synthase or ADP-glucose pyrophosphorylase, at both the enzyme activity and gene expression levels, depending on the case. Little is known about sucrose-cleaving enzyme and ADP-glucose pyrophosphorylase isoforms. The HortResearch Actinidia EST database was therefore screened to identify sequences putatively encoding sucrose synthase, invertase and ADP-glucose pyrophosphorylase isoforms, and specific primers were designed. Sucrose synthase, invertase and ADP-glucose pyrophosphorylase isoform transcript levels were analyzed throughout fruit development in a selection of four genotypes (two high dry matter and two low dry matter). High dry matter genotypes showed higher amounts of sucrose synthase transcripts (SUS1, SUS2 or both) and higher ADP-glucose pyrophosphorylase (AGPL4, large subunit 4) gene expression, mainly early in fruit development. SUS1-like gene expression has been linked with starch biosynthesis in several crops (tomato, potato and maize). An enhancement of its transcript level early in fruit development of high dry matter genotypes means that more activated glucose (UDP-glucose) is available for starch synthesis. This can then be correlated with the higher starch content observed from soon after the onset of net starch accumulation. The higher expression level of AGPL4 observed in high dry matter genotypes suggests an involvement of this subunit in driving carbon flux into starch. Changes in both enzymes (SUSY and AGPase) are then responsible for the higher starch concentrations. Low dry matter genotypes generally showed higher vacuolar invertase gene expression (and also enzyme activity) early in fruit development.
This alternative cleavage strategy can possibly contribute to energy loss, in that invertase products are not adenylated, and further reactions and transport are needed to convert carbon into starch. Although these elements match well with the observed differences in starch contents, other factors could be involved in the control of carbon metabolism. From the microarray experiment, in fact, several kinases and transcription factors were found to be differentially expressed. Sink strength is known to be modified by the application of growth regulators. In ‘Hayward’ kiwifruit, the synthetic cytokinin CPPU (N-(2-chloro-4-pyridyl)-N'-phenylurea) promotes a dramatic increase in fruit size, whereas dry matter content decreases. The behaviour of CPPU-treated ‘Hayward’ kiwifruit was similar to that of fruit from low dry matter genotypes: dry matter and starch concentrations were lower. However, the CPPU effect was strongly source-limited, whereas the genotypic variation was not. Moreover, gene expression in CPPU-treated fruit (at the sucrose cleavage and AGPase levels) was similar to that in high dry matter genotypes. It was therefore concluded that CPPU promotes both sink size and sink activity, but at different “speeds”, and this results in the observed decrease in dry matter content and starch concentration. The lower “speed” of sink activity is probably due to a differential partitioning of activated glucose between starch storage and cell wall synthesis to sustain cell expansion. Starch is the main carbohydrate accumulated in growing Actinidia deliciosa fruit. The results obtained in the present study suggest that the sucrose synthase and AGPase enzymes contribute to sucrose-to-starch conversion, and that differences in their gene expression levels, mainly early in fruit development, strongly affect the rate at which starch is accumulated. These results are interesting in that starch and Actinidia deliciosa fruit quality are tightly connected.
Abstract:
It is well known that theories of the firm have evolved along a path paved by an increasing awareness of the importance of organizational structure. They range from the early “neoclassical” conceptualizations, which viewed the firm as a rational actor whose aim is to produce the amount of output that maximizes revenue, given the inputs at its disposal and in accordance with technological or environmental constraints (see Boulding, 1942 for a past mid-century state-of-the-art discussion), to the knowledge-based theory of the firm (Nonaka & Takeuchi, 1995; Nonaka & Toyama, 2005), which recognizes in the firm a knowledge-creating entity with specific organizational capabilities (Teece, 1996; Teece & Pisano, 1998) that allow it to sustain competitive advantages. Tracing a map of the evolution of the theory of the firm, taking into account the several perspectives adopted in the history of thought, would take the length of many books. A more fruitful strategy is therefore to circumscribe the description of the literature's evolution to one stream connected to a crucial question about the nature of the firm's behaviour and about the determinants of competitive advantages. In so doing I adopt a perspective that allows me to consider the organizational structure of the firm as an element according to which the different theories can be discriminated. The approach adopted starts by considering the drawbacks of the standard neoclassical theory of the firm. Discussing the most influential theoretical approaches, I end up with a close examination of the knowledge-based perspective of the firm. Within this perspective the firm is considered a knowledge-creating entity that produces and manages knowledge (Nonaka, Toyama, & Nagata, 2000; Nonaka & Toyama, 2005). In a knowledge-intensive organization, knowledge is embedded for the most part in the human capital of the individuals that compose the organization.
In a knowledge-based organization, the management, in order to cope with knowledge-intensive production, ought to develop and accumulate capabilities that shape organizational forms in a way that relies on “cross-functional processes, extensive delayering and empowerment” (Foss 2005, p.12). This mechanism contributes to determining the absorptive capacity of the firm towards specific technologies and, in so doing, also shapes the technological trajectories along which the firm moves. Having recognized the growing importance of the firm’s organizational structure in the theoretical literature on the theory of the firm, the next step of the analysis is to provide an overview of the changes that have occurred at the micro level in the firm’s organization of production. Economic actors have to deal with the challenges posed by internationalisation and globalization, the increasing competitive pressure of less developed countries on low value added production activities, changes in technologies, and increased environmental turbulence and volatility. As a consequence, it has been widely recognized that the main organizational models of production that fitted the 20th century well are now partially inadequate, and processes aimed at reorganizing production activities have spread across several economies in recent years. The emergence of a “new” form of production organization has recently been proposed by scholars, practitioners and institutions alike: the most prominent characteristic of this model is its recognition of the importance of employee commitment and involvement. It is therefore characterized by a strong accent on human resource management and on practices that aim to widen the autonomy and responsibility of workers as well as to increase their commitment to the organization (Osterman, 1994; 2000; Lynch, 2007). 
This “model” of production organization is often referred to as a High Performance Work System (HPWS). Despite the increasing diffusion, in western companies, of workplace practices that may be inscribed within the concept of HPWS, it is to some extent hazardous to speak of the emergence of a “new organizational paradigm”. A discussion of organizational changes and the diffusion of HPWP cannot abstract from the industrial relations system, with a particular accent on employment relationships, because of their relevance, alongside production organization, in determining two major outcomes of the firm: innovation and economic performance. The argument is developed starting from the issue of Social Dialogue at the macro level, from both a European and an Italian perspective. The model of interaction between the social partners has repercussions, at the micro level, on employment relationships, that is, on the relations between union delegates and management, or between workers and management. Finding economic and social policies capable of sustaining growth and employment within a knowledge-based scenario is likely to constitute the major challenge for the next generation of social pacts, which are the main outcomes of social dialogue. As Acocella and Leoni (2007) argue, social pacts may constitute an instrument to trade wage moderation for high intensity of ICT, organizational and human capital investments. Empirical evidence, especially at the micro level, of a positive relation between economic growth and new organizational designs coupled with ICT adoption and non-adversarial industrial relations is growing. Partnership among the social partners may become an instrument to enhance firm competitiveness. The outcome of the discussion is the integration of organizational change and industrial relations elements within a unified framework: the HPWS. 
Such a choice may help in disentangling the potential complementarities between these two aspects of the firm’s internal structure in their effect on economic and innovative performance. The third chapter begins the more original part of the thesis. The data used to disentangle the relations between HPWS practices, innovation and economic performance refer to manufacturing firms of the Reggio Emilia province with more than 50 employees. The data were collected through face-to-face interviews with both management (199 respondents) and union representatives (181 respondents). The cross-section datasets are complemented by a further data source: longitudinal balance sheets (1994-2004). Collecting reliable data that in turn produce reliable results always requires great effort, with uncertain returns. Micro-level data are often subject to a trade-off: the wider the geographical context to which the surveyed population belongs, the smaller the amount of information usually collected (low resolution); the narrower the focus on a specific geographical context, the greater the amount of information usually collected (high resolution). For the Italian case, the evidence on the diffusion of HPWP and their effects on firm performance is still scanty and usually limited to local-level studies (Cristini et al., 2003). The thesis also deepens an argument of particular interest: the existence of complementarities between HPWS practices. Empirical evidence has widely shown that when HPWP are adopted in bundles they are more likely to affect firm performance than when adopted in isolation (Ichniowski, Prennushi, Shaw, 1997). Does this also hold for the local production system of Reggio Emilia? The empirical analysis has the precise aim of providing evidence on the relations between the HPWS dimensions and the innovative and economic performance of the firm. 
As far as the first line of analysis is concerned, the fundamental role that innovation plays in the economy must be stressed (Geroski & Machin, 1993; Stoneman & Kwoon 1994, 1996; OECD, 2005; EC, 2002). On this point the evidence ranges from traditional innovation, usually approximated by R&D investment expenditure or the number of patents, to the introduction and adoption of ICT in recent years (Brynjolfsson & Hitt, 2000). If innovation is important, then it is critical to analyse its determinants. In this work it is hypothesised that organizational changes and firm-level industrial relations/employment relations aspects that can be put under the heading of HPWS influence the firm’s propensity to innovate in product, process and quality. The general argument goes as follows: changes in production management and work organization reconfigure the absorptive capacity of the firm towards specific technologies and, in so doing, shape the technological trajectories along which the firm moves; cooperative industrial relations may lead to smoother adoption of innovations, because they are not opposed by unions. The first empirical chapter shows that the different types of innovation seem to respond in different ways to the HPWS variables. The processes underlying product, process and quality innovation are likely to answer to different firm strategies and needs. Nevertheless, it is possible to extract some general results in terms of the HPWS factors that most influence innovative performance. The three main aspects are training coverage, employee involvement and the diffusion of bonuses. These variables show persistent and significant relations with all three innovation types, as do the composite components that contain them. In sum, aspects of the HPWS influence the firm’s propensity to innovate. 
At the same time, fairly clear (although not always strong) evidence emerges of complementarities between HPWS practices. On the complementarity issue, it can be said that some specific complementarities exist. Training activities, when adopted and managed in bundles, are related to the propensity to innovate. Having a sound skill base may enhance the firm’s capacity to innovate, both its capacity to absorb exogenous innovation and its capacity to develop innovations endogenously. The presence and diffusion of bonuses and employee involvement also spur innovative propensity: the former because of their incentive nature, and the latter because direct worker participation may increase workers’ commitment to the organization and thus their willingness to support and suggest innovations. The other line of analysis provides results on the relation between HPWS and the economic performance of the firm. There has been a wealth of international empirical studies on the relation between organizational change and economic performance (Black & Lynch 2001; Zwick 2004; Janod & Saint-Martin 2004; Huselid 1995; Huselid & Becker 1996; Cappelli & Neumark 2001), while works aiming to capture the relations between economic performance and unions or industrial relations aspects are quite scant (Addison & Belfield, 2001; Pencavel, 2003; Machin & Stewart, 1990; Addison, 2005). In the empirical analysis, the integration of the two main areas of the HPWS represents a scarcely exploited approach in the panorama of both national and international empirical studies. As Addison remarks, “although most analysis of workers representation and employee involvement/high performance work practices have been conducted in isolation – while sometimes including the other as controls – research is beginning to consider their interactions” (Addison, 2005, p.407). 
The analysis, conducted by exploiting temporal lags between the dependent variable and the covariates, a possibility afforded by the merger of cross-section and panel data, provides evidence that HPWS practices affect the firm’s economic performance, variously measured. Although robust evidence of complementarities among HPWS aspects on performance does not seem to emerge, there is evidence of a generally positive influence of the individual practices. The results are quite sensitive to the time lags, suggesting that time-varying heterogeneity is an important factor in determining the impact of organizational changes on economic performance. The implications of the analysis can help both management and local-level policy makers. Although the results cannot simply be extended to other local production systems, it may be argued that the results and implications obtained here would also fit contexts similar to the Reggio Emilia province, characterized by the presence of small and medium enterprises organized in districts and by a deeply rooted unionism with strong supporting institutions. However, a hope for future research on the subject treated in the present work is to collect good-quality information over wider geographical areas, possibly at the national level, and repeated over time. Only in this way will it be possible to cut the Gordian knot of the linkages between innovation, performance, high performance work practices and industrial relations.
Resumo:
In this thesis we disclose the results obtained from diastereoisomeric salt formation (n salt, p salt and p1,n1 salt) between non-racemic trans-chrysanthemic acid (trans-ChA) and the pure enantiomers of threo-2-dimethylamino-1-phenyl-1,3-propanediol (DMPP). The occurrence of p1,n1 salt formation can have profound effects on the enantiomer separation of scalemic (non-racemic) mixtures. This phenomenon, when accompanied by substrate self-association, impedes the complete recovery of the major enantiomer through formation of an inescapable racemate cage. A synthetic sequence for the asymmetric synthesis of bicyclo[3.2.0]heptanones and bicyclo[3.2.0]hept-3-en-6-ones through a cycloaddition strategy is reported. The fundamental step is a [2+2]-cycloaddition of an enantiopure amide derived from the reaction between a set of acids and an oxazolidinone as the chiral auxiliary. The inter- and intramolecular cycloaddition of in situ-generated keteniminium salts gives bicycles with good enantioselectivity. A key intermediate of Iloprost, a chemically stable and biologically active mimic of prostacyclin PGI2, is synthesized following a ‘green’ approach. An example of a simple optical resolution of this racemic intermediate involving diastereoisomeric salt formation is described.
Resumo:
In 1995, the European Union (EU) Member States and 12 Mediterranean countries launched in Barcelona a liberalization process aimed at establishing a free trade area (to be realized by 2010) and at promoting sustainable and balanced economic development through the adoption of a new generation of agreements: the Euro-Mediterranean Agreements (EMA). For the Mediterranean partner countries, the main concern is better access for their fruit and vegetable exports to the European market. These products represent the main exports of these countries, and the EU is their first trading partner. On the other side, the main issue for the EU is not only the promotion of its products, but also the protection of its fruit and vegetable producers. Moreover, trade with third countries is the key element of the Common Market Organization of the sector. Fruit and vegetables represent a very sensitive sector, owing to their high seasonality and perishability, and especially because the production of the Mediterranean countries is often similar to that of the EU’s own Mediterranean countries. In fact, the agreements define preferences at the entrance of the EU market that provide limited concessions for each partner, for specific products, limited quantities and calendars. This research analyzes the bilateral trade volume of fresh fruit and vegetables in the European and Italian markets in order to assess the effects of Mediterranean liberalization on this sector. Free trade in agricultural products is a highly topical issue in international trade, and the Mediterranean countries, recognised as big producers of fruit and vegetables, big exporters of their crops and already significantly present on the European market, could be strong competitors for domestic production, because the outlets could be the same. 
The goal of this study is to offer some considerations on the competitiveness of Mediterranean fruit and vegetable production after the Barcelona Process, first for the European market and then for the Italian one. The aim is to discuss the influence of the Euro-Mediterranean agreements on fruit and vegetable trade between 10 foreign Mediterranean countries (Algeria, Egypt, Israel, Jordan, Libya, Lebanon, Morocco, Tunisia, Syria, and Turkey) and 15 EU countries in the period 1995-2007, by means of a gravity model, a widespread methodology in international trade analysis. The basic idea of gravity models is that bilateral trade from one country to another (the dependent variable) can be explained by a set of factors: factors that capture the potential of a country to export goods and services; factors that capture the propensity of a country to import goods and services; and any other forces that either attract or inhibit bilateral trade. This analysis compares only the import flows (in volume) of Europe and of Italy from the Mediterranean countries, since the export flows toward those countries are not significant, especially for Italy. The fruit and vegetable market is a highly heterogeneous group, so it is very difficult to present a synthesis of the analyses performed and the related results. In fact, this sector includes the so-called “poor products” (such as potatoes and legumes) and the “rich products”, such as nuts or exotic fruit, and there are many different goods that arouse dissimilar consumer demand, which directly influences import requirements. The fruit and vegetable sector includes products with extremely different biological cycles, leading to very different seasonality. Moreover, the Mediterranean area is a highly heterogeneous bloc, including countries that differ from one another in economic size, production potential, export capability and relationships with the EU. 
The econometric estimation includes 68 analyses, 34 considering European imports and 34 Italian imports, with products examined at both the aggregated and the disaggregated level. The analysis obtains a very high R2 coefficient, which means that the methodology is able to assess the effects on fruit and vegetable imports associated with the Association Agreements, preferential tariffs, regional integration, and the other information included in the equation. The empirical analysis suggests that fruit and vegetable trade flows are well explained by several parameters: the size of the countries involved (especially the GDP and population of the Mediterranean countries); distance; the prices of imported products; local production for the aggregated products; explicit preferential tariffs such as duty-free access; and sub-regional agreements that strengthen export capability. The Euro-Mediterranean agreements are significant in some of the analyses performed, confirming the slow and gradual evolution of Euro-Mediterranean liberalization. Euro-Mediterranean liberalization provides opportunities on one side and imposes an important new challenge on the other. For the EU, the opportunity is that fruit and vegetables imported from the Mediterranean area can support local supply and widen the range of products on the market. The challenge regards the competition of foreign products with local ones, since the types of production are similar and the markets coincide, especially in the Italian case. A strategy is needed that is based not on trade antagonism, but on the realization of a common market plan with the Mediterranean countries. 
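The log-linear gravity specification sketched above can be illustrated with a short estimation example. The variable names, coefficients and synthetic data below are illustrative assumptions for the sketch, not the thesis's actual dataset or estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical country-pair-year observations

# Synthetic covariates (illustrative only)
log_gdp = rng.normal(10, 1, n)       # exporter GDP (log)
log_pop = rng.normal(16, 1, n)       # exporter population (log)
log_dist = rng.normal(7, 0.5, n)     # bilateral distance (log)
agreement = rng.integers(0, 2, n)    # Euro-Mediterranean agreement dummy

# "True" coefficients used to generate the dependent variable
beta = np.array([1.0, 0.8, 0.3, -1.1, 0.4])
X = np.column_stack([np.ones(n), log_gdp, log_pop, log_dist, agreement])
log_imports = X @ beta + rng.normal(0, 0.1, n)

# OLS estimation of the gravity equation:
# log(M_ij) = b0 + b1*log(GDP_i) + b2*log(POP_i) + b3*log(DIST_ij) + b4*EMA_ij + e
beta_hat, *_ = np.linalg.lstsq(X, log_imports, rcond=None)
resid = log_imports - X @ beta_hat
r2 = 1 - np.sum(resid**2) / np.sum((log_imports - log_imports.mean())**2)
```

With well-behaved data the distance elasticity comes out negative and the agreement dummy positive, mirroring the qualitative pattern the thesis reports for significant Euro-Mediterranean agreement effects.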
This goal could be achieved by enhancing industrial cooperation in addition to commercial relationships, increasing investment flows into the Mediterranean countries so as to transform those countries from potential competitors into trade partners, and creating new commercial policies for exporting towards extra-European countries.
Resumo:
The Eduardo Mondlane irrigation scheme, situated in Chókwè District - in the southern part of Gaza province and within the Limpopo River Basin - is the largest in the country, covering approximately 30,000 hectares of land. Built by the Portuguese colonial administration in the 1950s to exploit the agricultural potential of the area through cash-cropping, after Independence it became one of Frelimo’s flagship projects aiming at the “socialization of the countryside” and at agricultural economic development through the creation of a state farm and of several cooperatives. The failure of Frelimo’s economic reforms, several infrastructural constraints and local farmers’ resistance to collective forms of production led the scheme to a state of severe degradation, aggravated by the floods of the year 2000. A project of technical rehabilitation initiated after the floods is currently accompanied by a strong “efficiency” discourse from the managing institution, which strongly opposes the use of irrigated land for subsistence agriculture, historically a major livelihood strategy for smallholder farmers, particularly for women. In fact, since the end of the 19th century the area has been characterized by a stable pattern of male migration towards South African mines, which has resulted in a steady increase of women-headed households (both de jure and de facto). The relationship between land reform, agricultural development, poverty alleviation and gender equality in Southern Africa has long been debated in the academic literature. Within this debate, the role of agricultural activities in irrigation schemes is particularly interesting considering that, in a drought-prone area, having access to water for irrigation means increased possibilities of improving food and livelihood security, and income levels. 
In the case of Chókwè, local government institutions are endorsing the development of commercial agriculture through initiatives such as partnerships with international cooperation agencies or joint ventures with private investors. While these business models can sometimes lead to positive outcomes in terms of poverty alleviation, it is important to recognize that decentralization and neoliberal reforms occur in the context of a financial and political crisis of the State, which lacks the resources to efficiently manage infrastructures such as irrigation systems. These kinds of institutional and economic reforms risk accelerating processes of social and economic marginalisation, including landlessness, in particular for poor rural women, who mainly use irrigated land for subsistence production. The study combines an analysis of the historical and geographical context with a review of the relevant literature and original fieldwork. Fieldwork was conducted between February and June 2007 (when I mainly collected secondary data, maps and statistics and paid a preliminary visit to Chókwè) and from October 2007 to March 2008. The fieldwork methodology was qualitative and used semi-structured interviews with central and local government officials, technical experts of the irrigation scheme, civil society organisations, international NGOs, rural extensionists, and water users of the irrigation scheme, in particular women smallholder farmers who are members of local farmers’ associations. Thanks to the collaboration with the Union of Farmers’ Associations of Chókwè, I was able to participate in members’ meetings and in education and training activities addressed to women farmer members of the Union, and to organize a group discussion. In the Chókwè irrigation scheme, women account for 32% of water users in the family sector (comprising plot-holders with less than 5 hectares of land) and for just 5% in the private sector. 
If one considers the farmers’ associations of the family sector (a legacy of Frelimo’s cooperatives), women are 84% of total members. However, the security given to them by the land title they have acquired through occupation is severely endangered by the use they make of the land, which is considered “non-efficient” by the irrigation scheme authority. Due to reduced access to marketing opportunities and to inputs, training, information and credit, women in actual fact risk seeing their right of access to land and water revoked because they are not able to sustain the increasing cost of the water fee. The myth of the “efficient producer” does not take into consideration the inequality and gender discrimination characteristic of the neo-liberal market. Expecting small farmers, and in particular women, to be able to compete in the globalized agricultural market seems unrealistic, and can perpetuate unequal gendered access to resources such as land and water.
Resumo:
Synthetic Biology is a relatively new discipline, born at the beginning of the new millennium, that brings the typical engineering approach (abstraction, modularity and standardization) to biotechnology. These principles aim to tame the extreme complexity of the various components and aid the construction of artificial biological systems with specific functions, usually by means of synthetic genetic circuits implemented in bacteria or in simple eukaryotes like yeast. The cell becomes a programmable machine whose low-level programming language is made of strings of DNA. This work was performed in collaboration with researchers of the Department of Electrical Engineering of the University of Washington in Seattle and with a student of the Corso di Laurea Magistrale in Ingegneria Biomedica at the University of Bologna, Marilisa Cortesi. During the collaboration I contributed to a Synthetic Biology project already started in the Klavins Laboratory. In particular, I modeled and subsequently simulated a synthetic genetic circuit designed to implement a multicelled behavior in a growing bacterial microcolony. The first chapter introduces the foundations of molecular biology: the structure of the nucleic acids, transcription, translation and methods to regulate gene expression. An introduction to Synthetic Biology completes the section. The second chapter describes the synthetic genetic circuit conceived to make two different groups of cells, termed leaders and followers, emerge spontaneously from an isogenic microcolony of bacteria. The circuit exploits the intrinsic stochasticity of gene expression and intercellular communication via small molecules to break the symmetry in the phenotype of the microcolony. The four modules of the circuit (coin flipper, sender, receiver and follower) and their interactions are then illustrated. 
The third chapter derives the mathematical representation of the various components of the circuit and makes the several simplifying assumptions explicit. Transcription and translation are modeled as a single step, and gene expression is a function of the intracellular concentration of the various transcription factors that act on the different promoters of the circuit. A list of the various parameters and a justification of their values closes the chapter. The fourth chapter describes the main characteristics of the gro simulation environment, developed by the Self Organizing Systems Laboratory of the University of Washington. A sensitivity analysis performed to pinpoint the desirable characteristics of the various genetic components is then detailed. The sensitivity analysis makes use of a cost function based on the fraction of cells in each of the possible states at the end of the simulation and on the desired outcome. Thanks to a particular kind of scatter plot, the parameters are ranked. Starting from an initial condition in which all the parameters assume their nominal values, the ranking suggests which parameter to tune in order to reach the goal. Obtaining a microcolony in which almost all the cells are in the follower state and only a few in the leader state seems to be the most difficult task: a small number of leader cells struggle to produce enough signal to turn the rest of the microcolony to the follower state. It is possible to obtain a microcolony in which the majority of cells are followers by increasing the production of signal as much as possible. Reaching the goal of a microcolony split in half between leaders and followers is comparatively easy; the best strategy seems to be a slight increase in the production of the enzyme. To end up with a majority of leaders, instead, it is advisable to increase the basal expression of the coin flipper module. 
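The coin-flipper stochasticity and the state-fraction cost function described above can be sketched in a few lines. The probabilities, state names and target fractions here are assumptions for illustration, not the thesis's calibrated parameters or the actual gro model:

```python
import random

def simulate_colony(n_cells, p_leader, seed=1):
    """Each cell independently 'flips a coin': with probability p_leader it
    commits to the leader state, otherwise it remains a follower."""
    rng = random.Random(seed)
    states = ["leader" if rng.random() < p_leader else "follower"
              for _ in range(n_cells)]
    frac_leader = states.count("leader") / n_cells
    return frac_leader, 1 - frac_leader

def cost(fractions, target):
    """Squared distance between the final state fractions and the desired
    outcome, mirroring the fraction-based cost of the sensitivity analysis."""
    return sum((f - t) ** 2 for f, t in zip(fractions, target))

# Target: a colony split half leaders / half followers
fl, ff = simulate_colony(1000, p_leader=0.5)
c_half = cost((fl, ff), (0.5, 0.5))

# Target: mostly followers, few leaders (the harder goal discussed above)
fl2, ff2 = simulate_colony(1000, p_leader=0.05)
c_few = cost((fl2, ff2), (0.05, 0.95))
```

Sweeping a parameter such as `p_leader` and plotting the resulting cost is the essence of the ranking procedure: the parameters whose perturbation moves the cost most are the ones worth tuning first.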
At the end of the chapter, a possible future application of the leader election circuit, the spontaneous formation of spatial patterns in a microcolony, is modeled with the finite state machine formalism. The gro simulations provide insights into the genetic components needed to implement the behavior. In particular, since both examples of pattern formation rely on a local version of leader election, a short-range communication system is essential. Moreover, new synthetic components that allow the growth rate to be reliably downregulated in specific cells without side effects need to be developed. The appendix lists the gro code used to simulate the model of the circuit, a script in the Python programming language that was used to distribute the simulations on a Linux cluster, and the Matlab code developed to analyze the data.
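The finite state machine formalism mentioned above amounts to a per-cell transition table. The states and events below are illustrative assumptions, not the actual pattern-formation model of the thesis:

```python
# Minimal finite state machine for one cell: it starts undecided, commits to
# leader or follower on a coin-flip event, and a follower that receives the
# short-range signal locks into its state. States/events are hypothetical.
TRANSITIONS = {
    ("undecided", "flip_heads"): "leader",
    ("undecided", "flip_tails"): "follower",
    ("follower", "signal_received"): "follower_locked",
}

def step(state, event):
    """Return the next state; unhandled (state, event) pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "undecided"
for event in ["flip_tails", "signal_received"]:
    state = step(state, event)
# state is now "follower_locked"
```

Running one such machine per cell, with events generated by the stochastic coin flipper and by neighbours' signals, is a compact way to reason about which communication range and which transitions the genetic components must implement.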
Resumo:
Critical lower limb ischemia is a severe disease. A common approach is infrainguinal bypass. Synthetic vascular prostheses are good conduits in high-flow, low-resistance conditions but perform poorly as small-diameter vessel grafts. A new approach is the use of native decellularized vascular tissues. Cell-free vessels are expected to have improved biocompatibility compared to synthetic grafts, and they are optimal natural 3D matrix templates for driving stem cell growth and tissue assembly in vivo. The decellularization of tissues represents a promising field for regenerative medicine, with the aim of developing a methodology to obtain small-diameter allografts to be used as natural scaffolds suited for in vivo cell growth and pseudo-tissue assembly, eliminating failure caused by immune response activation. Material and methods. Umbilical cord-derived mesenchymal cells isolated from human umbilical cord tissue were expanded in advanced DMEM. Immunofluorescence and molecular characterization revealed a stem cell profile. A non-enzymatic protocol that combines hypotonic shock and a low-concentration ionic detergent was used to decellularize vessel segments. Cells were seeded onto the cell-free scaffolds using a compound of fibrin and thrombin and incubated in DMEM; after 4 days of static culture they were placed for 2 weeks in a flow bioreactor mimicking the cardiovascular pulsatile flow. After dynamic culture, samples were processed for histological, biochemical and ultrastructural analysis. Discussion. Histology showed that in dynamic culture the cells begin to penetrate the extracellular matrix scaffold and to produce components of the ECM, such as collagen fibres. Sirius Red staining showed layers of immature collagen type III, and ultrastructural analysis revealed 30 nm thick collagen fibres, presumably corresponding to the immature collagen. 
These data confirm the ability of cord-derived cells to adhere to and penetrate a natural decellularized tissue and to begin assembling new tissue. This achievement makes natural 3D matrix templates prospectively valuable candidates for clinical bypass procedures.