877 results for next generation matrix


Relevance: 80.00%

Publisher:

Abstract:

Although equines have participated in the formation and development of several civilizations around the world since their domestication some 6,000 years ago, little research in animal breeding has been devoted to the species compared with other species of zootechnical interest, especially in Brazil. Some reasons for this are difficulties inherent to the species as well as operational aspects. However, advances in genetics over the last decades have contributed to a better understanding of the traits related to reproduction, health, behavior and performance of domestic animals, including equines. Recent technologies such as next-generation sequencing methods and high-density SNP genotyping chips have allowed further advances in this research. These studies have relied mainly on the candidate gene strategy and have identified genomic regions associated with diseases and syndromes and, more recently, with performance in sport competition and specific abilities. Using these genomic analysis tools, regions related to racing performance have been identified, and based on this information, genetic tests to select superior animals for racing performance have begun to reach the market.

Relevance: 80.00%

Publisher:

Abstract:

Background: The intestinal microbiome (IM) has been extensively studied in the search for a link between bacteria and the cause of Crohn's disease (CD). The association might result from the action of a specific pathogen and/or an imbalance in the bacterial species composition of the gut. The numerous virulence-associated markers and strategies described for adherent-invasive Escherichia coli (AIEC) have made them putative candidate pathogens for CD. The IM of CD patients shows dysbiosis, manifested by the proliferation of bacterial groups such as Enterobacteriaceae and the reduction of others such as Lactobacillus and Bifidobacterium. The augmented bacterial population, comprising commensal and/or pathogenic organisms, overstimulates the immune system, triggering the inflammatory reactions responsible for the clinical manifestations of the disease. Considering the role played by the IM in CD and the multiple variables influencing its species composition, resulting in differences among populations, the objective of this study was to determine the bacterial biodiversity of the mucosa-associated microbiome of CD patients from a population not previously subjected to this analysis, living in the middle-west region of Sao Paulo state. Methods: A total of 4 CD patients and 5 control subjects attending the Botucatu Medical School of the Sao Paulo State University (UNESP) for routine colonoscopy, and who signed an informed consent, were included in the study. Two biopsies, one from the ileum and the other from any part of the terminal colon, were taken from each subject and immediately frozen at -70 °C until DNA purification. Bacterial biodiversity was assessed by next-generation (Ion Torrent) sequencing of PCR amplicons of the ribosomal DNA 16S V6 region (16S V6 rDNA). Bacterial identification was performed at the genus level, by alignment of the generated DNA sequences with those available at the Ribosomal Database Project (RDP) website.
Results: The overall DNA sequence output averaged 526,427 reads per run, matching 50 bacterial genus 16S rDNA sequences available at the RDP website, plus 22 non-matching sequences. Over 95% of the sequences corresponded to taxa belonging to the major phyla Firmicutes, Bacteroidetes, Proteobacteria and Actinobacteria. Irrespective of the intestinal site analyzed, no case-control differences were observed in the prevalence of Actinobacteria and Firmicutes. The prevalence of Proteobacteria was higher (40%) in the biopsies of control subjects than in those of CD patients (16%). For Bacteroidetes, the higher prevalence was observed among CD patients (33%, as opposed to 14.5% in controls). All comparisons considered significant a p value < 0.05 in a chi-square test. No mucosal site-specific differences were observed in IM comparisons of CD and control subjects. Conclusions: The rise in the number of Bacteroidetes observed here among CD patients seems to be in agreement with most studies published thus far. Yet the reduction in the number of Proteobacteria, along with an apparently unaltered population of Actinobacteria and Firmicutes, which include the so-called "beneficial" organisms Bifidobacterium and Lactobacillus, was rather surprising. These data suggest that analyses of the role of the IM in CD should consider the multiple variables that may influence its species composition.
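The case-control prevalence comparisons above rest on a chi-square test at p < 0.05. As a minimal worked sketch (the read counts below are invented for illustration, not taken from the study), the 2x2 statistic can be computed as:

```python
# Pearson chi-square for a 2x2 case-control table [[a, b], [c, d]].
# Counts are hypothetical: phylum reads vs. other reads, controls vs. CD.

def chi2_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Illustrative counts giving 40% prevalence in controls vs. 16% in patients.
stat = chi2_2x2(400, 600, 160, 840)
print(stat > 3.84)  # exceeds the 5% critical value for 1 degree of freedom
```

With counts of this magnitude the difference is clearly significant; with the study's small sample sizes the actual statistic would of course be much smaller.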

Relevance: 80.00%

Publisher:

Abstract:

Graduate Program in Computer Science - IBILCE

Relevance: 80.00%

Publisher:

Abstract:

The next-generation SONET metro network is evolving into a service-rich infrastructure. At the edge of such a network, multi-service provisioning platforms (MSPPs) provide efficient data mapping enabled by the Generic Framing Procedure (GFP) and Virtual Concatenation (VC). The core of the network tends to be a meshed architecture equipped with Multi-Service Switches (MSSs). In the context of these emerging technologies, we propose a load-balancing spare capacity reallocation approach to improve network utilization in next-generation SONET metro networks. Using our approach, carriers can postpone network upgrades, resulting in increased revenue with reduced capital expenditures (CAPEX). For the first time, we consider the spare capacity reallocation problem from a capacity upgrade and network planning perspective. Our approach can operate in the context of shared-path protection (with backup multiplexing) because it reallocates spare capacity without disrupting working services. Unlike previous spare capacity reallocation approaches, which aim at minimizing total spare capacity, our load-balancing approach minimizes the network load vector (NLV), a novel metric that reflects the network load distribution. Because NLV takes into consideration both uniform and non-uniform link capacity distributions, our approach can benefit both uniform and non-uniform networks. We develop a greedy load-balancing spare capacity reallocation (GLB-SCR) heuristic algorithm to implement this approach. Our experimental results show that GLB-SCR outperforms a previously proposed algorithm (SSR) in terms of established connection capacity and total network capacity in both uniform and non-uniform networks.
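The load-balancing idea can be sketched in a few lines. This is a simplified illustration under assumptions not stated in the abstract (unit demands, NLV modeled as link utilizations sorted in descending order and compared lexicographically, candidate backup routes given), not the actual GLB-SCR algorithm:

```python
# Sketch: pick the backup route that leaves the network best balanced,
# i.e. that minimizes a load vector of link utilizations (assumed model).

def nlv(load, capacity):
    """Network load vector: link utilizations, highest first."""
    return sorted((load[l] / capacity[l] for l in load), reverse=True)

def best_backup_route(load, capacity, routes, demand=1):
    """Greedily choose the candidate route minimizing the resulting NLV."""
    best = None
    for route in routes:
        trial = dict(load)
        for link in route:
            trial[link] += demand
        vec = nlv(trial, capacity)
        if best is None or vec < best[0]:
            best = (vec, route)
    return best[1]

load = {"AB": 3, "BC": 1, "AC": 1}
cap = {"AB": 4, "BC": 4, "AC": 4}
# A hot shortest link vs. a lightly loaded two-hop detour:
print(best_backup_route(load, cap, [["AB"], ["AC", "BC"]]))
```

The lexicographic comparison is what distinguishes this objective from simply minimizing total spare capacity: it first relieves the most loaded link, which is what postpones the next capacity upgrade.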

Relevance: 80.00%

Publisher:

Abstract:

Wavelength-routed networks (WRNs) are very promising candidates for next-generation Internet and telecommunication backbones. In such a network, optical-layer protection is of paramount importance due to the risk of losing large amounts of data under a failure. To protect the network against this risk, service providers usually provide a pair of risk-independent working and protection paths for each optical connection. However, the investment made in optical-layer protection increases network cost. To reduce capital expenditure, service providers need to utilize their network resources efficiently. Among the existing approaches, shared-path protection has proven to be practical and cost-efficient [1]. In shared-path protection, several protection paths can share a wavelength on a fiber link if their working paths are risk-independent. In real-world networks, provisioning is usually implemented without knowledge of future network resource utilization. As the network changes with the addition and deletion of connections, network utilization becomes sub-optimal. Reconfiguration, the re-provisioning of existing connections, is an attractive solution to close the gap between the current network utilization and its optimal value [2]. In this paper, we propose a new shared-protection-path reconfiguration approach. Unlike some previous reconfiguration approaches that alter the working paths, our approach only changes protection paths; it therefore does not interfere with the ongoing services on the working paths and is risk-free. Previous studies have verified the benefits arising from the reconfiguration of existing connections [2] [3] [4]. Most of them aim at minimizing the total number of used wavelength-links or ports.
However, this objective does not directly relate to cost saving because minimizing the total network resource consumption does not necessarily maximize the capability of accommodating future connections. As a result, service providers may still need to pay for early network upgrades. Alternatively, our proposed shared-protection-path reconfiguration approach is based on a load-balancing objective, which minimizes the network load distribution vector (LDV, see Section 2). This new objective is designed to postpone network upgrades, thus bringing extra cost savings to service providers. In other words, by using the new objective, service providers can establish as many connections as possible before network upgrades, resulting in increased revenue. We develop a heuristic load-balancing (LB) reconfiguration approach based on this new objective and compare its performance with an approach previously introduced in [2] and [4], whose objective is minimizing the total network resource consumption.
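The backup-sharing rule that makes shared-path protection cost-efficient can be stated compactly. A minimal sketch, under the common assumption that risk independence is modeled as link-disjointness of the working paths:

```python
# Sharing rule sketch: two protection paths may share a wavelength on a
# common link only if no single link failure can hit both working paths.

def can_share_backup_wavelength(working_a, working_b):
    """True if the working paths share no link (risk-independent)."""
    return not (set(working_a) & set(working_b))

print(can_share_backup_wavelength(["AB", "BC"], ["AD", "DC"]))  # disjoint
print(can_share_backup_wavelength(["AB", "BC"], ["BC", "CE"]))  # both use BC
```

In practice risk independence is often defined over shared-risk link groups rather than raw links, but the disjointness test has the same shape.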

Relevance: 80.00%

Publisher:

Abstract:

Lightpath scheduling is an important capability in next-generation wavelength-division multiplexing (WDM) optical networks: resources are reserved in advance for a specified time period while provisioning end-to-end lightpaths. In a dynamic environment, end-user requests for dynamic scheduled lightpath demands (D-SLDs) need to be serviced without knowledge of future requests. Even though the starting time of a request may be hours or days from the current time, the end user nevertheless expects a quick response as to whether the request can be satisfied. We propose a two-phase approach to dynamically schedule and provision D-SLDs. In the first phase, termed the deterministic lightpath scheduling phase, upon arrival of a lightpath request the network control plane schedules a path with guaranteed resources, so that the user gets a quick response with a deterministic lightpath schedule. In the second phase, termed the lightpath re-optimization phase, we re-provision some already-scheduled lightpaths to improve network performance. We study two re-optimization scenarios to reallocate network resources while maintaining the existing lightpath schedules. Experimental results show that our proposed two-phase dynamic lightpath scheduling approach can greatly reduce network blocking.
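The core resource check in the deterministic scheduling phase is an advance-reservation test. A minimal sketch, assuming a wavelength is available for a D-SLD if its requested holding time overlaps no existing reservation on that wavelength (half-open intervals):

```python
# Advance-reservation check sketch: does the requested [start, end) window
# collide with any booking already scheduled on this wavelength?

def overlaps(a_start, a_end, b_start, b_end):
    return a_start < b_end and b_start < a_end

def wavelength_free(reservations, start, end):
    """reservations: (start, end) pairs already scheduled on the wavelength."""
    return all(not overlaps(start, end, s, e) for s, e in reservations)

booked = [(0, 4), (6, 9)]              # existing scheduled lightpaths
print(wavelength_free(booked, 4, 6))   # fits exactly in the gap
print(wavelength_free(booked, 3, 5))   # collides with (0, 4)
```

The re-optimization phase then amounts to moving bookings between wavelengths or routes while keeping every (start, end) window intact.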

Relevance: 80.00%

Publisher:

Abstract:

Background: Great efforts have been made to increase the accessibility of HIV antiretroviral therapy (ART) in low- and middle-income countries. The threat of wide-scale emergence of drug resistance could severely hamper ART scale-up efforts. Population-based surveillance of transmitted HIV drug resistance ensures the use of appropriate first-line regimens to maximize the efficacy of ART programs where drug options are limited. However, traditional HIV genotyping is extremely expensive, posing a cost barrier to wide-scale and frequent HIV drug resistance surveillance. Methods/Results: We have developed a low-cost, laboratory-scale, next-generation sequencing-based genotyping method to monitor drug resistance. We designed primers specifically to amplify protease and reverse transcriptase from Brazilian HIV subtypes and developed a multiplexing scheme using multiplex identifier tags to minimize cost while providing more robust data than traditional genotyping techniques. Using this approach, we characterized drug resistance from plasma in 81 HIV-infected individuals from Sao Paulo, Brazil. We describe the complexities of analyzing next-generation sequencing data and present a simplified open-source workflow to analyze drug resistance data. From these data, we identified drug resistance mutations in 20% of the treatment-naive individuals in our cohort, similar to the frequencies identified by traditional genotyping in Brazilian patient samples. Conclusion: The ultra-wide sequencing approach described here allows multiplexing of at least 48 patient samples per sequencing run, 4 times more than the current genotyping method. It is also 4-fold more sensitive (5% vs. 20% minimal detection frequency) at a cost 3-5 times lower than the traditional Sanger-based genotyping method. Lastly, by using a benchtop next-generation sequencer (Roche/454 GS Junior), this approach can be more easily implemented in low-resource settings.
These data provide proof of concept that next-generation HIV drug resistance genotyping is a feasible and low-cost alternative to current genotyping methods and may be particularly beneficial for in-country surveillance of transmitted drug resistance.
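The sensitivity gain (5% vs. 20% minimal detection frequency) comes from calling minority variants out of deep read pileups. A minimal sketch of that idea, with hypothetical base counts at one position (not the study's data or pipeline):

```python
# Minority-variant calling sketch: flag non-reference bases whose read
# frequency meets the detection threshold. Counts below are invented.

SANGER_LIMIT = 0.20   # approximate Sanger detection floor
NGS_LIMIT = 0.05      # deep-sequencing detection floor used in the abstract

def call_variants(counts, ref, threshold):
    """counts: {base: reads} at one position; returns detectable non-ref bases."""
    total = sum(counts.values())
    return {b for b, n in counts.items()
            if b != ref and n / total >= threshold}

pileup = {"A": 920, "G": 70, "T": 10}             # 7% minority G variant
print(call_variants(pileup, "A", NGS_LIMIT))      # detected at 5% floor
print(call_variants(pileup, "A", SANGER_LIMIT))   # missed at 20% floor
```

A real pipeline would additionally correct for sequencing error rates and require minimum read depth before calling a 5% variant.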

Relevance: 80.00%

Publisher:

Abstract:

The time is ripe for a comprehensive mission to explore and document Earth's species. This calls for a campaign to educate and inspire the next generation of professional and citizen species explorers, investments in cyber-infrastructure and collections to meet the unique needs of the producers and consumers of taxonomic information, and the formation and coordination of a multi-institutional, international, transdisciplinary community of researchers, scholars and engineers with the shared objective of creating a comprehensive inventory of species and detailed map of the biosphere. We conclude that an ambitious goal to describe 10 million species in less than 50 years is attainable based on the strength of 250 years of progress, worldwide collections, existing experts, technological innovation and collaborative teamwork. Existing digitization projects are overcoming obstacles of the past, facilitating collaboration and mobilizing literature, data, images and specimens through cyber technologies. Charting the biosphere is enormously complex, yet necessary expertise can be found through partnerships with engineers, information scientists, sociologists, ecologists, climate scientists, conservation biologists, industrial project managers and taxon specialists, from agrostologists to zoophytologists. Benefits to society of the proposed mission would be profound, immediate and enduring, from detection of early responses of flora and fauna to climate change to opening access to evolutionary designs for solutions to countless practical problems. The impacts on the biodiversity, environmental and evolutionary sciences would be transformative, from ecosystem models calibrated in detail to comprehensive understanding of the origin and evolution of life over its 3.8 billion year history. 
The resultant cyber-enabled taxonomy, or cybertaxonomy, would open access to biodiversity data to developing nations, assure access to reliable data about species, and change how scientists and citizens alike access, use and think about biological diversity information.

Relevance: 80.00%

Publisher:

Abstract:

Understanding alternative splicing is crucial to elucidate the mechanisms behind several biological phenomena, including diseases. The huge amount of expressed sequences available nowadays represents an opportunity and a challenge to catalog and display alternative splicing events (ASEs). Although several groups have faced this challenge with relative success, we still lack a computational tool that uses a simple and straightforward method to retrieve, name and present ASEs. Here we present SPLOOCE, a portal for the analysis of human splicing variants. SPLOOCE uses a method based on regular expressions for retrieval of ASEs. We propose a simple syntax that is able to capture the complexity of ASEs.
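SPLOOCE's actual syntax is not reproduced in the abstract, so the following is only a hypothetical illustration of the regular-expression idea: transcripts are encoded as exon strings, and an exon-skipping event is a transcript pair where one form lacks an internal exon present in the other.

```python
import re

# Hypothetical regex-based ASE check (invented encoding, not SPLOOCE's):
# transcripts as dash-joined exon identifiers, e.g. "E1-E2-E3".

def skips_exon(full_form, short_form):
    """True if short_form equals full_form with one internal exon removed."""
    exons = full_form.split("-")
    for i in range(1, len(exons) - 1):          # internal exons only
        pattern = "-".join(exons[:i] + exons[i + 1:])
        if re.fullmatch(re.escape(pattern), short_form):
            return True
    return False

print(skips_exon("E1-E2-E3-E4", "E1-E3-E4"))  # E2 skipped
print(skips_exon("E1-E2-E3-E4", "E1-E2-E3"))  # terminal loss, not skipping
```

Other event classes (intron retention, alternative 5'/3' splice sites) can be expressed the same way, which is the appeal of a regex-based retrieval syntax.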

Relevance: 80.00%

Publisher:

Abstract:

Background: The implication of post-transcriptional regulation by microRNAs in the molecular mechanisms underlying cancer is well documented. However, their interference at the cellular level is not fully explored. Functional in vitro studies are fundamental for the comprehension of their role; nevertheless, results are highly dependent on the adopted cellular model. Next-generation small RNA transcriptomic sequencing data from a tumor cell line and from keratinocytes derived from primary culture were generated in order to characterize the microRNA content of these systems, thus helping in their understanding. Both constitute cell models for functional studies of microRNAs in head and neck squamous cell carcinoma (HNSCC), a smoking-related cancer. Known microRNAs were quantified and analyzed in the context of gene regulation. New microRNAs were investigated using similarity and structural searches, ab initio classification, and prediction of the location of mature microRNAs within would-be precursor sequences. Results were compared with small RNA transcriptomic sequences from HNSCC samples in order to assess the applicability of these cell models to cancer phenotype comprehension and to novel molecule discovery. Results: Ten miRNAs represented over 70% of the mature molecules present in each of the cell types. The most expressed molecules were miR-21, miR-24 and miR-205; accordingly, miR-21 and miR-205 have previously been shown to play a role in epithelial cell biology. Although miR-21 has been implicated in cancer development, and evaluated as a biomarker of HNSCC progression, no significant expression differences were seen between cell types. We demonstrate that differentially expressed mature miRNAs target cell differentiation and apoptosis-related biological processes, indicating that they might represent, with acceptable accuracy, the genetic context from which they derive.
Most miRNAs identified in the cancer cell line and in keratinocytes were present in tumor samples and cancer-free samples, respectively, with miR-21, miR-24 and miR-205 still among the most prevalent molecules in all instances. Thirteen miRNA-like structures, containing reads identified by the deep sequencing, were predicted from putative miRNA precursor sequences. Strong evidence suggests that one of them could be a new miRNA. This molecule was mostly expressed in the tumor cell line and in HNSCC samples, indicating a possible biological function in cancer. Conclusions: Critical biological features of cells must be fully understood before they can be chosen as models for functional studies. Expression levels of miRNAs relate to cell type and tissue context. This study provides insights into the miRNA content of two cell models used for cancer research. Pathways commonly deregulated in HNSCC might be targeted by the most expressed, and also by differentially expressed, miRNAs. Results indicate that the use of cell models for cancer research demands careful assessment of the underlying molecular characteristics for proper data interpretation. Additionally, one new miRNA-like molecule with a potential role in cancer was identified in the cell lines and clinical samples.
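The "ten miRNAs account for over 70% of mature molecules" statistic is a simple concentration measure on read counts. A sketch of the assumed computation, with invented counts (not the study's data):

```python
# Concentration sketch: share of mature miRNA reads taken by the k most
# abundant molecules. All counts below are hypothetical.

def top_share(read_counts, k=10):
    """Fraction of all reads accounted for by the k most abundant miRNAs."""
    total = sum(read_counts.values())
    top = sorted(read_counts.values(), reverse=True)[:k]
    return sum(top) / total

counts = {"miR-21": 400, "miR-24": 250, "miR-205": 150}
counts.update({f"miR-x{i}": 10 for i in range(30)})  # hypothetical long tail
print(top_share(counts) > 0.70)  # a few molecules dominate the profile
```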

Relevance: 80.00%

Publisher:

Abstract:

Genome-wide association studies have failed to establish common variant risk for the majority of common human diseases. The underlying reasons for this failure are explained by recent studies of resequencing and comparison of over 1200 human genomes and 10 000 exomes, together with the delineation of DNA methylation patterns (epigenome) and full characterization of coding and noncoding RNAs (transcriptome) being transcribed. These studies have provided the most comprehensive catalogues of functional elements and genetic variants that are now available for global integrative analysis and experimental validation in prospective cohort studies. With these datasets, researchers will have unparalleled opportunities for the alignment, mining, and testing of hypotheses for the roles of specific genetic variants, including copy number variations, single nucleotide polymorphisms, and indels as the cause of specific phenotypes and diseases. Through the use of next-generation sequencing technologies for genotyping and standardized ontological annotation to systematically analyze the effects of genomic variation on humans and model organism phenotypes, we will be able to find candidate genes and new clues for disease’s etiology and treatment. This article describes essential concepts in genetics and genomic technologies as well as the emerging computational framework to comprehensively search websites and platforms available for the analysis and interpretation of genomic data.

Relevance: 80.00%

Publisher:

Abstract:

Graphene has received great attention due to its exceptional properties, which include massless charge carriers and extremely large mobilities; this could render it the new template for the next generation of electronic devices. Furthermore, it has weak spin-orbit interaction, because of the low atomic number of the carbon atom, which in turn results in long spin coherence lengths. Therefore, graphene is also a promising material for future applications in spintronic devices - the use of electronic spin degrees of freedom instead of the electron charge. Graphene can be engineered to form a number of different structures. In particular, by appropriately cutting it one can obtain 1-D systems - only a few nanometers in width - known as graphene nanoribbons, which strongly owe their properties to the width of the ribbon and to the atomic structure along the edges. These GNR-based systems have been shown to have great potential applications, especially as connectors for integrated circuits. Impurities and defects might play an important role in the coherence of these systems. In particular, the presence of transition metal atoms can lead to significant spin-flip processes of conduction electrons. Understanding this effect is of utmost importance for applied spintronics design. In this work, we focus on the electronic transport properties of armchair graphene nanoribbons with adsorbed transition metal atoms as impurities, taking into account the spin-orbit effect. Our calculations were performed using a combination of density functional theory and non-equilibrium Green's functions. Also, employing a recursive method, we consider a large number of impurities randomly distributed along the nanoribbon in order to infer, for different concentrations of defects, the spin coherence length.

Relevance: 80.00%

Publisher:

Abstract:

Master's in Tourism, Transport and Environmental Economics

Relevance: 80.00%

Publisher:

Abstract:

It is well known that theories of the firm have evolved along a path paved by an increasing awareness of the importance of organizational structure. This path runs from the early "neoclassical" conceptualizations, which viewed the firm as a rational actor whose aim is to produce the amount of output that, given the inputs at its disposal and in accordance with technological or environmental constraints, maximizes revenue (see Boulding, 1942 for a past mid-century state-of-the-art discussion), to the knowledge-based theory of the firm (Nonaka & Takeuchi, 1995; Nonaka & Toyama, 2005), which recognizes in the firm a knowledge-creating entity with specific organizational capabilities (Teece, 1996; Teece & Pisano, 1998) that allow it to sustain competitive advantages. Tracing a map of the evolution of the theory of the firm, taking into account the several perspectives adopted in the history of thought, would take the length of many books. A more fruitful strategy is therefore to circumscribe the description of the literature to one strand connected to a crucial question about the nature of firm behaviour and the determinants of competitive advantage. In so doing I adopt a perspective that allows me to consider the organizational structure of the firm as an element according to which the different theories can be discriminated. The approach adopted starts by considering the drawbacks of the standard neoclassical theory of the firm. Discussing the most influential theoretical approaches, I end up with a close examination of the knowledge-based perspective of the firm. Within this perspective the firm is considered a knowledge-creating entity that produces and manages knowledge (Nonaka, Toyama, & Nagata, 2000; Nonaka & Toyama, 2005). In a knowledge-intensive organization, knowledge is for the most part embedded in the human capital of the individuals that compose it.
In a knowledge-based organization, the management, in order to cope with knowledge-intensive production, ought to develop and accumulate capabilities that shape the organizational forms in a way that relies on "cross-functional processes, extensive delayering and empowerment" (Foss 2005, p.12). This mechanism contributes to determining the absorptive capacity of the firm towards specific technologies and, in so doing, it also shapes the technological trajectories along which the firm moves. After recognizing the growing importance of the firm's organizational structure in the theoretical literature on the theory of the firm, the subsequent point of the analysis is to provide an overview of the changes that have occurred at the micro level in the firm's organization of production. Economic actors have to deal with challenges posed by processes of internationalisation and globalization, the increased and increasing competitive pressure of less developed countries on low-value-added production activities, changes in technologies, and increased environmental turbulence and volatility. As a consequence, it has been widely recognized that the main organizational models of production that fitted well in the 20th century are now partially inadequate, and processes aiming to reorganize production activities have spread across several economies in recent years. Recently, the emergence of a "new" form of production organization has been proposed by scholars, practitioners and institutions alike: the most prominent characteristic of such a model is its recognition of the importance of employee commitment and involvement. As a consequence it places a strong accent on human resource management and on those practices that aim to widen the autonomy and responsibility of workers as well as to increase their commitment to the organization (Osterman, 1994; 2000; Lynch, 2007).
This "model" of production organization is defined by many as the High Performance Work System (HPWS). Despite the increasing diffusion in western companies of workplace practices that may be inscribed within the concept of HPWS, it is somewhat hazardous to speak of the emergence of a "new organizational paradigm". A discussion of organizational changes and the diffusion of HPWP cannot abstract from the industrial relations system, with a particular accent on employment relationships, because of their relevance, in the same way as production organization, in determining two major outcomes of the firm: innovation and economic performance. The argument is treated starting from the issue of Social Dialogue at the macro level, in both a European and an Italian perspective. The model of interaction between the social parties has repercussions, at the micro level, on employment relationships, that is to say on the relations between union delegates and management or between workers and management. Finding economic and social policies capable of sustaining growth and employment within a knowledge-based scenario is likely to constitute the major challenge for the next generation of social pacts, which are the main outcomes of social dialogue. As Acocella and Leoni (2007) argue, social pacts may constitute an instrument to trade wage moderation for high intensity of ICT, organizational and human capital investments. Empirical evidence, especially at the micro level, of the positive relation between economic growth and new organizational designs coupled with ICT adoption and non-adversarial industrial relations is growing. Partnership among social parties may become an instrument to enhance firm competitiveness. The outcome of the discussion is the integration of organizational change and industrial relations elements within a unified framework: the HPWS.
Such a choice may help in disentangling the potential complementarities between these two aspects of the firm's internal structure in their effects on economic and innovative performance. The third chapter begins the more original part of the thesis. The data used to disentangle the relations between HPWS practices, innovation and economic performance refer to the manufacturing firms of the Reggio Emilia province with more than 50 employees. The data were collected through face-to-face interviews with both management (199 respondents) and union representatives (181 respondents). Coupled with the cross-section datasets, a further data source is constituted by longitudinal balance sheets (1994-2004). Collecting reliable data that in turn provide reliable results always requires great effort, with uncertain results. Data at the micro level are often subject to a trade-off: the wider the geographical context to which the surveyed population belongs, the smaller the amount of information usually collected (low level of resolution); the narrower the focus on a specific geographical context, the greater the amount of information usually collected (high level of resolution). For the Italian case, the evidence on the diffusion of HPWP and their effects on firm performance is still scanty and usually limited to local-level studies (Cristini, et al., 2003). The thesis is also devoted to the deepening of an argument of particular interest: the existence of complementarities between HPWS practices. Empirical evidence has widely shown that when HPWP are adopted in bundles they are more likely to impact firm performance than when adopted in isolation (Ichniowski, Prennushi, Shaw, 1997). Is this also true for the local production system of Reggio Emilia? The empirical analysis has the precise aim of providing evidence on the relations between the HPWS dimensions and the innovative and economic performance of the firm.
As far as the first line of analysis is concerned, the fundamental role that innovation plays in the economy must be stressed (Geroski & Machin, 1993; Stoneman & Kwoon 1994, 1996; OECD, 2005; EC, 2002). On this point the evidence ranges from traditional innovation, usually approximated by R&D investment expenditure or number of patents, to the introduction and adoption of ICT in recent years (Brynjolfsson & Hitt, 2000). If innovation is important, then it is critical to analyse its determinants. In this work it is hypothesised that organizational changes and firm-level industrial relations/employment relations aspects that can be put under the heading of HPWS influence the firm's propensity to innovate in product, process and quality. The general argument goes as follows: changes in production management and work organization reconfigure the absorptive capacity of the firm towards specific technologies and, in so doing, they shape the technological trajectories along which the firm moves; cooperative industrial relations may lead to smoother adoption of innovations, because they are not opposed by unions. From the first empirical chapter it emerges that the different types of innovation seem to respond in different ways to the HPWS variables. The underlying processes of product, process and quality innovation are likely to answer to different firm strategies and needs. Nevertheless, it is possible to extract some general results in terms of the HPWS factors most influential on innovative performance. The three main aspects are training coverage, employee involvement and the diffusion of bonuses. These variables show persistent and significant relations with all three innovation types, as do the components containing them. In sum, the aspects of the HPWS influence the firm's propensity to innovate.
At the same time, quite neat (although not always strong) evidence emerges of the presence of complementarities between HPWS practices. In terms of the complementarity issue, it can be said that some specific complementarities exist. Training activities, when adopted and managed in bundles, are related to the propensity to innovate. A sound skill base may be an element that enhances the firm's capacity to innovate: it may strengthen both the capacity to absorb exogenous innovation and the capacity to develop innovations endogenously. The presence and diffusion of bonuses and employee involvement also spur innovative propensity, the former because of their incentive nature and the latter because direct worker participation may increase workers' commitment to the organization and thus their willingness to support and suggest innovations. The other line of analysis provides results on the relation between HPWS and the economic performance of the firm. There has been a bulk of international empirical studies on the relation between organizational changes and economic performance (Black & Lynch, 2001; Zwick, 2004; Janod & Saint-Martin, 2004; Huselid, 1995; Huselid & Becker, 1996; Cappelli & Neumark, 2001), while works aiming to capture the relations between economic performance and unions or industrial relations aspects are quite scant (Addison & Belfield, 2001; Pencavel, 2003; Machin & Stewart, 1990; Addison, 2005). In the empirical analysis, the integration of the two main areas of the HPWS represents a scarcely exploited approach in the panorama of both national and international empirical studies. As remarked by Addison, "although most analysis of workers representation and employee involvement/high performance work practices have been conducted in isolation – while sometimes including the other as controls – research is beginning to consider their interactions" (Addison, 2005, p. 407). 
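The bundle logic behind the complementarity claims above can be sketched as a simple supermodularity check: two practices are complementary if adopting both raises the innovation rate by more than the sum of the two isolated effects. The practice names and figures below are invented purely for illustration and are not taken from the thesis data:

```python
# Hypothetical illustration of a supermodularity (complementarity) check.
# innovation_rate maps a (training, bonuses) adoption bundle to the share
# of firms introducing an innovation; all numbers are invented.
innovation_rate = {
    (0, 0): 0.20,  # neither practice adopted
    (1, 0): 0.30,  # training only
    (0, 1): 0.28,  # bonuses only
    (1, 1): 0.55,  # both adopted together
}

def complementarity_gap(f):
    """Supermodularity test: f(1,1) - f(1,0) - f(0,1) + f(0,0) > 0
    means the bundle pays more than the isolated adoptions combined."""
    return f[(1, 1)] - f[(1, 0)] - f[(0, 1)] + f[(0, 0)]

gap = complementarity_gap(innovation_rate)
print(f"complementarity gap: {gap:.2f}")  # positive -> complementary practices
```

With these invented numbers the gap is positive (0.17), which is the qualitative pattern the empirical chapters look for in the Reggio Emilia data.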
The analysis, conducted by exploiting temporal lags between the dependent variable and the covariates (a possibility afforded by merging the cross-section and panel data), provides evidence in favour of an impact of HPWS practices on the firm's economic performance, measured in different ways. Although no robust evidence emerges of complementarities among HPWS aspects with respect to performance, there is evidence of a general positive influence of the single practices. The results are quite sensitive to the time lags, which suggests that time-varying heterogeneity is an important factor in determining the impact of organizational changes on economic performance. The implications of the analysis can help both management and local-level policy makers. Although the results cannot simply be extended to other local production systems, it may be argued that they also fit contexts similar to the Reggio Emilia province, characterized by the presence of small and medium enterprises organized in districts and by deep-rooted unionism with strong supporting institutions. However, a hope for future research on the subject treated in the present work is that good-quality information will be collected over wider geographical areas, possibly at the national level, and repeated over time. Only in this way can the Gordian knot of the linkages between innovation, performance, high performance work practices and industrial relations be untied.
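The sensitivity of the estimates to the choice of time lag can be illustrated with a toy calculation on synthetic data; the series, the built-in two-year response lag and all variable names are assumptions made up for this sketch, not the thesis's actual estimation strategy:

```python
# Toy sketch: correlate HPWS adoption intensity at year t with firm
# performance at year t+k for several lags k. The synthetic performance
# series responds to adoption exactly two years later (noise omitted
# for clarity), so only the correctly lagged correlation stands out.
import random

random.seed(42)

T = 11  # years, mirroring a 1994-2004 balance-sheet panel
adoption = [random.random() for _ in range(T)]
performance = [0.0, 0.0] + [0.8 * a for a in adoption[:-2]]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

def lagged_corr(x, y, k):
    """Correlation between x[t] and y[t + k]."""
    return pearson(x[:len(x) - k], y[k:])

for k in range(4):
    print(f"lag {k}: corr = {lagged_corr(adoption, performance, k):+.2f}")
```

Here the lag-2 correlation is (by construction) essentially 1.0 while the others are weak, a stylized version of why misspecified lags can mask the effect of organizational changes on performance.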

Relevância:

80.00% 80.00%

Publicador:

Resumo:

The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called SYSTEM-ON-CHIP (SOC) or MULTI-PROCESSOR SYSTEM-ON-CHIP (MPSOC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. NETWORKS-ON-CHIPS (NOCS) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs. 
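The design-space pruning mentioned above can be illustrated with a toy enumeration; the cost metrics below (average Manhattan hop distance versus link count for 2-D mesh shapes) are a deliberately crude stand-in for the detailed area/power/latency models a real flow such as SunFloor would use:

```python
# Toy illustration of NoC design-space exploration: enumerate the 2-D mesh
# shapes for a fixed core count and compare average hop distance (a proxy
# for latency) against link count (a proxy for wiring cost). The cost model
# is hypothetical and far simpler than a real NoC synthesis flow's.
N_CORES = 16

def mesh_metrics(rows, cols):
    """Average Manhattan hop distance over all node pairs, and the number
    of (bidirectional) links, for a rows x cols mesh topology."""
    nodes = [(r, c) for r in range(rows) for c in range(cols)]
    dists = [abs(r1 - r2) + abs(c1 - c2)
             for (r1, c1) in nodes for (r2, c2) in nodes]
    avg_hops = sum(dists) / len(dists)
    links = rows * (cols - 1) + cols * (rows - 1)
    return avg_hops, links

# All mesh shapes whose dimensions multiply to N_CORES
candidates = [(r, N_CORES // r) for r in range(1, N_CORES + 1) if N_CORES % r == 0]
for rows, cols in candidates:
    hops, links = mesh_metrics(rows, cols)
    print(f"{rows}x{cols}: avg hops = {hops:.2f}, links = {links}")
```

For 16 cores this ranks the square 4x4 mesh as the lowest-latency shape (average 2.50 hops, 24 links) against degenerate shapes such as 1x16 (15 links but far more hops), illustrating the kind of tradeoff a topology-selection tool must weigh automatically.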
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.