Abstract:
In the past decade, the advent of efficient genome-sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology. It allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization of a tissue of interest to probes on a small glass or plastic slide. These data are characterized by a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes strongly correlated with the groups of individuals. Even though the methods for analysing these data are now well developed and close to reaching a standard organization (through the effort of international projects such as the Microarray Gene Expression Data -MGED- Society [1]), it is not infrequent to come across a clinician's question for which no compelling statistical method is available. The contribution of this dissertation to deciphering disease is the development of new approaches that aim to handle open problems posed by clinicians in specific experimental designs. Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array to quality controls, ending with the preprocessing steps used in the data analysis in the rest of the dissertation. Chapter 2 provides a critical review of standard analysis methods, stressing their main open problems. Chapter 3 introduces a method to address the issue of unbalanced design in microarray experiments. In microarray experiments, the experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes. In some cases, however, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each probe is given a "score" ranging from 0 to 1,000 based on its recurrence as differentially expressed in the 1,000 lists. The performance of MultiSAM was compared with that of SAM and LIMMA [3] over two simulated data sets generated via beta and exponential distributions. The results of all three algorithms over low-noise data sets are acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probes, SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score >300 and separates the data into two clusters by hierarchical clustering.
We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. Chapter 4 describes a method to address similarity evaluation in a three-class problem by means of the Relevance Vector Machine [4]. In fact, looking at microarray data in a prognostic and diagnostic clinical framework, differences are not the only aspect that can play a crucial role. In some cases similarities can give useful, and sometimes even more important, information. Given three classes, the goal could be to establish, with a certain level of confidence, whether the third one is more similar to the first or to the second. In this work we show that the Relevance Vector Machine (RVM) could be a possible solution to the limitations of standard supervised classification. In fact, RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM). Among these advantages, the estimate of the posterior probability of class membership is a key feature for addressing the similarity issue. This is a highly important, but often overlooked, option of any practical pattern recognition system. We focused on a three-class tumor-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then to evaluate the third class, G2, as a test set in order to obtain the probability of each G2 sample being a member of class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to breast cancer samples of grade 1. This result had been conjectured in the literature, but no measure of significance had been provided before.
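A minimal sketch of the resampling scheme described above, with an ordinary Welch t-test standing in for the actual SAM statistic; the names (`expr`, `lpc_idx`, `mpc_idx`), the significance threshold and the toy data are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np
from scipy import stats

def multisam_scores(expr, lpc_idx, mpc_idx, n_iter=1000, alpha=0.05, rng=None):
    """Score each probe by how often it is called differentially expressed when the
    less populated class (LPC) is compared against repeated random subsamples of the
    more populated class (MPC) of the same size. `expr` is a (probes x samples)
    matrix; a plain Welch t-test stands in for the SAM statistic."""
    rng = np.random.default_rng(rng)
    scores = np.zeros(expr.shape[0], dtype=int)
    lpc = expr[:, lpc_idx]
    for _ in range(n_iter):
        sub = rng.choice(mpc_idx, size=len(lpc_idx), replace=False)
        _, pvals = stats.ttest_ind(lpc, expr[:, sub], axis=1, equal_var=False)
        scores += (pvals < alpha).astype(int)   # one "vote" per list membership
    return scores  # 0..n_iter per probe

# Illustrative use on random data: 2000 probes, 8 LPC samples vs 60 MPC samples
expr = np.random.default_rng(0).normal(size=(2000, 68))
scores = multisam_scores(expr, lpc_idx=np.arange(8), mpc_idx=np.arange(8, 68), n_iter=100)
```

Probes that accumulate many "votes" across the resamplings (e.g. score > 300 out of 1,000 in the study above) would be retained as candidate differentially expressed genes.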
Abstract:
The study aims to calculate an innovative numerical index for bit performance evaluation called the Bit Index (BI), applied to a new type of bit database named the Formation Drillability Catalogue (FDC). A dedicated research programme (developed by Eni E&P and the University of Bologna) studied a drilling model for bit performance evaluation named BI, derived from data recorded while drilling (bit records, master log, wireline log, etc.) and from dull bit evaluation. This index is calculated with data collected in the FDC, a novel classification of Italian formations aimed at their geotechnical and geomechanical characterization and at their subdivision into units called Minimum Intervals (MI). The FDC was conceived and prepared at the Eni E&P Division and contains a large number of significant drilling parameters. Five wells have been identified inside the FDC and tested for bit performance evaluation. The values of BI are calculated for each bit run and compared with the values of the cost per metre. The case study analyzes bits of the same type and diameter, run in the same formation. The BI methodology implemented on the MI classification of the FDC can consistently improve bit performance evaluation, and it helps to identify the best-performing bits. Moreover, the FDC turned out to be functional to BI, since it discloses and organizes formation details that are not easily detectable or usable from bit records or master logs, allowing targeted bit performance evaluations. At this stage of development, the BI methodology proved to be economical and reliable. The quality of bit performance analysis obtained with BI also appears more effective than the traditional "quick look" analysis performed on bit records, or than a pure cost-per-metre evaluation.
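The BI formulation itself is an Eni E&P development and is not given in the abstract, so the sketch below only shows the conventional cost-per-metre benchmark against which BI is compared; the variable names and the example bit run are hypothetical.

```python
def cost_per_metre(bit_cost, rig_rate, drilling_hours, trip_hours, metres_drilled):
    """Conventional drilling cost-per-metre benchmark:
    C = (bit cost + rig hourly rate * (drilling time + trip time)) / metres drilled.
    The Bit Index (BI) is a proprietary formulation and is not reproduced here;
    this is only the reference metric it is compared against in the case study."""
    return (bit_cost + rig_rate * (drilling_hours + trip_hours)) / metres_drilled

# Hypothetical bit run: 40 kEUR bit, 900 EUR/h rig rate, 55 h on bottom, 9 h tripping, 620 m drilled
print(round(cost_per_metre(40_000, 900, 55, 9, 620), 1), "EUR/m")
```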
Abstract:
The need for high bandwidth, due to the explosion of new multimedia-oriented IP-based services as well as increasing broadband access requirements, is leading to the need for flexible and highly reconfigurable optical networks. While transmission bandwidth does not represent a limit, thanks to the huge bandwidth provided by optical fibers and Dense Wavelength Division Multiplexing (DWDM) technology, the electronic switching nodes in the core of the network represent the bottleneck in terms of speed and capacity for the overall network. For this reason DWDM technology must be exploited not only for data transport but also for switching operations. In this Ph.D. thesis, solutions for photonic packet switches, a flexible alternative to circuit-switched optical networks, are proposed. In particular, solutions based on devices and components that are expected to mature in the near future are proposed, with the aim of limiting the employment of complex components. The work presented here is the result of part of the research activities performed by the Networks Research Group at the Department of Electronics, Computer Science and Systems (DEIS) of the University of Bologna, Italy. In particular, the work on optical packet switching has been carried out within three relevant research projects: the e-Photon/ONe and e-Photon/ONe+ projects, funded by the European Union in the Sixth Framework Programme, and the national project OSATE, funded by the Italian Ministry of Education, University and Scientific Research. The rest of the work is organized as follows. Chapter 1 gives a brief introduction to the network context and to contention resolution in photonic packet switches. Chapter 2 presents different strategies for contention resolution in the wavelength domain. Chapter 3 illustrates a possible implementation of one of the schemes proposed in Chapter 2. Chapter 4 then presents multi-fiber switches, which jointly employ the wavelength and space domains to solve contention. Chapter 5 presents buffered switches, which solve contention in the time domain in addition to the wavelength domain. Finally, Chapter 6 presents a cost model to compare different switch architectures in terms of cost.
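As a rough illustration of contention resolution in the wavelength domain (the subject of Chapter 2), the sketch below assigns each packet contending for an output fiber a free wavelength, assuming full-range wavelength converters, and drops a packet only when every wavelength is busy; the data structures and names are assumptions, not the switch architectures proposed in the thesis.

```python
def resolve_contention(packets, n_wavelengths):
    """Assign each packet a free wavelength on its requested output fiber within one
    time slot. Assumes full-range wavelength conversion: a packet may leave on any
    free wavelength of its output fiber. Returns (scheduled, dropped)."""
    busy = {}                                  # output fiber -> wavelengths taken this slot
    scheduled, dropped = [], []
    for pid, fiber in packets:                 # each packet: (packet_id, output_fiber)
        taken = busy.setdefault(fiber, set())
        free = next((w for w in range(n_wavelengths) if w not in taken), None)
        if free is None:
            dropped.append(pid)                # no wavelength left: contention loss
        else:
            taken.add(free)
            scheduled.append((pid, fiber, free))
    return scheduled, dropped

# Three packets contend for fiber 0 with only 2 wavelengths: one of them is dropped
print(resolve_contention([(1, 0), (2, 0), (3, 0), (4, 1)], n_wavelengths=2))
```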
Abstract:
A new parallel algorithm for simultaneous untangling and smoothing of tetrahedral meshes is proposed in this paper. We provide a detailed analysis of its performance on shared-memory many-core computer architectures. This performance analysis includes the evaluation of execution time, parallel scalability, load balancing, and parallelism bottlenecks. Additionally, we compare the impact of three previously published graph coloring procedures on the performance of our parallel algorithm. We use six benchmark meshes with a wide range of sizes. Using these experimental data sets, we describe the behavior of the parallel algorithm for different data sizes. We demonstrate that this algorithm is highly scalable when it runs on two different high-performance many-core computers with up to 128 processors...
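The role of graph coloring here is to partition mesh vertices into independent sets, so that vertices of the same color share no element and can be relocated concurrently during untangling/smoothing. The sketch below shows a generic greedy coloring for illustration only, not one of the three published procedures evaluated in the paper; all names are assumptions.

```python
from collections import defaultdict

def greedy_coloring(adjacency):
    """Greedy vertex coloring of the mesh's vertex-adjacency graph, largest degree
    first. Vertices with the same color are mutually non-adjacent, so a parallel
    untangling/smoothing sweep can relocate them concurrently without conflicts."""
    color = {}
    for v in sorted(adjacency, key=lambda v: -len(adjacency[v])):
        used = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

def color_classes(color):
    """Group vertices by color: each class is one parallel smoothing phase."""
    classes = defaultdict(list)
    for v, c in color.items():
        classes[c].append(v)
    return dict(classes)

# Tiny illustrative adjacency (vertex -> neighbouring vertices sharing a tetrahedron)
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(color_classes(greedy_coloring(adj)))
```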
Abstract:
The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects is the Network on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, as well as universities such as the University of Bologna, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers switch design methodology and speed up the development of new NoC-based systems on chip. In this Thesis we provide an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. Here we propose a detailed analysis of this NoC topology and its routing algorithms. Furthermore, we propose a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance (see the illustrative routing sketch below);
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this Thesis helped to develop;
• a simulation-based comprehensive comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and allows the power and area demands of NoC interconnects to be reduced while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small Multi Plane routers. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are reduced.
This Thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
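The Spidergon topology and its routing algorithms are specific to the ST Microelectronics design and are not reproduced here; as a generic illustration of deterministic NoC routing, the sketch below implements dimension-ordered (XY) routing on a 2D mesh. All names and coordinates are assumptions.

```python
def xy_route(src, dst):
    """Dimension-ordered (XY) routing on a 2D-mesh NoC: move along X until the
    destination column is reached, then along Y. Deterministic and deadlock-free
    on a mesh; shown only as a generic contrast to the Spidergon routing studied
    in the Thesis, which is not reproduced here."""
    (x, y), (dx, dy) = src, dst
    hops = []
    while x != dx:
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

print(xy_route((0, 0), (2, 3)))  # path taken by a flit from router (0,0) to router (2,3)
```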
Abstract:
It was in the early 1970s that the community began to realize that taking industry as the sole guiding principle, with disregard for people's health and for the world in general, could not continue. The sea, as an energy source, offers different types of exploitation; this project focuses on wave energy. Over the last 15 years the number of countries interested in renewable energies has grown, and consequently many devices have appeared, first in the research world and then in the commercial one; these converters are able to transform wave energy into electrical energy. The purpose of this work is to analyze the efficiency of a new wave energy converter, called WavePiston, with the aim of determining the feasibility of its actual application in different wave conditions: from the energetic sea states of the North Sea to the quieter ones of the Mediterranean Sea. The evaluation of the WavePiston is based on the experimental investigation conducted at Aalborg University, in Denmark, and on a numerical model of the device itself, developed to ascertain its efficiency independently of the availability of laboratory results. The numerical model is able to reproduce the laboratory conditions, but it cannot yet be used for an arbitrary installation; in fact, no mooring or economic aspects are included yet.
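As an illustration of the kind of efficiency figure discussed above, the sketch below uses the standard deep-water estimate of wave power per metre of crest, P = ρ g² Hm0² Te / (64π), and divides the power captured by the device by the power incident on its working width; the captured power itself would come from the laboratory tests or the numerical model, so here it is just a hypothetical input.

```python
import math

RHO, G = 1025.0, 9.81   # sea-water density [kg/m^3], gravity [m/s^2]

def wave_power_flux(hm0, te):
    """Deep-water wave energy flux per metre of wave crest [W/m]:
    P = rho * g^2 / (64*pi) * Hm0^2 * Te,
    with significant wave height Hm0 [m] and energy period Te [s]."""
    return RHO * G**2 / (64 * math.pi) * hm0**2 * te

def capture_efficiency(p_captured, hm0, te, width):
    """Ratio of the power captured by the device to the wave power incident on its
    working width. `p_captured` [W] would come from experiments or the numerical
    model of the WavePiston; it is an input here, not computed."""
    return p_captured / (wave_power_flux(hm0, te) * width)

# North Sea-like state (Hm0=3 m, Te=7 s) vs a milder Mediterranean state (Hm0=1 m, Te=5 s)
print(wave_power_flux(3.0, 7.0), wave_power_flux(1.0, 5.0))
print(capture_efficiency(15_000, 3.0, 7.0, width=10.0))  # hypothetical 15 kW capture, 10 m wide device
```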
Abstract:
Clusters have increasingly become an essential part of policy discourses at all levels (EU, national, regional) dealing with regional development, competitiveness, innovation, entrepreneurship and SMEs. These impressive efforts in promoting the concept of clusters in the policy-making arena have been accompanied by much less academic and scientific research investigating the actual economic performance of firms in clusters and the design and execution of cluster policies, and by few attempts to go beyond singular case studies towards a more methodologically integrated and comparative approach to the study of clusters and their real-world impact. The theoretical background is far from consolidated, and there is a variety of methodologies and approaches for studying and interpreting this phenomenon, while at the same time there is little comparability among studies of actual cluster performance. The conceptual framework of clustering suggests that clusters affect performance, but theory makes little prediction as to the ultimate distribution of the value being created by clusters. This thesis takes the case of Eastern European countries for two reasons. One is that clusters, as coopetitive environments, are a new phenomenon there, since the previous centrally planned system did not allow for such types of firm organization. The other is that, as new EU member states, they have been subject to the increased popularization of the cluster policy approach by the European Commission, especially in the framework of the National Reform Programmes related to the Lisbon objectives. The originality of the work lies in the fact that, starting from an overview of theoretical contributions on clustering, it offers a comparative empirical study of clusters in transition countries. There have been very few examples in the literature that attempt to examine cluster performance in a comparative cross-country perspective. It adds to this an analysis of cluster policies and their implementation, or lack thereof, as a way to analyse how the cluster concept has been introduced to transition economies. Our findings show that the implementation of cluster policies does vary across countries, with some countries having embraced it more than others. The specific modes of implementation, however, are very similar, based mostly on soft measures such as funding for cluster initiatives, usually directed towards the creation of cluster management structures or cluster facilitators. They are essentially founded on a common assumption that the added value of clusters lies in the creation of linkages among firms, human capital, skills and knowledge at the local level, most often understood as the regional level. Oftentimes geographical proximity is not a necessary element in the application process, and cluster applications are very similar to network memberships. Cluster mapping is rarely a factor in the selection of cluster initiatives for funding, and the related question of critical mass and expected outcomes is not considered. In fact, monitoring and evaluation are not elements of the cluster policy cycle which have received much attention. Bulgaria and the Czech Republic are the countries which have implemented cluster policies most decisively, Hungary and Poland have made significant efforts, while Slovakia and Romania have only sporadically and not systematically used cluster initiatives.
When examining whether, in fact, firms located within regional clusters perform better and are more efficient than similar firms outside clusters, we do find positive results across countries and across sectors. The only country with a negative impact from being located in a cluster is the Czech Republic.
Abstract:
This doctoral dissertation is triggered by an emergent problem: how can firms reinvent themselves? Continuity- and change-oriented decisions fundamentally shape, over time, the activities and potential revenues of organizations and other adaptive systems, but both types of actions draw upon limited resources and rely on different organizational routines and capabilities. Most organizations appear to have difficulties in making trade-offs, so that it is easier to overinvest in one of them than to successfully achieve a mixture of both. Nevertheless, theory and empirical evidence suggest that too little of either may reduce performance, indicating a need to learn more about how organizations reconcile these tensions. In the first paper, I start from the consideration that rapid changes in competitive environments increasingly require firms to be "ambidextrous", implementing organizational mechanisms and structures that allow continuity- and change-oriented activities to be pursued at the same time. More specifically, I show that continuity- and change-related decisions cannot be confined either inside or outside the firm, but span over time across distinct decision domains located within and beyond the organizational boundaries. Reconciling static and dynamic perspectives on ambidexterity, I conceptualize a firm's strategy as a bundle of decisions about product attributes and components of the production team, proposing a multidimensional and dynamic model of structural ambidexterity that explains why and how firms can manage conflicting pressures for continuity and change in the context of new products. In the second study I note that rigorous systematic evidence documenting the success of ambidextrous organizations is lacking, and that there has been very little investigation of how firms deal with continuity and change in new products. How should the transition from one successful product to another be managed? What should change and what should be kept? Incumbents that deal with series of products over time need to update their offerings in order to have the most relevant attributes for prospective clients without disappointing the current customer base. They need to both match and anticipate consumers' preferences, blending something old with something new to satisfy current demand and enlarge the herd by appealing to newer audiences. This paper contributes to strategic renewal and ambidexterity-related research with the first empirical assessment of a positive consumer response to ambidexterity in new products. This study also provides a practical method to monitor over time the degree to which a brand or a firm is continuity- or change-oriented and to evaluate different strategy profiles across two decision domains that play a pivotal role in new products: product attributes and components of the production team.
Abstract:
My project explores and compares different forms of gender performance in contemporary art and visual culture from a perspective centered on photography. Thanks to its attesting power, this medium can work as a ready-made. In fact, during the 20th century it played a key role in the cultural emancipation of the body, which (to use Michel Foucault's expression) has now become «the zero point of the world». Through performance the body proves to be a living material of expression and communication, while photography ensures the recording of any ephemeral event that happens in time and space. My questioning approach considers gender-constructed imagery from the 1990s to the present in order to investigate how photography's strong aura of realism promotes and allows fantasies of transformation. The contemporary fascination with gender (especially in art and fashion) represents a crucial issue in the global context of postmodernity and is manifested in a variety of visual media, from photography to video and film. Moreover, the internet, along with its digital transmission of images, has deeply affected our world (from culture to everyday life), leading to a postmodern preference for performativity over the more traditional and linear forms of narrativity. As a consequence, individual borders get redefined by the skin itself, which (dissected through instant vision) turns into a ductile material of mutation and hybridization in the service of identity. My critical assumptions are drawn from the most relevant changes that occurred in philosophy during the last two decades as a result of the contributions of Jacques Lacan, Michel Foucault, Jacques Derrida and Gilles Deleuze, who developed a cross-disciplinary and comparative approach to interpret the crisis of modernity. They have profoundly influenced feminist studies, so that the category of gender has been reassessed in contrast with sex (as a biological connotation) and in relation to history, culture and society. The ideal starting point of my research is the year 1990. I chose it as the approximate historical moment when the intersection of race, class and gender was placed at the forefront of international artistic production concerned with identity, diversity and globalization. Such issues had been explored throughout the 1970s, but it was only from the mid-1980s onward that they began to be articulated more consistently. Published in 1990, the book "Gender Trouble: Feminism and the Subversion of Identity" by Judith Butler marked an important breakthrough by linking gender to performance as well as by investigating the intricate connections between theory and practice, embodiment and representation. It inspired subsequent research in a variety of disciplines, art history included. In the same year Teresa de Lauretis launched the definition of queer theory to challenge the academic perspective in gay and lesbian studies. In the meantime, the rise of Third Wave Feminism in the US introduced a racially and sexually inclusive vision of the global situation in order to reflect on subjectivity, new technologies and popular culture in connection with gender representation. These conceptual tools have enabled prolific readings of contemporary cultural production, whether fine arts or mass media. After discussing the appropriate framework of my project and taking into account the postmodern globalization of the visual, I have turned to photography to map gender representation both in art and in fashion.
Therefore I have been creating an archive of images around specific topics. I decided to include fashion photography because in the 1990s this genre moved away from the paradigm of an idealized and classical beauty toward a new vernacular allied with lifestyles, art practices, pop and youth culture; as one might expect, the dominant narrative modes in fashion photography are now mainly influenced by cinema and the snapshot. These strategies originate story lines and interrupted narratives, using models' performance to convey a particular imagery where identity issues emerge as an essential part of the fashion spectacle. Focusing on the intersections of gender identities with socially and culturally produced identities, my approach intends to underline how the fashion world has turned to current trends in art photography and, in some cases, to the artists themselves. The growing fluidity of the categories that distinguish art from fashion photography represents a particularly fruitful moment of visual exchange. Though varying over time, the dialogue between these two fields has always been vital; nowadays it can be studied as a result of the close relationship between the contemporary art world and consumer culture. Due to the saturation of postmodern imagery, the feedback between art and fashion has become much more immediate and thus increasingly significant for anyone who wants to investigate the construction of gender identity through performance. In addition, many magazines founded in the 1990s bridged the worlds of art and fashion because some of their designers and even editors were art-school graduates who encouraged innovation. The inclusion of art within such magazines aimed at validating them as a form of art in themselves, supporting a dynamic intersection of music, fashion, design and youth culture: an intersection that also contributed to creating and spreading different gender stereotypes. This general interest in fashion produced many exhibitions of and about fashion itself at major international venues such as the Victoria and Albert Museum in London, the Metropolitan Museum of Art and the Solomon R. Guggenheim Museum in New York. Since then this celebrated success of fashion has been regarded as a typical element of postmodern culture. For this reason I have also based my analysis on some important exhibitions dealing with gender performance, such as "Féminin-Masculin" at the Centre Pompidou in Paris (1995), "Rrose is a Rrose is a Rrose. Gender performance in photography" at the Solomon R. Guggenheim Museum in New York (1997), "Global Feminisms" at the Brooklyn Museum (2007), and "Female Trouble" at the Pinakothek der Moderne in München, together with the workshops dedicated to "Performance: gender and identity" held in June 2005 at the Tate Modern in London. Since 2003, in Italy, we have had Gender Bender, an international festival held annually in Bologna, to explore the gender imagery stemming from contemporary culture. Over a few days this festival offers a series of events ranging from visual arts, performance, cinema and literature to conferences and music. Being aware that no method of research is either race or gender neutral, I have traced these critical paths to question gender identity in a multicultural perspective, taking account of the political implications too. In fact, if visibility may be equated with exposure, we can also read these images as points of intersection of visibility with social power.
Since gender assignations rely so heavily on the visual, the postmodern dismantling of gender certainty through performance has wide-ranging effects that need to be analyzed. In some sense this practice can even contest the dominance of the visual within postmodernism. My visual map of contemporary art and fashion photography includes artists such as Nan Goldin, Cindy Sherman, Hellen van Meene, Rineke Dijkstra, Ed Templeton, Ryan McGinley, Anne Daems, Miwa Yanagi, Tracey Moffat, Catherine Opie, Tomoko Sawada, Vanessa Beecroft and Yasumasa Morimura, among others.
Abstract:
Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology is going to integrate clouds of portable clients and embedded devices exchanging information, through the internet layer, with processing clusters of servers, data centers and high performance computing systems. Even though the whole of society is waiting to embrace this revolution, there is another side to the story. Portable devices require batteries to work away from power plugs, and battery capacity does not scale as the increasing power requirements do. At the other end, processing clusters, such as data centers and server farms, are built upon the integration of thousands of multiprocessors. For each of them, during the last decade the technology scaling has produced a dramatic increase in power density with significant spatial and temporal variability. This leads to power and temperature hot spots, which may cause non-uniform ageing and accelerated chip failure. In addition, all the heat removed from the silicon translates into high cooling costs. Moreover, trends in the ICT carbon footprint show that the run-time power consumption of the whole spectrum of devices accounts for a significant slice of the entire world's carbon emissions. This thesis embraces the full ICT ecosystem and its dynamic power consumption concerns by describing a set of new and promising system-level resource management techniques to reduce power consumption and the related issues for two corner cases: Mobile Devices and High Performance Computing.
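The abstract does not detail the resource management techniques themselves, so the sketch below only illustrates the general flavour of run-time power management with a deliberately simple, utilization-driven DVFS-style frequency selector; thresholds, frequencies and names are assumptions, not the techniques of the thesis.

```python
def select_frequency(utilization, freqs, up_threshold=0.8, down_threshold=0.3):
    """Pick an operating frequency (MHz) from `freqs` based on recent utilization
    in [0, 1]: scale up when the core is nearly saturated, scale down when it is
    mostly idle, otherwise keep a middle setting. Dynamic power grows roughly with
    f * V^2, so running slower when idle saves energy."""
    freqs = sorted(freqs)
    if utilization >= up_threshold:
        return freqs[-1]          # highest frequency: meet the load
    if utilization <= down_threshold:
        return freqs[0]           # lowest frequency: save dynamic power
    return freqs[len(freqs) // 2]

for u in (0.1, 0.5, 0.95):
    print(u, "->", select_frequency(u, [600, 1200, 1800]), "MHz")
```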
Abstract:
To date, the hospital radiological workflow is completing a transition from analog to digital technology. Since digital X-ray detection technologies have become mature, hospitals are taking advantage of the natural device turnover to replace conventional screen-film devices with digital ones. The transition process is complex and involves not just the equipment replacement but also new arrangements for image transmission, display (and reporting) and storage. This work focuses on the characterization of 2D digital detectors with attention to specific clinical applications; the system features linked to image quality are analyzed to assess the clinical performance, the conversion efficiency, and the minimum dose necessary to obtain an acceptable image. The first section overviews the digital detector technologies, focusing on the recent and promising technological developments. The second section contains a description of the characterization methods considered in this thesis, categorized as physical, psychophysical and clinical; theory, models and procedures are described as well. The third section contains a set of characterizations performed on new equipment that appears to be among the most advanced technologies available to date. The fourth section deals with some procedures and schemes employed for quality assurance programs.
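One standard physical figure of merit for detector conversion efficiency is the detective quantum efficiency. Assuming a pre-sampled MTF and a normalized noise power spectrum (NNPS) measured elsewhere (edge method, flat-field images), a minimal sketch of the per-frequency computation DQE(f) = MTF(f)² / (q · NNPS(f)) is shown below; all numerical values are purely illustrative, not measurements from the thesis.

```python
import numpy as np

def dqe(mtf, nnps, fluence):
    """Detective quantum efficiency per spatial-frequency bin:
    DQE(f) = MTF(f)^2 / (q * NNPS(f)),
    with pre-sampled MTF, normalized noise power spectrum NNPS [mm^2] and
    incident photon fluence q [photons/mm^2]."""
    mtf, nnps = np.asarray(mtf), np.asarray(nnps)
    return mtf**2 / (fluence * nnps)

freqs = np.array([0.5, 1.0, 2.0, 3.0])            # cycles/mm
mtf   = np.array([0.95, 0.85, 0.60, 0.40])        # illustrative pre-sampled MTF
nnps  = np.array([5.5e-6, 5.0e-6, 4.5e-6, 4.2e-6])  # illustrative NNPS [mm^2]
print(dict(zip(freqs, np.round(dqe(mtf, nnps, fluence=2.5e5), 3))))
```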
Abstract:
The purpose of this research is to contribute to the literature on organizational demography and new product development by investigating how diverse individual career histories impact team performance. Moreover, we highlight the importance of also considering the institutional context and the specific labour market arrangements in which a team is embedded, in order to interpret correctly the effect of career-related diversity measures on performance. The empirical setting of the study is the videogame industry and the teams in charge of the development of new game titles. Videogame development teams are an ideal setting to investigate the influence of career histories on team performance, since the development of videogames is performed by multidisciplinary teams composed of specialists with a wide variety of technical and artistic backgrounds, who carry out a significant amount of creative thinking. We investigate our research question both with quantitative methods and with a case study on the Japanese videogame industry, one of the most innovative in this sector. Our results show that career histories, in terms of occupational diversity, prior functional diversity and prior product diversity, usually have a positive influence on team performance. However, when the moderating effect of the institutional setting is taken into account, career diversity has different or even opposite effects on team performance, according to the specific national context in which a team operates.
Abstract:
In this thesis we take the first steps towards the systematic application of a methodology for automatically building formal models of complex biological systems. Such a methodology could also be useful to design artificial systems possessing desirable properties such as robustness and evolvability. The approach we follow in this thesis is to manipulate formal models by means of adaptive search methods called metaheuristics. In the first part of the thesis we develop state-of-the-art hybrid metaheuristic algorithms to tackle two important problems in genomics, namely Haplotype Inference by parsimony and the Founder Sequence Reconstruction Problem. We compare our algorithms with other effective techniques in the literature, we show the strengths and limitations of our approaches on various problem formulations and, finally, we propose further enhancements that could possibly improve the performance of our algorithms and widen their applicability. In the second part, we concentrate on Boolean network (BN) models of gene regulatory networks (GRNs). We detail our automatic design methodology and apply it to four use cases which correspond to different design criteria and address some limitations of GRN modeling by BNs. Finally, we tackle the Density Classification Problem with the aim of showing the learning capabilities of BNs. Experimental evaluation of this methodology shows its efficacy in producing networks that meet our design criteria. Our results, consistently with what has been found in other works, also suggest that networks manipulated by a search process exhibit a mixture of characteristics typical of different dynamical regimes.
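As a bare-bones illustration of the idea of a metaheuristic manipulating a formal model, the sketch below represents a random Boolean network by its truth tables and runs a stochastic local search that flips single truth-table entries, keeping flips that do not worsen a caller-supplied objective. This is a generic stand-in under assumed names, not the hybrid algorithms or design criteria used in the thesis.

```python
import random

def random_bn(n, k, rng):
    """Random Boolean network: each of the n nodes has k inputs and a truth
    table with 2**k entries."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """One synchronous update of the whole network."""
    out = []
    for ins, table in zip(inputs, tables):
        idx = sum(state[j] << b for b, j in enumerate(ins))
        out.append(table[idx])
    return out

def local_search(n, k, objective, iters=2000, seed=0):
    """Stochastic local search over truth-table entries: flip one bit at a time
    and keep the flip if the caller-supplied objective does not get worse."""
    rng = random.Random(seed)
    inputs, tables = random_bn(n, k, rng)
    best = objective(inputs, tables)
    for _ in range(iters):
        i, j = rng.randrange(n), rng.randrange(2 ** k)
        tables[i][j] ^= 1                      # candidate move: flip one entry
        score = objective(inputs, tables)
        if score >= best:
            best = score                       # accept (sideways moves allowed)
        else:
            tables[i][j] ^= 1                  # reject: undo the flip
    return inputs, tables, best

# Toy objective: reward networks whose all-zeros state is a fixed point
obj = lambda ins, tabs: -sum(step([0] * len(tabs), ins, tabs))
print(local_search(n=8, k=2, objective=obj, iters=500)[2])
```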
Abstract:
Motivated by the need to understand what the underlying forces are that trigger network evolution, we develop a multilevel, theoretical and empirically testable model to examine the relationship between changes in the external environment and network change. We refer to network change as the dissolution or replacement of an interorganizational tie, adding also the case of the formation of new ties with new or preexisting partners. Previous research has paid scant attention to the organizational consequences of quantum change enveloping entire industries, in favor of an emphasis on continuous change. To highlight radical change we introduce the concept of environmental jolt. The September 11 terrorist attacks provide us with a natural experiment to test our hypotheses on the antecedents and the consequences of network change. Since network change can be explained at multiple levels, we incorporate firm-level variables as moderators. The empirical setting is the global airline industry, which can be regarded as a constantly changing network of alliances. The study reveals that firms react to environmental jolts by forming homophilous ties and transitive triads, as opposed to non-jolt periods. Moreover, we find that, all else being equal, firms that adopt a brokerage posture have positive returns. However, we find that in the face of an environmental jolt brokerage relates negatively to firm performance. Furthermore, we find that the negative relationship between brokerage and performance during an environmental jolt is more significant for larger firms. Our findings suggest that jolts are an important predictor of network change, that they significantly affect operational returns, and that they should thus be incorporated in studies of network dynamics.
Abstract:
Critical lower limb ischemia is a severe disease. A common approach is infrainguinal bypass. Synthetic vascular prostheses are good conduits in high-flow, low-resistance conditions but perform poorly as small-diameter vessel grafts. A new approach is the use of native decellularized vascular tissues. Cell-free vessels are expected to have improved biocompatibility compared to synthetic ones and are optimal natural 3D matrix templates for driving stem cell growth and tissue assembly in vivo. Decellularization of tissues represents a promising field for regenerative medicine, with the aim of developing a methodology to obtain small-diameter allografts to be used as natural scaffolds suited for in vivo cell growth and pseudo-tissue assembly, eliminating failure caused by immune response activation. Materials and methods. Umbilical cord-derived mesenchymal cells isolated from human umbilical cord tissue were expanded in advanced DMEM. Immunofluorescence and molecular characterization revealed a stem cell profile. A non-enzymatic protocol, combining hypotonic shock and a low-concentration ionic detergent, was used to decellularize vessel segments. Cells were seeded onto cell-free scaffolds using a compound of fibrin and thrombin and incubated in DMEM; after 4 days of static culture they were placed for 2 weeks in a flow bioreactor mimicking the cardiovascular pulsatile flow. After dynamic culture, samples were processed for histological, biochemical and ultrastructural analysis. Discussion. Histology showed that in dynamic culture the cells begin to penetrate the extracellular matrix scaffold and to produce components of the ECM, such as collagen fibres. Sirius Red staining showed layers of immature collagen type III, and ultrastructural analysis revealed 30 nm thick collagen fibres, presumably corresponding to the immature collagen. These data confirm the ability of cord-derived cells to adhere to and penetrate a natural decellularized tissue and to start to assemble into new tissue. This achievement makes natural 3D matrix templates prospectively valuable candidates for clinical bypass procedures.