15 results for: asset renewal, asset management, asset decision framework, decision modelling, life cycle management, multi-attribute utility theory, option theory, real options analysis, process modelling, front end engineering, engineering philosophy

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevância: 100.00%

Resumo:

This thesis develops a reference framework for the combined use of two impact-assessment methodologies, LCA and RA, for emerging technologies. The originality of the study lies in having both proposed the framework and applied it to a case study, namely an innovative refrigeration technology based on nanofluids (NF) developed by partners of the European project Nanohex, who collaborated on the studies, particularly on the inventory of the required data. The complexity of the study lies both in the difficult integration of two methodologies conceived for different purposes and structured to serve those purposes, and in the application sector, which, although rapidly expanding, suffers from serious information gaps concerning production processes and the behaviour of the substances involved. The framework was applied to the production of an alumina nanofluid (NF) via two production routes (single-stage and two-stage) to assess and compare the impacts on human health and the environment. It should be noted that the LCA was quantitative but did not consider the impacts of nanomaterials in the toxicity categories. The RA, by contrast, was qualitative, owing to the above-mentioned lack of toxicological and exposure parameters, and focused on the category of workers; it was therefore assumed that releases to the environment during the production phase are negligible. The qualitative RA used dedicated software, Stoffenmanager Nano, which makes it possible to prioritise inhalation risks in the workplace. The framework follows a four-phase procedure: DEFINITION OF THE TECHNOLOGICAL SYSTEM, DATA COLLECTION, RISK ASSESSMENT AND IMPACT QUANTIFICATION, INTERPRETATION.

Relevância: 100.00%

Resumo:

The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet-switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:

• The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
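As a small illustration of the packet-switching, point-to-point paradigm mentioned above, a deterministic dimension-ordered (XY) routing function for a 2D mesh NoC can be sketched as follows. This is a generic textbook scheme, not the ×pipes routing implementation; the function name and coordinate convention are assumptions for illustration:

```python
def xy_route(src, dst):
    """Dimension-ordered (XY) routing on a 2D mesh NoC.

    Returns the list of (x, y) hops from src to dst, moving first
    along the X dimension, then along Y. The scheme is deterministic
    and deadlock-free on a mesh, which is why it is a common baseline
    in NoC designs.
    """
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    step = 1 if dx > x else -1
    while x != dx:                 # route along X first
        x += step
        path.append((x, y))
    step = 1 if dy > y else -1
    while y != dy:                 # then along Y
        y += step
        path.append((x, y))
    return path
```

Because every packet between the same pair of nodes takes the same path, in-order delivery comes for free, at the cost of no adaptivity to congestion.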

Relevância: 100.00%

Resumo:

Asset Management (AM) is a set of procedures operable at the strategic, tactical and operational levels for the management of a physical asset's performance, associated risks and costs within its whole life cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the technique to adopt is rather abstruse. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late may entail extra expenses en route, in addition to the cost of the rehabilitation exercise per se. One is confronted with a typical Hamlet-esque dilemma: 'to repair or not to repair'; or, put another way, 'to replace or not to replace'. The decision in this case is governed by three factors, not necessarily interrelated: quality of customer service, costs, and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and keeping the asset in good working condition for as long as possible. Effective planning is used to target maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning.
The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between investment in replacing an asset and operational expenditure on maintaining it. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a vis-à-vis comparison between maintenance and replacement expenditures. The cost of maintaining the assets is described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast the life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides an insight into the various definitions of 'asset lifetime': service life, economic life and physical life. The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines, to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include in estimating maintenance costs for the existing assets, the more precise the estimation of the expected service life will be.
The ability to include social costs makes it possible to compute the asset life based not only on its physical characterisation but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is the effort to demonstrate that it is possible to include in decision-making factors such as the cost of the risk associated with a decline in the level of performance, the extent of this deterioration, and the asset's depreciation rate, without treating age as the sole criterion for replacement decisions.
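The replacement rule described above (replace when the annual maintenance-plus-risk cost of the existing pipe exceeds the annuity of an equivalent new one) can be sketched in a few lines. The function names, the cost function and all numeric parameters are hypothetical illustrations, not values from the Oslo case study:

```python
def annuity(investment, rate, lifetime):
    """Equivalent annual cost of a new asset (capital recovery factor)."""
    return investment * rate / (1 - (1 + rate) ** -lifetime)

def replacement_year(maintenance_cost, investment, rate, lifetime, horizon=100):
    """First year in which the annual maintenance-plus-risk cost of the
    existing pipe exceeds the annuity of an equivalent new pipe.

    maintenance_cost: function mapping asset age (years) to annual cost,
    including risk cost. Returns None if no crossover within the horizon.
    """
    a = annuity(investment, rate, lifetime)
    for year in range(1, horizon + 1):
        if maintenance_cost(year) > a:
            return year
    return None

# Hypothetical cost function: maintenance grows 8% per year from 500.
cost = lambda age: 500 * 1.08 ** age
year = replacement_year(cost, investment=20000, rate=0.04, lifetime=50)
```

In this toy setting the annuity of the new pipe is about 931 per year, so the crossover (and hence the suggested replacement year) falls where the rising maintenance curve passes that level.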

Relevância: 100.00%

Resumo:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons:

• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems.
• On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.

This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers. In fact, at this level there is a lack of information about user application activity, and consequently about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of a middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system.
A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) in order to improve the programmability and performance efficiency of such platforms.

Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs)

Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more importantly, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks on the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements, and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: this kind of optimization problem is very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve it in reasonable time.
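For context, one of the simplified heuristic approaches alluded to above is greedy list scheduling of the precedence-constrained task graph. The sketch below is a generic baseline of that kind, not the exact optimization methods developed in this dissertation, and its tie-breaking rule (longest task first) is an assumption:

```python
def list_schedule(durations, preds, n_procs):
    """Greedy list scheduling of precedence-constrained tasks (DAG).

    durations: {task: duration}; preds: {task: set of predecessor tasks}.
    Each chosen task starts as soon as all its predecessors have finished
    and a processor is available. Returns (start_times, makespan).
    """
    finish = {}                      # task -> finish time
    proc_free = [0.0] * n_procs      # next free time per processor
    start = {}
    remaining = set(durations)
    while remaining:
        # tasks whose predecessors have all been scheduled
        ready = [t for t in remaining
                 if all(p in finish for p in preds.get(t, ()))]
        t = max(ready, key=lambda t: durations[t])   # longest task first
        earliest = max((finish[p] for p in preds.get(t, ())), default=0.0)
        i = min(range(n_procs), key=lambda i: max(proc_free[i], earliest))
        s = max(proc_free[i], earliest)
        start[t] = s
        finish[t] = s + durations[t]
        proc_free[i] = finish[t]
        remaining.remove(t)
    return start, max(finish.values())
```

Such heuristics run in polynomial time but, as the abstract notes, give no bound on how far the resulting makespan is from the optimum.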
Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gap, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.

Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor

Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, and gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel-matrix driving circuits and is typically proportional to the panel area. As a result, its contribution is also likely to be considerable in future mobile appliances.
To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques to change the image content so as to reduce the power associated with crystal polarization; others aim at decreasing the backlight level while compensating for the luminance reduction, limiting the user-perceived quality degradation by means of pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement a hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.

Thesis overview

The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs. The methodology is based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined stream-oriented applications on top of distributed memory architectures with messaging support.
We tackled the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 surveys several energy-efficient software techniques present in the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
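The backlight autoregulation principle underlying the second part (dim the backlight, then scale pixel values up so perceived luminance is preserved) can be sketched in a few lines. This illustrates the general technique only, not the dissertation's hardware-assisted implementation; the function name and the 8-bit saturation model are assumptions:

```python
def backlight_compensate(pixels, dim_factor):
    """Backlight dimming with luminance compensation.

    The backlight is dimmed to dim_factor (0 < dim_factor <= 1); each
    pixel value is scaled up by 1/dim_factor so that the perceived
    luminance (roughly pixel * backlight) is preserved, except where
    values saturate at the 8-bit maximum. Power saved is roughly
    proportional to the backlight reduction.
    """
    gain = 1.0 / dim_factor
    return [min(255, round(p * gain)) for p in pixels]

# Dimming to 80%: a mid-grey pixel is compensated exactly, while a
# near-white pixel saturates, which is the source of the small QoS loss.
out = backlight_compensate([128, 240], 0.8)
```

The engineering point is where this per-pixel multiply runs: doing it on the CPU is costly, whereas an image processing unit can apply the gain essentially for free.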

Relevância: 100.00%

Resumo:

Waste, as a class of objects, engages all human institutions in a struggle to define the place it occupies and hence the value it assumes. In this dynamic, waste management becomes a total social fact that involves all human institutions in a territorialised struggle over definitions. The history of the environmental movement shows a shift from unease towards the object itself to unease towards the ideas that generate it. Ecological modernisation and democratic modernisation seem, for a certain period, to go hand in hand. In recent cases of conflict, and in an in-depth case study of a provincial waste-management plan, the anticipatory character of environmental activism is making strategic investments and outcomes increasingly costly and uncertain. The very principles of the policies are being called into question. Sustainability is to be sought in a relativisation of policy principles and technical assessment tools (e.g. LCA) towards greater participation by all actors. A governance model is proposed that starts from territorial administrative coordination of logistics networks, hence a geographical adjustment of the ATOs and a greater role for them in managing the coordination and planning process. These actions must in turn open up to flows (ecological and economic) and to their reference actors: from multi-utility companies to environmentalists. Finally, a moment of democratic control is needed, which can serve an arbitral function in conflicts between actors or a verification function. The research moves between history and philosophy, empirical research and theoretical reflection. Active inquiry techniques, such as focus groups and interviews, were also used.

Relevância: 100.00%

Resumo:

The advent of distributed and heterogeneous systems has laid the foundation for the birth of new architectural paradigms, in which many separate and autonomous entities collaborate and interact with the aim of achieving complex strategic goals impossible to accomplish on their own. A non-exhaustive list of systems targeted by such paradigms includes Business Process Management, Clinical Guidelines and Careflow Protocols, and Service-Oriented and Multi-Agent Systems. It is largely recognized that engineering these systems requires novel modeling techniques. In particular, many authors claim that an open, declarative perspective is needed to complement the closed, procedural nature of state-of-the-art specification languages. For example, the ConDec language has recently been proposed to target the declarative and open specification of Business Processes, overcoming the over-specification and over-constraining issues of classical procedural approaches. On the one hand, the success of such novel modeling languages strongly depends on their usability by non-IT-savvy users: they must provide an appealing, intuitive graphical front-end. On the other hand, they must be amenable to verification, in order to guarantee the trustworthiness and reliability of the developed model, as well as to ensure that the actual executions of the system effectively comply with it. In this dissertation, we claim that Computational Logic is a suitable framework for dealing with the specification, verification, execution, monitoring and analysis of these systems. We propose to adopt an extended version of the ConDec language for specifying interaction models with a declarative, open flavor.
We show how all the (extended) ConDec constructs can be automatically translated to the CLIMB Computational Logic-based language, and illustrate how its corresponding reasoning techniques can be successfully exploited to provide support and verification capabilities along the whole life cycle of the targeted systems.
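As a minimal illustration of the kind of compliance checking involved, ConDec's response(a, b) constraint (every occurrence of activity a must eventually be followed by an occurrence of b) can be checked over a finished execution trace as follows. This sketch assumes distinct activity names and is only a toy stand-in for the CLIMB reasoning machinery:

```python
def response_ok(trace, a, b):
    """Check the ConDec response(a, b) constraint on a complete trace.

    Walks the trace once, tracking whether there is an occurrence of a
    that has not yet been followed by b; the trace complies iff no such
    pending obligation remains at the end.
    """
    expecting_b = False
    for event in trace:
        if event == a:
            expecting_b = True       # a new obligation is raised
        elif event == b:
            expecting_b = False      # b discharges all pending obligations
    return not expecting_b
```

A run-time monitor works the same way, except that a pending obligation at the current point is reported as "possibly violated" rather than as a definitive violation, since b may still occur.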

Relevância: 100.00%

Resumo:

The question "is artificial nutrition and hydration (ANH) a therapy or not?" is one of the key points of end-of-life issues in Italy, since it was (and still is) a strategic and crucial point of the Italian bioethics debate about the last phases of human life: determining whether ANH is a therapy implies the possibility of including it in the list of treatments that could be mentioned for refusal within a living-will document. But who is entitled to decide and judge whether ANH is a therapy or not? Scientists? The legislator? Judges? Patients? At first sight this issue seems just a matter of science, but there is more at stake than a scientific definition. According to several scholars, we are in the era of post-academic science, in which science broadens discussion, production, negotiation and decision to social groups beyond the scientific communities. In this process, called co-production, on the one hand scientific knowledge derives from the interaction between scientists and society at large; on the other hand, science is functional to the co-production of social order. The continuous negotiation over which science is to be used in social decisions simply mirrors the negotiation over different ways to structure and interpret society. Thus, in the interaction between science and law, deciding what kind of science is suitable for a specific kind of law presupposes a well-defined idea of society behind the choice. I have analysed both the legislative path (still in progress) of the living-will act in Italy and Eluana Englaro's judicial case (which somehow collapsed into the living-will act negotiation), using official documents (hearings, texts of official conferences, committee comments and ruling texts) and interviewing key actors in the two processes from a science communication point of view (who speaks in the name of science? Who defines what a therapy is?
And how do they do so?), finding support in the theoretical framework of Science & Technology Studies (STS).

Relevância: 100.00%

Resumo:

This PhD thesis reports on car fluff management, recycling and recovery. Car fluff is the residual waste produced by car recycling operations, particularly by hulk shredding. Car fluff is also known as Automotive Shredder Residue (ASR); it is made of plastics, rubbers, textiles, metals and other materials, and it is very heterogeneous both in composition and in particle size. In fact, fines may amount to about 50%, making it difficult to sort out recyclable materials or to exploit ASR's heat value by energy recovery. This three-year study started with an assessment of the state of the art of Italian End-of-Life Vehicle (ELV) recycling. A national recycling trial revealed the Italian recycling rate to be around 81% in 2008, while the European Community recycling target is set at 85% by 2015. Consequently, following the Industrial Ecology framework, a life cycle assessment (LCA) was conducted, revealing that sorting and recycling the polymers and metals contained in car fluff, followed by recovering the residual energy, is the route with the best environmental outlook. This result guided the second-year investigation, which involved pyrolysis trials on pretreated ASR fractions aimed at identifying which processes could be suitable for an industrial-scale ASR treatment plant. Sieving followed by flotation gave good results in the thermochemical conversion of polymers, with polyolefins giving an excellent conversion rate. This factor triggered ecodesign considerations. Ecodesign, together with LCA, is one of the pillars of Industrial Ecology; it consists of design for recycling and design for disassembly, both aimed at improving the speed of dismantling car components and at substituting non-recyclable materials. Finally, during the last year, innovative plants and technologies for metal recovery from car fluff were visited and tested worldwide in order to design a new car fluff treatment plant aimed at ASR energy and material recovery.

Relevância: 100.00%

Resumo:

Over the last three decades, international agricultural trade has grown significantly. Technological advances in transportation logistics and storage have created opportunities to ship anything almost anywhere. Bilateral and multilateral trade agreements have also opened new pathways to an increasingly global marketplace. Yet international agricultural trade is often constrained by differences in regulatory regimes. The impact of "regulatory asymmetry" is particularly acute for small and medium-sized enterprises (SMEs) that lack the resources and expertise to operate successfully in markets with substantially different regulatory structures. As governments seek to encourage the development of SMEs, policy makers often confront the critical question of what ultimately motivates SME export behavior. Specifically, there is considerable interest in understanding how SMEs confront the challenges of regulatory asymmetry. Neoclassical models of the firm generally emphasize expected profit maximization under uncertainty; however, these approaches do not adequately explain the entrepreneurial decision under regulatory asymmetry. Behavioral theories of the firm offer a far richer understanding of decision making by taking into account aspirations and adaptive performance in risky environments. This paper develops an analytical framework for the decision making of a single agent. Considering risk, uncertainty and opportunity cost, the analysis focuses on the export behavior of an SME in a situation of regulatory asymmetry. Drawing on the experience of a fruit processor in Muzaffarpur, India, who must consider different regulatory environments when shipping fruit treated with sulfur dioxide, the study dissects the firm-level decision using @Risk, a Monte Carlo computational tool.
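The kind of Monte Carlo analysis performed with @Risk can be sketched in plain Python: simulate the export outcome under a border-rejection risk arising from regulatory asymmetry, and compare its expected profit with the risk-free domestic alternative. All prices, the freight cost and the rejection probability below are hypothetical illustrations, not figures from the Muzaffarpur case:

```python
import random

def export_decision(n_sims=10000, seed=42):
    """Monte Carlo sketch of an SME's export decision under
    regulatory asymmetry. All parameters are hypothetical.

    Exporting earns a higher price, but the shipment may be rejected
    at the border (e.g. residue limits differ between markets); the
    domestic market price is the risk-free opportunity cost.
    """
    rng = random.Random(seed)
    domestic_price = 100.0
    export_price = 160.0
    freight = 20.0
    reject_prob = 0.15           # chance the shipment fails the foreign standard
    profits = []
    for _ in range(n_sims):
        if rng.random() < reject_prob:
            profits.append(-freight)              # rejected: freight lost, no sale
        else:
            profits.append(export_price - freight)
    expected_export = sum(profits) / n_sims
    return expected_export, domestic_price

exp_profit, domestic = export_decision()
# Export is preferred when expected export profit exceeds the domestic profit.
```

A behavioral extension would replace the simple expected-value comparison with an aspiration level, so that the agent rejects the export option when the simulated probability of falling below the aspiration is too high.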

Relevância: 100.00%

Resumo:

Electronic business represents the new development perspective for worldwide trade. Together with the idea of e-business, and the need to exchange business messages between trading partners, the concept of business-to-business (B2B) integration arose. B2B integration is becoming necessary to allow partners to communicate and exchange business documents, like catalogues, purchase orders, reports and invoices, overcoming architectural, applicative and semantic differences, according to the business processes implemented by each enterprise. Business relationships can be very heterogeneous, and consequently there are various ways to integrate enterprises with each other. Moreover, nowadays not only large enterprises but also small and medium-sized enterprises are moving towards e-business: more than two-thirds of Small and Medium Enterprises (SMEs) use the Internet as a business tool. One of the business areas actively facing the interoperability problem is supply chain management. In order to really allow SMEs to improve their business and to fully exploit ICT technologies in their business transactions, three main players must be considered and joined: the new emerging ICT technologies, the scenario and requirements of the enterprises, and the world of standards and standardisation bodies. This thesis presents the definition and development of an interoperability framework (and the associated standardisation initiatives) to provide the Textile/Clothing sector with a shared set of business documents and protocols for electronic transactions. Considering also some limitations, the thesis proposes an ontology-based approach to improve the functionalities of the developed framework and, exploiting semantic web technologies, to improve the standardisation life-cycle, intended as the development, dissemination and adoption of B2B protocols for a specific business domain.
The use of ontologies allows the semantic modelling of knowledge domains, upon which it is possible to develop a set of components for better management of B2B protocols and to ease their comprehension and adoption by the target users.

Relevância: 100.00%

Resumo:

Clusters have increasingly become an essential part of policy discourses at all levels (EU, national, regional) dealing with regional development, competitiveness, innovation, entrepreneurship and SMEs. These impressive efforts to promote the concept of clusters in the policy-making arena have been accompanied by much less academic and scientific research investigating the actual economic performance of firms in clusters and the design and execution of cluster policies, or going beyond singular case studies to a more methodologically integrated and comparative approach to the study of clusters and their real-world impact. The theoretical background is far from consolidated; there is a variety of methodologies and approaches for studying and interpreting this phenomenon, and at the same time little comparability among studies of actual cluster performance. The conceptual framework of clustering suggests that clusters affect performance, but theory makes little prediction as to the ultimate distribution of the value being created by clusters. This thesis takes the case of Eastern European countries for two reasons. One is that clusters, as coopetitive environments, are a new phenomenon there, as the previous centrally planned system did not allow for such types of firm organization. The other is that, as new EU member states, they have been subject to the increased popularization of the cluster policy approach by the European Commission, especially in the framework of the National Reform Programmes related to the Lisbon objectives. The originality of the work lies in the fact that, starting from an overview of theoretical contributions on clustering, it offers a comparative empirical study of clusters in transition countries. There have been very few attempts in the literature to examine cluster performance in a comparative cross-country perspective.
It adds to this an analysis of cluster policies and their implementation or lack of such as a way to analyse the way the cluster concept has been introduced to transition economies. Our findings show that the implementation of cluster policies does vary across countries with some countries which have embraced it more than others. The specific modes of implementation, however, are very similar, based mostly on soft measures such as funding for cluster initiatives, usually directed towards the creation of cluster management structures or cluster facilitators. They are essentially founded on a common assumption that the added values of clusters is in the creation of linkages among firms, human capital, skills and knowledge at the local level, most often perceived as the regional level. Often times geographical proximity is not a necessary element in the application process and cluster application are very similar to network membership. Cluster mapping is rarely a factor in the selection of cluster initiatives for funding and the relative question about critical mass and expected outcomes is not considered. In fact, monitoring and evaluation are not elements of the cluster policy cycle which have received a lot of attention. Bulgaria and the Czech Republic are the countries which have implemented cluster policies most decisively, Hungary and Poland have made significant efforts, while Slovakia and Romania have only sporadically and not systematically used cluster initiatives. When examining whether, in fact, firms located within regional clusters perform better and are more efficient than similar firms outside clusters, we do find positive results across countries and across sectors. The only country with negative impact from being located in a cluster is the Czech Republic.

Resumo:

This Doctoral Dissertation is triggered by an emergent trend: firms are increasingly referring to investments in corporate venture capital (CVC) as a means to create new competencies and foster the search for competitive advantage through the use of external resources. CVC is generally defined as the practice by non-financial firms of placing equity investments in entrepreneurial companies. Thus, CVC can be interpreted (i) as a key component of corporate entrepreneurship (acts of organizational creation, renewal, or innovation that occur within or outside an existing organization) and (ii) as a particular form of venture capital (VC) investment where the investor is not a traditional financial institution but an established corporation. My Dissertation thus refers simultaneously to two streams of research: corporate strategy and venture capital. In particular, I directed my attention to three topics of particular relevance for better understanding the role of CVC. In the first study, I started from the consideration that competitive environments with rapid technological change increasingly force established corporations to access knowledge from external sources. Firms thus extensively engage in external business development activities through different forms of collaboration with partners. While the underlying process common to these mechanisms is one of knowledge access, the mechanisms themselves are substantially different. The aim of the first study is to understand how corporations choose among CVC, alliance, joint venture and acquisition. I addressed this issue adopting a multi-theoretical framework in which the resource-based view and real options theory are integrated.
While the first study mainly looked into the use of external resources for corporate growth, in the second work I combined an internal and an external perspective to examine the relationship between CVC investments (exploiting external resources) and a more traditional strategy to create competitive advantage, that is, corporate diversification (based on internal resources). Adopting an explorative lens, I investigated how these different modes of renewing current corporate capabilities interact with each other. More precisely, is CVC a complement or a substitute to corporate diversification? Finally, the third study focused on the more general field of VC to investigate (i) how VC firms evaluate the patent portfolios of their potential investee companies and (ii) whether the ability to evaluate technology and intellectual property varies depending on the type of investor, in particular with regard to the distinctions between specialized and generalist VCs and between independent and corporate VCs. This topic is motivated by two observations. First, it is not yet clear which determinants of patent value are primarily considered by VCs in their investment decisions. Second, VCs are not all alike in terms of technological experience, and these differences need to be taken into account.

Resumo:

Neuronal networks exhibit diverse types of plasticity, including the activity-dependent regulation of synaptic functions and the refinement of synaptic connections. In addition, the continuous generation of new neurons in the “adult” brain (adult neurogenesis) represents a powerful form of structural plasticity, establishing new connections and possibly integrating into pre-existing neuronal circuits (Kempermann et al, 2000; Ming and Song, 2005). Neurotrophins, a family of neuronal growth factors, are crucially involved in the modulation of activity-dependent neuronal plasticity. The first evidence for the physiological importance of this role evolved from the observations that the local administration of neurotrophins has dramatic effects on the activity-dependent refinement of synaptic connections in the visual cortex (McAllister et al, 1999; Berardi et al, 2000; Thoenen, 1995). Moreover, the local availability of critical amounts of neurotrophins appears to be relevant for the ability of hippocampal neurons to undergo long-term potentiation (LTP) of synaptic transmission (Lu, 2004; Aicardi et al, 2004). To achieve a comprehensive understanding of the modulatory role of neurotrophins in integrated neuronal systems, information on the mechanisms of local neurotrophin synthesis and secretion, as well as on the distribution of their cognate receptors, is of crucial importance. In the first part of this doctoral thesis I used electrophysiological approaches and real-time imaging techniques to investigate additional features of the regulation of neurotrophin secretion, namely the capability of the neurotrophin brain-derived neurotrophic factor (BDNF) to undergo synaptic recycling.
In cortical and hippocampal slices as well as in dissociated cell cultures, neuronal activity rapidly enhances the neuronal expression and secretion of BDNF, which is subsequently taken up by neurons themselves but also by perineuronal astrocytes, through the selective activation of BDNF receptors. Moreover, internalized BDNF becomes part of the releasable pool of the neurotrophin, which is promptly recruited for activity-dependent recycling. Thus, we described for the first time that neurons and astrocytes contain an endocytic compartment competent for BDNF recycling, suggesting a specialized form of bidirectional communication between neurons and glia. The mechanism of BDNF recycling is reminiscent of that for neurotransmitters and identifies BDNF as a new modulator implicated in neuro- and glio-transmission. In the second part of this doctoral thesis I addressed the role of BDNF signaling in adult hippocampal neurogenesis. I generated a transgenic mouse model to specifically investigate the influence of BDNF signaling on the generation, differentiation, survival and connectivity of newborn neurons in the adult hippocampal network. I demonstrated that the survival of newborn neurons critically depends on the activation of the BDNF receptor TrkB. The TrkB-dependent decision regarding life or death of these newborn neurons takes place right at the transition point of their morphological and functional maturation. Before newborn neurons start to die, they exhibit a drastic reduction in dendritic complexity and spine density compared to wild-type newborn neurons, indicating that this receptor is required for the connectivity of newborn neurons. Both the failure to become integrated and the subsequent death lead to impaired LTP. Finally, mice lacking functional TrkB in the restricted population of newborn neurons show behavioral deficits, namely increased anxiety-like behavior.
These data suggest that the integration of newly generated neurons into the pre-existing network and their establishment of proper connections are relevant features for regulating the emotional state of the animal.

Resumo:

Over the last decades the Common Agricultural Policy (CAP) has undergone several revisions, more or less planned, which have modified its operational objectives and the instruments used to pursue them. In the agricultural economics literature, several studies have carried out ex-ante analyses of the possible impacts of policy reforms, in particular decoupling, on the allocation of land to different crops and on the adoption of more efficient cultivation techniques. Despite its importance, however, this topic has not yet been addressed as thoroughly as other issues in agriculture. The main gaps lie in the scarcity of ex-ante analyses and of models that include farmers' preferences and expectations. This study evaluates the land investment choices of a farm facing possible post-2013 CAP scenarios, under uncertainty about the specific conditions in which each scenario would occur. The objective is to obtain useful insights into the farmer's investment choices in the presence of uncertainty about the future. The most innovative element of the research consists in the application of a real options approach and in the interaction between the presence of different scenarios on the future of the post-2013 agricultural sector and the uncertainty that weighs on them. The methodology adopted in this work is based on the modelling of a farm, simulating its behaviour in reaction to CAP reforms and to variations in product prices under uncertainty. Using a real options model, the optimal timing of the investment in land purchase (characterized by uncertainty and irreversibility) is evaluated.
The results show that, under uncertainty, it is convenient for the farmer to postpone the decision until after 2013 and, on the basis of the additional information then available, to undertake the investment only under favourable conditions. Variations in product prices influence the choices more than the uncertainty of CAP payments. The real options approach seems to interpret the farmer's behaviour better than the classical Net Present Value approach.
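The timing logic behind this result can be illustrated with a minimal one-period binomial sketch. This is not taken from the thesis: all figures (land value, purchase cost, scenario probabilities, discount factor) are hypothetical assumptions chosen only to show why, under uncertainty and irreversibility, the value of waiting can exceed the classical NPV of investing now.

```python
# Illustrative one-period binomial real-options sketch (hypothetical numbers):
# investing now yields the classical NPV, while deferring lets the investor
# commit only if the uncertain scenario turns out favourable.

def npv_invest_now(value, cost):
    """Classical NPV rule: invest immediately and capture value - cost."""
    return value - cost

def option_to_defer(v_up, v_down, p_up, cost, discount):
    """Value of waiting one period and investing only when favourable."""
    payoff_up = max(v_up - cost, 0.0)      # invest in the good scenario
    payoff_down = max(v_down - cost, 0.0)  # abandon in the bad one (payoff 0)
    return discount * (p_up * payoff_up + (1 - p_up) * payoff_down)

# Hypothetical scenario: land worth 100 today costs 95; next period it is
# worth 130 (favourable scenario, prob. 0.5) or 70 (unfavourable, prob. 0.5).
now = npv_invest_now(100.0, 95.0)                     # 100 - 95 = 5.0
wait = option_to_defer(130.0, 70.0, 0.5, 95.0, 0.95)  # 0.95 * 0.5 * 35 = 16.625

print(now, wait)  # waiting (16.625) dominates investing now (5.0)
```

Even though the immediate NPV is positive, deferral is worth more because the downside is truncated by the option not to invest; this is the asymmetry that a plain NPV comparison misses.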

Resumo:

The prospect of a continuous multiplication of lifestyles, the obsolescence of traditional typological diagrams and the usability of spaces at different territorial scales impose on contemporary architecture the search for new models of living. Limited densities in urban development have produced the erosion of territory and increases in harmful emissions and energy consumption. High-density housing cannot ignore the social imperative of ensuring high-quality, low-cost dwellings for a new target population: students, temporary workers, key workers, foreigners, young couples without children, large families and, in general, people who carry out public services. Social housing strategies have become particularly relevant in regenerating high-density urban outskirts. The choice of this research topic derives from the desire to deal with the recent housing emergency from different perspectives, with a view to contributing to the current literature by proposing some tools for the correct design of social housing, ensuring good-quality, cost-effective and eco-sustainable solutions from the concept phase, through management and maintenance, until the end of the building life cycle. The purpose of the thesis is to define a framework of guidelines that serve as effective instruments for designing social housing. They should also integrate the existing regulations and are mainly intended for those who work in this sector; they would likewise support students who have to cope with this particular residential theme, as well as the users themselves. From the scientific evidence of both the recent specialized literature and the solutions adopted in case studies within the selected metropolitan areas of Milan, London and São Paulo, it is possible to identify the principles of this new design approach, in which the connection between typology, morphology and technology pursues the goal of a high living standard.