904 results for "License to market"


Relevance: 80.00%

Abstract:

Software product line (SPL) engineering offers several advantages in the development of families of software products, such as reduced costs, high quality and a short time to market. A software product line is a set of software-intensive systems, each of which shares a common core set of functionalities but differs from the other products through customization tailored to the needs of individual groups of customers. The differences between products within the family are well understood and organized into a feature model that represents the variability of the SPL. Products can then be built by generating and composing features described in the feature model. Testing of software product lines has become a bottleneck in the SPL development lifecycle, since many of the techniques used have been borrowed from traditional software testing and do not directly take advantage of the similarities between products. This limits the overall gains that can be achieved in SPL engineering. Recent work from both industry and the research community on improving SPL testing has begun to consider this problem, but there is still a need for better testing techniques tailored to SPL development. In this thesis, I make two primary contributions to software product line testing. First, I propose a new definition of testability for SPLs, based on the ability to re-use test cases between products without a loss of fault-detection effectiveness. I build on this idea to identify elements of the feature model that contribute positively and/or negatively towards SPL testability. Second, I provide a graph-based testing approach, called the FIG Basis Path method, that selects products and features for testing based on a feature dependency graph. This method should increase our ability to re-use results of test cases across successive products in the family and reduce testing effort.
I report the results of a case study involving several non-trivial SPLs and show that for these objects, the FIG Basis Path method is as effective as testing all products, but requires us to test no more than 24% of the products in the SPL.
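The FIG Basis Path method itself is defined in the thesis; as a rough, hedged illustration of the underlying idea, the sketch below builds a tiny hypothetical feature dependency graph and greedily keeps source-to-sink paths until every dependency edge is covered, approximating a basis-like set of products to test (the function names and graph are illustrative, not the thesis's actual algorithm).

```python
def all_paths(graph, start, end, path=None):
    # Enumerate every start-to-end path in a DAG of features
    # (nodes) and dependencies (edges).
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        paths.extend(all_paths(graph, nxt, end, path))
    return paths

def basis_like_selection(graph, start, end):
    # Greedily keep paths until every dependency edge is covered,
    # so each kept path exercises at least one new dependency.
    selected, covered = [], set()
    for p in all_paths(graph, start, end):
        edges = set(zip(p, p[1:]))
        if edges - covered:
            selected.append(p)
            covered |= edges
    return selected

# Hypothetical feature dependency graph: root -> {A, B} -> leaf
fdg = {"root": ["A", "B"], "A": ["leaf"], "B": ["leaf"]}
products = basis_like_selection(fdg, "root", "leaf")
```

Each selected path corresponds to one product configuration whose test results could be re-used by products sharing those dependency edges.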

Relevance: 80.00%

Abstract:

Graduate Program in Agribusiness and Development - Tupã

Relevance: 80.00%

Abstract:

The pressure for innovation in technology-based companies, in ever shorter cycles and demanding complex competencies, has driven companies to organize themselves as business cooperation networks, in order to mitigate risks and reduce costs while accelerating time to market. However, despite all the real advantages and opportunities, there are major barriers to overcome. The objective of this research is to understand the process of structuring a cooperation network in the context of technology-based companies. The methodological approach was action research conducted in 30 companies in the electro-electronics sector. The results point to fear of opportunistic behavior as the most critical obstacle to the formation of a business cooperation network.

Relevance: 80.00%

Abstract:

The principles and guidelines of the Brazilian Unified Health System (SUS) impose a care structure based on public policy networks which, combined with the adopted financing model, leads to market failures. This hampers the management of the public health system and the achievement of the SUS's objectives. The institutional characteristics and the heterogeneity of the actors, together with the existence of different healthcare networks, make the overall dynamics of the SUS network analytically complex. There are limits to the use of quantitative methods based on static analysis of retrospective data from the public health system. We therefore propose to approach the SUS as a complex system, using an innovative quantitative methodology based on computational simulation. This article analyzes the challenges and potential of combining cellular-automata modeling with agent-based modeling to simulate the evolution of the SUS service network. Such an approach should allow a better understanding of the organization, heterogeneity, and structural dynamics of the SUS service network, and help minimize the effects of market failures in the Brazilian health system.
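The cellular-automaton half of such a hybrid can be sketched minimally. The grid, states, and update rule below are invented for illustration only (the article does not specify them, and the agent-based half is omitted): each cell represents a territory, state 1 means it has health-service coverage, and coverage diffuses when at least two von Neumann neighbours are covered.

```python
def step(grid):
    # One cellular-automaton step on a 2D grid of health-service
    # coverage: a cell becomes covered (1) when at least two of its
    # von Neumann neighbours are covered, a toy stand-in for
    # network-driven service diffusion. Covered cells stay covered.
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            neigh = sum(
                grid[nr][nc]
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= nr < rows and 0 <= nc < cols
            )
            if neigh >= 2:
                new[r][c] = 1
    return new

grid = [
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
]
after = step(grid)
```

In an agent-based extension, heterogeneous actors (managers, providers, patients) would decide where new services appear instead of a fixed threshold rule.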

Relevance: 80.00%

Abstract:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge, primarily for two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems.
• Multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, and thus provide functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood in both research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers; at this level there is a lack of information about user application activity, and consequently about the impact of power-management decisions on QoS. Even though operating-system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of a middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system.
A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.
Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time, and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements, and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. This is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time.
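A classic member of that heuristic family is greedy list scheduling; the sketch below (not the optimal method this dissertation develops, and with an invented toy task graph) places each ready task on the processor that becomes free earliest, respecting precedence constraints.

```python
def list_schedule(tasks, deps, durations, n_procs):
    # Greedy list scheduling of precedence-constrained tasks: repeatedly
    # pick a ready task (all predecessors finished) and place it on the
    # processor that is free earliest. Assumes `deps` forms a valid DAG.
    finish = {}                      # task -> completion time
    proc_free = [0.0] * n_procs      # next free instant per processor
    done, pending = set(), list(tasks)
    while pending:
        for t in pending:            # first ready task in list order
            if all(d in done for d in deps.get(t, [])):
                break
        pending.remove(t)
        p = min(range(n_procs), key=proc_free.__getitem__)
        ready_at = max([proc_free[p]] + [finish[d] for d in deps.get(t, [])])
        finish[t] = ready_at + durations[t]
        proc_free[p] = finish[t]
        done.add(t)
    return finish

# Hypothetical task graph: a and b run in parallel, c needs both.
finish = list_schedule(
    ["a", "b", "c"], {"c": ["a", "b"]}, {"a": 2, "b": 3, "c": 1}, n_procs=2
)
```

Unlike the complete-search approaches discussed next, a heuristic like this gives no bound on how far its schedule is from the optimum.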
Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructure required by developers to deploy applications on the target MPSoC platforms.
Energy Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor. Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel-matrix driving circuits and is typically proportional to the panel area. As a result, this contribution is also likely to be considerable in future mobile appliances.
To address this issue, companies are proposing low power technologies suitable for mobile applications, supporting low power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with the crystal polarization; others aim at decreasing the backlight level while compensating for the resulting luminance reduction, limiting the perceived quality degradation through pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.
Thesis Overview. The remainder of the thesis is organized as follows. The first part is focused on enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs, based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined stream-oriented applications on top of distributed memory architectures with messaging support.
We tackled the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
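The arithmetic behind backlight dimming with image compensation can be sketched as follows. This is only an illustration of the principle (in the dissertation the per-pixel work is done by the hardware image processing unit, not in software; the pixel values here are invented): the backlight is dimmed by a factor and pixel values are scaled up by its inverse, so perceived luminance (pixel × backlight) is preserved except where pixels saturate.

```python
def dim_with_compensation(frame, backlight_scale):
    # Scale the backlight down by `backlight_scale` (0 < scale <= 1)
    # and compensate by scaling 8-bit pixel values up by the inverse
    # factor, clipped at 255. Where no clipping occurs,
    # pixel * backlight is unchanged, so perceived luminance holds.
    gain = 1.0 / backlight_scale
    return [min(255, round(p * gain)) for p in frame]

# Hypothetical 8-bit luminance samples; halve the backlight power.
frame = [10, 100, 200, 255]
compensated = dim_with_compensation(frame, backlight_scale=0.5)
```

Bright pixels (here 200 and 255) clip at 255, which is the QoS degradation that backlight-autoregulation policies must keep negligible.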

Relevance: 80.00%

Abstract:

In recent years, a growing number of researchers have focused their attention on developing strategies to characterize the ADMET properties of drug candidates as early as possible. This trend stems from the awareness that about half of drugs under development are never marketed because of shortcomings in their ADME characteristics, and that at least half of the molecules that do reach the market still have some toxicological or ADME problem [1]. Indeed, it matters little how active or specific a molecule is: to become a drug it must be well absorbed, distributed in the organism, metabolized neither too quickly nor too slowly, and completely eliminated. Moreover, the molecule and its metabolites should not be toxic to the organism. It is therefore clear that rapid determination of ADMET parameters in the early phases of drug development saves time and money, making it possible to select the most promising compounds from the outset and to discard those with negative characteristics. This thesis is set in this context: it shows the application of a simple technique, biochromatography, to rapidly characterize the binding of libraries of compounds to human serum albumin (HSA). It also shows the use of another, independent technique, circular dichroism, which allows the same drug-protein systems to be studied in solution, giving supplementary information about the stereochemistry of the binding process. HSA is the most abundant protein in blood. It functions as a carrier for a large number of molecules, both endogenous (for example bilirubin, thyroxine, steroid hormones, and fatty acids) and xenobiotic. It also increases the solubility of lipophilic molecules that are poorly soluble in aqueous environments, such as the taxanes.
Binding to HSA is generally stereoselective and occurs at high-affinity binding sites. It is also well known that competition between drugs, or between a drug and endogenous metabolites, can significantly change their free fraction, modifying their activity and toxicity. Because of these properties, HSA can influence both the pharmacokinetic and the pharmacodynamic properties of drugs. It is not unusual for an entire drug development project to be abandoned because of too high an affinity for HSA, too short a half-life, or poor distribution due to weak binding to HSA. From the pharmacokinetic point of view, therefore, HSA is the most important transport protein in plasma. A large number of publications demonstrate the reliability of the biochromatographic technique in the study of biorecognition phenomena between proteins and small molecules [2-6]. My work focused mainly on the use of biochromatography as a method to evaluate the binding characteristics of several series of compounds of pharmaceutical interest to HSA, and on the improvement of this technique. To gain a better understanding of the binding mechanisms of the molecules studied, the same drug-HSA systems were also studied by circular dichroism (CD). Initially, HSA was immobilized on a packed epoxy-silica column, 50 x 4.6 mm internal diameter, using a procedure previously reported in the literature [7] with some small modifications. Briefly, immobilization was carried out by recirculating a solution of HSA, under given conditions of pH and ionic strength, through a previously packed column. The column was then characterized in terms of the amount of correctly immobilized protein, by frontal analysis of L-tryptophan [8].
Next, racemic solutions of molecules known to bind HSA enantioselectively were injected onto the column, to verify that the immobilization procedure had not modified the binding properties of the protein. Once characterized, the column was used to determine the binding percentage of a small series of HIV protease inhibitors (PIs), and to identify their binding site(s). The binding percentage was calculated from the capacity factor (k) of the samples. The value of this parameter in aqueous phase was extrapolated linearly from a plot of log k against the percentage (v/v) of 1-propanol in the mobile phase. Only for two of the five compounds analyzed could the value of k be measured directly in the absence of organic solvent. All the PIs analyzed showed a high binding percentage to HSA: in particular, the values for ritonavir, lopinavir and saquinavir were greater than 95%. These results agree with literature data obtained with an optical biosensor [9]. They are also consistent with the significant reduction of the inhibitory activity of these compounds observed in the presence of HSA, a reduction that seems greater for the compounds that bind the protein more strongly [10]. Competition studies were then performed by zonal chromatography. In this method, a solution of known concentration of a competitor is used as the mobile phase, while small quantities of analyte are injected onto the HSA-functionalized column. The competitors were selected on the basis of their selective binding to one of the main binding sites on the protein: sodium salicylate, ibuprofen and sodium valproate were used as markers of site I, site II and the bilirubin site, respectively.
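The log k extrapolation can be illustrated with a short numerical sketch. The measurements below are invented for illustration, and the bound-fraction formula used, k / (k + 1), is a common simplified approximation rather than the thesis's exact calibration: a least-squares line through (% 1-propanol, log k) points is evaluated at 0% organic modifier to recover the aqueous-phase capacity factor.

```python
def extrapolate_log_k(propanol_pct, log_k):
    # Least-squares line through (x, log k) points, evaluated at
    # x = 0: the capacity factor in purely aqueous mobile phase.
    n = len(propanol_pct)
    mx = sum(propanol_pct) / n
    my = sum(log_k) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(propanol_pct, log_k)) \
        / sum((x - mx) ** 2 for x in propanol_pct)
    intercept = my - slope * mx
    return 10 ** intercept

def bound_fraction(k):
    # Simplified approximation: fraction bound = k / (k + 1).
    return k / (k + 1)

# Hypothetical measurements: log k at 4%, 6%, 8% (v/v) 1-propanol.
k0 = extrapolate_log_k([4, 6, 8], [1.20, 1.00, 0.80])
pct_bound = 100 * bound_fraction(k0)
```

With these toy numbers the extrapolated k at 0% 1-propanol is about 40, corresponding to a binding percentage above 95%, the range reported for ritonavir, lopinavir and saquinavir.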
These studies showed independent binding of the PIs to sites I and II, while weak anti-cooperativity was observed for the bilirubin site. The same drug-protein system was finally investigated in solution by circular dichroism. In particular, the variation of the induced CD signal of an equimolar [HSA]/[bilirubin] complex was monitored following the addition of aliquots of ritonavir, chosen as representative of the series. The results confirm the slight anti-cooperativity for the bilirubin site observed previously in the biochromatographic studies. Subsequently, the same protocol was applied to a monolithic epoxy-silica column, 50 x 4.6 mm, to evaluate the reliability of the monolithic support for biochromatographic applications. The monolithic support showed good chromatographic characteristics in terms of back-pressure, efficiency and stability, as well as reliability in the determination of the binding parameters to HSA. This column was used to determine the HSA binding percentage of a series of polyamine-quinones developed in the context of research on Alzheimer's disease. All the compounds showed a binding percentage above 95%. A correlation was also observed between the binding percentage and the characteristics of the side chain (length and number of amino groups). Competition studies on these compounds were then performed by circular dichroism, which revealed an anti-cooperative effect of the polyamine-quinones at sites I and II, while binding was independent with respect to the bilirubin site. The knowledge acquired with the monolithic support was then applied to a shorter epoxy-silica column (10 x 4.6 mm).
The method for determining the binding percentage used in the previous studies relies on data from several experiments, so considerable time is needed to obtain the final result. Using a shorter column reduces the retention times of the analytes, making the determination of the HSA binding percentage much faster and moving from a medium-throughput analysis to high-throughput screening (HTS). Moreover, the shorter analysis times make it possible to avoid organic solvents in the mobile phase. After characterizing the 10 mm column with the same method described for the other columns, a series of standards was injected at different mobile-phase flow rates, to evaluate the possibility of using high flows. The column was then employed to estimate the binding percentage of a series of molecules with different chemical characteristics. The possibility of using such a short column for competition studies was also evaluated, and the binding of a series of compounds to site I was investigated. Finally, the stability of the column after extensive use was assessed. The use of chromatographic supports functionalized with albumins of different origin (rat, dog, guinea pig, hamster, mouse, rabbit) can be proposed as a future application of these HTS columns. The possibility of obtaining information on the binding of drug candidates to the different albumins would allow a better comparison between data obtained in vitro and data obtained in animal experiments, facilitating the subsequent extrapolation to humans with the speed of an HTS method. It would also reduce the number of animals used in experimentation.
Some works in the literature demonstrate the reliability of columns functionalized with albumins of different origin [11-13]; the use of shorter columns could broaden their applications.

Relevance: 80.00%

Abstract:

The present work offers a comprehensive and comparative study of the different legal and regulatory problems involved in international securitization transactions. First, an introduction to securitization is provided, covering the basic elements of the transaction, followed by its different varieties, including dynamic securitization and synthetic securitization structures. Together with this introduction to the intricacies of the structure, an insight into the influence of securitization on the financial and economic crisis of 2007-2009 is provided, as well as an overview of the process of regulatory competition and cooperation that constitutes the framework for the international aspects of securitization. The next Chapter focuses on the aspects that constitute the foundations of structured finance: the inception of the vehicle, and the transfer of risks associated with the securitized assets, with particular emphasis on the validity of those elements and on how a securitization transaction could be threatened at its root. In this sense, special importance is given to the validity of the trust as an instrument of finance, to the assignment of future receivables or receivables in block, and to the importance of formalities for the validity of corporations, trusts, assignments, etc., and to the interaction of such formalities contained in general corporate, trust and assignment law with those contemplated under specific securitization regulations. The next Chapter (III) then focuses on creditor protection aspects. We provide some insights on the debate on the capital structure of the firm and its inadequacy for assessing the financial soundness problems inherent to securitization, and then proceed to analyze the importance of rules on creditor protection in the context of securitization. The corollary lies in the rules governing insolvency.
In this sense, we distinguish the cases where a party involved in the transaction goes bankrupt from those where the transaction itself collapses. Finally, we focus on the scenario where a substance-over-form analysis may compromise some of the elements of the structure (notably the limited liability of the sponsor, and/or the transfer of assets) by means of veil piercing, substantive consolidation, or recharacterization theories. Once these elements have been covered, the next Chapters focus on the regulatory aspects involved in the transaction. Chapter IV refers to "market" regulations, i.e. those concerned with information disclosure and other rules (appointment of the indenture trustee, and elaboration of a rating by a rating agency) concerning the offering of asset-backed securities to the public. Chapter V, on the other hand, focuses on "prudential" regulation of the entity entrusted with securitizing assets (the so-called Special Purpose Vehicle) and of other entities involved in the process. Regarding the SPV, reference is made to licensing requirements, restriction of activities, and governance structures to prevent abuses. Regarding the sponsor of the transaction, the focus is on provisions on sound originating practices and the servicing function. Finally, we study accounting and banking regulations, including the Basel I and Basel II Frameworks, which determine the consolidation of the SPV and the de-recognition of the securitized assets from the originating company's balance sheet, as well as the subsequent treatment of those assets, in particular by banks. Chapters VI-IX are concerned with liability matters. Chapter VI is an introduction to the different sources of liability. Chapter VII focuses on the liability of the SPV and its management for the information supplied to investors, the management of the asset pool, and the breach of loyalty (or fiduciary) duties.
Chapter VIII rather refers to the liability of the originator as a result of such information and statements, but also as a result of inadequate and reckless originating or servicing practices. Chapter IX finally focuses on third parties entrusted with the soundness of the transaction towards the market, the so-called gatekeepers. In this respect, we make special emphasis on the liability of indenture trustees, underwriters and rating agencies. Chapters X and XI focus on the international aspects of securitization. Chapter X contains a conflicts of laws analysis of the different aspects of structured finance. In this respect, a study is made of the laws applicable to the vehicle, to the transfer of risks (either by assignment or by means of derivatives contracts), to liability issues; and a study is also made of the competent jurisdiction (and applicable law) in bankruptcy cases; as well as in cases where a substance-over-form is performed. Then, special attention is also devoted to the role of financial and securities regulations; as well as to their territorial limits, and extraterritoriality problems involved. Chapter XI supplements the prior Chapter, for it analyzes the limits to the States’ exercise of regulatory power by the personal and “market” freedoms included in the US Constitution or the EU Treaties. A reference is also made to the (still insufficient) rules from the WTO Framework, and their significance to the States’ recognition and regulation of securitization transactions.

Relevance: 80.00%

Abstract:

In this thesis the impact of R&D expenditures on firm market value and stock returns is examined, using a sample of European listed firms for the period 2000-2009. I apply different linear and GMM econometric estimations to test the impact of R&D on market prices, and construct country portfolios based on firms' R&D expenditure to market capitalization ratio to study the effect of R&D on stock returns. The results confirm that more innovative firms have a better market valuation: investors consider R&D an asset that produces long-term benefits for corporations. The impact of R&D on firm value differs across countries; it is significantly modulated by the financial and legal environment in which firms operate. Other firm and industry characteristics also seem to play a determinant role when investors value R&D. First, only larger firms with lower financial leverage that operate in highly innovative sectors decide to disclose their R&D investment. Second, the markets assign a premium to small firms operating in hi-tech sectors compared to larger enterprises in low-tech industries. On the other hand, I provide empirical evidence indicating that highly R&D-intensive firms may generally suffer more from mispricing problems related to firm valuation. As R&D contributes to the estimation of future stock returns, portfolios that comprise high R&D-intensive stocks may earn significant excess returns compared to less innovative ones after controlling for size and book-to-market risk. Further, the most innovative firms are generally more risky in terms of stock volatility, but not systematically more risky than low-tech firms. Firms that operate in Continental Europe suffer more mispricing than their Anglo-Saxon peers, but the former are less volatile, other things being equal. The sectors in which firms operate are determinant also for the impact of R&D on stock returns; this effect is much stronger in hi-tech industries.
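The portfolio construction can be sketched in a toy form. The firm-year numbers below are invented, and the sketch omits the size and book-to-market controls the thesis applies: firms are ranked by R&D-to-market-cap, split at the median into low- and high-R&D portfolios, and the return spread between the two portfolios is computed.

```python
def rd_portfolio_spread(stocks):
    # Sort firms by R&D intensity (R&D / market cap), split at the
    # median into low- and high-R&D portfolios, and return the
    # difference in equally weighted average returns (high minus low).
    ranked = sorted(stocks, key=lambda s: s["rd"] / s["mcap"])
    half = len(ranked) // 2
    low, high = ranked[:half], ranked[half:]
    mean = lambda xs: sum(x["ret"] for x in xs) / len(xs)
    return mean(high) - mean(low)

# Hypothetical firm-year observations.
stocks = [
    {"rd": 1, "mcap": 100, "ret": 0.02},
    {"rd": 5, "mcap": 100, "ret": 0.08},
    {"rd": 2, "mcap": 100, "ret": 0.03},
    {"rd": 8, "mcap": 100, "ret": 0.11},
]
spread = rd_portfolio_spread(stocks)
```

A positive spread, as in this toy sample, is the pattern the thesis reports for high-R&D-intensity portfolios after risk controls.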

Relevance: 80.00%

Abstract:

The evolution of embedded electronics applications forces electronic systems designers to meet ever-increasing requirements. This evolution pushes the computational power of digital signal processing systems, as well as the energy required to accomplish the computations, due to the increasing mobility of such applications. Current approaches to meeting these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators, which are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility, which affects non-recurring engineering costs, time to market, and market volumes too. The state of the art mainly proposes two solutions to overcome these issues with the ambition of delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both of these solutions benefit from post-fabrication programmability, which results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too high for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy efficient acceleration can be achieved by merging these two computational paradigms, in order to address all the constraints introduced above. This thesis focuses on the exploration of the design and application spectrum of reconfigurable computing, exploited as application-specific accelerators for multi-processor systems-on-chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system exploiting three different flavours of reconfigurable and mask-programmable technologies as implementation platforms for application-specific accelerators.
In this work, the various trade-offs concerning the utilization multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency and manufacturing costs.

Relevância:

80.00%

Publicador:

Resumo:

The country-of-origin is the “nationality” of a food when it goes through customs in a foreign country, and a “brand” when the food is for sale in a foreign market. My research on country-of-origin labeling (COOL) started from a case study on extra virgin olive oil exported from Italy to China; the results show that asymmetric and imperfect origin information may lead to market inefficiency, or even market failure, in emerging countries. I then used the Delphi method to conduct qualitative and systematic research on COOL; the panel of experts in food labeling and food policy was composed of 19 members in 13 countries, and its most important consensus was that multiple-countries-of-origin marking can provide accurate information about the origin of a food produced in two or more countries, avoiding misinformation for consumers. Moreover, I extended the research on COOL by analyzing the rules of origin and drafting a guideline for the standardization of origin marking. Finally, from the perspective of information economics, I estimated the potential effect of multiple-countries-of-origin labeling on the business models of international trade, and analyzed the regulatory options for mandatory or voluntary COOL of main ingredients. This research provides valuable insights for the formulation of COOL policy.

Relevância:

80.00%

Publicador:

Resumo:

The need to effectively manage the documentation covering the entire production process, from the concept phase right through to market release, constitutes a key issue in the creation of a successful and highly competitive product. For almost forty years the most commonly used strategies to achieve this have followed Product Lifecycle Management (PLM) guidelines. Translated into information management systems at the end of the '90s, this methodology is now widely used by companies operating all over the world in many different sectors. PLM systems and editor programs are the two principal types of software applications used by companies for their process automation. Editor programs allow users to store information related to the production chain in documents, while the PLM system stores and shares this information so that it can be used within the company and made available to partners. Different software tools, which capture and store documents and information automatically in the PLM system, have been developed in recent years. One of them is the ''DirectPLM'' application, developed by the Italian company ''Focus PLM'' and designed to ensure interoperability between many editors and the Aras Innovator PLM system. In this dissertation we present ''DirectPLM2'', a new version of the previous DirectPLM application. It was designed and developed as a prototype during an internship at Focus PLM. Its new implementation separates the abstract business logic from the concrete command implementation, which was previously strongly dependent on Aras Innovator. Thanks to its new design, Focus PLM can easily develop different versions of DirectPLM2, each one devised for a specific PLM system. In fact, the company can focus its development effort on the specific set of software components that provide specialized functions interacting with that particular PLM system.
This allows a shorter time to market and gives the company a significant competitive advantage.
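The layered design described above, with abstract business logic kept behind an interface and per-system connectors beneath it, can be sketched as follows. All class and method names here are hypothetical illustrations of the pattern; the abstract does not disclose the actual DirectPLM2 API.

```python
from abc import ABC, abstractmethod

class PLMConnector(ABC):
    """Abstract interface hiding the commands of a specific PLM system."""

    @abstractmethod
    def check_in(self, doc_id: str, payload: bytes) -> str:
        """Store a document revision and return its new revision id."""

class ArasInnovatorConnector(PLMConnector):
    """Connector specialized for one PLM system (stubbed for illustration)."""

    def check_in(self, doc_id: str, payload: bytes) -> str:
        # A real connector would issue Aras Innovator calls here;
        # this stub only fabricates a revision id.
        return f"{doc_id}-rev1"

class DocumentCapture:
    """PLM-agnostic business logic: it never references a concrete system."""

    def __init__(self, connector: PLMConnector):
        self.connector = connector

    def publish(self, doc_id: str, payload: bytes) -> str:
        # Validation, metadata extraction, etc. would live here,
        # independent of whichever PLM system is plugged in.
        return self.connector.check_in(doc_id, payload)
```

Under this layering, supporting a new PLM system means writing only a new `PLMConnector` subclass; the `DocumentCapture` logic is reused unchanged, which is the source of the shorter time to market claimed above.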

Relevância:

80.00%

Publicador:

Resumo:

The Bedouin of South Sinai have been significantly affected by the politics of external powers for a long time. However, never had the interest of external powers in Sinai been so strong as since the Israeli-Egyptian wars in the second half of the 20th century, when Bedouin interests started to collide with Egypt's plans for the development of luxury tourism in South Sinai.

The tourism boom that started in the 1980s has brought economic and infrastructure development to the Bedouin, and tourism has become their most important source of income. However, while the absolute increase of tourists to Sinai has trickled down to the Bedouin to some extent, the participation of Bedouin in the overall tourism development is under-proportionate. Moreover, the Bedouin have become increasingly dependent on monetary income, and consequently on tourism as the only significant source of income, while at the same time they have lost much of their land as well as their self-determination.

In this context, Bedouin livelihoods have become very vulnerable due to repeated depressions in the tourism industry as well as marginalization. The major marginalization processes the Bedouin are facing are the loss of land, barriers to market entry, especially increasingly strict rules and regulations in the tourism industry, as well as discrimination by the authorities. Social differentiation and Bedouin preferences are identified as further factors in Bedouin marginalization.

The strategies the Bedouin have developed in response to all these problems are coping strategies, which try to deal with the present problem at the individual level. Basically no strategies have been developed at the collective level that would aim to actively shape the Bedouin's present and future. Collective action has been hampered by a variety of factors, such as the speed of the developments, the distribution of power, and the decay of tribal structures.

While some Bedouin might be able to continue their tourism activities, a large number of informal jobs will no longer be feasible. The majority of the previously mostly self-employed Bedouin will probably be forced to work as day-laborers who will have lost much of their pride, dignity, sovereignty and freedom. Moreover, with a return to subsistence being impossible for the majority of the Bedouin, it is likely that an increasing number of marginalized Bedouin will turn to illegal income-generating activities such as smuggling or drug cultivation. This in turn will lead to further repression and discrimination and could escalate into a serious violent conflict between the Bedouin and the government.

Development plans and projects should address the general lack of civil rights, local participation and protection of minorities in Egypt, and promote Bedouin community development and the consideration of Bedouin interests in tourism development.

Whether the political upheavals and the resignation of President Mubarak at the beginning of 2011 will have a positive effect on the situation of the Bedouin remains to be seen.

Relevância:

80.00%

Publicador:

Resumo:

Since the late eighties, economists have been regarding the transition from command to market economies in Central and Eastern Europe with intense interest. In addition to studying the transition per se, they have begun using the region as a testing ground on which to investigate the validity of certain classic economic propositions. In his research, comprising three articles written in English and totalling 40 pages, Mr. Hanousek uses the so-called "Czech national experiment" (the voucher privatisation scheme) to test the permanent income hypothesis (PIH). He took as his inspiration Kreinin's recommendation: "Since data concerning the behaviour of windfall income recipients is relatively scanty, and since such data can constitute an important test of the permanent income hypothesis, it is of interest to bring to bear on the hypothesis whatever information is available". Mr. Hanousek argues that, since the transfer of property to Czech citizens from 1992 to 1994 through the voucher scheme was not anticipated, it can be regarded as windfall income. The average size of the windfall was more than three months' salary, and over 60 percent of the Czech population received this unexpected income. Furthermore, there are other reasons for conducting such an analysis in the Czech Republic. Firstly, the privatisation process took place quickly. Secondly, both the economy and consumer behaviour have been very stable. Thirdly, out of a total population of 10 million Czech citizens, an astonishing 6 million, that is, virtually every household, participated in the scheme. Czech voucher privatisation thus provides a sample for testing the PIH almost equivalent to a full population, thereby avoiding problems with the distribution of windfalls.
Compare this, for instance, with the fact that only 4% of the Israeli urban population received personal restitution from Germany, while the number of veterans who received the National Service Life Insurance Dividends amounted to less than 9% of the US population and was concentrated in certain age groups. But to begin with, Mr. Hanousek considers the question of whether the public perceives the transfer from the state to individuals as an increase in net wealth. It can be argued that the state is only divesting itself of assets that would otherwise provide a future source of transfers. According to this argument, assigning these assets to individuals creates an offsetting change in the present value of potential future transfers, so that individuals are no better off after the transfer. Mr. Hanousek disagrees with this approach. He points out that a change in the ownership of inefficient state-owned enterprises should lead to higher efficiency, which alone increases the value of enterprises and creates a windfall increase in citizens' portfolios. More importantly, the state and individuals had very different preferences during the transition. Despite government propaganda, it is doubtful that citizens of former communist countries viewed government-owned enterprises as being operated in the citizens' best interest. Moreover, it is unlikely that the public fully comprehended the sophisticated links between the state budget, state-owned enterprises, and transfers to individuals. Finally, the transfers were not equal across the population. Mr. Hanousek conducted a survey of 1263 individuals, dividing them into four monthly earnings categories.
After determining whether the respondent had participated in the voucher process, he asked those who had how much of what they received from voucher privatisation had been (a) spent on goods and services, (b) invested elsewhere, (c) transferred to newly emerging pension funds, (d) given to a family member, and (e) retained in their original form as an investment. Both the mean and the variance of the windfall rise with income. He obtained similar results with respect to education, where the mean (median) windfall for those with a basic school education was 13,600 Czech Crowns (CZK), a figure that increased to 15,000 CZK for those with a high school education without exams, 19,900 CZK for high school graduates with exams, and 24,600 CZK for university graduates. Mr. Hanousek concludes that it can be argued that higher income (and better educated) groups allocated their vouchers or timed the disposition of their shares better. He turns next to an analysis of how respondents reported using their windfalls. The key result is that only a relatively small number of individuals reported spending on goods. Overall, the results provide strong support for the permanent income hypothesis, the only apparent deviation being the fact that both men and women aged 26 to 35 apparently consume more than they should if the windfall were annuitised. This finding is still fully consistent with the PIH, however, if this group is at a stage in their life-cycle where, without the windfall, they would be borrowing to finance consumption associated with family formation etc. Indeed, the PIH predicts that individuals who would otherwise borrow to finance consumption would consume the windfall up to the level equal to the annuitised fraction of the increase in lifetime income plus the full amount of the previously planned borrowing for consumption. 
Greater consumption would then be financed, not from investing the windfall, but from avoidance of future repayment obligations for debts that would have been incurred without the windfall.
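The annuitisation logic behind this test of the PIH can be made concrete with the standard annuity formula. In the sketch below, the 15,000 CZK figure echoes the median windfall reported above, while the 5% interest rate and 30-year horizon are purely illustrative assumptions, not values from the study.

```python
def annuitized_consumption(windfall: float, rate: float, years: int) -> float:
    """Constant annual consumption a one-off windfall can finance over
    `years` remaining periods at interest rate `rate` (standard annuity
    formula). Under the PIH, consumption rises by roughly this amount,
    not by the full windfall."""
    if rate == 0:
        return windfall / years
    return windfall * rate / (1 - (1 + rate) ** -years)

# e.g. a 15,000 CZK windfall at a 5% rate over 30 remaining years
# finances under 1,000 CZK of extra consumption per year.
annual_extra = annuitized_consumption(15_000, 0.05, 30)
```

On this arithmetic, a household spending most of its windfall on goods would be consuming many times the annuitised amount, which is why the small share of reported spending counts as evidence for the PIH, and why the 26-35 cohort's higher consumption is consistent with it only insofar as it substitutes for previously planned borrowing.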

Relevância:

80.00%

Publicador:

Resumo:

The project drew on an extensive firm-level sample of employees to describe in detail the recent evolution of the structure of wages in the Czech Republic between 1995 and 1998. The results of the analysis were then compared with information from EU countries. Regression analysis was used to study a number of specific questions, with particular emphasis on proper weighting of the sample. Jurajda first quantified the effects of worker age and education, firm size, region, industry and ownership type on male and female hourly wages in the Czech Republic. He then examined whether these effects have been changing over time and how they differ by gender, identified the industrial sectors that carry the largest wage premiums not accounted for by worker or firm characteristics, and measured the effect of unemployment on wages. He found a substantial increase in returns on human capital: the earnings differentials for education increased substantially between 1995 and 1998, with gains largely comparable to those in western countries. Overall, the Czech structure of wages is now very responsive to market forces and is converging rapidly on EU-type flexibility in almost every dimension. It is likely, however, that due to the constrained supply of tertiary-educated workers in particular, the returns on education may keep rising, surpassing levels typical of western economies and potentially reaching the high levels observed in developing countries.
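The returns-to-education estimates summarised above come from wage regressions. A minimal sketch of the underlying technique, a Mincer-style regression of log wages on schooling reduced to a single regressor, can be written with the closed-form simple-OLS formula. All data and variable names below are invented for illustration; the actual study used a much richer, properly weighted specification.

```python
import math

def ols_slope_intercept(x, y):
    """Closed-form simple OLS: regress y on x, return (slope, intercept)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var = sum((xi - mean_x) ** 2 for xi in x)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical workers: years of schooling and hourly wage (CZK).
schooling = [9, 12, 12, 16, 16, 18]
wages = [45, 60, 58, 95, 100, 130]

# Regressing log wages on schooling makes the slope an approximate
# proportional return to an extra year of education.
slope, intercept = ols_slope_intercept(
    schooling, [math.log(w) for w in wages]
)
```

A rising slope across survey years, comparing an equation fitted on 1995 data with one fitted on 1998 data, is the kind of evidence behind the claim of increasing returns to education.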

Relevância:

80.00%

Publicador:

Resumo:

Farm protest in the United States attracted widespread attention in the 1930s as militant farmers interfered with foreclosure sales, demonstrated at county court houses and state capitals, and blocked highways and stopped trains to prevent crops and livestock from going to market in an effort to raise farm prices. The best known of the protest groups was the Farmers Holiday Association, which was formed in 1932. Prior to the Holiday, however, a left-wing group organized by Communists in 1930 known as the United Farmers League (UFL) gained an initial following in the cutover country of the Upper Peninsula of Michigan, northern Wisconsin, northern Minnesota, and parts of the Dakotas and northeast Montana. Finnish Americans dominated the UFL in the Upper Midwest and in a few locales in the Dakotas. Evidence for this high level of influence comes from the fact that the head of the Communist Party’s Agrarian Department was Henry Puro, a key figure in Finnish American Communist circles and a member of the Party’s Politburo. This paper will focus on Finnish American involvement in the UFL and, to a lesser extent, the broader-based Farmers Holiday movement.