AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This PhD thesis aimed to assess the status of common sole, one of the main commercial stocks in the Adriatic Sea, using a mix of conventional and innovative techniques to provide more reliable estimates of stock status than past advice. First, a meta-analysis was carried out using a data-poor assessment model to analyze the whole catch assemblage of the rapido fishery. The outcomes were used to estimate rebuilding times and forecast catches under different harvest control rule scenarios; a 20% reduction in fishing effort was suggested as a way to allow most of the species to recover to sustainable levels. Secondly, an ensemble of data-rich assessment models was developed to better incorporate uncertainty by using alternative hypotheses for the main parameters. This was the first time an ensemble of models had been used in the Mediterranean to provide management advice. Consistent with the data-poor analysis, the ensemble outcomes indicated that the common sole stock is showing a recovering trend, probably due to the effective management actions already underway in the area rather than to the moderate effort reduction prescribed by the current management plan. Moreover, back-calculation measurements were used to fit and compare monophasic and biphasic growth curves through non-linear mixed effects models. The analyses revealed that the fit of the biphasic curve was superior, supporting the theory that growth in size decreases as a consequence of reproductive effort. A stock assessment simulation showed how the use of the monophasic pattern would result in a critical overestimation of biomass that could lead to a greater risk of overfishing. As a final step, a simulation-testing procedure was applied to determine the best-performing reference points using stock-specific characteristics. The procedure could be routinely adopted to increase transparency in reference point calculation, enhancing the credibility of scientific advice.
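The abstract does not give the curve formulations; as a hedged illustration, a monophasic fit conventionally uses the von Bertalanffy growth function, while one common biphasic parameterization (Lester et al. 2004) switches at the age at maturity T from linear immature growth (rate h) to an asymptotic phase shaped by the annual reproductive investment g:

\[
L(t) = L_\infty\bigl(1 - e^{-K(t - t_0)}\bigr) \qquad \text{(monophasic VBGF)}
\]
\[
L(t) =
\begin{cases}
h\,(t - t_1), & t \le T \quad \text{(immature, linear)}\\[4pt]
L_\infty\bigl(1 - e^{-K(t - t_0)}\bigr), & t > T \quad \text{(mature)}
\end{cases}
\qquad
L_\infty = \frac{3h}{g}, \quad K = \ln\!\Bigl(1 + \frac{g}{3}\Bigr)
\]

Whether the thesis uses exactly this parameterization is an assumption; it is simply the standard way a "biphasic" curve encodes the cost of reproduction that the abstract describes.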
Abstract:
Recently, a rising interest in political and economic integration/disintegration issues has developed in the political economy field. This growing strand of literature partly draws on traditional issues of fiscal federalism and optimum public good provision, and focuses on a trade-off between the benefits of centralization, arising from economies of scale or externalities, and the costs of harmonizing policies as a consequence of the increased heterogeneity of individual preferences in an international union or in a country composed of at least two regions. This thesis stems from this strand of literature and aims to shed some light on two highly relevant aspects of the political economy of European integration. The first concerns the role of public opinion in the integration process; more precisely, how the economic benefits and costs of integration shape citizens' support for European Union (EU) membership. The second is the allocation of policy competences among different levels of government: European, national and regional. Chapter 1 introduces the topics developed in this thesis by reviewing the main recent theoretical developments in the political economy analysis of integration processes. It is structured as follows. First, it briefly surveys a few relevant articles on economic theories of integration and disintegration processes (Alesina and Spolaore 1997, Bolton and Roland 1997, Alesina et al. 2000, Casella and Feinstein 2002) and discusses their relevance for the study of the impact of economic benefits and costs on public opinion attitudes towards the EU. Subsequently, it explores the links between this political economy literature and theories of fiscal federalism, especially with regard to normative considerations concerning the optimal allocation of competences in a union. Chapter 2 first proposes a model of citizens' support for membership of international unions, with explicit reference to the EU; it then tests the model on a panel of EU countries. What factors influence public opinion support for the European Union? In international relations theory, the idea that citizens' support for the EU depends on the material benefits deriving from integration, i.e. whether European integration makes individuals economically better off (utilitarian support), has been common since the 1970s, but has never been the subject of a formal treatment (Hix 2005). A small number of studies in the 1990s investigated econometrically the link between national economic performance and mass support for European integration (Eichenberg and Dalton 1993; Anderson and Kaltenthaler 1996), but only on the basis of informal assumptions. The main aim of Chapter 2 is thus to propose and test our model with a view to providing a more complete and theoretically grounded picture of public support for the EU. Following theories of utilitarian support, we assume that citizens are in favour of membership if they receive economic benefits from it. To develop this idea, we propose a simple political economy model drawing on the recent economic literature on integration and disintegration processes. The basic element is the existence of a trade-off between the benefits of centralisation and the costs of harmonising policies in the presence of heterogeneous preferences among countries.
The approach we follow is that of the recent literature on the political economy of international unions and the unification or break-up of nations (Bolton and Roland 1997, Alesina and Wacziarg 1999, Alesina et al. 2001, 2005a, to mention only the most relevant). The general perspective is that unification provides returns to scale in the provision of public goods, but reduces each member state's ability to determine its most favoured bundle of public goods. In the simple model presented in Chapter 2, support for membership of the union is increasing in the union's average income and in the loss of efficiency stemming from being outside the union, and decreasing in a country's own average income, while increasing heterogeneity of preferences among countries points to a reduced scope for the union. We then test the model empirically with data on the EU; more precisely, we perform an econometric analysis on a panel of member countries over time. The second part of Chapter 2 thus tries to answer the following question: does public opinion support for the EU really depend on economic factors? The findings are broadly consistent with our theoretical expectations: the conditions of the national economy, income differences among member states and heterogeneity of preferences shape citizens' attitudes towards their country's membership of the EU. Consequently, this analysis offers some interesting policy implications for the present debate about ratification of the European Constitution and, more generally, about how the EU could act in order to gain more support from the European public. Citizens in many member states are called to express their opinion in national referenda, which may well end up in rejection of the Constitution, as recently happened in France and the Netherlands, triggering a Europe-wide political crisis. These events show that understanding public attitudes towards the EU is nowadays not only of academic interest, but also highly relevant for policy-making. Chapter 3 empirically investigates the link between European integration and regional autonomy in Italy. Over the last few decades, the double tendency towards supranationalism and regional autonomy that has characterised some European states has taken a very interesting form in this country: Italy, besides being one of the founding members of the EU, also implemented a process of decentralisation during the 1970s, further strengthened by a constitutional reform in 2001. Moreover, the issue of the allocation of competences among the EU, the Member States and the regions is now especially topical. The process leading to the drafting of the European Constitution (even though it has not come into force) has attracted much attention from a constitutional political economy perspective, from both a normative and a positive point of view (Breuss and Eller 2004, Mueller 2005). The Italian parliament has recently passed a new, thorough constitutional reform, still to be approved by citizens in a referendum, which includes, among other things, the so-called "devolution", i.e. granting the regions exclusive competence in public health care, education and local police. Following and extending the methodology proposed in a recent influential article by Alesina et al.
(2005b), which concentrated only on EU activity (treaties, legislation, and European Court of Justice rulings), we develop a set of quantitative indicators measuring the intensity of the legislative activity of the Italian State, the EU and the Italian regions from 1973 to 2005 in a large number of policy categories. By doing so, we seek to answer the following broad questions. Are European and regional legislation substitutes for state laws? To what extent are the competences attributed by the European treaties or the Italian Constitution actually exerted in the various policy areas? Is their exertion consistent with the normative recommendations of the economic literature on their optimal allocation among different levels of government? The main results show that, first, there seems to be a certain substitutability between EU and national legislation (even if not a very strong one), but not between regional and national legislation. Second, the EU concentrates its legislative activity mainly in international trade and agriculture, whilst social policy is where the regions and the State (which is also the main actor in foreign policy) are most active. Third, at least two levels of government (in some cases all of them) are significantly involved in legislative activity in many sectors, even where the rationale for this is, at best, very questionable, indicating that they actually share a larger number of policy tasks than economic theory would suggest. It appears therefore that an excessive number of competences are shared among different levels of government. From an economic perspective, it may well be recommended that some competences be shared, but only when the balance between scale or spillover effects and heterogeneity of preferences suggests so. When, on the contrary, too many levels of government are involved in a certain policy area, the distinction between their different responsibilities easily becomes unnecessarily blurred. This not only may lead to a slower and less efficient policy-making process, but also risks making policy too complicated for citizens to understand; citizens, on the contrary, should be able to know who is really responsible for a certain policy when they vote in national, local or European elections or in referenda on national or European constitutional issues.
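The abstract states the Chapter 2 model's comparative statics but not its functional form; a minimal illustrative sketch consistent with them (all symbols hypothetical, not taken from the thesis) would let citizens of country i support membership when a net-benefit index S_i is positive:

\[
S_i \;=\; B(\bar{y}) \;+\; \lambda_i \;-\; c\,y_i \;-\; \delta\, d(p_i, \bar{p}),
\]

where \(\bar{y}\) is the union's average income (scale benefits of centralisation), \(\lambda_i\) the efficiency loss country i would suffer outside the union, \(y_i\) its own average income (proxying its net contribution), and \(d(p_i,\bar{p})\) the distance between national preferences and the harmonised union policy. Support is then increasing in \(\bar{y}\) and \(\lambda_i\) and decreasing in \(y_i\) and in preference heterogeneity, exactly the signs the panel estimation tests.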
Abstract:
This Ph.D. thesis collects the research work I conducted under the supervision of Prof. Bruno Samorì in 2005, 2006 and 2007. Some parts of the work included in Part III were begun during my undergraduate thesis in the same laboratory and then completed during the initial part of my Ph.D.; the complete results have been included for the sake of clarity and completeness. During my graduate studies I worked on two very different protein systems. The theoretical common thread between these studies, at the biological level, is the acknowledgement that protein biophysical and structural studies must, in many cases, take into account the dynamical states of protein conformational equilibria and the local physico-chemical conditions where the system studied actually performs its function. This is introduced in Chapter 2. Two different examples of this are presented: the structural significance deriving from the action of mechanical forces in vivo (Chapter 3) and the complexity of conformational equilibria in intrinsically unstructured proteins and amyloid formation (Chapter 4). My experimental work investigated both these examples using, in both cases, the single molecule force spectroscopy technique (described in Chapters 5 and 6). The work conducted on angiostatin focused on the characterization of the relationships between the mechanochemical properties and the mechanism of action of the angiostatin protein and, most importantly, their intertwining with the further layer of complexity due to disulfide redox equilibria (Part III). These studies were accompanied by the elaboration of a theoretical model for a novel signalling pathway that may be relevant in the extracellular space, detailed in Section 7.2. The work conducted on α-synuclein (Part IV) instead brought a whole new twist to the single molecule force spectroscopy methodology, applying it as a structural technique to elucidate the conformational equilibria present in intrinsically unstructured proteins. These equilibria are of utmost interest from a biophysical point of view, but most importantly because of their direct relationship with amyloid aggregation and, consequently, the aetiology of relevant pathologies like Parkinson's disease. The work characterized, for the first time, conformational equilibria in an intrinsically unstructured protein at the single molecule level and, again for the first time, identified a monomeric folded conformation that is correlated with conditions leading to α-synuclein aggregation and, ultimately, Parkinson's disease. Also, during this research I found myself in need of a general-purpose data analysis application for single molecule force spectroscopy that could solve some logistic and data analysis problems common to this technique. I developed an application that addresses some of these problems, presented herein (Part V), which aims to be publicly released soon.
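The abstract does not detail the force spectroscopy analysis; for orientation, force-extension data in this technique are conventionally fit with the worm-like chain (WLC) model in the Marko-Siggia interpolation, whose contour-length increments between unfolding peaks report on structured (folded) regions of the stretched protein:

\[
F(x) \;=\; \frac{k_B T}{P}\left[\frac{1}{4\,(1 - x/L_c)^2} \;-\; \frac{1}{4} \;+\; \frac{x}{L_c}\right],
\]

where \(P\) is the persistence length and \(L_c\) the contour length. Whether the thesis uses exactly this parameterization is an assumption; the WLC is simply the standard model of the field.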
Abstract:
The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful for protein fold recognition and de novo design. Predicting these contacts requires the study of the inter-residue distances, specific to each type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted secondary structure motifs. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as the leucine zippers that drive the dimerization of many transcription factors, and more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function.
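As a hedged sketch (the 8 Å cutoff, the random placeholder coordinates and the library choice are illustrative assumptions, not the thesis's pipeline), the two network descriptors named above can be computed from a contact network built on Cα-Cα distances:

import numpy as np
import networkx as nx

def contact_network(ca_coords: np.ndarray, cutoff: float = 8.0) -> nx.Graph:
    """Nodes are residues; an edge joins residues whose C-alpha atoms lie within `cutoff` angstroms."""
    n = len(ca_coords)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    dists = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
    for i in range(n):
        for j in range(i + 2, n):  # skip sequence-adjacent residues, which are trivially close
            if dists[i, j] <= cutoff:
                g.add_edge(i, j)
    return g

coords = np.random.rand(120, 3) * 30.0  # placeholder coordinates, not real structural data
net = contact_network(coords)
print("clustering coefficient:", nx.average_clustering(net))
if nx.is_connected(net):
    print("characteristic path length:", nx.average_shortest_path_length(net))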
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of the annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in assigning sequences to a specific group of functionally related sequences, grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in present databases of molecular functions and structures.
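A hedged sketch of the two cluster-membership constraints just described (the thresholds, the idea of checking coverage on both sequences, and the availability of query/subject lengths alongside the BLAST hit are illustrative assumptions, not the thesis's exact rules):

def keep_hit(pident: float, aln_len: int, qlen: int, slen: int,
             min_identity: float = 90.0, min_coverage: float = 0.9) -> bool:
    """Accept a BLAST hit only if percent identity and alignment coverage on BOTH
    the query and the subject pass the thresholds. Requiring coverage on both
    sides also keeps cluster members similar in length, which is the safeguard
    against spurious matches between multi-domain proteins."""
    cov_q = aln_len / qlen
    cov_s = aln_len / slen
    return pident >= min_identity and min(cov_q, cov_s) >= min_coverage

print(keep_hit(pident=95.2, aln_len=280, qlen=300, slen=310))  # True: high identity, both coverages >= 0.9
print(keep_hit(pident=95.2, aln_len=280, qlen=300, slen=700))  # False: subject coverage too low (shared domain only)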
Abstract:
In this thesis we present our work on generalising ideas, techniques and physical interpretations typical of integrable models to one of the most outstanding recent advances in theoretical physics: the AdS/CFT correspondence. We have undertaken the problem of testing this conjectured duality from various points of view, but with a clear starting point, integrability, and with a clear and ambitious task in mind: to study the finite-size effects in the energy spectrum of certain string solutions on one side and in the anomalous dimensions of the gauge theory on the other. Of course, the final goal would be the exact comparison between these two faces of the gauge/string duality. In a few words, the original part of this work consists in the application of well-known integrability technologies, in large part borrowed from the study of relativistic (1+1)-dimensional integrable quantum field theories, to the highly non-relativistic and much more complicated case of the theories involved in the recently conjectured AdS5/CFT4 and AdS4/CFT3 correspondences. In detail, exploiting the spin chain nature of the dilatation operator of N = 4 Super-Yang-Mills theory, we concentrated our attention on one of its most important sectors, the SL(2) sector, which is also very interesting for the understanding of QCD, by formulating a new type of nonlinear integral equation (NLIE) based on a previously conjectured asymptotic Bethe Ansatz. The solutions of this Bethe Ansatz are characterised by the length L of the corresponding spin chain and by the number s of its excitations. An NLIE allows one, at least in principle, to make analytical and numerical calculations for arbitrary values of these parameters. The results have been rather exciting. In the important regime of high Lorentz spin, the NLIE reduces to a linear integral equation which governs the subleading order in s, O(s^0). This also holds in the regime L → ∞ with L/ln s finite (the long operators case). This region of parameters has been particularly investigated in the literature, especially because of an intriguing limit onto the O(6) sigma model defined on the string side. One of the most powerful methods to keep under control the finite-size spectrum of an integrable relativistic theory is the so-called thermodynamic Bethe Ansatz (TBA). We proposed a highly non-trivial generalisation of this technique to the non-relativistic case of AdS5/CFT4 and made the first steps towards determining its full spectrum (energies on the AdS side, anomalous dimensions on the CFT side) at any value of the coupling constant and of the size. At leading order in the size parameter, the calculation of the finite-size corrections is much simpler and does not require the TBA. It consists in adapting to the non-relativistic case a method first invented by Lüscher to compute finite-size effects on the mass spectrum of relativistic theories. We have thus formulated a new version of this approach, adapted to the case of recently found classical string solutions on AdS4 × CP3, within the newly conjectured AdS4/CFT3 correspondence. Our results in part confirm the string and algebraic curve calculations, and in part are completely new, and could thus be further clarified by the rapidly evolving developments of this extremely exciting research field.
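For orientation, the relativistic TBA that the thesis generalises determines the exact finite-size ground-state energy from pseudo-energies \(\epsilon_a(\theta)\) solving (schematically, in the standard notation; the AdS5/CFT4 generalisation modifies dispersion and kernels):

\[
\epsilon_a(\theta) \;=\; m_a L \cosh\theta \;-\; \sum_b \int \frac{d\theta'}{2\pi}\, \phi_{ab}(\theta - \theta')\, \ln\!\bigl(1 + e^{-\epsilon_b(\theta')}\bigr),
\qquad
E(L) \;=\; -\sum_a \int \frac{d\theta}{2\pi}\, m_a \cosh\theta\, \ln\!\bigl(1 + e^{-\epsilon_a(\theta)}\bigr).
\]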
Abstract:
Recently, an ever-increasing degree of automation has been observed in industrial processes. This increase is motivated by the higher demand for systems with high performance in terms of quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottled water or soda and buy boxed products such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machines. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS, from a functional and behavioural point of view, is still attributable to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the activities inherent to the automation of the machine cycles, the supervisory system is called to perform other main functions such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support for machine maintenance operations. The facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focussing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually a very "unstructured" one. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to clarify the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been receiving this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety of technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are by their own nature more vulnerable. The problem of diagnosis and fault isolation in a generic dynamical system consists in designing an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function with the desired level of reliability and safety; the next step is that of preventing faults and, eventually, reconfiguring the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach is presented, based on Discrete Event Systems, for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
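As a hedged sketch of the Discrete Event Systems viewpoint used in Chapter 5 (states, events and the fault event below are illustrative, not the thesis's component models), a logic control component can be modelled as a finite automaton whose transitions encode the admissible event sequences; an inadmissible event is exactly the kind of condition formal verification or online diagnosis flags:

from typing import Dict, Tuple

class Automaton:
    def __init__(self, transitions: Dict[Tuple[str, str], str], initial: str):
        self.transitions = transitions  # (state, event) -> next state
        self.state = initial

    def step(self, event: str) -> bool:
        """Fire an event; return False if it is not admissible in the current state."""
        key = (self.state, event)
        if key not in self.transitions:
            return False  # inadmissible event: raise a verification/diagnosis flag
        self.state = self.transitions[key]
        return True

# Toy actuator model: idle -> moving -> idle, with an observable fault event.
actuator = Automaton(
    {("idle", "start"): "moving",
     ("moving", "done"): "idle",
     ("moving", "fault"): "faulty"},
    initial="idle",
)
for ev in ["start", "done", "done"]:  # the second "done" is inadmissible in "idle"
    print(ev, "->", actuator.step(ev), "state:", actuator.state)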
Abstract:
In this dissertation the pyrolytic conversion of biomass into chemicals and fuels was investigated from the analytical point of view. The study focused on the liquid (bio-oil) and solid (char) fractions obtainable from biomass pyrolysis. The known drawbacks of Py-GC-MS were partially overcome by coupling different analytical configurations (Py-GC-MS, Py-GC-MIP-AED and off-line Py-SPE and Py-SPME-GC-MS with derivatization procedures). The application of different techniques allowed a satisfactory comparative analysis of the pyrolysis products of different biomasses and a high-throughput screening of the effect of 33 catalysts on biomass pyrolysis. The screening showed that the most interesting catalysts were those containing copper (able to reduce the high molecular weight fraction of bio-oil without a large yield decrease) and H-ZSM-5 (able to entirely convert the bio-oil into "gasoline like" aromatic products). In order to establish the content of noxious compounds in the liquid product, a clean-up step was included in the Py-SPE procedure. This allowed the investigation of pollutant (PAH) generation from pyrolysis and catalytic pyrolysis of biomass. In fact, bio-oil from non-catalytic pyrolysis of biomass showed a moderate PAH content, while the use of the H-ZSM-5 catalyst for bio-oil upgrading determined an astonishingly high production of PAHs (compared to what is observed in alkane cracking), indicating an important concern in the substitution of fossil fuels with biomass-derived bio-oil. Moreover, the analytical procedures developed in this thesis were directly applied to the detailed study of the most useful process schemes and upgrading routes to chemical intermediates (anhydrosugars), transportation fuels or commodity chemicals (aromatic hydrocarbons). In the applied study, poplar and microalgae biomasses were investigated, and an overall GHG balance of the pyrolysis of agricultural residues in the Ravenna province was performed. Special attention was paid to comparing the effect of different uses of bio-char (as fuel or as soil conditioner) on soil health and GHG emissions.
Abstract:
From the institutional point of view, the legal system of intellectual property rights (hereafter, IPR) is one of the incentive institutions of innovation and plays a very important role in the development of the economy. According to the law, the owner of an IPR enjoys a kind of exclusive right to use his intellectual property (hereafter, IP); in other words, he enjoys a kind of legal monopoly position in the market. How to protect IPR well and, at the same time, regulate the abuse of IPR is a very interesting topic in this knowledge-oriented market, and it is the basic research question of this dissertation. In this work, by way of comparative study and of law-and-economics analysis, and based on the theories of the Austrian School of Economics, the writer claims that there is no contradiction between IPR and competition law. However, in the new economy (high-technology industries) there is a real possibility that the owner of an IPR will abuse his dominant position. And given the characteristics of the new economy, such as high rates of innovation, "instant scalability", network externality and lock-in effects, the IPR "will vest the dominant undertakings with the power not just to monopolize the market but to shift such power from one market to another, to create strong barriers to enter and, in so doing, granting the perpetuation of such dominance for quite a long time." Therefore, in order to keep the order of the market, to vitalize competition and innovation, and to benefit the customer, in the EU and US it is common to apply competition law to regulate IPR abuse. From the Austrian School perspective, especially in Schumpeterian theories, innovation, competition, monopoly and entrepreneurship are inter-correlated; therefore, we should apply a dynamic antitrust model based on these theories to analyze the relationship between IPR and competition law. China is still a developing country with a relatively low ability of innovation. Therefore, at present, protecting IPR and making good use of the incentive mechanism of the IPR legal system is the first important task for the Chinese government. However, according to investigation reports, based on their IPR advantage and capital advantage, some multinational companies have actually obtained a dominant or monopoly market position in some aspects of some industries, and some IPR abuses have been conducted by such companies. The Chinese government should therefore pay close attention to regulating any IPR abuse. However, as to how to effectively regulate IPR abuse by way of competition law in the Chinese situation, from the perspective of law-and-economics theory, of legislation, and of judicial practice, there is a long way for China to go!
Abstract:
In this thesis we extend some ideas of statistical physics to describe the properties of human mobility. Using a database containing GPS measures of individual paths (position, velocity and covered distance at a spatial scale of 2 km or a time scale of 30 s), which covers 2% of the private vehicles in Italy, we succeed in determining some empirical statistical laws pointing out "universal" characteristics of human mobility. Developing simple stochastic models suggesting possible explanations of the empirical observations, we are able to indicate which key quantities and cognitive features rule individuals' mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law emerging from the distribution of times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is then spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities performed, and those of the networks describing people's common use of space to the fractal dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks, but are also influenced by a hierarchy among the activities performed. We discover that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of average travel times. We propose an assimilation model to resolve the intrinsic scattering of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
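For reference, Benford's law predicts for the first significant digit \(d\) of a quantity (here, the inter-trip times):

\[
P(d) \;=\; \log_{10}\!\left(1 + \frac{1}{d}\right), \qquad d = 1, \dots, 9,
\]

so the digit 1 should lead about 30.1% of the time and the digit 9 only about 4.6%.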
Abstract:
This thesis presents several techniques designed to drive a swarm of robots in an a priori unknown environment, in order to move the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two different theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS). The first belongs to the Artificial Intelligence context, the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited with the aim of overcoming and minimizing difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence has been applied to the presented technique through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus and the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach has been followed in order to preserve the power of PSO while controlling part of its random behaviour with a distributed control algorithm like Consensus.
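A hedged sketch of how the two mechanisms combine (the gains, the fully connected topology and the goal point are illustrative assumptions, not the thesis's parameters): a standard PSO velocity update steers each unit toward the target area, while a consensus term pulls each unit toward its neighbours to hold the formation:

import numpy as np

rng = np.random.default_rng(0)
n, dim = 5, 2
goal = np.array([10.0, 10.0])
pos = rng.random((n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()                        # each unit's best position so far
adjacency = np.ones((n, n)) - np.eye(n)   # fully connected communication topology
w, c1, c2, k_cons = 0.7, 1.5, 1.5, 0.05   # inertia, cognitive, social, consensus gains

def fitness(x):
    return np.linalg.norm(x - goal, axis=-1)  # distance to the target area

for _ in range(200):
    gbest = pbest[np.argmin(fitness(pbest))]  # swarm-wide best position
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Standard PSO update: inertia + cognitive + social terms
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    # Consensus (agreement protocol) term: move toward neighbours' average,
    # tempering PSO's randomness and keeping the group compact
    consensus = adjacency @ pos - adjacency.sum(axis=1, keepdims=True) * pos
    pos = pos + vel + k_cons * consensus
    improved = fitness(pos) < fitness(pbest)
    pbest[improved] = pos[improved]

print("final spread:", pos.std(axis=0), "distance to goal:", fitness(pos.mean(axis=0)))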
Abstract:
This thesis regards the study and development of new cognitive assessment and rehabilitation techniques for subjects with traumatic brain injury (TBI). In particular, this thesis i) provides an overview of the state of the art of these new assessment and rehabilitation technologies, ii) suggests new methods for assessment and rehabilitation, and iii) contributes to the explanation of the neurophysiological mechanism involved in a rehabilitation treatment. Some chapters provide useful information to contextualize TBI and its outcome, and describe the methods used for its assessment and rehabilitation. The other chapters illustrate a series of experimental studies conducted in healthy subjects and TBI patients that suggest new approaches to assessment and rehabilitation. The new proposed approaches have in common the use of electroencephalography (EEG). EEG was used in all the experimental studies with different purposes: as a diagnostic tool, as a signal to command a BCI system, as an outcome measure to evaluate the effects of a treatment, etc. The main results achieved concern: i) the study and development of a system for communication with patients with disorders of consciousness, for which it was possible to identify a paradigm of reliable activation during two imagery tasks using the EEG signal, or the EEG and NIRS signals together; ii) the study of the effects of a neuromodulation technique (tDCS) on EEG patterns. This topic is of great importance and interest. The emerging findings showed that tDCS can manipulate cortical network activity and that, through the search for optimal stimulation parameters, it is possible to move the working point of a neural network and bring it into a condition of maximum learning. In this way it could be possible to improve the performance of a BCI system or the efficacy of a rehabilitation treatment, such as neurofeedback.
Abstract:
The aim of this thesis is to develop an in-depth analysis of inductive power transfer (or wireless power transfer, WPT) along a metamaterial composed of cells arranged in a planar configuration, in order to deliver power to a receiver sliding over them. In this way, the problem of the efficiency being strongly affected by the weak coupling between emitter and receiver can be obviated, and the transmission distance can be significantly increased. This study is carried out using a circuital approach and the magnetoinductive wave (MIW) theory, in order to simply explain the behavior of the transmission coefficient and of the efficiency from the circuital and experimental points of view. Moreover, flat spiral resonators are used as metamaterial cells, being particularly indicated in the literature for WPT metamaterials operating at MHz frequencies (5-30 MHz). Finally, this thesis presents a complete electrical characterization of multilayer and multiturn flat spiral resonators and, in particular, proposes a new approach to resistance calculation through finite element simulations, in order to account for all the high frequency parasitic effects. Multilayer and multiturn flat spiral resonators are studied in order to decrease the operating frequency down to the kHz range, maintaining small external dimensions and allowing the metamaterials to be supplied by electronic power converters (resonant inverters).
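The abstract does not reproduce the underlying relation; as a hedged reference, for a lossless chain of identical magnetically coupled resonant cells (self-inductance L, capacitance C, nearest-neighbour mutual inductance M, cell period d), the standard MIW dispersion relation reads

\[
1 - \frac{\omega_0^2}{\omega^2} + \kappa \cos(kd) = 0,
\qquad
\omega_0 = \frac{1}{\sqrt{LC}}, \quad \kappa = \frac{2M}{L},
\]

so power propagates only in the band \(\omega_0/\sqrt{1+\kappa} \le \omega \le \omega_0/\sqrt{1-\kappa}\) around the cell resonance; lowering \(\omega_0\) into the kHz range is precisely what the multilayer, multiturn cells are designed to achieve.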