22 results for model-based reasoning processes
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The concept of competitiveness, long regarded as strictly tied to economic and financial performance, has evolved in recent years toward new, wider interpretations that disclose its multidimensional nature. The shift to a multidimensional view of the phenomenon has sparked an intense debate involving theoretical reflections on its characterizing features, as well as methodological considerations on its assessment and measurement. The present research has a twofold objective: to study in depth the tangible and intangible aspects characterizing multidimensional competitive phenomena from a micro-level point of view, and to measure competitiveness through a model-based approach. Specifically, we propose a non-parametric approach to Structural Equation Model techniques for the computation of multidimensional composite measures. Structural Equation Model tools are then used in an empirical application to the Italian case: a model-based micro-level competitiveness indicator is constructed to measure the phenomenon on a large sample of Italian small and medium enterprises.
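As a minimal illustration of how a micro-level composite indicator can be assembled, a generic formative-aggregation sketch with made-up firm data, invented indicator names and equal weights; this is not the non-parametric SEM estimator proposed in the thesis:

```python
from statistics import mean, stdev

def composite_scores(firms, weights):
    """firms: name -> {indicator: value}; weights: indicator -> weight."""
    indicators = list(weights)
    # z-standardize each indicator across firms so units are comparable
    cols = {k: [f[k] for f in firms.values()] for k in indicators}
    mu = {k: mean(v) for k, v in cols.items()}
    sd = {k: stdev(v) for k, v in cols.items()}
    wsum = sum(weights.values())
    # composite score = normalized weighted sum of standardized indicators
    return {
        name: sum(weights[k] * (f[k] - mu[k]) / sd[k] for k in indicators) / wsum
        for name, f in firms.items()
    }

# hypothetical firm-level data (illustrative indicator names)
firms = {
    "A": {"export_share": 0.30, "rnd_intensity": 0.05, "roa": 0.08},
    "B": {"export_share": 0.10, "rnd_intensity": 0.02, "roa": 0.04},
    "C": {"export_share": 0.50, "rnd_intensity": 0.09, "roa": 0.12},
}
scores = composite_scores(firms, {"export_share": 1, "rnd_intensity": 1, "roa": 1})
```

A model-based approach would instead estimate the weights from the data (e.g. from the measurement part of a structural equation model) rather than fix them a priori.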
Abstract:
During my PhD I developed an innovative technique to reproduce the 3D thymic microenvironment in vitro, to be used for the growth and differentiation of thymocytes, and possibly for transplantation replacement in conditions of depressed thymic immune regulation. The work was carried out in the Tissue Engineering laboratory of the University Hospital in Basel, Switzerland, under the tutorship of Prof. Ivan Martin. Since a number of studies have suggested that the 3D structure of the thymic microenvironment may play a key role in regulating the survival and functional competence of thymocytes, I focused my effort on the isolation and purification of the extracellular matrix (ECM) of the mouse thymus. Specifically, based on the assumption that TECs can favour the differentiation of pre-T lymphocytes, I developed a specific decellularization protocol to obtain the intact, DNA-free extracellular matrix of the adult mouse thymus. Two different protocols satisfied the main requirements for a decellularized matrix, according to qualitative and quantitative assays: in particular, the residual DNA was less than 10%, no positive staining for cells was found, and the 3D structure and composition of the ECM were maintained. In addition, I was able to show that the decellularized matrices were not cytotoxic for the cells themselves, and that they increased the expression of MHC II antigens compared to control cells grown in standard conditions. I also showed that TECs grow and proliferate for up to ten days on top of the decellularized matrix. After a complete characterization of the culture system, these innovative natural scaffolds could be used to improve the standard culture conditions of TECs, to study in vitro the action of different factors on their differentiation genes, and to test the ability of TECs to induce in vitro maturation of seeded T lymphocytes.
Abstract:
This PhD thesis reports the main activities carried out during the three-year "Mechanics and advanced engineering sciences" course at the Department of Industrial Engineering of the University of Bologna. The research project, titled "Development and analysis of high efficiency combustion systems for internal combustion engines", focuses on knock, one of the main challenges for boosted gasoline engines. Through experimental campaigns, modelling activity and test-bench validation, four different aspects have been addressed to tackle the issue. The main path leads to the definition and calibration of a knock-induced damage model, to be implemented in the on-board control strategy but also usable for engine calibration and, potentially, during engine design. The capabilities of the ionization current signal have been investigated with the aim of fully replacing the pressure sensor and developing a robust on-board closed-loop combustion control strategy, in both knock-free and knock-limited conditions. Water injection is a powerful solution to mitigate knock intensity and exhaust temperature, improving fuel consumption; its capabilities have been modelled and validated at the test bench. Finally, an empirical model is proposed to predict the engine knock response as a function of several operating conditions and control parameters, including the injected water quantity.
Abstract:
This manuscript reports the overall development of a PhD research project carried out during the "Mechanics and advanced engineering sciences" course at the Department of Industrial Engineering of the University of Bologna. The project focuses on the development of a combustion control system for an innovative spark-ignition engine layout. In detail, the controller manages a prototype engine equipped with a port water injection system. Water injection improves combustion efficiency through its knock-mitigation effect, which makes it possible to keep the combustion phasing closer to the optimal position than in the traditional layout. At the beginning of the project, the effects and possible benefits achievable with water injection were investigated through a dedicated experimental campaign. The data obtained from combustion analysis were then processed to design a control-oriented combustion model. The model identifies the correlation between spark advance, combustion phasing and injected water mass; two different strategies are presented, both based on an analytic, semi-empirical approach and therefore compatible with real-time application. The model has been implemented in a combustion controller that manages water injection to reach the best achievable combustion efficiency while keeping knock levels under a pre-established threshold. Three different versions of the algorithm are described in detail. The controller was designed and pre-calibrated in a software-in-the-loop environment, and an experimental validation was later performed with a rapid-control-prototyping approach to highlight the performance of the system on the real setup. Finally, to make the strategy implementable in an on-board application, an estimation algorithm for the combustion phasing required by the controller, based on accelerometer signals, was developed during the last phase of the PhD course.
Abstract:
Over the last 60 years, computers and software have enabled incredible advancements in every field. Nowadays, however, these systems are so complicated that it is difficult, if not impossible, to understand whether they meet some requirement or are able to exhibit some desired behaviour or property. This dissertation introduces a Just-In-Time (JIT) a-posteriori approach to conformance checking, which identifies any deviation from the desired behaviour as soon as possible and, where feasible, applies corrections. The declarative framework that implements our approach, entirely developed on the promising open-source forward-chaining Production Rule System (PRS) Drools, consists of three components: 1. a monitoring module based on a novel, efficient implementation of Event Calculus (EC); 2. a general-purpose hybrid reasoning module (the first of its kind) merging temporal, semantic, fuzzy and rule-based reasoning; 3. a logic formalism based on the concept of expectations, introducing Event-Condition-Expectation rules (ECE-rules) to assess the global conformance of a system. The framework is also accompanied by an optional module that provides Probabilistic Inductive Logic Programming (PILP). By shifting the conformance check from after execution to just in time, this approach combines the advantages of many a-posteriori and a-priori methods proposed in the literature. Quite remarkably, if the corrective actions are explicitly given, the reactive nature of this methodology makes it possible to reconcile any deviation from the desired behaviour as soon as it is detected. In conclusion, the proposed methodology advances the state of conformance checking, helping to fill the gap between humans and increasingly complex technology.
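The Event-Condition-Expectation idea can be illustrated with a toy monitor: an event matching a rule's trigger raises an expectation, which a later event fulfils or whose deadline expiry flags a violation just in time. This is a schematic Python rendition with invented event names; the actual framework is implemented as Drools production rules over an Event Calculus engine.

```python
class ECEMonitor:
    def __init__(self, rules):
        # each rule: on `trigger`, expect `expected` within `window` time units
        self.rules = rules
        self.pending = []      # (expected_event, deadline)
        self.violations = []

    def observe(self, event, t):
        # fulfilment: drop expectations satisfied by this event
        self.pending = [(e, d) for (e, d) in self.pending if e != event]
        # violation: expectations whose deadline has now passed
        self.violations += [e for (e, d) in self.pending if d < t]
        self.pending = [(e, d) for (e, d) in self.pending if d >= t]
        # raise new expectations triggered by this event
        for trigger, expected, window in self.rules:
            if event == trigger:
                self.pending.append((expected, t + window))

mon = ECEMonitor([("order_placed", "payment_received", 10)])
mon.observe("order_placed", 0)   # raises an expectation due at t = 10
mon.observe("tick", 15)          # deadline passed without the payment event
# mon.violations now contains "payment_received"
```

In the real system the "then" part of a rule can also trigger the corrective actions mentioned above, rather than merely recording the violation.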
Abstract:
Galaxy clusters occupy a special position in the cosmic hierarchy, as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a "revision" of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster centre) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high-energy particles via merger-driven shocks and turbulence. Present observations of Radio Halos (and possibly of hard X-rays) are best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM.
The physics involved in this scenario is very complex and the model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) that are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general), which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:
• Is it possible to model "self-consistently" the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe?
• At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason, some space in this thesis is devoted to reviewing the aspects of the physics of the ICM relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8.
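The spectral cut-off invoked above can be made concrete with a schematic balance, in generic notation rather than the thesis's detailed treatment: systematic turbulent acceleration at rate χ competes with synchrotron and inverse-Compton losses at rate βγ², giving a break energy and hence a maximum synchrotron frequency,

```latex
\left.\frac{d\gamma}{dt}\right|_{\rm acc}=\chi\,\gamma ,
\qquad
\left.\frac{d\gamma}{dt}\right|_{\rm rad}=-\beta\,\gamma^{2},
\qquad
\gamma_{b}\simeq\frac{\chi}{\beta},
\qquad
\nu_{b}\propto B\,\gamma_{b}^{2},
```

with β ∝ B² + B²_IC, where B_IC ≈ 3.2 (1+z)² µG is the CMB-equivalent field. A higher acceleration efficiency χ pushes ν_b upward, which is why high-frequency surveys select only the most efficient merger events.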
In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters, and assume that during a merger a fraction of the PdV work done by the infalling subcluster in passing through the most massive one is injected in the form of magnetosonic waves. The stochastic acceleration of relativistic electrons by these waves, and the properties of the ensuing synchrotron (Radio Halo) and inverse Compton (IC, hard X-ray) emission of merging clusters, are then computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the most massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. This scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift.
The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ~10^24 W/Hz and to flatten (or cut off) at lower radio powers, because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ~100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency, and it allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a "revision" of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ~ 0.05-0.2) with our ongoing GMRT Radio Halo Pointed Observations of 50 X-ray-luminous galaxy clusters (at z ~ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ~ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an "average" size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is well beyond the aim of this thesis.
On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass contained within the Radio Halo region, MH. In particular, this last "geometrical" MH-RH correlation allows us to overcome, observationally, the limitation of the "average" size of Radio Halos. Thus in this Chapter, by making use of this "geometrical" correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new, powerful tool of investigation, and we show that all the observed correlations (PR - RH, PR - MH, PR - T, PR - LX, ...) are now well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume that is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
Abstract:
This work deals with the development of calibration procedures and control systems to improve the performance and efficiency of modern spark-ignition turbocharged engines. The algorithms developed are used to optimize and manage the spark advance and the air-to-fuel ratio, in order to control knock and the exhaust gas temperature at the turbine inlet. The work described falls within the activity that the research group started in previous years with the industrial partner Ferrari S.p.A. The first chapter deals with the development of a control-oriented engine simulator based on a neural-network approach, with which the main combustion indexes can be simulated. The second chapter deals with the development of a procedure to calibrate offline the spark advance and the air-to-fuel ratio so as to run the engine under knock-limited conditions and with the maximum admissible exhaust gas temperature at the turbine inlet. This procedure is then converted into a model-based control system and validated with a software-in-the-loop approach using the engine simulator developed in the first chapter. Finally, it is implemented in rapid-control-prototyping hardware to manage the combustion in steady-state and transient operating conditions at the test bench. The third chapter deals with the study of an innovative, cheap sensor for in-cylinder pressure measurement: a piezoelectric washer that can be installed between the spark plug and the engine head. The signal generated by this kind of sensor is studied, and a specific algorithm is developed to adjust the value of the knock index in real time. Finally, with the engine simulator developed in the first chapter, it is demonstrated that the innovative sensor can be coupled with the control system described in the second chapter, and that the achievable performance is comparable to that obtained with standard in-cylinder pressure sensors.
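For context, a common knock index that such a sensor algorithm would adjust is MAPO (Maximum Amplitude of Pressure Oscillations). A minimal sketch follows, with a crude first-difference high-pass filter and synthetic traces standing in for real cylinder-pressure data; window bounds and amplitudes are illustrative assumptions:

```python
import math

# MAPO-style knock index (illustrative only): high-pass the pressure trace
# with a first-difference filter and take the peak oscillation amplitude
# inside an assumed knock window of samples.
def mapo(pressure, window):
    lo, hi = window
    osc = [pressure[i] - pressure[i - 1] for i in range(1, len(pressure))]
    return max(abs(v) for v in osc[lo:hi])

# synthetic traces: a smooth compression line vs. the same line with a
# superposed pressure oscillation mimicking knock
smooth = [50 + 0.1 * i for i in range(100)]
knocking = [50 + 0.1 * i + 3 * math.sin(0.8 * i) for i in range(100)]
# mapo(knocking, (10, 90)) far exceeds mapo(smooth, (10, 90))
```

A real implementation would band-pass filter around the cylinder's knock resonance frequencies; the thesis's contribution lies in correcting this kind of index for the peculiarities of the piezoelectric-washer signal.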
Abstract:
The study of mass transport in polymeric membranes has grown in importance due to its potential application in many processes, such as the separation of gases and vapours, packaging, and controlled drug release. The diffusion of a low-molecular-weight species in a polymer is often accompanied by other phenomena, such as swelling, reactions and stresses, that have not yet been investigated in all their aspects. Furthermore, novel materials have been developed that include inorganic fillers, reactive functional groups or ions, making the scenario even more complicated. The present work focuses on the experimental study of systems where diffusion is accompanied by other processes; suitable models were also developed to describe these particular circumstances, in order to understand the underlying concepts and to be able to predict the performance of the material. The effect of solvent-induced deformation in polymeric films during sorption processes was studied, since dilation, especially in constrained membranes, can cause the development of stresses and therefore early failure of the material. The bending-beam technique was used to test the effects of dilation and of the stress induced in the polymer by penetrant diffusion, and a model based on laminate theory was developed that accounts for the swelling and is able to predict the stresses that arise in the material. The addition of inorganic fillers affects the transport properties of polymeric films. Mixed matrix membranes based on fluorinated, high-free-volume matrices show attractive performance for separation purposes, but their selectivity towards gases and vapours needs deeper investigation. A new procedure based on the NELF model was tested on the experimental data; it makes it possible to predict the solubility of every penetrant on the basis of data for a single vapour.
The method also proved useful for determining the diffusion coefficient and for estimating the permeability of the composite materials. Oxygen-scavenging systems can overcome the lack of barrier properties that prevents the use of common polymers in sensitive applications such as food packaging. The final goal of obtaining a membrane almost impermeable to oxygen leads to experimental times out of reach; hence, a simple model was developed to describe the transport of oxygen in a membrane that also contains reactive groups, and to analyse the experimental data collected on SBS copolymers, which show attractive scavenging capacity. Furthermore, a model for predicting the oxygen-barrier behaviour of a film formed as a blend of OSP in a common packaging material was built, considering particles capable of reacting with oxygen embedded in a non-reactive matrix. Perfluorosulphonic acid ionomers (PFSI) are attracting attention due to their high thermal and chemical resistance coupled with very peculiar transport properties, which make them appropriate for use in fuel cells. The possible effect of different formation procedures was studied, together with the swelling due to water sorption, since both the water uptake and the dilation can dramatically affect fuel-cell performance. Water diffusion and sorption were studied with an FTIR-ATR spectrometer, which gives deeper information on the bonds between water molecules and the sulphonic hydrophilic groups and, therefore, on the microstructure of the hydrated ionomer.
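In its simplest one-dimensional form, the reactive-transport picture just described reduces to Fickian diffusion with a first-order consumption term, dc/dt = D d²c/dx² - k c. A toy explicit finite-difference sketch follows; all grid sizes and rate constants are illustrative placeholders, not values fitted to the SBS data:

```python
# 1D oxygen profile in a reactive membrane: fixed unit activity at the
# upstream face, no-flux downstream face, first-order scavenging in the bulk.
def scavenging_profile(n=21, steps=2000, D=1e-2, k=0.05, dx=0.05, dt=0.01):
    c = [0.0] * n
    for _ in range(steps):
        c[0] = 1.0                 # upstream face held at unit activity
        c[-1] = c[-2]              # zero-gradient (no-flux) downstream face
        # discrete Laplacian on interior nodes
        lap = [c[i - 1] - 2 * c[i] + c[i + 1] for i in range(1, n - 1)]
        for i in range(1, n - 1):
            c[i] += dt * (D * lap[i - 1] / dx**2 - k * c[i])
    return c
```

The reaction steepens the concentration profile relative to pure diffusion, which is exactly the mechanism by which the scavenger delays oxygen breakthrough. (Note that D·dt/dx² = 0.04 here keeps the explicit scheme stable.)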
Abstract:
The title of my thesis is "The role of ideas and their change in higher education policy-making processes from the eighties to the present day: the cases of England and New Zealand in comparative perspective". From a theoretical point of view, the aim of my work is to carry out research modelled on constructivist theory. It focuses on the analysis of the impact of ideas on policy-making processes by means of epistemic communities, think tanks and the various socioeconomic contexts that may have played a key role in the construction of the different paths. From my point of view, ideas constitute a priority research field worth analysing, since their role in policy-making processes has traditionally remained rather unexplored. In this context, and with the aim of developing a research strand based on the role of ideas, I intend to carry on my study from the perspective of change. Depending on the data and information collected, I evaluated the weight of each of these variables, and possibly of others such as institutions and individual interests, which may have influenced the formation of the policy-making processes. In this light, I adopted a qualitative research methodology, which I believe to be very effective compared with the more difficult and possibly reductive application of quantitative data sets. I therefore reckon that the most appropriate tools for processing the information include content analysis and in-depth interviews with figures of the political panorama (élite or not) who have participated in the process of higher education reform from the eighties to the present day. The two cases taken into consideration surely set an example of radical reform processes which have occurred in quite different contexts, determined by socioeconomic characteristics and by the traits of the élite.
In New Zealand the described process took place at a steady pace and with a good degree of consequentiality, in line with the reforms in other State divisions driven by the ideas of New Public Management. By contrast, in England the reforming action of Margaret Thatcher acquired a very radical connotation, as it brought into the ambit of higher education policy concepts like efficiency, excellence and rationalization that contrasted with the generalist, mass-oriented ideas fashionable during the seventies. The mission I intend to accomplish throughout my research is to investigate and analyse in more depth the differences that seem to emerge from two contexts which most of the literature regards as a single model: the Anglo-Saxon model. In this light, the dense analysis of policy processes brings out both the controversial and contrasting aspects of the two realities compared, and the role and weight of variables such as ideas (the main variable), institutional settings and individual interests acting in each context. The cases I mean to examine present peculiar aspects worth an in-depth analysis, an outline of which is provided in this abstract.
England. The Conservative government, from 1981 onwards, introduced radical changes in the higher education sector: first by cutting down on State funding, and then by creating an institution for the planning and leadership of the polytechnics (the non-university sector). Afterwards, Margaret Thatcher's 1988 school reform caused a great stir all over Europe, due both to its considerable innovative imprint and to its strong attack on the pedagogy of 'active' schooling and progressive education, until then recognized as a merit of the British public school. In the ambit of university education this reform, together with similar measures introduced in 1992, put the Conservative principles into practice through a series of actions that included: the suppression of the irremovability principle for university teachers; the introduction of student loans for low-income students; and the cancellation of the clear distinction between universities and polytechnics. The policies of Mr Blair's Labour majority did not diverge much from the Conservatives' position. In 2003 Blair's cabinet risked becoming a minority precisely on the occasion of an important university reform proposal, which foresaw the autonomy for universities to raise students' enrolment fees up to 3,000 pounds (whereas formerly the ceiling was 1,125 pounds). Blair had to face internal opposition within his own party over a measure that, according to the 150 MPs promoting an adverse motion, had not been included in the electoral programme and risked creating income-based discrimination among students. As a matter of fact, the bill focused on the introduction of very-low-interest student loans to be settled only once the student had found remunerated employment (a system already provided for by Australian legislation).
New Zealand. Contrary to many other countries, New Zealand has adopted a very wide vision of tertiary education: it includes the full educational programme that is internationally recognized as post-secondary education. If we were to spotlight one peculiarity of New Zealand tertiary education policy, it would be 'change'. Looking at the reform history of the tertiary education system, we can clearly identify four sub-periods from the eighties to the present day: 1. before the 80s, an elitist system characterized by low participation rates; 2. between the mid and late 80s, a trend towards the enlargement of participation associated with greater competition; 3. 1990-1999, a further step towards a competitive, market-oriented model; 4. from 2000 to today, a continuous evolution towards a more competitive, market-oriented model, together with growing attention to State control for the social and economic development of the nation. At present the government of New Zealand is working to strengthen this process, primarily in relation to the role of tertiary education as a steady factor of national welfare, where professional development contributes actively to the growth of the national economic system. The cases of England and New Zealand are the focus of an in-depth investigation that starts from an analysis of the policies of each nation and develops into a comparative study.
At this point I attempt to draw some preliminary impressions from the facts described above. The university policies of England and New Zealand have both undergone a significant reform process since the early eighties; in both contexts the importance of the ideas that constituted the basis of politics until 1980 was quite relevant. Generally speaking, in both cases the pre-reform policies were inspired by egalitarianism and the expansion of the student population, while those brought in by the reforms pursued efficiency, quality and competitiveness. Undoubtedly, within this general tendency, which reflects the proposed hypothesis, the two university systems present several differences. The university system in New Zealand proceeded steadily towards the implementation of a managerial conception of tertiary education, especially from 1996 onwards, in accordance with the reform process of the whole public sector. In the United Kingdom, as in the rest of Europe, the new approach to university policy-making had to confront a deep-rooted tradition of progressive education and the idea of education expansion that had in fact dominated until the eighties. From this viewpoint, the governing action of Margaret Thatcher gave rise to a radical change that revolutionized the objectives and key values of the whole educational system, particularly in the higher education sector: ideas such as efficiency, excellence and performance control became decisive. The Labour cabinets of Blair developed in the wake of the Conservative reforms. This appears to be a focal point of this study, which observes how, in New Zealand too, the reform process occurred transversally across progressive and conservative administrations. The preliminary impression is therefore that ideas deeply mark reform processes: the aim of my research is to verify to what extent this statement is true. In order to build a comprehensive analysis, further significant factors will have to be investigated: the way ideas are perceived and implemented by the different political élites; how the various socioeconomic contexts influence the reform process; how the institutional structures condition the policy-making processes; and whether individual interests play a role and, if so, to what extent.
Resumo:
Membrane-based separation processes have acquired, in recent years, increasing importance because of their intrinsic energetic and environmental sustainability: some types of polymeric materials, showing adequate perm-selectivity features, appear rather suitable for these applications because of their relatively low cost and easy processability. In this work two different types of polymeric membranes have been studied in view of possible applications to gas separation processes, i.e. Mixed Matrix Membranes (MMMs) and high free volume glassy polymers. Since the early 90s it has been understood that the performance of polymeric materials in the field of gas separation shows an upper bound in terms of permeability and selectivity: in particular, an increase in permeability is often accompanied by a decrease in selectivity and vice versa, while several inorganic materials, like zeolites or silica derivatives, can overcome this limitation. As a consequence, the idea was developed of dispersing inorganic particles in polymeric matrices, in order to obtain membranes with improved perm-selectivity features. In particular, dispersing fumed silica nanoparticles in high free volume glassy polymers improves in all cases the permeability of gases and vapours, while the selectivity may either increase or decrease, depending upon the material and the gas mixture: this effect is due to the capacity of the nanoparticles to disrupt the local chain packing, increasing the dimensions of the excess free volume elements trapped in the polymer matrix. In this work different kinds of MMMs were fabricated using amorphous Teflon® AF or PTMSP and fumed silica: in all cases, a considerable increase in the solubility, diffusivity and permeability of gases and vapours (n-alkanes, CO2, methanol) was observed, while the selectivity shows a non-monotonous trend with filler fraction.
Moreover, the classical models for composites are not able to capture the increase in transport properties due to the silica addition, so it was necessary to develop and validate an appropriate thermodynamic model that correctly predicts the mass transport features of MMMs. In this work another material, poly-trimethylsilyl-norbornene (PTMSN), was also examined: it is a new-generation high free volume glassy polymer that, like PTMSP, shows unusually high permeability and selectivity to the more condensable vapours. The two polymers differ from each other in that PTMSN shows a more pronounced chemical stability, due to its double-bond-free structure. For this polymer a set of Lattice Fluid parameters was estimated, making possible a comparison between experimental and theoretical solubility isotherms for hydrocarbon and alcohol vapours: the successful modelling, based on the application of the NELF model, offers a reliable alternative to direct sorption measurement, which is extremely time-consuming due to the relevant relaxation phenomena shown at each sorption step. For this material dilation experiments were also performed, in order to quantify its dimensional stability in the presence of large, swelling vapour molecules.
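As an aside on the "classical models for composites" mentioned above: the best known of these is the Maxwell model, and a minimal sketch (illustrative values only; the thermodynamic model developed in the thesis is not reproduced here) shows why such models cannot capture the observed behaviour, since for a low-permeability filler they always predict a permeability decrease.

```python
def maxwell_permeability(P_c, P_d, phi):
    """Effective permeability of a filled membrane via the Maxwell model.

    P_c: permeability of the continuous polymer phase
    P_d: permeability of the dispersed filler phase
    phi: volume fraction of dispersed filler (0 <= phi < 1)
    """
    num = P_d + 2 * P_c - 2 * phi * (P_c - P_d)
    den = P_d + 2 * P_c + phi * (P_c - P_d)
    return P_c * num / den

# For a nearly impermeable filler (P_d ~ 0, e.g. dense silica), the model
# predicts P_eff < P_c at any loading, the opposite of what is observed
# for fumed silica in high free volume glassy polymers.
```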
Resumo:
Forecasting the time, location, nature, and scale of volcanic eruptions is one of the most urgent aspects of modern applied volcanology. The reliability of probabilistic forecasting procedures is strongly related to the reliability of the input information provided, implying objective criteria for interpreting the historical and monitoring data. For this reason, both detailed analysis of past data and more basic research into the processes of volcanism are fundamental tasks of a continuous information-gain process; in this way the precursor events of eruptions can be better interpreted in terms of their physical meanings, with correlated uncertainties. This should lead to better predictions of the nature of eruptive events. In this work we have studied different problems associated with long- and short-term eruption forecasting. First, we discuss different approaches for the analysis of the eruptive history of a volcano, most of them generally applied for long-term eruption forecasting purposes; furthermore, we present a model based on the characteristics of a Brownian passage-time process to describe recurrent eruptive activity, and apply it to long-term, time-dependent eruption forecasting (Chapter 1). Conversely, in an effort to define further monitoring parameters as input data for short-term eruption forecasting in probabilistic models (for example, the Bayesian Event Tree for eruption forecasting, BET_EF), we analyze some characteristics of the typical seismic activity recorded at active volcanoes; in particular, we use some methodologies that may be applied to analyze long-period (LP) events (Chapter 2) and volcano-tectonic (VT) seismic swarms (Chapter 3); our analyses are generally oriented toward the tracking of phenomena that can provide information about magmatic processes.
Finally, we discuss some possible ways to integrate the results presented in Chapters 1 (for long-term EF), 2 and 3 (for short-term EF) in the BET_EF model (Chapter 4).
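The Brownian passage-time model mentioned for Chapter 1 corresponds to an inverse Gaussian renewal distribution. A minimal sketch of its density, distribution function and the conditional probability used in time-dependent forecasting, assuming the usual parameterization by mean recurrence interval mu and aperiodicity alpha (the thesis's exact notation and parameter values are not reproduced here):

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian passage-time (inverse Gaussian) density.
    mu: mean recurrence interval; alpha: aperiodicity (coefficient of variation)."""
    return math.sqrt(mu / (2 * math.pi * alpha**2 * t**3)) * \
        math.exp(-(t - mu)**2 / (2 * alpha**2 * mu * t))

def _phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bpt_cdf(t, mu, alpha):
    """Closed-form inverse Gaussian distribution function."""
    u1 = (math.sqrt(t / mu) - math.sqrt(mu / t)) / alpha
    u2 = (math.sqrt(t / mu) + math.sqrt(mu / t)) / alpha
    return _phi(u1) + math.exp(2.0 / alpha**2) * _phi(-u2)

def conditional_probability(t0, dt, mu, alpha):
    """P(next eruption in (t0, t0 + dt] | quiescence up to t0)."""
    survival = 1.0 - bpt_cdf(t0, mu, alpha)
    return (bpt_cdf(t0 + dt, mu, alpha) - bpt_cdf(t0, mu, alpha)) / survival
```

Unlike a memoryless Poisson model, the conditional probability here depends on the elapsed quiescence time t0, which is what makes the forecast time-dependent.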
Resumo:
Two of the main features of today's complex software systems, like pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology can provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware that exploit tuple-based coordination models in the engineering of complex software systems, since they intrinsically provide coordinated components with communication uncoupling. An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated.
Handling knowledge in tuple-based systems induces problems in terms of syntax (e.g., two tuples containing the same data may not match due to differences in the tuple structure) and, mostly, of semantics (e.g., two tuples representing the same information may not match because they adopt different syntaxes). Till now, the problem has been faced by exploiting tuple-based coordination within a middleware for knowledge-intensive environments: e.g., experiments with tuple-based coordination within a Semantic Web middleware, and analogous approaches surveyed in the literature. However, these appear to be designed to tackle the design of coordination for specific application contexts like the Semantic Web and Semantic Web Services, and they result in a rather involved extension of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space where the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps tuples and tuple matching as simple as possible. By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components.
The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model based on an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem suitable as coordination media.
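For readers unfamiliar with Linda, the syntactic matching problem described above can be made concrete with a minimal, illustrative tuple space sketch (the class, method names and wildcard convention below are assumptions for illustration, not the API of the thesis's infrastructure; a tuple centre would additionally attach programmable reactions to these operations):

```python
class TupleSpace:
    """Minimal Linda-style tuple space: out/rd/in with wildcard matching."""

    WILDCARD = object()  # stands for a formal (unbound) field in a template

    def __init__(self):
        self._tuples = []

    def out(self, tup):
        """Insert a tuple into the space."""
        self._tuples.append(tup)

    def _matches(self, template, tup):
        # Purely syntactic: identical arity, and literal fields must be equal.
        return len(template) == len(tup) and all(
            f is self.WILDCARD or f == v for f, v in zip(template, tup))

    def rd(self, template):
        """Read (without removing) a matching tuple, or None."""
        for t in self._tuples:
            if self._matches(template, t):
                return t
        return None

    def in_(self, template):
        """Read and remove a matching tuple, or None."""
        t = self.rd(template)
        if t is not None:
            self._tuples.remove(t)
        return t
```

Note how purely syntactic matching is: a template matches only tuples with identical arity and literal fields, so ("temp", "room1", 21) and ("room1", "temperature", 21.0) never match even if they carry the same information. This is exactly the limitation that motivates the semantic enrichment described above.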
Resumo:
The diagnosis, grading and classification of tumours have benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types due to its capability of detecting active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of contrast agent concentration curves versus time is a very simple yet operator-dependent procedure, so more objective approaches have been developed in order to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the raw time-series data is used for tissue classification. The main issue with these schemes is that they lack a direct interpretation in terms of the physiological properties of the tissues. On the other hand, model-based investigations typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression, applied to regions of interest appropriately selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, nonlinear modelling is computationally demanding, and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial guesses. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for the segmentation and classification of breast lesions.
The objectives can be subdivided as follows: to describe the principal techniques for evaluating the time-intensity curve in DCE-MRI, with a focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization chosen for a classic bi-compartmental kinetic model; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine learning techniques trained for the segmentation and classification of breast lesions.
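The classic bi-compartmental kinetic model referred to above is commonly the standard Tofts model. A minimal forward-model sketch (function and parameter names are illustrative, not taken from the thesis) shows the convolution that must be fitted pixel by pixel via nonlinear regression, which is why the procedure is computationally demanding:

```python
import numpy as np

def tofts_model(t, cp, ktrans, ve):
    """Standard Tofts model: tissue concentration from a plasma input cp(t).

    Ct(t) = Ktrans * integral_0^t cp(tau) * exp(-(Ktrans/ve) * (t - tau)) dtau
    t: time points (min, uniformly sampled); cp: plasma concentration;
    ktrans: transfer constant (1/min); ve: extravascular-extracellular fraction.
    """
    dt = t[1] - t[0]                      # assumes uniform sampling
    kernel = np.exp(-(ktrans / ve) * t)   # impulse response (up to Ktrans)
    return ktrans * np.convolve(cp, kernel)[:len(t)] * dt
```

Pixel-wise estimation then amounts to minimizing the residual between this curve and the measured concentration (e.g. with a nonlinear least-squares routine) at every pixel, whose result depends on noise level and initial guesses, as noted above.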
Resumo:
Particulate matter is one of the main atmospheric pollutants, with great chemical and environmental relevance. Improving knowledge of the sources of particulate matter and of their apportionment is needed to handle and fulfil the legislation regarding this pollutant, and to support the further development of air policy as well as air pollution management. Various instruments have been used to understand the sources of particulate matter and atmospheric radiotracers at the site of Mt. Cimone (44.18° N, 10.7° E, 2165 m asl), hosting a global WMO-GAW station. Thanks to its characteristics, this location is suitable for investigating the regional and long-range transport of polluted air masses against the background of the Southern European free troposphere. In particular, PM10 data sampled at the station in the period 1998-2011 were analyzed in the framework of the main meteorological and territorial features. A receptor model based on back trajectories was applied to study the source regions of particulate matter. Simultaneous measurements of the atmospheric radionuclides Pb-210 and Be-7, acquired together with PM10, have also been analysed to gain a better understanding of the vertical and horizontal transports able to affect atmospheric composition. Seasonal variations of the atmospheric radiotracers have been studied both by analysing the long-term time series acquired at the measurement site and by means of a state-of-the-art global 3-D chemistry and transport model. The advection patterns characterizing the circulation at the site have been identified by means of clusters of back trajectories. Finally, the results of a source apportionment study of particulate matter carried out in a midsize town of the Po Valley (currently recognised as one of the most polluted European regions) are reported.
An approach exploiting different techniques, and in particular different kinds of models, successfully achieved a characterization of the processes/sources of particulate matter at the two sites, and of atmospheric radiotracers at the site of Mt. Cimone.
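Receptor models based on back trajectories typically redistribute the concentration measured at the receptor over the grid cells crossed by each arriving trajectory. A minimal concentration-weighted-trajectory (CWT-style) sketch, with illustrative data structures that are assumptions rather than the thesis's actual implementation:

```python
from collections import defaultdict

def concentration_weighted_trajectory(trajectories, grid=1.0):
    """Back-trajectory receptor model (CWT-type sketch).

    trajectories: list of (concentration, points), where `concentration` is the
    value measured at the receptor on arrival and `points` is the list of
    (lat, lon) positions along the back trajectory; each point counts as one
    time step of residence in its grid cell.
    Returns {cell: residence-time-weighted mean concentration}, where high
    values flag likely source regions.
    """
    weighted = defaultdict(float)   # sum of c_i * tau_i per cell
    residence = defaultdict(float)  # sum of tau_i per cell
    for conc, points in trajectories:
        for lat, lon in points:
            cell = (int(lat // grid), int(lon // grid))
            weighted[cell] += conc
            residence[cell] += 1.0
    return {cell: weighted[cell] / residence[cell] for cell in weighted}
```

Cells repeatedly crossed by trajectories that arrive with high PM10 loadings receive high scores, pointing to candidate source regions of the measured particulate matter.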
Resumo:
Polymeric membranes represent a promising technology for gas separation processes, thanks to their low cost, reduced energy consumption and limited waste production. The present thesis aims at studying the transport properties of two membrane materials suitable for CO2 purification applications. In the first part, a polyimide, Matrimid 5218, has been thoroughly investigated, with particular reference to the effects of thermal treatment, aging and the presence of water vapor on the gas transport process. Permeability measurements showed that thermal history significantly affects the diffusion of gas molecules across the membrane, also influencing the stability of the separation performance. Subsequently, the effect of water on the transport properties of Matrimid has been characterized for a wide set of incondensable penetrants. A monotonous reduction of permeability took place with increasing water concentration within the polymer matrix, affecting the investigated gaseous species to the same extent despite their different thermodynamic and kinetic features. In this view, a novel empirical model based on the Free Volume Theory has been proposed to qualitatively describe the phenomenon. Moreover, given its accurate representation of the experimental data, the suggested approach has been combined with a more rigorous thermodynamic tool (the NELF model), allowing an exhaustive description of the influence of water on the individual parameters contributing to gas permeation across the membrane. In the second part, the study has focused on the synthesis and characterization of facilitated transport membranes, capable of achieving outstanding separation performance thanks to the chemical enhancement of CO2 permeability. In particular, the transport properties have been investigated for high-pressure CO2 separation applications, and specific solutions have been proposed to solve the stability issues that frequently arise under such severe conditions.
Finally, the effect of different process parameters has been investigated, aiming at the identification of the optimal conditions capable of maximizing the separation performance.
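Free Volume Theory relates permeability to the fractional free volume (FFV) through a Doolittle-type exponential. A one-line sketch of this generic relation (the thesis's own empirical model is not reproduced; A and B are penetrant-dependent fitting constants) illustrates why water occupying part of the free volume depresses the permeability of every penetrant:

```python
import math

def free_volume_permeability(ffv, A, B):
    """Generic Doolittle-type free volume relation: P = A * exp(-B / FFV).

    ffv: fractional free volume available to the penetrant; water sorbed in
    the matrix effectively reduces ffv, and hence P, for all gases alike.
    A, B: penetrant-dependent empirical constants (illustrative only).
    """
    return A * math.exp(-B / ffv)
```

Because the dependence on FFV is monotonic, any reduction of available free volume lowers P regardless of the penetrant, consistent with the uniform permeability reduction reported above.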