901 results for Many-core systems


Relevance:

30.00%

Publisher:

Abstract:

Motion control is a sub-field of automation in which the position and/or velocity of machines are controlled using some type of device. In motion control, the position, velocity, force, and pressure profiles are designed so that the different mechanical parts work as a harmonious whole, in which perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is today plays an important role in achieving ever better performance, effectiveness, and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators, and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise, the synchronization among the different parts is lost, with all the resulting consequences. This thesis addresses the problem of trajectory reconstruction in the case of an event-triggered communication system. The most important feature a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency, and spatial consistency. Starting from the basic system composed of one master and one slave, and passing through systems made up of many slaves and one master, or many masters and one slave, the problems in profile reconstruction and temporal-property preservation, and subsequently in the synchronization of different profiles, are shown for networks adopting an event-triggered communication system. These networks are characterized by the fact that a common knowledge of the global time is not available; they are therefore non-deterministic networks. Each topology is analyzed, and the solution based on phase-locked loops proposed for the basic master-slave case is extended to cope with the other configurations.
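As a minimal sketch of the reconstruction idea, the loop below tracks a master's unknown update period purely from message arrival timestamps, which is the situation faced when no global clock is shared. The loop structure, gains, and numbers are illustrative assumptions, not the design developed in the thesis.

```python
def pll_track_period(arrival_times, kp=0.5, ki=0.1, initial_period=1.0):
    """Second-order software PLL: estimate the sender's update period
    from message arrival timestamps only (no shared global clock)."""
    period = initial_period
    phase = arrival_times[0]          # predicted time of the next message
    estimates = []
    for t in arrival_times[1:]:
        phase += period               # predict the next arrival
        error = t - phase             # phase error vs. actual arrival
        phase += kp * error           # proportional correction of the phase
        period += ki * error          # integral correction of the period
        estimates.append(period)
    return estimates

# A master transmitting every 2 ms, observed with small event-triggered jitter:
arrivals = [2.0 * k + (0.03 if k % 2 else -0.03) for k in range(50)]
est = pll_track_period(arrivals, initial_period=1.0)
```

With these gains the loop is well damped, so the period estimate settles close to the true 2 ms spacing despite the jitter; a slave can then interpolate received setpoints on its own clock.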


In recent years, Intelligent Tutoring Systems have proved a very successful way of improving the learning experience. Many issues must still be addressed before this technology can be considered mature. One of the main problems within Intelligent Tutoring Systems is the process of content authoring: knowledge acquisition and manipulation are difficult tasks because they require specialised skills in computer programming and knowledge engineering. In this thesis we discuss a general framework for knowledge management in an Intelligent Tutoring System and propose a mechanism based on first-order data mining to partially automate the acquisition of the knowledge to be used by the ITS during the tutoring process. Such a mechanism can be applied in Constraint-Based Tutors and in Pseudo-Cognitive Tutors. We design and implement a part of the proposed architecture, mainly the module for knowledge acquisition from examples based on first-order data mining. We then show that the algorithm can be applied to at least two different domains: first-order algebra equations and some topics of the C programming language. Finally, we discuss the limitations of the current approach and possible improvements to the whole framework.
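For readers unfamiliar with constraint-based tutors, the sketch below shows how such a tutor evaluates a solution: domain knowledge is a set of (relevance condition, satisfaction condition) pairs, and a constraint is violated when it is relevant to the student's answer but not satisfied by it. The toy constraints for equations of the form a·x + b = c are invented for illustration; they are not the rules mined by the thesis' algorithm.

```python
# Toy constraint base for solving a*x + b = c (illustrative, hand-written).
constraints = [
    # (name, relevance predicate, satisfaction predicate)
    ("isolate_x", lambda st: "x" in st,
                  lambda st: st["x"] == (st["c"] - st["b"]) / st["a"]),
    ("nonzero_a", lambda st: True,
                  lambda st: st["a"] != 0),
]

def violated(state):
    """Return the names of constraints that are relevant but unsatisfied."""
    return [name for name, rel, sat in constraints
            if rel(state) and not sat(state)]

# Student solves 2x + 3 = 7 and answers x = 2 (correct): no violations.
ok = violated({"a": 2, "b": 3, "c": 7, "x": 2.0})
# Student answers x = 5 (wrong): the isolate_x constraint fires.
bad = violated({"a": 2, "b": 3, "c": 7, "x": 5.0})
```

The point of mining rules automatically, as proposed in the thesis, is precisely to avoid writing such condition pairs by hand.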


Many different complex systems depend on a large number n of mutually independent random Boolean variables. The most useful representation for these systems (usually called complex stochastic Boolean systems, CSBSs) is the intrinsic order graph. This is a directed graph on 2^n vertices, corresponding to the 2^n binary n-tuples (u1, ..., un) ∈ {0,1}^n of 0s and 1s. In this paper, different duality properties of the intrinsic order graph are rigorously analyzed in detail. The results can be applied to many CSBSs arising from any scientific, technical or social area…
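As a sketch of the objects involved, the check below implements one common statement of the intrinsic order criterion from the CSBS literature (under the usual assumption 0 < p1 ≤ … ≤ pn ≤ 1/2): u ≥ v intrinsically iff, scanning the matrix with rows u and v from left to right, every (1,0) column can be matched with a distinct preceding (0,1) column. Treat this as an illustrative paraphrase, not a definitive statement of the paper's results.

```python
from itertools import product

def intrinsically_geq(u, v):
    """Left-to-right matching rule: every (1,0) column of the matrix [u; v]
    must be covered by a distinct (0,1) column appearing before it."""
    credit = 0                      # unmatched (0,1) columns seen so far
    for a, b in zip(u, v):
        if a == 0 and b == 1:
            credit += 1
        elif a == 1 and b == 0:
            if credit == 0:
                return False        # (1,0) column with no preceding (0,1)
            credit -= 1
    return True

# Build the comparability edges of the intrinsic order graph for n = 3:
n = 3
tuples = list(product((0, 1), repeat=n))    # the 2^n binary n-tuples
edges = [(u, v) for u in tuples for v in tuples
         if u != v and intrinsically_geq(u, v)]
```

For n = 3 this reproduces the classical fact that (0,1,1) and (1,0,0) are the only incomparable pair, while the all-zeros tuple dominates every vertex.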


The present PhD project focused on the development of new tools and methods for luminescence-based techniques. In particular, the ultimate goal was to bring substantial improvements to the currently available technologies for both research and diagnostics in the fields of biology, proteomics and genomics. Different aspects and problems were investigated, requiring different strategies and approaches. The work was thus divided into separate chapters, each based on the study of one specific aspect of luminescence: chemiluminescence, fluorescence and electrochemiluminescence.

Chapter 1, Chemiluminescence. The work on the luminol-enhancer solution led to a new luminol formulation with a detection limit for HRP one order of magnitude lower. This technology was patented under the Cyanagen brand and is now sold worldwide for Western blot and ELISA applications.

Chapter 2, Fluorescence. The work on dye-doped silica nanoparticles marks a new milestone in the development of nanotechnologies for biological applications. While the project is still in progress, preliminary studies on model structures are yielding very promising results. The improved brightness of these nano-sized objects, their simple synthesis and handling, and their low toxicity will soon turn them, we strongly believe, into a new generation of fluorescent labels for many applications.

Chapter 3, Electrochemiluminescence. The work on electrochemiluminescence produced interesting results that can potentially translate into great improvements from an analytical point of view. Ru(bpy)3 derivatives were employed both for on-chip microarrays (Chapter 3.1) and for microscopic imaging applications (Chapter 3.2). The development of these new techniques is still under investigation, but the results obtained confirm that the final goal can be achieved. Furthermore, the development of new ECL-active species (Chapters 3.3, 3.4 and 3.5) and their use in these applications can significantly improve overall performance, thus helping to spread ECL as a powerful analytical tool for routine techniques.

To conclude, the results obtained go a long way toward increasing the sensitivity of luminescence techniques, thus fulfilling the expectations we had at the beginning of this research work.


Chemistry can contribute, in many different ways, to solving the challenges we face in transforming our inefficient, fossil-fuel-based energy system. The present work was motivated by the search for efficient photoactive materials to be employed in the context of the energy problem: materials to be used in energy-efficient devices and in the production of renewable electricity and fuels. We presented a new class of copper complexes that could find application in lighting technologies, serving as luminescent materials in LEC, OLED and WOLED devices. These technologies may provide substantial energy savings in the lighting sector. Moreover, copper complexes have recently been used as light-harvesting compounds in dye-sensitized photoelectrochemical solar cells, which offer a viable alternative to silicon-based photovoltaic technologies. We also presented a few supramolecular systems containing fullerene, e.g. dendrimers, dyads and triads. The most complex among these arrays, which contain porphyrin moieties, are presented in the final chapter. They undergo photoinduced energy- and electron-transfer processes, also with long-lived charge-separated states, i.e. the fundamental processes powering artificial photosynthetic systems.


The aim of this PhD project has been the design and characterization of new, more efficient luminescent tools, in particular sensors and labels, for analytical chemistry, medical diagnostics and imaging. Indeed, both the increasing temporal and spatial resolution demanded by those fields and the sensitivity required to reach single-molecule detection can be provided by the wide range of techniques based on luminescence spectroscopy. As far as the development of new chemical sensors is concerned, as chemists we were interested in the preparation of new, efficient sensing materials. In this context, we continued to develop new molecular chemosensors for different classes of analytes by exploiting the supramolecular approach. In particular, we studied a family of luminescent tetrapodal hosts based on aminopyridinium units bearing pyrenyl groups for the detection of anions. These systems exhibited noticeable changes in their photophysical properties depending on the nature of the anion; in particular, addition of chloride resulted in a conformational change, giving an initial increase in excimer emission. A good selectivity for dicarboxylic acids was also found. In the search for higher sensitivity, we also turned our attention to systems capable of amplification effects. In this context, we described the metal-ion binding properties of three photoactive poly(arylene ethynylene) co-polymers with different complexing units and highlighted, for one of them, a ten-fold amplification of the response upon addition of Zn2+, Cu2+ and Hg2+ ions. In addition, we demonstrated the formation of complexes with Yb3+ and Er3+ and an efficient sensitization of their typical metal-centred NIR emission upon excitation of the polymer backbone, a feature of particular interest for possible applications in optical imaging and in optical amplification for telecommunications.

An amplification effect was also observed in silica nanoparticles derivatized with a suitable zinc probe. In this case we were able to prove, for the first time, that nanoparticles can work as “off-on” chemosensors with signal amplification. Fluorescent silica nanoparticles can thus be seen as innovative multicomponent systems in which the organization of photophysically active units gives rise to fruitful collective effects. These valuable effects can be exploited for biological imaging, medical diagnostics and therapeutics, as evidenced also by some results reported in this thesis. In particular, the observed amplification effect was obtained thanks to a suitable organization of molecular probe units on the surface of the nanoparticles. In the effort to gain deeper insight into the mechanisms leading to the final amplification effects, we also attempted to correlate the synthetic route with the final organization of the active molecules in the silica network, and thus with the mutual interactions that produce the emergent collective behaviour responsible for the desired signal amplification. In this context, we first investigated the formation of silica nanoparticles doped with a pyrene derivative and showed that the dyes are not uniformly dispersed inside the silica matrix; core-shell structures can thus form spontaneously in a one-step synthesis. Moreover, as far as the design of new labels is concerned, we reported a new synthetic approach to a class of robust, biocompatible silica core-shell nanoparticles with long-term stability. Taking advantage of this new approach, we also reported the synthesis and photophysical properties of core-shell NIR-absorbing and -emitting materials that proved very valuable for in-vivo imaging.

In general, the dye-doped silica nanoparticles prepared in the framework of this project combine unique properties: very high brightness, due to the possibility of including many fluorophores per nanoparticle; high stability, because of the shielding effect of the silica matrix; and, to date, no toxicity, together with a simple, low-cost preparation. All these features make these nanostructures suitable for reaching the low detection limits nowadays required for effective clinical and environmental applications, thus fulfilling the initial expectations of this research project.
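The brightness argument above can be put in back-of-the-envelope form: the relative brightness of a label scales roughly as the product of the number of emitters, their molar absorption coefficient, and their emission quantum yield, which is why packing many dyes into one particle raises per-label brightness even if some self-quenching lowers the yield. The figures below are placeholders, not measured values from this work.

```python
def brightness(n_dyes, epsilon, quantum_yield):
    """Relative brightness of a label carrying n_dyes identical fluorophores
    (ignores inner-filter effects and scattering; a deliberate simplification)."""
    return n_dyes * epsilon * quantum_yield

single_dye = brightness(1, 100_000, 0.5)      # one free dye in solution
nanoparticle = brightness(50, 100_000, 0.4)   # 50 dyes, reduced yield assumed
gain = nanoparticle / single_dye              # per-label brightness gain
```

Even with the assumed self-quenching penalty, the doped particle in this toy comparison is tens of times brighter than a single dye, which is the effect exploited for low detection limits.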


Many research fields are pushing the engineering of large-scale, mobile, and open systems towards the adoption of techniques inspired by self-organisation: pervasive computing, but also distributed artificial intelligence, multi-agent systems, social networks, and peer-to-peer and grid architectures exploit adaptive techniques to make global system properties emerge in spite of the unpredictability of interactions and behaviour. The same trend is visible in coordination models and languages, whenever a coordination infrastructure needs to manage interactions in highly dynamic and unpredictable environments. As a consequence, self-organisation can be regarded as a feasible metaphor for defining a radically new conceptual coordination framework. The resulting framework defines a novel coordination paradigm, called self-organising coordination, based on the idea of spreading coordination media over the network and charging them with services that manage interactions according to local criteria, so that desired and fruitful global coordination properties of the system emerge. Features like topology, locality, time-reactiveness, and stochastic behaviour play a key role both in the definition of this conceptual framework and in the consequent development of self-organising coordination services. According to this framework, the thesis presents several self-organising coordination techniques developed during the PhD course, mainly concerning data distribution in tuple-space-based coordination systems. Some of these techniques have also been implemented in ReSpecT, a coordination language for tuple spaces based on logic tuples and on reactions to events occurring in a tuple space. In addition, the key role played by simulation and formal verification has been investigated, leading to an analysis of how automatic verification techniques such as probabilistic model checking can be exploited to formally prove the emergence of desired behaviours in coordination approaches based on self-organisation. To this end, a concrete case study is presented and discussed.
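A toy version of the self-organising coordination idea (not ReSpecT itself, whose reactions are specified in a logic-based language) can make the "local rules, global effect" point concrete: each node hosts a local tuple space, and a purely local, stochastic diffusion rule moves tuples to random neighbours at every tick. Data spreads over the whole network even though no node knows the global topology. All names and parameters are illustrative assumptions.

```python
import random

def tick(spaces, neighbours, rate, rng):
    """One round: each tuple moves to a random neighbour with probability `rate`."""
    moves = []
    for node, tuples in spaces.items():
        for t in list(tuples):
            if rng.random() < rate:               # local, probabilistic choice
                moves.append((node, rng.choice(neighbours[node]), t))
    for src, dest, t in moves:                    # apply moves after the scan
        spaces[src].remove(t)
        spaces[dest].append(t)

rng = random.Random(42)
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}   # 6 nodes in a ring
spaces = {i: [] for i in range(6)}
spaces[0] = [("data", k) for k in range(60)]               # all data at node 0

for _ in range(200):
    tick(spaces, ring, 0.2, rng)

occupied = sum(1 for ts in spaces.values() if ts)          # nodes now holding data
```

Tuples are conserved but end up scattered across the ring: a global distribution property emerging from local criteria only, which is the kind of behaviour the thesis then verifies formally via probabilistic model checking.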


In the past decade, the study of superparamagnetic nanoparticles has been intensively developed for many biomedical applications, such as magnetically assisted drug delivery, MRI contrast agents, cell separation and hyperthermia therapy. All of these applications require nanoparticles with high magnetization, equipped with a suitable surface coating that must be non-toxic and biocompatible. In this master's thesis, the silica coating of commercially available magnetic nanoparticles was investigated. Silica is a versatile material with many useful intrinsic features, such as hydrophilicity and low toxicity; with proper design and derivatization, it yields colloids that are particularly stable even under physiological conditions. The coating process was applied to commercial magnetite particles dispersed in an aqueous solution. Silica-coated magnetite nanoparticles were prepared following two main strategies: the Stöber process, in which the silica coating is formed directly by hydrolysis and condensation of a suitable precursor in water-alcohol mixtures; and the reverse-microemulsion method, in which inverse micelles confine the hydrolysis and condensation reactions that lead to nanoparticle formation. Of the two, the reverse-microemulsion method proved the more versatile and reliable, because of the high level of control it offers over monodispersity, silica shell thickness and overall particle size. Moving from low to high concentration within the microemulsion region, a gradual shift from larger particles to smaller ones was detected. The silica shell thickness can also be tuned by increasing the amount of silica precursor. Fluorescent dyes were also incorporated within the silica shell by covalent linkage to the silica matrix. The structure of the nanoparticles was investigated using transmission electron microscopy (TEM) and dynamic light scattering (DLS). These techniques were used to monitor the synthetic procedures and for the final characterization of the silica-coated and dye-doped nanoparticles. Finally, field-dependent magnetization measurements showed that the magnetic properties of the core-shell nanoparticles were preserved. Thanks to a well-defined structure that combines magnetic and luminescent properties, together with the possibility of further functionalization, these multifunctional nanoparticles are potentially useful platforms in biomedical fields such as labeling and imaging.


Supramolecular self-assembly represents a key technology for the spontaneous construction of nanoarchitectures and for the fabrication of materials with enhanced physical and chemical properties. In addition, a significant asset of supramolecular self-assemblies rests on their reversible formation, thanks to the kinetic lability of their non-covalent interactions. This dynamic nature can be exploited for the development of “self-healing” and “smart” materials whose functional properties can be tuned by various external factors. One particularly intriguing objective in the field is to reach a high level of control over the shape and size of the supramolecular architectures, in order to produce well-defined functional nanostructures by rational design. In this direction, many investigations have been pursued toward the construction of self-assembled objects from numerous low-molecular-weight scaffolds, for instance by exploiting multiple directional hydrogen-bonding interactions. In particular, nucleobases have been used as supramolecular synthons because of their efficiency in coding for non-covalent interaction motifs. Among nucleobases, guanine is the most versatile, because its different H-bond donor and acceptor sites display self-complementary patterns of interaction. Interestingly, depending on the environmental conditions, guanosine derivatives can form various types of structures. Most of the supramolecular architectures obtained from guanosine derivatives reported in this Thesis require the presence of a cation, which stabilizes, via dipole-ion interactions, the macrocyclic G-quartet; the quartets can, in turn, stack in columnar G-quadruplex arrangements. In the absence of cations, guanosine can instead polymerize via hydrogen bonding to give a variety of supramolecular networks, including linear ribbons. This complex supramolecular behaviour makes guanine-guanine interactions the most interesting among all the homonucleobases studied. They have been the subject of intense investigation in areas ranging from structural biology and medicinal chemistry (guanine-rich sequences are abundant in the telomeric ends of chromosomes and in promoter regions of DNA, and are capable of forming G-quartet-based structures) to materials science and nanotechnology.

This Thesis, organized into five chapters, mainly describes some recent advances in the form and function provided by the self-assembly of guanine-based systems. Chapter 1 describes some of the many recent studies of G-quartets in the general area of nanoscience: natural G-quadruplexes can be useful motifs for building new structures and biomaterials such as self-assembled nanomachines, biosensors, therapeutic aptamers and catalysts. Chapters 2-4 set out the core concept of this PhD Thesis, i.e. the supramolecular organization of lipophilic guanosine derivatives with photo- or chemical addressability. Chapter 2 mainly focuses on the use of cation-templated guanosine derivatives as a potential scaffold for designing functional materials with tailored physical properties, showing a new way to control the bottom-up realization of well-defined nanoarchitectures. In section 2.6.7, the self-assembly properties of compound 28a may be considered an example of open-shell moieties ordered by a supramolecular guanosine architecture exhibiting a new (magnetic) property. Chapter 3 reports on ribbon-like structures, supramolecular architectures formed by guanosine derivatives that may be of interest for the fabrication of molecular nanowires within the framework of future molecular-electronics applications. In section 3.4 we investigate the supramolecular polymerization of derivatives dG 1 and G 30 by light-scattering techniques and TEM experiments. The data obtained reveal several levels of organization, due to the hierarchical self-assembly of the guanosine units into ribbons that in turn aggregate into fibrillar or lamellar soft structures. The elucidation of these structures explains the physical behaviour of guanosine units displaying organogelator properties. Chapter 4 describes photoresponsive self-assembling systems, whose self-assembly process and self-assembled architectures can be controlled by light as an external stimulus; numerous research examples have demonstrated that the use of photochromic molecules in supramolecular self-assemblies is the most reasonable way to noninvasively manipulate their degree of aggregation and their supramolecular architectures. In section 4.4 we report on the photocontrolled self-assembly of the modified guanosine nucleobase E-42: by introducing a photoactive moiety at C8, it is possible to exert photocontrol over the self-assembly of the molecule, so that the existence of G-quartets can be alternately switched on and off. In section 4.5 we focus on the use of cyclodextrins in photoresponsive host-guest assemblies: the αCD–azobenzene conjugates 47-48 (section 4.5.3) were synthesized in order to obtain a photoresponsive system with a finely photocontrollable degree of aggregation and self-assembled architecture. Finally, Chapter 5 contains the experimental protocols used for the research described in Chapters 2-4.


Synchronization is a key issue in any communication system, but it becomes fundamental in navigation systems, which are entirely based on estimating the time delay of the signals coming from the satellites. Thus, even though synchronization has been a well-known topic for many years, the introduction of new modulations and new physical-layer techniques in modern standards makes the traditional synchronization strategies completely ineffective. For this reason, the design of advanced and innovative synchronization techniques for modern communication systems, such as DVB-SH, DVB-T2, DVB-RCS, WiMAX and LTE, and for modern navigation systems, such as Galileo, has been the topic of this activity. Recent years have seen the consolidation of two trends: the introduction of Orthogonal Frequency Division Multiplexing (OFDM) in communication systems, and of the Binary Offset Carrier (BOC) modulation in modern Global Navigation Satellite Systems (GNSS). Particular attention has therefore been given to the investigation of synchronization algorithms in these areas.
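The delay-estimation operation underlying both families of systems can be sketched as a sliding correlation of the received samples against a local replica of a known code, with the delay read off the correlation peak. The short ±1 chip sequence and the noiseless channel below are illustrative simplifications, not a real DVB preamble or Galileo BOC signal.

```python
def xcorr_delay(received, replica):
    """Return the lag (in samples) that maximises the sliding correlation."""
    best_lag, best_val = 0, float("-inf")
    n = len(replica)
    for lag in range(len(received) - n + 1):
        val = sum(received[lag + i] * replica[i] for i in range(n))
        if val > best_val:                 # keep the correlation peak
            best_lag, best_val = lag, val
    return best_lag

# A +/-1 pseudo-random chip sequence, received shifted by 7 samples:
replica = [1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, 1]
received = [0] * 7 + replica + [0] * 5
delay = xcorr_delay(received, replica)
```

Real receivers refine this coarse search with tracking loops, and BOC signals add the complication of multiple correlation peaks, which is one reason the traditional strategies mentioned above break down.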


In territories where food production is mostly scattered across many small- or medium-sized, or even domestic, farms, large amounts of heterogeneous residues are produced every year, since farmers usually carry out several different activities on their properties. The amount and composition of farm residues therefore vary widely over the year, according to the particular production process under way. Coupling high-efficiency micro-cogeneration units with easily handled biomass conversion equipment, suitable for treating different materials, would provide many important advantages to farmers and to the community alike; increasing the feedstock flexibility of gasification units is therefore seen today as a paramount step towards their wide adoption in rural areas and as a real necessity for their use at small scale. Two main research topics were considered of primary concern for this purpose, and they are therefore discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. The work is accordingly divided into two main parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences that prevent the use of the same conversion unit for different materials.
To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different air-to-biomass ratios and taking downdraft gasification technology as the particular application examined. An attempt was made to relate the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to biomass properties such as elemental composition and ash and water contents. The novelty of this analytical approach lies in the use of ratios of kinetic constants to determine the oxygen distribution among the different oxidation reactions (for the volatile matter only), while the water-gas shift reaction was assumed to be at equilibrium in the gasification zone; this also links together the energy and mass balances involved in the process algorithm. Moreover, the main advantage of this analytical tool is the ease with which the input data for a particular biomass can be inserted into the model, so that a rapid evaluation of its thermo-chemical conversion properties can be obtained, based mainly on its chemical composition. The model results agreed well with literature and experimental data for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms assumed to regulate the main solid-conversion steps involved in the gasification process.
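One ingredient of such a model, the water-gas shift equilibrium CO + H2O ⇌ CO2 + H2 assumed in the gasification zone, can be sketched numerically. The temperature correlation used here is Moe's classical expression for the equilibrium constant, chosen as an illustrative assumption (the thesis' actual correlations are not reproduced), and the equilibrium extent is found by bisection on the mole balance.

```python
import math

def k_wgs(T_kelvin):
    """Moe's correlation for the water-gas shift equilibrium constant."""
    return math.exp(4577.8 / T_kelvin - 4.33)

def wgs_extent(n_co, n_h2o, n_co2, n_h2, T):
    """Moles of CO converted at equilibrium (equimolar shift: total moles cancel)."""
    K = k_wgs(T)

    def residual(x):
        # Positive when products exceed their equilibrium share.
        return (n_co2 + x) * (n_h2 + x) - K * (n_co - x) * (n_h2o - x)

    lo, hi = -min(n_co2, n_h2), min(n_co, n_h2o)   # keep every species >= 0
    for _ in range(80):                             # bisection on the residual
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Equimolar CO/steam feed at 1000 K: roughly half of the CO is shifted.
x = wgs_extent(n_co=1.0, n_h2o=1.0, n_co2=0.0, n_h2=0.0, T=1000.0)
```

Since the shift is exothermic, the equilibrium constant (and hence the conversion) falls as the gasification-zone temperature rises, which is one way temperature enters the model's gas-composition balance.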
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated to the kinetic rates (for pyrolysis and char gasification only) and to the heat- and mass-transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, so that temperature is the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is almost entirely achieved by radiative heat transfer from the hot reactor walls to the bed of material. For pyrolysis, instead, working temperature, particle size and the very nature of the biomass (through its pyrolysis heat) all have comparable weight in the development of the process, so that the corresponding time may be governed by any of these factors, according to the particular fuel being gasified and the conditions established inside the gasifier. The same analysis also led to an estimation of the reaction-zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be made. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit appears suitable for more than one biomass species.
Nevertheless, since the reactor diameters turned out to be quite similar for all the examined materials, a single unit could be envisaged for all of them by adopting the largest diameter and combining the maximum heights of each reaction zone, as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air-injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would allow the complete conversion of the solids in each case, without appreciably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio.

The second part of this work dealt with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially if multi-fuel gasifiers are to be used, more substantial gas cleaning lines must be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream, and suitable gas cleaning systems have to be designed accordingly. In this work, an overall study of the assessment of gas cleaning lines was carried out.
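The "largest diameter plus stacked maximum zone heights" sizing rule described above for a multi-fuel gasifier reduces to a simple worst-case calculation. The heights below are made-up placeholders for illustration, not the thesis' computed values (which gave a total of about 2400 mm).

```python
# Hypothetical per-fuel zone heights in mm; columns are the four reaction zones.
zone_heights_mm = {
    # fuel:          (heating, drying, pyrolysis, char gasification)
    "wood":          (250, 400, 300, 700),
    "wheat_straw":   (300, 350, 450, 600),
    "sewage_sludge": (280, 600, 350, 800),
}

zones = list(zip(*zone_heights_mm.values()))   # regroup heights zone by zone
worst_case = [max(z) for z in zones]           # tallest requirement per zone
total_height_mm = sum(worst_case)              # stack the per-zone maxima
```

Because each fuel dominates a different zone in this toy data (sludge needs the most drying and char space, straw the most pyrolysis space), the stacked total exceeds what any single fuel would need on its own, which is the price of a one-reactor-for-all design.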
Unlike other research efforts in the same field, the main aim is to define general arrangements of gas cleaning lines able to remove several contaminants from the gas stream, independently of the feedstock material and of the plant size. The contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three plant sizes, corresponding respectively to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performance was examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and of their consumption of energy and materials. The designed units were then combined into different overall gas cleaning line arrangements (paths), following technical constraints determined mainly from the performance analysis of the cleaning units and from the likely synergistic effects of contaminants on the correct working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be settled in the design of the paths was the removal of tars from the gas stream, to prevent filter plugging and/or the clogging of line pipes. To this end, a catalytic tar-cracking unit was envisaged as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar-cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a correspondingly large air consumption for this operation, was calculated in all cases.
Other difficulties had to be overcome in the abatement of alkali metals, which condense at temperatures lower than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines proved unachievable for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were unsuitable for the large-scale units, because of the large increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of a set of operational parameters, among which total pressure drops, total energy losses, number of units and secondary material consumption. On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption required by water scrubber units in the ammonia absorption process. This result is, however, connected to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid. The very high efficiency of this latter material is also remarkable. 
Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy content of the respective gas streams, the latter computed on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
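The path comparison described above (ranking cleaning-line arrangements by pressure drop, energy loss, unit count and secondary-material consumption) can be sketched as a simple weighted scoring exercise. All figures and weights below are illustrative placeholders, not the thesis data; the path names only echo the dry-versus-wet distinction made in the text.

```python
# Hypothetical ranking of gas cleaning line "paths" by the operational
# parameters named in the text: total pressure drop, total energy loss,
# number of units and secondary-material consumption. All figures and
# weights are illustrative placeholders, not the thesis data.

PATHS = {
    # name: (pressure_drop_kPa, energy_loss_kW, n_units, materials_kg_per_h)
    "dry (activated carbon + Nahcolite)": (12.0, 4.5, 6, 0.8),
    "wet (water scrubber)": (9.0, 6.0, 5, 15.0),  # scrubbing water counted as material
}

def score(params, weights=(1.0, 1.0, 0.5, 0.2)):
    """Lower is better: weighted sum of the four penalty terms."""
    return sum(w * p for w, p in zip(weights, params))

ranked = sorted(PATHS.items(), key=lambda kv: score(kv[1]))
for name, params in ranked:
    print(f"{name}: score = {score(params):.2f}")
```

With these made-up numbers the dry path ranks first, driven by the large water consumption penalty of the scrubber, which mirrors the qualitative conclusion reached in the text.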

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The present PhD thesis summarizes a three-year study on the neutronic investigation of a new-concept nuclear reactor, aiming at the optimization and sustainable management of nuclear fuel in a possible European scenario. A new-generation nuclear reactor for the nuclear renaissance is indeed desired by today's industrialized world, both to address the energy question arising from the continuously growing energy demand together with the corresponding reduction of oil availability, and to address the environmental question with a sustainable energy source free from long-lived radioisotopes, and therefore from geological repositories. Among the Generation IV candidate typologies, the Lead Fast Reactor concept has been pursued, being the one top-rated in sustainability. The European Lead-cooled SYstem (ELSY) was investigated first. The neutronic analysis of the ELSY core was performed via deterministic analysis by means of the ERANOS code, in order to arrive at a stable configuration for the overall design of the reactor. Further analyses were carried out by means of the Monte Carlo general-purpose transport code MCNP, in order to check the former results and to define an exact model of the system. An innovative system of absorbers was conceptualized and designed both for the reactivity compensation and regulation of the core over the cycle swing, and for safety, in order to guarantee the cold shutdown of the system in case of accident. Aiming at the sustainability of nuclear energy, the steady-state nuclear equilibrium has been investigated and generalized into the definition of the "extended" equilibrium state. 
According to this, the Adiabatic Reactor Theory has been developed, together with a New Paradigm for Nuclear Power: in order to design a reactor that does not exchange anything valuable with the environment (thus the term "adiabatic"), in the sense of both plutonium and minor actinides, it is indeed necessary to invert the logical design scheme of nuclear cores, starting from the definition of the equilibrium composition of the fuel and subordinating the whole core design to it. The New Paradigm has then been applied to the core design of an Adiabatic Lead Fast Reactor (ALFR) complying with the ELSY overall system layout. A complete core characterization has been done in order to assess criticality and power flattening; a preliminary evaluation of the main safety parameters has also been done to verify the viability of the system. Burn-up calculations have then been performed in order to investigate the operating cycle of the Adiabatic Lead Fast Reactor; the fuel performances have been extracted and inserted into a more general analysis for a European scenario. The present nuclear reactor fleet has been modeled and its evolution simulated by means of the COSI code, in order to investigate the material fluxes to be managed in the European region. Different plausible scenarios have been identified to forecast the evolution of European nuclear energy production, including one involving the introduction of Adiabatic Lead Fast Reactors, and compared to better analyze the advantages introduced by the adoption of new-concept reactors. Finally, since both ELSY and the ALFR represent new-concept systems based upon innovative solutions, the neutronic design of a demonstrator reactor has been carried out: such a system is intended to prove the viability of the technology to be implemented in the First-of-a-Kind industrial power plant, and to attest the general strategy to the largest possible extent. 
It was therefore chosen to base the DEMO design upon a compromise between the demonstration of developed technology and the testing of emerging technology, in order to serve the purpose of reducing uncertainties about construction and licensing, both by validating the main ELSY/ALFR features and performances and by qualifying numerical codes and tools.
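The "extended equilibrium" idea described above, i.e. a fuel composition that reproduces itself cycle after cycle when only uranium make-up is fed, can be illustrated with a toy fixed-point iteration. The 3x3 one-cycle transfer matrix below is entirely invented for illustration; it is not evaluated nuclear data and not the thesis model.

```python
import numpy as np

# Toy illustration of the "extended equilibrium" (adiabatic) idea: find the
# fuel composition that reproduces itself cycle after cycle when the burned
# mass is replaced by uranium feed only. The 3x3 one-cycle transfer matrix
# for (U238, Pu239, minor actinides) is invented, not evaluated nuclear data.
M = np.array([
    [0.94, 0.00, 0.00],   # U238 remaining after capture/fission
    [0.04, 0.80, 0.00],   # U238 -> Pu239 breeding; Pu239 remaining
    [0.00, 0.05, 0.90],   # Pu239 -> minor actinides; MA remaining
])
feed = np.array([1.0, 0.0, 0.0])    # make-up with (depleted) uranium only

x = np.array([1.0, 0.0, 0.0])       # start from pure U238
for _ in range(200):
    y = M @ x                        # burn one cycle
    y += feed * (x.sum() - y.sum())  # top up the burned mass with U feed
    x = y

# At the fixed point the Pu and MA inventories are self-sustained by U feed:
print(np.round(x, 4))
```

The iteration converges to a composition in which plutonium and minor actinides are bred and destroyed at equal rates, which is the sense in which an adiabatic core exchanges only uranium and fission products with the outside.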

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The treatment of Cerebral Palsy (CP) is considered the "core problem" of the whole field of pediatric rehabilitation. The reason why this pathology has such a primary role can be ascribed to two main aspects. First of all, CP is the most frequent form of disability in childhood (one new case per 500 live births (1)); secondly, the functional recovery of the "spastic" child is, historically, the clinical field in which the majority of therapeutic methods and techniques (physiotherapy, orthotics, pharmacology, orthopedic surgery, neurosurgery) were first applied and tested. The currently accepted definition of CP – a group of disorders of the development of movement and posture causing activity limitation (2) – is the result of a recent update by the World Health Organization to the language of the International Classification of Functioning, Disability and Health, from Ingram's original proposal of 1955 – a persistent but not unchangeable disorder of posture and movement (3). This definition considers CP a permanent ailment, i.e. a "fixed" condition, which however can be modified both functionally and structurally by the child's spontaneous evolution and by the treatments carried out during childhood. The lesion that causes the palsy occurs in a structurally immature brain in the pre-, peri- or post-natal period (but only during the first months of life). The most frequent causes of CP are: prematurity, insufficient cerebral perfusion, arterial haemorrhage, venous infarction, hypoxia of various origins (for example from the ingestion of amniotic fluid), malnutrition, infection and maternal or fetal poisoning. Traumas and malformations must also be included among these causes. The lesion, whether focal or spread over the nervous system, impairs the functioning of the Central Nervous System (CNS) as a whole. 
As a consequence, it affects the construction of the adaptive functions (4), first of all postural control, locomotion and manipulation. The palsy itself does not vary over time; however, it assumes an unavoidable "evolutionary" character when, during growth, the child is required to meet new and different needs through the construction of new and different functions. It is essential to consider that, clinically, CP is not only a direct expression of the structural impairment, that is, of etiology, pathogenesis and lesion timing, but is mainly the manifestation of the path followed by the CNS to "re"-construct the adaptive functions "despite" the presence of the damage. "Palsy" is "the form of the function that is implemented by an individual whose CNS has been damaged in order to satisfy the demands coming from the environment" (4). Therefore, only general relations can be established between the site, nature and size of the lesion on the one hand, and the palsy and the recovery processes on the other. It is quite common to observe that children with very similar neuroimaging can have very different clinical manifestations of CP and, on the other hand, that children with very similar motor behaviors can have completely different lesion histories. A very clear example is represented by the hemiplegic forms, which show bilateral hemispheric lesions in a high percentage of cases. The first section of this thesis is aimed at guiding the interpretation of CP. First of all, the issue of the detection of the palsy is treated from a historical viewpoint. Then, an extended analysis of the current, internationally accepted definition of CP is provided. The definition is outlined first in terms of a space dimension and then of a time dimension, and it is highlighted where this definition is unacceptably lacking. 
The last part of the first section further stresses the importance of shifting from the traditional concept of CP as a palsy of development (defect analysis) towards the notion of a development of the palsy, i.e., as the product of the relationship that the individual nonetheless tries to build dynamically with the surrounding environment (resource semeiotics), starting and growing from a different availability of resources, needs, dreams, rights and duties (4). In the scientific and clinical community, no common classification system of CP has so far been universally accepted. Besides, no standard operative method or technique has been acknowledged to effectively assess the different disabilities and impairments exhibited by children with CP. CP is still "an artificial concept, comprising several causes and clinical syndromes that have been grouped together for a convenience of management" (5). The lack of standard and common protocols able to effectively diagnose the palsy and, as a consequence, to establish specific treatments and prognoses, stems mainly from the difficulty of elevating this field to a level based on scientific evidence. A solution aimed at overcoming the currently incomplete treatment of children with CP is the systematic clinical adoption of objective tools able to measure motor defects and movement impairments. The widespread application of reliable instruments and techniques able to objectively evaluate both the form of the palsy (diagnosis) and the efficacy of the treatments provided (prognosis) constitutes a valuable method to validate care protocols, establish the efficacy of classification systems and assess the validity of definitions. Since the 1980s, instruments specifically oriented to the analysis of human movement have been advantageously designed and applied in the context of CP with the aim of measuring motor deficits and, especially, gait deviations. 
The gait analysis (GA) technique has been increasingly used over the years to assess, analyze, classify, and support the process of clinical decision making, allowing for a complete investigation of gait with increased temporal and spatial resolution. GA has provided a basis for improving the outcome of surgical and non-surgical treatments and for introducing a new modus operandi in the identification of defects and functional adaptations to musculoskeletal disorders. Historically, the first laboratories set up for gait analysis developed their own protocols (sets of procedures for data collection and data reduction) independently, according to the performance of the technologies available at the time. In particular, stereophotogrammetric systems, mainly based on optoelectronic technology, soon became the gold standard for motion analysis and have been successfully applied especially for scientific purposes. Nowadays optoelectronic systems have significantly improved their performance in terms of spatial and temporal resolution; however, many laboratories continue to use protocols designed around the technology available in the 1970s and now out of date. Furthermore, these protocols are not coherent either in their biomechanical models or in their collection procedures. In spite of these differences, GA data are shared, exchanged and interpreted irrespective of the adopted protocol, without full awareness of the extent to which these protocols are compatible and comparable with each other. Following the extraordinary advances in computer science and electronics, new systems for GA, no longer based on optoelectronic technology, are now becoming available. These are the Inertial and Magnetic Measurement Systems (IMMSs), based on miniature MEMS (microelectromechanical systems) inertial sensor technology. 
These systems are cost-effective, wearable and fully portable motion analysis systems; these features give IMMSs the potential both to be used outside specialized laboratories and to collect consecutive series of tens of gait cycles. The recognition and selection of the most representative gait cycle is then easier and more reliable, especially in children with CP, considering their relevant gait cycle variability. The second section of this thesis is focused on GA. In particular, it first examines the differences among the five most representative GA protocols, in order to assess the state of the art with respect to inter-protocol variability. The design of a new protocol is then proposed and presented, with the aim of performing gait analysis on children with CP by means of an IMMS. The protocol, named 'Outwalk', contains original and innovative solutions oriented at obtaining joint kinematics with calibration procedures that are extremely comfortable for the patients. The results of a first in-vivo validation of Outwalk on healthy subjects are then provided. In particular, this study was carried out by comparing Outwalk, used in combination with an IMMS, against a reference protocol and an optoelectronic system. In order to set up a more accurate and precise comparison of the systems and the protocols, ad hoc methods were designed, and an original formulation of the statistical parameter known as the coefficient of multiple correlation was developed and effectively applied. On the basis of the experimental design proposed for the validation on healthy subjects, a first assessment of Outwalk, together with an IMMS, was also carried out on children with CP. The third section of this thesis is dedicated to the treatment of walking in children with CP. Commonly prescribed treatments for addressing gait abnormalities in children with CP include physical therapy, surgery (orthopedic and rhizotomy), and orthoses. 
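The coefficient of multiple correlation mentioned above is commonly computed, for repeated gait waveforms, in the classical Kadaba-style form sketched below; note that the thesis develops its own original formulation, which is not reproduced here, and the sinusoidal test data are purely synthetic.

```python
import numpy as np

def cmc(waveforms):
    """Kadaba-style coefficient of multiple correlation for G gait cycles
    sampled over F frames; `waveforms` has shape (G, F)."""
    Y = np.asarray(waveforms, dtype=float)
    G, F = Y.shape
    frame_mean = Y.mean(axis=0)                       # mean curve across cycles
    within = ((Y - frame_mean) ** 2).sum() / (F * (G - 1))
    total = ((Y - Y.mean()) ** 2).sum() / (G * F - 1)
    return np.sqrt(1.0 - within / total)

# Five nearly identical (synthetic sinusoidal) cycles -> CMC close to 1
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 101)
cycles = np.vstack([np.sin(t) + 0.01 * rng.standard_normal(101)
                    for _ in range(5)])
print(f"CMC = {cmc(cycles):.4f}")
```

Values near 1 indicate that the cycles (or the curves produced by two protocols) are nearly identical in shape; dissimilar curves drive the within-frame variance toward the total variance and the CMC toward 0.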
The orthotic approach is conservative, being reversible, and is widespread in many therapeutic regimes. Orthoses are used to improve the gait of children with CP by preventing deformities, controlling joint position, and offering an effective lever for the ankle joint. Orthoses are prescribed with the additional aims of increasing walking speed, improving stability, preventing stumbling, and decreasing muscular fatigue. The ankle-foot orthosis (AFO), with a rigid ankle, is primarily designed to prevent equinus and other foot deformities, with a positive effect also on more proximal joints. However, AFOs prevent the natural excursion of the tibio-tarsal joint during the second rocker, hence hampering the natural leaning progression of the whole body under the effect of inertia (6). A new modular (submalleolar) astragalus-calcanear orthosis, named OMAC, has recently been proposed with the intention of replacing the prescription of AFOs in those children with CP who exhibit a flat and valgus-pronated foot. The aim of this section is thus to present the mechanical and technical features of the OMAC by means of an accurate description of the device. In particular, the full text of the deposited Italian patent is provided. A preliminary validation of the OMAC with respect to the AFO is also reported, as resulting from an experimental campaign on diplegic children with CP over a three-month period, aimed at quantitatively assessing the benefit provided by the two orthoses on walking and at qualitatively evaluating the changes in quality of life and motor abilities. As already stated, CP is universally considered a persistent but not unchangeable disorder of posture and movement. 
In contrast to this definition, some clinicians (4) have recently pointed out that movement disorders may be primarily caused by the presence of perceptive disorders, where perception is not merely the acquisition of sensory information, but an active process aimed at guiding the execution of movements through the integration of sensory information properly representing the state of one's body and of the environment. Children with perceptive impairments show an overall fear of moving and the onset of strongly unnatural walking schemes directly caused by the presence of perceptive system disorders. The fourth section of the thesis thus deals with accurately defining the perceptive impairment exhibited by diplegic children with CP. A detailed description of the clinical signs revealing the presence of the perceptive impairment is given, and a classification scheme of the clinical aspects of perceptual disorders is provided. Finally, a functional reaching test is proposed as an instrumental test able to disclose the perceptive impairment.

References
1. Prevalence and characteristics of children with cerebral palsy in Europe. Dev Med Child Neurol. 2002 Sep;44(9):633-640.
2. Bax M, Goldstein M, Rosenbaum P, Leviton A, Paneth N, Dan B, et al. Proposed definition and classification of cerebral palsy, April 2005. Dev Med Child Neurol. 2005 Aug;47(8):571-576.
3. Ingram TT. A study of cerebral palsy in the childhood population of Edinburgh. Arch Dis Child. 1955 Apr;30(150):85-98.
4. Ferrari A, Cioni G. The spastic forms of cerebral palsy: a guide to the assessment of adaptive functions. Milan: Springer; 2009.
5. Olney SJ, Wright MJ. Cerebral palsy. In: Campbell S, et al. Physical Therapy for Children. 2nd ed. Philadelphia: Saunders; 2000. p. 533-570.
6. Desloovere K, Molenaers G, Van Gestel L, Huenaerts C, Van Campenhout A, Callewaert B, et al. How can push-off be preserved during use of an ankle foot orthosis in children with hemiplegia? A prospective controlled study. Gait Posture. 2006 Oct;24(2):142-151.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The present work deals with the influence of chain branching of different topologies on the static properties of polymers. These investigations are carried out by means of Monte Carlo and molecular dynamics simulations. First, some theoretical concepts and models are introduced which allow the description of polymer chains on mesoscopic length scales. Important observables suitable for the quantitative characterization of branching structures in polymers are introduced and explained. The optimization techniques employed in the implementation of the computer program are also discussed. In addition to linear polymer chains, different topologies are investigated as a function of solvent quality: star polymers with a variable number of arms, the crossover from star polymers to linear polymers, chains with a variable number of side chains, regular dendrimers and hyperbranched structures. First, a thorough analysis of the simulation model is carried out on very long linear single chains. The scaling properties of the linear chains are investigated over the entire solvent-quality range, from good solvent down to largely collapsed chains in poor solvent. An important result of this work is the confirmation of the corrections to the scaling behavior of the hydrodynamic radius Rh. This result was possible thanks to the large chain lengths chosen and the high quality of the data obtained in this work, in particular for the linear chains, and it contradicts many previous simulation studies and experimental works. These corrections to scaling were demonstrated not only for the linear chains, but also for star polymers with different numbers of arms. 
For linear chains, the influence of polydispersity is investigated. It is shown that an unambiguous mapping of length scales between the simulation model and experiment is not possible, since the dimensionless quantity used for this purpose depends too weakly on the degree of polymerization of the chains. A comparison of simulation data with industrial low-density polyethylene (LDPE) shows that LDPE exists in the form of strongly branched chains. For regular dendrimers, a high degree of back-folding of the arms into the inner core region could be demonstrated.
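The corrections to the scaling of the hydrodynamic radius discussed above can be illustrated with a toy calculation: writing Rh(N) = a N^nu (1 + b N^(-Delta)), the effective exponent measured between finite chain lengths drifts toward the asymptotic nu only slowly. The good-solvent exponent nu = 0.588 is standard; the amplitudes a, b and the correction exponent Delta below are illustrative choices, not fitted values from the work.

```python
import numpy as np

# Toy illustration of corrections to scaling for the hydrodynamic radius:
# Rh(N) = a * N**nu * (1 + b * N**(-Delta)). nu = 0.588 is the good-solvent
# value; a, b and Delta are illustrative, not fitted data.
nu, Delta, a, b = 0.588, 0.5, 1.0, -0.6
N = np.array([64, 128, 256, 512, 1024, 2048, 4096], dtype=float)
Rh = a * N**nu * (1 + b * N**(-Delta))

# Effective exponent between successive chain lengths: it approaches nu
# only slowly, which is why very long chains and high-quality data are
# needed to resolve the correction term at all.
nu_eff = np.diff(np.log(Rh)) / np.diff(np.log(N))
for n, ne in zip(N[:-1].astype(int), nu_eff):
    print(f"N = {n:5d}: nu_eff = {ne:.4f}")
```

Even at N = 4096 the effective exponent still sits visibly above nu, which is why short-chain studies can easily miss the correction term altogether.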

Relevância:

30.00% 30.00%

Publicador:

Resumo:

A one-dimensional multi-component reactive fluid transport algorithm, 1DREACT (Steefel, 1993), was used to investigate different fluid-rock interaction systems. A major shortcoming of mass transport calculations that include mineral reactions is that the solid solutions occurring in many minerals are not treated adequately. Since many thermodynamic models of solid solutions are highly non-linear, this can seriously impact the stability and efficiency of the solution algorithms used. The phase petrology community faced a similar predicament ten years ago; to improve performance and reliability, phase equilibrium calculations have since been using pseudo-compounds. The same approach is used here, first with the complex plagioclase solid solution as an example. Thermodynamic properties of a varying number of intermediate plagioclase phases were calculated using ideal molecular, Al-avoidance, and non-ideal mixing models. These different mixing models can easily be incorporated into the simulations without modification of the transport code. Simulation results show that as few as nine intermediate compositions are sufficient to characterize the diffusional profile between albite and anorthite. Hence this approach is very efficient and can be used with little effort. A subsequent chapter reports the results of reactive fluid transport modeling designed to constrain the hydrothermal alteration of Paleoproterozoic sediments of the southern Lake Superior region. Field observations reveal that quartz-pyrophyllite (or kaolinite) bearing assemblages have been transformed into muscovite-pyrophyllite-diaspore bearing assemblages through the action of fluids migrating along permeable flow channels. Fluid-rock interaction modeling with an initial quartz-pyrophyllite assemblage and a K-rich fluid reproduces the observed mineralogical transformation. The bulk composition of the system evolves from an SiO2-rich one to an Al2O3+K2O-rich one. 
Simulations show that the fluid flow was up-temperature (i.e. recharge) and that the fluid was K-rich. The pseudo-compound approach to including solid solutions in reactive transport models was then tested by modeling the hydrothermal alteration of Icelandic basalts. Solid solutions of chlorites, amphiboles and plagioclase were included as the secondary mineral phases. Saline and fresh-water compositions of geothermal fluids were used to investigate the effect of salinity on alteration. The fluid-rock interaction simulations reproduce the observed mineral transformations and show that roughly the same alteration minerals are formed by reactions with both types of fluid, in agreement with the field observations. A final application is directed towards the remediation of nitrate-rich groundwaters. The removal of excess nitrate from groundwater by pyrite oxidation was modeled using the reactive fluid transport algorithm. Model results show that, when a pyrite-bearing, permeable zone is placed in the flow path, the nitrate concentration in infiltrating water can be significantly lowered, in agreement with proposals from the literature; this is due to nitrogen reduction. Several simulations investigate the efficiency of systems with different mineral reactive surface areas, reactive barrier zone widths, and flow rates in order to identify the optimum setup.
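The barrier calculation described above can be sketched, in a much-reduced form, as 1-D advective transport with first-order nitrate consumption inside the reactive zone. This is only an illustration of the kind of calculation a code like 1DREACT performs, not the code itself; the velocity, rate constant and geometry below are invented.

```python
import numpy as np

# Minimal explicit upwind scheme for 1-D advective transport with first-order
# nitrate consumption inside a pyrite-bearing reactive barrier. Velocity,
# rate constant and geometry are invented for illustration.
nx, dx = 200, 0.05                 # 10 m domain, 5 cm cells
v = 1.0e-5                         # pore-water velocity [m/s]
k = 1.0e-5                         # first-order NO3- consumption rate [1/s]
x = np.arange(nx) * dx
barrier = (x > 4.0) & (x < 6.0)    # 2 m reactive zone mid-domain
rate = np.where(barrier, k, 0.0)

c = np.zeros(nx)                   # normalized nitrate concentration
c_in = 1.0                         # inlet concentration
dt = 0.5 * dx / v                  # CFL-limited time step
for _ in range(2000):              # march to (near) steady state
    upstream = np.concatenate(([c_in], c[:-1]))
    c = c + dt * (-v * (c - upstream) / dx - rate * c)

print(f"outlet/inlet nitrate ratio: {c[-1] / c_in:.3f}")  # ~ exp(-k*L/v)
```

Widening the barrier, slowing the flow, or raising the effective rate (e.g. via a larger pyrite reactive surface area) all increase the exponent k*L/v and hence the nitrate removal, which is exactly the parameter space the simulations in the text explore.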