22 results for Formal and Material Limits

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Publisher:

Abstract:

In the last decade, considerable attention has been devoted to the rewarding use of Green Chemistry in various synthetic processes and applications. Green Chemistry is of special interest in the synthesis of expensive pharmaceutical products, where the adoption of "green" reagents and conditions is highly desirable. Our project focused on the search for new green radical processes that might also find useful applications in industry. In particular, we explored the adoption of green solvents in radical thiol-ene and thiol-yne coupling reactions, which to date have normally been performed in "ordinary" organic solvents such as benzene and toluene, with the primary aim of applying these coupling reactions to the construction of biological substrates. We additionally tuned the reaction conditions to enable the preparation of highly functionalised materials and/or complex bioconjugates via homo/hetero-sequences. Furthermore, we performed theoretical studies to gain chemical insight into the mechanistic implications of using green solvents in radical thiol-yne coupling reactions.

Abstract:

Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmer's responsibility to explicitly manage memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages have been proposed that work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm to modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de facto standard for shared-memory programming. In the first part, the cost of algorithms for synchronization and data partitioning is analyzed, and these algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders of magnitude of speedup and energy efficiency compared to the "pure software" version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability, and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system.
Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with them.
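The data-partitioning cost mentioned above is easiest to see in the simplest OpenMP policy, static scheduling, which hands each thread one contiguous chunk of the iteration space. The sketch below (in Python for illustration; the function name is ours, not an OpenMP API) computes those chunk bounds:

```python
def static_chunks(n_iters, n_threads):
    """Split n_iters loop iterations into one contiguous chunk per
    thread, mimicking an OpenMP static schedule with default chunking."""
    base, rem = divmod(n_iters, n_threads)
    chunks, start = [], 0
    for t in range(n_threads):
        size = base + (1 if t < rem else 0)  # spread the remainder
        chunks.append((start, start + size))
        start += size
    return chunks

# 10 iterations over 4 threads: the first two threads get 3 each
print(static_chunks(10, 4))  # -> [(0, 3), (3, 6), (6, 8), (8, 10)]
```

On a scratchpad-based many-core, a closed-form partition like this is attractive precisely because each core can compute its own bounds without any synchronization.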

Abstract:

The main contribution of this thesis is the proposal of novel strategies for the selection of the parameters arising in variational models employed for the solution of inverse problems with data corrupted by Poisson noise. In light of the importance of using a significantly small dose of X-rays in Computed Tomography (CT), and the consequent need for advanced reconstruction techniques due to the high level of noise in the data, we focus on parameter selection principles especially for low photon counts, i.e. low-dose CT. For completeness, since such strategies can be adopted in various scenarios where the noise in the data typically follows a Poisson distribution, we also show their performance in other applications such as photography, astronomical imaging, and microscopy. More specifically, in the first part of the thesis we focus on low-dose CT data corrupted only by Poisson noise, extending automatic selection strategies designed for Gaussian noise and improving the few existing ones for Poisson noise. The new approaches are shown to outperform the state-of-the-art competitors, especially in the low-count regime. Moreover, we extend the best-performing strategy to the hard task of multi-parameter selection, with promising results. Finally, in the last part of the thesis, we introduce the problem of material decomposition for hyperspectral CT, whose data encode how the different materials in the target attenuate X-rays in different ways according to the specific energy. We conduct a preliminary comparative study aimed at accurate material decomposition starting from few noisy projection data.
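As a concrete reference point, one classical selection rule for Poisson data (a standard baseline, not the thesis's novel strategies) is a discrepancy principle on the generalized Kullback-Leibler fidelity, whose expected value under Poisson noise is roughly half the number of data points. A minimal numpy sketch with illustrative function names:

```python
import numpy as np

def kl_fidelity(y, b):
    """Generalized Kullback-Leibler divergence between observed counts y
    and model b -- the natural data-fidelity term for Poisson noise."""
    y, b = np.asarray(y, float), np.asarray(b, float)
    ratio = np.where(y > 0, y / np.maximum(b, 1e-300), 1.0)  # 0*log0 := 0
    return float(np.sum(b - y + y * np.log(ratio)))

def select_lambda(lams, fidelities, n):
    """Discrepancy-style rule (illustrative): pick the largest lambda
    whose KL fidelity does not exceed the expected level n/2."""
    ok = [l for l, f in zip(lams, fidelities) if f <= n / 2]
    return max(ok) if ok else min(lams)
```

Here `fidelities` would be the KL residuals of reconstructions precomputed for each candidate `lams` value; the rule favors the strongest regularization still compatible with the noise level.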

Abstract:

This PhD thesis reports on car fluff management, recycling, and recovery. Car fluff is the residual waste produced by car recycling operations, particularly from hulk shredding. Car fluff, also known as Automotive Shredder Residue (ASR), is made of plastics, rubbers, textiles, metals, and other materials, and it is very heterogeneous both in composition and in particle size. In fact, fines may amount to about 50%, making it difficult to sort out recyclable materials or to exploit the heat value of ASR through energy recovery. This three-year study started with an assessment of the state of the art of Italian End-of-Life Vehicle (ELV) recycling. A national recycling trial revealed the Italian recycling rate to be around 81% in 2008, while the European Community recycling target is set at 85% by 2015. Consequently, within the Industrial Ecology framework, a life cycle assessment (LCA) was conducted, revealing that sorting and recycling the polymers and metals contained in car fluff, followed by recovering the residual energy, is the route with the best environmental outlook. This result guided the second-year investigation, which involved pyrolysis trials on pretreated ASR fractions aimed at identifying processes suitable for an industrial-scale ASR treatment plant. Sieving followed by flotation gave good results in the thermochemical conversion of polymers, with polyolefins showing excellent conversion rates. This finding triggered ecodesign considerations. Ecodesign, together with LCA, is one of the pillars of Industrial Ecology; it consists of design for recycling and design for disassembly, both aimed at improving the dismantling speed of car components and at substituting non-recyclable materials. Finally, during the last year, innovative plants and technologies for metal recovery from car fluff were visited and tested worldwide in order to design a new car fluff treatment plant aimed at ASR energy and material recovery.

Abstract:

This dissertation adopts a multidisciplinary approach to investigate graphical and formal features of Cretan Hieroglyphic and Linear A. Drawing on theories that understand inscribed artefacts as an interplay of materials, iconography, and texts, I combine archaeological and philological considerations with statistical and experimental observations. The work is organized around three key questions. The first concerns the origins of Cretan Hieroglyphic. After providing a fresh view of Prepalatial seal chronology, I identify a number of forerunners of Hieroglyphic signs among iconographic motifs attested in Prepalatial glyptic and material culture. I further identify a specific style group, the 'Border and Leaf Complex', as the decisive step towards the emergence of the Hieroglyphic graphic repertoire. The second question concerns the interweaving of the formal, iconographic, and epigraphic features of Hieroglyphic seals with the sequences they bear and the contexts of their use. By means of two Correspondence Analyses, I show that the iconography on seals of certain materials and shapes is closer to Cretan Hieroglyphic than that on others. Through two Social Network Analyses, I show that Hieroglyphic impressions, especially at Knossos, follow a precise sealing pattern determined by their shapes and sequences. Furthermore, prisms with a high number of inscribed faces adhere to the formal features of jasper ones. Finally, through experimental engravings, I show differences in cutting rates among materials, as well as the efficiency of the abrasives and tools unearthed in Quartier Mu. The third question concerns overlaps in chronology, findspots, and signaries between Cretan Hieroglyphic and Linear A. I discuss all possible earliest instances of both scripts and argue that some items are datable to the MM I-IIA period. I further provide an insight into the Hieroglyphic-Linear A dubitanda and criteria for their interpretation.
Finally, I suggest four different patterns in the creation and diversification of the two signaries.
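Correspondence Analysis of this kind reduces a contingency table (here it would cross seal materials/shapes against iconographic or script classes) to a few axes via an SVD of standardized residuals. A generic numpy sketch of the method, not the dissertation's actual data or software:

```python
import numpy as np

def correspondence_analysis(table):
    """Correspondence Analysis of a contingency table: the SVD of the
    standardized residuals gives row coordinates and per-axis inertia."""
    P = np.asarray(table, float)
    P = P / P.sum()                          # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)      # row and column masses
    expected = np.outer(r, c)                # independence model
    S = (P - expected) / np.sqrt(expected)   # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * sv) / np.sqrt(r)[:, None]    # principal row coordinates
    return rows, sv ** 2                     # coordinates, inertia per axis
```

When rows and columns are statistically independent the residual matrix vanishes and all inertias are zero; large leading inertias signal a strong association, which is what plotting the first two axes visualizes.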

Abstract:

Computer aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the device physics; however, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design within commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from nonlinear effects (intrinsic effects). Thus an empirical active device model is generally described by an extrinsic linear part, which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task circuit designers face is evaluating the ultimate potential of a device for specific applications: once the technology has been selected, the designer must choose the best device for the particular application and for the different blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of devices of different sizes, good scalability properties of the model are required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, suitable measurements are performed in order to identify it. Hence, the correct measurement of the device nonlinear characteristics (in the characterization phase) and their reconstruction (in the identification and simulation phases) are two of the most important aspects of empirical modelling.
This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and of the reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should possibly maintain the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks together with simple linear scaling rules cannot provide accurate scalable models, the literature offers either complicated technology-dependent scaling rules or computationally inefficient distributed models. This thesis shows how these problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects occurring in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, identified by means of an EM simulation of the device layout, allowing for better frequency extrapolation and scalability than conventional empirical models. Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use within empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements.
According to this criterion, nonlinear empirical device modelling can be carried out by using, in the sampled voltage domain, typical methods of time-domain sampling theory.
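The sampling-theory analogy can be made concrete: a nonlinear characteristic measured on a uniform grid of voltages can be interpolated between samples exactly like a band-limited time signal. A numpy sketch under those assumptions (uniform grid, nearly band-limited characteristic); the function name is illustrative, not the thesis's algorithm:

```python
import numpy as np

def sinc_reconstruct(v_grid, i_meas, v_query):
    """Whittaker-Shannon style reconstruction of a device characteristic
    i(v) from measurements i_meas on a uniform voltage grid v_grid,
    treating the voltage axis as the sampling-theory 'time' axis."""
    v_grid = np.asarray(v_grid, float)
    dv = v_grid[1] - v_grid[0]                        # uniform spacing
    k = (np.asarray(v_query, float)[:, None] - v_grid[None, :]) / dv
    return np.sinc(k) @ np.asarray(i_meas, float)     # kernel x samples
```

By construction the interpolant passes exactly through the measured samples (the sinc kernel is 1 at its own sample and 0 at every other grid point), which is the property a table look-up model needs.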

Abstract:

A prevalent claim is that we live in a knowledge economy. When we talk about the knowledge economy, we generally mean the concept of a "knowledge-based economy", indicating the use of knowledge and technologies to produce economic benefits. Knowledge is thus both the tool and the raw material (people's skills) for producing some kind of product or service. In this kind of environment, economic organization is undergoing several changes: for example, authority relations are less important, legal and ownership-based definitions of the boundaries of the firm are becoming irrelevant, and there are only few constraints on the set of coordination mechanisms. Hence what characterises a knowledge economy is the growing importance of human capital in productive processes (Foss, 2005) and the increasing knowledge intensity of jobs (Hodgson, 1999). Economic processes are also highly intertwined with social processes: they are likely to be informal and reciprocal rather than formal and negotiated. Another important point is the problem of the division of labor: as economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, the task of dividing the job and assigning it to the most appropriate individuals becomes arduous, a "supervisory problem" (Hodgson, 1999) emerges, and traditional hierarchical control may prove increasingly ineffective. Not only does the specificity of know-how make it awkward to monitor the execution of tasks; more importantly, top-down integration of skills may be difficult because 'the nominal supervisors will not know the best way of doing the job – or even the precise purpose of the specialist job itself – and the worker will know better' (Hodgson, 1999). We therefore expect that the organization of the economic activity of specialists should be, at least partially, self-organized. The aim of this thesis is to bridge studies from computer science, and in particular from Peer-to-Peer (P2P) networks, to organization theories.
We think that the P2P paradigm fits well with organization problems in all those situations in which a central authority is not possible. We believe that P2P networks show a number of characteristics similar to firms working in a knowledge-based economy, and hence that the methodology used for studying P2P networks can be applied to organization studies. There are three main characteristics we think P2P networks have in common with firms involved in the knowledge economy:
- Decentralization: in a pure P2P system every peer is an equal participant; there is no central authority governing the actions of the single peers;
- Cost of ownership: P2P computing implies shared ownership, reducing the cost of owning the systems and the content, and the cost of maintaining them;
- Self-organization: the process in a system leading to the emergence of global order within the system without the presence of another system dictating this order.
These characteristics are also present in the kind of firm that we try to address, and that is why we have transferred the techniques we adopted in computer science studies (Marcozzi et al., 2005; Hales et al., 2007 [39]) to management science.
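The kind of self-organization meant here can be illustrated with the classic gossip averaging protocol studied in the P2P literature: pairs of peers repeatedly average their local values, and global agreement emerges with no coordinator. A minimal sketch (parameters are illustrative):

```python
import random

def gossip_average(values, rounds=300, seed=1):
    """Decentralized averaging: repeatedly pick two random peers and
    move both to their mean. Consensus on the global average emerges
    without any central coordinator dictating the order."""
    rng = random.Random(seed)
    vals = list(map(float, values))
    for _ in range(rounds):
        i, j = rng.sample(range(len(vals)), 2)  # two distinct peers
        vals[i] = vals[j] = (vals[i] + vals[j]) / 2
    return vals
```

Each exchange preserves the sum of the system while shrinking the spread between peers, which is the "global order from local interactions" property the thesis borrows for organization theory.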

Abstract:

Recently, an ever increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by the demand for systems with high performance in terms of quality of the products/services generated, productivity, efficiency, and low costs in design, realization, and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy boxed products such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated that around 350 types of manufacturing machine exist. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often, this is the case in large-scale systems organized in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; directing the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support for the maintenance operations of the machine. The kinds of facilities that designers can find directly on the market, in terms of software component libraries, provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have been adopted for a long time, at least for multivariable and/or nonlinear controllers of complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives, and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very "unstructured". No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), again leading to deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability, and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies.
Industrial automation has lately been receiving this approach, as testified by the IEC 61131-3 and IEC 61499 standards, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. Over recent years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As concerns logic control design, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability, and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery, and safety management. Indeed, together with high performance, in complex systems fault occurrences increase. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance.
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, an important contribution to formal verification of logic control, fault diagnosis, and fault tolerant control derives from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2, a survey of the state of the art of software engineering paradigms applied to industrial automation is presented. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5, a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4, and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
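The Discrete Event Systems machinery behind such diagnosers can be illustrated with one primitive: tracking the set of states a plant may be in, given an observed event and possible unobservable fault transitions. A toy Python sketch (the plant model below is invented for illustration):

```python
def diagnoser_step(states, event, trans, unobs_events):
    """One observer/diagnoser step over a finite automaton given as a
    dict trans[(state, event)] -> state. The observable 'event' fires,
    then the estimate is closed under unobservable (e.g. fault) moves."""
    nxt = {trans[(s, event)] for s in states if (s, event) in trans}
    frontier = list(nxt)
    while frontier:                         # unobservable-reach closure
        s = frontier.pop()
        for e in unobs_events:
            t = trans.get((s, e))
            if t is not None and t not in nxt:
                nxt.add(t)
                frontier.append(t)
    return nxt

# Toy plant: the fault 'f' is unobservable, so after observing 'start'
# the diagnoser cannot tell the healthy and faulty runs apart.
TRANS = {('idle', 'start'): 'run', ('run', 'f'): 'run_faulty'}
print(diagnoser_step({'idle'}, 'start', TRANS, {'f'}))
```

A full diagnoser then labels each estimated state set as normal, faulty, or uncertain; detecting a fault amounts to the estimate containing only post-fault states.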

Abstract:

The concept of "sustainability" relates to sustaining human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship, and practices that conserve resources so that growth and development can be sustained in the long term without degrading the environment, are indispensable in a developed society. Past, current, and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe, as well as in the U.S., are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post-construction properties of traditional HMA. Lowering the production temperature reduces fuel usage and emissions, thereby improving conditions for workers and supporting sustainable development. Crumb-rubber modifier (CRM), made from shredded automobile tires and used in the United States since the mid-1980s, has likewise proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is relevant not only from an environmental standpoint but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project aims to demonstrate the dual value of these asphalt mixes with regard to environmental and mechanical performance, and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design, but it cannot be the only step.
The eco-compatible approach should also be extended to the design method and the material characterization, because only with these phases is it possible to exploit the maximum potential of the materials used. Appropriate asphalt concrete characterization is essential for realistic performance prediction of asphalt concrete pavements. Volumetric (mix design) and mechanical (permanent deformation and fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to use the material correctly. A design method based on a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under different traffic and environmental conditions, was the application of choice. In particular, this study focuses on CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to surface cracking and rutting, respectively. It works in increments of time and, using the output of one increment recursively as input to the next, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting, and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between the surface layer and the pavement structure in terms of fatigue and permanent deformation under defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as a surface layer of 60 mm thickness. The performance of this pavement was compared to that of the same pavement structure with different kinds of asphalt concrete used as the surface layer. In comparison to a conventional asphalt concrete, three eco-friendly materials were analyzed: two warm mix asphalts and a rubberized asphalt concrete.
The first two chapters summarize the steps needed to satisfy the sustainable pavement design procedure. In Chapter I, the problem of eco-compatible asphalt pavement design is introduced; the low environmental impact materials, Warm Mix Asphalt and Rubberized Asphalt Concrete, are described in detail, and the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a thorough laboratory characterization based on appropriate materials selection and performance evaluation. In Chapter III, CalME is introduced through a specific explanation of the different design approaches it provides, with particular attention to the I-R procedure. In Chapter IV, the experimental program is presented with an explanation of the laboratory test devices adopted. The fatigue and rutting performances of the study mixes are shown in Chapters V and VI, respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage, and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage, and the rutting depth in each bound layer were analyzed.
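The I-R idea itself is compact: simulate in time increments and feed each increment's damaged layer state back in as the next increment's input. The toy loop below shows only that recursion; the damage law and every parameter are invented placeholders, not CalME's actual models:

```python
def incremental_recursive(n_steps, load, e0, k):
    """Toy incremental-recursive simulation: the modulus damaged in one
    increment is, recursively, the input of the next increment.
    load, e0 and k are illustrative placeholders, not CalME quantities."""
    e, damage, history = e0, 0.0, []
    for _ in range(n_steps):
        strain = load / e                    # response with current modulus
        damage = min(1.0, damage + k * strain)
        e = e0 * (1.0 - 0.5 * damage)        # modulus degrades with damage
        history.append((e, damage))
    return history

hist = incremental_recursive(5, load=100.0, e0=1000.0, k=0.5)
```

Because the damaged modulus raises the strain of the following increment, damage accumulation accelerates over time, which is the qualitative behaviour the recursive formulation is designed to capture.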

Abstract:

The use of stone, and the ways it was worked, have been very important in the vernacular architecture of the cross-border Carso. There this material represents an important legacy of centuries and is to a great extent typologically uniform. Stone was the main constituent of the local architecture, setting and shaping the human environment and incorporating the history of places through its specific symbolic and constructive language. The primary aim of this research is the recognition of the constructive rules and the values embedded in the rural architecture of the Carso through its use and processing of stone. Central to this investigation is the typological reading, aimed at analyzing the constructive language expressed by this legacy through the analysis of the relationship between type, technique, and material.

Resumo:

The work of the present thesis focuses on the implementation of microelectronic voltage-sensing devices for transmitting and extracting analog information between devices of different natures at short distance or upon contact. Initially, chip-to-chip communication was studied, and circuitry for 3D capacitive coupling was implemented. Such circuits allow communication between dies fabricated in different technologies; due to their novelty, they are not standardized and are currently not supported by standard CAD tools. To overcome this burden, a novel approach for the characterization of such communication links is proposed, resulting in shorter design times and increased accuracy. Communication between an integrated circuit (IC) and a probe card was also extensively studied. Wafer probing is today a costly test procedure with many drawbacks, which could be overcome by a different communication approach such as capacitive coupling; for this reason, wireless wafer probing was investigated as an alternative to standard on-contact wafer probing. Interfaces between integrated circuits and biological systems were also investigated. Active electrodes for simultaneous electroencephalography (EEG) and electrical impedance tomography (EIT) were implemented for the first time in a 0.35 µm process. The number of wires was minimized by sharing the analog outputs and the supply on a single wire, yielding electrodes that require only four wires for their operation; fewer wires reduce cable weight and thus limit the patient's discomfort. The physical channel for communication between an IC and a biological medium is the electrode itself.
As this is a crucial point for biopotential acquisition, a large effort was devoted to investigating different electrode technologies and geometries, and an electromagnetic model is presented to characterize the properties of the electrode-skin interface.
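The thesis' electromagnetic model is not reproduced in the abstract; as a point of reference, the electrode-skin interface is often approximated by a simple lumped equivalent circuit: a series resistance (gel and tissue) plus a parallel resistance-capacitance pair modelling the electrode double layer. A minimal sketch, with illustrative (not measured) component values:

```python
import math

def electrode_skin_impedance(freq_hz, r_s=1e3, r_p=100e3, c_p=20e-9):
    """Lumped equivalent circuit of the electrode-skin interface:
    Z(w) = R_s + R_p || (1 / (j*w*C_p)).
    r_s: series (tissue/gel) resistance, ohms
    r_p: double-layer resistance, ohms
    c_p: double-layer capacitance, farads
    Returns the complex impedance at the given frequency."""
    omega = 2.0 * math.pi * freq_hz
    z_parallel = r_p / (1.0 + 1j * omega * r_p * c_p)
    return r_s + z_parallel

# |Z| falls from roughly R_s + R_p at low frequency (capacitor open)
# toward R_s at high frequency (capacitor shorts out R_p).
z_low = abs(electrode_skin_impedance(1.0))
z_high = abs(electrode_skin_impedance(1e6))
```

This frequency-dependent drop is what makes the interface characterization matter for both EEG (low-frequency biopotentials) and EIT (injected currents at tens of kHz).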

Resumo:

This dissertation addresses the study of innovative design methods and manufacturing processes for the medium-to-large-series industrialization of structural components made of composite material. Research interest in this field is driven by the considerable advantages that materials with a high mechanical-performance-to-weight ratio offer, both in the pursuit of high performance in sports applications and in the reduction of fuel consumption and pollutant emissions in mass-produced transport vehicles. The study of composite components is characterized by the peculiarity that the design of the part geometry cannot be separated from that of the material and of the process; in this sense, the designer must synergistically combine competences in all three areas. The aim of this work is to propose a design and production methodology for structural components that allows optimal exploitation of the fibrous nature of the composite material, both in terms of load transfer between different components and in terms of the lamination process, which is carried out by automated tape placement. The study sets out to show to what extent this technology can potentially overcome shape constraints and the limits of mechanical efficiency of joints between parts, and can guarantee higher productivity and lower costs than the various production methods that today represent the state of the art of medium-to-large-series industrialization. Particular attention is devoted to the use of the technology under study for the production of automotive chassis.

Resumo:

The thesis offers an in-depth examination of the institution of implied judgment (giudicato implicito), both procedural and on the merits, from the twofold perspective of recent case law of the Court of Cassation, on the one hand, and the critical observations of legal scholarship, on the other. The candidate first dwells on the rationale of the recent judgments of the joint divisions (sezioni unite) of the Cassation, which promote a restrictive and residual interpretation of Art. 37 c.p.c. in light of the constitutional principle of reasonable duration of proceedings. The thesis then addresses the relationship between implied judgment on jurisdiction and constitutional procedural principles, devoting ample space to the critical reflections of legal scholarship on the theory of implied judgment. The candidate then examines implied judgment on preliminary questions of merits, after treating the logical-juridical order of questions and the structure of the decision. In the course of this analysis, he identifies in the principle of the most readily resolvable ground (ragione più liquida) the negation of the very idea of implied judgment, developing critical reflections on implied judgment on the merits. At this point, the investigation focuses on a specific aspect of the institution, particularly important for its practical implications: the consequences of implied judgment at the appellate stage. The concluding part of the thesis is devoted to the critical aspects of the recent orientation of the joint divisions, with particular regard to the burden of cross-appeal placed on the party that prevailed on the merits. The thesis highlights how the structure of these institutions is to some extent bent to deflationary needs, which should instead be pursued with other, more coherent instruments.

Resumo:

The need for new intermediates and products for screening in the fields of drug discovery and materials science is the driving force behind the development of new methodologies and technologies. Organic scaffolds are privileged targets for this scouting, and among them a priority place must be given to those containing nitrogen functionalities. It follows that new methodologies allowing the introduction of the nitrogen atom, whether for the synthesis of an established target or for curiosity-driven research, will always be welcome. The work of this PhD thesis is framed within this goal. Accordingly, Chapter 1 reports the preparation of a new N-heteroarylmethyl 3-carboxy-5-hydroxypiperidine scaffold as potential, selective α-glucosidase inhibitors; the proposed reversible uncompetitive mechanism of inhibition makes them attractive candidates for drug development. Chapter 2 is more environmentally method-driven research: eco-friendly studies on the synthesis of enantiomerically pure 1,4-dihydropyridines using "solid" ammonia (magnesium nitride) via the classical Hantzsch method are reported. Chapters 3 and 4 may be regarded as the core of the thesis' research work. Chapter 3 reports studies addressing the synthesis of N-containing heterocycles via an N-trialkylsilylimine/hetero-Diels-Alder (HDA) approach; an eco-friendly methodology, Microwave-Assisted Organic Synthesis (MAOS), was employed, reflecting our interest in sustainable chemistry, and theoretical calculations were carried out to fully clarify the reaction mechanism. Chapter 4 is dedicated to the most recent studies on the application of N-metallo-ketenimines (metallo = Si, Sn, Al), relatively new intermediates that are becoming very popular, to the preparation of highly functionalized N-containing derivatives, in accordance with the thesis' target.
The derivatives obtained are designed so that they could be of interest in the fields of drug and new-material chemistry.