310 results for Automotive, Design, Automobile, Linee di carattere


Relevance:

30.00%

Publisher:

Abstract:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge, primarily for two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems.
• Multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and in device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are increasingly programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood, both in research and in industry, that system configuration and management cannot be controlled efficiently relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of a middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system.
A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.

Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs)

Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. The problem can be tackled by means of a specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. This is a well-known problem in the literature: optimization problems of this kind are very complex even in much-simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time.
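As a toy illustration of the allocation-and-scheduling problem just described, the sketch below runs a complete search over all task-to-processor mappings for a hypothetical four-task precedence graph on two processors (the instance is invented for the example, not taken from the thesis); unlike a heuristic, the enumeration provably finds the optimal makespan:

```python
from itertools import product

# Hypothetical toy instance: task durations and precedence constraints
# (preds[t] lists the tasks that must finish before t can start).
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def makespan(assignment):
    """Schedule tasks in topological order on their assigned processors
    and return the completion time of the last task."""
    finish = {}                      # task -> finish time
    proc_free = {0: 0, 1: 0}         # processor -> time it becomes free
    for t in ["A", "B", "C", "D"]:   # a topological order of the graph
        ready = max((finish[p] for p in preds[t]), default=0)
        start = max(ready, proc_free[assignment[t]])
        finish[t] = start + durations[t]
        proc_free[assignment[t]] = finish[t]
    return max(finish.values())

# Complete search: enumerate every mapping of the four tasks to the two
# processors and keep the one with the smallest makespan.
best = min(product([0, 1], repeat=4),
           key=lambda a: makespan(dict(zip("ABCD", a))))
print(makespan(dict(zip("ABCD", best))))  # optimal makespan: 9
```

Even this tiny instance shows the combinatorial blow-up: with n tasks and p processors the search space already has p^n mappings before scheduling decisions are considered, which is why the dissertation resorts to decomposition-based complete methods rather than brute force.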
Model simplification is often achieved by abstracting away platform implementation "details". As a result, the optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic, or more generally incomplete, search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps by formulating accurate models which account for a number of "non-idealities" of real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.

Energy-Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor

Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, and gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and on the pixel matrix driving circuits, and is typically proportional to the panel area. As a result, its contribution is also likely to be considerable in future mobile appliances.
To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others aim at decreasing the backlight level while compensating the resulting luminance reduction, and thus the perceived quality degradation, using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation presents an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications.

Thesis Overview

The remainder of the thesis is organized as follows. The first part focuses on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs, based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined, stream-oriented applications on top of distributed-memory architectures with messaging support.
We tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework that solves the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while Chapter 6 takes applications with conditional task graphs into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, summarizing the main research contributions discussed throughout this dissertation.
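The backlight autoregulation idea summarized in this abstract can be sketched in a few lines. The following is a software-only illustration with invented pixel values and backlight levels, not the hardware-assisted implementation developed in the thesis: the backlight is dimmed by a factor b, pixels are scaled by 1/b (with clipping) to preserve perceived luminance, and b is chosen so that the fraction of saturated pixels stays under a quality budget.

```python
# Illustrative sketch of dynamic backlight scaling with pixel compensation.
# Dimming the backlight by a factor b darkens the image; multiplying pixel
# values by 1/b (with clipping) restores perceived luminance except for
# the pixels that saturate at the top of the 8-bit range.

def compensate(frame, b):
    """Scale 8-bit pixel values to compensate a backlight level b in (0, 1]."""
    return [min(255, round(p / b)) for p in frame]

def choose_backlight(frame, clip_budget=0.05):
    """Pick the lowest backlight level that saturates at most clip_budget
    of the pixels, so the QoS loss stays bounded."""
    for b in (0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
        clipped = sum(p / b > 255 for p in frame) / len(frame)
        if clipped <= clip_budget:
            return b
    return 1.0

frame = [20, 40, 60, 80, 100, 120]   # a dark frame: large savings possible
b = choose_backlight(frame)
print(b, compensate(frame, b))
```

Since backlight power is roughly proportional to its level, a dark frame like the one above would allow the backlight to run at half power with no clipped pixels at all.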


Self-organisation is increasingly being regarded as an effective approach to tackle the complexity of modern systems. The self-organisation approach allows the development of systems that exhibit complex dynamics and adapt to environmental perturbations without requiring complete knowledge of the future surrounding conditions. However, the development of self-organising systems (SOS) is driven by principles different from those of traditional software engineering. For instance, engineers typically design systems by combining smaller elements, where the composition rules depend on the reference paradigm but typically produce predictable results. Conversely, SOS display non-linear dynamics, which can hardly be captured by deterministic models, and, although robust with respect to external perturbations, are quite sensitive to changes in their inner working parameters. In this thesis, we describe methodological aspects concerning the early design stage of SOS built on the multiagent paradigm: in particular, we refer to the A&A metamodel, where MAS are composed of agents and artefacts, i.e. environmental resources. We then describe an architectural pattern extracted from a recurrent solution in the design of self-organising systems: this pattern is based on a MAS environment formed by artefacts, modelling non-proactive resources, and environmental agents acting on the artefacts so as to enable self-organising mechanisms. In this context, we propose a scientific approach for the early design stage of the engineering of self-organising systems: the process is iterative, and each cycle is articulated in four stages: modelling, simulation, formal verification, and tuning. During the modelling phase we mainly rely on the existence of a self-organising strategy observed in Nature and, possibly, already encoded as a design pattern.
Simulations of an abstract system model are used to drive design choices until the required quality properties are obtained, thus providing guarantees that the subsequent design steps will lead to a correct implementation. However, system analysis based exclusively on simulation results does not provide sound guarantees for the engineering of complex systems: to this purpose, we envision the application of formal verification techniques, specifically model checking, in order to characterise the system behaviours exactly. During the tuning stage, parameters are tweaked in order to meet the target global dynamics and the feasibility constraints. In order to evaluate the methodology, we analysed several systems: in this thesis we describe only three of them, i.e. the most representative ones for each of the three years of the PhD course. We analyse each case study using the presented method and describe the formal tools and techniques exploited.


In fluid dynamics research, pressure measurements are of great importance for defining the flow field acting on aerodynamic surfaces; indeed, the experimental approach is fundamental to avoid the complexity of the mathematical models used for predicting fluid phenomena. It is important to note that, when using in-situ sensors to monitor pressure over large domains with highly unsteady flows, several problems are encountered with classical techniques, due to transducer cost, intrusiveness, time response and operating range. An interesting approach for satisfying these sensor requirements is to implement a sensor network capable of acquiring pressure data on an aerodynamic surface, using a wireless communication system able to collect the pressure data with the lowest possible level of environmental invasiveness. In this thesis a wireless sensor network for fluid-field pressure has been designed, built and tested. To develop the system, a capacitive pressure sensor, based on a polymeric membrane, and its read-out circuitry, based on a microcontroller, have been designed, built and tested. Wireless communication is performed using the Zensys Z-Wave platform, and network and data management have been implemented. Finally, the full embedded system with antenna has been created. As a proof of concept, the monitoring of pressure on the top of the mainsail of a sailboat was chosen as a working example.
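As an illustration of how a membrane capacitive pressure sensor of this kind can be read out, the sketch below uses a parallel-plate model with a linear membrane compliance; all parameter values and the small-signal model itself are assumptions for the example, not the thesis design:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

# Hypothetical sensor parameters: plate area, rest gap, and an assumed
# linear membrane compliance (gap change per pascal of applied pressure).
AREA = 1e-6         # m^2
GAP0 = 5e-6         # m
COMPLIANCE = 1e-11  # m/Pa

def capacitance(pressure_pa):
    """Capacitance of the sensor under a uniform pressure, assuming the
    membrane deflection is linear in pressure (small-signal model)."""
    gap = GAP0 - COMPLIANCE * pressure_pa
    return EPS0 * AREA / gap

def pressure_from_capacitance(c):
    """Invert the small-signal model: recover pressure from a reading,
    as the read-out firmware on the microcontroller would."""
    gap = EPS0 * AREA / c
    return (GAP0 - gap) / COMPLIANCE

c = capacitance(100.0)  # reading with 100 Pa applied
print(round(pressure_from_capacitance(c), 6))
```

In practice the read-out electronics measure capacitance indirectly (e.g. via an oscillator frequency or a charge-transfer circuit) and the pressure map would be obtained from a calibration curve rather than from nominal geometry, but the inversion step has this shape.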


A machining centre is nowadays a complex mechanical, electronic and electrical system whose integrated design very often requires a highly time-consuming effort. Numerical techniques for designing and dimensioning the machine structure and components usually require different knowledge according to the subsystem to be designed. This PhD thesis describes the author's efforts to develop a system that allows the complete design of a new machine, optimized in its dynamic behaviour. An integration of the different subsystems developed, each of which responds to specific needs of the designer, is presented here. In particular, a dynamic analysis system based on a lumped-mass approach, which rapidly allows the machine drives to be set up, and an Integrated Dynamic Simulation System based on a FEM approach, which permits dynamic optimization, are shown. A multilevel database and an operator interface module complete the design platform. The proposed approach represents a significant step toward virtual machining for the prediction of the quality of the machined surface.
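As an example of the kind of computation a lumped-mass dynamic analysis performs, the sketch below finds the natural frequencies of a hypothetical two-mass model of a machine drive (the masses and stiffnesses are illustrative values, not taken from the thesis):

```python
import math

# Toy two-mass lumped model: mass m1 connected to ground through k1 and
# to mass m2 through k2. The natural frequencies are the roots of
# det(K - w^2 M) = 0 with M = diag(m1, m2), K = [[k1+k2, -k2], [-k2, k2]].
m1, m2 = 2.0, 1.0        # kg
k1, k2 = 8.0e4, 2.0e4    # N/m

# Expanding the 2x2 determinant gives a quadratic in lam = w^2:
# m1*m2*lam^2 - (m1*k2 + m2*(k1 + k2))*lam + k1*k2 = 0
a = m1 * m2
b = -(m1 * k2 + m2 * (k1 + k2))
c = k1 * k2
disc = math.sqrt(b * b - 4 * a * c)
freqs_hz = sorted(math.sqrt(lam) / (2 * math.pi)
                  for lam in ((-b - disc) / (2 * a), (-b + disc) / (2 * a)))
print(freqs_hz)  # two natural frequencies, in Hz
```

Comparing such frequencies against the bandwidth of the drive controllers is precisely the kind of check that lets the designer "set up the drives of the machine" quickly before moving to the full FEM model.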


CHAPTER 1: FLUID-VISCOUS DAMPERS

In this chapter fluid-viscous dampers are introduced. The first section focuses on the technical characteristics of these devices, their mechanical behavior and the latest evolution of the technology with which they are equipped. In the second section we report the definitions and guidelines on the design of these devices included in some international codes. In the third section the results of experimental tests carried out by several authors on the response of these devices to external forces are discussed; for this purpose we report some technical data sheets of the kind usually enclosed with the devices now available on the international market, and we also present some analytical models, proposed by various authors, which are able to describe the physical behavior of fluid-viscous dampers efficiently. In the last section we present some cases of application of these devices to existing structures and to new constructions, as well as some cases in which they have proven useful for purposes beyond the reduction of seismic actions on structures.

CHAPTER 2: DESIGN METHODS PROPOSED IN THE LITERATURE

In this chapter the most widespread design methods proposed in the literature for structures equipped with fluid-viscous dampers are introduced. In the first part the response of SDOF systems to a harmonic external force is studied; in the last part the response to a random external force is discussed. In the first section the equations of motion of an elastic-linear SDOF system equipped with a non-linear fluid-viscous damper undergoing a harmonic force are introduced. This differential problem is analytically quite complex and cannot be solved in closed form; therefore some authors have proposed approximate solution methods. The most widespread methods are based on equivalence principles between the non-linear device and an equivalent linear one.
Operating in this way it is possible to define an equivalent damping ratio, and the problem becomes linear; the solution of the equivalent problem is well known. In the following section two linearization techniques proposed in the literature are described: the first is based on the equivalence of the energy dissipated by the two devices, the second on the equivalence of power consumption. We then compare the two techniques by studying the response of an SDOF system undergoing a harmonic force. By introducing the equivalent damping ratio we can write the equation of motion of the non-linear differential problem in implicit form, dividing, as usual, by the mass of the system; in this way we reduce the number of variables by introducing the natural frequency of the system. The equation of motion written in this form has two important properties: the response depends linearly on the amplitude of the external force, and it depends only on the ratio between the frequency of the external harmonic force and the natural frequency of the system, not on their individual values. In the last section all these considerations are extended to the case of a random external force.

CHAPTER 3: THE PROPOSED DESIGN METHOD

In this chapter the theoretical basis of the proposed design method is introduced. The need for a new design method for structures equipped with fluid-viscous dampers arises from the observation that the methods reported in the literature are always iterative, because the response affects some of the parameters that appear in the equation of motion (such as the equivalent damping ratio). In the first section the dimensionless parameter ε is introduced. This parameter is obtained from the definition of the equivalent damping ratio, and the implicit form of the equation of motion is rewritten in terms of ε instead of the equivalent damping ratio.
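For reference, the energy-based linearization mentioned above is commonly written as follows (a standard result in the linearization literature, not a formula specific to this thesis; here C and α are the damper constant and velocity exponent, X and ω the amplitude and frequency of the steady-state harmonic response, m the mass and ω_n the natural frequency of the SDOF system):

```latex
% Non-linear damper force and energy dissipated per cycle
% under x(t) = X \sin(\omega t):
F_D = C\,\lvert\dot{x}\rvert^{\alpha}\,\operatorname{sgn}(\dot{x}),
\qquad
E_D = \lambda\, C\, \omega^{\alpha} X^{1+\alpha},
\qquad
\lambda = 2^{2+\alpha}\,
\frac{\Gamma^{2}\!\left(1+\tfrac{\alpha}{2}\right)}{\Gamma(2+\alpha)}.

% Equating E_D to the energy \pi C_{eq} \omega X^2 dissipated per cycle
% by a linear damper gives the equivalent coefficient and damping ratio:
C_{\mathrm{eq}} = \frac{\lambda\, C\,(\omega X)^{\alpha-1}}{\pi},
\qquad
\xi_{\mathrm{eq}} = \frac{C_{\mathrm{eq}}}{2\, m\, \omega_n}.
```

Note that for α = 1 the constant λ reduces to π and C_eq = C, as it must; for α < 1 the equivalent damping ratio depends on the response amplitude X, which is exactly the circularity that makes the literature methods iterative.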
This new implicit equation of motion has no terms affected by the response, so that once ε is known the response can be evaluated directly. The second section discusses how the parameter ε affects some characteristics of the response: drift, velocity and base shear. All the results described up to this point are obtained while retaining the non-linear behavior of the dampers. In order to obtain a linear formulation of the problem, which can be solved using the well-known methods of structural dynamics, as was done for the iterative methods by introducing the equivalent damping ratio, it is then shown how the equivalent damping ratio can be evaluated from the value of ε. Operating in this way, once the parameter ε is known it is quite easy to estimate the equivalent damping ratio and to proceed with a classic linear analysis. The last section shows how the parameter ε can be taken as a reference for evaluating the convenience of using non-linear rather than linear dampers, on the basis of the type of external force and the characteristics of the system.

CHAPTER 4: MULTI-DEGREE-OF-FREEDOM SYSTEMS

In this chapter design methods for an elastic-linear MDOF system equipped with non-linear fluid-viscous dampers are introduced. It has already been shown that, for SDOF systems, the response of the structure can be evaluated through the estimation of the equivalent damping ratio (ξsd), assuming elastic-linear behavior of the structure. We note that adjustment coefficients, to be applied to the equivalent damping ratio in order to account for the actual (non-linear) behavior of the structure, have already been proposed in the literature; such coefficients are usually expressed in terms of ductility, but their treatment is beyond the aims of this thesis and we do not discuss them further.
The method usually proposed in the literature is based on energy equivalence: even though this procedure has a solid theoretical basis, it must necessarily include an iterative process, because the expression of the equivalent damping ratio contains a term of the response. This procedure was introduced primarily by Ramirez, Constantinou et al. in 2000; it is reported in the first section and referred to as the "Iterative Method". Following the guidelines for SDOF systems reported in the previous chapters, a procedure for the assessment of the parameter ε in the case of MDOF systems is introduced. Operating in this way, the equivalent damping ratio (ξsd) can be evaluated directly, without iterative processes. This procedure is referred to as the "Direct Method" and is reported in the second section. In the third section the two methods are compared by studying four cases of two moment-resisting steel frames subjected to real accelerograms: the response of the system calculated using the two methods is compared with the numerical response obtained with the software SAP2000-NL, a CSI product. In the last section a procedure is introduced to create spectra of the equivalent damping ratio, as a function of the parameter ε and of the natural period of the system for a fixed value of the exponent α, starting from the elastic response spectra provided by any international code.


An amperometric glucose biosensor was developed using an anionic clay matrix (LDH) as the enzyme support. The enzyme glucose oxidase (GOx) was immobilized on a layered double hydroxide Ni/Al-NO3 LDH during electrosynthesis, followed by crosslinking with glutaraldehyde (GA) vapours or with GA and bovine serum albumin (GA-BSA) to avoid enzyme release. The electrochemical reaction was carried out potentiostatically, at -0.9 V vs. SCE, using a rotating-disc Pt electrode to ensure the homogeneity of the electrodeposition suspension, which contained GOx, Ni(NO3)2 and Al(NO3)3 in 0.3 M KNO3. The mechanism responsible for the LDH electrodeposition involves the precipitation of the LDH due to the increase of pH at the electrode surface following the cathodic reduction of nitrates. The Pt surface modified with the Ni/Al-NO3 LDH shows much reduced noise, giving rise to a better signal-to-noise ratio for the currents relative to H2O2 oxidation, and to a wider linear range for H2O2 determination than that observed for bare Pt electrodes. We characterized the performance of the biosensor in terms of sensitivity to glucose, calculated from the slope of the linear part of the calibration curve for enzymatically produced H2O2; the sensitivity depended on parameters related to the electrodeposition as well as on the working conditions. In order to optimise the glucose biosensor performance with a reduced number of experimental runs, we applied an experimental design. A first screening was performed considering the following variables: deposition time (30-120 s), enzyme concentration (0.5-3.0 mg/mL), Ni/Al molar ratio (3:1 or 2:1) of the electrodeposition solution at a total metal concentration of 0.03 M, and pH of the working buffer solution (5.5-7.0). On the basis of the results of this screening, a full factorial design was carried out, taking into account only the enzyme concentration and the Ni/Al molar ratio of the electrosynthesis solution.
The full factorial design was used to study linear interactions between factors and their quadratic effects, and the optimal setup was evaluated from the isoresponse curves. The significant factors were the enzyme concentration (linear and quadratic terms) and the interaction between enzyme concentration and Ni/Al molar ratio. Since the major obstacle to the application of amperometric glucose biosensors is the interference signal resulting from other electro-oxidizable species present in real matrices, such as ascorbate (AA), the use of different permselective membranes on the Pt-LDH-GOx modified electrode is discussed, with the aim of improving biosensor selectivity and stability. Conventional membranes obtained using Nafion, glutaraldehyde (GA) vapours and GA-BSA were tested together with more innovative materials such as palladium hexacyanoferrate (PdHCF) and titania hydrogels. Particular attention has been devoted to hydrogels, because they possess attractive features which are generally considered to favour the biocompatibility of biosensor materials and, consequently, the functional stability of the enzyme. The Pt-LDH-GOx-PdHCF hydrogel biosensor showed an anti-interference ability sufficient for accurate glucose analysis in blood. To further improve the biosensor selectivity, protective membranes containing horseradish peroxidase (HRP) were also investigated, with the aim of oxidising the interferents before they reach the electrode surface; in this case glucose determination was also accomplished in real matrices with high AA content. Furthermore, an LDH containing nickel in the oxidised state was employed not only as a support for the enzyme but also as an anti-interference system. The result is very promising and could be the starting point for further applications in the field of amperometric biosensors; the study could be extended to other oxidase enzymes.
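The sensitivity discussed in this abstract, i.e. the slope of the linear part of the calibration curve, is an ordinary least-squares slope; the sketch below shows the computation on invented concentration/current pairs (the numbers are for illustration only, not measured data):

```python
# Illustrative calibration data: glucose concentration (mM) versus
# steady-state current (uA) in the linear range of a biosensor.
conc = [1.0, 2.0, 3.0, 4.0, 5.0]
curr = [0.8, 1.6, 2.4, 3.2, 4.0]

def slope(x, y):
    """Least-squares slope of y against x, i.e. the sensitivity in uA/mM."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

print(slope(conc, curr))  # sensitivity in uA/mM
```

In a real optimisation study this slope would be the response variable fed to the factorial design, recomputed for each combination of enzyme concentration and Ni/Al molar ratio.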


This dissertation deals with the problems and opportunities of a semiotic approach to perception. Is perception, seen as the ability to detect and articulate a coherent picture of the surrounding environment, describable in semiotic terms? Is it possible, for a discipline wary of any attempt to reduce semiotic meaning to a psychological, naturalized issue, to come to terms with the cognitive, automatic and genetically hard-wired specifics of our perceptive systems? In order to deal with perceptive signs, is it necessary to modify basic assumptions in semiotics, or can we simply extend the range of our conceptual instruments and definitions? And what if perception is a wholly different semiotic machinery, to be considered sui generis, but nonetheless interesting for a general theory of semiotics? By presenting the major ideas put forward by the main thinkers in the semiotic field, Mattia de Bernardis gives a comprehensive picture of the theoretical situation, adding to the classical dichotomy between structuralist and interpretative semiotics another distinction, that between homogeneist and heterogeneist theories of perception. Homogeneist semioticians see perception as one of many semiotic means of sign production, entirely similar to the others, while heterogeneist semioticians consider perceptive meaning as essentially different from ordinary semiotic meaning, so much so that it requires new methods and ideas to be analyzed. The main example of the heterogeneist approach to perception in the semiotic literature, Umberto Eco's "primary semiosis", is then presented, critically examined and eventually rejected, and the homogeneist stance is affirmed as the most promising path towards a semiotic theory of perception.


This thesis aims to analyse a territorial district from the point of view of settlement, seeking to grasp its peculiarities and its long-term changes through the combined use of written and archaeological sources. The research began with the analysis of the Saltopiano, one of the rural districts found in the sources between the 9th and 12th centuries, already addressed by historiography in the past, especially in relation to the institutional organization of rural areas between Longobardia and Romania during the early medieval centuries. Through the examination of the published written sources we sought to reconstruct the picture of the territorial organization, starting from an analysis of the lay and ecclesiastical centres of power that had directed their patrimonial and political interests towards this area, and presenting analytically the data that provide direct indications about the settlement organization and hence about the socio-economic management of the territory. The analysis highlights the character of a loosely knit rural settlement, organized into fundi, with important centralizing poles such as pievi (baptismal churches), castra and vici, and with the co-presence, although limited to a few significant examples, of other forms of organization such as the curtis and the massa. As the study of the territory proceeded diachronically, first the disappearance of references to the Saltopiano and then the progressive conquest of the countryside by the Commune of Bologna required a genuine change in the analytical approach. Space was given to the analysis of unpublished archival collections (preserved mainly at the State Archive of Bologna) specifically linked to the territory under study. First of all, the estimi (tax assessments) of the countryside (Galliera and Massumatico) were examined, a source already used in the past by other scholars, mainly from a demographic and economic point of view.

In this specific case, the data that concretely describe the organization of the territory were extracted from the first fiscal surveys of 1235 and 1245 and then from those of the early fourteenth century. Starting from the reflections of earlier studies, which had considered the fundamental entry of important city families into the increasingly extensive management of agricultural assets in the countryside, the analysis of another unpublished collection was undertaken, that of the registers of the Vicariate of Galliera (in particular those concerning the reporting of "danni dati", damages to agricultural properties), from which the presence of families such as the Guastavillani, the Caccianemici and the Lambertini clearly emerges. These data, together with those drawn from the estimi, provided essential elements for understanding the rural territory as a whole and the relationships of interdependence between its different social components. A third part of the thesis is devoted entirely to the analysis of the material sources that provide data for the study of medieval settlement in the territory between the present-day municipalities of S. Pietro in Casale and Galliera. Starting from some preliminary investigations carried out in the 1990s, an archaeological research project was set up, articulated in two field-survey campaigns and a first excavation campaign, by means of trial trenches, at the tower of Galliera, in order to obtain first-hand data in an area almost unexplored from the archaeological point of view. Despite the practical limits encountered, owing to the heavily alluvial terrain, specific data were collected that help to frame this district and to compare it with other areas of the region, in particular of the Bolognese comitatus studied in similar research, highlighting its specific features and characterizations.

Moreover, some important material survivals (a system of towers of which a few examples are still preserved standing) have shed light on the commercial and hence strategic value of the area, above all in relation to the passage of goods along a river route that was fundamental between the 13th and 14th centuries in connecting Ferrara to Bologna.


Questa tesi riguarda l'analisi delle trasmissioni ad ingranaggi e delle ruote dentate in generale, nell'ottica della minimizzazione delle perdite di energia. È stato messo a punto un modello per il calcolo della energia e del calore dissipati in un riduttore, sia ad assi paralleli sia epicicloidale. Tale modello consente di stimare la temperatura di equilibrio dell'olio al variare delle condizioni di funzionamento. Il calcolo termico è ancora poco diffuso nel progetto di riduttori, ma si è visto essere importante soprattutto per riduttori compatti, come i riduttori epicicloidali, per i quali la massima potenza trasmissibile è solitamente determinata proprio da considerazioni termiche. Il modello è stato implementato in un sistema di calcolo automatizzato, che può essere adattato a varie tipologie di riduttore. Tale sistema di calcolo consente, inoltre, di stimare l'energia dissipata in varie condizioni di lubrificazione ed è stato utilizzato per valutare le differenze tra lubrificazione tradizionale in bagno d'olio e lubrificazione a “carter secco” o a “carter umido”. Il modello è stato applicato al caso particolare di un riduttore ad ingranaggi a due stadi: il primo ad assi paralleli ed il secondo epicicloidale. Nell'ambito di un contratto di ricerca tra il DIEM e la Brevini S.p.A. di Reggio Emilia, sono state condotte prove sperimentali su un prototipo di tale riduttore, prove che hanno consentito di tarare il modello proposto [1]. Un ulteriore campo di indagine è stato lo studio dell’energia dissipata per ingranamento tra due ruote dentate utilizzando modelli che prevedano il calcolo di un coefficiente d'attrito variabile lungo il segmento di contatto. I modelli più comuni, al contrario, si basano su un coefficiente di attrito medio, mentre si può constatare che esso varia sensibilmente durante l’ingranamento. 
In particular, since the literature does not report how efficiency varies for profile-shifted gears, attention was focused on the energy dissipated in the gears as the profile shift varies. This study is reported in [2]. Research was also carried out on the operation of screw-nut linear actuators. The mechanisms that determine the wear conditions of the screw-nut coupling in linear actuators were studied, with particular reference to the thermal aspects of the phenomenon. It was found that the contact temperature between screw and nut is the most critical parameter in the operation of these actuators. Through experimental testing, a law was found that, given pressure, speed and duty factor, estimates the operating temperature. An interpretation of this empirical law was given on the basis of known theoretical models. This study was carried out within a research contract between DIEM and Ognibene Meccanica S.r.l. of Bologna and is published in [3].
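The thermal balance underlying such a model can be sketched in a few lines: at steady state, the power dissipated inside the unit equals the heat exchanged with the surroundings through the housing. The following Python fragment is only an illustrative sketch of that idea; the function name, the convection coefficient and all numeric values are hypothetical and are not taken from the thesis.

```python
def equilibrium_oil_temperature(p_input_w, efficiency, h_w_m2k, area_m2, t_ambient_c):
    """Steady-state oil temperature from a simple heat balance:
    dissipated power equals heat exchanged through the housing,
    P_loss = h * A * (T_oil - T_amb)."""
    p_loss = p_input_w * (1.0 - efficiency)            # dissipated power [W]
    return t_ambient_c + p_loss / (h_w_m2k * area_m2)  # T_oil [degC]

# Illustrative numbers only: 50 kW through a 96 %-efficient compact unit,
# h = 12 W/(m^2 K), housing surface 1.4 m^2, ambient 25 degC.
t_oil = equilibrium_oil_temperature(50_000, 0.96, 12.0, 1.4, 25.0)
```

In a full model the efficiency itself depends on speed, load and lubrication regime, so the equilibrium temperature is found iteratively rather than in closed form as above.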


In this work we introduce an analytical approach to the frequency warping transform. Criteria for the design of operators based on arbitrary warping maps are provided, and an algorithm carrying out a fast computation is defined. Such operators can be used to shape the tiling of the time-frequency plane in a flexible way. Moreover, they are designed to be inverted by the application of their adjoint operator. According to the proposed mathematical model, the frequency warping transform is computed by considering two additive operators: the first represents its nonuniform Fourier transform approximation, while the second suppresses aliasing. The first operator is analytically characterized and can be computed quickly by various interpolation approaches. A factorization of the second operator is found for arbitrarily shaped, non-smooth warping maps. By properly truncating the operators involved in the factorization, the computation turns out to be fast without compromising accuracy.
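As a point of reference for the first of the two operators, a frequency warping transform can be approximated naively by evaluating the spectrum on a warped frequency grid, i.e. a direct O(N²) nonuniform DFT. The sketch below is a hypothetical illustration of that idea only; it omits the aliasing-suppression operator and the fast factorization that are the actual contributions of the work.

```python
import numpy as np

def warped_dft(x, warp):
    """Naive O(N^2) nonuniform DFT: evaluate the spectrum of x at the
    warped normalized frequencies warp(k/N) instead of the uniform grid
    k/N. `warp` is any map from [0, 1) to [0, 1)."""
    n = len(x)
    k = np.arange(n)
    freqs = warp(k / n)
    # sum_m x[m] * exp(-2j*pi*f*m), evaluated at each warped frequency f
    return np.exp(-2j * np.pi * np.outer(freqs, k)) @ x

# Sanity check: the identity map reduces to the ordinary DFT.
x = np.random.default_rng(0).standard_normal(64)
assert np.allclose(warped_dft(x, lambda f: f), np.fft.fft(x))
```

The fast algorithms referred to in the abstract replace this dense matrix product with interpolation-based approximations plus an aliasing-correction term.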


Chemists have long sought to extrapolate the power of biological catalysis and recognition to synthetic systems. These efforts have focused largely on low-molecular-weight catalysts and receptors; however, biological systems themselves rely almost exclusively on polymers, proteins and RNA, to perform complex chemical functions. Proteins and RNA are unique in their ability to adopt compact, well-ordered conformations, and specific folding provides precise spatial orientation of the functional groups that comprise the "active site". These features suggest that the identification of new polymer backbones with discrete and predictable folding propensities ("foldamers") will provide a basis for the design of molecular machines with unique capabilities. The foldamer approach complements current efforts to design unnatural properties into polypeptides and polynucleotides. The aim of this thesis is the synthesis and conformational study of new classes of foldamers, using a peptidomimetic approach. Moreover, their suitability for use as ionophores, catalysts, and nanobiomaterials was analyzed in solution and in the solid state. This thesis is divided into thematic chapters, described below. It begins with a very general introduction (page 4) which is useful, but not strictly necessary, to the expert reader. It is worth mentioning that paragraph I.3 (page 22) is the starting point of this work and paragraph I.5 (page 32) is required to better understand the results of chapters 4 and 5. Chapter 1 (page 39) reports the synthesis and conformational analysis of a novel class of foldamers containing (S)-β3-homophenylglycine [(S)-β3-hPhg] and D-4-carboxy-oxazolidin-2-one (D-Oxd) residues in alternating order. The experimental conformational analysis performed in solution by IR, 1H NMR, and CD spectroscopy unambiguously proved that these oligomers fold into ordered structures with increasing sequence length.
Theoretical calculations employing ab initio MO theory suggest a helix with 11-membered hydrogen-bonded rings as the preferred secondary structure type. The novel structures enrich the field of peptidic foldamers and might be useful in the mimicry of native peptides. In chapter 2, cyclo-(L-Ala-D-Oxd)3 and cyclo-(L-Ala-D-Oxd)4 were prepared in the liquid phase in good overall yields and were used for the chelation of divalent ions (Ca2+, Mg2+, Cu2+, Zn2+ and Hg2+); their chelation ability was analyzed by ESI-MS, CD and 1H NMR techniques, and the best results were obtained with cyclo-(L-Ala-D-Oxd)3 and Mg2+ or Ca2+. Chapter 3 describes an application of oligopeptides as catalysts for aldol reactions. Paragraph 3.1 concerns the use of prolinamides as catalysts of the cross-aldol addition of hydroxyacetone to aromatic aldehydes, whereas paragraphs 3.2 and 3.3 concern the catalyzed aldol addition of acetone to isatins. By means of DFT and AIM calculations, the steric and stereoelectronic effects that control the enantioselectivity in the cross-aldol addition of acetone to isatin catalysed by L-proline have been studied, also in the presence of small quantities of water. Chapter 4 reports the synthesis and analysis of a new fiber-like material, obtained from the self-aggregation of the dipeptide Boc-L-Phe-D-Oxd-OBn, which spontaneously forms uniform fibers consisting of parallel infinite linear chains arising from single intermolecular N-H···O=C hydrogen bonds. This is the limiting case of a parallel β-sheet structure. Longer oligomers of the same series, with general formula Boc-(L-Phe-D-Oxd)n-OBn (where n = 2-5), are described in chapter 5. Their properties in solution and in the solid state were analyzed, in correlation with their tendency to form intramolecular hydrogen bonds.
Chapter 6 reports the synthesis of imidazolidin-2-one-4-carboxylate and (tetrahydro)pyrimidin-2-one-5-carboxylate via an efficient modification of the Hofmann rearrangement. The reaction affords the desired compounds from protected asparagine or glutamine in good to high yields, using PhI(OAc)2 as the source of iodine(III).


Water distribution network optimization is a challenging problem due to the size and complexity of these systems. Since the second half of the twentieth century this field has been investigated by many authors. Recently, to overcome the discrete nature of the variables and the nonlinearity of the equations, research has focused on the development of heuristic algorithms. These algorithms do not require continuity and linearity of the problem functions because they are linked to an external hydraulic simulator that solves the mass-continuity and energy-conservation equations of the network. In this work, NSGA-II (Non-dominated Sorting Genetic Algorithm II) has been used. This is a heuristic multi-objective genetic algorithm based on the analogy of evolution in nature. Starting from an initial random set of solutions, called the population, it evolves them towards a front of solutions that minimize, separately and simultaneously, all the objectives. This can be very useful in practical problems, where multiple and conflicting goals are common. Usually, one of the main drawbacks of these algorithms is their computational cost: being a stochastic search, many solutions must be analyzed before good ones are found. The results of this thesis on the classical optimal-design problem show that it is possible to improve results by modifying the mathematical definition of the objective functions and the survival criterion, by inserting good solutions created by a cellular automaton, and by using rules created by a classifier algorithm (C4.5). This part was tested using the version of NSGA-II supplied by the Centre for Water Systems (University of Exeter, UK) in the MATLAB® environment. Even if steering the search can constrain the algorithm, with the risk of not finding the optimal set of solutions, it can greatly improve the results.
Subsequently, with the help of CINECA, a version of NSGA-II was implemented in the C language and parallelized: the results on global parallelization show the speed-up, while the results on island parallelization show that communication among islands can improve the optimization. Finally, some tests on the optimization of pump scheduling were carried out. In this case, good results were found for a small network, while the solutions of a large problem were affected by the lack of constraints on the number of pump switches. Possible future research concerns the insertion of further constraints and the guiding of the evolution. In the end, the optimization of water distribution systems is still far from a definitive solution, but progress in this field can be very useful in reducing the cost of solutions to practical problems, where the high number of variables makes their management very difficult from a human point of view.
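The core of NSGA-II's selection step, the extraction of non-dominated fronts, can be illustrated with a minimal sketch. This is hypothetical textbook code, not the Exeter implementation used in the thesis; the two-objective example values are invented.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(population):
    """Return the non-dominated subset of the population (NSGA-II's first front)."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Two invented objectives, e.g. (pipe cost, head deficit); lower is better.
pop = [(5, 1), (3, 4), (4, 2), (6, 0), (7, 3)]
front = first_front(pop)   # (7, 3) is dominated by (5, 1) and excluded
```

The full algorithm repeats this ranking on the remaining solutions to build successive fronts, then breaks ties within a front by crowding distance.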


Nowadays, computing is migrating from traditional high-performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on fast communication of heterogeneous contents. The success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be an aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service. Optical networking is a key technology for answering the increasing requests for dynamic bandwidth allocation and for configuring multiple topologies over the same physical-layer infrastructure; however, optical networks today are still far from being directly accessible for configuring and offering network services, and need to be enriched with more user-oriented functionalities. Current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning and certainly cannot meet future network service requirements, e.g. the coordinated control of resources. The overall objective of this work is to improve the usability and accessibility of the services provided by the optical network. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer. The definition of a service-oriented networking architecture based on advanced optical network technologies facilitates user and application access to abstracted levels of information regarding the offered advanced network services. This thesis faces the problem of defining a Service Oriented Architecture and its relevant building blocks, protocols and languages.
In particular, this work has focused on the use of the SIP protocol as an inter-layer signalling protocol, which defines the Session Plane in conjunction with the Network Resource Description language. On the other hand, an advanced optical network must accommodate high data bandwidth with different granularities. Currently, two main technologies are emerging to promote the development of the future optical transport network: Optical Burst Switching and Optical Packet Switching. The two technologies respectively promise to provide all-optical burst or packet switching instead of the current circuit switching. However, the electronic domain is still present in the scheduler's forwarding and routing decisions. Because of the high optical transmission rate, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance, timing-focused design of both the memory and the forwarding logic is needed. This open issue has been faced in this thesis by proposing a highly efficient implementation of a burst and packet scheduler. The main novelty of the proposed implementation is that the scheduling problem is turned into the simple calculation of a min/max function, whose complexity is almost independent of the traffic conditions.
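As an illustration of how burst scheduling can reduce to a min/max computation, the sketch below implements a LAUC-style ("latest available unused channel") selection, in which each wavelength channel is summarized by a single horizon value and the choice is a max over the feasible horizons. This is a generic textbook scheme assumed here for illustration; the thesis' actual hardware scheduler is not reproduced.

```python
def schedule_burst(horizons, arrival):
    """LAUC-style channel selection. `horizons[ch]` is the time at which
    channel ch becomes free. Among channels free before the burst arrives
    (horizon <= arrival), pick the one with the *latest* horizon, which
    leaves the smallest unused gap. Returns a channel index, or None if
    no channel is free (burst drop)."""
    best = None
    for ch, h in enumerate(horizons):
        if h <= arrival and (best is None or h > horizons[best]):
            best = ch
    return best

# Four channels, busy until the given times; a burst arrives at t = 8.0.
horizons = [4.0, 9.5, 7.2, 11.0]
ch = schedule_burst(horizons, arrival=8.0)   # channel 2 (horizon 7.2)
```

The point made in the abstract is that this whole selection collapses into one min/max evaluation over the per-channel horizons, which is why its cost is nearly insensitive to traffic conditions.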


The experience of the void, essential to the production of forms and to their use, can be considered the basis of the activities involved in formative processes. Void and matter constitute the basic substances of architecture; their role in the definition of form, its symbolic value and its constructive methods define the quality of space. This work investigates the character of space in the architecture of Moneo, interpreting the meaning of the void in Basque culture through a reading of the formal matrices in the work of Jorge Oteiza and Eduardo Chillida. The tie with Basque culture offers a reading key that makes it possible to relate some of the theoretical principles expressed by Moneo on the relationship between place and time within a unique and specific vision of space. In the analysis of the process that determines the genesis of Moneo's architecture, a trajectory emerges whose direction is built on two pivots: on the one hand, architecture as an instrument of appropriation of the place, springing from a process of knowledge that rests on the reading of the relations that define the place and of the resonances through which it can be measured; on the other, an architecture whose character is able to represent and extend the time in which it is conceived, through the autonomy conferred on it by its values. Following the trace identified by this hypothesis, which rests on the theories elaborated by Moneo, the investigation deepens the reading of the principles underlying the sculptural work of Oteiza and Chillida, marked by a search around the theme of the void and its expression through form. This reading is instrumental to the definition of a specific field that allows the character of space underlying a vision of place and time to be interpreted, one akin to Moneo's sensibility and in some way not foreign to his cultural formation.
The years of his academic formation, during which Moneo came into contact with Basque artistic culture, appear to be an important period in the birth of the knowledge that would lead him to formulate his theories on the relationship between time, place and architecture. The values expressed through the experimental work of Oteiza and Chillida during the 1950s are a valid basis for understanding such relationships. In tracing a profile of the figures of Oteiza and Chillida, without any claim to exhaust the complex historical period in which they worked, but with the need to place their work in context, the important role played by the two artists within the Basque cultural sphere, in which Moneo took his first steps, should be highlighted. The tie that draws Moneo to Basque culture, following the personal trajectory of his formative experience, interweaves with those of important figures of Spanish art and architecture. One of the most meaningful relationships was born precisely during the years of his academic formation, from 1958 to 1961, when he worked as a student in the professional office of the architect Francisco Sáenz de Oiza, who was teaching architectural design at the ETSAM. In those years many Basque artists passed through Oiza's office, which enjoyed the important support of the industrialist and maecenas Juan Huarte Beaumont, introduced to him by Oteiza. The tie between Huarte and Oteiza was solid and continuous over the years, and it took concrete form in contributions to many of the initiatives that made Oteiza a promoter of Basque culture. In the four years of collaboration with Oiza, Moneo had the opportunity to stay in contact with an atmosphere permeated by constant research in the field of plastic art, and with figures directly connected to that atmosphere.
It was a period of great intensity, in production as in the promotion of Basque art. The collective exhibition “Blanco y Negro”, held in 1959 at the Galería Darro in Madrid, was only one of many occasions on which the work of Oteiza and Chillida was exhibited. The end of the Fifties was a period of international acknowledgment for both Chillida and Oteiza. The decade of the Fifties consecrated the hypothesis of a mythical past of the Basque people through the spread of studies carried out in the preceding years. Archaeological discoveries, joining a context already rich in signs of the prehistoric era, consolidated the awareness of a strong cultural identity. Oteiza, like Chillida and other contemporary artists, believed in a cosmogonic conception belonging to the Basques, connected to their matriarchal mythological past. The void, in its meaning of absence, in Basque culture, as in various archaic and oriental religions, is equivalent to spiritual fullness, the essential condition for the revealing of essence. Retracing the archaic origins of Basque culture, the deep meaning emerges that the void assumes as a key element in the religious interpretation of the passage from life to death. The symbology becomes rich in meaningful characters deriving from the fact that it is a chthonic cult: a representation of the earth as the place in which the divine manifests itself, but also as the connection between the divine and the human; and the manipulation of the matter of which the earth is composed is the tangible projection of man's continuous search for God. The search for equilibrium between empty and full, which also characterizes the development of form in architecture, thus assumes in Basque culture a peculiar value that recurs as a constant in a great part of its plastic expressions, which in this context seem to be privileged with respect to other expressive forms.
Oteiza and Chillida developed two original points of view on the representation of the void through form. Both made use of rigorous systems of rules sensitive to physical principles and to the characters of matter. The ultimate aim of Oteiza's construction is the void as the limit of knowledge, as the border between the known and the unknown. This does not mean reducing the sculptural object to a merely allusive dimension, because the void, as a physical and spiritual power, is an active void, possessing the value able to reveal being through the trace of non-being. The void in its transcendental manifestation acts at once universally and particularly, as in the atomic structure of matter, in which on one side it constitutes the inner structure of every atom and on the other it is the necessary condition for the interaction between all atoms. The void can therefore be seen as the field of action that allows relations between forms, but it is also the necessary condition for the very existence of form. In Chillida's construction, the void represents the counterpart that structures matter, inborn in it, the element in whose absence there would be neither variations nor distinctive characters to define the phenomenal variety of the world. Physical laws become the subject of the sculptural representation; the void is the instrument that allows equilibrium to be reached. Chillida dedicated himself to experiencing space through the senses, to perceiving its qualities, to telling of the physical laws that forge matter into form, while form arranges places. From the artistic experience of the two sculptors, those matrices on which they constructed their original lyrical expressions, where the void is the absolute protagonist, can be transposed to the architectural work of Moneo.
A field is thus defined within which the formal matrices drawn from the work of Oteiza and Chillida can be traced in the process of the birth and construction of Moneo's architecture, but also in the relation that this architecture establishes with place and time. The void becomes an instrument for reading constructed space in the relationships that determine its proportions, rhythms and relations. In this way the void allows architectural space to be interpreted and its value to be read, the quality of the spaces constructing it; it acts as an instrument of composition, whose role is to maintain the separation between the elements while bringing the field of relations into evidence. The void is the instrument that serves to characterize the elements present in the composition, related to each other yet distinguished. The meaning of the void therefore pushes the interpretation of architectural composition towards the play of relations between elements that, independent and distinct, are strengthened in their identity. If, on the one hand, the void as a measurable reality allows all dimensional changes, quantifying the relationships between the parts, on the other its dialectic connotation allows the search for the equilibrium that governs such variations. An equilibrium that does not represent a state obtained by applying criteria set up from arbitrary rules, but one that depends on the intimate nature of matter and its embodiment in form. The production of a form, or of a formal system that can be directed to the construction of a building, is indissolubly tied to technique, which is based on knowledge of the formal vocation of matter and of what it can also represent and mean, expressing itself in the characterization of the site. For Moneo, in fact, the space defined by architecture is above all a site, because the essence of the site is based on construction.
When Moneo speaks of the “birth of the idea of the plan” as the essential moment in the process of constructing architecture, he refers to a process whose complexity can only be born of a deepened knowledge of the site, leading to the comprehension of its specificity. This specificity arises from the infinite sum of relations which, for Moneo, is the story of the uniqueness of a site, of its history, of its cultural identity and of the dimensional characters tied to it, beyond the physical characteristics of the site. This vision rests on a solid physical structure of perceptions, distances, orientations and references, which make the process first of all one of knowledge, of appropriation. An appropriation that does not, however, happen as a direct consequence, because no cause-and-effect relationship exists between place and architecture, just as no univocal and exclusive way exists to arrive at the representation of an idea. It is an approach that, through the construction of the place where the architecture acquires its being, searches for an expression of its sense of truth. The proposal of a distinction into the areas of space, matter, spirit and time, answering the issues that scan the topics of Moneo's design research, allows a more immediate reading of the systems underlying his compositional principles, through which the recurrent architectural elements of his design vocabulary are related. From the dialectic between opposites expressed in the duality of form, through the definition of a complex element that can mediate between inside and outside as a real system of exchange, Moneo explores the formal development of the building by deepening the relations that the volume establishes with the site.
From time to time, the invention of a system used to answer the needs of the program and to resolve the dual character of the construction in a single gesture involves a deep knowledge of professional practice. The technical aspect is the essential support to which the construction of the system is indissolubly tied. What arouses interest, therefore, is the search for the criteria and the way of building that can reveal essential aspects of the being of things. The constructive process demands, in fact, knowledge of the formative properties of matter, properties from which spring reflections on the relations that can be born around architecture through the resonance produced by forms. The void, in fact, through form is able to construct the site, establishing a relation of reciprocity. A reciprocity that is determined in the play between empty and full, and of the forms among themselves, with regard to the surroundings, but also with regard to subjective experience. The construction of a background used to amplify what is arranged upon it and to clearly show the relations between the parts, and at the same time able to tie itself to the surroundings by opening the space of vision, is a device that in Moneo's architecture finds one of its most effective applications in the use of the platform as an architectural element. The spiritual force of this architectural gesture lies in its ability to define a place whose design intention is perceived and shared by those who experience it, lived as an instrument for contacting cosmic forces, in a delicate process that leads to equilibrium with them, but in a completely physical way. The principles underlying the construction of form, drawn from the study of the void and of the relations it allows, lead to the expression of human values in the construction of the site. The validity of these principles, however, is tested by time.
Time is what Moneo considers the filter to which every architecture is subjected, and the survival of an architecture, or of any of its formal characters, reveals the validity of the principles that determined it. Thus, in the tie between the spatial and spiritual dimensions, between the material and worldly dimensions, the state of necessity manifests itself which leads, in the construction of architecture, to establishing a contact with the forces of the universe and with the inner world, through a process that translates that necessity into the elaboration of a formal system.