838 results for design model


Relevance: 30.00%

Abstract:

Cutting tools with higher wear resistance are those manufactured by the powder metallurgy process, which combines materials development and design properties with shape-making technology and sintering. The annual global market for cutting tools is worth about US$ 12 billion; therefore, any research that improves tool designs and machining process techniques adds value or reduces costs. The aim is to describe the Spark Plasma Sintering (SPS) of cutting tools in functionally graded materials, to demonstrate the suitability of this structural design through a thermal residual-stress model and, lastly, to present two kinds of inserts. For this, three cutting tool materials were used (Al2O3-ZrO2, Al2O3-TiC and WC-Co). The samples were sintered by SPS at 1300 °C and 70 MPa. The results showed that mechanical and thermal displacements may be separated for analysis during thermal treatment. Moreover, the absence of cracks indicated coherence between the experimental results and the predicted residual stresses.
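To make the residual-stress argument concrete, the sketch below computes a back-of-the-envelope biaxial thermal-mismatch stress between two bonded layers cooling from the sintering temperature. It is not the thesis model: the formula σ = E·Δα·ΔT/(1−ν) and all material constants are assumed, indicative values for an Al2O3-TiC layer on a WC-Co substrate.

```python
# Back-of-the-envelope biaxial thermal-mismatch stress between two bonded
# layers on cooling from the SPS temperature. Illustrative only: material
# constants are indicative handbook-style values, not thesis data.

def mismatch_stress(E_pa, nu, alpha_layer, alpha_substrate, dT):
    """Biaxial stress: sigma = E * (alpha_layer - alpha_substrate) * dT / (1 - nu)."""
    return E_pa * (alpha_layer - alpha_substrate) * dT / (1.0 - nu)

dT = 1300.0 - 25.0        # K, cooling from the sintering temperature
E = 400e9                 # Pa, Young's modulus of the ceramic layer (assumed)
nu = 0.22                 # Poisson's ratio (assumed)
alpha_ceramic = 7.8e-6    # 1/K, CTE of an Al2O3-TiC layer (assumed)
alpha_wc_co = 5.5e-6      # 1/K, CTE of a WC-Co substrate (assumed)

sigma = mismatch_stress(E, nu, alpha_ceramic, alpha_wc_co, dT)
print(f"Thermal-mismatch stress estimate: {sigma / 1e6:.0f} MPa")
```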

Relevance: 30.00%

Abstract:

Human African trypanosomiasis, also known as sleeping sickness, is a major cause of death in Africa, for which no safe and effective treatments are available. The enzyme aldolase from Trypanosoma brucei is an attractive, validated target for drug development. A series of alkyl-glycolamido and alkyl-monoglycolate derivatives was studied employing a combination of drug design approaches. Three-dimensional quantitative structure-activity relationship (3D QSAR) models were generated using comparative molecular field analysis (CoMFA). Significant results were obtained for the best QSAR model (r2 = 0.95, non-cross-validated correlation coefficient, and q2 = 0.80, cross-validated correlation coefficient), indicating its predictive ability for untested compounds. The model was then used to predict the values of the dependent variable (pKi) for an external test set; the predicted values were in good agreement with the experimental results. The integration of 3D QSAR, molecular docking and molecular dynamics simulations provided further insight into the structural basis for selective inhibition of the target enzyme.
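The two quoted statistics can be reproduced for any regression-style QSAR model. The sketch below, using PLS regression (the engine behind CoMFA) on a synthetic stand-in for the molecular field descriptors, computes the non-cross-validated r2 and the leave-one-out cross-validated q2; the data and component count are placeholders, not the study's CoMFA fields.

```python
# r^2 (fitted) and q^2 (leave-one-out cross-validated) for a PLS-based
# QSAR-style model. Synthetic data stands in for the CoMFA field descriptors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))                     # placeholder descriptor matrix
pKi = X @ rng.normal(size=8) + rng.normal(scale=0.3, size=30)

pls = PLSRegression(n_components=3).fit(X, pKi)
r2 = pls.score(X, pKi)                           # non-cross-validated r^2

# q^2 = 1 - PRESS / SS_tot, with leave-one-out predictions.
press, ss_tot = 0.0, float(np.sum((pKi - pKi.mean()) ** 2))
for train, test in LeaveOneOut().split(X):
    m = PLSRegression(n_components=3).fit(X[train], pKi[train])
    press += float(((pKi[test] - m.predict(X[test]).ravel()) ** 2).sum())
q2 = 1.0 - press / ss_tot

print(f"r^2 = {r2:.2f}, q^2 = {q2:.2f}")
```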

Relevance: 30.00%

Abstract:

The use of tendons for transmitting forces and movements in robotic devices has been investigated by several researchers all over the world. The interest in this kind of actuation is based on the possibility of optimizing the position of the actuators with respect to the moving parts of the robot, on reduced weight, high reliability, simplicity of the mechanical design and, finally, on the reduced cost of the resulting kinematic chain. After a brief discussion of the benefits that tendons can bring to the motion control of a robotic device, the design and control aspects of the UB Hand 3 anthropomorphic robotic hand are presented. In particular, the tendon-sheath transmission system adopted in the UB Hand 3 is analyzed and the problem of force control and friction compensation is taken into account. The implementation of a tendon-based antagonistically actuated robotic arm is then investigated. With this kind of actuation, and by using transmission elements with a nonlinear force/compression characteristic, it is possible to achieve simultaneous stiffness and position control, thereby improving the safety of the device during operation in unknown environments and in the case of interaction with other robots or with humans. The problem of modeling and controlling this type of robotic device is then considered and the stability analysis of the proposed controller is reported. Finally, some tools for the real-time simulation of dynamic systems are presented. This real-time simulation environment has been developed with the aim of improving the reliability of real-time control applications, both for rapid prototyping of controllers and as a teaching tool for automatic control courses.
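Why a nonlinear force/compression characteristic enables simultaneous stiffness and position control can be shown with a toy antagonistic joint. In the sketch below two tendons with an assumed quadratic characteristic f = k·x² act on a pulley; co-contraction raises the joint stiffness while leaving the equilibrium position unchanged, which a linear characteristic cannot do. Constants are illustrative, not UB Hand 3 parameters.

```python
# Toy antagonistic joint: two tendons with quadratic force/elongation
# characteristic f = k * x**2 pulling on a pulley of radius r.
# Illustrative constants, not UB Hand 3 parameters.

k = 2.0e5   # N/m^2, tendon characteristic coefficient (assumed)
r = 0.01    # m, pulley radius (assumed)

def torque(theta, u1, u2):
    """Net joint torque for motor take-ups u1, u2 (tendon stretches >= 0)."""
    x1 = max(u1 - r * theta, 0.0)
    x2 = max(u2 + r * theta, 0.0)
    return r * k * (x1**2 - x2**2)

def stiffness(theta, u1, u2, h=1e-7):
    """Joint stiffness K = -d(torque)/d(theta), by central differences."""
    return -(torque(theta + h, u1, u2) - torque(theta - h, u1, u2)) / (2 * h)

# Same equilibrium (u1 == u2 => theta = 0) at two co-contraction levels:
for u in (0.005, 0.02):
    print(f"co-contraction u = {u:.3f} m -> K = {stiffness(0.0, u, u):.2f} Nm/rad")

# Stiffness grows with co-contraction while the equilibrium stays at zero.
# With a *linear* characteristic (f = k*x) the torque would depend only on
# x1 - x2, so pretension could not change the stiffness: that is why the
# nonlinear transmission elements are needed.
```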

Relevance: 30.00%

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are supposed to form by accretion of matter and merging between smaller units. During merger events, shocks driven by the gravity of the dark matter heat the diffuse baryonic component up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a “revision” of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) are best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena, while, in principle, low frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:

• Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, within the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between the thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?

Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that during a merger a fraction of the PdV work done by the infalling subcluster in passing through the most massive one is injected in the form of magnetosonic waves. The processes of stochastic acceleration of relativistic electrons by these waves, and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters, are then computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the most massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass.
The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ∼ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ∼ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequencies and allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ∼ 0.05–0.2) with our ongoing GMRT Radio Halo Pointed Observations of 50 X-ray luminous galaxy clusters (at z ∼ 0.2–0.4) and discuss the possibility of testing our model expectations against the number counts of Radio Halos at z ∼ 0.05–0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an “average” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last “geometrical” MH − RH correlation allows us to “observationally” overcome the limitation of the “average” size of Radio Halos. Thus in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to derive the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (PR − RH, PR − MH, PR − T, PR − LX, ...) are now well understood in the context of the re-acceleration model.
In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately implies that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
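The cut-off argument sketched in the abstract can be made quantitative with textbook scalings: balancing a systematic acceleration rate γ/τ_acc against synchrotron plus inverse-Compton losses gives the break Lorentz factor, and hence the maximum synchrotron frequency. The sketch below assumes standard loss formulae, an equivalent CMB field B_cmb = 3.25(1+z)² µG and an illustrative acceleration time; none of the numbers are from the thesis.

```python
# Break energy of re-accelerated electrons from the balance between a
# systematic acceleration rate gamma/tau_acc and synchrotron + inverse-
# Compton losses, and the resulting maximum synchrotron frequency.
# Textbook scalings with assumed numbers; not the thesis calculation.
import math

SIGMA_T = 6.652e-25                    # cm^2, Thomson cross section
M_E_C = 9.109e-28 * 2.998e10           # g cm / s

def gamma_break(tau_acc_s, B_uG, z):
    """Lorentz factor where gamma/tau_acc equals the radiative loss rate."""
    B_cmb = 3.25 * (1.0 + z) ** 2                       # microgauss equivalent
    U = (B_uG**2 + B_cmb**2) * 1e-12 / (8.0 * math.pi)  # erg/cm^3
    loss = (4.0 / 3.0) * SIGMA_T * U / M_E_C            # dgamma/dt = -loss*g^2
    return 1.0 / (tau_acc_s * loss)

def nu_break_MHz(gamma, B_uG):
    """Characteristic synchrotron frequency, nu ~ 1.2 MHz * gamma^2 * B[G]."""
    return 1.2 * gamma**2 * B_uG * 1e-6

tau_acc = 1e8 * 3.156e7    # s, ~0.1 Gyr acceleration time (assumed)
for B in (0.5, 1.0, 3.0):  # microgauss
    g = gamma_break(tau_acc, B, z=0.2)
    print(f"B = {B} uG: gamma_b ~ {g:.1e}, nu_b ~ {nu_break_MHz(g, B):.0f} MHz")
```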

Relevance: 30.00%

Abstract:

This thesis deals with two important research aspects concerning radio frequency (RF) microresonators and switches. First, a new approach for the compact modeling and simulation of these devices is presented. Then, a combined process flow for their simultaneous fabrication on an SOI substrate is proposed. Compact models for microresonators and switches are extracted by applying mathematical model order reduction (MOR) to the devices' finite element (FE) description in ANSYS. The behaviour of these devices includes various nonlinearities. However, an approximation is introduced in the creation of the FE model which enables the use of linear model order reduction. Microresonators are modeled with the introduction of transducer elements, which allow for direct coupling of the electrical and mechanical domains. The coupled-system element matrices are linearized around an operating point and reduced. The resulting macromodel is valid for small-signal analysis around the bias point, such as harmonic pre-stressed analysis. This is extremely useful for characterizing the frequency response of resonators. The compact modelling of switches preserves the nonlinearity of the device behaviour. Nonlinear reduced order models are obtained by reducing the number of nonlinearities in the system and handling them as inputs to the system. In this way, the system can be reduced using linear MOR techniques and the nonlinearities are introduced directly in the reduced order model. The reduction of the number of system nonlinearities implies the approximation of all distributed forces in the model with lumped forces. Both for microresonators and for switches, a procedure for matrix extraction has been developed so that the reduced order models include the effects of electrical and mechanical pre-stress. The extraction process is fast and can be carried out automatically from ANSYS binary files. The method has been applied to the simulation of several devices, both at device and at circuit level. Simulation results have been compared with full model simulations and, when available, with experimental data. The reduced order models have proven to preserve the accuracy of the finite element method and to give a good description of the overall device behaviour, despite the introduced approximations. In addition, simulation is very fast, both at device and at circuit level. A combined process flow for the integrated fabrication of microresonators and switches has been defined. For this purpose, two processes that are optimized for the independent fabrication of these devices are merged. The major advantage of this process is the possibility of creating on-chip circuit blocks that include both microresonators and switches; an application is, for example, a switched filter bank for a wireless transceiver. The process for microresonator fabrication is characterized by the use of silicon-on-insulator (SOI) wafers, by a deep reactive ion etching (DRIE) step for the creation of the vibrating structures in single-crystal silicon, and by the use of a sacrificial oxide layer for the definition of the resonator-to-electrode distance. The fabrication of switches is characterized by the use of two different conductive layers for the definition of the actuation electrodes and by the use of a photoresist as a sacrificial layer for the creation of the suspended structure. Both processes include a gold electroplating step for the creation of the resonator electrodes, transmission lines and suspended structures.
The combined process flow is designed in such a way that it preserves the basic properties of the original processes. Neither the performance of the resonators nor that of the switches is affected by the simultaneous fabrication. Moreover, common fabrication steps are shared, which allows for cheaper and faster fabrication.
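The linear MOR step described above can be illustrated with a minimal Krylov-projection example: build an orthonormal basis for a second-order mechanical system M x¨ + K x = b u and project the matrices, so the reduced model matches the full transfer function at low frequencies. The matrices below are random stand-ins, not the ANSYS-extracted ones, and the moment-matching recipe is one common MOR choice among several.

```python
# Minimal Krylov-projection MOR for a second-order system
#   M x'' + K x = b u,   y = c^T x.
# Random stand-in matrices; the thesis extracts M, K from ANSYS FE models.
import numpy as np

n, r = 200, 6
rng = np.random.default_rng(1)
K = np.diag(np.linspace(1.0, 50.0, n)) + 0.01 * rng.standard_normal((n, n))
K = 0.5 * (K + K.T) + 50.0 * np.eye(n)      # symmetric, positive definite
M = np.eye(n)
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Krylov basis span{K^-1 b, (K^-1 M) K^-1 b, ...}, orthonormalized by QR:
# this matches moments of the transfer function around s = 0.
v = np.linalg.solve(K, b)
basis = [v]
for _ in range(r - 1):
    basis.append(np.linalg.solve(K, M @ basis[-1]))
V, _ = np.linalg.qr(np.column_stack(basis))

Mr, Kr = V.T @ M @ V, V.T @ K @ V           # r x r reduced matrices
br, cr = V.T @ b, V.T @ c

def H(w, M_, K_, b_, c_):
    """Frequency response c^T (K - w^2 M)^-1 b."""
    return c_ @ np.linalg.solve(K_ - w**2 * M_, b_)

for w in (0.5, 2.0, 5.0):
    print(f"w = {w}: full = {H(w, M, K, b, c):+.5e}, "
          f"reduced = {H(w, Mr, Kr, br, cr):+.5e}")
```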

Relevance: 30.00%

Abstract:

Background. The surgical treatment of dysfunctional hips is a severe condition for the patient and a costly therapy for public health systems. Hip resurfacing techniques seem to hold the promise of various advantages over traditional THR, with particular benefit for young and active patients. Although the lesson provided in the past by many branches of engineering is that success in designing competitive products can be achieved only by predicting the possible scenarios of failure, to date implant quality is poorly addressed pre-clinically, and revision remains the only delayed but reliable end point for assessment. The aim of the present work was to model the musculoskeletal system so as to develop a protocol for predicting failure of hip resurfacing prostheses. Methods. Preliminary studies validated the technique for the generation of subject-specific finite element (FE) models of long bones from Computed Tomography data. The proposed protocol consisted in the numerical analysis of the prosthesis biomechanics through deterministic and statistical studies, so as to assess the risk of biomechanical failure under the different operative conditions the implant might face in a population of interest during various activities of daily living. Physiological conditions were defined, including the variability of the anatomy, bone densitometry, surgical uncertainties and published boundary conditions at the hip. The protocol was tested by analysing a successful design on the market and a new prototype of a resurfacing prosthesis. Results. The intrinsic accuracy of the models in bone stress prediction (RMSE < 10%) was aligned with the current state of the art in this field. The accuracy of the prediction of the bone-prosthesis contact mechanics was also excellent (< 0.001 mm). The sensitivity of the models' predictions to uncertainties in the modelling parameters was found to be below 8.4%. The analysis of the successful design resulted in very good agreement with published retrospective studies. The geometry optimisation of the new prototype led to a final design with a low risk of failure. The statistical analysis confirmed the minimal risk of the optimised design over the entire population of interest. The performance of the optimised design showed a significant improvement with respect to the first prototype (+35%). Limitations. In the author's opinion, the major limitation of this study lies in the boundary conditions. The muscular forces and the hip joint reaction were derived from the few data available in the literature, which can be considered significant but hardly representative of the entire variability of boundary conditions the implant might face over the patient population. This moved the focus of the research to modelling the musculoskeletal system; the ongoing activity is to develop subject-specific musculoskeletal models of the lower limb from medical images. Conclusions. The developed protocol was able to accurately predict known clinical outcomes when applied to a well-established device and to support the design optimisation phase, providing important information on critical characteristics of the patients, when applied to a new prosthesis. The presented approach has a generality that would allow the extension of the protocol to a large set of orthopaedic scenarios with minor changes. Hence, a failure mode analysis criterion can be considered a suitable tool in the development of new orthopaedic devices.
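The statistical part of such a protocol can be sketched as a Monte Carlo sweep over patient variability: sample anatomy, densitometry and load parameters, evaluate a risk indicator, and report the fraction of the population at risk. In the sketch below every ingredient (the distributions, the surrogate standing in for the FE solve, the strength law and threshold) is an invented placeholder for illustration.

```python
# Monte Carlo risk sweep over a synthetic patient population. Every
# ingredient (distributions, surrogate model, strength law) is a made-up
# placeholder: the actual protocol evaluates subject-specific FE models.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

bone_density = rng.normal(1.0, 0.12, N)            # normalized BMD (assumed)
implant_angle = rng.normal(0.0, 5.0, N)            # deg, placement error
peak_load = rng.lognormal(np.log(2500.0), 0.25, N) # N, hip contact force

def surrogate_peak_stress(rho, angle_deg, load_n):
    """Toy stand-in for the FE solve: stress grows with load and
    malpositioning, falls with bone density."""
    return load_n * (1.0 + 0.02 * np.abs(angle_deg)) / (30.0 * rho)

stress = surrogate_peak_stress(bone_density, implant_angle, peak_load)
strength = 120.0 * bone_density**2                 # MPa, assumed strength law

print(f"Fraction of population at risk: {np.mean(stress > strength):.2%}")
```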

Relevance: 30.00%

Abstract:

The continuous increase of genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with the issue of assigning biological information to each sequence. The annotation process operates at every level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished solely by in vitro analysis procedures, which are extremely expensive and time consuming when applied at such a large scale. Thus, in silico methods need to be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow a fast, reliable and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine learning based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is independence from the biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification, made by myself, of the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was reported to outperform all the currently available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine learning based method was implemented for the prediction of GPI-anchored proteins. The method efficiently predicts from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method, called GPIPE, was reported to greatly enhance the prediction performance for GPI-anchored proteins over all the previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, allowing the definition of specific amino acid abundances in the different regions considered.
Furthermore, the hypothesis, proposed in the literature, that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo, http://gpcr.biocomp.unibo.it/bacello; eSLDB, http://gpcr.biocomp.unibo.it/esldb; GPIPE, http://gpcr.biocomp.unibo.it/gpipe.
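The class-balancing idea behind BaCelLo can be approximated off the shelf by reweighting the per-class SVM penalty inversely to class frequency, which likewise prevents the most represented localization from being over-predicted. The sketch below uses scikit-learn's class_weight heuristic on a synthetic imbalanced dataset; it is a stand-in for, not a reproduction of, the Balanced SVM modification described above.

```python
# Class-balanced SVM: penalize errors on rare classes more, so the model
# does not collapse onto the most represented class. sklearn's
# class_weight="balanced" heuristic stands in for the Balanced SVM.
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Imbalanced synthetic dataset: 80% / 15% / 5% class frequencies.
X, y = make_classification(n_samples=3000, n_classes=3, n_informative=6,
                           weights=[0.80, 0.15, 0.05], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

for cw in (None, "balanced"):
    clf = SVC(kernel="rbf", class_weight=cw).fit(Xtr, ytr)
    print(f"\nclass_weight = {cw}")
    print(classification_report(yte, clf.predict(Xte), digits=2))
```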

Relevance: 30.00%

Abstract:

This work reports on two different projects that were carried out during the three years of the Doctor of Philosophy course. In the first year, a project regarding a Capacitive Pressure Sensor Array for Aerodynamic Applications was developed in the Applied Aerodynamics research team of the Second Faculty of Engineering, University of Bologna, Forlì, Italy, in collaboration with the ARCES laboratories of the same university. Capacitive pressure sensors were designed and fabricated, investigating theoretically and experimentally the sensors' mechanical and electrical behaviour by means of finite element method simulations and wind tunnel tests. During the design phase, the sensor figures of merit were considered and evaluated for specific aerodynamic applications. The aim of this work is the production of low-cost MEMS-alternative devices suitable for a sensor network to be implemented in an air data system. The last two years were dedicated to a project regarding a Wireless Pressure Sensor Network for Nautical Applications. The aim of the developed sensor network is to sense the weak pressure field acting on the sail plan of a full-batten sail by means of instrumented battens, providing a real-time differential pressure map over the entire sail surface. The wireless sensor network and the sensing unit were designed, fabricated and tested in the faculty laboratories. A static nonlinear coupled mechanical-electrostatic simulation has been developed to predict the pressure-versus-capacitance static characteristic suitable for the transduction process and to tune the geometry of the transducer to reach the required resolution, sensitivity and time response over the appropriate full-scale pressure input. A time-dependent viscoelastic error model has been inferred and developed by means of experimental data in order to model, predict and reduce the inaccuracy bound due to the viscoelastic phenomena affecting the Mylar® polyester film used for the sensor diaphragm. The developments of the two above-mentioned subjects are strictly related but presented separately in this work.
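The transduction and compensation ideas can be sketched with a parallel-plate model plus a Prony-series creep factor for the viscoelastic diaphragm: pressure deflects the membrane, capacitance is inverted back to an apparent pressure, and the creep model removes the drift. Geometry, compliance and Prony coefficients below are assumed, not the fitted Mylar model of the thesis.

```python
# Parallel-plate pressure-to-capacitance transduction with a Prony-series
# creep correction for a viscoelastic (Mylar-like) diaphragm.
# All geometry and material constants are assumed.
import numpy as np

EPS0 = 8.854e-12   # F/m

def capacitance(p_pa, gap0=20e-6, area=4e-6, compliance=2e-11):
    """Small-deflection model: average gap shrinks linearly with pressure."""
    return EPS0 * area / (gap0 - compliance * p_pa)

def pressure_from_C(c, gap0=20e-6, area=4e-6, compliance=2e-11):
    """Invert the capacitance model back to an apparent pressure."""
    return (gap0 - EPS0 * area / c) / compliance

def creep_factor(t_s, g=(0.05, 0.03), tau=(10.0, 300.0)):
    """Prony series: J(t)/J0 = 1 + sum_i g_i * (1 - exp(-t/tau_i))."""
    return 1.0 + sum(gi * (1.0 - np.exp(-t_s / ti)) for gi, ti in zip(g, tau))

P = 500.0  # Pa, constant true differential pressure
for t in (0.0, 30.0, 600.0):
    C_t = capacitance(P * creep_factor(t))   # creep inflates the deflection
    raw = pressure_from_C(C_t)               # static calibration drifts up
    corrected = raw / creep_factor(t)        # viscoelastic compensation
    print(f"t = {t:5.0f} s: raw = {raw:6.1f} Pa, corrected = {corrected:6.1f} Pa")
```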

Relevance: 30.00%

Abstract:

The purpose of this work is to define and calculate a factor of collapse related to the traditional method of designing sheet pile walls. Furthermore, we tried to find the parameters that most influence a finite element model representative of this problem. The text is structured in this way: in chapters 1 to 5 we analyze a series of topics which are useful for understanding the problem, while the considerations mainly related to the purpose of the text are reported in chapters 6 to 10. The first part of the document covers the following topics: what a sheet pile wall is, which codes are to be followed for the design of these structures and what they prescribe, how a mathematical model of the soil can be formulated, some fundamentals of finite element analysis and, finally, the traditional methods that support the design of sheet pile walls. In chapter 6 we performed a parametric analysis, answering the second part of the purpose of the work. Comparing the results from a laboratory test on a cantilever sheet pile wall in a sandy soil with those provided by a finite element model of the same problem, we concluded that: in modelling a sandy soil we should pay attention to the value of cohesion inserted in the model (some programs, like Abaqus, do not accept a null value for this parameter); the friction angle and the elastic modulus of the soil significantly influence the behaviour of the structure-soil system; other parameters, like the dilatancy angle or Poisson's ratio, do not seem to influence it. The logical path that we followed in the second part of the text is reported here. We analyzed two different structures: the first is able to support an excavation of 4 m, while the second supports an excavation of 7 m. Both structures are first designed by using the traditional method; these structures are then implemented in a finite element program (Abaqus) and pushed to collapse by decreasing the friction angle of the soil. The factor of collapse is the ratio between the tangent of the initial friction angle and that of the friction angle at collapse. Finally, we performed a more detailed analysis of the first structure, observing that the value of the factor of collapse is influenced by a wide range of parameters, including the values of the coefficients assumed in the traditional method and the relative stiffness of the structure-soil system. In the majority of cases, we found that the value of the factor of collapse is between 1.25 and 2. With some considerations, reported in the text, the values found so far can be compared with the value of the safety factor proposed by the code (linked to the friction angle of the soil).
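The factor of collapse defined above is the classic strength-reduction factor, and the loop is easy to sketch: reduce the friction angle until the model loses equilibrium, then take F = tan(φ0)/tan(φ_collapse). In the sketch below the FE equilibrium check (an Abaqus run in the thesis) is replaced by a dummy threshold, and the angles are illustrative.

```python
# Strength-reduction loop for the factor of collapse
#   F = tan(phi_initial) / tan(phi_at_collapse).
# The equilibrium check is a dummy threshold here; in the thesis it is a
# finite element analysis in Abaqus.
import math

def model_is_stable(phi_deg):
    """Stand-in for the FE run: assume this wall needs phi >= 21 deg."""
    return phi_deg >= 21.0     # assumed collapse threshold

phi0 = 33.0                    # deg, initial friction angle (assumed)
phi = phi0
while model_is_stable(phi):
    phi -= 0.5                 # reduce the soil strength step by step
phi_collapse = phi + 0.5       # last stable value

F = math.tan(math.radians(phi0)) / math.tan(math.radians(phi_collapse))
print(f"phi_collapse = {phi_collapse:.1f} deg -> factor of collapse F = {F:.2f}")
```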

Relevance: 30.00%

Abstract:

In this work we introduce an analytical approach to the frequency warping transform. Criteria for the design of operators based on arbitrary warping maps are provided, and an algorithm carrying out a fast computation is defined. Such operators can be used to shape the tiling of the time-frequency plane in a flexible way. Moreover, they are designed to be inverted by the application of their adjoint operator. According to the proposed mathematical model, the frequency warping transform is computed by considering two additive operators: the first represents its nonuniform Fourier transform approximation and the second suppresses aliasing. The first operator is analytically characterized and fast computable by various interpolation approaches. A factorization of the second operator is found for arbitrarily shaped, non-smooth warping maps. By properly truncating the operators involved in the factorization, the computation turns out to be fast without compromising accuracy.
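The first of the two additive operators, the nonuniform Fourier approximation, can be sketched directly as a matrix whose rows sample the Fourier kernel at warped frequencies, scaled by the square root of the warping-map derivative so the operator is approximately unitary. The warping map below is an assumed smooth one, and the aliasing-suppression operator of the paper is omitted, so the adjoint inverts the transform only up to the aliasing residual shown.

```python
# Nonuniform-Fourier part of a frequency-warping operator: rows sample the
# Fourier kernel at warped frequencies theta(f), scaled by sqrt(theta') so
# the matrix is approximately unitary. Assumed smooth warping map; the
# aliasing-suppression operator is omitted here.
import numpy as np

N = 256
n = np.arange(N)
f = n / N

def theta(x):
    """Assumed smooth, monotone warping map of [0, 1)."""
    return x + 0.05 * np.sin(2.0 * np.pi * x)

dtheta = np.gradient(theta(f), f)
W = (np.sqrt(dtheta)[:, None]
     * np.exp(-2j * np.pi * np.outer(theta(f), n)) / np.sqrt(N))

x = np.random.default_rng(0).standard_normal(N)
y = W @ x                          # warped-frequency analysis
x_rec = (W.conj().T @ y).real      # adjoint as approximate inverse

err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
print(f"relative residual with adjoint only (the aliasing term): {err:.2e}")
```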

Relevance: 30.00%

Abstract:

The ongoing innovation in the microwave transistor technologies used in the implementation of microwave circuits has to be supported by the study and development of proper design methodologies which, depending on the application, fully exploit the potential of the technology. After the choice of the technology to be used in a particular application, the circuit designer has few degrees of freedom when carrying out the design; in most cases, due to technological constraints, foundries develop and provide customized processes optimized for a specific performance such as power, low noise, linearity, broadband operation, etc. For these reasons circuit design is always a “compromise”, a search for the best solution that trades off the desired performances. This approach becomes crucial in the design of microwave systems to be used in satellite applications; the tight space constraints impose reaching the best performance under properly de-rated electrical and thermal conditions, with respect to the maximum ratings provided by the technology in use, in order to ensure adequate levels of reliability. In particular, this work is about one of the most critical components in the front-end of a satellite antenna: the High Power Amplifier (HPA). The HPA is the main source of power dissipation and thus the element that weighs most heavily on the space, weight and cost of telecommunication apparatus; it is clear from the above that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many transactions and publications describe different methods for the design of power amplifiers, highlighting the possibility of obtaining very good levels of output power, efficiency and gain. Starting from the existing knowledge, the target of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account in the same manner as power and efficiency. After a reminder of the existing theories of power amplifier design, in the first section of this work the effectiveness of a methodology based on the accurate control of the dynamic Load Line and its shaping is described, explaining all the steps in the design of two different kinds of high power amplifiers. Considering the trade-off between the main performances and reliability issues as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the Load Line at the intrinsic terminals of the selected active device. The methodology proposed in this first part is based on the assumption that the designer has an accurate electrical model of the device available; the variety of publications on this subject demonstrates how difficult it is to build a CAD model capable of taking into account all the non-ideal phenomena that occur when the amplifier operates at such high frequency and power levels. For this reason, especially for the emerging Gallium Nitride (GaN) technology, in the second section a new approach to power amplifier design is described, based on the experimental characterization of the intrinsic Load Line by means of a low-frequency, high-power measurement bench.
Thanks to the possibility of developing my Ph.D. in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programs requested by space agencies, with the aim of supporting technology transfer from universities to the industrial world and of promoting science-based entrepreneurship. For these reasons the proposed design methodology is explained on the basis of many experimental results.
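The load-line trade-off at the heart of the methodology can be illustrated with the idealized textbook case: sweep the load resistance seen at the intrinsic terminals of a class-A stage, with the voltage swing clipped by the knee voltage and the current swing by the device's current capability. Supply, knee and current ratings below are assumed round numbers, not measured GaN data.

```python
# Idealized class-A load-line sweep at the intrinsic terminals: output
# power and drain efficiency versus load resistance. Supply, knee voltage
# and current capability are assumed round numbers, not measured data.

VDD, VKNEE = 28.0, 3.0   # V, supply and knee voltage (assumed)
IMAX = 2.0               # A, maximum drain current (assumed)

R_opt = 2.0 * (VDD - VKNEE) / IMAX   # classic optimum load-line resistance
for R in (0.5 * R_opt, R_opt, 2.0 * R_opt):
    # Swing limited by whichever clips first: knee voltage or current.
    v_swing = min(VDD - VKNEE, R * IMAX / 2.0)
    i_swing = v_swing / R
    p_out = 0.5 * v_swing * i_swing          # fundamental output power
    p_dc = VDD * IMAX / 2.0                  # class-A bias at IMAX/2
    print(f"R = {R:5.1f} ohm: Pout = {p_out:5.2f} W, "
          f"drain efficiency = {p_out / p_dc:5.1%}")
```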

Relevance: 30.00%

Abstract:

This thesis deals with a novel control approach based on the extension of the well-known Internal Model Principle to the case of periodic switched linear exosystems. This extension, inspired by power electronics applications, aims to provide an effective design method for robustly achieving the asymptotic tracking of periodic references with an infinite number of harmonics. In the first part of the thesis the basic components of the novel control scheme are described and preliminary results on stabilization are provided. In the second part, advanced control methods for two applications coming from the world of high-energy physics are presented.
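The Internal Model Principle requires the controller to embed a generator of the reference; for a periodic reference with infinitely many harmonics the simplest discrete-time embodiment is a repetitive controller built around a one-period delay. The sketch below uses that classic scheme, with an assumed first-order plant and a forgetting factor for robustness; it illustrates the principle only and is not the switched-exosystem design developed in the thesis.

```python
# Repetitive control: the simplest discrete-time internal model of a
# periodic reference (a one-period delay loop). Assumed first-order plant;
# the forgetting factor q < 1 trades exact tracking for robustness.
import numpy as np

T = 50                    # samples per reference period
steps = 40 * T
a, b = 0.9, 0.1           # assumed stable plant: y[k+1] = a*y[k] + b*u[k]
kr, q = 0.3, 0.97         # learning gain and forgetting factor

t = np.arange(steps)
ref = np.sign(np.sin(2.0 * np.pi * t / T))   # square wave: infinite harmonics

u = np.zeros(steps)
y = np.zeros(steps)
for k in range(steps - 1):
    e = ref[k] - y[k]
    u_prev = u[k - T] if k >= T else 0.0     # internal model: period-old input
    u[k] = q * u_prev + kr * e
    y[k + 1] = a * y[k] + b * u[k]

for i in (1, 10, 39):     # early, middle, late period
    e_mean = np.abs(ref[i*T:(i+1)*T] - y[i*T:(i+1)*T]).mean()
    print(f"period {i+1:2d}: mean |error| = {e_mean:.3f}")
```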

Relevance: 30.00%

Abstract:

Technology advances in recent years have dramatically changed the way users exploit contents and services available on the Internet, by enforcing pervasive and mobile computing scenarios and enabling access to networked resources almost from everywhere, at any time, and independently of the device in use. In addition, people increasingly expect to customize their experience, by exploiting specific device capabilities and limitations, inherent features of the communication channel in use, and interaction paradigms that significantly differ from the traditional request/response one. The so-called Ubiquitous Internet scenario calls for solutions that address many different challenges, such as device mobility, session management, content adaptation, context-awareness and the provisioning of multimodal interfaces. Moreover, new service opportunities demand simple and effective ways to integrate existing resources into new and value-added applications, which may also undergo run-time modifications according to ever-changing execution conditions. Although service-oriented architectural models are gaining momentum to tame the increasing complexity of composing and orchestrating distributed and heterogeneous functionalities, existing solutions generally lack a unified approach and only provide support for specific Ubiquitous Internet aspects. Moreover, they usually target rather static scenarios and scarcely support the dynamic nature of pervasive access to Internet resources, which can make existing compositions soon become obsolete or inadequate, and hence in need of reconfiguration. This thesis proposes a novel middleware approach to comprehensively deal with the facets of the Ubiquitous Internet and to assist in establishing innovative application scenarios. We claim that a truly viable ubiquity support infrastructure must neatly decouple the distributed resources to integrate, and push any kind of content-related logic outside its core layers, keeping only management and coordination responsibilities. Furthermore, we promote an innovative, open and dynamic resource composition model that allows complex scenario requirements to be easily described and enforced, and supports suitable reaction to changes in the execution conditions.

Relevance: 30.00%

Abstract:

In this thesis we study three combinatorial optimization problems belonging to the classes of Network Design and Vehicle Routing problems, which are strongly linked in the context of the design and management of transportation networks: the Non-Bifurcated Capacitated Network Design Problem (NBP), the Period Vehicle Routing Problem (PVRP) and the Pickup and Delivery Problem with Time Windows (PDPTW). These problems are NP-hard and contain as special cases some well-known difficult problems such as the Traveling Salesman Problem and the Steiner Tree Problem. Moreover, they model the core structure of many practical problems arising in logistics and telecommunications. The NBP is the problem of designing the optimum network to satisfy a given set of traffic demands. Given a set of nodes, a set of potential links and a set of point-to-point demands called commodities, the objective is to select the links to install and to dimension their capacities so that all the demands can be routed between their respective endpoints, and the sum of the link fixed costs and the commodity routing costs is minimized. The problem is called non-bifurcated because the solution network must allow each demand to follow a single path, i.e., the flow of each demand cannot be split. Although this is the case in many real applications, the NBP has received significantly less attention in the literature than other capacitated network design problems that allow bifurcation. We describe an exact algorithm for the NBP, based on solving with an integer programming solver a formulation of the problem strengthened by simple valid inequalities, together with four new heuristic algorithms. One of these heuristics is an adaptive memory metaheuristic, based on partial enumeration, that could be applied to a wider class of structured combinatorial optimization problems. In the PVRP a fleet of vehicles of identical capacity must be used to service a set of customers over a planning period of several days. Each customer specifies a service frequency, a set of allowable day-combinations and a quantity of product that the customer must receive every time he is visited. For example, a customer may require to be visited twice during a 5-day period, imposing that these visits take place on Monday-Thursday or Monday-Friday or Tuesday-Friday. The problem consists in simultaneously assigning a day-combination to each customer and designing the vehicle routes for each day so that each customer is visited the required number of times, the number of routes on each day does not exceed the number of vehicles available, and the total cost of the routes over the period is minimized. We also consider a tactical variant of this problem, called the Tactical Planning Vehicle Routing Problem, where customers require to be visited on a specific day of the period but a penalty cost, called the service cost, can be paid to postpone the visit to a later day. To our knowledge, all the algorithms proposed in the literature for the PVRP are heuristics. In this thesis we present for the first time an exact algorithm for the PVRP, based on different relaxations of a set partitioning-like formulation. The effectiveness of the proposed algorithm is tested on a set of instances from the literature and on a new set of instances. Finally, the PDPTW consists in servicing a set of transportation requests using a fleet of identical vehicles of limited capacity located at a central depot.
Each request specifies a pickup location and a delivery location and requires that a given quantity of load be transported from the pickup location to the delivery location. Moreover, each location can be visited only within an associated time window. Each vehicle can perform at most one route, and the problem is to satisfy all the requests using the available vehicles so that each request is serviced by a single vehicle, the load on each vehicle does not exceed its capacity, and all locations are visited according to their time windows. We formulate the PDPTW as a set partitioning-like problem with additional cuts and propose an exact algorithm based on different relaxations of the mathematical formulation, together with a branch-and-cut-and-price algorithm. The new algorithm is tested on two classes of problems from the literature and compared with a recent branch-and-cut-and-price algorithm from the literature.
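The NBP described above maps naturally onto a small mixed-integer program: binary installation variables per link, binary per-commodity arc flows (a single path per demand, hence non-bifurcated), flow conservation, and capacity constraints linking the two. The sketch below solves a toy instance with PuLP; it illustrates the formulation only, without the valid inequalities and heuristics the thesis adds.

```python
# Toy MILP for the Non-Bifurcated Capacitated Network Design Problem:
# binary link-installation variables y, binary per-commodity arc flows x
# (one path per demand = non-bifurcated), flow conservation, capacity
# linking. Illustrative instance only, solved with PuLP/CBC.
import pulp

nodes = ["a", "b", "c", "d"]
arcs = [("a", "b"), ("b", "d"), ("a", "c"), ("c", "d"), ("b", "c")]
cap = 10                                     # capacity of an installed link
fixed = {e: 100 for e in arcs}               # link installation cost
route = {e: 1 for e in arcs}                 # routing cost per unit of demand
commodities = {("a", "d"): 6, ("b", "d"): 7} # (origin, destination): demand

prob = pulp.LpProblem("NBP", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", arcs, cat="Binary")
x = pulp.LpVariable.dicts("flow", [(k, e) for k in commodities for e in arcs],
                          cat="Binary")

prob += (pulp.lpSum(fixed[e] * y[e] for e in arcs)
         + pulp.lpSum(commodities[k] * route[e] * x[(k, e)]
                      for k in commodities for e in arcs))

for k in commodities:                        # single-path flow conservation
    o, d = k
    for v in nodes:
        out_ = pulp.lpSum(x[(k, e)] for e in arcs if e[0] == v)
        in_ = pulp.lpSum(x[(k, e)] for e in arcs if e[1] == v)
        prob += out_ - in_ == (1 if v == o else -1 if v == d else 0)

for e in arcs:                               # routed demand fits the capacity
    prob += (pulp.lpSum(commodities[k] * x[(k, e)] for k in commodities)
             <= cap * y[e])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("total cost =", pulp.value(prob.objective))
for e in arcs:
    if y[e].value() > 0.5:
        users = [k for k in commodities if x[(k, e)].value() > 0.5]
        print("open", e, "used by", users)
```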