983 results for Solution techniques


Relevance:

30.00%

Publisher:

Abstract:

This thesis presents the outcomes of my Ph.D. course in telecommunications engineering. The focus of my research has been on Global Navigation Satellite Systems (GNSS), and in particular on the design of aiding schemes operating at both the position and the physical level, together with the evaluation of their feasibility and advantages. Assistance techniques at the position level are considered to enhance receiver availability in challenging scenarios where satellite visibility is limited. Novel positioning techniques relying on peer-to-peer interaction and exchange of information are thus introduced. More specifically, two different techniques are proposed: the Pseudorange Sharing Algorithm (PSA), based on the exchange of GNSS data, which provides coarse positioning where the user has scarce satellite visibility, and the Hybrid approach, which also improves the accuracy of the positioning solution. At the physical level, aiding schemes are investigated to improve the receiver's ability to synchronize with satellite signals. An innovative code acquisition strategy for dual-band receivers, the Cross-Band Aiding (CBA) technique, is introduced to speed up initial synchronization by exploiting the exchange of time references between the two bands. In addition, vector configurations for code tracking are analyzed and their feedback generation process thoroughly investigated.
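For context, the sketch below shows the standard single-receiver iterative least-squares pseudorange fix that peer-to-peer schemes such as the PSA extend with measurements exchanged between peers. It is a minimal illustration: the satellite geometry, function name and numerical values are assumptions made here, not material from the thesis.

```python
import numpy as np

def ls_position_fix(sat_pos, pseudoranges, iters=10):
    """Iterative (Gauss-Newton) least-squares fix from pseudoranges -- illustrative sketch.

    sat_pos      : (N, 3) ECEF satellite positions [m]
    pseudoranges : (N,)   measured pseudoranges [m]
    Returns the estimated receiver position (3,) and the clock bias [m].
    """
    state = np.zeros(4)                               # [x, y, z, c*dt], start at Earth's centre
    for _ in range(iters):
        vec = sat_pos - state[:3]                     # receiver-to-satellite vectors
        rho = np.linalg.norm(vec, axis=1)             # geometric ranges
        predicted = rho + state[3]                    # predicted pseudoranges
        H = np.hstack([-vec / rho[:, None], np.ones((len(rho), 1))])   # linearized design matrix
        dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        state += dx
        if np.linalg.norm(dx) < 1e-4:
            break
    return state[:3], state[3]

# Hypothetical geometry: five satellites and a receiver near the Earth's surface
sats = np.array([[1.5e7, 1.0e7, 2.0e7], [-1.2e7, 0.8e7, 2.1e7], [0.5e7, -1.4e7, 1.9e7],
                 [-0.7e7, -1.1e7, 2.2e7], [1.0e7, 0.2e7, 2.4e7]])
truth, clock_bias = np.array([4.2e6, 1.1e6, 4.7e6]), 150.0
pr = np.linalg.norm(sats - truth, axis=1) + clock_bias
print(ls_position_fix(sats, pr))
```

With at least four pseudoranges the fix is determined; fewer visible satellites is exactly the scarce-visibility situation in which the PSA borrows measurements from neighbouring peers.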

Relevance:

30.00%

Publisher:

Abstract:

The aim of the work presented here is the characterization of the structure and dynamics of different types of supramolecular systems by advanced NMR spectroscopy. One of the characteristic features of NMR spectroscopy is its high selectivity; it is therefore desirable to exploit this technique for studying the structure and dynamics of large supramolecular systems without isotopic enrichment. The observed resonance frequencies are not only isotope-specific but also influenced by local fields, in particular by the distribution of electron density around the investigated nucleus. Barbituric acids are well known for forming strongly hydrogen-bonded complexes with a variety of adenine derivatives. The prototropic tautomerism of this material facilitates an adjustment to complementary bases containing DDA (A = hydrogen-bond acceptor site, D = hydrogen-bond donor site) or ADA sequences, thereby yielding strongly hydrogen-bonded complexes. In this contribution, the solid-state structures of the enolizable chromophore "1-n-butyl-5-(4-nitrophenyl)-barbituric acid", which features adjustable hydrogen-bonding properties, and its molecular assemblies with three bases of different strength (proton sponge, the adenine mimetic 2,6-diaminopyridine (DAP), and 2,6-diacetamidopyridine (DAC)) are studied. Diffusion NMR spectroscopy gives information about such interactions and has become the method of choice for measuring the diffusion coefficient, thereby reflecting the effective size and shape of a molecular species. In this work, supramolecular aggregates in the solution state are investigated by means of DOSY NMR techniques. The underlying principles of the DOSY NMR experiment are discussed briefly, and two applications demonstrating the potential of this method are examined in detail. Calix[n]arenes have gained a rather prominent position, both as host materials and as platforms to design specific receptors. In this respect, several different capsular contents of tetraurea calix[4]arenes (benzene, benzene-d6, 1-fluorobenzene, 1-fluorobenzene-d5, 1,4-difluorobenzene, and cobaltocenium) are studied by solid-state NMR spectroscopy. In the solid state, the study of the interaction between tetraurea calix[4]arenes and guests is simplified by the fact that the guest molecule remains complexed and positioned within the cavity, thus allowing a more direct investigation of the host-guest interactions.
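As an illustration of how diffusion NMR data translate into effective molecular size, the following sketch fits the Stejskal-Tanner attenuation to gradient-dependent intensities and converts the fitted diffusion coefficient into a hydrodynamic radius via the Stokes-Einstein relation. The pulse parameters, gradient values and synthetic intensities are illustrative assumptions, not data from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA_H = 2.675e8        # 1H gyromagnetic ratio [rad s^-1 T^-1]
K_B = 1.380649e-23       # Boltzmann constant [J K^-1]

def stejskal_tanner(g, D, I0, delta=2e-3, Delta=50e-3):
    """Attenuation I(g) = I0 * exp(-D * (gamma*g*delta)^2 * (Delta - delta/3))."""
    b = (GAMMA_H * g * delta) ** 2 * (Delta - delta / 3.0)
    return I0 * np.exp(-D * b)

def hydrodynamic_radius(D, T=298.15, eta=8.9e-4):
    """Stokes-Einstein: r_H = k_B*T / (6*pi*eta*D), assuming water viscosity at 25 C."""
    return K_B * T / (6.0 * np.pi * eta * D)

# Hypothetical gradient strengths [T/m] and slightly noisy normalized intensities
g = np.linspace(0.02, 0.5, 12)
I = stejskal_tanner(g, D=3.0e-10, I0=1.0) * (1 + 0.01 * np.random.randn(g.size))

(D_fit, I0_fit), _ = curve_fit(stejskal_tanner, g, I, p0=(1e-10, 1.0))
print(f"D = {D_fit:.2e} m^2/s, r_H = {hydrodynamic_radius(D_fit) * 1e9:.2f} nm")
```

The fitted radius is what DOSY measurements ultimately report as the effective size of an aggregate in solution.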

Relevance:

30.00%

Publisher:

Abstract:

This thesis analyses problems related to the applicability of Process Mining tools and techniques in business environments. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying the circumstances where problems can emerge: data preparation, the actual mining, and interpretation of the results. Other problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for the identification of the "case-ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and propose a generalization of a well-known control-flow discovery algorithm in order to exploit non-instantaneous events. The use of interval-based recording leads to a significant improvement in performance. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for the extension of a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches. Two actual mining algorithms are proposed: the first is the adaptation of a frequency counting algorithm to the control-flow discovery problem; the second constitutes a framework of models which can be used for different kinds of streams (stationary versus evolving).
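To make the on-line scenario concrete, here is a minimal sketch of the frequency-counting idea used for streaming control-flow discovery: maintaining approximate counts of directly-follows relations over an event stream with bounded memory, so that a model can be derived at any moment. The event format, class name and pruning rule are assumptions for illustration, not the thesis's actual algorithm.

```python
from collections import defaultdict

class StreamingDirectlyFollows:
    """Approximate directly-follows counts over an event stream (frequency-counting sketch)."""

    def __init__(self, max_relations=1000):
        self.max_relations = max_relations
        self.counts = defaultdict(int)     # (activity_a, activity_b) -> observed frequency
        self.last_activity = {}            # case_id -> last activity seen for that case

    def observe(self, case_id, activity):
        prev = self.last_activity.get(case_id)
        if prev is not None:
            self.counts[(prev, activity)] += 1
        self.last_activity[case_id] = activity
        if len(self.counts) > self.max_relations:       # prune rare pairs to bound memory
            cutoff = min(self.counts.values())
            self.counts = defaultdict(int, {k: v for k, v in self.counts.items() if v > cutoff})

    def model(self, threshold=5):
        """Directly-follows pairs frequent enough to keep in the discovered control-flow model."""
        return {pair: c for pair, c in self.counts.items() if c >= threshold}

# Hypothetical stream of (case_id, activity) events
miner = StreamingDirectlyFollows()
for case, act in [("c1", "A"), ("c1", "B"), ("c2", "A"), ("c1", "C"), ("c2", "B")]:
    miner.observe(case, act)
print(miner.model(threshold=1))
```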

Relevance:

30.00%

Publisher:

Abstract:

A new class of inorganic-organic hybrid polymers was successfully prepared by the combination of different polymerization techniques. Access to a broad range of organic polymers incorporated into the hybrid polymer was realized using two independent approaches.

In the first approach, a functional poly(silsesquioxane) (PSSQ) network was pre-formed, which was capable of initiating a controlled radical polymerization to graft organic vinyl-type monomers from the PSSQ precursor. Both atom transfer radical polymerization (ATRP) and reversible addition-fragmentation chain-transfer (RAFT) polymerization could be used as controlled radical polymerization techniques after defined tuning of the PSSQ precursor, either toward a PSSQ macro-initiator or toward a PSSQ macro-chain-transfer agent. The polymerization pathway, consisting of polycondensation of trialkoxysilanes followed by grafting-from polymerization of different monomers, allowed the synthesis of various functional hybrid polymers. A controlled synthesis of the PSSQ precursors could successfully be performed using a microreactor setup; the molecular weight could be adjusted easily while the polydispersity index could be decreased to well below 2.

The second approach aimed to incorporate organic polymers derived by other routes. As examples, polycarbonate and poly(ethylene glycol) were end-group-modified using trialkoxysilanes. After end-group functionalization, these organic polymers could be incorporated into a PSSQ network.

These different hybrid polymers showed extraordinary coating abilities. All polymers could be processed from solution by spin-coating or dip-coating. The high amount of reactive silanol moieties in the PSSQ part could be cross-linked after application by annealing at 130 °C for 1 h. Not only was cross-linking of the whole film achieved, which resulted in mechanical interlocking with the substrate, but chemical bonds to metal or metal oxide surfaces were also formed. All coating materials showed high stability and adhesion on various underlying materials, ranging from metals (such as steel or gold) and metal oxides (such as glass) to plastics (such as polycarbonate or polytetrafluoroethylene).

As the material and the synthetic pathway were very tolerant toward different functionalities, various functional monomers could be incorporated into the final coating material. The incorporation of N-isopropylacrylamide yielded temperature-responsive surface coatings, whereas the incorporation of redox-active monomers allowed the preparation of semi-conductive coatings, capable of producing smooth hole-injection layers on transparent conductive electrodes used in optoelectronic devices.

The range of possible applications could be increased tremendously by the incorporation of reactive monomers capable of undergoing fast and quantitative conversions by polymer-analogous reactions. For example, grafting active esters from a PSSQ precursor yielded a reactive surface coating after application onto numerous substrates. Simply by dipping the coated substrate into a solution of a functionalized amine, the desired function could be immobilized at the interface as well as throughout the whole film. The obtained reactive surface coatings could be used as the basis for different functional coatings for various applications. Conversion with specifically tuned amines yielded surfaces with adjustable or switchable wetting behavior, or recognition elements for surface-oriented bioanalytical devices. The combination of hybrid materials with orthogonal reactivities allowed, for the first time, the preparation of multi-reactive surfaces which could be functionalized sequentially with defined fractions of different groups at the interface.

The introduced concept for synthesizing functional hybrid polymers unifies the main requirements of an ideal coating material. Strong adhesion on a wide range of underlying materials was achieved by secondary condensation of the PSSQ part, whereas the organic part allowed the incorporation of various functionalities. Thus, a flexible platform to create functional and reactive surface coatings was achieved, which could be applied to different substrates.

Relevance:

30.00%

Publisher:

Abstract:

In this work, self-assembling model systems in aqueous solution were studied. The systems contained charged polymers, polyelectrolytes, that were combined with oppositely charged counterions to build up supramolecular structures. With imaging, scattering and spectroscopic techniques it was investigated how the structure of the building units influences the structure of their assemblies. Polyelectrolytes with different chemical structure, molecular weight and morphology were investigated. In addition to linear polyelectrolytes, semi-flexible cylindrical bottle-brush polymers that possess a defined cross-section and a relatively high persistence along the backbone were studied. The polyelectrolytes were combined with structural organic counterions having charge numbers from one to four. In particular, the self-assembly of polyelectrolytes with different tetravalent water-soluble porphyrins was studied. Porphyrins have a rigid aromatic structure that has a structural effect on their self-assembly behavior and through which porphyrins are capable of self-aggregation via π-π interaction. The main focus of the thesis is the self-assembly of cylindrical bottle-brush polyelectrolytes with tetravalent porphyrins. It was shown that the addition of porphyrins to oppositely charged brush molecules induces a hierarchical formation of stable nanoscale brush-porphyrin networks. The networks can be disconnected by addition of salt, and single porphyrin-decorated cylindrical brush polymers are obtained. These two new morphologies, brush-porphyrin networks and porphyrin-decorated brush polymers, may have potential as functional materials with interesting mechanical and optical properties.

Relevance:

30.00%

Publisher:

Abstract:

This thesis collects the outcomes of a Ph.D. course in telecommunications engineering and is focused on enabling techniques for Spread Spectrum (SS) navigation and communication satellite systems. It provides innovations in both interference management and code synchronization techniques. These two aspects are critical for modern navigation and communication systems and constitute the common denominator of the work. The thesis is organized in two parts: the former deals with interference management. We have proposed a novel technique for enhancing the sensitivity of an advanced interference detection and localization system operating in the Global Navigation Satellite System (GNSS) bands, which allows the identification of interfering signals received with power even lower than that of the GNSS signals. Moreover, we have introduced an effective cancellation technique for signals transmitted by jammers, exploiting their repetitive characteristics, which strongly reduces the interference level at the receiver. The second part deals with code synchronization. More specifically, we have designed the code synchronization circuit for a Telemetry, Tracking and Control system operating during the Launch and Early Orbit Phase; the proposed solution copes with the very large frequency uncertainty and dynamics characterizing this scenario, and performs the estimation of the code epoch, the carrier frequency and the carrier frequency variation rate. Furthermore, considering a generic pair of circuits performing code acquisition, we have proposed a comprehensive framework for the design and analysis of the optimal cooperation procedure, which minimizes the time required to accomplish synchronization. The study is particularly interesting since it enables a reduction of the code acquisition time without increasing the computational complexity. Finally, considering a network of collaborating navigation receivers, we have proposed an innovative cooperative code acquisition scheme, which exploits the code epoch information shared between neighboring nodes, according to the Peer-to-Peer paradigm.
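As background for the code-synchronization part, the sketch below shows the standard FFT-based parallel code-phase search that such acquisition circuits build upon: circular correlation of the received samples with a local code replica for a single Doppler hypothesis. All names, lengths and values are illustrative; the thesis's designs add frequency-rate estimation and cooperation between circuits on top of this baseline.

```python
import numpy as np

def acquire_code_phase(rx, code, doppler_hz, fs):
    """Parallel code-phase search for one Doppler bin via circular correlation (sketch).

    rx   : complex baseband samples spanning one code period
    code : local spreading-code replica, same length as rx
    Returns (estimated code phase in samples, peak-to-mean metric).
    """
    n = np.arange(len(rx))
    wiped = rx * np.exp(-2j * np.pi * doppler_hz * n / fs)        # remove the Doppler hypothesis
    corr = np.fft.ifft(np.fft.fft(wiped) * np.conj(np.fft.fft(code)))
    power = np.abs(corr) ** 2
    return int(np.argmax(power)), float(power.max() / power.mean())

# Hypothetical signal: a +/-1 code received with 250 samples of delay and 1 kHz Doppler
fs, delay, doppler = 1.023e6, 250, 1000.0
code = np.sign(np.random.randn(1023))
rx = np.roll(code, delay) * np.exp(2j * np.pi * doppler * np.arange(1023) / fs)
print(acquire_code_phase(rx, code, doppler, fs))
```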

Relevance:

30.00%

Publisher:

Abstract:

A critical point in the analysis of ground displacement time series is the development of data-driven methods that allow the different sources generating the observed displacements to be discerned and characterised. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which allows the dimensionality of the data space to be reduced while retaining most of the variance of the dataset. However, PCA does not perform well in solving the so-called Blind Source Separation (BSS) problem, i.e. in recovering and separating the original sources that generated the observed data. This is mainly due to the assumptions on which PCA relies: it looks for a new Euclidean space where the projected data are uncorrelated. Independent Component Analysis (ICA) is a popular technique adopted to approach this problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, I use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mix of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here I present the application of the vbICA technique to GPS position time series. First, I use vbICA on synthetic data that simulate a seismic cycle (interseismic + coseismic + postseismic + seasonal + noise) and a volcanic source, and I study the ability of the algorithm to recover the original (known) sources of deformation. Secondly, I apply vbICA to different tectonically active scenarios, such as the 2009 L'Aquila (central Italy) earthquake, the 2012 Emilia (northern Italy) seismic sequence, and the 2006 Guerrero (Mexico) Slow Slip Event (SSE).
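For context, the sketch below shows the PCA decomposition that vbICA improves upon: a centred station-by-epoch displacement matrix factorized by SVD into spatial responses and temporal sources. The matrix shapes, synthetic signals and function name are assumptions made for illustration; the thesis's vbICA instead models each source pdf with a mix of Gaussians in a variational Bayesian framework.

```python
import numpy as np

def pca_sources(X, n_components=3):
    """PCA of a displacement matrix X (stations x epochs) via SVD -- illustrative sketch.

    Returns spatial responses (stations x k), temporal sources (k x epochs) and the
    fraction of variance explained by each retained component.
    """
    Xc = X - X.mean(axis=1, keepdims=True)             # remove each station's mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    k = n_components
    return U[:, :k] * s[:k], Vt[:k, :], explained[:k]

# Hypothetical network: 20 stations, 365 daily epochs, a seasonal term plus a coseismic-like step
t = np.arange(365)
seasonal = np.sin(2 * np.pi * t / 365.25)
step = (t > 180).astype(float)
X = np.outer(np.random.randn(20), seasonal) + np.outer(np.random.randn(20), step)
X += 0.1 * np.random.randn(20, 365)
spatial, temporal, var = pca_sources(X, n_components=2)
print("variance explained:", np.round(var, 3))
```

PCA returns uncorrelated, orthogonal components; the appeal of vbICA is precisely that it relaxes this constraint and seeks statistically independent, possibly non-Gaussian sources.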

Relevance:

30.00%

Publisher:

Abstract:

High-resolution microscopy techniques provide a plethora of information on biological structures from the cellular level down to the molecular level. In this review, we present the unique capabilities of transmission electron and atomic force microscopy to assess the structure, oligomeric state, function and dynamics of channel and transport proteins in their native environment, the lipid bilayer. Most importantly, membrane proteins can be visualized in the frozen-hydrated state and in buffer solution by cryo-transmission electron and atomic force microscopy, respectively. We also illustrate the potential of the scintillation proximity assay to study substrate binding of detergent-solubilized transporters prior to crystallization and structural characterization.

Relevance:

30.00%

Publisher:

Abstract:

Multi-input multi-output (MIMO) technology is an emerging solution for high-data-rate wireless communications. We develop soft-decision based equalization techniques for frequency-selective MIMO channels in the quest for low-complexity equalizers with BER performance competitive with that of ML sequence detection. We first propose soft decision equalization (SDE), and demonstrate that decision feedback equalization (DFE) based on soft decisions, expressed via the posterior probabilities associated with feedback symbols, is able to outperform hard-decision DFE, with a low computational cost that is polynomial in the number of symbols to be recovered and linear in the signal constellation size. Building upon the probabilistic data association (PDA) multiuser detector, we present two new MIMO equalization solutions to handle the distinctive channel memory. With their low complexity, simple implementations, and impressive near-optimum performance offered by iterative soft-decision processing, the proposed SDE methods are attractive candidates to deliver efficient reception solutions to practical high-capacity MIMO systems. Motivated by the need for low-complexity receiver processing, we further present an alternative low-complexity soft-decision equalization approach for frequency-selective MIMO communication systems. With the help of iterative processing, two detection and estimation schemes based on second-order statistics are harmoniously put together to yield a two-part receiver structure: local multiuser detection (MUD) using soft-decision Probabilistic Data Association (PDA) detection, and dynamic noise-interference tracking using Kalman filtering. The proposed Kalman-PDA detector performs local MUD within a sub-block of the received data instead of over the entire data set, to reduce the computational load. At the same time, all the interference affecting the local sub-block, including both multiple-access and inter-symbol interference, is properly modeled as the state vector of a linear system and dynamically tracked by Kalman filtering. Two types of Kalman filters are designed, both of which are able to track a finite impulse response (FIR) MIMO channel of any memory length. The overall algorithms enjoy low complexity that is only polynomial in the number of information-bearing bits to be detected, regardless of the data block size. Furthermore, we introduce two optional performance-enhancing techniques: cross-layer automatic repeat request (ARQ) for uncoded systems and a code-aided method for coded systems. We take Kalman-PDA as an example, and show via simulations that both techniques can render error performance that is better than Kalman-PDA alone and competitive with sphere decoding. Finally, we consider the case in which channel state information (CSI) is not perfectly known to the receiver, and present an iterative channel estimation algorithm. Simulations show that the performance of SDE with channel estimation approaches that of SDE with perfect CSI.
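To give a flavour of soft-decision processing of the PDA kind, the sketch below detects real-valued BPSK symbols through a memoryless MIMO channel by iteratively cancelling soft interference and updating symbol posteriors under a Gaussian approximation. It is a simplified, hypothetical illustration (real BPSK, no channel memory, invented names), not the Kalman-PDA receiver of this work.

```python
import numpy as np

def pda_detect(H, y, noise_var, iters=5):
    """Soft-decision (PDA-style) detection of BPSK symbols for y = H s + n -- sketch.

    H : (M, K) real channel matrix, y : (M,) received vector.
    Returns the posterior probabilities P(s_k = +1) and the hard decisions.
    """
    M, K = H.shape
    p = np.full(K, 0.5)                                  # initial symbol posteriors
    for _ in range(iters):
        for k in range(K):
            m = 2 * p - 1                                # soft symbol means
            v = 1 - m**2                                 # soft symbol variances
            others = [j for j in range(K) if j != k]
            r = y - H[:, others] @ m[others]                              # cancel soft interference
            C = noise_var * np.eye(M) + (H[:, others] * v[others]) @ H[:, others].T
            llr = 2 * H[:, k] @ np.linalg.solve(C, r)                     # Gaussian-approximation LLR
            p[k] = 1 / (1 + np.exp(-llr))
    return p, np.where(p > 0.5, 1, -1)

# Hypothetical 4x4 real-valued MIMO channel with BPSK symbols
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
s = rng.choice([-1, 1], size=4)
y = H @ s + 0.1 * rng.standard_normal(4)
probs, s_hat = pda_detect(H, y, noise_var=0.01)
print(s, s_hat)
```

The thesis goes further by running such local soft-decision detection on sub-blocks while a Kalman filter tracks the residual multiple-access and inter-symbol interference.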

Relevance:

30.00%

Publisher:

Abstract:

Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as aerodynamic and hydrodynamic systems, which dominate various physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while facilitating user observation and understanding of the flow field in a clear manner. My research mainly focuses on the analysis and visualization of flow fields using various techniques, e.g. information-theoretic techniques and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, how to select good streamlines to capture flow patterns and how to pick good viewpoints to observe flow fields become critical. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using the dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, and the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a solution for view-dependent streamline selection that guarantees coherent streamline updates when the view changes gradually. When projecting 3D streamlines to 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we design FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration. We enable observation and exploration of the relationships among field line clusters, spatiotemporal regions and their interconnection in the transformed space. Most viewpoint selection methods consider only external viewpoints outside of the flow field. This does not convey a clear observation when the flow field is cluttered near the boundary. Therefore, we propose a new way to explore flow fields by selecting several internal viewpoints around the flow features inside the flow field and then generating a B-spline curve path traversing these viewpoints to provide users with close-up views of the flow field for detailed observation of hidden or occluded internal flow features [54]. This work is also extended to deal with unsteady flow fields. Besides flow field visualization, some other topics relevant to visualization also attract my attention. In iGraph [31], we leverage a distributed system along with a tiled display wall to provide users with high-resolution visual analytics of big image and text collections in real time. Developing pedagogical visualization tools forms my other research focus. Since most cryptography algorithms use sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does it. Therefore, we develop a set of visualization tools to provide users with an intuitive way to learn and understand these algorithms.
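To ground the information-theoretic flavour of the streamline and viewpoint selection described above, the sketch below scores a candidate view (represented by the projected flow directions visible from it) by the Shannon entropy of a direction histogram, the intuition being that higher entropy indicates more varied, and hence more informative, visible flow behaviour. The binning and the score are illustrative assumptions, not the actual measures of [56] or [81].

```python
import numpy as np

def direction_entropy(vectors, n_bins=16):
    """Shannon entropy of 2D flow directions -- an illustrative importance score.

    vectors : (N, 2) array of projected velocity vectors visible from a candidate viewpoint.
    """
    angles = np.arctan2(vectors[:, 1], vectors[:, 0])                  # direction of each sample
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Hypothetical comparison: a nearly uniform shear flow versus a swirling vortex
shear = np.column_stack([np.ones(500), 0.05 * np.random.randn(500)])
theta = np.random.uniform(0, 2 * np.pi, 500)
vortex = np.column_stack([-np.sin(theta), np.cos(theta)])
print(direction_entropy(shear), direction_entropy(vortex))             # the vortex scores higher
```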

Relevance:

30.00%

Publisher:

Abstract:

In recent years, advanced metering infrastructure (AMI) has been a main research focus because the traditional power grid has become too restricted to meet development requirements. There has been an ongoing effort to increase the number of AMI devices that provide real-time data readings to improve system observability. AMI deployed across secondary distribution networks provides load and consumption information for individual households, which can improve grid management. The significant upgrade costs associated with retrofitting existing meters with network-capable sensing can be made more economical by using image processing methods to extract usage information from images of the existing meters. This thesis presents a new solution that uses online exchange of power consumption information to a cloud server without modifying the existing electromechanical analog meters. In this framework, a systematic approach to extract energy data from images replaces the manual reading process. A case study compares the digital imaging approach to the averages determined by visual readings over a one-month period.
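As a toy illustration of how extracted dial readings become a consumption value, the sketch below converts per-dial pointer angles of an electromechanical register (adjacent dials rotating in opposite directions) into a numeric reading. The angle convention, dial order and the assumption that the needle angles have already been estimated by the image-processing stage are all hypothetical; the thesis's actual pipeline and its cloud upload are not reproduced here.

```python
def dials_to_reading(needle_angles_deg):
    """Toy conversion of per-dial needle angles into a register reading.

    Assumes the most significant dial comes first, 0 degrees points at the '0' mark,
    and adjacent dials rotate in opposite directions (even-indexed dials clockwise).
    All conventions here are illustrative assumptions.
    """
    digits = []
    for i, angle in enumerate(needle_angles_deg):
        a = angle % 360.0
        if i % 2 == 1:                       # odd-indexed dials turn counter-clockwise
            a = (360.0 - a) % 360.0
        digits.append(int(a // 36.0))        # 36 degrees per digit position
    return int("".join(str(d) for d in digits))

# Hypothetical angles produced by the image-processing stage for a five-dial meter
print(dials_to_reading([72.0, 300.0, 36.0, 180.0, 324.0]))   # -> 21159
```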

Relevance:

30.00%

Publisher:

Abstract:

REASONS FOR PERFORMING STUDY: There is limited information on potential diffusion of local anaesthetic solution after various diagnostic analgesic techniques of the proximal metacarpal region. OBJECTIVE: To document potential distribution of local anaesthetic solution following 4 techniques used for diagnostic analgesia of the proximal metacarpal region. METHODS: Radiodense contrast medium was injected around the lateral palmar or medial and lateral palmar metacarpal nerves in 8 mature horses, using 4 different techniques. Radiographs were obtained 0, 10 and 20 min after injection and were analysed subjectively. A mixture of radiodense contrast medium and methylene blue was injected into 4 cadaver limbs; the location of the contrast medium and dye was determined by radiography and dissection. RESULTS: Following perineural injection of the palmar metacarpal nerves, most of the contrast medium was distributed in an elongated pattern axial to the second and fourth metacarpal bones. The carpometacarpal joint was inadvertently penetrated in 4/8 limbs after injections of the palmar metacarpal nerves from medial and lateral approaches, and in 1/8 limbs when both injections were performed from the lateral approach. Following perineural injection of the lateral palmar nerve using a lateral approach, the contrast medium was diffusely distributed in all but one limb, in which the carpal sheath was inadvertently penetrated. In 5/8 limbs, following perineural injection of the lateral palmar nerve using a medial approach, the contrast medium diffused proximally to the distal third of the antebrachium. CONCLUSIONS AND POTENTIAL RELEVANCE: Inadvertent penetration of the carpometacarpal joint is common after perineural injection of the palmar metacarpal nerves, but less so if both palmar metacarpal nerves are injected using a lateral approach. Following injection of the lateral palmar nerve using a medial approach, the entire palmar aspect of the carpus may be desensitised.

Relevance:

30.00%

Publisher:

Abstract:

We present a novel approach to the reconstruction of depth from light field data. Our method uses dictionary representations and group sparsity constraints to derive a convex formulation. Although our solution results in an increase of the problem dimensionality, we keep the numerical complexity at bay by restricting the space of solutions and by exploiting an efficient Primal-Dual formulation. Comparisons with state-of-the-art techniques, on both synthetic and real data, show promising performance.
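To illustrate the group-sparsity ingredient of such a convex formulation, the sketch below applies the proximal operator of the group l2,1 norm (block soft-thresholding), the basic building block evaluated at each iteration of primal-dual solvers of this type. The group layout, parameter names and coefficients are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def prox_group_l21(x, groups, tau):
    """Proximal operator of tau * sum_g ||x_g||_2 (block soft-thresholding).

    x      : 1D coefficient vector (e.g. dictionary coefficients of a light-field patch)
    groups : list of index arrays, one per group
    tau    : threshold (step size times regularization weight)
    """
    out = x.copy()
    for g in groups:
        norm = np.linalg.norm(x[g])
        out[g] = 0.0 if norm <= tau else (1.0 - tau / norm) * x[g]
    return out

# Hypothetical coefficients split into four groups of three; weak groups are zeroed jointly
x = np.array([2.0, 1.5, 0.5, 0.1, -0.05, 0.02, -1.0, 0.8, 0.3, 0.04, 0.0, -0.03])
groups = [np.arange(3 * k, 3 * k + 3) for k in range(4)]
print(prox_group_l21(x, groups, tau=0.2))
```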

Relevance:

30.00%

Publisher:

Abstract:

This thesis contributes to the analysis and design of printed reflectarray antennas. The main part of the work is focused on the analysis of dual offset antennas comprising two reflectarray surfaces, one acting as sub-reflector and the other as main reflector. These configurations introduce additional complexity in several respects compared to conventional dual offset reflectors; however, they offer many degrees of freedom that can be used to improve the electrical performance of the antenna. The thesis is organized in four parts: the development of an analysis technique for dual-reflectarray antennas, a preliminary validation of that methodology using equivalent reflector systems as reference antennas, a more rigorous validation of the software tool by manufacturing and testing a dual-reflectarray antenna demonstrator, and the practical design of dual-reflectarray systems for applications that show the potential of this kind of configuration to scan the beam and to generate contoured beams.

In the first part, a general tool has been implemented to analyze high-gain antennas constructed from two flat reflectarray structures. The classic reflectarray analysis, based on MoM under the local periodicity assumption, is used for both the sub- and main reflectarrays, taking into account the incident angle on each reflectarray element. The incident field on the main reflectarray is computed taking into account the field radiated by all the elements on the sub-reflectarray. Two approaches have been developed: one employs a simple approximation to reduce the computer run time, and the other does not, but offers, in many cases, improved accuracy. The approximation is based on computing the reflected field on each element of the main reflectarray only once for all the fields radiated by the sub-reflectarray elements, assuming that the response will be the same because the only difference is a small variation in the angle of incidence. This approximation is very accurate when the reflectarray elements on the main reflectarray show a relatively small sensitivity to the angle of incidence. An extension of the analysis technique has been implemented to study dual-reflectarray antennas comprising a main reflectarray printed on a parabolic surface, or, in general, on a curved surface. In many applications of dual-reflectarray configurations, the reflectarray elements are in the near field of the feed horn. To consider the near field radiated by the horn, the incident field on each reflectarray element is computed using a spherical mode expansion. In this region, the angles of incidence are moderately wide, and they are considered in the analysis of the reflectarray to better calculate the actual incident field on the sub-reflectarray elements. This technique increases the accuracy of the prediction of co- and cross-polar patterns and antenna gain with respect to the case of using ideal feed models.

In the second part, as a preliminary validation, the proposed analysis method has been used to design a dual-reflectarray antenna that emulates previous dual-reflector antennas in Ku- and W-bands, including a reflectarray as sub-reflector. The results for the dual-reflectarray antenna compare very well with those of the parabolic reflector and reflectarray sub-reflector: radiation patterns, antenna gain and efficiency are practically the same when the main parabolic reflector is substituted by a flat reflectarray. The results show that the gain is only reduced by a few tenths of a dB as a result of the ohmic losses in the reflectarray. The phase adjustment on two surfaces provided by the dual-reflectarray configuration can be used to improve the antenna performance in some applications requiring multiple beams, beam scanning or shaped beams.

Third, a very challenging dual-reflectarray antenna demonstrator has been designed, manufactured and tested for a more rigorous validation of the analysis technique presented. The proposed antenna configuration has the feed, the sub-reflectarray and the main reflectarray in the near field of one another, so that the conventional far-field approximations are not suitable for the analysis of such an antenna. This geometry is used as a benchmark for the proposed analysis tool under very stringent conditions. Some aspects of the proposed analysis technique that improve the accuracy of the analysis are also discussed. These improvements include a novel method to reduce the inherent cross-polarization introduced mainly by grounded patch arrays. It has been verified that cross-polarization in offset reflectarrays can be significantly reduced by properly adjusting the patch dimensions in the reflectarray in order to produce an overall cancellation of the cross-polarization. The dimensions of the patches are adjusted not only to provide the required phase distribution to shape the beam, but also to exploit the zero crossings of the cross-polarization components.

The last part of the thesis deals with direct applications of the technique described. The technique presented is directly applicable to the design of contoured-beam antennas for DBS applications, where the requirements on cross-polarization are very stringent. The beam shaping is achieved by synthesizing the phase distribution on the main reflectarray while the sub-reflectarray emulates an equivalent hyperbolic sub-reflector. Dual-reflectarray antennas also have the ability to scan the beam over small angles about boresight. Two possible architectures for a Ku-band antenna are also described, based on a dual planar reflectarray configuration that provides electronic beam scanning in a limited angular range. In the first architecture, the beam scanning is achieved by introducing phase control in the elements of the sub-reflectarray, while the main reflectarray is passive. A second alternative is also studied, in which the beam scanning is produced using 1-bit control on the main reflectarray, while a passive sub-reflectarray is designed to provide a large focal distance within a compact configuration. The system aims to provide a solution for bi-directional satellite links for emergency communications. In both proposed architectures, the objective is to provide compact optics and simplicity of folding and deployment.
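To make the phase-synthesis step concrete, the sketch below evaluates the classical phase distribution required on a single flat reflectarray to collimate the field radiated by a feed into a pencil beam toward (theta0, phi0). It is shown as an illustrative, textbook baseline under hypothetical geometry and frequency values, not the dual-reflectarray synthesis developed in the thesis.

```python
import numpy as np

def reflectarray_phase(xy, feed_pos, freq_hz, theta0, phi0):
    """Required reflection phase (rad, wrapped to [0, 2*pi)) on each element of a flat
    reflectarray to produce a pencil beam toward (theta0, phi0):
        phi_i = k0 * ( d_i - sin(theta0) * (x_i*cos(phi0) + y_i*sin(phi0)) ),
    with d_i the distance from the feed phase centre to element i.
    """
    c = 299_792_458.0
    k0 = 2 * np.pi * freq_hz / c
    elements = np.column_stack([xy[:, 0], xy[:, 1], np.zeros(len(xy))])
    d = np.linalg.norm(elements - feed_pos, axis=1)
    progressive = np.sin(theta0) * (xy[:, 0] * np.cos(phi0) + xy[:, 1] * np.sin(phi0))
    return np.mod(k0 * (d - progressive), 2 * np.pi)

# Hypothetical 10x10 grid at 12 GHz (Ku-band), half-wavelength spacing, offset feed,
# beam steered 10 degrees off broadside
f = 12e9
lam = 299_792_458.0 / f
g = (np.arange(10) - 4.5) * lam / 2
X, Y = np.meshgrid(g, g)
xy = np.column_stack([X.ravel(), Y.ravel()])
phases = reflectarray_phase(xy, feed_pos=np.array([0.0, -0.1, 0.3]),
                            freq_hz=f, theta0=np.radians(10), phi0=0.0)
print(np.degrees(phases[:5]))
```

In a dual-reflectarray system the same kind of phase control is split between the two surfaces, which is what provides the extra degrees of freedom for beam scanning and contoured beams.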

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, computing platforms consist of a very large number of components that need to be supplied with different voltage levels and have different power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes the performance and meets the electrical specifications plus the cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers. These products range from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built. The designer has to select a limited number of converters in order to simplify the analysis. In this thesis, in order to overcome the mentioned difficulties, a new design methodology for power supply systems is proposed. This methodology integrates evolutionary computation techniques to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps: one for the automatic generation of architectures and another for the optimized selection of components. This thesis details the implementation of these two steps. The usefulness of the methodology is corroborated by contrasting the results using real problems and experiments designed to test the limits of the algorithms.
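A minimal sketch of the evolutionary selection step described above: a small genetic algorithm that chooses one converter per load so as to minimize a weighted loss-plus-cost objective. The converter catalogue, load powers, fitness function and operators are hypothetical placeholders, not the thesis's actual encoding.

```python
import random

# Hypothetical converter catalogue per load: (efficiency, cost in dollars)
CATALOGUE = [
    [(0.90, 1.0), (0.93, 1.8), (0.96, 3.5)],   # candidate converters for load 0
    [(0.88, 0.8), (0.92, 1.5), (0.95, 2.9)],   # candidate converters for load 1
    [(0.91, 1.2), (0.94, 2.2)],                # candidate converters for load 2
]
LOAD_POWER = [5.0, 12.0, 3.0]                  # watts drawn by each load

def fitness(individual, cost_weight=0.5):
    """Lower is better: conversion losses plus weighted component cost."""
    loss = sum(LOAD_POWER[i] * (1 / CATALOGUE[i][g][0] - 1) for i, g in enumerate(individual))
    cost = sum(CATALOGUE[i][g][1] for i, g in enumerate(individual))
    return loss + cost_weight * cost

def evolve(pop_size=20, generations=50, mutation_rate=0.2):
    pop = [[random.randrange(len(c)) for c in CATALOGUE] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]                      # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(CATALOGUE))
            child = a[:cut] + b[cut:]                         # one-point crossover
            if random.random() < mutation_rate:               # mutate a single gene
                i = random.randrange(len(CATALOGUE))
                child[i] = random.randrange(len(CATALOGUE[i]))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 3))
```

In the actual methodology a comparable search is run over whole architectures (how loads are grouped and cascaded) as well as over the converter choices, which is what makes an exhaustive manual analysis impractical.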