885 results for Design problems


Relevance:

70.00%

Publisher:

Abstract:

This thesis is presented in two parts. The first part is an attempt to set out a framework of factors influencing the problem solving stage of the architectural design process. The discussion covers the nature of architectural problems and some of the main ways in which they differ from other types of design problems. The structure of constraints that both the problem and the architect impose upon solutions is seen as of great importance in defining the type of design problem solving situation. The problem solver, or architect, is then studied. The literature of the psychology of thinking is surveyed for relevant work. All of the traditional schools of psychology are found wanting in terms of providing a comprehensive theory of thinking. Various types of thinking are examined, particularly structural and productive thought, for their relevance to design problem solving. Finally some reported common traits of architects are briefly reviewed. The second section is a report of two main experiments which model some aspects of architectural design problem solving. The first experiment examines the way in which architects come to understand the structure of their problems. The performances of first and final year architectural students are compared with those of postgraduate science students and sixth form pupils. On the whole these groups show significantly different results and also different cognitive strategies. The second experiment poses design problems which involve both subjective and objective criteria, and examines the way in which final year architectural students are able to relate the different types of constraint produced. In the final section the significance of all the results is suggested. Some educational and methodological implications are discussed and some further experiments and investigations are proposed.

Relevance:

70.00%

Publisher:

Abstract:

The objective in this work is to build a rapid and automated numerical design method that makes optimal design of robots possible. Two classes of optimal robot design problems are specifically addressed: (1) when the objective is to optimize a pre-designed robot, and (2) when the goal is to design an optimal robot from scratch. In the first case, to reach the optimum design, some of the critical dimensions or specific measures to optimize (the design parameters) are varied within an established range. The stress is then calculated as a function of the design parameter(s), and the design parameter value(s) that optimize a pre-determined performance index provide the optimum design. In the second case, this work focuses on the development of an automated procedure for the optimal design of robotic systems. For this purpose, Pro/Engineer© and MatLab© software packages are integrated to draw the robot parts, optimize them, and then re-draw the optimal system parts.
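As a hedged illustration of the first class of problem, the sketch below varies a single hypothetical design parameter (a link's cross-section width) within an established range and picks the value optimizing a made-up performance index; the stress model, load values and SciPy-based optimizer are illustrative assumptions, not the Pro/Engineer and MatLab workflow described above.

```python
# Minimal sketch of the first design-problem class: sweep a single design
# parameter over an allowed range and keep the value that optimises a
# performance index. The stress model and the index below are hypothetical
# stand-ins, not the ones used in the work described above.
from scipy.optimize import minimize_scalar

LOAD_N = 500.0          # assumed end load on a robot link [N]
LENGTH_M = 0.4          # assumed link length [m]
YIELD_PA = 250e6        # assumed allowable stress [Pa]
DENSITY = 2700.0        # assumed material density [kg/m^3]

def bending_stress(width_m: float) -> float:
    """Max bending stress of a square-section cantilever link (toy model)."""
    moment = LOAD_N * LENGTH_M
    section_modulus = width_m ** 3 / 6.0
    return moment / section_modulus

def performance_index(width_m: float) -> float:
    """Link mass, heavily penalised when the stress limit is violated."""
    mass = DENSITY * LENGTH_M * width_m ** 2
    penalty = max(0.0, bending_stress(width_m) - YIELD_PA) * 1e-3
    return mass + penalty

# Vary the design parameter within an established range and pick the optimum.
result = minimize_scalar(performance_index, bounds=(0.01, 0.10), method="bounded")
print(f"optimal width: {result.x * 1e3:.1f} mm, "
      f"stress: {bending_stress(result.x) / 1e6:.1f} MPa")
```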

Relevance:

70.00%

Publisher:

Abstract:

This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible.

The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below.

We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications.

Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink.

We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results.

In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
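As a small, hedged illustration of the density notion used in the pruning process, the sketch below (using the networkx library and a made-up toy graph) computes the cost-to-vertex-count ratio of a 2-vertex-connected graph and of one of its 2-vertex-connected subgraphs; it is not the pruning algorithm of the thesis.

```python
# Minimal sketch of the "density" notion above: the ratio of a subgraph's edge
# cost to the number of vertices it contains, checked only for
# 2-vertex-connected candidates. The example graph is made up.
import networkx as nx

def density(graph: nx.Graph) -> float:
    """Cost of the (sub)graph divided by its number of vertices."""
    total_cost = sum(data.get("cost", 1.0) for _, _, data in graph.edges(data=True))
    return total_cost / graph.number_of_nodes()

# A toy 2-vertex-connected graph: a 6-cycle with one chord, unit edge costs.
G = nx.cycle_graph(6)
G.add_edge(0, 3)
nx.set_edge_attributes(G, 1.0, "cost")

assert nx.is_biconnected(G)          # i.e. 2-vertex-connected
print(f"density of G: {density(G):.2f}")

# A candidate subgraph produced by some pruning step (here: the plain 6-cycle).
H = G.edge_subgraph([(i, (i + 1) % 6) for i in range(6)]).copy()
assert nx.is_biconnected(H)
print(f"density of H: {density(H):.2f}")
```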

Relevance:

60.00%

Publisher:

Abstract:

The New Zealand green-lipped mussel preparation Lyprinol is available without a prescription from supermarkets, pharmacies or the Web. The Food and Drug Administration has recently warned Lyprinol USA about the extravagant anti-inflammatory claims for Lyprinol appearing on the Web. These claims are subjected to thorough review here. Lyprinol does have anti-inflammatory mechanisms, and has anti-inflammatory effects in some animal models of inflammation. Lyprinol may have benefits in dogs with arthritis. There are design problems with the clinical trials of Lyprinol in humans as an anti-inflammatory agent in osteoarthritis and rheumatoid arthritis, making it difficult to give a definite answer as to how effective Lyprinol is in these conditions, but any benefit is small. Lyprinol also has a small benefit in atopic allergy. As anti-inflammatory agents, there is little to choose between Lyprinol and fish oil. No adverse effects have been reported with Lyprinol. Thus, although it is difficult to conclude whether Lyprinol does much good, it can be concluded that Lyprinol probably does no major harm.

Relevance:

60.00%

Publisher:

Abstract:

We consider the problem of how to construct robust designs for Poisson regression models. An analytical expression is derived for robust designs for first-order Poisson regression models where uncertainty exists in the prior parameter estimates. Given certain constraints in the methodology, it may be necessary to extend the robust designs for implementation in practical experiments. With these extensions, our methodology constructs designs which perform similarly, in terms of estimation, to current techniques, and offers the solution in a more timely manner. We further apply this analytic result to cases where uncertainty exists in the linear predictor. The application of this methodology to practical design problems such as screening experiments is explored. Given the minimal prior knowledge that is usually available when conducting such experiments, it is recommended to derive designs robust across a variety of systems. However, incorporating such uncertainty into the design process can be a computationally intense exercise. Hence, our analytic approach is explored as an alternative.
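A hedged sketch of the underlying idea, with illustrative priors and design points rather than the paper's analytical construction: a pseudo-Bayesian D-type criterion for a first-order Poisson regression is evaluated by averaging the log-determinant of the Fisher information over draws from the prior on the parameters.

```python
# Hedged sketch of a robust design criterion for first-order Poisson regression:
# average a D-optimality measure over draws from the prior on the regression
# parameters rather than fixing a single best guess. The prior, design points
# and model size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def information_matrix(design: np.ndarray, beta: np.ndarray) -> np.ndarray:
    """Fisher information X' W X for a Poisson GLM with log link."""
    X = np.column_stack([np.ones(len(design)), design])   # intercept + one factor
    weights = np.exp(X @ beta)                             # Poisson mean = variance
    return X.T @ (weights[:, None] * X)

def expected_log_det(design: np.ndarray, prior_draws: np.ndarray) -> float:
    """Pseudo-Bayesian D-criterion: mean log-determinant over prior draws."""
    return float(np.mean([
        np.linalg.slogdet(information_matrix(design, beta))[1]
        for beta in prior_draws
    ]))

# Uncertainty in the prior parameter estimates, expressed as draws.
prior = rng.normal(loc=[0.5, -1.0], scale=[0.2, 0.3], size=(200, 2))

# Two candidate two-point designs on [-1, 1] (equally weighted runs).
for pts in ([-1.0, 1.0], [0.0, 1.0]):
    design = np.repeat(pts, 4)                             # an 8-run design
    print(pts, f"criterion = {expected_log_det(design, prior):.3f}")
```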

Relevance:

60.00%

Publisher:

Abstract:

In 1984 the School of Architecture and Built Environment within the University of Newcastle, Australia, introduced an integrated program based on real design projects and using Integrated Problem Based Learning (IPBL) as the teaching method. Since 1984 there have been multiple changes arising from the expectations of the architectural fraternity, enrolling students, lecturers, available facilities, accreditation authorities and many others. These challenges have been successfully accommodated whilst maintaining the original purposes and principles of IPBL. The Architecture program has a combined two-degree structure consisting of a first degree, Bachelor of Science (Architecture), followed by a second degree, Bachelor of Architecture. The program is designed to simulate the problem-solving situations that face a working architect in everyday practice. This paper presents the degree structure, in which each student is enrolled in a single course per semester incorporating design integration and study areas in design studies, professional studies, historical studies, technical studies, environmental studies and communication skills. Each year the design problems increase in complexity and duration and are set around an annual theme. With 20 years of successful delivery of any program there are highlights and challenges along the way, and this paper discusses some of the successes and barriers experienced within the School of Architecture and Built Environment in delivering IPBL. In addition, the reflective process investigates the currency of IPBL as an appropriate vehicle for delivering the curriculum in 2004 and any additional administrative or staff considerations required to enhance the continuing application of IPBL.

Relevance:

60.00%

Publisher:

Abstract:

The benefits of reflective practice have been well established in the literature (Rogers, 2001), as have models to embed reflective thinking in higher education curricula (Ryan and Ryan, 2012). Reflection is commonly envisaged as a textual practice, through which students ‘reflect in and on action’ (Schön 1983) and articulate their experiences, learning and outcomes in written portfolios, journals, or blogs. While such approaches to individual written reflection are undoubtedly beneficial for deepening insight and self-criticality, reflection can also provide other benefits when approached as a collaborative, oral activity. This poster presents a dialogic model of reflective practice that affords the opportunity for developing presentation skills, critique, community and professional identity formation. This dialogic approach to reflection is illustrated by a first year subject (‘KIB101 Visual Communication’ at QUT), in which students apply visual theory (presented in lectures) to communication and graphic design problems in the studio. In regular (fortnightly) presentations, they critically reflect upon their work in progress by aligning it with the concepts, design principles and professional language of the lectures. This iterative process facilitates responsive peer feedback, similarly couched in the formal terms of the discipline. This ‘mirrored reflection’ not only provides opportunities for incremental improvement; it also sets designs in a theoretical frame, provides the opportunity for comparative analysis (to see design principles applied by peers in different ways), allows students to practise the formal design language and presentation techniques of the discipline and, because peer critique is framed as an act of generosity, affords the development of a supportive community of practice. In these ways, dialogic reflection helps students develop a professional voice and identity from first year. Evidence of impact is provided by quantitative and qualitative student feedback over several years, as well as institutional feedback and recognition.

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we consider non-linear transceiver designs for multiuser multi-input multi-output (MIMO) down-link in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas and each user terminal is equipped with multiple receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for inter-user interference pre-cancellation at the transmitter. We investigate robust THP transceiver designs based on the minimization of BS transmit power with mean square error (MSE) constraints, and balancing of MSE among users with a constraint on the total BS transmit power. We show that these design problems can be solved by iterative algorithms, wherein each iteration involves a pair of convex optimization problems. The robustness of the proposed algorithms to imperfections in CSIT is illustrated through simulations.
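For intuition only, the sketch below poses the simplest linear-precoding analogue of the first design problem (minimize BS transmit power subject to per-user MSE constraints) as a convex program in CVXPY; the real-valued channels, single-antenna users and perfect CSI are simplifying assumptions, and this is not the THP or robust formulation of the paper.

```python
# Hedged sketch: minimise transmit power subject to per-user MSE constraints in
# its simplest linear-precoding form, to show why each subproblem is convex.
# Not the THP design or the robust formulation of the paper.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
K, Nt = 3, 4                      # users, BS transmit antennas (assumed sizes)
sigma2 = 0.1                      # receiver noise variance (assumed)
mse_max = 0.3                     # per-user MSE target (assumed)

H = rng.standard_normal((K, Nt))  # row k = channel of user k (real-valued toy)
F = cp.Variable((Nt, K))          # column k = precoder for user k's symbol

constraints = []
for k in range(K):
    desired = np.eye(K)[k]                      # want H[k] @ F close to e_k
    mse_k = cp.sum_squares(H[k] @ F - desired) + sigma2
    constraints.append(mse_k <= mse_max)

problem = cp.Problem(cp.Minimize(cp.sum_squares(F)), constraints)
problem.solve()
print("status:", problem.status, " BS transmit power:", problem.value)
```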

Relevance:

60.00%

Publisher:

Abstract:

It has been said that we are living in a golden age of innovation. New products, systems and services aimed at enabling a better future have emerged from novel interconnections between design and design research with science, technology and the arts. These intersections are now, more than ever, catalysts that enrich daily activities for health and safety, education, personal computing, entertainment and sustainability, to name a few. Interactive functions made possible by new materials, technology, and emerging manufacturing solutions demonstrate an ongoing interplay between cross-disciplinary knowledge and research. This interplay raises questions concerning: (i) how art and design provide a focus for developing design solutions and research in technology; (ii) how theories emerging from the interactions of cross-disciplinary knowledge inform both the practice and research of design; and (iii) how research and design work together in a mutually beneficial way. The IASDR2015 INTERPLAY EXHIBITION provides some examples of these interconnections of design research with science, technology and the arts. This is done through the presentation of objects, artefacts and demonstrations that are contextualised into everyday activities across various areas including health, education, safety, furniture, fashion and wearable design. The exhibits provide a setting to explore the various ways in which design research interacts across discipline knowledge and approaches to stimulate innovation. In education, Designing South African Children’s Health Education as Generative Play (A Bennett, F Cassim, M van der Merwe, K van Zijil, and M Ribbens) presents a set of toolkits that resulted from design research entailing generative play. The toolkits are systems that engender pleasure and responsibility, and are aimed at cultivating South African youth’s awareness of nutrition, hygiene, disease awareness and prevention, and social health. In safety, AVAnav: Avalanche Rescue Helmet (Jason Germany) delivers an interactive system as a tool to help reduce the time needed to locate buried avalanche victims. Helmet-mounted, this system responds to the contextual needs of rescuers and has since led to further design research on the interface design of rescue devices. In apparel design and manufacturing, Shrinking Violets: Fashion design for disassembly (Alice Payne) proposes design for disassembly through the use of beautiful reversible mono-material garments that interactively respond to the challenges of garment construction in the fashion industry, capturing the metaphor for the interplay between technology and craft in the fashion manufacturing industry. Harvest: A biotextile future (Dean Brough and Alice Payne) explores the interplay of biotechnology, materiality and textile design in the creation of a sustainable, biodegradable vegan textile through the process of a symbiotic culture of bacteria and yeast (SCOBY). SCOBY is a pellicle curd that can be harvested, machine washed, dried and cut into a variety of designs and texture combinations. The exploration of smart materials, wearable design and micro-electronics led to creative and aesthetically coherent stimulus-reactive jewellery: Symbiotic Microcosms: Crafting Digital Interaction (K Vones).
This creation aims to bridge the gap between craft practitioner and scientific discovery, proposing a move towards the notion of a post-human body, where wearable design is seen as potential ground for new human-computer interactions, affording the development of visually engaging multifunctional enhancements. In furniture design, Smart Assistive chair for older adults (Chao Zhao) demonstrates how cross-disciplinary knowledge, interacting with design strategies, provides solutions that employ new technological developments in older aged care and the participation of multiple stakeholders: designers, the health care system and community-based health systems. In health, Molecular diagnosis system for newborns deafness genetic screening (Chao Zhao) presents an ambitious and complex project that includes a medical device aimed at resolving a number of challenges: technical feasibility for city and rural contexts, compatibility with standard laboratory and hospital systems, access to the health system, and support for the work of different hospital specialists. The interplay between cross-disciplines is evident in this work, demonstrating how design research moves forward through technology developments. These works exemplify the intersection between domains as a means to innovation. Novel design problems are identified as design intersects with the various areas. Research informs this process in different ways. We see the background investigation into the contextualising domain (e.g. on-snow studies, garment recycling, South African health concerns, the post-human body) to identify gaps in the area and design criteria; the technologies and materials reviews (e.g. AR, biotextiles) to offer plausible technical means to solve these, as well as design criteria. Theoretical reviews can also inform the design (e.g. play, flow). These work together to equip the design practitioner with a robust set of ‘tools’ for design innovation – tools that are based in research. The process identifies innovative opportunity and criteria for design and this, in turn, provides a means for evaluating the success of the design outcomes. Such an approach has the potential to come full circle between research and design – where the design can function as an exemplar, evidencing how the research-articulated problems can be solved. Core to this, however, is the evaluation of the design outcome itself and identifying knowledge outcomes. In some cases this is fairly straightforward, that is, easily measurable. For example, the efficacy of Jason Germany’s helmet can be determined by measuring the reduced response time of the rescuer. Similarly, the improved ability to recycle Payne’s panel garments can be clearly determined by comparing them to existing recycling processes (and her identified criterion of separating textile elements), while the sustainability and durability of Brough and Payne’s biotextile can be assessed by documenting the growth and decay processes, or by comparative strength studies. There are, however, situations where knowledge outcomes and insights are not so easily determined. Many of the works here are open-ended in nature, as they emphasise the holistic experience of one or more designs in context: “the end result of the art activity that provides the health benefit or outcome but rather, the value lies in the delivery and experience of the activity” (Bennett et al.)
Similarly, reconfiguring layers of laser-cut silk in Payne’s Shrinking Violets constitutes a customisable, creative process of clothing oneself, since it “could be layered to create multiple visual effects”. Symbiotic Microcosms also has room for facilitating experience, as the work is described as facilitating “serendipitous discovery”. These examples show the diverse emphasis of enquiry, on the experience versus the product. Open-ended experiences are ambiguous, multifaceted and differ from person to person and moment to moment (Eco 1962). Determining success is not always clear or immediately discernible; it may also not be the most useful question to ask. Rather, research that seeks to understand the nature of the experience afforded by the artefact is most useful in these situations. It can inform the design practitioner by helping them with subsequent re-design, as well as potentially being generalisable to other designers and design contexts. Bennett et al. exemplify how this may be approached from a theoretical perspective. Their work is concerned with facilitating engaging experiences to educate and, ultimately, impact on that community. The research is concerned with the nature of that experience as well, and in order to examine it the authors have employed theoretical lenses – here those of flow, pleasure and play. An alternative or complementary approach to using theory is the use of qualitative studies, such as interviews that ask users what they experienced. Here the user insights become evidence for generalising across cases, potentially revealing insight into relevant concerns – such as the range of possible ‘playful’ experiences that may be afforded, or the situation that preceded a ‘serendipitous discovery’. As shown, the IASDR2015 INTERPLAY EXHIBITION provides a platform for exploration, discussion and interrogation around the interplay of design research across diverse domains. We look forward with excitement as IASDR continues to bring research and design together, and as our communities of practitioners continue to push the envelope of what design is and how it can be expanded and better understood through research to foster new work and, ultimately, stimulate innovation.

Relevance:

60.00%

Publisher:

Abstract:

Combining the advanced techniques of optimal dynamic inversion and model-following neuro-adaptive control design, an innovative technique is presented to design an automatic drug administration strategy for effective treatment of chronic myelogenous leukemia (CML). A recently developed nonlinear mathematical model for cell dynamics is used to design the controller (medication dosage). First, a nominal controller is designed based on the principle of optimal dynamic inversion. This controller can treat nominal model patients (patients who can be described by the mathematical model used here with the nominal parameter values) effectively. However, since the system parameters for a realistic model patient can differ from those of the nominal model patients, simulation studies for such patients indicate that the nominal controller is either inefficient or, worse, ineffective; i.e. the trajectory of the number of cancer cells either shows unsatisfactory transient behavior or grows in an unstable manner. Hence, to make the drug dosage history more realistic and patient-specific, a model-following neuro-adaptive controller is augmented to the nominal controller. In this adaptive approach, a neural network trained online facilitates a new adaptive controller. The training process of the neural network is based on Lyapunov stability theory, which guarantees both stability of the cancer cell dynamics and boundedness of the network weights. From simulation studies, this adaptive control design approach is found to be very effective in treating CML for realistic patients. Sufficient generality is retained in the mathematical developments so that the technique can be applied to other similar nonlinear control design problems as well.
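A minimal, hedged sketch of the dynamic-inversion principle behind the nominal controller, using a made-up scalar cell-growth model rather than the CML model of the paper: the input is chosen to cancel the nominal dynamics and impose a stable first-order decay of the tracking error.

```python
# Minimal sketch of dynamic inversion: for a control-affine system
# dx/dt = f(x) + g(x) u, invert the dynamics so the state tracks a desired
# first-order error decay. The scalar "cell population" model, gains and
# dosage bounds below are made-up stand-ins, not the CML model of the paper.
import numpy as np

def f(x):                      # assumed nominal drift: net cell growth
    return 0.2 * x

def g(x):                      # assumed control effectiveness of the drug
    return -0.5 * x

def dynamic_inversion(x, x_ref, k_gain=1.0):
    """Choose u so that d(x - x_ref)/dt = -k_gain * (x - x_ref)."""
    v = -k_gain * (x - x_ref)          # desired state derivative (x_ref constant)
    return (v - f(x)) / g(x)

# Simulate a "nominal model patient" under the nominal controller.
dt, x, x_ref = 0.01, 100.0, 1.0        # initial cell count and target (assumed)
for step in range(2000):
    u = np.clip(dynamic_inversion(x, x_ref), 0.0, 10.0)   # dosage bounds (assumed)
    x += dt * (f(x) + g(x) * u)

print(f"final cell count: {x:.2f} (target {x_ref})")
```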

Relevance:

60.00%

Publisher:

Abstract:

Sequence design problems are considered in this paper. The problem of sum power minimization in a spread spectrum system can be reduced to the problem of sum capacity maximization, and vice versa. A solution to one of the problems yields a solution to the other. Subsequently, conceptually simple sequence design algorithms known to hold for the white-noise case are extended to the colored noise case. The algorithms yield an upper bound of 2N - L on the number of sequences where N is the processing gain and L the number of non-interfering subsets of users. If some users (at most N - 1) are allowed to signal along a limited number of multiple dimensions, then N orthogonal sequences suffice.
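For orientation, the sketch below computes the sum capacity being maximized in the white-noise setting, C = (1/2) log2 det(I + sigma^-2 S diag(p) S^T), for an arbitrary signature matrix S and power vector p; the sequences and powers are illustrative, and the colored-noise extension of the paper is not shown.

```python
# Hedged sketch of the quantity the algorithms above optimise: the sum capacity
# of a symbol-synchronous spread spectrum system with signature (sequence)
# matrix S, user powers p and white noise of variance sigma^2. The sequences
# and powers below are arbitrary illustrative values.
import numpy as np

def sum_capacity(S: np.ndarray, powers: np.ndarray, sigma2: float) -> float:
    N = S.shape[0]                                   # processing gain
    gram = S @ np.diag(powers) @ S.T
    sign, logdet = np.linalg.slogdet(np.eye(N) + gram / sigma2)
    return 0.5 * logdet / np.log(2.0)                # in bits per symbol

N, K = 4, 6                                          # processing gain, users
rng = np.random.default_rng(2)
S = rng.standard_normal((N, K))
S /= np.linalg.norm(S, axis=0)                       # unit-norm sequences
p = np.ones(K)                                       # equal received powers

print(f"sum capacity: {sum_capacity(S, p, sigma2=1.0):.3f} bits/symbol")
```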

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we consider robust joint designs of relay precoder and destination receive filters in a nonregenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a MIMO-relay node. The channel state information (CSI) available at the relay node is assumed to be imperfect. We consider robust designs for two models of CSI error. The first model is a stochastic error (SE) model, where the probability distribution of the CSI error is Gaussian. This model is applicable when the imperfect CSI is mainly due to errors in channel estimation. For this model, we propose robust minimum sum mean square error (SMSE), MSE-balancing, and relay transmit power minimizing precoder designs. The next model for the CSI error is a norm-bounded error (NBE) model, where the CSI error can be specified by an uncertainty set. This model is applicable when the CSI error is dominated by quantization errors. In this case, we adopt a worst-case design approach. For this model, we propose a robust precoder design that minimizes total relay transmit power under constraints on MSEs at the destination nodes. We show that the proposed robust design problems can be reformulated as convex optimization problems that can be solved efficiently using interior-point methods. We demonstrate the robust performance of the proposed design through simulations.

Relevance:

60.00%

Publisher:

Abstract:

Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting various mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including precoder and equalizer) design problems can be solved with matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.
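As a hedged reminder of the SVD framework mentioned above, the sketch below uses a randomly generated channel to show how the decomposition H = U S V^H converts a point-to-point MIMO channel into parallel scalar subchannels when V is used as the precoder and U^H as the receive filter.

```python
# Hedged sketch of the SVD framework: with full CSI, H = U S V^H turns a
# point-to-point MIMO channel into parallel scalar subchannels. The channel
# below is randomly generated for illustration.
import numpy as np

rng = np.random.default_rng(3)
Nr, Nt = 4, 4
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)
precoder = Vh.conj().T            # transmit along the right singular vectors
equalizer = U.conj().T            # receive along the left singular vectors

effective = equalizer @ H @ precoder
assert np.allclose(effective, np.diag(s), atol=1e-10)   # parallel subchannels
print("subchannel gains (singular values):", np.round(s, 3))
```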

In this thesis, we consider transceiver design problems for linear time-invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the use of these decompositions together with majorization theory in practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, generalized geometric mean decomposition (GGMD), is always less than or equal to that of geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative receiving detection algorithm for the specific receiver is also proposed. For the application to cyclic prefix (CP) systems in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K) complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate at the subchannels.

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, by using the proposed ST-GTD, we develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the super-imposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed; they are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both the case of known LTV channels and that of unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
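A small, hedged sketch of the difference co-array idea referred to above, with an illustrative (not the paper's) pilot placement: the distinct pairwise differences of M physical pilot positions can number on the order of M^2, which is what allows the subspace estimator to identify up to O(M^2) paths.

```python
# Hedged sketch of the difference co-array idea: from M physical pilot tones at
# positions p, the set of pairwise differences {p_i - p_j} can contain on the
# order of M^2 distinct "co-pilot" lags. The pilot placement is illustrative.
import numpy as np
from itertools import product

def difference_coarray(pilot_positions):
    """Return the sorted distinct pairwise differences of the pilot positions."""
    diffs = {a - b for a, b in product(pilot_positions, repeat=2)}
    return np.array(sorted(diffs))

pilots = [0, 1, 4, 9, 11]                 # M = 5 physical pilot tones (assumed)
coarray = difference_coarray(pilots)
print(f"M = {len(pilots)} pilots -> {len(coarray)} distinct co-array lags")
print(coarray)
```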

Relevance:

60.00%

Publisher:

Abstract:

This work presents active control of high-frequency vibration using skyhook dampers. The choice of the damper gain and its optimal location is crucial for the effective implementation of active vibration control. In vibration control, certain sensor/actuator locations are preferable for reducing structural vibration while using minimum control effort. In order to perform optimisation on a general built-up structure to control vibration, it is necessary to have a good modelling technique to predict the performance of the controller. The present work exploits the hybrid modelling approach, which combines the finite element method (FEM) and statistical energy analysis (SEA) to provide efficient response predictions at medium to high frequencies. The hybrid method is implemented here for a general network of plates, coupled via springs, to allow study of a variety of generic control design problems. By combining the hybrid method with numerical optimisation using a genetic algorithm, optimal skyhook damper gains and locations are obtained. The optimal controller gain and location found from the hybrid method are compared with results from a deterministic modelling method. Good agreement between the results is observed, while the hybrid method obtains its results in a significantly reduced amount of time. © 2012 Elsevier Ltd. All rights reserved.
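As a hedged illustration of the skyhook damping law itself (not the hybrid FEM-SEA model or the genetic-algorithm optimisation of the paper), the sketch below applies a force proportional to the absolute velocity of a single made-up mass-spring system and shows the resulting decay of vibration energy.

```python
# Minimal sketch of the skyhook damping law: the control force opposes the
# absolute velocity of the structure, u = -g * v, as if the mass were tied to
# the "sky" by a damper. The single-mass system and gain are illustrative,
# not the plate network of the paper.

m, k, c = 1.0, 1.0e4, 2.0          # mass [kg], stiffness [N/m], damping [Ns/m]
g_sky = 40.0                       # skyhook damper gain (a design variable)

dt, steps = 1e-4, 5000
x, v = 1e-3, 0.0                   # initial displacement and velocity
energy = []

for _ in range(steps):
    u = -g_sky * v                 # skyhook force: proportional to absolute velocity
    a = (-k * x - c * v + u) / m   # passive structure plus active force
    v += a * dt                    # semi-implicit Euler integration
    x += v * dt
    energy.append(0.5 * m * v**2 + 0.5 * k * x**2)

print(f"vibration energy after {steps * dt:.1f} s: {energy[-1]:.2e} J "
      f"(initial {energy[0]:.2e} J)")
```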