937 results for Robust localisation systems
Abstract:
Despite extensive progress on the theoretical aspects of spectrally efficient communication systems, hardware impairments such as phase noise remain a key bottleneck in next-generation wireless communication systems. The presence of non-ideal oscillators at the transceiver introduces time-varying phase noise and degrades the performance of the communication system. A significant body of research focuses on joint synchronization and decoding based on the joint posterior distribution, which incorporates both the channel and the code graph. These joint synchronization and decoding approaches operate on carefully designed sum-product algorithms, which involve iteratively passing probabilistic messages between the channel statistical information and the decoding information. Channel statistical information generally entails high computational complexity because its probabilistic model may involve continuous random variables. The detailed knowledge of the channel statistics required by these algorithms makes them an inadequate choice for real-world applications with power and computational limitations. In this thesis, novel phase estimation strategies are proposed: soft decision-directed iterative receivers for separate A Posteriori Probability (APP)-based synchronization and decoding. These algorithms do not require any a priori statistical characterization of the phase noise process. The proposed approach relies on a Maximum A Posteriori (MAP)-based algorithm to perform phase noise estimation and does not depend on the considered modulation/coding scheme, as it only exploits the APPs of the transmitted symbols. Different variants of APP-based phase estimation are considered. The proposed algorithm has significantly lower computational complexity than joint synchronization/decoding approaches, at the cost of a slight performance degradation. With the aim of improving the robustness of the iterative receiver, we derive a new system model for an oversampled (more than one sample per symbol interval) phase noise channel. We extend the separate APP-based synchronization and decoding algorithm to a multi-sample receiver, which exploits the information received from the channel by exchanging it in an iterative fashion to achieve robust convergence. Two algorithms based on sliding block-wise processing with soft ISI cancellation and detection are proposed, based on the use of reliable information from the channel decoder. Dually polarized systems provide a cost- and space-effective solution to increase spectral efficiency and are competitive candidates for next-generation wireless communication systems. A novel soft decision-directed iterative receiver for separate APP-based synchronization and decoding is proposed. This algorithm relies on a Minimum Mean Square Error (MMSE)-based cancellation of the cross-polarization interference (XPI), followed by phase estimation on the polarization of interest. This iterative receiver structure is motivated by Master/Slave Phase Estimation (M/S-PE), where M-PE corresponds to the polarization of interest. The operational principle of an M/S-PE block is to improve the phase tracking performance of both polarization branches: more precisely, the M-PE block tracks the co-polar phase and the S-PE block reduces the residual phase error on the cross-polar branch. Two variants of MMSE-based phase estimation are considered: BW and PLP.
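A minimal sketch of the soft decision-directed idea described above: symbol APPs from the decoder form soft symbol estimates, and a sliding-window average of r_k times the conjugated soft symbol yields the phase estimate. The window size, the QPSK alphabet, and the mocked decoder APPs are illustrative assumptions, not the thesis's exact parameters.

```python
import numpy as np

def soft_symbols(app, alphabet):
    """Soft symbol estimates: expectation of the alphabet under the APPs.
    app: (N, M) posterior probabilities per symbol; alphabet: (M,) complex."""
    return app @ alphabet

def dd_phase_estimate(r, c_soft, W=16):
    """Sliding-window decision-directed phase estimate:
    theta_hat[k] = angle( sum over |m-k| <= W/2 of r[m] * conj(c_soft[m]) )."""
    N = len(r)
    z = r * np.conj(c_soft)                   # per-symbol phase 'pointers'
    theta = np.empty(N)
    for k in range(N):
        lo, hi = max(0, k - W // 2), min(N, k + W // 2 + 1)
        theta[k] = np.angle(z[lo:hi].sum())   # windowing averages out noise
    return theta

# Toy usage: QPSK over a Wiener (random-walk) phase-noise channel.
rng = np.random.default_rng(0)
alphabet = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
sym = rng.integers(0, 4, 512)
c = alphabet[sym]
theta_true = np.cumsum(rng.normal(0, 0.02, 512))
r = c * np.exp(1j * theta_true) + 0.05 * (rng.normal(size=512) + 1j * rng.normal(size=512))
app = np.full((512, 4), 0.01); app[np.arange(512), sym] = 0.97  # mock decoder APPs
theta_hat = dd_phase_estimate(r, soft_symbols(app, alphabet))
```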
Abstract:
We consider an inversion-based neurocontroller for solving control problems of uncertain nonlinear systems. Classical approaches do not use uncertainty information in the neural network models. In this paper we show how knowledge of this uncertainty can be exploited to advantage by developing a novel robust inverse control method. Simulations on an uncertain nonlinear second-order system illustrate the approach.
Abstract:
This research is concerned with the development of distributed real-time systems in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and to ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol that provide fault-tolerant and real-time behaviour were developed. Petri nets are used for the design of the distributed controllers and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the complete system to be analysed in a unified manner. A common problem for Petri-net-based techniques is state space explosion; a modular approach to both design and analysis helps cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must still be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered, and a hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Local events are thus concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events keeps the state space manageable. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. The hybrid approach is applied to the Petri-net-based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
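For orientation, the decision logic of the standard two-phase commit protocol that this work extends can be sketched as follows. The participant names and the timeout-as-failure model are illustrative assumptions, not the thesis's extended protocol.

```python
from enum import Enum

class Vote(Enum):
    YES = 1
    NO = 2
    TIMEOUT = 3   # no reply within the real-time deadline

def two_phase_commit(participants, prepare):
    """Phase 1: collect votes from all participants. Phase 2: commit only on
    a unanimous YES; any NO or TIMEOUT forces a global abort, which is what
    keeps the distributed action atomic under failures."""
    votes = {p: prepare(p) for p in participants}                       # phase 1
    decision = "COMMIT" if all(v is Vote.YES for v in votes.values()) else "ABORT"
    return decision, votes                                              # phase 2 broadcast

# Usage: one unresponsive controller aborts the whole atomic action.
decision, votes = two_phase_commit(
    ["ctrl_A", "ctrl_B", "ctrl_C"],
    lambda p: Vote.TIMEOUT if p == "ctrl_C" else Vote.YES)
assert decision == "ABORT"
```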
Abstract:
Liposome systems are well reported for their activity as vaccine adjuvants; however, novel lipid-based microbubbles have also been reported to enhance the targeting of antigens into dendritic cells (DCs) in cancer immunotherapy (Suzuki et al 2009). This research initially focused on the formulation of gas-filled, lipid-coated microbubbles and their potential activation of macrophages using in vitro models. Later studies in the thesis concentrated on aqueous-filled liposomes as vaccine delivery systems. Initial work involved formulating lipid-coated microbubbles (sometimes referred to as gas-filled liposomes) by four different methods (homogenisation, sonication, a gas-releasing chemical reaction, and agitation/pressurisation) and characterising the resulting preparations in terms of stability and physico-chemical characteristics. Two of the preparations were tested as pressure probes in MRI studies. The first preparation was composed of a standard phospholipid (DSPC) filled with air or nitrogen (N2), whilst in the second the microbubbles were composed of a fluorinated phospholipid (F-GPC) filled with a fluorocarbon-saturated gas. The studies showed that a novel contrast agent allowing stable MRI measurements of fluid pressure over time, whilst maintaining high sensitivity, could be produced using lipid-coated microbubbles. The F-GPC microbubbles were found to withstand pressures up to 2.6 bar with minimal damage, whereas the DSPC microbubbles were damaged above 1.3 bar. However, N2-filled DSPC microbubbles were also found to be extremely robust to pressure, with performance similar to that of the F-GPC-based microbubbles. Following on from the MRI studies, the air- and N2-filled DSPC microbubbles were assessed for their potential activation of macrophages using in vitro models and compared to equivalent aqueous-filled liposomes. The microbubble formulations did not stimulate macrophage uptake, so studies thereafter focused on aqueous-filled liposomes. Further studies concentrated on formulating and characterising, both physico-chemically and immunologically, cationic liposomes based on the potent adjuvant dimethyldioctadecylammonium (DDA) and the immunomodulatory trehalose dibehenate (TDB) with the addition of polyethylene glycol (PEG). One proposed hypothesis for the mechanism behind the immunostimulatory effect obtained with DDA:TDB is the 'depot effect', in which the liposomal carrier helps to retain the antigen at the injection site, thereby increasing the time of vaccine exposure to the immune cells; this depot effect has been suggested to be primarily due to the liposomes' cationic nature. Results reported within this thesis demonstrate that higher levels of PEG (i.e. 25%) significantly inhibited the formation of a liposome depot at the injection site and severely limited the retention of antigen there, resulting in faster drainage of the liposomes from the site of injection. The versatility of cationic liposomes based on DDA:TDB in combination with different immunostimulatory ligands, including polyinosinic-polycytidylic acid (poly(I:C), a TLR 3 ligand) and CpG (a TLR 9 ligand), either entrapped within the vesicles or adsorbed onto the liposome surface, was investigated for immunogenic capacity as vaccine adjuvants.
Small unilamellar vesicle (SUV) DDA:TDB formulations (20-100 nm native size) with protein antigen adsorbed to the vesicle surface were the most potent, inducing both T cell (7-fold increase) and antibody (up to 2 log increase) antigen-specific responses. The addition of the TLR agonists poly(I:C) and CpG to SUV liposomes had little or no effect on their adjuvanticity. Finally, threitol ceramide (ThrCer), a new immunostimulatory agent, was incorporated into the bilayers of liposomes composed of DDA or DSPC to investigate the uptake of ThrCer by dendritic cells (DCs) and its presentation on CD1d molecules to invariant natural killer T cells. These systems were prepared both as multilamellar vesicles (MLVs) and as SUVs. IFN-γ secretion was higher for the DDA SUV liposome formulation (p<0.05), suggesting that ThrCer encapsulation in this formulation resulted in higher uptake by DCs.
Abstract:
Particulate delivery systems such as liposomes and polymeric nano- and microparticles are attracting great interest for developing new vaccines. Materials and formulation properties essential for this purpose have been extensively studied, but relatively little is known about the influence of the administration route of such delivery systems on the type and strength of the immune response elicited. Thus, the present study aimed at elucidating the influence on the immune response of immunising mice by different routes (subcutaneous, intradermal, intramuscular, and intralymphatic) with ovalbumin-loaded liposomes, N-trimethyl chitosan (TMC) nanoparticles, and poly(lactide-co-glycolide) (PLGA) microparticles, all with and without specifically selected immune-response modifiers. The results showed that the route of administration caused only minor differences in the induced antibody response of the IgG1 subclass, and any such differences were abolished upon booster immunisation with the various adjuvanted and non-adjuvanted delivery systems. In contrast, the administration route strongly affected both the kinetics and the magnitude of the IgG2a response. A single intralymphatic administration of all evaluated delivery systems induced a robust IgG2a response, whereas subcutaneous administration failed to elicit a substantial IgG2a response even after boosting, except with the adjuvanted nanoparticles. The intradermal and intramuscular routes generated intermediate IgG2a titers. The benefit of the intralymphatic administration route for eliciting a Th1-type response was confirmed in terms of IFN-gamma production by isolated and re-stimulated splenocytes from animals previously immunised with adjuvanted and non-adjuvanted liposomes as well as with adjuvanted microparticles. Altogether, the results show that the IgG2a response associated with Th1-type immunity is sensitive to the route of administration, whereas the IgG1 response associated with Th2-type immunity is relatively insensitive to the administration route of particulate delivery systems. The route of administration should therefore be considered when planning and interpreting pre-clinical research or development on vaccine delivery systems.
Abstract:
This work sets out to evaluate the potential benefits and pitfalls of using a priori information to help solve the magnetoencephalographic (MEG) inverse problem. In chapter one the forward problem in MEG is introduced, together with a scheme that demonstrates how a priori information can be incorporated into the inverse problem. Chapter two contains a literature review of techniques currently used to solve the inverse problem. Emphasis is put on the kind of a priori information used by each of these techniques and the ease with which additional constraints can be applied. The formalism of the FOCUSS algorithm is shown to allow for the incorporation of a priori information in an insightful and straightforward manner. Chapter three describes how anatomical constraints, in the form of a realistically shaped source space, can be extracted from a subject's Magnetic Resonance Image (MRI). The use of such constraints relies on accurate co-registration of the MEG and MRI co-ordinate systems. Variations of the two main co-registration approaches, based on fiducial markers or on surface matching, are described, and the accuracy and robustness of a surface matching algorithm are evaluated. Figures of merit introduced in chapter four are shown to give insight into the limitations of a typical measurement set-up and the potential value of a priori information. Chapter five shows that constrained dipole fitting and FOCUSS outperform unconstrained dipole fitting when data with low SNR are used, although errors in the constraints can reduce this advantage. Finally, chapter six demonstrates that the results of different localisation techniques give corroborative evidence about the location and activation sequence of the human visual cortical areas underlying the first 125 ms of the visual magnetic evoked response recorded with a whole-head neuromagnetometer.
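For context, the basic FOCUSS recursion (Gorodnitsky and Rao's reweighted minimum-norm iteration) can be sketched as follows; a priori information enters through the initial weighting. The dimensions and the toy lead-field are illustrative assumptions, not the formulation evaluated in the thesis.

```python
import numpy as np

def focuss(A, b, w0=None, n_iter=10, eps=1e-12):
    """A: (m, n) lead-field matrix; b: (m,) measured field.
    Iterates x_{k+1} = W_k @ pinv(A @ W_k) @ b with W_k = diag(x_k),
    which sharpens a diffuse minimum-norm estimate into focal sources.
    w0 carries a priori weights (e.g. favouring cortical locations)."""
    m, n = A.shape
    x = np.ones(n) if w0 is None else np.asarray(w0, float)
    for _ in range(n_iter):
        W = np.diag(x)
        x = W @ np.linalg.pinv(A @ W) @ b
        x[np.abs(x) < eps] = 0.0           # prune collapsed entries
    return x

# Toy usage: recover a 2-sparse source vector from 8 'sensors'.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 40))
x_true = np.zeros(40); x_true[[5, 23]] = [1.0, -0.7]
x_hat = focuss(A, A @ x_true)
```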
Abstract:
This research primarily focused on identifying the formulation parameters which control the efficacy of liposomes as delivery systems to enhance the delivery of poorly soluble drugs. Preliminary studies focused on the drug loading of ibuprofen within vesicle systems. Initially both liposomal and niosomal formulations were screened for their drug-loading capacity: liposomal systems were shown to offer significantly higher ibuprofen loading, and thereafter lipid-based systems were further investigated. Given the key role cholesterol is known to play in the stability of bilayer vesicles, the optimum cholesterol content in terms of drug loading and release of poorly soluble drugs was then investigated. From these studies a concentration of 11 total molar % cholesterol was used as a benchmark for all further formulations. Investigating the effect of liposome composition on several low-solubility drugs, drug loading was shown to be enhanced by adopting longer-chain lipids, cationic lipids, and drugs of lower molecular weight. Drug release was increased by using cationic lipids and drugs of lower molecular weight; conversely, a reduction was noted when employing longer-chain lipids, supporting the rationale that longer-chain lipids produce more stable liposomes, a theory also supported by results obtained via Langmuir studies, although it was revealed that stability also depends on geometric features associated with the lipid chain moiety. Interestingly, a reduction in drug loading appeared to be induced when symmetrical phospholipids were substituted for lipids with asymmetrical alkyl chain groups, further highlighting the importance of lipid geometry. Combining a symmetrical lipid with an asymmetrical derivative enhanced encapsulation of one hydrophobic drug while reducing that of another, suggesting the importance of drug characteristics. Phosphatidylcholine liposomes could successfully be prepared (and visualised using transmission electron microscopy) from fatty alcohols, therefore offering an alternative liposomal stabiliser to cholesterol. Results revealed that liposomes containing tetradecanol share similar vesicle size, drug encapsulation, surface charge, and toxicity profiles with liposomes formulated with cholesterol; however, the tetradecanol preparation appeared to release considerably more drug during stability studies. Langmuir monolayer studies revealed that the condensing influence of tetradecanol is weaker than that of cholesterol, suggesting that this reduced intercalation could explain why the tetradecanol formulation released more drug than the cholesterol formulations. Environmental scanning electron microscopy (ESEM) was used to analyse the morphology and stability of liposomes. These investigations indicated that the presence of drugs within the liposomal bilayer was able to enhance the stability of the bilayers against collapse under reduced hydration conditions, as was the presence of charged lipids within the formulation compared with its neutral counterpart. However, the applicability of ESEM as a new method to investigate liposome stability appears less valid than first hoped, since the results are often open to varied interpretation and in some cases do not provide a robust set of data to support conclusions.
Abstract:
The international economic and business environment continues to develop at a rapid rate. Increasing interactions between economies, particularly between Europe and Asia, have raised many important issues regarding transport infrastructure, logistics and broader supply chain management. The potential exists to further stimulate trade provided that these issues are addressed in a logical and systematic manner. However, if this potential is to be realised in practice there is a need to re-evaluate current supply chain configurations. A mismatch currently exists between technological capability and supply chain or logistical reality. This mismatch has sharpened the focus on the need for robust approaches to supply chain re-engineering. Traditional approaches to business re-engineering have been based on manufacturing systems engineering and business process management. The recognition that all companies exist as part of bigger supply chains has fundamentally changed the focus of re-engineering: inefficiencies anywhere in a supply chain leave the chain as a whole unable to reach its true competitive potential. This reality, combined with the potentially radical impact on business and supply chain architectures of the technologies associated with electronic business, requires organisations to adopt innovative approaches to supply chain analysis and re-design. This paper introduces a systems approach to supply chain re-engineering aimed at addressing the challenges which the evolving business environment brings with it. The approach, which is based on work with a variety of both conventional and electronic supply chains, comprises underpinning principles, a methodology, guidelines on good working practice, and a suite of tools and techniques. The adoption of approaches such as that outlined in this paper helps to ensure that robust supply chains are designed and implemented in practice, facilitating an integrated approach with the involvement of all key stakeholders throughout the design process.
Abstract:
Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design is presented for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed-loop control system and an ideal joint pdf, emphasising how the uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective, all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control-input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm, and encouraging results are obtained.
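A toy sketch of the KL-divergence criterion at a single step: a hand-coded Gaussian with state- and input-dependent parameters stands in for the MDN-estimated conditional pdf, and a grid search stands in for the dynamic-programming/adaptive-critic optimisation. All model parameters here are illustrative assumptions, not the paper's.

```python
import numpy as np

def kl_gauss(m1, s1, m2, s2):
    """KL( N(m1, s1^2) || N(m2, s2^2) ), closed form for 1-D Gaussians."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def greedy_control(x, ideal_m=0.0, ideal_s=0.1):
    # Assumed state/input-dependent pdf parameters (an MDN would supply these).
    mean = lambda u: 0.9 * x + 0.5 * u
    std = lambda u: 0.05 + 0.02 * abs(u)
    grid = np.linspace(-2, 2, 401)
    kls = [kl_gauss(mean(u), std(u), ideal_m, ideal_s) for u in grid]
    return grid[int(np.argmin(kls))]   # input whose predicted pdf is closest to the ideal

u_star = greedy_control(x=1.0)   # steers the predicted pdf toward N(0, 0.1^2)
```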
Abstract:
Tensor analysis plays an important role in modern image and vision computing problems. Most existing tensor analysis approaches are based on the Frobenius norm, which makes them sensitive to outliers. In this paper, we propose L1-norm-based tensor analysis (TPCA-L1), which is robust to outliers. Experimental results on face and other datasets demonstrate the advantages of the proposed approach. © 2006 IEEE.
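The robustness mechanism can be illustrated by the L1-norm fixed-point iteration for a single projection direction on matricised data; maximising the sum of |w^T x_i| rather than squared projections is what limits the influence of outliers. This is a sketch of the vector case only (the tensor method applies analogous steps mode-by-mode), not the paper's exact algorithm.

```python
import numpy as np

def pca_l1_direction(X, n_iter=100, seed=0):
    """X: (n_samples, d), assumed centred. Returns a unit vector w that
    locally maximises the L1 dispersion sum_i |x_i . w| via the
    fixed-point update w <- normalise( sum_i sign(x_i . w) * x_i )."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1]); w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w); s[s == 0] = 1.0   # subgradient choice at zero
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

# Toy usage: the direction is barely moved by one gross outlier.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5)) * np.array([3, 1, 1, 1, 1])
X[0] = 100 * rng.normal(size=5)               # outlier
w = pca_l1_direction(X - X.mean(0))
```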
Abstract:
The Semantic Binary Data Model (SBM) is a viable alternative to the now-dominant relational data model. SBM would be especially advantageous for applications dealing with complex interrelated networks of objects, provided that a robust, efficient implementation can be achieved. This dissertation presents an implementation design method for SBM, algorithms, and their analytical and empirical evaluation. Our method allows building a robust and flexible database engine with a wider applicability range and improved performance. Extensions to SBM are introduced, and an implementation of these extensions is proposed that allows the database engine to efficiently support applications with a predefined set of queries. A new Record data structure is proposed, and the trade-offs of employing Fact, Record and Bitmap data structures for storing information in a semantic database are analysed. A clustering ID distribution algorithm and an efficient algorithm for object ID encoding are proposed. Mapping to an XML data model is analysed, and a new XML-based XSDL language facilitating interoperability of the system is defined. Solutions to issues associated with making the database engine multi-platform are presented. An improvement to the atomic update algorithm suitable for certain scenarios of database recovery is proposed. Specific guidelines are devised for implementing a robust and well-performing database engine based on the extended Semantic Data Model.
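A toy illustration of the flavour of fact-based storage in a semantic binary model: the database reduces to a set of binary facts over objects, here kept as sorted triples with a range-scan query. Everything in this sketch is hypothetical; the dissertation's Fact, Record and Bitmap structures and its ID encoding are considerably more involved.

```python
import bisect

class FactStore:
    def __init__(self):
        self._facts = []   # sorted (subject_id, relation, value) triples

    def add(self, subject_id, relation, value):
        """Insert a fact, keeping the list sorted and duplicate-free."""
        fact = (subject_id, relation, value)
        i = bisect.bisect_left(self._facts, fact)
        if i == len(self._facts) or self._facts[i] != fact:
            self._facts.insert(i, fact)

    def query(self, subject_id, relation):
        """All values v with fact (subject_id, relation, v): a range scan
        starting at the smallest possible key for that prefix."""
        lo = bisect.bisect_left(self._facts, (subject_id, relation, ""))
        out = []
        for f in self._facts[lo:]:
            if f[:2] != (subject_id, relation):
                break
            out.append(f[2])
        return out

# Usage with string values.
db = FactStore()
db.add(1, "isa", "Person"); db.add(1, "name", "Ada")
assert db.query(1, "name") == ["Ada"]
```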
Abstract:
The main challenges of multimedia data retrieval lie in the effective mapping between low-level features and high-level concepts, and in individual users' subjective perceptions of multimedia content. The objective of this dissertation is to develop an integrated multimedia indexing and retrieval framework that bridges the gap between semantic concepts and low-level features. To achieve this goal, a set of core techniques has been developed, including image segmentation, content-based image retrieval, object tracking, video indexing, and video event detection. These core techniques are integrated in a systematic way to enable semantic search for images and videos, and can be tailored to solve problems in other multimedia-related domains. In image retrieval, two new methods of bridging the semantic gap are proposed: (1) for general content-based image retrieval, a stochastic mechanism is utilized to enable the long-term learning of high-level concepts from a set of training data, such as user access frequencies and access patterns of images; and (2) in addition to whole-image retrieval, a novel multiple instance learning framework is proposed for object-based image retrieval, by which a user can more effectively search for images that contain multiple objects of interest. An enhanced image segmentation algorithm is developed to extract object information from images. This segmentation algorithm is further used in video indexing and retrieval, where a robust video shot/scene segmentation method is developed based on low-level visual feature comparison, object tracking, and audio analysis. Based on shot boundaries, a novel data mining framework is further proposed to detect events in soccer videos, fully utilizing the multi-modality features and object information obtained through video shot/scene detection. Another contribution of this dissertation is the potential of the above techniques to be tailored and applied to other multimedia applications, demonstrated here by their utilization in traffic video surveillance. The enhanced image segmentation algorithm, coupled with an adaptive background learning algorithm, improves the performance of vehicle identification. A sophisticated object tracking algorithm is proposed to track individual vehicles, while the spatial and temporal relationships of vehicle objects are modeled by an abstract semantic model.
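The low-level visual-comparison building block of shot segmentation can be sketched as a frame-to-frame colour-histogram distance with a threshold. The dissertation's method additionally fuses object tracking and audio analysis, so this is only an illustrative fragment with assumed bin counts and threshold.

```python
import numpy as np

def shot_boundaries(frames, bins=16, thresh=0.4):
    """frames: iterable of (H, W, 3) uint8 arrays. Returns the indices at
    which a new shot is declared, based on a jump in the total-variation
    distance between consecutive normalised RGB histograms."""
    def hist(f):
        h, _ = np.histogramdd(f.reshape(-1, 3), bins=(bins,) * 3,
                              range=((0, 256),) * 3)
        return h.ravel() / h.sum()
    prev, cuts = None, []
    for i, f in enumerate(frames):
        h = hist(f)
        if prev is not None and 0.5 * np.abs(h - prev).sum() > thresh:
            cuts.append(i)            # large histogram jump => hard cut
        prev = h
    return cuts

# Toy usage: two flat-colour 'shots' of 10 frames each.
clip = [np.full((48, 64, 3), 30, np.uint8)] * 10 + \
       [np.full((48, 64, 3), 200, np.uint8)] * 10
assert shot_boundaries(clip) == [10]
```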
Abstract:
The aim of this research was to demonstrate a high-current, stable field emission (FE) source based on carbon nanotubes (CNTs) and an electron multiplier microchannel plate (MCP), and to design efficient field emitters. In recent years various CNT-based FE devices have been demonstrated, including field emission displays, x-ray sources and many more. However, to use CNTs as a source in high-powered microwave (HPM) devices, higher and stable currents in the range of a few milliamperes to amperes are required. To achieve such high currents we developed a novel technique of introducing an MCP between the CNT cathode and the anode. An MCP is an array of electron multipliers; it operates by avalanche multiplication of secondary electrons, which are generated when electrons strike the channel walls of the MCP. The FE current from the CNTs is enhanced by this avalanche multiplication, and the MCP also protects the CNTs from irreversible damage during vacuum arcing. A conventional MCP is not suitable for this purpose due to the low secondary emission properties of its materials. To achieve higher and stable currents we designed and fabricated a unique ceramic MCP consisting of high-SEY materials. The MCP was fabricated using optimum design parameters, including channel dimensions and material properties obtained from charged particle optics (CPO) simulation. The Child-Langmuir law, which gives the optimum current density obtainable from an electron source, was taken into account during the system design and experiments. Each MCP channel consisted of MgO-coated CNTs, chosen from various material systems for its very high SEY. With the MCP inserted between the CNT cathode and the anode, stable and higher emission current was achieved: ∼25 times higher than without the MCP. A brighter emission image was also evidenced due to the enhanced emission current. The results obtained are a significant technological advance, and this research holds promise for electron sources in a new generation of lightweight, efficient and compact microwave devices for telecommunications in satellites or space applications. As part of this work, novel emitters consisting of multistage geometry with improved FE properties were also developed.
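For reference, the Child-Langmuir law cited above gives the space-charge-limited current density of a planar diode; a standard SI-unit statement (generic, not specific to this thesis's geometry) is:

```latex
% Child-Langmuir space-charge-limited current density for a planar diode
% (SI units); V is the anode-cathode voltage, d the electrode gap,
% e and m_e the electron charge and mass.
\[
  J_{\mathrm{CL}} = \frac{4\,\varepsilon_0}{9}\sqrt{\frac{2e}{m_e}}\,\frac{V^{3/2}}{d^{2}}
\]
```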
Abstract:
There are situations in which it is very important to quickly and positively identify an individual. Examples include suspects detained in the neighborhood of a bombing or terrorist incident, individuals detained while attempting to enter or leave the country, and victims of mass disasters. Systems utilized for these purposes must be fast, portable, and easy to maintain. The goal of this project was to develop an ultra-fast, direct PCR method for forensic genotyping of oral swabs. The procedure developed eliminates the need for separate cellular digestion and extraction of the sample by performing those steps in the PCR tube itself. Special high-speed polymerases are then added which are capable of amplifying a newly developed 7-locus multiplex in under 16 minutes. Following the amplification, a postage-stamp-sized microfluidic device equipped with a specially designed entangled-polymer separation matrix yields a complete genotype in 80 seconds. The entire process is rapid and reliable, reducing the time from sample to genotype from 1-2 days to under 20 minutes. Operation requires minimal equipment and can easily be performed with a small high-speed thermal cycler, reagents, and a microfluidic device with a laptop. The system was optimized and validated using a number of test parameters and a small test population. The overall precision was better than 0.17 bp and provided a power of discrimination greater than 1 in 10^6. The small footprint and ease of use will permit this system to be an effective tool to quickly screen and identify individuals detained at ports of entry, police stations and remote locations. The system is robust, portable, and demonstrates to the forensic community a simple solution to the problem of rapid determination of genetic identity.