107 results for modular flap
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean-square algorithm, including a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing an averaged tracking model of these coefficients for the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we show a new property of adaptive lattice filters: the polynomial-order-reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficient approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (because it relies on the minimum mean-square error criterion). To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation for autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and their true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
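To make the lattice recursions above concrete, here is a minimal sketch of a gradient adaptive lattice predictor for real-valued signals (the thesis treats complex signals; the step size, stage count, and test chirp below are illustrative assumptions, not values from the thesis):

```python
import numpy as np

def gal_predictor(x, num_stages, mu):
    """Minimal gradient adaptive lattice (GAL) sketch: one reflection
    coefficient per stage, stepped down the stochastic gradient of the
    summed forward and backward prediction-error powers."""
    k = np.zeros(num_stages)            # reflection coefficients
    b_prev = np.zeros(num_stages + 1)   # backward errors at time n-1
    history = np.zeros((len(x), num_stages))
    for n, sample in enumerate(x):
        f = b = sample                  # f_0(n) = b_0(n) = x(n)
        b_new = [b]
        for m in range(num_stages):
            b_del = b_prev[m]           # b_m(n-1), the delayed input
            f_next = f + k[m] * b_del   # forward error of stage m+1
            b_next = b_del + k[m] * f   # backward error of stage m+1
            k[m] -= mu * (f_next * b_del + b_next * f)  # SG update
            f, b = f_next, b_next
            b_new.append(b)
        b_prev = np.asarray(b_new)
        history[n] = k
    return history

# Track reflection coefficients for a chirp (second-order polynomial
# phase) in white Gaussian noise, the first class of inputs above.
n = np.arange(4000)
x = np.cos(2 * np.pi * (0.05 * n + 1e-5 * n**2)) + 0.1 * np.random.randn(n.size)
trajectories = gal_predictor(x, num_stages=2, mu=0.01)
```

Plotting `trajectories` shows each stage's coefficient following the chirp's time-varying statistics, which is the behaviour the tracking model above characterises.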
Abstract:
Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore represent a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. This process divides the infrastructure management process over time into self-contained modules based on particular sets of activities, with the information flows between modules defined by their interfaces and relationships. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high-level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two-stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives required to be met are both kept to the minimum number required and fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, with utility functions proposed where there is risk or a trade-off applies. Variability is considered important in the infrastructure life cycle; the approach used is based on analytical principles but incorporates randomness in variables where required.
The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided boundary conditions and requirements for linkages to other modules are met. Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, consequences of change over the life cycle, and variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
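The rationalise-then-weight step lends itself to a small illustration. The sketch below uses one common paired-comparison scoring rule; the rule and the three example objectives are assumptions for illustration, not the thesis's exact procedure:

```python
import numpy as np

def paired_comparison_weights(pref):
    """pref[i][j] = 1 if objective i is preferred over objective j,
    0.5 for indifference, 0 otherwise; weights are normalised scores."""
    scores = np.asarray(pref, dtype=float).sum(axis=1)
    return scores / scores.sum()

# Hypothetical objectives: safety, whole-of-life cost, service level.
pref = [[0.0, 1.0, 1.0],   # safety preferred over both others
        [0.0, 0.0, 0.5],   # cost indifferent to service level
        [0.0, 0.5, 0.0]]
print(paired_comparison_weights(pref))  # approx. [0.67, 0.17, 0.17]
```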
Abstract:
The material presented in this thesis may be viewed as comprising two key parts: the first part concerns batch cryptography specifically, whilst the second deals with how this form of cryptography may be applied to security-related applications, such as electronic cash, to improve the efficiency of the protocols. The objective of batch cryptography is to devise more efficient primitive cryptographic protocols. In general, these primitives make use of some property, such as homomorphism, to perform a computationally expensive operation on a collective input set. The idea is to amortise an expensive operation, such as modular exponentiation, over the input. Most of the research work in this field has concentrated on its employment as a batch verifier of digital signatures. It is shown that several new attacks may be launched against the published schemes, exposing some of their weaknesses. Another common use of batch cryptography is the simultaneous generation of digital signatures. There is significantly less previous work in this area, and the present schemes have only limited use in practical applications. Several new batch signature schemes are introduced that improve upon the existing techniques, and some practical uses are illustrated. Electronic cash is a technology that demands complex protocols in order to furnish several security properties. These typically include anonymity, traceability of a double spender, and off-line payment features. Presently, the most efficient schemes make use of coin divisibility to withdraw one large financial amount that may be progressively spent with one or more merchants. Several new cash schemes are introduced here that make use of batch cryptography to improve the withdrawal, payment, and deposit of electronic coins. The devised schemes apply both the batch signature and verification techniques introduced, demonstrating improved performance over contemporary divisibility-based structures. The solutions also provide an alternative paradigm for the construction of electronic cash systems. Whilst electronic cash is used as the vehicle for demonstrating the relevance of batch cryptography to security-related applications, the applicability of the techniques introduced extends well beyond this.
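To make the amortisation idea concrete, here is a minimal sketch of naive homomorphic batch verification for RSA signatures under a single key. The toy hash and names are assumptions for illustration, and, in the spirit of the attack results above, this naive form is itself insecure:

```python
import hashlib

def h(msg, N):
    # Toy full-domain-style hash, for illustration only.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def batch_verify_rsa(sigs, msgs, e, N):
    """Check (prod sigs)^e == prod h(msgs) mod N: one modular
    exponentiation amortised over the whole batch, instead of one
    per signature, exploiting the multiplicative homomorphism."""
    lhs, rhs = 1, 1
    for s, m in zip(sigs, msgs):
        lhs = (lhs * s) % N
        rhs = (rhs * h(m, N)) % N
    return pow(lhs, e, N) == rhs
```

Because only the two products are constrained, pairing each signature with the wrong message can still pass, which is the flavour of weakness the new attacks mentioned above exploit; practical batch verifiers add randomised small-exponent tests to prevent this.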
Abstract:
The Web is a powerful hypermedia-based information retrieval mechanism that provides user-friendly access across all major computer platforms connected over the Internet. This paper demonstrates the application of Web technology as an educational delivery tool. It also reports on the development of a prototype electronic publishing project in which Web technology was used to deliver power engineering educational resources. The resulting hyperbook will contain diverse teaching resources, such as hypermedia-based modular educational units and computer simulation programs, linked in a meaningful and structured way. The use of the Web for disseminating information of this nature has many advantages that cannot possibly be achieved otherwise. PREAMBLE: The continual increase of low-cost functionality available in desktop computing has opened up new possibilities for learning within a wider educational framework. This technology is also supported by enhanced features offered by new and ...
Abstract:
With the increase in the level of global warming, renewable-energy-based distributed generators (DGs) will increasingly play a dominant role in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and microturbines, will gain considerable momentum in the near future. A microgrid consists of clusters of loads and distributed generators that operate as a single controllable system. The interconnection of DGs to the utility grid through power electronic converters has raised concern about the safe operation and protection of the equipment. Many innovative control techniques have been used to enhance the stability of the microgrid as well as for proper load sharing. The most common method is the use of droop characteristics for decentralized load sharing. Parallel converters have been controlled to deliver the desired real power (and reactive power) to the system. Local signals are used as feedback to control the converters, since in a real system the distance between converters may make inter-communication impractical. Real and reactive power sharing can be achieved by controlling two independent quantities: the frequency and the fundamental voltage magnitude. In this thesis, an angle droop controller is proposed to share power amongst converter-interfaced DGs in a microgrid. As the angle of the output voltage can be changed instantaneously in a voltage source converter (VSC), controlling the angle to control the real power is beneficial for quick attainment of steady state. Thus, in converter-based DGs, load sharing can be performed by drooping the converter output voltage magnitude and its angle instead of the frequency. The angle control results in much smaller frequency variation than frequency droop. An enhanced frequency droop controller is proposed for better dynamic response and smooth transition between grid-connected and islanded modes of operation. A modular controller structure with a modified control loop is proposed for better load sharing between the parallel-connected converters in a distributed generation system. Moreover, a method for smooth transition between grid-connected and islanded modes is proposed. Power-quality-enhanced operation of a microgrid in the presence of unbalanced and non-linear loads is also addressed, in which the DGs act as compensators. The compensator can perform load balancing, harmonic compensation and reactive power control while supplying real power to the grid. A frequency and voltage isolation technique between the microgrid and the utility is proposed using a back-to-back converter. As the utility and microgrid are totally isolated, voltage or frequency fluctuations on the utility side do not affect the microgrid loads, and vice versa. Another advantage of this scheme is that bidirectional regulated power flow can be achieved with the back-to-back converter structure. For accurate load sharing, the droop gains have to be high, which has the potential of making the system unstable. Therefore, the choice of droop gains is often a tradeoff between power sharing and stability. To improve this situation, a supplementary droop controller is proposed. A small-signal model of the system is developed, based on which the parameters of the supplementary controller are designed. Two methods are proposed for load sharing in an autonomous microgrid in a rural network with high R/X ratio lines.
The first method achieves power sharing without any communication between the DGs. The feedback quantities and the gain matrices are transformed with a transformation matrix based on the line R/X ratio. The second method involves minimal communication among the DGs. The converter output voltage angle reference is modified based on the active and reactive power flow in the line connected at the point of common coupling (PCC). It is shown that a more economical and proper power sharing solution is possible with web-based communication of the power flow quantities. All the proposed methods are verified through PSCAD simulations. The converters are modeled with IGBT switches and anti-parallel diodes with associated snubber circuits. All rotating machines are modeled in detail, including their dynamics.
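As a concrete sketch of the angle droop idea described above (the coefficients, setpoints and per-unit values below are illustrative assumptions, not parameters from the thesis):

```python
def angle_droop(P, Q, P_ref, Q_ref, delta_ref, V_ref, m, n):
    """Angle/magnitude droop for a converter-interfaced DG: real power
    is shared by drooping the VSC output-voltage angle, reactive power
    by drooping its magnitude, with no frequency deviation needed."""
    delta = delta_ref - m * (P - P_ref)   # angle reference (rad)
    V = V_ref - n * (Q - Q_ref)           # magnitude reference (pu)
    return delta, V

# A DG loaded above its real-power setpoint pulls its angle back,
# shedding load to the other parallel converters.
print(angle_droop(P=1.2, Q=0.30, P_ref=1.0, Q_ref=0.25,
                  delta_ref=0.0, V_ref=1.0, m=0.10, n=0.05))
```

The tradeoff discussed above is visible here: larger gains m and n enforce tighter sharing but push the closed-loop system toward instability, which is what the supplementary controller addresses.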
Abstract:
Through a grant received from the Australian Library and Information Association (ALIA), members of Health Libraries Australia (HLA) are collaborating with a researcher/educator to conduct a twelve-month research project with the goal of developing an educational framework for the Australian health librarianship workforce of the future. The collaboration comprises the principal researcher and a representative group of practitioners from different sectors of the health industry who are affiliated with ALIA through various committees, advisory groups and roles. The research has two main aims: to determine the future skills requirements for the health librarian workforce in Australia; and to develop a structured, modular education framework for specialist postgraduate qualifications, together with a structure for ongoing continuing professional development. The paper highlights some of the major trends in the health sector and some of the main environmental influences that may act as drivers for change for health librarianship as a profession, and particularly for educating the future workforce. The research methodology is outlined and the main results are described; the findings are discussed with regard to their implications for the development of a structured, competency-based education framework.
Abstract:
BACKGROUND: Microvascular free tissue transfer has become increasingly popular in the reconstruction of head and neck defects, but it also has its disadvantages. Tissue engineering allows the generation of neo-tissue for implantation, but these tissues are often avascular. We propose to combine tissue-engineering techniques with flap prefabrication techniques to generate a prefabricated vascularized soft tissue flap. METHODS: Human dermal fibroblasts (HDFs) labeled with fluorescein diacetate were static-seeded onto polylactic-co-glycolic acid-collagen (PLGA-c) mesh. Controls were plain PLGA-c mesh. The femoral artery and vein of the nude rat were ligated and used as a vascular carrier for the constructs. After 4 weeks of implantation, the constructs were assessed by gross morphology, routine histology, Masson trichrome staining, and cell viability determined by green fluorescence. RESULTS: All the constructs maintained their initial shape and dimensions. Angiogenesis was evident in all the constructs, with neo-capillary formation seen within the PLGA-c mesh. HDFs proliferated and filled the interyarn spaces of the PLGA-c mesh, while unseeded PLGA-c mesh remained relatively acellular. The cell tracer study indicated that the seeded HDFs remained viable and closely associated with the remaining PLGA-c fibers. Collagen formation was more abundant in the constructs seeded with HDFs. CONCLUSIONS: PLGA-c, enveloped by a cell sheet composed of fibroblasts, can serve as a suitable scaffold for the generation of a soft tissue flap. A ligated arteriovenous pedicle can serve as a vascular carrier for the generation of a tissue-engineered vascularized flap.
Abstract:
In a digital world, users' Personally Identifiable Information (PII) is normally managed with a system called an Identity Management System (IMS). There are many types of IMSs. There are situations when two or more IMSs need to communicate with each other (such as when a service provider needs to obtain some identity information about a user from a trusted identity provider). There could be interoperability issues when the communicating parties use different types of IMS. To facilitate interoperability between different IMSs, an Identity Meta System (IMetS) is normally used. An IMetS can, at least theoretically, join various types of IMSs to make them interoperable and give users the illusion that they are interacting with just one IMS. However, due to the complexity of an IMS, joining various types of IMSs is a technically challenging task, let alone assessing how well an IMetS manages to integrate them. The first contribution of this thesis is the development of a generic IMS model called the Layered Identity Infrastructure Model (LIIM). Using this model, we develop a set of properties that an ideal IMetS should provide. This idealized form is then used as a benchmark to evaluate existing IMetSs. Different types of IMS provide varying levels of privacy protection support. Unfortunately, as observed by Jøsang et al. (2007), there is insufficient privacy protection in many of the existing IMSs. In this thesis, we study and extend a type of privacy enhancing technology known as an Anonymous Credential System (ACS). In particular, we extend the ACS built on the cryptographic primitives proposed by Camenisch, Lysyanskaya, and Shoup. We call this system the Camenisch, Lysyanskaya, Shoup - Anonymous Credential System (CLS-ACS). The goal of CLS-ACS is to let users be as anonymous as possible. Unfortunately, CLS-ACS has problems, including (1) the concentration of power in a single entity - known as the Anonymity Revocation Manager (ARM) - who, if malicious, can trivially reveal a user's PII (resulting in an illegal revocation of the user's anonymity), and (2) poor performance due to the resource-intensive cryptographic operations required. The second and third contributions of this thesis are two protocols that reduce the trust dependencies on the ARM during users' anonymity revocation. Both protocols distribute trust from the ARM to a set of n referees (n > 1), resulting in a significant reduction of the probability of an anonymity revocation being performed illegally. The first protocol, called the User Centric Anonymity Revocation Protocol (UCARP), allows a user's anonymity to be revoked in a user-centric manner (that is, the user is aware that his/her anonymity is about to be revoked). The second protocol, called the Anonymity Revocation Protocol with Re-encryption (ARPR), allows a user's anonymity to be revoked by a service provider in an accountable manner (that is, there is a clear mechanism to determine which entity can eventually learn - and possibly misuse - the identity of the user). The fourth contribution of this thesis is a protocol called the Private Information Escrow bound to Multiple Conditions Protocol (PIEMCP). This protocol is designed to address the performance issue of CLS-ACS by applying CLS-ACS in a federated single sign-on (FSSO) environment.
Our analysis shows that PIEMCP can both reduce the number of expensive modular exponentiation operations required and lower the risk of illegal revocation of users' anonymity. Finally, the protocols proposed in this thesis are complex and need to be formally evaluated to ensure that their required security properties are satisfied. In this thesis, we use Coloured Petri Nets (CPNs) and their corresponding state space analysis techniques. All of the protocols proposed in this thesis have been formally modeled and verified using these techniques. Therefore, the fifth contribution of this thesis is a demonstration of the applicability of CPNs and their corresponding analysis techniques in modeling and verifying privacy enhancing protocols. To our knowledge, this is the first time that CPNs have been comprehensively applied to model and verify privacy enhancing protocols. From our experience, we also propose several CPN modeling approaches, including the modeling of complex cryptographic primitives (such as zero-knowledge proof protocols), attack parameterization, and others. The proposed approaches can be applied to other security protocols, not just privacy enhancing protocols.
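The trust-distribution principle behind both revocation protocols can be pictured with (t, n) threshold secret sharing: no single referee can recover the escrowed value, while any t of the n referees together can. The sketch below is textbook Shamir sharing over a prime field, offered only as an illustration of that principle, not as the UCARP or ARPR construction itself:

```python
import random

P = 2**127 - 1   # a Mersenne prime, large enough for a demo

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 (needs Python >= 3.8 for the
    modular inverse via pow(..., -1, P))."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)   # 5 referees, threshold 3
assert reconstruct(shares[:3]) == 123456789  # any 3 shares suffice
```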
Abstract:
This book is based on a study of a complex project proposal by governments and corporations for a futuristic city, the Multifunction Polis (MFP). It encompasses issues and challenges symptomatic of growth initiatives in the global competitive environment. Academic rigor is applied, using corporate strategy and business principles, to undertake a detailed analysis of the project proposal and feasibility study and to subsequently construct practical guidelines on how to effectively manage the interpretation and implementation of a large-scale collaborative venture. It specifically addresses a venture which involves fragmented groups representing a diversity of interests but which aspire to related goals and, to this end, there is a need for cooperation and synergy across the planning process. This is an easy-to-read book of general interest, well suited to practitioners and academics alike. Its relevance is far-reaching, extending to venture situations defined by location, industry, community or social interest, the context, scale and scope of the project, and the role of organization management, project management, market and industry development and public policy.
Abstract:
This paper examines current teaching practice within the context of the Bachelor of Design (Fashion) programme at AUT University and compares it to the approach adopted in previous years. In recent years, staff on the Bachelor of Design (Fashion) adopted a holistic approach to the assessment of design projects, similar to the successful ideas and methods put forward by Stella Lange at the FINZ conference in 2005. Prior to adopting this holistic approach, the teaching culture at AUT University was modular and divorced the development of conceptual design ideas from the technical processes of patternmaking and garment construction, thus limiting the creative potential of integrated project work. Fashion design is not just about drawing pretty pictures; it is an entire process that encapsulates conceptual design ideas and technical processes within the context of a target market. Fashion design at AUT, sitting under the umbrella of a wider Bachelor of Design, must encourage a more serious view of fashion and fashion design as a whole. In developing the Bachelor of Design degree at AUT, the university recognised that design education would be best served by an inclusive approach. At inception, Core Studio and Core Theory papers formed the first semester of the programme across the discipline areas of Fashion, Spatial Design, Graphic Design and Digital Design. These core papers reinforce the reality that there is a common skill set that transcends all design disciplines, with the differentiation between disciplines determined by the techniques and processes they adopt. Studio-based teaching within the scope of a major design project was recognised and introduced some time ago for students in their graduating year; however, it was also expected that by year 3 students had amassed the basic skills required to work in this way. The prevailing opinion was that these basic skills were best served by a modular approach. Prior attempts to manage design project delivery leant towards deconstructing the newly formed integrated papers in order to ensure key technical skills were covered in enough depth. So, whilst design projects have played an integral part in the delivery of fashion design across the year levels, the earlier projects were timetabled by discipline and unconvincingly connected. This paper discusses how the holistic approach to assessment must be coupled with an integrated approach to delivery. The methods and processes used are demonstrated, and some recently trialled developments are shown to have achieved an integrated approach in both delivery and assessment.
Abstract:
Background and purpose: The appropriate fixation method for hemiarthroplasty of the hip as it relates to implant survivorship and patient mortality is a matter of ongoing debate. We examined the influence of fixation method on revision rate and mortality.
Methods: We analyzed approximately 25,000 hemiarthroplasty cases from the AOA National Joint Replacement Registry. Deaths at 1 day, 1 week, 1 month, and 1 year were compared for all patients and among subgroups based on implant type.
Results: Patients treated with cemented monoblock hemiarthroplasty had a 1.7-times higher day-1 mortality compared to uncemented monoblock components (p < 0.001). This finding was reversed by 1 week, 1 month, and 1 year after surgery (p < 0.001). Modular hemiarthroplasties did not reveal a difference in mortality between fixation methods at any time point.
Interpretation: This study shows lower (or similar) overall mortality with cemented hemiarthroplasty of the hip.
Abstract:
In this paper we present pyktree, an implementation of the K-tree algorithm in the Python programming language. The K-tree algorithm provides highly balanced search trees for vector quantization that scale up to very large data sets. Pyktree is highly modular and well suited to rapid prototyping of novel distance measures and centroid representations. It is easy to install and provides a Python package for library use as well as command-line tools.
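The modularity claim can be pictured with a small sketch (this is not pyktree's actual API): a quantiser that takes the distance measure and centroid update as plug-in functions, so that novel measures can be prototyped without touching the clustering logic.

```python
import numpy as np

def quantise(data, k, distance, centroid, iters=10, seed=0):
    """Flat codebook learner with plug-in distance and centroid
    functions; the plug-in design, not the tree, is the point here."""
    rng = np.random.default_rng(seed)
    centres = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest centre under `distance`
        labels = np.array([min(range(k), key=lambda j: distance(x, centres[j]))
                           for x in data])
        # recompute centres, keeping old ones for empty clusters
        centres = np.array([centroid(data[labels == j])
                            if np.any(labels == j) else centres[j]
                            for j in range(k)])
    return centres, labels

euclidean = lambda x, c: float(np.linalg.norm(x - c))
mean_centroid = lambda pts: pts.mean(axis=0)

data = np.random.default_rng(1).normal(size=(200, 8))
centres, labels = quantise(data, k=4, distance=euclidean,
                           centroid=mean_centroid)
```

A K-tree arranges such codebooks hierarchically to keep search balanced over very large collections; the flat version here only illustrates the plug-in design.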
Abstract:
We present an automated verification method for the security of Diffie–Hellman-based key exchange protocols. The method combines a Hoare-style logic with syntactic checking, and is applied to protocols in a simplified version of the Bellare–Rogaway–Pointcheval model (2000). The security of a protocol in the complete model can then be established automatically by the modular proof technique of Kudla and Paterson (2005).
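For orientation, the protocol family being verified is built around the basic Diffie–Hellman exchange. The sketch below uses toy, insecure parameters purely for illustration; the verified protocols add authentication on top of this core:

```python
import random

p, g = 23, 5                       # toy group; never use in practice
a = random.randrange(1, p - 1)     # Alice's ephemeral secret
b = random.randrange(1, p - 1)     # Bob's ephemeral secret
A, B = pow(g, a, p), pow(g, b, p)  # values exchanged in the clear
assert pow(B, a, p) == pow(A, b, p)  # both sides derive the same key material
```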
Abstract:
A bioassay technique based on surface-enhanced Raman scattering (SERS)-tagged gold nanoparticles, encapsulated with a biotin-functionalised polymer, has been demonstrated through the spectroscopic detection of a streptavidin binding event. A methodical series of steps preceded these results: synthesis of nanoparticles that were found to give a reproducible SERS signal; and design and synthesis of polymers with RAFT-functional end groups able to encapsulate the gold nanoparticles. The polymer also enabled the attachment of a biotin molecule, functionalised so that it could be attached to the hybrid nanoparticle through a modular process. Finally, a positive bioassay was demonstrated for this model construct using streptavidin/biotin binding. The synthesis of silver and gold nanoparticles was performed using trisodium citrate as the reducing agent. The shape of the silver nanoparticles was quite difficult to control. Gold nanoparticles could be prepared in more regular (spherical) shapes and therefore gave a more consistent and reproducible SERS signal. The synthesis of gold nanoparticles with a diameter of 30 nm was the most reproducible, and these were also stable over the longest periods of time. From the SERS results, the optimal size of the gold nanoparticles was found to be approximately 30 nm. Obtaining a consistent SERS signal with nanoparticles smaller than this was particularly difficult. Nanoparticles more than 50 nm in diameter were too large to remain suspended for longer than a day or two and formed a precipitate, rendering the solutions useless for our desired application. Gold nanoparticles dispersed in water were able to be stabilised by the addition of as-synthesised polymers dissolved in a water-miscible solvent. Polymer-stabilised AuNPs could not be formed from polymers synthesised by conventional free radical polymerisation, i.e. polymers that did not possess a sulphur-containing end group. This indicated that the sulphur-containing functionality present within the polymers was essential for the self-assembly process to occur. Polymer stabilisation of the gold colloid was evidenced by a range of techniques including visible spectroscopy, transmission electron microscopy, Fourier transform infrared spectroscopy, thermogravimetric analysis and Raman spectroscopy. After treatment of the hybrid nanoparticles with a series of SERS tags, focussing on 2-quinolinethiol, the SERS signals were found to have intensity comparable to that of the citrate-stabilised gold nanoparticles. This finding illustrates that the stabilisation process does not interfere with the ability of gold nanoparticles to act as substrates for the SERS effect. Incorporation of a biotin moiety into the hybrid nanoparticles was achieved through a 'click' reaction between an alkyne-functionalised polymer and an azido-functionalised biotin analogue. This functionalised biotin was prepared through a 4-step synthesis from biotin. Upon exposure of the surface-bound streptavidin to biotin-functionalised polymer hybrid gold nanoparticles, followed by washing, a SERS signal was obtained from the 2-quinolinethiol attached to the gold nanoparticles (positive assay). After exposure to functionalised polymer hybrid gold nanoparticles without biotin present, followed by washing, no SERS signal was obtained, as the nanoparticles did not bind to the streptavidin (negative assay). These results illustrate the applicability of SERS-active, functional-polymer-encapsulated gold nanoparticles for bioassay applications.