Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties pose challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found to be present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory, and heavy tails are pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. Imposing appropriate conditions on this measure yields either short memory or long memory in the dynamics of the solution. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of the Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX).
The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets and then use cross-validation to verify the discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for the five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on second-order moments, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
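A minimal sketch of the MF-DFA procedure described above, assuming a NumPy-based implementation; the function name, scale choices and polynomial order are illustrative rather than the thesis's own code:

```python
import numpy as np

def mfdfa(x, scales, q_values, order=2):
    """Multifractal detrended fluctuation analysis (illustrative sketch).

    Returns the generalised Hurst exponents h(q), estimated from the slopes
    of log F_q(s) versus log s. For q = 2 this reduces to standard DFA.
    """
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())            # step 1: integrated profile
    n = len(profile)
    fq = np.zeros((len(q_values), len(scales)))

    for j, s in enumerate(scales):
        n_seg = n // s
        # segments taken from both ends so no data are discarded
        segments = [profile[i * s:(i + 1) * s] for i in range(n_seg)]
        segments += [profile[n - (i + 1) * s:n - i * s] for i in range(n_seg)]
        t = np.arange(s)
        f2 = []
        for seg in segments:
            coeffs = np.polyfit(t, seg, order)   # step 2: local polynomial detrending
            residuals = seg - np.polyval(coeffs, t)
            f2.append(np.mean(residuals ** 2))   # step 3: local mean-square fluctuation
        f2 = np.array(f2)
        for i, q in enumerate(q_values):
            if q == 0:                           # limiting case q -> 0
                fq[i, j] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                fq[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)

    # step 4: h(q) is the slope of log F_q(s) against log s
    log_s = np.log(scales)
    return np.array([np.polyfit(log_s, np.log(fq[i]), 1)[0]
                     for i in range(len(q_values))])

# Example: h(2) near 0.5 suggests short memory, h(2) > 0.5 long memory.
# h = mfdfa(np.diff(np.log(prices)), scales=[16, 32, 64, 128, 256],
#           q_values=[-3, -1, 0, 2, 3])
```

Similarly, the Part III classification step could be sketched with scikit-learn's LinearDiscriminantAnalysis standing in for the Fisher discriminant, with the feature matrix (e.g. MF-DFA exponents and AR-model parameters) assumed to be already assembled; all names below are hypothetical:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def classify_markets(X, y, folds=5):
    """Fisher/linear discriminant classifier with k-fold cross-validation.

    X: one row per stock, columns such as [h(2) from MF-DFA, AR-model parameter, ...]
    y: market label (e.g. 0 = NYSE, 1 = AMEX, 2 = NASDAQ, 3 = TSX)
    """
    lda = LinearDiscriminantAnalysis()
    scores = cross_val_score(lda, X, y, cv=folds)   # discriminant accuracy per fold
    lda.fit(X, y)
    return lda, scores.mean()
```

The cross-validated accuracy is what indicates whether the chosen two- or three-dimensional feature space actually separates the four markets.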
Abstract:
In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us to understand the impact of estimation error on the performance of in-sample optimal portfolios. Key Words: minimum-variance frontier; efficiency set constants; finite sample distribution
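As a point of reference for the finite-sample results above, the sample minimum-variance frontier is a simple function of the usual efficiency-set constants; the following sketch (illustrative names, NumPy assumed) computes it from a T x N matrix of returns:

```python
import numpy as np

def sample_frontier_variance(returns, target_means):
    """Sample minimum-variance frontier from a T x N matrix of returns.

    Uses the usual efficiency-set constants A, B, C, D; the resulting curve
    sigma^2(mu) is the (biased) sample estimate discussed in the paper.
    """
    mu = returns.mean(axis=0)                 # sample mean vector
    sigma = np.cov(returns, rowvar=False)     # sample covariance matrix
    inv = np.linalg.inv(sigma)
    ones = np.ones(len(mu))
    a = ones @ inv @ mu
    b = mu @ inv @ mu
    c = ones @ inv @ ones
    d = b * c - a ** 2
    target_means = np.asarray(target_means, dtype=float)
    return (c * target_means ** 2 - 2 * a * target_means + b) / d

# The global minimum-variance portfolio itself has weights
#   w_mv = inv(sigma) @ ones / (ones @ inv(sigma) @ ones)
```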
Abstract:
A comprehensive voltage imbalance sensitivity analysis and stochastic evaluation based on the rating and location of single-phase grid-connected rooftop photovoltaic cells (PVs) in a residential low-voltage distribution network are presented. The voltage imbalance at different locations along a feeder is investigated. In addition, a sensitivity analysis is performed for voltage imbalance in one feeder when PVs are installed in other feeders of the network. A stochastic evaluation based on the Monte Carlo method is carried out to investigate the risk index of non-standard voltage imbalance in the network in the presence of PVs. The network voltage imbalance characteristics are generalized for different PV ratings and locations and for different network conditions. Improvement methods are proposed for voltage imbalance reduction, and their efficacy is verified by comparing their risk indices using Monte Carlo simulations.
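A hedged sketch of the Monte Carlo risk-index idea: the voltage unbalance factor is computed from symmetrical components, and the risk index is taken here as the fraction of trials exceeding a limit. The 2% limit and the `solve_feeder` stand-in for a full network solution are assumptions for illustration, not details from the paper:

```python
import numpy as np

A = np.exp(2j * np.pi / 3)  # rotation operator used in symmetrical components

def vuf(va, vb, vc):
    """Voltage unbalance factor |V-| / |V+| from the three phase voltage phasors."""
    v_pos = (va + A * vb + A**2 * vc) / 3.0   # positive-sequence component
    v_neg = (va + A**2 * vb + A * vc) / 3.0   # negative-sequence component
    return abs(v_neg) / abs(v_pos)

def risk_index(solve_feeder, n_trials=10_000, limit=0.02, rng=None):
    """Fraction of Monte Carlo trials in which the unbalance exceeds `limit`.

    `solve_feeder(rng)` is a stand-in for a network solution with randomly
    placed and rated single-phase PVs; it should return the phase voltage
    phasors (va, vb, vc) at the point of interest.  `limit=0.02` corresponds
    to an assumed 2% unbalance standard, expressed as a ratio.
    """
    rng = rng or np.random.default_rng()
    exceed = sum(vuf(*solve_feeder(rng)) > limit for _ in range(n_trials))
    return exceed / n_trials
```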
Abstract:
The refractive error of a human eye varies across the pupil and therefore may be treated as a random variable. The probability distribution of this random variable provides a means for assessing the main refractive properties of the eye without the necessity of the traditional functional representation of wavefront aberrations. To demonstrate this approach, the statistical properties of refractive error maps are investigated. Closed-form expressions are derived for the probability density function (PDF) and its statistical moments for the general case of rotationally symmetric aberrations. A closed-form expression for the PDF of a general non-rotationally symmetric wavefront aberration is difficult to derive. However, for specific cases, such as astigmatism, a closed-form expression of the PDF can be obtained. Further, an interpretation of the distribution of the refractive error map, as well as its moments, is provided for a range of wavefront aberrations measured in real eyes. These are evaluated using kernel density and sample moment estimators. It is concluded that the refractive error domain allows non-functional analysis of wavefront aberrations based on simple statistics in the form of sample moments. Clinicians may find this approach to wavefront analysis easier to interpret due to the clinical familiarity and intuitive appeal of refractive error maps.
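A minimal sketch of the evaluation step mentioned above, applying kernel density and sample moment estimators to a refractive error map; SciPy's gaussian_kde is assumed, and the function and argument names are illustrative:

```python
import numpy as np
from scipy.stats import gaussian_kde

def refractive_error_statistics(refractive_error, pupil_mask=None):
    """Kernel density estimate and sample moments of a refractive error map.

    `refractive_error` is a 2-D array of dioptric error values across the
    pupil; `pupil_mask` selects the points lying inside the pupil.
    """
    values = (refractive_error[pupil_mask] if pupil_mask is not None
              else refractive_error.ravel())
    kde = gaussian_kde(values)                     # non-parametric PDF estimate
    mean = values.mean()                           # first moment
    var = values.var(ddof=1)                       # second central moment
    skew = ((values - mean) ** 3).mean() / var ** 1.5
    kurt = ((values - mean) ** 4).mean() / var ** 2
    return kde, mean, var, skew, kurt

# Usage (illustrative):
# kde, m, v, s, k = refractive_error_statistics(err_map, mask)
# pdf = kde(np.linspace(err_map.min(), err_map.max(), 200))
```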
Abstract:
In this paper, the placement and sizing of Distributed Generators (DGs) in distribution networks are determined optimally. The objective is to minimize the losses and to improve the reliability. The constraints are the bus voltages, feeder currents and the reactive power flowing back to the source side. The placement and sizes of the DGs are optimized using a combination of Discrete Particle Swarm Optimization (DPSO) and a Genetic Algorithm (GA). This combination increases the diversity of the optimization variables in DPSO so that the search does not become stuck in local minima. To evaluate the proposed algorithm, the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System (RBTS), which is located at the secondary side of a 33/11 kV distribution substation, is used. The results illustrate the efficiency of the proposed method.
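A rough, hypothetical sketch of a hybrid discrete-PSO search of the kind described, with the GA component reduced here to a simple mutation operator for diversity; `evaluate` is a placeholder for the load-flow-based objective (losses plus reliability and constraint penalties), and all parameter values are illustrative:

```python
import numpy as np

def dpso_ga(evaluate, n_buses, n_sizes, n_dg=2, n_particles=30, iters=100, seed=0):
    """Hybrid discrete PSO / GA search over DG locations and sizes (sketch).

    Each particle encodes [bus_1, size_1, ..., bus_ndg, size_ndg] as integers.
    `evaluate(candidate)` returns the objective value to be minimised.
    """
    rng = np.random.default_rng(seed)
    upper = np.tile([n_buses - 1, n_sizes - 1], n_dg)
    pos = rng.integers(0, upper + 1, size=(n_particles, 2 * n_dg))
    vel = np.zeros_like(pos, dtype=float)
    pbest = pos.copy()
    pbest_val = np.array([evaluate(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(np.rint(pos + vel), 0, upper).astype(int)   # discrete update

        # GA-style mutation to keep diversity and avoid local minima
        mutate = rng.random(pos.shape) < 0.05
        pos[mutate] = rng.integers(0, upper + 1, size=pos.shape)[mutate]

        vals = np.array([evaluate(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```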
Abstract:
Low back pain is an increasing problem in industrialised countries and, although it is a major socio-economic problem in terms of medical costs and lost productivity, relatively little is known about the processes underlying the development of the condition. This is in part due to the complex interactions between bone, muscle, nerves and other soft tissues of the spine, and the fact that direct observation and/or measurement of the human spine is not possible using non-invasive techniques. Biomechanical models have been used extensively to estimate the forces and moments experienced by the spine. These models provide a means of estimating the internal parameters which cannot be measured directly. However, application of most of the models currently available is restricted to tasks resembling those for which the model was designed, due to the simplified representation of the anatomy. The aim of this research was to develop a biomechanical model to investigate the changes in forces and moments which are induced by muscle injury. In order to accurately simulate muscle injuries, a detailed quasi-static three-dimensional model representing the anatomy of the lumbar spine was developed. This model includes the nine major force-generating muscles of the region (erector spinae, comprising the longissimus thoracis and iliocostalis lumborum; multifidus; quadratus lumborum; latissimus dorsi; transverse abdominis; internal oblique and external oblique), as well as the thoracolumbar fascia through which the transverse abdominis and parts of the internal oblique and latissimus dorsi muscles attach to the spine. The muscles included in the model have been represented using 170 muscle fascicles, each with its own force-generating characteristics and line of action. Particular attention has been paid to ensuring the muscle lines of action are anatomically realistic, particularly for muscles which have broad attachments (e.g. internal and external obliques), muscles which attach to the spine via the thoracolumbar fascia (e.g. transverse abdominis), and muscles whose paths are altered by bony constraints such as the rib cage (e.g. iliocostalis lumborum pars thoracis and parts of the longissimus thoracis pars thoracis). In this endeavour, a separate sub-model, which accounts for the shape of the torso by modelling it as a series of ellipses, has been developed to model the lines of action of the oblique muscles. Likewise, a separate sub-model of the thoracolumbar fascia has also been developed which accounts for the middle and posterior layers of the fascia, and ensures that the line of action of the posterior layer is related to the size and shape of the erector spinae muscle. Published muscle activation data are used to enable the model to predict the maximum forces and moments that may be generated by the muscles. These predictions are validated against published experimental studies reporting maximum isometric moments for a variety of exertions. The model performs well for flexion, extension and lateral bend exertions, but underpredicts the axial twist moments that may be developed. This discrepancy is most likely the result of differences between the experimental methodology and the modelled task. The application of the model is illustrated using examples of muscle injuries created by surgical procedures. The three examples used represent a posterior surgical approach to the spine, an anterior approach to the spine and unilateral total hip replacement surgery.
Although the three examples simulate different muscle injuries, all demonstrate the production of significant asymmetrical moments and/or reduced joint compression following surgical intervention. This result has implications for patient rehabilitation and the potential for further injury to the spine. The development and application of the model has highlighted a number of areas where current knowledge is deficient. These include muscle activation levels for tasks in postures other than upright standing, changes in spinal kinematics following surgical procedures such as spinal fusion or fixation, and a general lack of understanding of how the body adjusts to muscle injuries with respect to muscle activation patterns and levels, rate of recovery from temporary injuries and compensatory actions by other muscles. Thus the comprehensive and innovative anatomical model which has been developed not only provides a tool to predict the forces and moments experienced by the intervertebral joints of the spine, but also highlights areas where further clinical research is required.
Abstract:
Network-induced delay in networked control systems (NCSs) is inherently non-uniformly distributed and exhibits a multifractal nature. However, such network characteristics have not been well considered in NCS analysis and synthesis. Making use of information on the statistical distribution of the NCS network-induced delay, a delay-distribution-based stochastic model is adopted to link Quality-of-Control and network Quality-of-Service for NCSs with uncertainties. From this model, together with a tighter bounding technique for cross terms, H∞ NCS analysis is carried out with significantly improved stability results. Furthermore, a memoryless H∞ controller is designed to stabilize the NCS and to achieve the prescribed disturbance attenuation level. Numerical examples are given to demonstrate the effectiveness of the proposed method.
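The delay-distribution-based modelling step can be illustrated with a small sketch that estimates, from measured delay samples, the probability of the network-induced delay falling in each of a set of intervals; the interval boundaries and the heavy-tailed example below are assumptions for illustration only:

```python
import numpy as np

def delay_interval_probabilities(delay_samples, boundaries):
    """Empirical probabilities of the network-induced delay per interval.

    `boundaries` splits the delay range into intervals [b0, b1), [b1, b2), ...;
    these probabilities are the quantities a delay-distribution-based NCS model
    would use in place of a single worst-case delay bound.
    """
    samples = np.clip(np.asarray(delay_samples, dtype=float),
                      boundaries[0], boundaries[-1])   # keep every sample in range
    counts, _ = np.histogram(samples, bins=boundaries)
    return counts / counts.sum()

# Example with a heavy-tailed (non-uniformly distributed) delay sample:
# rng = np.random.default_rng(1)
# delays = rng.pareto(3.0, size=10_000) * 5e-3          # seconds, illustrative
# p = delay_interval_probabilities(delays, boundaries=[0.0, 0.005, 0.02, 0.1])
```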
Abstract:
Executive function (EF) emerges in infancy and continues to develop throughout childhood. Executive dysfunction is believed to contribute to learning and attention problems in children at school age. Children born very preterm are more prone to these problems than their full-term peers.
Abstract:
Recent observations of particle size distributions and particle concentrations near a busy road cannot be explained by the conventional mechanisms of combustion aerosol particle evolution. Specifically, these mechanisms appear inadequate to explain the experimental observations of particle transformation and the evolution of the total number concentration. This resulted in the development of a new mechanism for the evolution of combustion aerosol nano-particles, based on their thermal fragmentation. A complex and comprehensive pattern of evolution of combustion aerosols, involving particle fragmentation, was then proposed and justified. In that model, it was suggested that thermal fragmentation occurs in aggregates of primary particles, each of which contains a solid graphite/carbon core surrounded by volatile molecules bonded to the core by strong covalent bonds. Due to the presence of strong covalent bonds between the core and the volatile (frill) molecules, such primary composite particles can be regarded as solid, despite the presence of a significant (possibly dominant) volatile component. Fragmentation occurs when weak van der Waals forces between such primary particles are overcome by their thermal (Brownian) motion. In this work, the accepted concept of thermal fragmentation is advanced to determine whether fragmentation is likely in liquid composite nano-particles. It has been demonstrated that, at least at some stages of evolution, combustion aerosols contain a large number of composite liquid particles, presumably containing several components such as water, oil, volatile compounds and minerals. It is possible that such composite liquid particles may also experience thermal fragmentation and thus contribute to, for example, the evolution of the total number concentration as a function of distance from the source. Therefore, the aim of this project is to examine theoretically the possibility of thermal fragmentation of composite liquid nano-particles consisting of immiscible liquid components. The specific focus is on ternary systems which include two immiscible liquid droplets surrounded by another medium (e.g., air). The analysis shows that three different structures are possible: complete encapsulation of one liquid by the other, partial encapsulation of the two liquids in a composite particle, and the two droplets separated from each other. The probability of thermal fragmentation of two coagulated liquid droplets is examined for different volumes of the immiscible fluids in a composite liquid particle and for different surface and interfacial tensions, through the determination of the Gibbs free energy difference between the coagulated and fragmented states and comparison of this energy difference with the typical thermal energy kT. The analysis reveals that fragmentation is much more likely for a partially encapsulated particle than for a completely encapsulated particle. In particular, it was found that thermal fragmentation is much more likely when the volumes of the two liquid droplets that constitute the composite particle are very different. Conversely, when the two liquid droplets are of similar volumes, the probability of thermal fragmentation is small. It is also demonstrated that the Gibbs free energy difference between the coagulated and fragmented states is not the only important factor determining the probability of thermal fragmentation of composite liquid particles. The second essential factor is the actual structure of the composite particle.
It is shown that the probability of thermal fragmentation is also strongly dependent on the distance that each of the liquid droplets must travel to reach the fragmented state. In particular, if this distance is larger than the mean free path for the considered droplets in the air, the probability of thermal fragmentation should be negligible. It follows that fragmentation of a composite particle in the completely encapsulated state is highly unlikely because of the larger distance that the two droplets must travel in order to separate. The analysis of composite liquid particles with the interfacial parameters expected in combustion aerosols demonstrates that thermal fragmentation of these particles may occur, and this mechanism may play a role in the evolution of combustion aerosols. Conditions for thermal fragmentation to play a significant role (for aerosol particles other than those from motor vehicle exhaust) are determined and examined theoretically. Conditions for spontaneous transformation between the states of composite particles with complete and partial encapsulation are also examined, demonstrating the possibility of such transformation in combustion aerosols. Indeed, it was shown that, for some typical components found in aerosols, transformation could take place on time scales of less than 20 s. The analysis showed that factors that influenced surface and interfacial tension played an important role in this transformation process. It is suggested that such transformation may, for example, result in delayed evaporation of composite particles with a significant water component, leading to observable effects in the evolution of combustion aerosols (including possible local humidity maxima near a source, such as a busy road). These results will be important for the further development and understanding of aerosol physics and technologies, including combustion aerosols and their evolution near a source.
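A minimal sketch of the energy comparison described above for the complete-encapsulation geometry, treating both droplets as spheres and comparing the surface-energy cost of separation with kT; the partial-encapsulation case and the travel-distance factor discussed in the thesis are omitted, and the numbers in the usage comment are purely illustrative:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def sphere_area(volume):
    """Surface area of a sphere of the given volume."""
    radius = (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
    return 4.0 * np.pi * radius ** 2

def separation_energy_complete_encapsulation(v1, v2, gamma1, gamma2, gamma12):
    """Surface-energy cost of fragmenting a fully encapsulated composite droplet.

    Liquid 1 (volume v1, surface tension gamma1) is assumed to sit inside
    liquid 2 (volume v2, surface tension gamma2); gamma12 is the 1-2
    interfacial tension, and both droplets are treated as spheres.
    """
    e_coagulated = gamma12 * sphere_area(v1) + gamma2 * sphere_area(v1 + v2)
    e_fragmented = gamma1 * sphere_area(v1) + gamma2 * sphere_area(v2)
    return e_fragmented - e_coagulated   # Gibbs free energy difference

def thermal_likelihood(delta_g, temperature=300.0):
    """Boltzmann-type factor exp(-dG/kT) used to compare dG with kT."""
    return np.exp(-delta_g / (K_B * temperature))

# Illustrative use (values are not taken from the thesis):
# dg = separation_energy_complete_encapsulation(v1=4e-27, v2=4e-26,
#                                               gamma1=0.03, gamma2=0.07, gamma12=0.02)
# print(dg / (K_B * 300.0), thermal_likelihood(dg))
```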
Abstract:
Purpose: The component modules in the standard BEAMnrc distribution may appear to be insufficient to model micro-multileaf collimators that have tri-faceted leaf ends and complex leaf profiles. This note indicates, however, that accurate Monte Carlo simulations of radiotherapy beams defined by a complex collimation device can be completed using BEAMnrc's standard VARMLC component module.
Methods: That this simple collimator model can produce spatially and dosimetrically accurate micro-collimated fields is illustrated using comparisons with ion chamber and film measurements of the dose deposited by square and irregular fields incident on planar, homogeneous water phantoms.
Results: Monte Carlo dose calculations for on- and off-axis fields are shown to produce good agreement with experimental values, even upon close examination of the penumbrae.
Conclusions: The use of a VARMLC model of the micro-multileaf collimator, along with a commissioned model of the associated linear accelerator, is therefore recommended as an alternative to the development or use of in-house or third-party component modules for simulating stereotactic radiotherapy and radiosurgery treatments. Simulation parameters for the VARMLC model are provided which should allow other researchers to adapt and use this model to study clinical stereotactic radiotherapy treatments.
Abstract:
Local climate is a critical element in the design of energy-efficient buildings. In this paper, ten years of historical weather data for Australia's eight capital cities were profiled and analysed to characterize the variations of climatic variables in Australia. The method of descriptive statistics was employed. The pattern of cumulative distribution and/or the profile of percentage distribution is presented for each variable. It was found that although weather variables vary between locations, there is often a good, nearly linear relation between a weather variable and its cumulative percentage over the majority of the middle part of the cumulative curves. By comparing the slopes of these distribution profiles, it may be possible to determine the relative range of variation of particular weather variables for a given city. The implications of these distribution profiles of key weather variables for energy-efficient building design are also discussed.
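A small sketch of the profiling step described above: sort a weather variable, form its cumulative percentage curve, and estimate the slope of the nearly linear middle portion so that cities can be compared. The percentile bounds and variable names are illustrative choices, not those of the paper:

```python
import numpy as np

def cumulative_profile_slope(values, lower_pct=20.0, upper_pct=80.0):
    """Cumulative percentage profile of a weather variable and its middle slope.

    Returns the sorted values, their cumulative percentages, and the slope of
    the (nearly linear) middle part of the curve between `lower_pct` and
    `upper_pct`; a steeper slope implies a narrower range of variation.
    """
    values = np.sort(np.asarray(values, dtype=float))
    cum_pct = 100.0 * np.arange(1, len(values) + 1) / len(values)
    middle = (cum_pct >= lower_pct) & (cum_pct <= upper_pct)
    slope = np.polyfit(values[middle], cum_pct[middle], 1)[0]   # % per unit of the variable
    return values, cum_pct, slope

# Example: compare two cities' dry-bulb temperature records (illustrative arrays):
# _, _, slope_bne = cumulative_profile_slope(temps_brisbane)
# _, _, slope_mel = cumulative_profile_slope(temps_melbourne)
```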
Abstract:
Cardiovascular diseases refer to the class of diseases that involve the heart or blood vessels (arteries and veins). Examples of medical devices for treating cardiovascular diseases include ventricular assist devices (VADs), artificial heart valves and stents. Metallic biomaterials such as titanium and its alloys are commonly used for ventricular assist devices. However, titanium and its alloys show unacceptable thrombogenicity, which represents a major obstacle to be overcome. Polyurethane (PU) polymer has better blood compatibility and has been used widely in cardiovascular devices. Thus, one aim of the project was to coat a PU polymer onto a titanium substrate by increasing the surface roughness and surface functionality. Since the endothelium of a blood vessel has ideal non-thrombogenic properties, a further target of this research project was to grow an endothelial cell layer as a biological coating, based on a tissue engineering strategy. However, seeding endothelial cells on smooth PU coating surfaces is problematic due to the rapid loss of seeded cells, which do not adhere to the PU surface. Thus, it was another aim of the project to create a porous PU top layer on the dense PU pre-layer-coated titanium substrate. The method of preparing the porous PU layer was based on solvent casting/particulate leaching (SCPL) modified with centrifugation. Without the step of centrifugation, the distribution of the salt particles was not uniform within the polymer solution, and the degree of interconnection between the salt particles was not well controlled. With the centrifugal treatment, the pore distribution became uniform and the pore interconnectivity was improved, even at a high polymer solution concentration (20%) with the maximal salt weight added to the polymer solution. The titanium surfaces were modified by alkali and heat treatment, followed by functionalisation using hydrogen peroxide. A silane coupling agent was applied before the application of the dense PU pre-layer and the porous PU top layer. The ability of the porous top layer to grow and retain endothelial cells was also assessed through cell culture techniques. The bonding strengths of the PU coatings to the modified titanium substrates were measured and related to the surface morphologies. The project has laid a foundation for achieving the strategy of endothelialisation for the blood compatibility of medical devices. This thesis is divided into seven chapters. Chapter 2 describes the current state of the art in the field of surface modification in cardiovascular devices such as ventricular assist devices (VADs). It also analyses the pros and cons of the existing coatings, particularly in the context of this research. The surface coatings for VADs have evolved from early organic/inorganic (passive) coatings, to bioactive coatings (e.g. biomolecules), and to cell-based coatings. Based on the commercial applications and the potential of the coatings, the review is focused on the following six types of coatings: (1) titanium nitride (TiN) coatings, (2) diamond-like carbon (DLC) coatings, (3) 2-methacryloyloxyethyl phosphorylcholine (MPC) polymer coatings, (4) heparin coatings, (5) textured surfaces, and (6) endothelial cell lining. Chapter 3 reviews the polymer scaffolds and one relevant fabrication method.
In tissue engineering, the function of a polymeric material is to provide a three-dimensional architecture (scaffold) which is typically used to accommodate transplanted cells and to guide their growth and the regeneration of tissue. The success of these systems is dependent on the design of the tissue engineering scaffolds. Chapter 4 describes chemical surface treatments for titanium and titanium alloys to increase the bond strength to polymers by altering the substrate surface, for example by increasing surface roughness or changing surface chemistry. The nature of the surface treatment prior to bonding is found to be a major factor controlling the bonding strength. Increasing the surface roughness increases the surface area, which allows the adhesive to flow in and around the irregularities on the surface to form a mechanical bond. Changing the surface chemistry also results in the formation of a chemical bond. Chapter 5 shows that bond strengths between titanium and polyurethane could be significantly improved by surface-treating the titanium prior to bonding. Alkaline heat treatment and H2O2 treatment were applied to change the surface roughness and the surface chemistry of titanium. Surface treatment increases the bond strength by altering the substrate surface in a number of ways, including increasing the surface roughness and changing the surface chemistry. Chapter 6 deals with the characterization of the polyurethane scaffolds, which were fabricated using an enhanced solvent casting/particulate (salt) leaching (SCPL) method developed for preparing three-dimensional porous scaffolds for cardiac tissue engineering. The enhanced method combines a conventional SCPL method with a step of centrifugation, with the centrifugation being employed to improve the pore uniformity and interconnectivity of the scaffolds. It is shown that the enhanced SCPL method and a collagen coating resulted in a spatially uniform distribution of cells throughout the collagen-coated PU scaffolds. In Chapter 7, the enhanced SCPL method is used to form porous features on the polyurethane-coated titanium substrate. The cavities anchored the endothelial cells, allowing them to remain on the blood-contacting surfaces. It is shown that the surface porosities created by the enhanced SCPL may be useful in forming a stable endothelial layer upon the blood-contacting surface. Chapter 8 finally summarises the entire work performed on the fabrication and analysis of the polymer-Ti bonding, the enhanced SCPL method and the PU microporous surface on the metallic substrate. It then outlines the possibilities for future work and research in this area.
Abstract:
The existence of any film genre depends on the effective operation of distribution networks. Contingencies of distribution play an important role in determining the content of individual texts and the characteristics of film genres; they enable new genres to emerge at the same time as they impose limits on generic change. This article sets out an alternative way of doing genre studies, based on an analysis of distributive circuits rather than film texts or generic categories. Our objective is to provide a conceptual framework that can account for the multiple ways in which distribution networks leave their traces on film texts and audience expectations, with specific reference to international horror networks, and to offer some preliminary suggestions as to how distribution analysis can be integrated into existing genre studies methodologies.