363 results for mean value

at Queensland University of Technology - ePrints Archive


Relevance:

70.00%

Publisher:

Abstract:

Analytical expressions are derived for the mean and variance of estimates of the bispectrum of a real time series, assuming a cosinusoidal model. The effects of spectral leakage, inherent in the discrete Fourier transform operation when the modes present in the signal have a nonintegral number of wavelengths in the record, are included in the analysis. A single phase-coupled triad of modes can cause the bispectrum to have a nonzero mean value over the entire region of computation owing to leakage. The variance of bispectral estimates in the presence of leakage has contributions from individual modes and from triads of phase-coupled modes. Time-domain windowing reduces the leakage. The theoretical expressions for the mean and variance of bispectral estimates are derived in terms of a function dependent on an arbitrary symmetric time-domain window applied to the record, the number of data points, and the statistics of the phase coupling among triads of modes. The theoretical results are verified by numerical simulations for simple test cases and applied to laboratory data to examine phase coupling in a hypothesis-testing framework.
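As an illustration of the direct (DFT-based) bispectral estimator this kind of analysis relies on, the sketch below averages triple products of Fourier coefficients over tapered segments; the segment length, window choice and normalisation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def bispectrum_estimate(x, nfft=256, window=np.hanning):
    """Direct bispectrum estimate averaged over non-overlapping segments.

    A symmetric time-domain taper is applied to each segment to reduce the
    spectral leakage that arises when modes have a non-integral number of
    wavelengths in the record.
    """
    w = window(nfft)
    half = nfft // 2
    pair_sum = np.add.outer(np.arange(half), np.arange(half))  # bin index of f1 + f2
    nseg = len(x) // nfft
    B = np.zeros((half, half), dtype=complex)
    for k in range(nseg):
        seg = x[k * nfft:(k + 1) * nfft]
        seg = (seg - seg.mean()) * w
        X = np.fft.fft(seg)
        B += np.outer(X[:half], X[:half]) * np.conj(X[pair_sum])
    return B / nseg
```

A phase-coupled triad then appears as a peak of |B| at the corresponding bifrequency, which can be tested against the leakage-induced background that the abstract describes.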

Relevance:

70.00%

Publisher:

Abstract:

Introduction: The motivation for developing megavoltage (and kilovoltage) cone beam CT (MV CBCT) capabilities in the radiotherapy treatment room was primarily based on the need to improve patient set-up accuracy. There has recently been an interest in using the cone beam CT data for treatment planning. Accurate treatment planning, however, requires knowledge of the electron density of the tissues receiving radiation in order to calculate dose distributions. This is obtained from CT, utilising a conversion between CT number and electron density of various tissues. The use of MV CBCT has particular advantages compared to treatment planning with kilovoltage CT in the presence of high atomic number materials, and requires the conversion of pixel values from the image sets to electron density. Therefore, a study was undertaken to characterise the pixel value to electron density (ED) relationship for the Siemens MV CBCT system, MVision, and determine the effect, if any, of varying the number of monitor units used for acquisition. If a significant difference with the number of monitor units was seen, then a pixel value to ED conversion may be required for each of the clinical settings. The calibration of the MV CT images for electron density offers the possibility of a daily recalculation of the dose distribution and the introduction of new adaptive radiotherapy treatment strategies. Methods: A Gammex Electron Density CT Phantom was imaged with the MV CBCT system. The pixel value for each of the sixteen inserts, which ranged from 0.292 to 1.707 relative electron density to the background solid water, was determined by taking the mean value from within a region of interest centred on the insert, over 5 slices within the centre of the phantom. These results were averaged and plotted against the relative electron densities of each insert, and a linear least squares fit was performed. This procedure was performed for images acquired with 5, 8, 15 and 60 monitor units. Results: The linear relationship between MV CBCT pixel value and ED was demonstrated for all monitor unit settings and over a range of electron densities. The number of monitor units utilised was found to have no significant impact on this relationship. Discussion: It was found that the number of MU utilised does not significantly alter the pixel value obtained for different ED materials. However, to ensure the most accurate and reproducible pixel value to ED calibration, one MU setting should be chosen and used routinely. To ensure accuracy for the clinical situation, this MU setting should correspond to that which is used clinically. If more than one MU setting is used clinically, then an average of the CT values acquired with different numbers of MU could be utilised without loss in accuracy. Conclusions: No significant differences have been shown between the pixel value to ED conversions for the Siemens MV CT cone beam unit with change in monitor units. Thus a single conversion curve could be utilised for MV CT treatment planning. To fully utilise MV CT imaging for radiotherapy treatment planning, further work will be undertaken to ensure all corrections have been made and dose calculations verified. These dose calculations may be either for treatment planning purposes or for reconstructing the delivered dose distribution from transit dosimetry measurements made using electronic portal imaging devices. This will potentially allow the cumulative dose distribution to be determined through the patient's multi-fraction treatment, and adaptive treatment strategies to be developed to optimise the tumour response.
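The pixel value to electron density calibration described in the Methods reduces to a linear least-squares fit of mean ROI pixel values against known relative electron densities. A minimal sketch follows; the numbers are hypothetical placeholders, not measurements from the Gammex phantom.

```python
import numpy as np

# Hypothetical mean ROI pixel values paired with each insert's relative electron
# density (RED) to solid water (the Gammex inserts span roughly 0.292 to 1.707 RED).
red = np.array([0.292, 0.480, 0.946, 1.000, 1.280, 1.707])
pixel = np.array([310.0, 505.0, 980.0, 1020.0, 1310.0, 1730.0])

slope, intercept = np.polyfit(red, pixel, 1)   # pixel = slope * RED + intercept

def pixel_to_red(p):
    """Invert the calibration to map an MV CBCT pixel value to relative electron density."""
    return (p - intercept) / slope

print(f"slope = {slope:.1f}, intercept = {intercept:.1f}, RED at pixel 1000 = {pixel_to_red(1000.0):.3f}")
```

Repeating the fit for images acquired with 5, 8, 15 and 60 monitor units, and comparing the fitted slopes and intercepts, is the comparison the study reports.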

Relevance:

60.00%

Publisher:

Abstract:

The emission factors of a bus fleet consisting of approximately three hundred diesel-powered buses were measured in a tunnel study under well-controlled conditions during a two-day monitoring campaign in Brisbane. The number concentration of particles in the size range 0.017-0.7 µm was monitored simultaneously by two Scanning Mobility Particle Sizers located at the tunnel's entrance and exit. The mean value of the number emission factors was found to be (2.44±1.41)×10^14 particles km^-1. The results are in good agreement with the emission factors determined from steady-state dynamometer testing of 12 buses from the same Brisbane City bus fleet, indicating that, when carefully designed, both dynamometer and on-road studies can provide comparable results applicable to the assessment of the effect of traffic emissions on airborne particle pollution.

Relevance:

60.00%

Publisher:

Abstract:

Estimating and predicting the degradation processes of engineering assets is crucial for reducing cost and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a decent health state. A practical difficulty in condition-based maintenance (CBM) is that, in most situations, degradation indicators extracted from CM data can only partially reveal asset health states. Underestimating this uncertainty in the relationships between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using indicators that only partially reveal health states. However, existing state space models that describe asset degradation processes largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This research proposes a Gamma-based state space model, free of the discrete time, discrete state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research also develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model and is subject to various maintenance strategies. Optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies are performed in MATLAB; case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry are also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. The results also show that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the gearbox accelerated life test better than linear and Gaussian state space models. Furthermore, both the simulation studies and the case studies show that the prediction algorithm based on the Gamma-based state space model can identify the mean value and confidence interval of asset remaining useful lives accurately. In addition, the simulation study shows that the proposed maintenance strategy optimisation method based on the POSMDP is more flexible than one that assumes a predetermined strategy structure and uses renewal theory. Moreover, the simulation study also shows that, by optimising the next maintenance activity and the waiting time until the next maintenance activity simultaneously, the proposed maintenance optimisation method can obtain more cost-effective strategies than a recently published maintenance strategy optimisation method.
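As context for the remaining-useful-life (RUL) prediction described above, the following toy Monte Carlo sketch propagates a monotone Gamma-increment degradation path to a failure threshold and reports the mean and an interval for the RUL. It assumes a fully observed process with made-up parameters; the thesis's partially observable Gamma-based state space model and its parameter estimation are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_rul(current_level, threshold, shape, scale, horizon=500, n_paths=10_000):
    """Monte Carlo RUL for a degradation process with Gamma(shape, scale) increments per step.

    Failure is the first crossing of `threshold`; paths that survive the horizon are
    right-censored at `horizon`.
    """
    levels = np.full(n_paths, float(current_level))
    rul = np.full(n_paths, float(horizon))
    alive = np.ones(n_paths, dtype=bool)
    for t in range(1, horizon + 1):
        levels[alive] += rng.gamma(shape, scale, alive.sum())
        failed_now = alive & (levels >= threshold)
        rul[failed_now] = t
        alive &= ~failed_now
        if not alive.any():
            break
    return rul.mean(), np.percentile(rul, [5, 95])

mean_rul, interval = monte_carlo_rul(current_level=3.0, threshold=10.0, shape=0.4, scale=0.5)
print(f"mean RUL: {mean_rul:.1f} steps, 90% interval: {interval}")
```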

Relevance:

60.00%

Publisher:

Abstract:

Client owners usually need an estimate or forecast of their likely building costs in advance of detailed design in order to confirm the financial feasibility of their projects. Because of their timing in the project life cycle, these early stage forecasts are characterized by the minimal amount of information available concerning the new (target) project, to the point that often only its size and type are known. One approach is to use the mean contract sum of a sample, or base group, of previous projects of a similar type and size to the project for which the estimate is needed. Bernoulli's law of large numbers implies that this base group should be as large as possible. However, increasing the size of the base group inevitably involves including projects that are less and less similar to the target project. Deciding on the optimal number of base group projects is known as the homogeneity or pooling problem. A method of solving the homogeneity problem is described, involving the use of closed-form equations to compare three different sampling arrangements of previous projects for their simulated forecasting ability by a cross-validation method, in which a series of targets is extracted, with replacement, from the groups and compared with the mean value of the projects in the base groups. The procedure is then demonstrated with 450 Hong Kong projects (with different project types: Residential, Commercial centre, Car parking, Social community centre, School, Office, Hotel, Industrial, University and Hospital) clustered into base groups according to their type and size.
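A minimal sketch of the cross-validation idea follows: each target project is forecast with the mean contract sum of its k most similar previous projects, and the error is tracked against k. The similarity measure (size only) and the leave-one-out scheme are simplifications of the paper's closed-form, with-replacement procedure.

```python
import numpy as np

def mape_vs_base_group_size(sizes, costs, k_values):
    """Leave-one-out check of the 'mean of a base group' forecast.

    For each target, the base group is the k other projects closest in size and the
    forecast is their mean contract sum. Returns mean absolute percentage error per k,
    so the pooling (homogeneity) problem is solved by picking the k with the lowest error.
    """
    sizes, costs = np.asarray(sizes, float), np.asarray(costs, float)
    result = {}
    for k in k_values:
        errors = []
        for i in range(len(costs)):
            others = np.delete(np.arange(len(costs)), i)
            base = others[np.argsort(np.abs(sizes[others] - sizes[i]))[:k]]
            errors.append(abs(costs[base].mean() - costs[i]) / costs[i])
        result[k] = float(np.mean(errors))
    return result
```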

Relevance:

60.00%

Publisher:

Abstract:

A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment due to the use of non-deterministic, readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the possibility of the units performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also to timestamp events that occur on General Purpose Input/Output (GPIO) pins, referencing a submillisecond-accurate, wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised, distributed image acquisition system over a large geographic area; a real-world application for this functionality is the creation of a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A 'best-practice' design methodology is adopted within the project to ensure the final system operates according to its requirements. Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is then verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse system comprises a single Grand Master unit, which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used for synchronising the clocks between the boards wirelessly makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results. Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement: the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of Slaves to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond. The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below the target of 1 ms.
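Since the clock synchronisation is based on elements of IEEE-1588, the standard two-way exchange used by such protocols is worth recalling; the sketch below is the textbook offset/delay calculation under a symmetric-path assumption, not BabelFuse firmware.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Classic two-step PTP exchange.

    t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    All timestamps in seconds; the network path is assumed symmetric.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way path delay
    return offset, delay
```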

Relevance:

60.00%

Publisher:

Abstract:

Selenium (Se) is an essential trace element and the clinical consequences of Se deficiency have been well documented. Se is primarily obtained through the diet, and recent studies have suggested that the level of Se in Australian foods is declining. Currently there is limited data on the Se status of the Australian population, so the aim of this study was to determine the plasma concentration of Se and glutathione peroxidase (GSH-Px), a well-established biomarker of Se status. Furthermore, the effect of gender, age and the presence of cardiovascular disease (CVD) was also examined. Blood plasma samples from healthy subjects (140 samples, mean age = 54 years; range, 20-86 years) and CVD patients (112 samples, mean age = 67 years; range, 40-87 years) were analysed for Se concentration and GSH-Px activity. The results revealed that the healthy Australian cohort had a mean plasma Se level of 100.2 ± 1.3 µg Se/L and a mean GSH-Px activity of 108.8 ± 1.7 U/L. Although the mean value for plasma Se reached the level required for optimal GSH-Px activity (i.e. 100 µg Se/L), 47% of the healthy individuals tested fell below this level. Further evaluation revealed that certain age groups were more at risk of a lowered Se status, in particular the oldest age group of over 81 years (females = 97.6 ± 6.1 µg Se/L; males = 89.4 ± 3.8 µg Se/L). The difference in Se status between males and females was not found to be significant. The presence of CVD did not appear to influence Se status, with the exception of the over-81 age group, which showed a trend for a further decline in Se status with disease (plasma Se, 93.5 ± 3.6 µg Se/L for healthy versus 88.2 ± 5.3 µg Se/L for CVD; plasma GSH-Px, 98.3 ± 3.9 U/L for healthy versus 87.0 ± 6.5 U/L for CVD). These findings emphasise the importance of an adequate dietary intake of Se for the maintenance of a healthy ageing population, especially in terms of cardiovascular health.

Relevance:

60.00%

Publisher:

Abstract:

Most pan stages in Australian factories use only five or six batch pans for high grade massecuite production and operate these in a fairly rigid repeating production schedule. It is common that some of the pans are of large dropping capacity, e.g. 150 to 240 t. Because of the relatively small number and large sizes of the pans, steam consumption varies widely through the schedule, often by ±30% about the mean value. Large fluctuations in steam consumption have implications for the steam generation/condensate management of the factory and for the evaporators when bleed vapour is used. One of the objectives of a project to develop a supervisory control system for a pan stage is to (a) reduce the average steam consumption and (b) reduce the variation in the steam consumption. The operation of each of the high grade pans within the schedule at Macknade Mill was analysed to determine the idle (or buffer) time, the time allocations for essential but unproductive operations (e.g. pan turn round, charging, slow ramping up of steam rates on pan start etc.), and the productive time, i.e. the time during boil-on of liquor and molasses feed. Empirical models were developed for each high grade pan on the stage to define the interdependence of the production rate and the evaporation rate for the different phases of each pan's cycle. The data were analysed in a spreadsheet model to try to reduce and smooth the total steam consumption. This paper reports on the methodology developed in the model and the results of the investigations for the pan stage at Macknade Mill. It was found that the operation of the schedule severely restricted the ability to reduce the average steam consumption and smooth the steam flows. While longer cycle times provide increased flexibility, the steam consumption profile changed only slightly. The ability to cut massecuite on the run among pans, or the use of a high grade seed vessel, would assist in reducing the average steam consumption and the magnitude of the variations in steam flow.
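To illustrate the kind of spreadsheet analysis described, the sketch below sums per-pan steam-demand curves repeated on a fixed schedule and reports how far the stage total swings about its mean (the abstract quotes roughly ±30%). The data layout is illustrative only, not the Macknade model.

```python
import numpy as np

def stage_steam_profile(pan_profiles, start_offsets, horizon):
    """Total steam demand of a pan stage over `horizon` minutes.

    pan_profiles  : list of 1-D arrays, one per pan, steam demand per minute over one cycle
    start_offsets : minute at which each pan starts its first cycle
    Returns the total profile, its mean, and the peak swings above/below the mean.
    """
    total = np.zeros(horizon)
    for profile, offset in zip(pan_profiles, start_offsets):
        for t in range(horizon):
            total[t] += profile[(t - offset) % len(profile)]
    mean = total.mean()
    return total, mean, (total.max() - mean) / mean, (total.min() - mean) / mean
```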

Relevance:

60.00%

Publisher:

Abstract:

An effective control of the ion current distribution over large-area (up to 10^3 cm^2) substrates with magnetic fields of a complex structure, using two additional magnetic coils installed under the substrate exposed to vacuum arc plasmas, is demonstrated. When the magnetic field generated by the additional coils is aligned with the direction of the magnetic field generated by the guiding and focusing coils of the vacuum arc source, a narrow ion density distribution with a maximum current density of 117 A m^-2 is achieved. When one of the additional coils is set to generate a magnetic field of the opposite direction, an almost uniform ion current distribution over the 10^3 cm^2 substrate, with a mean value of 45 A m^-2, is achieved. Our findings suggest that the system with the vacuum arc source and two additional magnetic coils can be used for effective, high-throughput, and highly controllable plasma processing.

Relevance:

60.00%

Publisher:

Abstract:

Background: In recent years, there have been investigations of upper-limb kinematics using various devices. The latest generation of smartphones often includes inertial sensors with subunits which can detect inertial kinematics. The use of smartphones is presented as a convenient and portable analysis method for studying kinematics in terms of angular mobility and linear acceleration. Objective: The aim of this study was to study humerus kinematics through six physical properties that correspond to angular mobility and acceleration in the three axes of space, obtained with a smartphone. Methods: This cross-sectional study recruited healthy young adult subjects. Descriptive and anthropometric independent variables related to age, gender, weight, size, and BMI were included. Six physical properties were included, corresponding to two dependent variables for each of the three spatial axes: mobility angle (degrees) and linear acceleration (m/s^2), which were obtained through the inertial measurement sensor embedded in the iPhone4 smartphone, equipped with two elements for the detection of kinematic variables: a gyroscope and an accelerometer. Apple uses an LIS302DL accelerometer in the iPhone4. The application used to obtain kinematic data was xSensor Pro, Crossbow Technology, Inc., available at the Apple AppStore. The iPhone4 has a storage capacity of 20 MB. The data-sampling rate was set to 32 Hz, and the data for each analytical task were transmitted by email for analysis and post-processing. The iPhone4 was placed on the right side of each subject's body, over the middle third of the humerus, slightly posterior, and snugly secured by a neoprene fixation belt. Tasks were explained concisely and clearly; the beginning and the end of each task were signalled by a verbal order from the researcher. Participants stood, starting from the neutral position, and performed the following analytical tasks: 180° right shoulder abduction (eight repetitions) and, after a break of about 3 minutes, 180° right shoulder flexion (eight repetitions). Both tasks were performed with the elbow extended, the wrist in neutral position, and the palmar area of the hand toward the midline at the beginning and end of the movement. Results: A total of 11 subjects (8 men, 3 women) were measured, whose mean age was 24.7 years (SD = 4.22 years) and whose average BMI was 22.64 kg/m^2 (SD = 2.29 kg/m^2). The mean angular mobility recorded by the smartphone was greater in the pitch axis for flexion (mean = 157.28°, SD = 12.35°) and abduction (mean = 151.71°, SD = 9.70°). With regard to acceleration, the highest peak mean value was observed in the Y motion axis during flexion (mean = 19.5°/s^2, SD = 0.8°/s^2) and abduction (mean = 19.4°/s^2, SD = 0.8°/s^2). Also, descriptive graphics of the analytical tasks performed were obtained. Conclusions: This study shows how the humerus contributes to upper-limb motion, and it identified movement patterns. Therefore, it supports the smartphone as a useful device to analyse upper-limb kinematics. Building on this study, it is possible to develop a simple application that facilitates the evaluation of the patient.
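A minimal post-processing sketch for one analytical task (e.g. eight repetitions of shoulder flexion) is shown below; the array names and thresholds are assumptions, and the export format of xSensor Pro is not assumed.

```python
import numpy as np
from scipy.signal import find_peaks

def summarise_repetitions(pitch_deg, acc, fs=32, min_angle=90.0):
    """Per-repetition peaks of angular mobility and acceleration.

    pitch_deg : pitch-axis angle (degrees) sampled at fs Hz
    acc       : linear acceleration along one axis, same length as pitch_deg
    """
    # one peak per repetition: at least ~2 s apart and above min_angle
    peaks, _ = find_peaks(pitch_deg, distance=2 * fs, height=min_angle)
    # peak |acceleration| within each repetition (record split midway between angle peaks)
    bounds = np.concatenate(([0], (peaks[:-1] + peaks[1:]) // 2, [len(acc)]))
    acc_peaks = [np.abs(acc[bounds[i]:bounds[i + 1]]).max() for i in range(len(peaks))]
    return {
        "mean peak angle": float(np.mean(pitch_deg[peaks])),
        "SD peak angle": float(np.std(pitch_deg[peaks])),
        "mean peak acceleration": float(np.mean(acc_peaks)),
    }
```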

Relevance:

60.00%

Publisher:

Abstract:

This paper identifies some scaling relationships between solar activity and geomagnetic activity. We examine the scaling properties of hourly data for two geomagnetic indices (ap and AE), two solar indices (solar X-rays Xl and solar flux F10.7), and two inner heliospheric indices (ion density Ni and flow speed Vs) over the period 1995–2001 using the universal multifractal approach and traditional multifractal analysis. We found that the universal multifractal model (UMM) provides a good fit to the empirical K(q) and τ(q) curves of these time series. The estimated values of the Lévy index α in the UMM indicate that multifractality exists in the time series for ap, AE, Xl, and Ni, while those for F10.7 and Vs are monofractal. The estimated values of the nonconservation parameter H of this model confirm that these time series are conservative, which indicates that the mean value of the process is constant across resolutions. Additionally, the multifractal K(q) and τ(q) curves, and the estimated values of the sparseness parameter C1 of the UMM, indicate that there are three pairs of indices displaying similar scaling properties, namely ap and Xl, AE and Ni, and F10.7 and Vs. The similarity in the scaling properties of the pairs (ap, Xl) and (AE, Ni) suggests that ap and Xl, and AE and Ni, are better correlated in terms of scaling than previously thought. However, our results cannot yet be used to advance forecasting of ap and AE from Xl and Ni, respectively, for several reasons.
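For reference, under one common convention the universal multifractal model parameterises the moment-scaling function with the same α (Lévy index), C1 (sparseness) and H (nonconservation) parameters discussed above; the relations below are standard background, not a reproduction of the paper's fitted curves.

```latex
K(q) =
\begin{cases}
  \dfrac{C_1}{\alpha - 1}\,\bigl(q^{\alpha} - q\bigr), & \alpha \neq 1,\\[6pt]
  C_1\, q \ln q, & \alpha = 1,
\end{cases}
\qquad
\tau(q) = (q - 1)\,D - K(q),
\qquad
\zeta(q) = q\,H - K(q),
```

where the conserved flux obeys \(\langle \varphi_\lambda^{\,q} \rangle \sim \lambda^{K(q)}\) at scale ratio λ, D is the embedding dimension (D = 1 for a time series), and H = 0 corresponds to a conservative process whose mean is independent of resolution (note that K(1) = 0).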

Relevance:

60.00%

Publisher:

Abstract:

Purpose: This study evaluated the impact of patient set-up errors on the probability of pulmonary and cardiac complications in the irradiation of left-sided breast cancer. Methods and Materials: Using the CMS XiO Version 4.6 (CMS Inc., St Louis, MO) radiotherapy planning system's NTCP algorithm and the Lyman-Kutcher-Burman (LKB) model, we calculated the DVH indices for the ipsilateral lung and heart and the resultant normal tissue complication probabilities (NTCP) for radiation-induced pneumonitis and excess cardiac mortality in 12 left-sided breast cancer patients. Results: Isocenter shifts in the posterior direction had the greatest effect on the lung V20, heart V25, and the mean and maximum doses to the lung and the heart. Dose volume histogram (DVH) results show that the ipsilateral lung V20 tolerance was exceeded in 58% of the patients after 1 cm posterior shifts. Similarly, the heart V25 tolerance was exceeded after 1 cm antero-posterior and left-right isocentric shifts in 70% of the patients. The baseline NTCPs for radiation-induced pneumonitis ranged from 0.73% to 3.4%, with a mean value of 1.7%. The maximum reported NTCP for radiation-induced pneumonitis was 5.8% (mean 2.6%) after a 1 cm posterior isocentric shift. The NTCP for excess cardiac mortality was 0% in 100% of the patients (n = 12) before and after set-up error simulations. Conclusions: Set-up errors in left-sided breast cancer patients have a statistically significant impact on the lung NTCPs and DVH indices. However, with a central lung distance of 3 cm or less (CLD < 3 cm) and a maximum heart distance of 1.5 cm or less (MHD < 1.5 cm), the treatment plans could tolerate set-up errors of up to 1 cm without any change in the NTCP to the heart.
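The LKB calculation referenced in the Methods reduces to a generalised-equivalent-uniform-dose lookup against a normal CDF. The sketch below shows that calculation from a differential DVH; the parameter values and the SciPy-based form are illustrative and do not reproduce the CMS XiO implementation.

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(dose_bins, frac_volumes, td50, m, n):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.

    dose_bins    : dose in each DVH bin (Gy)
    frac_volumes : fraction of organ volume in each bin (sums to 1)
    td50         : uniform whole-organ dose giving a 50% complication probability (Gy)
    m            : slope parameter
    n            : volume-effect parameter (n close to 1 for 'parallel' organs such as lung)
    """
    geud = np.sum(frac_volumes * dose_bins ** (1.0 / n)) ** n  # generalised EUD
    t = (geud - td50) / (m * td50)
    return norm.cdf(t)

# Illustrative numbers only: a coarse three-bin lung DVH
print(lkb_ntcp(np.array([5.0, 15.0, 30.0]), np.array([0.6, 0.3, 0.1]), td50=24.5, m=0.18, n=0.87))
```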

Relevance:

60.00%

Publisher:

Abstract:

The current approach for protecting the receiving water environment from urban stormwater pollution is the adoption of structural measures commonly referred to as Water Sensitive Urban Design (WSUD). The treatment efficiency of WSUD measures closely depends on the design of the specific treatment units. As stormwater quality is influenced by rainfall characteristics, the selection of appropriate rainfall events for treatment design is essential to ensure the effectiveness of WSUD systems. Based on extensive field investigations in four urban residential catchments on the Gold Coast, Australia, and on computer modelling, this paper details a technically robust approach for the selection of rainfall events for stormwater treatment design using a three-component model. The modelling results confirmed that high intensity-short duration events produced 58.0% of the TS load while generating only 29.1% of the total runoff volume. Additionally, rainfall events smaller than the 6-month average recurrence interval (ARI) generate a greater cumulative runoff volume (68.4% of the total annual runoff volume) and TS load (68.6% of the TS load exported) than rainfall events larger than the 6-month ARI. The results suggest that, for the study catchments, stormwater treatment design could be based on rainfall with a mean average intensity of 31 mm/h and a mean duration of 0.4 h. These outcomes also confirmed that selecting smaller ARI rainfall events with high intensity and short duration as the threshold for treatment system design is the most feasible approach, since these events cumulatively generate a major portion of the annual pollutant load compared to the other types of events, despite producing a relatively smaller runoff volume. This implies that designs based on small and more frequent rainfall events, rather than larger rainfall events, would be appropriate in terms of treatment performance, cost-effectiveness and possible savings in the land area needed.
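A small sketch of the event-classification step follows: events are split at the 6-month ARI threshold and each class's share of annual runoff volume and TS load is reported. The column names are hypothetical; they stand in for whatever the three-component model outputs per event.

```python
import pandas as pd

def load_fractions_by_ari(events: pd.DataFrame) -> pd.DataFrame:
    """Share of runoff volume and TS load for events below/above the 6-month ARI.

    `events` needs one row per rainfall event with columns:
    'ari_months' (average recurrence interval), 'runoff_m3', 'ts_kg'.
    """
    small = events["ari_months"] <= 6
    rows = []
    for label, mask in [("<= 6-month ARI", small), ("> 6-month ARI", ~small)]:
        rows.append({
            "class": label,
            "runoff share": events.loc[mask, "runoff_m3"].sum() / events["runoff_m3"].sum(),
            "TS share": events.loc[mask, "ts_kg"].sum() / events["ts_kg"].sum(),
        })
    return pd.DataFrame(rows)
```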

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we are interested in financial risk, and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist, and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, at times controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there some significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach for modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). They are four-parameter distributions which allow the first four moments to be modelled nearly independently: in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the data generating process (DGP) underlying the series and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. However, the GLDs suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
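As a point of comparison for the percentile approach, the sketch below computes a rolling historical-simulation VaR: the empirical lower percentile of the last `window` returns, reported as a positive loss. Replacing `np.percentile` with the quantile function of a GLD fitted to the same window would give the semi-parametric estimator the thesis develops; that fitting step is not shown here.

```python
import numpy as np

def rolling_var(returns, window=250, level=0.01):
    """Rolling historical-simulation Value-at-Risk at the given tail level."""
    returns = np.asarray(returns, dtype=float)
    var = np.full(returns.shape, np.nan)
    for t in range(window, len(returns)):
        var[t] = -np.percentile(returns[t - window:t], 100 * level)  # loss is positive
    return var
```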

Relevance:

30.00%

Publisher:

Abstract:

We review the theory of intellectual property (IP) in the creative industries (CI) from an evolutionary economics perspective, based on evidence from China. We argue that many current confusions and dysfunctions about IP can be traced to three widely overlooked aspects of the growth-of-knowledge context of IP in the CI: (1) the effect of globalization; (2) the dominance of the relative economic value of reusing creative output over monopoly incentives to create input; and (3) the evolution of business models in response to institutional change. We conclude that a substantial weakening of copyright will, in theory, produce a positive net public and private gain due to the evolutionary dynamics of all three dimensions.