Abstract:
This paper investigates the suitability of existing performance measures under the assumption of a clearly defined benchmark. A range of measures is examined, including the Sortino Ratio, the Sharpe Selection Ratio (SSR), the Student's t-test and a decay rate measure. A simulation study is used to assess the power and bias of these measures based on variations in sample size and mean performance of two simulated funds. The Sortino Ratio is found to be the superior performance measure, exhibiting more power and less bias than the SSR when the distribution of excess returns is skewed.
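As a brief illustration (not the paper's simulation code), the Sortino Ratio divides the mean return in excess of a target by the downside deviation, which uses only below-target returns:

```python
import numpy as np

def sortino_ratio(returns, target=0.0):
    """Sortino Ratio: mean return in excess of a target, divided by the
    downside deviation (root mean square of below-target excess returns)."""
    excess = np.asarray(returns, dtype=float) - target
    downside = np.minimum(excess, 0.0)          # keep only shortfalls
    downside_dev = np.sqrt(np.mean(downside ** 2))
    return excess.mean() / downside_dev
```

Because only below-target returns enter the denominator, the measure does not penalize upside volatility, which is why it can behave differently from symmetric measures when excess returns are skewed.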
Abstract:
This paper presents preliminary results in establishing a strategy for predicting Zenith Tropospheric Delay (ZTD) and relative ZTD (rZTD) between Continuously Operating Reference Stations (CORS) in near real-time. It is anticipated that the predicted ZTD or rZTD can assist network-based Real-Time Kinematic (RTK) performance over long inter-station distances, ultimately enabling a cost-effective method of delivering precise positioning services to sparsely populated regional areas, such as Queensland. This research first investigates two ZTD solutions: 1) the post-processed IGS ZTD solution and 2) the near real-time ZTD solution. The near real-time solution is obtained through the GNSS processing software package (Bernese) that has been deployed for this project. The predictability of the near real-time Bernese solution is analyzed and compared to the post-processed IGS solution, which acts as the benchmark. The predictability analyses were conducted with prediction times of 15, 30, 45, and 60 minutes to determine the error with respect to timeliness. The predictability of ZTD and relative ZTD is characterized by using the previously estimated ZTD as the predicted ZTD of the current epoch. This research has shown that both the ZTD and rZTD prediction errors are random in nature; the standard deviation (STD) grows from a few millimeters to sub-centimeter level as the prediction interval ranges from 15 to 60 minutes. Additionally, the rZTD predictability shows very little dependency on the length of the tested baselines of up to 1000 kilometers. Finally, the comparison of the near real-time Bernese solution with the IGS solution has shown a slight degradation in prediction accuracy: the less accurate near real-time solution has an STD error of 1 cm within a prediction delay of 50 minutes, although some larger errors of up to 10 cm are observed.
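The prediction scheme described, using the previously estimated ZTD as the prediction for the current epoch, is a simple persistence model. A minimal sketch (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def persistence_error_std(ztd_series, lag_epochs):
    """Persistence prediction: the ZTD estimated lag_epochs earlier is used
    as the predicted ZTD for the current epoch; returns the standard
    deviation of the resulting prediction errors."""
    ztd = np.asarray(ztd_series, dtype=float)
    errors = ztd[lag_epochs:] - ztd[:-lag_epochs]
    return errors.std()
```

Evaluating this at lags corresponding to 15, 30, 45, and 60 minutes reproduces the kind of error-versus-timeliness analysis the paper reports.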
Abstract:
We evaluate the performance of several specification tests for Markov regime-switching time-series models. We consider the Lagrange multiplier (LM) and dynamic specification tests of Hamilton (1996) and Ljung–Box tests based on both the generalized residual and a standard-normal residual constructed using the Rosenblatt transformation. The size and power of the tests are studied using Monte Carlo experiments. We find that the LM tests have the best size and power properties. The Ljung–Box tests exhibit slight size distortions, though tests based on the Rosenblatt transformation perform better than the generalized residual-based tests. The tests exhibit impressive power to detect both autocorrelation and autoregressive conditional heteroscedasticity (ARCH). The tests are illustrated with a Markov-switching generalized ARCH (GARCH) model fitted to the US dollar–British pound exchange rate, with the finding that both autocorrelation and GARCH effects are needed to adequately fit the data.
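A minimal numpy sketch of the Ljung–Box Q statistic applied to a residual series (the paper applies it to generalized and Rosenblatt-transformed residuals; this is the generic statistic, not the paper's code):

```python
import numpy as np

def ljung_box_q(residuals, lags):
    """Ljung-Box portmanteau statistic:
    Q = n(n+2) * sum_{k=1..h} rho_k^2 / (n-k),
    asymptotically chi-squared with h degrees of freedom under the null
    of no autocorrelation."""
    x = np.asarray(residuals, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    q = 0.0
    for k in range(1, lags + 1):
        rho_k = np.sum(xc[k:] * xc[:-k]) / denom   # lag-k autocorrelation
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q
```

Applying the same statistic to squared residuals gives a crude check for ARCH effects; at one lag the 5% chi-squared critical value is about 3.84.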
Abstract:
Visiting a modern shopping center has become an important part of everyday life. The rapid growth of shopping centers, transportation systems, and modern vehicles has given consumers more choice in where to shop. Although consumers visit shopping centers for many reasons, travel time and the size of the shopping center are important factors influencing how frequently customers visit. A survey of the customers of three major shopping centers in Surabaya was conducted to evaluate Ellwood's model and Huff's model. New exponent values of N = 0.48 and n = 0.50 were found for Ellwood's model, while a coefficient of 0.267 and an additive value of 0.245 were found for Huff's model.
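For context, Huff's model expresses the probability that a consumer visits a given center as that center's attractiveness (size raised to one exponent, divided by travel time raised to another) normalized over all competing centers. A minimal sketch with illustrative default exponents (the fitted coefficients above come from the survey, not from this code):

```python
def huff_probabilities(sizes, travel_times, alpha=1.0, beta=2.0):
    """Huff model: P_j = (S_j^alpha / T_j^beta) / sum_k (S_k^alpha / T_k^beta),
    where S_j is the size of center j and T_j the travel time to it."""
    utilities = [s ** alpha / t ** beta for s, t in zip(sizes, travel_times)]
    total = sum(utilities)
    return [u / total for u in utilities]
```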
Abstract:
The New Zealand creative sector was responsible for almost 121,000 jobs at the time of the 2006 Census (6.3% of total employment). These are divided between:
• 35,751 creative specialists – persons employed doing creative work in creative industries
• 42,300 support workers – persons providing management and support services in creative industries
• 42,792 embedded creative workers – persons engaged in creative work in other types of enterprise
The most striking feature of this breakdown is that the largest group of creative workers is employed outside the creative industries, i.e. in other types of businesses. Even within the creative industries, fewer people are directly engaged in creative work than in providing management and support. Creative sector employees earned incomes of approximately $52,000 per annum at the time of the 2006 Census. This is relatively uniform across all three types of creative worker, and is significantly above the average for all employed persons (approximately $40,700). Creative employment and incomes grew strongly over both five-year periods between the 1996, 2001 and 2006 Censuses. However, when we compare creative and general trends, we see two distinct phases in the development of the creative sector:
• rapid structural growth over the five years to 2001 (especially led by developments in ICT), with creative employment and incomes increasing rapidly at a time when they were growing modestly across the whole economy;
• subsequent consolidation, with growth driven more by national economic expansion than structural change, and creative employment and incomes moving in parallel with strong economy-wide growth.
Other important trends revealed by the data are that:
• the strongest growth during the decade was in embedded creative workers, especially over the first five years, while the weakest growth was in creative specialists, with support workers in creative industries in the middle rank;
• by far the strongest growth in creative industries' employment was in Software & Digital Content, which trebled in size over the decade.
Comparing New Zealand with the United Kingdom and Australia, the two southern hemisphere nations have significantly lower proportions of total employment in the creative sector (both in creative industries and embedded employment). New Zealand's and Australia's creative shares in 2001 were similar (5.4% each), but over the following five years New Zealand's share expanded (to 5.7%) whereas Australia's fell slightly (to 5.2%) – in both cases through changes in creative industries' employment. The creative industries generated $10.5 billion in total gross output in the March 2006 year. Resulting from this was value added totalling $5.1b, representing 3.3% of New Zealand's total GDP. Overall, value added in the creative industries represents 49% of industry gross output, which is higher than the 45% average across the whole economy. This reflects the relatively high labour intensity and high earnings of the creative industries: industries with an above-average ratio of value added to gross output are usually labour-intensive, especially when wages and salaries are above average. This is true for Software & Digital Content and Architecture, Design & Visual Arts, with ratios of 60.4% and 55.2% respectively. There is, however, significant variation in this ratio across the creative industries, with other parts (e.g. TV & Radio, Publishing and Music & Performing Arts) generating less value added relative to output because of high capital intensity and import content.
When we take into account the creative industries' demand for goods and services from their suppliers, and consumption spending from the incomes they generate, we estimate an addition to economic activity of:
• $30.9 billion in gross output ($41.4b in total)
• $15.1b in value added ($20.3b in total)
• 158,100 people employed (234,600 in total)
The total economic impact of the creative industries is approximately four times their direct output and value added, and three times their direct employment. Their effect on output and value added is roughly in line with the average over all industries, although the effect on employment is significantly lower. This is because of the relatively high labour intensity (and high earnings) of the creative industries, which generate below-average demand from suppliers but normal levels of demand through expenditure from incomes. Drawing on these numbers and conclusions, we suggest some (slightly speculative) directions for future research. The goal is to better understand the contribution the creative sector makes to productivity growth; in particular, the distinctive contributions from creative firms and embedded creative workers. The ideas for future research can be organised into several categories:
• Understand the categories of the creative sector – who is doing the business? In other words, examine via more fine-grained research (perhaps at a firm level) what the creative contribution is from the different parts of the creative sector industries. It may be possible to categorise these in terms of more or less striking innovations.
• Investigate the relationship between the characteristics and the performance of the various creative industries/sectors.
• Look more closely at innovation at an industry level, e.g. using an index of relative growth of exports, and see if this can be related to intensity of use of creative inputs.
• Undertake case studies of the creative sector.
• Undertake case studies of the embedded contribution to growth in the firms and industries that employ creative workers, by examining several high-performing non-creative industries (in the same way as proposed for the creative sector).
• Look at the aggregates – drawing on the broad picture of the numbers of creative workers embedded within the different industries, consider the extent to which these might explain aspects of the industries' varied performance in terms of exports, growth and so on. This might be extended to examine issues such as the types of creative workers that are most effective when embedded, or to test the hypothesis that each industry has its own particular requirements for embedded creative workers that overwhelm any generic contributions from, say, design or IT.
Abstract:
Employing multilevel inverters is a proper solution to reduce the harmonic content of the output voltage and electromagnetic interference in high-power electronic applications. In this paper, a new pulse width modulation method for multilevel inverters is proposed in which the power devices' on-off switching times are taken into account. This method can be used to analyse the effect of switching time on the harmonic content of the output voltage in high-frequency applications, where the switching time is not negligible compared to the switching cycle. Fast Fourier transform calculation and analysis of the output voltage waveforms and harmonic contents with respect to switching time variation are presented for single-phase three- and five-level inverters used in high-voltage, high-frequency converters. Mathematical analysis and MATLAB simulation results validate the proposed method.
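The harmonic analysis described can be sketched with a direct FFT of one period of the sampled output waveform (a generic illustration, not the paper's MATLAB code):

```python
import numpy as np

def harmonic_magnitudes(waveform, n_harmonics):
    """Magnitudes of the first n_harmonics of a periodic waveform,
    given one full period uniformly sampled, via the real FFT."""
    n = len(waveform)
    spectrum = np.fft.rfft(waveform) / n   # complex Fourier coefficients
    return 2.0 * np.abs(spectrum[1:n_harmonics + 1])
```

Comparing the spectra of an ideal stepped waveform and one with trapezoidal (finite-switching-time) edges shows how non-negligible switching times redistribute harmonic energy.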
Abstract:
Many surveillance applications (object tracking, abandoned object detection) rely on detecting changes in a scene. Foreground segmentation is an effective way to extract the foreground from the scene, but these techniques cannot discriminate between objects that have temporarily stopped and those that are moving. We propose a series of modifications to an existing foreground segmentation system (Butler, 2003) so that the foreground is further segmented into two or more layers. This yields an active layer of objects currently in motion and a passive layer of objects that have temporarily ceased motion, which can itself be decomposed into multiple static layers. We also propose a variable threshold to cope with variable illumination; a feedback mechanism that allows an external process (e.g. a surveillance system) to alter the motion detector's state; and a lighting compensation process and a shadow detector to reduce errors caused by lighting inconsistencies. The technique is demonstrated using outdoor surveillance footage and is shown to deal effectively with real-world lighting conditions and overlapping objects.
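The active/passive split can be illustrated with a per-pixel age counter: pixels that remain foreground beyond a threshold number of frames migrate to the passive (static) layer. The names and the threshold below are illustrative, not taken from the cited system:

```python
def update_pixel_layer(is_foreground, age, passive_after=100):
    """Track how long a pixel has been continuously foreground and assign
    it to a layer: background, active (recently moving), or passive
    (temporarily stopped). Returns (new_age, layer)."""
    if not is_foreground:
        return 0, "background"
    new_age = age + 1
    return new_age, "active" if new_age < passive_after else "passive"
```

A feedback mechanism like the one proposed would let an external process reset a pixel's age (for example, once an abandoned object is resolved), moving it out of the passive layer.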
Abstract:
Automated crowd counting allows excessive crowding to be detected immediately, without the need for constant human surveillance. Current crowd counting systems are location specific: to function properly they must be trained on a large amount of data from the target location, so configuring multiple systems is a tedious and time-consuming exercise. We propose a scene-invariant crowd counting system which can easily be deployed at a location different from where it was trained. This is achieved using a global scaling factor to relate crowd sizes from one scene to another. We demonstrate that a crowd counting system trained at one viewpoint can achieve a correct classification rate of 90% at a different viewpoint.
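The idea of a global scaling factor can be illustrated with the simplest pixel-based counter: if crowd size is roughly proportional to foreground area, the average per-person pixel area at each viewpoint acts as the scene-specific scale (a deliberately naive sketch, not the proposed system):

```python
def estimate_count(foreground_pixels, person_area_pixels):
    """Naive crowd estimate: total foreground area divided by the average
    area (in pixels) one person occupies at this viewpoint. The per-scene
    person_area_pixels plays the role of a global scaling factor that
    relates crowd sizes between viewpoints."""
    return foreground_pixels / person_area_pixels
```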
Abstract:
The type and quality of youth identities ascribed to young people living in residual housing areas present opportunities for action as well as structural constraints. In this book three ethnographies, based on a youth work practitioner's observations, interviews and participation in local networks, identify young people's resistant identities. Through an analysis of social exclusion, youth policies and interviews with young people, youth workers and their managers, the book outlines a contingent network of relationships that hinder informal learning. Globalisation, individualisation, welfare/education reform and the rise of cultural social movements act upon youth identities and steer youth policies to subordinate the notion of informal group learning. Drawing on Castells' and Touraine's sociological models of identity, the book explores youth as a category of time and residual housing areas as a category of space, as they pertain to local dynamics of social exclusion.
Abstract:
This paper proposes a clustered approach to blind beamforming from ad-hoc microphone arrays. In such arrangements, microphone placement is arbitrary and the speaker may be close to one, all, or a subset of microphones at a given time. Practical issues with such a configuration mean that some microphones might be better discarded, due to poor input signal-to-noise ratio (SNR) or undesirable spatial aliasing effects from large inter-element spacings when beamforming. Large inter-microphone spacings may also lead to inaccuracies in delay estimation during blind beamforming. In such situations, using a cluster of microphones (i.e., a sub-array) closely located both to each other and to the desired speech source may provide more robust enhancement than the full array. This paper proposes a method for blind clustering of microphones based on the magnitude squared coherence function, and evaluates the method on a database recorded using various ad-hoc microphone arrangements.
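A minimal numpy sketch of a magnitude squared coherence estimate of the kind used for clustering, averaging spectra over non-overlapping rectangular-windowed segments (a Welch-style estimate; the paper's exact estimator and clustering rule may differ):

```python
import numpy as np

def magnitude_squared_coherence(x, y, seg_len=256):
    """MSC(f) = |Pxy|^2 / (Pxx * Pyy), with cross- and auto-spectra
    averaged over non-overlapping segments. Values near 1 indicate the two
    microphones receive coherently related signals at that frequency."""
    n_segs = min(len(x), len(y)) // seg_len
    pxx = pyy = 0.0
    pxy = 0.0 + 0.0j
    for i in range(n_segs):
        X = np.fft.rfft(x[i * seg_len:(i + 1) * seg_len])
        Y = np.fft.rfft(y[i * seg_len:(i + 1) * seg_len])
        pxx = pxx + np.abs(X) ** 2
        pyy = pyy + np.abs(Y) ** 2
        pxy = pxy + X * np.conj(Y)
    return np.abs(pxy) ** 2 / (pxx * pyy + 1e-12)
```

Microphones whose average pairwise MSC with a reference channel exceeds a threshold can then be grouped into one sub-array.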
Abstract:
Recent observations of particle size distributions and particle concentrations near a busy road cannot be explained by the conventional mechanisms for the evolution of combustion aerosols. Specifically, these mechanisms appear inadequate to explain the observed particle transformation and the evolution of the total number concentration. This led to the development of a new mechanism for the evolution of combustion aerosol nano-particles, based on their thermal fragmentation. A complex and comprehensive pattern of evolution of combustion aerosols, involving particle fragmentation, was then proposed and justified. In that model it was suggested that thermal fragmentation occurs in aggregates of primary particles, each of which contains a solid graphite/carbon core surrounded by volatile molecules bonded to the core by strong covalent bonds. Because of these strong covalent bonds between the core and the volatile (frill) molecules, such primary composite particles can be regarded as solid, despite a significant (possibly dominant) volatile component. Fragmentation occurs when the weak van der Waals forces between such primary particles are overcome by their thermal (Brownian) motion. In this work, the accepted concept of thermal fragmentation is advanced to determine whether fragmentation is likely in liquid composite nano-particles. It has been demonstrated that, at least at some stages of evolution, combustion aerosols contain a large number of composite liquid particles containing several components, such as water, oil, volatile compounds, and minerals. It is possible that such composite liquid particles may also experience thermal fragmentation and thus contribute to, for example, the evolution of the total number concentration as a function of distance from the source.
Therefore, the aim of this project is to examine theoretically the possibility of thermal fragmentation of composite liquid nano-particles consisting of immiscible liquid components. The specific focus is on ternary systems comprising two immiscible liquid droplets surrounded by another medium (e.g., air). The analysis shows that three different structures are possible: complete encapsulation of one liquid by the other, partial encapsulation of the two liquids in a composite particle, and the two droplets separated from each other. The probability of thermal fragmentation of two coagulated liquid droplets is examined for different volumes of the immiscible fluids in a composite liquid particle and for different surface and interfacial tensions, through determination of the Gibbs free energy difference between the coagulated and fragmented states and comparison of this energy difference with the typical thermal energy kT. The analysis reveals that fragmentation is much more likely for a partially encapsulated particle than for a completely encapsulated particle. In particular, thermal fragmentation is much more likely when the volumes of the two liquid droplets that constitute the composite particle are very different; conversely, when the two droplets are of similar volumes, the probability of thermal fragmentation is small. It is also demonstrated that the Gibbs free energy difference between the coagulated and fragmented states is not the only important factor determining the probability of thermal fragmentation of composite liquid particles. The second essential factor is the actual structure of the composite particle: the probability of thermal fragmentation also depends strongly on the distance that each of the liquid droplets must travel to reach the fragmented state.
In particular, if this distance is larger than the mean free path for the considered droplets in air, the probability of thermal fragmentation should be negligible. It follows that fragmentation of a composite particle in the completely encapsulated state is highly unlikely, because of the larger distance that the two droplets must travel in order to separate. The analysis of composite liquid particles with the interfacial parameters expected in combustion aerosols demonstrates that thermal fragmentation of these particles may occur, and this mechanism may play a role in the evolution of combustion aerosols. Conditions for thermal fragmentation to play a significant role (for aerosol particles other than those from motor vehicle exhaust) are determined and examined theoretically. Conditions for spontaneous transformation between the completely and partially encapsulated states of composite particles are also examined, demonstrating the possibility of such transformation in combustion aerosols. Indeed, it was shown that for some typical components found in aerosols, this transformation could take place on time scales of less than 20 s. The analysis showed that factors influencing surface and interfacial tension played an important role in this transformation process. It is suggested that such transformation may, for example, result in delayed evaporation of composite particles with a significant water component, leading to observable effects in the evolution of combustion aerosols (including possible local humidity maxima near a source, such as a busy road). The obtained results will be important for the further development and understanding of aerosol physics and technologies, including combustion aerosols and their evolution near a source.
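The comparison of the Gibbs free-energy difference with thermal energy kT that runs through this analysis can be sketched as a Boltzmann factor (an illustrative reduction; the thesis evaluates ΔG from droplet volumes and surface/interfacial tensions):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def fragmentation_weight(delta_g, temperature=300.0):
    """Boltzmann factor exp(-dG/kT) for the Gibbs free-energy difference
    between the fragmented and coagulated states; values near 1 mean the
    barrier is comparable to thermal energy and fragmentation is
    thermally plausible."""
    return math.exp(-delta_g / (K_B * temperature))
```

As the analysis notes, this factor alone is insufficient: the separation distance relative to the droplets' mean free path in air must also permit fragmentation.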
Abstract:
The rapidly evolving nursing work environment has seen the increased use of flexible, non-standard employment, including part-time, casual and itinerant workers. Evidence suggests that the nursing workforce has been at the forefront of the flexibility push, which has seen the appearance of a dual workforce and the marginalization of part-time and casual workers by their full-time peers and managers. The resulting fragmentation has made effective communication management difficult. Additionally, it is likely that poor organisational communication, exacerbated by the increased use of non-standard staff, is a factor underlying current discontent in the nursing industry and may impact both recruitment and retention problems as well as patient outcomes. This literature review explores the relationship between the increasing casualisation of the nursing workforce and, among other things, the communication practices of nurses within healthcare organisations.
Abstract:
To reduce the damage of phishing and spyware attacks, banks, governments, and other security-sensitive industries are deploying one-time password systems, where users have many passwords and use each password only once. If a single password is compromised, it can only be used to impersonate the user once, limiting the damage caused. However, existing practical approaches to one-time passwords have been susceptible to sophisticated phishing attacks.

We give a formal security treatment of this important practical problem. We consider the use of one-time passwords in the context of password-authenticated key exchange (PAKE), which allows for mutual authentication, session key agreement, and resistance to phishing attacks. We describe a security model for the use of one-time passwords, explicitly considering the compromise of past (and future) one-time passwords, and show a general technique for building a secure one-time-PAKE protocol from any secure PAKE protocol. Our techniques also allow for the secure use of pseudorandomly generated and time-dependent passwords.
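For background, the classic practical one-time-password construction is a Lamport-style hash chain (this is context only, not the paper's one-time-PAKE protocol, which builds on a PAKE rather than revealing passwords directly):

```python
import hashlib

def make_otp_chain(seed, n):
    """Build a hash chain seed, h(seed), ..., h^n(seed). The server stores
    only the last element; the user reveals the chain in reverse, one
    value per login."""
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify_and_advance(stored, candidate):
    """A candidate password is valid iff hashing it once yields the stored
    value; on success the server replaces stored with candidate, so each
    password works exactly once."""
    return hashlib.sha256(candidate).digest() == stored
```

A phisher who captures one revealed value learns only passwords already spent; the security model described above additionally captures compromise of past and future one-time passwords.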
Abstract:
PURPOSE: To examine the association between neighborhood disadvantage and physical activity (PA).

METHODS: We use data from the HABITAT multilevel longitudinal study of PA among mid-aged (40-65 years) men and women (n = 11,037; 68.5% response rate) living in 200 neighborhoods in Brisbane, Australia. PA was measured using three questions from the Active Australia Survey (general walking, moderate, and vigorous activity), one indicator of total activity, and two questions about walking and cycling for transport. The PA measures were operationalized using multiple categories based on time and estimated energy expenditure that were interpretable with reference to the latest PA recommendations. The association between neighborhood disadvantage and PA was examined using multilevel multinomial logistic regression and Markov chain Monte Carlo simulation. The contribution of neighborhood disadvantage to between-neighborhood variation in PA was assessed using the 80% interval odds ratio.

RESULTS: After adjustment for sex, age, living arrangement, education, occupation, and household income, reported participation in all measures and levels of PA varied significantly across Brisbane's neighborhoods, and neighborhood disadvantage accounted for some of this variation. Residents of advantaged neighborhoods reported significantly higher levels of total activity, general walking, moderate, and vigorous activity; however, they were less likely to walk for transport. There was no statistically significant association between neighborhood disadvantage and cycling for transport. In terms of total PA, residents of advantaged neighborhoods were more likely to exceed PA recommendations.

CONCLUSIONS: Neighborhoods may exert a contextual effect on residents' likelihood of participating in PA.
The greater propensity of residents in advantaged neighborhoods to do high levels of total PA may contribute to lower rates of cardiovascular disease and obesity in these areas.
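As an illustration of operationalizing PA categories by time and energy expenditure, a sketch using the common 150-minute moderate-equivalent weekly guideline (the cut-points and double-weighting of vigorous activity are assumptions for illustration, not HABITAT's exact coding):

```python
def classify_weekly_activity(walk_min, moderate_min, vigorous_min):
    """Classify weekly activity in moderate-equivalent minutes, counting
    vigorous minutes double, against assumed 150/300-minute cut-points."""
    total = walk_min + moderate_min + 2 * vigorous_min
    if total == 0:
        return "sedentary"
    if total < 150:
        return "insufficient"
    if total < 300:
        return "sufficient"
    return "high"
```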