Abstract:
Prior to embarking on further study into the subject of relevance, it is essential to consider why the concept has remained inconclusive despite extensive research and its centrality to the discipline of information science. The approach taken in this paper is to reconstruct the science of information retrieval from first principles, including the problem statement, role, scope and objective. This framework for document selection is put forward as a straw man for comparison with historical relevance models. The paper examines five influential relevance models from the past 50 years. Each is examined with respect to its treatment of relevance and compared with the first-principles model to identify contributions and deficiencies. The major conclusion drawn is that relevance is a significantly overloaded concept, which is both confusing and detrimental to the science.
Abstract:
Recently, Li and Xia proposed a transmission scheme for wireless relay networks based on the Alamouti space-time code and orthogonal frequency division multiplexing to combat the effect of timing errors at the relay nodes. This transmission scheme is remarkably simple and achieves a diversity order of two for any number of relays. Motivated by its simplicity, this scheme is extended to a more general transmission scheme that can achieve full cooperative diversity for any number of relays. The conditions on the distributed space-time block code (DSTBC) structure that admit its application in the proposed transmission scheme are identified, and it is pointed out that the recently proposed full-diversity, four-group decodable DSTBCs from precoded coordinate interleaved orthogonal designs and extended Clifford algebras satisfy these conditions. It is then shown how differential encoding at the source can be combined with the proposed transmission scheme to arrive at a new transmission scheme that can achieve full cooperative diversity in asynchronous wireless relay networks with no channel information and no timing error knowledge at the destination node. Finally, four-group decodable distributed differential space-time block codes applicable in this new transmission scheme are also provided for any number of relays that is a power of two.
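For context, the Alamouti code underlying this scheme sends two symbols s1, s2 over two time slots on two transmit branches; one standard form of its codeword matrix is

\[
X = \begin{pmatrix} s_1 & s_2 \\ -s_2^{*} & s_1^{*} \end{pmatrix},
\]

where rows index time slots and columns index transmit branches (here, relays). The columns are orthogonal for any s_1, s_2, which is what permits simple symbol-by-symbol decoding while yielding diversity order two.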
Abstract:
The differential encoding/decoding setup introduced by Kiran et al., Oggier et al. and Jing et al. for wireless relay networks that use codebooks consisting of unitary matrices is extended to allow codebooks consisting of scaled unitary matrices. For such codebooks to be used in the Jing-Hassibi protocol for cooperative diversity, the conditions that need to be satisfied by the relay matrices and the codebook are identified. A class of previously known rate-one, full-diversity, four-group encodable and four-group decodable Differential Space-Time Codes (DSTCs) is proposed for use as Distributed DSTCs (DDSTCs) in the proposed setup. To the best of our knowledge, this is the first known low-decoding-complexity DDSTC scheme for cooperative wireless networks.
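For context, a minimal sketch of the standard differential unitary encoding that this setup generalizes: the matrix transmitted in block t is built recursively from the previous block,

\[
X_t = U_{z_t} X_{t-1}, \qquad X_0 = I,
\]

where U_{z_t} is the unitary codebook matrix selected by the message z_t. Because the channel is approximately constant across consecutive blocks, it cancels in the decoding metric, so no channel state information is needed at the receiver; the extension discussed here relaxes the codebook from unitary to scaled unitary matrices.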
Abstract:
Building on the launch of an early prototype at Balance Unbalance 2013, we now offer a fully realised experience of the ‘Long Time, No See?’ site-specific walking/visualisation project for conference users to engage with on a do-it-yourself basis, either before, during or after the event. ‘Long Time, No See?’ is a new form of participatory, environmental futures project, designed for individuals and groups. It uses a smartphone app to guide processes of individual or group walking at any chosen location, encouraging walkers to think in radical new ways about how best to prepare for ‘stormy’ environmental futures ahead. As part of their personal journeys, participants contribute site-specific micro-narratives in the form of texts, images and sounds, captured via the app during the loosely ‘guided’ walk. These responses are then uploaded and synthesised into an ever-building audiovisual and generative artwork/‘map’ of future-thinking affinities, viewable both online at long-time-no-see.org (in Chrome) and, at the same time, as large-screen visualisations at QUT’s Cube Centre in Brisbane, Australia. The artwork therefore spans both participants’ mobile devices and laptops. If desired, outcomes can also be presented publicly in large-screen format at the conference. ‘Long Time, No See?’ has been developed over the past two years by a team of leading Australian artists, designers, urban/environmental planners and programmers.
Abstract:
This thesis focuses on the issue of testing sleepiness quantitatively. The issue is relevant to policymakers concerned with traffic and occupational safety; such testing provides a tool for safety legislation and surveillance. The findings of this thesis provide guidelines for a posturographic sleepiness tester. Sleepiness ensuing from staying awake merely 17 h impairs performance as much as the legally proscribed blood alcohol concentration of 0.5 does. Hence, sleepiness is a major risk factor in transportation and occupational accidents. The lack of convenient commercial sleepiness tests precludes testing impending sleepiness levels, in contrast to simple breath testing for alcohol intoxication. Posturography is a potential sleepiness test, since clinical diurnal balance testing suggests the hypothesis that time awake could be estimated posturographically. Relying on this hypothesis, this thesis examines posturographic sleepiness testing for instrumentation purposes. Empirical results from 63 subjects, whose balance we tested with a force platform during wakefulness for up to 36 h, show that sustained wakefulness impairs balance. The results show that time awake is posturographically estimable with 88% accuracy and 97% precision, which validates our hypothesis. Results also show that balance scores tested at 13:30 hours serve as a threshold to detect excessive sleepiness. Analytical results show that the test length has a marked effect on estimation accuracy: an 18-s test suffices to identify sleepiness-related balance changes but trades off some of the accuracy achieved with 30-s tests. The procedure to estimate time awake relies on equating the subject's test score to a reference table (comprising balance scores tested during sustained wakefulness, regressed against time awake). Empirical results showed that sustained wakefulness explains 60% of the diurnal balance variations, whereas the time of day explains 40%. The latter fact implies that time awake estimations must also rely on knowing the local times of both the test and reference scores.
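A minimal Python sketch of the estimation procedure described above, matching a test score against a reference regression of balance scores on time awake (the numbers, variable names and linear model form are illustrative assumptions, not the thesis's actual reference table):

```python
import numpy as np

# Hypothetical reference data: balance scores (e.g., sway path length)
# measured during sustained wakefulness, paired with hours awake.
hours_awake = np.array([1, 5, 9, 13, 17, 21, 25, 29, 33, 36], dtype=float)
balance_score = np.array([10.1, 10.4, 11.0, 11.2, 12.3, 13.0,
                          13.8, 14.9, 15.7, 16.4])

# Build the reference "table" as a simple regression of balance score
# against time awake (an illustrative model choice).
slope, intercept = np.polyfit(hours_awake, balance_score, deg=1)

def estimate_time_awake(test_score: float) -> float:
    """Invert the reference regression to estimate hours awake."""
    return (test_score - intercept) / slope

print(f"Estimated time awake: {estimate_time_awake(13.5):.1f} h")
```

As the abstract notes, a real estimator would also need the local times of both the test and reference scores, since time of day explains a substantial share of the balance variation.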
Abstract:
The quality of short-term electricity load forecasting is crucial to the operation and trading activities of market participants in an electricity market. In this paper, it is shown that a multiple-equation time-series model, estimated by repeated application of ordinary least squares, has the potential to match or even outperform more complex nonlinear and nonparametric forecasting models. The key ingredient of this simple model's success is the effective use of lagged information, allowing for interaction between seasonal patterns and intra-day dependencies. Although the model is built using data for the Queensland region of Australia, the method is completely generic and applicable to any load forecasting problem. The model's forecasting ability is assessed by means of the mean absolute percentage error (MAPE). For day-ahead forecasts, the MAPE returned by the model over a period of 11 years is an impressive 1.36%. The forecast accuracy of the model is compared with a number of benchmarks, including three popular alternatives and an industry standard reported by the Australian Energy Market Operator (AEMO). The performance of the model developed in this paper is superior to all benchmarks and outperforms the AEMO forecasts by about a third in terms of the MAPE criterion.
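A minimal Python sketch of the two ingredients named above, ordinary least squares on lagged loads and the MAPE criterion (the synthetic data, variable names and lag structure are illustrative assumptions, not the paper's specification):

```python
import numpy as np

def mape(actual: np.ndarray, forecast: np.ndarray) -> float:
    """Mean absolute percentage error, as used to assess the model."""
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Illustrative half-hourly load series (two weeks of synthetic data).
rng = np.random.default_rng(0)
t = np.arange(48 * 14)
load = 5000 + 800 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 50, t.size)

# One equation of a multiple-equation model: regress the load on a
# constant, the load one day ago and the load one week ago (lagged
# information; the real model uses richer seasonal interactions).
lag_day, lag_week = 48, 48 * 7
y = load[lag_week:]
X = np.column_stack([np.ones(y.size),
                     load[lag_week - lag_day:-lag_day],
                     load[:-lag_week]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares

fitted = X @ beta
print(f"In-sample MAPE: {mape(y, fitted):.2f}%")
```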
Abstract:
Terahertz time-domain spectroscopy has been carried out on a metallic film of polypyrrole (PPy doped with PF6). The sample was exposed to air to investigate how the conductivity of the film varies as a function of time. The absorption and dispersion of the film decrease during the initial days and then tend to saturate. The conductivity of the unaged sample follows the Drude model, and upon aging the data fit the localization-modified Drude model. The fitting parameters show that the number of charge carriers decreases during the aging process. The initial rapid decrease in conductivity suggests that some of the delocalized carriers become localized due to aging. (C) 2007 American Institute of Physics.
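For context, the simple Drude model referenced here gives the complex frequency-dependent conductivity (in one common sign convention) as

\[
\sigma(\omega) = \frac{n e^{2} \tau / m}{1 - i \omega \tau},
\]

where n is the carrier density, τ the mean scattering time and m the carrier effective mass. Localization-modified variants multiply this expression by a frequency-dependent correction factor that suppresses the low-frequency response of carriers that are no longer fully delocalized, which is consistent with the aging behaviour described above.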
Abstract:
Accurate and stable time series of geodetic parameters can be used to help in understanding the dynamic Earth and its response to global change. The Global Positioning System, GPS, has proven invaluable in modern geodynamic studies. In Fennoscandia the first GPS networks were set up in 1993. These networks form the basis of the national reference frames in the area, but they also provide long and important time series for crustal deformation studies. These time series can be used, for example, to better constrain the ice history of the last ice age and the Earth's structure, via existing glacial isostatic adjustment models. To improve the accuracy and stability of the GPS time series, the possible nuisance parameters and error sources need to be minimized. We have analysed GPS time series to study two phenomena. First, we study the refraction of the GPS signal in the neutral atmosphere, and, second, we study the surface loading of the crust by environmental factors, namely the non-tidal Baltic Sea, atmospheric load and varying continental water reservoirs. We studied the atmospheric effects on the GPS time series by comparing the standard method with slant delays derived from a regional numerical weather model. We have presented a method for correcting the atmospheric delays at the observational level. The results show that both standard atmosphere modelling and the atmospheric delays derived from a numerical weather model by ray-tracing provide a stable solution. The advantage of the latter is that the number of unknowns used in the computation decreases; thus, the computation may become faster and more robust. The computation can also be done with any processing software that allows the atmospheric correction to be turned off. The crustal deformation due to loading was computed by convolving Green's functions with surface load data, that is to say, global hydrology models, global numerical weather models and a local model for the Baltic Sea. The result was that the loading factors can be seen in the GPS coordinate time series. Reducing the computed deformation from the vertical time series of GPS coordinates reduces the scatter of the time series; however, the long-term trends are not influenced. We show that global hydrology models and the local sea surface can explain up to 30% of the GPS time series variation. On the other hand, the atmospheric loading admittance in the GPS time series is low, and different hydrological surface load models could not be validated in the present study. In order to be used for GPS corrections in the future, both atmospheric loading and hydrological models need further analysis and improvement.
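A minimal Python sketch of the loading computation described above, a discrete convolution of a surface load with a loading Green's function (the kernel, grid and load values are illustrative placeholders, not the models used in the thesis):

```python
import numpy as np

def greens_function(psi_deg: np.ndarray) -> np.ndarray:
    """Placeholder vertical-displacement kernel (mm per unit load) as a
    function of angular distance (deg); real loading Green's functions
    are tabulated from elastic Earth models."""
    return -1.0 / (psi_deg + 0.1)

# Surface load anomaly (e.g., water height in m) on a coarse grid of
# cells around a GPS station, with each cell's angular distance and an
# area weight standing in for proper spherical integration.
cell_distance_deg = np.array([0.2, 0.5, 1.0, 2.0, 5.0])
cell_load = np.array([0.10, 0.05, -0.02, 0.03, 0.01])
cell_area_weight = np.array([1.0, 2.0, 4.0, 8.0, 20.0])

# Discrete convolution: sum load * kernel over all cells.
vertical_displacement = np.sum(
    cell_load * cell_area_weight * greens_function(cell_distance_deg))
print(f"Predicted vertical displacement: {vertical_displacement:.3f} mm")
```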
Abstract:
Background: The incidence of obesity amongst patients presenting for elective Total Hip Arthroplasty (THA) has increased in the last decade, and the relationship between obesity and the need for joint replacement has been demonstrated. This study evaluates the effects of morbid obesity on outcomes following primary THA by comparing short-term outcomes in THA between a morbidly obese (BMI ≥40) and a normal-weight (BMI 18.5 to <25) cohort at our institution between January 2003 and December 2010. Methods: Thirty-nine patients in the morbidly obese group were compared with 186 in the normal-weight group. Operative time, length of stay, complications, readmission and length of readmission were compared. Results: Operative time was increased in the morbidly obese group at 122 minutes compared with 100 minutes (p=0.002). Post-operatively, the morbidly obese group (BMI ≥40) had an increased surgery-related 30-day readmission rate of 12.8% compared with 2.7% (p=0.005), as well as a 5.1-fold increase in surgery-related readmitted bed days: 0.32 bed days per patient for the normal-weight group compared with 1.64 per patient for the morbidly obese (p=0.026). Conclusion: Morbidly obese patients present a technical challenge, and this challenge and the resultant complications are likely underestimated. More work needs to be performed to enable suitable allocation of resources.
Abstract:
Arguments arising from quantum mechanics and gravitation theory, as well as from string theory, indicate that the description of space-time as a continuous manifold is not adequate at very short distances. An important candidate for the description of space-time at such scales is noncommutative space-time, where the coordinates are promoted to noncommuting operators. Thus, the study of quantum field theory in noncommutative space-time provides an interesting interface where ordinary field-theoretic tools can be used to study the properties of quantum space-time. The three original publications in this thesis encompass various aspects of the still-developing area of noncommutative quantum field theory, ranging from fundamental concepts to model building. One of the key features of noncommutative space-time is the apparent loss of Lorentz invariance, which has been addressed in different ways in the literature. One recently developed approach is to eliminate the Lorentz-violating effects by integrating over the parameter of noncommutativity. Fundamental properties of such theories are investigated in this thesis. Another issue addressed is model building, which is difficult in the noncommutative setting due to severe restrictions on the possible gauge symmetries imposed by the noncommutativity of the space-time. Possible ways to relieve these restrictions are investigated and applied, and a noncommutative version of the Minimal Supersymmetric Standard Model is presented. While putting the results obtained in the three original publications into their proper context, the introductory part of this thesis aims to provide an overview of the present situation in the field.
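For context, in the canonical (Moyal-type) case the promotion of coordinates to noncommuting operators amounts to

\[
[\hat{x}^{\mu}, \hat{x}^{\nu}] = i \theta^{\mu\nu},
\]

where θ^{μν} is a constant antisymmetric matrix parametrizing the noncommutativity; field theories on this space are commonly formulated by replacing ordinary products with the Moyal star product. A constant θ^{μν} singles out preferred directions, which is the source of the apparent Lorentz violation, and integrating over θ is one way to remove those preferred directions.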
Abstract:
We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples: our discussions are couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞. In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence, our results can be viewed as providing bounds for the performance with practical distributed schedulers.
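A minimal Python sketch of the tree algorithm's aggregation step, computing the max up a spanning tree rooted at the operator station (the topology and values are illustrative; scheduling and radio details are abstracted away):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    value: float                      # sensed measurement
    children: list = field(default_factory=list)

def tree_max(node: Node) -> float:
    """Each node fuses its own value with its children's partial maxima,
    so only one message per node travels up the tree."""
    return max([node.value] + [tree_max(c) for c in node.children])

# Illustrative 7-node spanning tree rooted at the operator station.
leaves = [Node(v) for v in (3.2, 7.9, 1.4, 5.5)]
internal = [Node(2.0, children=leaves[:2]), Node(6.1, children=leaves[2:])]
root = Node(0.5, children=internal)
print(tree_max(root))   # -> 7.9
```

One message per node up the tree is what keeps the energy expenditure linear in n, while the computation time is governed by the tree depth and the contention-free schedule.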
Abstract:
Child sexual abuse is widespread and difficult to detect. To enhance case identification, many societies have enacted mandatory reporting laws requiring designated professionals, most often police, teachers, doctors and nurses, to report suspected cases to government child welfare agencies. Little research has explored the effects of introducing a reporting law on the number of reports made and the outcomes of those reports. This study explored the impact of a new legislative mandatory reporting duty for child sexual abuse in the State of Western Australia over seven years. We analysed data about the numbers and outcomes of reports by mandated reporters for periods before the law (2006-08) and after the law (2009-12). Results indicate that the number of reports by mandated reporters of suspected child sexual abuse increased by a factor of 3.7, from an annual mean of 662 in the three-year pre-law period to 2448 in the four-year post-law period. The increase in the first two post-law years was contextually and statistically significant. Report numbers stabilised in 2010-12, at one report per 210 children. The number of investigated reports increased threefold, from an annual mean of 451 in the pre-law period to 1363 in the post-law period. A significant decline in the proportion of mandated reports that were investigated in the first two post-law years suggested that the new level of reporting and investigative need exceeded what was anticipated. However, a subsequent significant increase restored the pre-law proportion, suggesting systemic adaptive capacity. The number of substantiated investigations doubled, from an annual mean of 160 in the pre-law period to 327 in the post-law period, indicating that twice as many sexually abused children were being identified.
Abstract:
Instability in conventional haptic rendering destroys the perception of rigid objects in virtual environments. Inherent limitations in the conventional haptic loop restrict the maximum stiffness that can be rendered. In this paper we present a method to render virtual walls that are much stiffer than those achieved by conventional techniques. By removing the conventional digital haptic loop and replacing it with a part-continuous and part-discrete-time hybrid haptic loop, we were able to render stiffer walls. The control loop is implemented as a combinational logic circuit on a field-programmable gate array. We compared the performance of the conventional haptic loop and our hybrid haptic loop on the same haptic device, and present a mathematical analysis to show the limit of stability of our device. Our hybrid method removes the compute-intensive haptic loop from the CPU; this can free a significant amount of resources that can be used for other purposes such as graphical rendering and physics modeling. It is our hope that, in the future, similar designs will lead to a haptics processing unit (HPU).
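For contrast with the hybrid approach, a minimal Python sketch of the conventional discrete haptic loop whose sampled nature limits the renderable stiffness (the 1 kHz rate, spring law and device callbacks are standard textbook assumptions, not the authors' design):

```python
import time

STIFFNESS = 2000.0      # N/m; bounded by the discrete loop's stability
WALL_POS = 0.0          # wall surface along one axis (m)
LOOP_RATE_HZ = 1000     # conventional haptic loops run near 1 kHz

def wall_force(position: float) -> float:
    """Penalty-based virtual wall: push back proportionally to
    penetration depth, zero force outside the wall."""
    penetration = WALL_POS - position
    return STIFFNESS * penetration if penetration > 0 else 0.0

def haptic_loop(read_position, apply_force):
    """read_position/apply_force are hypothetical device callbacks."""
    period = 1.0 / LOOP_RATE_HZ
    while True:
        apply_force(wall_force(read_position()))  # force held constant
        time.sleep(period)                        # between samples
```

Because the force is held constant between samples, energy can be injected at each update; this is the instability mechanism that caps STIFFNESS in a purely digital loop and that the paper's part-continuous hybrid loop is designed to avoid.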