342 results for Running Kinematics
Abstract:
The recommendations for the first aid treatment of burn injuries have previously been based upon conflicting published studies, and as a result the recommendations have been vague with respect to the optimal first aid treatment modality, temperature, duration and the delay after which treatment is still effective. The public have also continued to use treatments such as ice and alternative therapies; however, there is little evidence to support their use. Recently, several studies conducted by burn researchers in Australia have enabled the recommendations to be clarified. First aid should consist of cool running water (2-15°C) applied for 20 minutes, as soon as possible and for up to 3 hours after the burn injury has occurred. Ice should not be used, and alternative therapies should only be used to relieve pain as an adjunct to cold water treatment. Optimal first aid treatment significantly reduces tissue damage, hastens wound re-epithelialisation and reduces scarring, and should be promoted widely to the public.
Abstract:
Long-running debates over the value of university-based journalism education have suffered from a lack of empirical foundation, leading to a wide range of assertions both from those who see journalism education playing a crucial role in moulding future journalists and from those who do not. Based on a survey of 320 Australian journalism students from six universities across the country, this study provides an account of the professional views these future journalists hold. Findings show that students hold broadly similar priorities in their role perceptions to those of working journalists, albeit with different intensities. The results point to a relationship between journalism education and the way in which students' views of journalism's watchdog role and its market orientation change over the course of their degree – to the extent that, once they are near completion of their degree, students have been moulded in the image of industry professionals.
Abstract:
Cryptosystems based on the hardness of lattice problems have recently acquired much importance due to their average-case to worst-case equivalence, their conjectured resistance to quantum cryptanalysis, their ease of implementation and increasing practicality, and, lately, their promising potential as a platform for constructing advanced functionalities. In this work, we construct “Fuzzy” Identity Based Encryption from the hardness of the Learning With Errors (LWE) problem. We note that for our parameters, the underlying lattice problems (such as gapSVP or SIVP) are assumed to be hard to approximate within subexponential factors for adversaries running in subexponential time. We give CPA- and CCA-secure variants of our construction, for small and large universes of attributes. All our constructions are secure against selective-identity attacks in the standard model. Our construction is made possible by observing certain special properties that secret sharing schemes need to satisfy in order to be useful for Fuzzy IBE. We also discuss some obstacles towards realizing lattice-based attribute-based encryption (ABE).
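To make the underlying hardness assumption concrete, the following minimal sketch generates decision-LWE instances with toy parameters; the dimensions, modulus and noise width are illustrative assumptions, not the parameters used in the paper's construction:

```python
# Illustrative sketch only: generating Learning With Errors (LWE) samples,
# the hardness assumption underlying the construction described above.
# Parameters (n, m, q, sigma) are toy values chosen for readability.
import numpy as np

rng = np.random.default_rng(0)

n, m, q, sigma = 16, 32, 7681, 2.0                      # dimension, samples, modulus, noise width

s = rng.integers(0, q, size=n)                          # secret vector in Z_q^n
A = rng.integers(0, q, size=(m, n))                     # public uniform matrix
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)   # small Gaussian error

b = (A @ s + e) % q                                     # LWE samples (A, b)

# Decision-LWE: (A, b) should be computationally indistinguishable from
# (A, u) with u uniform in Z_q^m; security of such schemes reduces to this.
u = rng.integers(0, q, size=m)
print(b[:5], u[:5])
```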
Abstract:
In this paper we introduce a formalization of Logical Imaging applied to IR in terms of Quantum Theory through the use of an analogy between states of a quantum system and terms in text documents. Our formalization relies upon the Schrödinger Picture, creating an analogy between the dynamics of a physical system and the kinematics of probabilities generated by Logical Imaging. By using Quantum Theory, it is possible to model contextual information more precisely, in a seamless and principled fashion, within the Logical Imaging process. While further work is needed to empirically validate this, the foundations for doing so are provided.
Abstract:
The assumptions underlying the Probability Ranking Principle (PRP) have led to a number of alternative approaches that cater or compensate for the PRP’s limitations. All alternatives deviate from the PRP by incorporating dependencies. This results in a re-ranking that promotes or demotes documents depending upon their relationship with the documents that have already been ranked. In this paper, we compare and contrast the behaviour of state-of-the-art ranking strategies and principles. To do so, we tease out analytical relationships between the ranking approaches and we investigate the document kinematics to visualise the effects of the different approaches on document ranking.
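As a concrete illustration of dependency-aware re-ranking of the kind compared in the paper, the sketch below applies a Maximal-Marginal-Relevance-style criterion with toy relevance and similarity values; the weighting and numbers are illustrative assumptions, not results from the paper:

```python
# Minimal sketch of dependency-aware re-ranking in the spirit of
# Maximal Marginal Relevance (MMR): a document's score is discounted by its
# similarity to documents already placed in the ranking.
# Relevance and similarity values are toy numbers.

def mmr_rerank(relevance, similarity, lam=0.5):
    """relevance: doc -> P(relevance); similarity: frozenset({d1, d2}) -> [0, 1]."""
    ranked, remaining = [], set(relevance)
    while remaining:
        def score(d):
            max_sim = max((similarity[frozenset((d, r))] for r in ranked), default=0.0)
            return lam * relevance[d] - (1 - lam) * max_sim
        best = max(remaining, key=score)
        ranked.append(best)
        remaining.remove(best)
    return ranked

relevance = {"d1": 0.9, "d2": 0.85, "d3": 0.4}
similarity = {frozenset(("d1", "d2")): 0.95,   # d1 and d2 are near-duplicates
              frozenset(("d1", "d3")): 0.1,
              frozenset(("d2", "d3")): 0.1}
print(mmr_rerank(relevance, similarity))       # ['d1', 'd3', 'd2']: d3 is promoted above d2
```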
Abstract:
In this thesis we investigate the use of quantum probability theory for ranking documents. Quantum probability theory is used to estimate the probability of relevance of a document given a user's query. We posit that quantum probability theory can lead to a better estimation of the probability of a document being relevant to a user's query than the common approach, i.e. the Probability Ranking Principle (PRP), which is based upon Kolmogorovian probability theory. Following our hypothesis, we formulate an analogy between the document retrieval scenario and a physical scenario, that of the double slit experiment. Through the analogy, we propose a novel ranking approach, the quantum probability ranking principle (qPRP). Key to our proposal is the presence of quantum interference. Mathematically, this is the statistical deviation between empirical observations and expected values predicted by the Kolmogorovian rule of additivity of probabilities of disjoint events in configurations such as that of the double slit experiment. We propose an interpretation of quantum interference in the document ranking scenario, and examine how quantum interference can be effectively estimated for document retrieval. To validate our proposal and to gain more insights about approaches for document ranking, we (1) analyse PRP, qPRP and other ranking approaches, exposing the assumptions underlying their ranking criteria and formulating the conditions for the optimality of the two ranking principles, (2) empirically compare three ranking principles (i.e. PRP, interactive PRP, and qPRP) and two state-of-the-art ranking strategies in two retrieval scenarios, those of ad-hoc retrieval and diversity retrieval, (3) analytically contrast the ranking criteria of the examined approaches, exposing similarities and differences, and (4) study the ranking behaviours of approaches alternative to PRP in terms of the kinematics they impose on relevant documents, i.e. by considering the extent and direction of the movements of relevant documents across the ranking recorded when comparing PRP against its alternatives. Our findings show that the effectiveness of the examined ranking approaches strongly depends upon the evaluation context. In the traditional evaluation context of ad-hoc retrieval, PRP is empirically shown to be better than or comparable to alternative ranking approaches. However, when we turn to evaluation contexts that account for interdependent document relevance (i.e. when the relevance of a document is assessed also with respect to other retrieved documents, as is the case in the diversity retrieval scenario), the use of quantum probability theory, and thus of qPRP, is shown to improve retrieval and ranking effectiveness over the traditional PRP and alternative ranking strategies, such as Maximal Marginal Relevance, Portfolio theory, and Interactive PRP. This work represents a significant step forward regarding the use of quantum theory in information retrieval. It demonstrates that the application of quantum theory to problems within information retrieval can lead to improvements both in modelling power and retrieval effectiveness, allowing the construction of models that capture the complexity of information retrieval situations. Furthermore, the thesis opens up a number of lines for future research. These include: (1) investigating estimations and approximations of quantum interference in qPRP; (2) exploiting complex numbers for the representation of documents and queries; and (3) applying the concepts underlying qPRP to tasks other than document ranking.
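The following sketch illustrates one way a qPRP-style criterion could be realised: the probability of relevance of a candidate document is corrected by an interference term with the documents already ranked, here approximated through inter-document similarity. Both this approximation and the numbers are illustrative assumptions, not the estimations developed in the thesis:

```python
# Illustrative sketch of a qPRP-style ranking criterion: each candidate's
# probability of relevance is corrected by an interference term with the
# documents already ranked, approximated here as -2*sqrt(p_d*p_j)*sim(d, j).
# Probabilities and similarities are toy values.
import math

def qprp_rank(p_rel, sim):
    ranked, remaining = [], set(p_rel)
    while remaining:
        def utility(d):
            interference = sum(-2 * math.sqrt(p_rel[d] * p_rel[j]) * sim[frozenset((d, j))]
                               for j in ranked)
            return p_rel[d] + interference
        best = max(remaining, key=utility)
        ranked.append(best)
        remaining.remove(best)
    return ranked

p_rel = {"d1": 0.8, "d2": 0.75, "d3": 0.5}
sim = {frozenset(("d1", "d2")): 0.9,    # d1 and d2 cover the same aspect
       frozenset(("d1", "d3")): 0.1,
       frozenset(("d2", "d3")): 0.1}
print(qprp_rank(p_rel, sim))            # ['d1', 'd3', 'd2']: destructive interference demotes d2
```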
Abstract:
The 48 hour game making challenge has been running since 2007. In recent years, we have not only been running a 'game jam' for the local community but have also been exploring the way in which the event itself and the place of the event have the potential to create their own stories. The 2014 challenge is part of a series of data collection opportunities focussed on the game jam itself and the meaning making that the participants engage in about the event. We continued the data collection commenced in 2012: "Game jams are the creative festivals of the game development community and a game jam is very much an event or performance; its stories are those of subjective experience. Participants return year after year and recount personal stories from previous challenges; arrival in the 48hr location typically inspires instances of individual memory and narration more in keeping with those of a music festival or an oft frequented holiday destination. Since its inception, the 48hr has been heavily documented, from the photo-blogging of our first jam and the twitter streams of more recent events to more formal interviews and documentaries (see Anderson, 2012). We have even had our own moments of Gonzo journalism with an on-site press room one year and an ‘embedded’ journalist another year (Keogh, 2011). In the last two years of the 48hr we have started to explore ways and means to collect more abstract data during the event, that is, empirical data about movement and activity. The intent behind this form of data collection was to explore graphic and computer generated visualisations of the event, not for the purpose of formal analysis but in the service of further storytelling." [excerpt from truna aka j.turner, Thomas & Owen, 2013] See: truna aka j.turner, Thomas & Owen (2013) Living the indie life: mapping creative teams in a 48 hour game jam and playing with data, Proceedings of the 9th Australasian Conference on Interactive Entertainment, IE'2013, September 30 - October 01 2013, Melbourne, VIC, Australia
Abstract:
In the field of information retrieval (IR), researchers and practitioners are often faced with a demand for valid approaches to evaluate the performance of retrieval systems. The Cranfield experiment paradigm has been dominant for the in-vitro evaluation of IR systems. Alternative to this paradigm, laboratory-based user studies have been widely used to evaluate interactive information retrieval (IIR) systems, and at the same time investigate users’ information searching behaviours. Major drawbacks of laboratory-based user studies for evaluating IIR systems include the high monetary and temporal costs involved in setting up and running those experiments, the lack of heterogeneity amongst the user population and the limited scale of the experiments, which usually involve a relatively restricted set of users. In this paper, we propose an alternative experimental methodology to laboratory-based user studies. Our novel experimental methodology uses a crowdsourcing platform as a means of engaging study participants. Through crowdsourcing, our experimental methodology can capture user interactions and searching behaviours at a lower cost, with more data, and within a shorter period than traditional laboratory-based user studies, and therefore can be used to assess the performance of IIR systems. In this article, we show the characteristic differences of our approach with respect to traditional IIR experimental and evaluation procedures. We also perform a use case study comparing crowdsourcing-based evaluation with laboratory-based evaluation of IIR systems, which can serve as a tutorial for setting up crowdsourcing-based IIR evaluations.
Abstract:
Background Improvised explosive devices have become the characteristic weapon of conflicts in Iraq and Afghanistan. While little can be done to mitigate the effects of blast in free-field explosions, scaled blast simulations have shown that the combat boot can attenuate the effects of anti-vehicular mine blasts on vehicle occupants. Although the combat boot offers some protection to the lower limb, its behaviour at the energies seen in anti-vehicular mine blast has not been documented previously. Methods The soles of eight same-size combat boots from two brands currently used by UK troops deployed to Iraq and Afghanistan were impacted at energies of up to 518 J, using a spring-assisted drop rig. Results The results showed that the Meindl Desert Fox combat boot consistently experienced a lower peak force at lower impact energies and a longer time-to-peak force at higher impact energies when compared with the Lowa Desert Fox combat boot. Discussion This reduction in the peak force and extended rise time, resulting in a lower energy transfer rate, is a potentially positive mitigating effect in terms of the trauma experienced by the lower limb. Conclusion Currently, combat boots are tested under impact at the energies seen during heel strike in running. Through the identification of significantly different behaviours at high loading, this study has shown that there is a rationale for adding the performance of combat boots under impact at energies above those set out in international standards to the list of criteria for the selection of a combat boot.
Abstract:
A multi-secret sharing scheme allows several secrets to be shared amongst a group of participants. In 2005, Shao and Cao developed a verifiable multi-secret sharing scheme where each participant’s share can be used several times, which reduces the number of interactions between the dealer and the group members. In addition, some secrets may require a higher security level than others, involving the need for different threshold values. Recently, Chan and Chang designed such a scheme, but their construction only allows a single secret to be shared per threshold value. In this article we combine the previous two approaches to design a multiple-time verifiable multi-secret sharing scheme where several secrets can be shared for each threshold value. Since the running time is an important factor for practical applications, we provide a complexity comparison of our combined approach with respect to the previous schemes.
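For readers unfamiliar with threshold sharing, the sketch below shows basic Shamir (t, n) secret sharing, the primitive that threshold multi-secret schemes build upon; it is not the verifiable multi-secret construction of the article, and the prime and secret are toy values:

```python
# Minimal sketch of Shamir (t, n) threshold secret sharing. Any t of the n
# shares reconstruct the secret; fewer than t reveal nothing about it.
import random

P = 2**61 - 1          # a Mersenne prime, used as the field modulus

def share(secret, t, n):
    """Split `secret` into n shares using a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 123456789
```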
Abstract:
The motion of marine vessels has traditionally been studied using two different approaches: manoeuvring and seakeeping. These two approaches use different reference frames and coordinate systems to describe the motion. This paper derives the kinematic models that characterize the transformation of motion variables (position, velocity, acceleration) and forces between the different coordinate systems used in these theories. The derivations presented here are done in terms of the formalism adopted in robotics. The advantage of this formulation is the use of matrix notation and operations. As an application, the transformation of the linear equations of motion used in seakeeping into body-fixed coordinates is considered for both zero and forward speed.
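As an example of the kind of kinematic transformation derived in the paper, the sketch below rotates body-fixed linear velocities (surge, sway, heave) into the Earth-fixed frame using the standard roll-pitch-yaw rotation matrix; the angles and velocities are illustrative values, not results from the paper:

```python
# Minimal sketch of one standard marine-craft kinematic transformation:
# body-fixed linear velocity mapped to the Earth-fixed (NED) frame through a
# roll-pitch-yaw (ZYX) rotation matrix.
import numpy as np

def rotation_body_to_ned(phi, theta, psi):
    """R such that v_ned = R @ v_body, ZYX (yaw-pitch-roll) convention."""
    cphi, sphi = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cpsi, spsi = np.cos(psi), np.sin(psi)
    return np.array([
        [cpsi*cth, -spsi*cphi + cpsi*sth*sphi,  spsi*sphi + cpsi*sth*cphi],
        [spsi*cth,  cpsi*cphi + spsi*sth*sphi, -cpsi*sphi + spsi*sth*cphi],
        [-sth,      cth*sphi,                   cth*cphi],
    ])

phi, theta, psi = np.deg2rad([5.0, 2.0, 30.0])   # roll, pitch, yaw
v_body = np.array([6.0, 0.3, 0.0])               # surge, sway, heave [m/s]
v_ned = rotation_body_to_ned(phi, theta, psi) @ v_body
print(v_ned)
```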
Abstract:
The aim of this study was to examine the effect of endurance training on skeletal muscle phospholipid molecular species from high-fat fed rats. Twelve female Sprague-Dawley rats were fed a high-fat diet (78.1% energy). The rats were randomly divided into two groups, a sedentary control group and a trained group (125 min of treadmill running at 8 m/min, 4 days/wk for 4 weeks). Forty-eight hours after their last training bout, phospholipids were extracted from the red and white vastus lateralis and analyzed by electrospray-ionization mass spectrometry. Exercise training was associated with significant alterations in the relative abundance of a number of phospholipid molecular species. These changes were more prominent in the red vastus lateralis than in the white vastus lateralis. The largest observed change was an increase of approximately 30% in the abundance of 1-palmitoyl-2-linoleoyl phosphatidylcholine ions in oxidative fibers. Reductions in the relative abundance of a number of phospholipids containing long-chain n-3 polyunsaturated fatty acids were also observed. These data suggest a possible reduction in phospholipid remodeling in the trained animals. This results in a decrease in the phospholipid n-3 to n-6 ratio that may in turn influence endurance capacity.
Abstract:
We have determined the effect of two exercise-training intensities on the phospholipid profile of both glycolytic and oxidative muscle fibers of female Sprague-Dawley rats using electrospray-ionization mass spectrometry. Animals were randomly divided into three training groups: control, which performed no exercise training; low-intensity (8 m/min) treadmill running; or high-intensity (28 m/min) treadmill running. All exercise-trained rats ran 1,000 m/session for 4 days/wk for 4 wk and were killed 48 h after the last training bout. Exercise training was found to produce no novel phospholipid species but was associated with significant alterations in the relative abundance of a number of phospholipid molecular species. These changes were more prominent in glycolytic (white vastus lateralis) than in oxidative (red vastus lateralis) muscle fibers. The largest observed change was a decrease of ∼20% in the abundance of 1-stearoyl-2-docosahexaenoyl-phosphatidylethanolamine [PE(18:0/22:6); P < 0.001] ions in both the low- and high-intensity training regimes in glycolytic fibers. Increases in the abundance of 1-oleoyl-2-linoleoyl phosphatidic acid [PA(18:1/18:2); P < 0.001] and 1-alkenylpalmitoyl-2-linoleoyl phosphatidylethanolamine [plasmenyl PE (16:0/18:2); P < 0.005] ions were also observed for both training regimes in glycolytic fibers. We conclude that exercise training results in a remodeling of phospholipids in rat skeletal muscle. Even though little is known about the physiological or pathophysiological role of specific phospholipid molecular species in skeletal muscle, it is likely that this remodeling will have an impact on a range of cellular functions.
Abstract:
The absence of comparative validity studies has prevented researchers from reaching consensus regarding the application of intensity-related accelerometer cut points for children and adolescents. PURPOSE This study aimed to evaluate the classification accuracy of five sets of independently developed ActiGraph cut points using energy expenditure, measured by indirect calorimetry, as a criterion reference standard. METHODS A total of 206 participants between the ages of 5 and 15 yr completed 12 standardized activity trials. Trials consisted of sedentary activities (lying down, writing, computer game), lifestyle activities (sweeping, laundry, throw and catch, aerobics, basketball), and ambulatory activities (comfortable walk, brisk walk, brisk treadmill walk, running). During each trial, participants wore an ActiGraph GT1M, and VO2 was measured breath-by-breath using the Oxycon Mobile portable metabolic system. Physical activity intensity was estimated using five independently developed cut points: Freedson/Trost (FT), Puyau (PU), Treuth (TR), Mattocks (MT), and Evenson (EV). Classification accuracy was evaluated via weighted κ statistics and area under the receiver operating characteristic curve (ROC-AUC). RESULTS Across all four intensity levels, the EV (κ = 0.68) and FT (κ = 0.66) cut points exhibited significantly better agreement than TR (κ = 0.62), MT (κ = 0.54), and PU (κ = 0.36). The EV and FT cut points exhibited significantly better classification accuracy for moderate- to vigorous-intensity physical activity (ROC-AUC = 0.90) than TR, PU, or MT cut points (ROC-AUC = 0.77-0.85). Only the EV cut points provided acceptable classification accuracy for all four levels of physical activity intensity and performed well among children of all ages. The widely applied sedentary cut point of 100 counts per minute exhibited excellent classification accuracy (ROC-AUC = 0.90). CONCLUSIONS On the basis of these findings, we recommend that researchers use the EV ActiGraph cut points to estimate time spent in sedentary, light-, moderate-, and vigorous-intensity activity in children and adolescents. Copyright © 2011 by the American College of Sports Medicine.
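To illustrate how counts-per-minute cut points are applied in practice, the sketch below classifies accelerometer epochs into intensity categories; only the 100 counts-per-minute sedentary boundary is taken from the abstract, while the remaining thresholds are hypothetical placeholders rather than the published EV values:

```python
# Minimal sketch of applying counts-per-minute (cpm) cut points to classify
# accelerometer epochs into intensity categories. The 100 cpm sedentary
# boundary comes from the abstract; the other thresholds are placeholders.

CUT_POINTS = [              # (upper cpm bound, label)
    (100, "sedentary"),         # <= 100 cpm (from the abstract)
    (2000, "light"),            # placeholder threshold
    (4000, "moderate"),         # placeholder threshold
    (float("inf"), "vigorous"),
]

def classify(cpm):
    for upper, label in CUT_POINTS:
        if cpm <= upper:
            return label

epochs = [45, 350, 2600, 5200]          # toy 60-s epoch counts
print([classify(c) for c in epochs])    # ['sedentary', 'light', 'moderate', 'vigorous']
```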
Abstract:
Purpose The purpose of this study was to evaluate the validity of the CSA activity monitor as a measure of children's physical activity using energy expenditure (EE) as a criterion measure. Methods Thirty subjects aged 10 to 14 performed three 5-min treadmill bouts at 3, 4, and 6 mph, respectively. While on the treadmill, subjects wore CSA (WAM 7164) activity monitors on the right and left hips. V̇O2 was monitored continuously by an automated system. EE was determined by multiplying the average V̇O2 by the caloric equivalent of the mean respiratory exchange ratio. Results Repeated measures ANOVA indicated that both CSA monitors were sensitive to changes in treadmill speed. Mean activity counts from each CSA unit were not significantly different and the intraclass reliability coefficient for the two CSA units across all speeds was 0.87. Activity counts from both CSA units were strongly correlated with EE (r = 0.86 and 0.87, P < 0.001). An EE prediction equation was developed from 20 randomly selected subjects and cross-validated on the remaining 10. The equation predicted mean EE within 0.01 kcal·min⁻¹. The correlation between actual and predicted values was 0.93 (P < 0.01) and the SEE was 0.93 kcal·min⁻¹. Conclusion These data indicate that the CSA monitor is a valid and reliable tool for quantifying treadmill walking and running in children.
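The sketch below mirrors the kind of calibration analysis described: fitting a linear equation predicting EE from activity counts on a calibration subset and cross-validating on held-out subjects; the data are synthetic and the coefficients are not those reported in the study:

```python
# Minimal sketch of a calibration/cross-validation procedure of the kind
# described above: fit a linear EE (kcal/min) vs. activity-count equation on
# 20 subjects, then evaluate it on the 10 held-out subjects. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)

counts = rng.uniform(500, 8000, size=30)                  # mean activity counts/min
ee = 1.0 + 0.0006 * counts + rng.normal(0, 0.4, size=30)  # synthetic EE, kcal/min

train, test = np.arange(20), np.arange(20, 30)            # 20 calibration, 10 hold-out

slope, intercept = np.polyfit(counts[train], ee[train], 1)
predicted = intercept + slope * counts[test]

r = np.corrcoef(ee[test], predicted)[0, 1]                # actual vs. predicted correlation
see = np.sqrt(np.mean((ee[test] - predicted) ** 2))       # error of the cross-validated fit
print(f"r = {r:.2f}, error = {see:.2f} kcal/min")
```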