739 results for Internet of Things, Physical Web, Vending Machines, Beacon, Eddystone
Abstract:
This paper reports on two lengthy studies in physical education teacher education (PETE), conducted independently but epistemologically and methodologically linked. The paper describes how personal construct theory (PCT) and its associated methods provided a means for PETE students to reflexively construct their ideas about teaching physical education over an extended period. Data are drawn from each study in the form of a story of a single participant to indicate how this came about. Furthermore, we suggest that PCT might be both a useful research strategy and an effective approach to facilitate professional development in a teacher education setting.
Abstract:
Conventions of the studio presuppose the artist as the active agent, imposing his/her will upon and through objects that remain essentially inert. However, this characterisation of practice overlooks the complex object dynamics that underpin the art-making process. Far from passive entities, objects are resistant, ‘speaking back’ to the artist, impressing their will upon their surroundings. Objects stick to one another, fall over, drip, spill, spatter and chip one another. Objects support, dismantle, cover and transform one another. Objects are both the apparatus of the studio and its products. It can be argued that the work of art is as much shaped by objects as it is by human impulse. Within this alternate ontology, the artist becomes but one element in a constellation of objects. Drawing upon Graham Harman’s Object-Oriented Ontology and a selection of photographs of my studio processes, this practice-led paper will explore the notion of agentive objects and the ways in which the contemporary art studio can be reconsidered as a primary site for the production of new object relationships.
Abstract:
This paper unpacks some of the complexities of the female comic project, focussing on the creation of physical comedy, via multiple readings of the term “serious”. Does female desire to be taken seriously in the public realm compromise female-driven comedy? Historically, female seriousness has been a weapon in the hands of such female-funniness sceptics as the late Christopher Hitchens (2007), who (in)famously declared that women are too concerned with the grave importance of their reproductive responsibility to make good comedy. The dilemma is clear: for the woman attempting to elicit laughs, she’s not serious enough outside the home, and far too serious inside it.
Abstract:
We examined whether self-ratings of “being active” among older people living in four different settings (major city high and lower density suburbs, a regional city, and a rural area) were associated with out-of-home participation and outdoor physical activity. A mixed-methods approach (survey, travel diary, and GPS tracking over a one-week period) was used to gather data from 48 individuals aged over 55 years. Self-ratings of “being active” were found to be positively correlated with the number of days older people spent time away from home but unrelated to time traveled by active means (walking and biking). No significant differences in active travel were found between the four study locations, despite differences in their respective built environments. The findings suggest that strategies in addition to the creation of “age-friendly” environments are needed if older people are to increase their levels of outdoor physical activity. “Active aging” promotion campaigns may need to explicitly identify the benefits of walking outdoors to ambulatory older people as a means of maintaining their overall health, functional ability, and participation within society in the long term, and also encourage the development of community-based programs in order to facilitate regular walking for this group.
Abstract:
Although a number of studies have examined the role of gastric emptying (GE) in obesity, the influences of habitual physical activity level, body composition and energy expenditure (EE) on GE have received very little consideration. In this study, we have compared GE in active and inactive males, and we have characterised relationships with body composition (fat and fat free mass) and EE. Forty-four males (Active: n=22, Inactive: n=22; BMI range 21-36 kg/m2; percent fat mass range 9-42%) were studied, with GE of a standardised (1676 kJ) pancake meal being assessed by 13C-octanoic acid breath test, body composition by air displacement plethysmography, resting metabolic rate (RMR) by indirect calorimetry and activity EE (AEE) by accelerometry. Results showed that GE was faster in active compared to inactive males (mean ± SD half time (t1/2): Active: 157±18 and Inactive: 179±21 min, p<0.001). When data from both groups were pooled, GE t1/2 was associated with percent fat mass (r=0.39, p<0.01) and AEE (r=-0.46, p<0.01). After controlling for habitual physical activity status, the association between AEE and GE remained, but not that for percent fat mass and GE. BMI and RMR were not associated with GE. In summary, faster GE is considered to be a marker of a habitually active lifestyle in males, and is associated with a higher AEE and lower percent fat mass. The possibility that GE contributes to a gross physiological regulation (or dysregulation) of food intake with physical activity level deserves further investigation.
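As a rough illustration of the pooled-correlation and activity-adjusted analysis described above (not the authors' code; all values below are synthetic stand-ins), a Pearson correlation plus a residual-based partial correlation can be sketched as follows:

```python
# Illustrative sketch: Pearson correlations between gastric-emptying half time and
# predictors, plus a partial correlation controlling for activity status.
import numpy as np
from scipy import stats

def partial_corr(x, y, covariate):
    """Correlate the residuals of x and y after regressing each on the covariate."""
    X = np.column_stack([np.ones_like(covariate), covariate])
    rx = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)

# Hypothetical arrays: ge_half_time (min), aee (kJ/day), fat_pct (%), active (1/0)
rng = np.random.default_rng(0)
active = np.repeat([1, 0], 22)
aee = rng.normal(3000, 500, 44) + 400 * active
fat_pct = rng.normal(25, 7, 44) - 5 * active
ge_half_time = 180 - 0.01 * aee + 0.8 * fat_pct + rng.normal(0, 10, 44)

print("GE vs %fat:", stats.pearsonr(ge_half_time, fat_pct))
print("GE vs AEE:", stats.pearsonr(ge_half_time, aee))
print("GE vs AEE | activity status:", partial_corr(ge_half_time, aee, active))
```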
Abstract:
Being able to accurately predict the risk of falling is crucial in patients with Parkinson’s disease (PD). This is due to the unfavorable effect of falls, which can lower the quality of life as well as directly impact on survival. Three methods considered for predicting falls are decision trees (DT), Bayesian networks (BN), and support vector machines (SVM). Data from a 1-year prospective study conducted at IHBI, Australia, for 51 people with PD are used. Data processing is conducted using the rpart and e1071 packages in R for DT and SVM, respectively, and Bayes Server 5.5 for the BN. The results show that BN and SVM produce consistently higher accuracy over the 12-month evaluation time points (average sensitivity and specificity > 92%) than DT (average sensitivity 88%, average specificity 72%). DT is prone to imbalanced data and so needs an adjustment for the misclassification cost. However, DT provides a straightforward, interpretable result and thus is appealing for helping to identify important items related to falls and to generate fallers’ profiles.
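The study itself used R (rpart, e1071) and Bayes Server 5.5; purely as an illustrative sketch of the same sensitivity/specificity comparison, here is a scikit-learn equivalent on synthetic data, with class weighting standing in for the misclassification-cost adjustment mentioned above:

```python
# Illustrative sketch only, on synthetic stand-in data (not the study's dataset).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(51, 6))                 # hypothetical clinical/mobility features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 51) > 0.8).astype(int)  # faller = 1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

def sens_spec(model):
    tn, fp, fn, tp = confusion_matrix(y_te, model.fit(X_tr, y_tr).predict(X_te)).ravel()
    return tp / (tp + fn), tn / (tn + fp)

# class_weight="balanced" plays the role of the misclassification-cost adjustment
# that the abstract says decision trees need on imbalanced data.
for name, model in [("DT", DecisionTreeClassifier(class_weight="balanced", random_state=1)),
                    ("SVM", SVC(class_weight="balanced", random_state=1))]:
    sens, spec = sens_spec(model)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```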
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere with time can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach, operationally, scales of 1-4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, schemes used in NWP models provide much better results in comparison with simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west directions. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate a similar LLJ flow structure as suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.
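For readers unfamiliar with the "simple empirical schemes" referred to above, a typical example is a Brunt-type clear-sky downward longwave formula in which effective emissivity depends on screen-level vapour pressure; the sketch below uses the commonly quoted Brunt coefficients, which are not necessarily the ones evaluated in the thesis:

```python
# A minimal example of a simple empirical clear-sky longwave scheme of the kind
# compared against full NWP radiation codes: a Brunt-type effective-emissivity formula.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

def downward_longwave_clear_sky(t_air_k, vapour_pressure_hpa, a=0.52, b=0.065):
    """Clear-sky downward longwave flux (W m^-2) from screen-level T and vapour pressure e."""
    emissivity = a + b * vapour_pressure_hpa ** 0.5
    return emissivity * SIGMA * t_air_k ** 4

print(downward_longwave_clear_sky(283.15, 10.0))   # ~265 W m^-2 for a cool, moist case
```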
Abstract:
With an objective to understand the nature of forces which contribute to the disjoining pressure of a thin water film on a steel substrate being pressed by an oil droplet, two independent sets of experiments were done. (i) A spherical silica probe approaches the three substrates, mica, PTFE and steel, in a 10 mM electrolyte solution at two different pHs (3 and 10). (ii) The silica probe, with and without a smeared oil film, approaches the same three substrates in water (pH = 6). The surface potential of the oil film/water interface was measured using a dynamic light scattering experiment. Assuming a given capacity of a substrate for ion exchange, the total interaction force for each experiment was estimated to include the Derjaguin-Landau-Verwey-Overbeek (DLVO) force, hydration repulsion, hydrophobic attraction and oil-capillary attraction. The best fit of these estimates to the force-displacement characteristics obtained from the two sets of experiments gives the appropriate surface potentials of the substrates. The procedure allows an assessment of the relevance of a specific physical interaction to an experimental configuration. Two of the principal observations of this work are: (i) The presence of a surface at constant charge, as in the presence of an oil film on the probe, significantly enhances the counterion density over what is achieved when both the surfaces allow ion exchange. This raises the corresponding repulsion barrier greatly. (ii) When the substrate surface is wettable by oil, oil-capillary attraction contributes substantially to the total interaction. If it is not wettable, the oil film is deformed and squeezed out.
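As a hedged sketch of the DLVO baseline entering such force estimates (illustrative parameter values, not the paper's fitted ones), the sphere-plane van der Waals and weak-overlap double-layer terms can be combined via the Derjaguin approximation:

```python
# Illustrative DLVO force between a spherical probe and a flat substrate.
# Hamaker constant, probe radius and surface potentials below are assumptions.
import numpy as np

EPS0, KB, E, NA = 8.854e-12, 1.381e-23, 1.602e-19, 6.022e23

def debye_kappa(ionic_strength_molar, eps_r=78.5, temp_k=298.15):
    """Inverse Debye length (1/m) for a 1:1 electrolyte."""
    i_m3 = ionic_strength_molar * 1e3 * NA                    # ion pairs per m^3
    return np.sqrt(2 * i_m3 * E**2 / (eps_r * EPS0 * KB * temp_k))

def dlvo_force_sphere_plane(d, radius, psi_sphere, psi_plane, hamaker, kappa, eps_r=78.5):
    """DLVO force (N) at separation d (m); attraction negative, repulsion positive."""
    f_vdw = -hamaker * radius / (6 * d**2)
    f_edl = 4 * np.pi * radius * eps_r * EPS0 * kappa * psi_sphere * psi_plane * np.exp(-kappa * d)
    return f_vdw + f_edl

kappa = debye_kappa(0.010)                                    # 10 mM electrolyte, as in the paper
d = np.linspace(0.5e-9, 30e-9, 200)
force = dlvo_force_sphere_plane(d, radius=5e-6, psi_sphere=-40e-3,
                                psi_plane=-60e-3, hamaker=1e-20, kappa=kappa)
print(f"repulsion barrier ~ {force.max() * 1e9:.1f} nN")
```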
Abstract:
An experimental setup using radiative heating has been used to understand the thermo-physical phenomena and chemical transformations inside acoustically levitated cerium nitrate precursor droplets. In this transformation process, events such as vaporization, precipitation and chemical reaction have been recorded at high temporal resolution through infrared thermography and high speed imaging, leading to nanoceria formation with a porous morphology. The cerium nitrate droplet undergoes phase and shape changes throughout the vaporization process. Four distinct stages were delineated during the entire vaporization process, namely pure evaporation, evaporation with precipitate formation, chemical reaction with phase change and formation of the final porous precipitate. The composition was examined using scanning and transmission electron microscopy, which revealed nanostructures and confirmed a highly porous morphology with trapped gas pockets. Transmission electron microscopy (TEM) and high speed imaging of the final precipitate revealed the presence of trapped gases in the form of bubbles. TEM also showed the presence of nanoceria crystalline structures at 70 degrees C. The current study also looked into the effect of different heating powers on the process. At higher power, each phase is sustained for a shorter duration and reaches a higher maximum temperature. In addition, the porosity of the final precipitate increased with power. A non-dimensional time scale is proposed to correlate the effect of laser intensity and the vaporization rate of the solvent (water). The effect of acoustic levitation was also studied. Due to acoustic streaming, strong circulation selectively transports the solute to the bottom portion of the droplet, giving it rigidity and allowing it to become bowl-shaped.
Abstract:
The performance analysis of an adaptive physical layer network-coded two-way relaying scenario is presented, which employs two phases: a Multiple Access (MA) phase and a Broadcast (BC) phase. The deep channel fade conditions which occur at the relay, referred to as singular fade states, fall into the following two classes: (i) removable and (ii) non-removable singular fade states. With every singular fade state, we associate an error probability that the relay transmits a wrong network-coded symbol during the BC phase. It is shown that adaptive network coding provides a coding gain over fixed network coding, by making the error probabilities associated with the removable singular fade states contributing to the average Symbol Error Rate (SER) fall as SNR^-2 instead of SNR^-1. A high-SNR upper bound on the average end-to-end SER for the adaptive network coding scheme is derived, for a Rician fading scenario, which is found to be tight through simulations. Specifically, it is shown that for the adaptive network coding scheme, the probability that the relay node transmits a wrong network-coded symbol is upper-bounded by twice the average SER of a point-to-point fading channel, at high SNR. Also, it is shown that in a Rician fading scenario, it suffices to remove the effect of only those singular fade states which contribute dominantly to the average SER.
Abstract:
The analysis of modulation schemes for the physical layer network-coded two-way relaying scenario is presented, which employs two phases: a Multiple Access (MA) phase and a Broadcast (BC) phase. Depending on the signal set used at the end nodes, the minimum distance of the effective constellation seen at the relay becomes zero for a finite number of channel fade states, referred to as the singular fade states. The singular fade states fall into the following two classes: (i) those which are caused by channel outage and whose harmful effect cannot be mitigated by adaptive network coding, called the non-removable singular fade states, and (ii) those which occur due to the choice of the signal set and whose harmful effects can be removed, called the removable singular fade states. In this paper, we derive an upper bound on the average end-to-end Symbol Error Rate (SER), with and without adaptive network coding at the relay, for a Rician fading scenario. It is shown that without adaptive network coding, at high Signal to Noise Ratio (SNR), the contribution to the end-to-end SER comes from the following error events which fall as SNR^-1: the error events associated with the removable and non-removable singular fade states and the error event during the BC phase. In contrast, for the adaptive network coding scheme, the error events associated with the removable singular fade states fall as SNR^-2, thereby providing a coding gain over the case when adaptive network coding is not used. Also, it is shown that for a Rician fading channel, the error during the MA phase dominates over the error during the BC phase. Hence, adaptive network coding, which improves the performance during the MA phase, provides more gain in a Rician fading scenario than in a Rayleigh fading scenario. Furthermore, it is shown that for large Rician factors, among those removable singular fade states which have the same magnitude, those which have the least absolute value of the phase angle alone contribute dominantly to the end-to-end SER, and it is sufficient to remove the effect of only such singular fade states.
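A purely numeric illustration of the decay-rate argument above, with hypothetical bound coefficients rather than the derived ones, shows how suppressing the removable-singular-fade-state term to SNR^-2 leaves an overall SNR^-1 decay but yields a constant-factor coding gain:

```python
# Hypothetical coefficients only; this is not the derived bound, just its shape.
import numpy as np

snr_db = np.arange(10, 51, 10)
snr = 10 ** (snr_db / 10)

a_removable, b_other = 2.0, 0.3                      # illustrative bound coefficients
ser_fixed = (a_removable + b_other) / snr            # every term decays as SNR^-1
ser_adaptive = a_removable / snr**2 + b_other / snr  # removable term decays as SNR^-2

for db, sf, sa in zip(snr_db, ser_fixed, ser_adaptive):
    print(f"SNR={db:2d} dB  fixed={sf:.1e}  adaptive={sa:.1e}")

# At high SNR the ratio approaches (a_removable + b_other) / b_other,
# i.e. a coding gain of about 10*log10(2.3/0.3) ~ 8.8 dB for these coefficients.
print("coding gain ~", round(10 * np.log10(ser_fixed[-1] / ser_adaptive[-1]), 1), "dB")
```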
Abstract:
Clock synchronization is highly desirable in distributed systems, including many applications in the Internet of Things and Humans (IoTH). It improves the efficiency, modularity, and scalability of the system, and optimizes the use of event triggers. For IoTH, BLE - a subset of the recent Bluetooth v4.0 stack - provides a low-power and loosely coupled mechanism for sensor data collection with ubiquitous units (e.g., smartphones and tablets) carried by humans. This fundamental design paradigm of BLE is enabled by a range of broadcast advertising modes. While its operational benefits are numerous, the lack of a common time reference in the broadcast mode of BLE has been a fundamental limitation. This article presents and describes CheepSync, a time synchronization service for BLE advertisers, especially tailored for applications requiring high time precision on resource-constrained BLE platforms. Designed on top of the existing Bluetooth v4.0 standard, the CheepSync framework utilizes low-level time-stamping and comprehensive error compensation mechanisms for overcoming uncertainties in message transmission, clock drift, and other system-specific constraints. CheepSync was implemented on custom-designed nRF24Cheep beacon platforms (as broadcasters) and commercial off-the-shelf Android-ported smartphones (as passive listeners). We demonstrate the efficacy of CheepSync by numerous empirical evaluations in a variety of experimental setups, and show that its average (single-hop) time synchronization accuracy is in the 10 μs range.
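As a generic illustration of the offset-and-drift compensation idea underlying such beacon-to-listener synchronization (not the CheepSync protocol itself; all timestamps below are synthetic), a least-squares clock fit might look like this:

```python
# Estimate relative clock drift and offset from (broadcaster timestamp, local receive
# timestamp) pairs, then map local clock readings onto the broadcaster's timebase.
import numpy as np

def fit_clock(sender_ts, receiver_ts):
    """Return (drift, offset) such that receiver_ts ~= drift * sender_ts + offset."""
    drift, offset = np.polyfit(sender_ts, receiver_ts, 1)
    return drift, offset

def to_sender_time(local_t, drift, offset):
    """Translate a local timestamp into the broadcaster's timebase."""
    return (local_t - offset) / drift

# Hypothetical timestamps (seconds): 50 ppm drift, 0.25 s offset, ~20 us receive jitter.
rng = np.random.default_rng(2)
sender = np.arange(0.0, 10.0, 0.1)
receiver = 1.00005 * sender + 0.25 + rng.normal(0, 20e-6, sender.size)

drift, offset = fit_clock(sender, receiver)
err = to_sender_time(receiver, drift, offset) - sender
print(f"residual sync error: {np.std(err) * 1e6:.1f} us")
```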
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and perhaps not surprisingly makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable to solve, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
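For sparsity-constrained controllers, quadratic invariance admits a simple structural test (the support of KPK must be contained in the support of K); the sketch below, with illustrative patterns rather than any system from the thesis, checks that condition:

```python
# Structural quadratic-invariance test for binary sparsity patterns: K_bin is QI
# under P_bin iff K_bin[i, j] * P_bin[j, k] * K_bin[k, l] = 1 implies K_bin[i, l] = 1.
import numpy as np

def is_quadratically_invariant(k_bin, p_bin):
    """k_bin: n_u x n_y controller pattern, p_bin: n_y x n_u plant pattern (0/1)."""
    kpk = (k_bin @ p_bin @ k_bin) > 0          # structural support of K P K
    return bool(np.all(kpk <= (k_bin > 0)))    # contained in the support of K?

# Example: lower-triangular (nested/delayed) information structures are QI.
k_lower = np.tril(np.ones((3, 3), dtype=int))
p_lower = np.tril(np.ones((3, 3), dtype=int))
print(is_quadratically_invariant(k_lower, p_lower))   # True

k_diag = np.eye(3, dtype=int)                          # fully decentralized pattern
print(is_quadratically_invariant(k_diag, p_lower))     # False for a coupled plant
```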
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller such as the placement of actuators, sensors, and the communication links between them can no longer be taken as given -- indeed the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
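As a hedged sketch of the kind of convex program alluded to in the final contribution (the exact formulation in the thesis may differ; the data and trade-off weight below are synthetic and illustrative), a nuclear norm penalty can separate a low-rank "global" component from a small full-rank residual:

```python
# Illustrative nuclear-norm-based separation of a low-rank component from a
# measured response matrix, using cvxpy; not the thesis formulation.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
n = 20
global_part = rng.normal(size=(n, 2)) @ rng.normal(size=(2, n))  # low-rank stand-in
local_part = 0.1 * rng.normal(size=(n, n))                       # full-rank, small stand-in
measured = global_part + local_part

L = cp.Variable((n, n))
lam = 0.15                                       # illustrative trade-off weight
problem = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum_squares(measured - L)))
problem.solve()

# Roughly two dominant singular values should survive in the recovered component.
print(np.round(np.linalg.svd(L.value, compute_uv=False)[:4], 2))
```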