335 results for Time and movements
Abstract:
Objectives This efficacy study assessed the added impact that real-time computer prompts had on a participatory approach to reduce occupational sedentary exposure and increase physical activity. Design Quasi-experimental. Methods 57 Australian office workers (mean [SD]; age = 47 [11] years; BMI = 28 [5] kg/m²; 46 men) generated a menu of 20 occupational ‘sit less and move more’ strategies through participatory workshops, and were then tasked with implementing strategies for five months (July–November 2014). During implementation, a sub-sample of workers (n = 24) used a chair sensor/software package (Sitting Pad) that gave real-time prompts to interrupt desk sitting. Baseline and intervention sedentary behaviour and physical activity (GENEActiv accelerometer; mean work time percentages), and minutes spent sitting at desks (Sitting Pad; mean total time and longest bout), were compared between non-prompt and prompt workers using a two-way ANOVA. Results Workers spent close to three quarters of their work time sedentary, mostly sitting at desks (mean [SD]; total desk sitting time = 371 [71] min/day; longest bout spent desk sitting = 104 [43] min/day). Intervention effects were four times greater in workers who used real-time computer prompts (8% decrease in work time sedentary behaviour and increase in light intensity physical activity; p < 0.01). Respective mean differences between baseline and intervention total time spent sitting at desks, and the longest bout spent desk sitting, were 23 and 32 min/day lower in prompt than in non-prompt workers (p < 0.01). Conclusions In this sample of office workers, real-time computer prompts facilitated the impact of a participatory approach on reductions in occupational sedentary exposure and increases in physical activity.
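A minimal sketch of how such a 2 (prompt vs non-prompt) × 2 (baseline vs intervention) ANOVA could be set up in Python with statsmodels; the column names and simulated values are illustrative assumptions, not the study's data or the authors' code, and the repeated-measures structure of the original design is simplified here.

```python
# Hypothetical 2 x 2 ANOVA sketch (group x phase) on percent of work time sedentary;
# simulated data only, standing in for the GENEActiv / Sitting Pad measurements.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_workers = 57
group = np.repeat(rng.choice(["prompt", "non_prompt"], size=n_workers), 2)
phase = np.tile(["baseline", "intervention"], n_workers)
df = pd.DataFrame({"group": group, "phase": phase})
# Simulated outcome: roughly 73% of work time sedentary, with an ~8 point drop
# for prompt workers during the intervention (echoing the reported effect size)
df["pct_sedentary"] = rng.normal(73, 6, len(df)) - 8 * (
    (df["group"] == "prompt") & (df["phase"] == "intervention")
)

model = ols("pct_sedentary ~ C(group) * C(phase)", data=df).fit()
print(anova_lm(model, typ=2))  # the group:phase interaction term captures the prompt effect
```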
Abstract:
In this paper, we consider a time fractional diffusion equation on a finite domain. The equation is obtained from the standard diffusion equation by replacing the first-order time derivative by a fractional derivative (of order $0<\alpha<1$ ). We propose a computationally effective implicit difference approximation to solve the time fractional diffusion equation. Stability and convergence of the method are discussed. We prove that the implicit difference approximation (IDA) is unconditionally stable, and the IDA is convergent with $O(\tau+h^2)$, where $\tau$ and $h$ are time and space steps, respectively. Some numerical examples are presented to show the application of the present technique.
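The abstract does not reproduce the scheme itself; purely as an illustrative sketch (not necessarily the paper's exact approximation), a standard implicit discretisation of $\partial_t^{\alpha} u = \partial_x^2 u$ with $0<\alpha<1$ combines an L1-type approximation of the Caputo derivative with centred differences in space, writing $u_i^k \approx u(ih, k\tau)$:

\[
  \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum_{j=0}^{k} b_j \bigl( u_i^{k+1-j} - u_i^{k-j} \bigr)
  = \frac{u_{i+1}^{k+1} - 2u_i^{k+1} + u_{i-1}^{k+1}}{h^{2}},
  \qquad b_j = (j+1)^{1-\alpha} - j^{1-\alpha}.
\]

All spatial unknowns at level $k+1$ appear simultaneously, so each time step requires solving a tridiagonal linear system; it is this implicitness that underlies the unconditional stability claimed above, and the quoted $O(\tau+h^2)$ bound is consistent with such a scheme.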
Abstract:
As part of a large study investigating indoor air in residential houses in Brisbane, Australia, the purpose of this work was to quantify indoor exposure to submicrometer particles and PM2.5 for the inhabitants of 14 houses. Particle concentrations were measured simultaneously for more than 48 hours in the kitchens of all the houses by using a condensation particle counter (CPC) and a photometer (DustTrak). The occupants of the houses were asked to fill in a diary, noting the time and duration of any activity occurring throughout the house during measurement, as well as their presence or absence from home. From the time series concentration data and the information about indoor activities, exposure of the inhabitants of the houses was calculated for the entire time they spent at home as well as during indoor activities resulting in particle generation. The results show that the highest median concentration level occurred during cooking periods for both particle number concentration (47.5 × 10³ particles cm⁻³) and PM2.5 concentration (13.4 µg m⁻³). The highest residential exposure period was the sleeping period for both particle number exposure (31%) and PM2.5 exposure (45.6%). The average residential particle exposure accounted for approximately 70% of the total 24-h particle exposure for both particle number and PM2.5.
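A minimal sketch of the kind of diary-weighted exposure calculation described above: concentration is integrated over the intervals the occupants reported being at home and apportioned by activity. The time series, the diary entries and the activity labels below are simulated placeholders, not the study's data or code.

```python
# Integrate simulated PM2.5 over at-home periods from a hypothetical activity diary.
import numpy as np
import pandas as pd

# One day of 1-minute PM2.5 readings (ug m^-3), simulated in place of DustTrak data
idx = pd.date_range("2004-05-01", periods=24 * 60, freq="1min")
rng = np.random.default_rng(1)
pm25 = pd.Series(rng.normal(10, 2, len(idx)).clip(min=0), index=idx)
pm25.loc["2004-05-01 18:00":"2004-05-01 18:40"] += 60  # a cooking peak

# Hypothetical diary entries: (activity, start, end); 'absent' periods contribute nothing
diary = [
    ("sleeping", "2004-05-01 00:00", "2004-05-01 07:00"),
    ("absent",   "2004-05-01 08:00", "2004-05-01 17:00"),
    ("cooking",  "2004-05-01 18:00", "2004-05-01 18:40"),
    ("other",    "2004-05-01 18:40", "2004-05-01 23:59"),
]

dt_hours = 1 / 60  # sampling interval of the concentration series, in hours
exposure = {}
for activity, start, end in diary:
    if activity == "absent":
        continue  # exposure accumulates only while occupants are at home
    window = pm25.loc[start:end]
    exposure[activity] = exposure.get(activity, 0.0) + window.sum() * dt_hours

total = sum(exposure.values())
for activity, e in exposure.items():
    print(f"{activity:8s} {e:7.1f} ug m^-3 h  ({100 * e / total:4.1f}% of at-home exposure)")
```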
Abstract:
A month-long intensive measurement campaign was conducted in March/April 2007 at Agnes Water, a remote coastal site just south of the Great Barrier Reef on the east coast of Australia. Particle and ion size distributions were continuously measured during the campaign. Coastal nucleation events were observed in clean, marine air masses coming from the south-east on 65% of the days. The events usually began at ~10:00 local time and lasted for 1-4 hrs. They were characterised by the appearance of a nucleation mode with a peak diameter of ~10 nm. The freshly nucleated particles grew within 1-4 hrs up to sizes of 20-50 nm. The events occurred when solar intensity was high (~1000 W m⁻²) and RH was low (~60%). Interestingly, the events were not related to tide height. The volatile and hygroscopic properties of freshly nucleated particles (17-22.5 nm), simultaneously measured with a volatility-hygroscopicity-tandem differential mobility analyser (VH-TDMA), were used to infer chemical composition. The majority of the volume of these particles was attributed to internally mixed sulphate and organic components. After ruling out coagulation as a source of significant particle growth, we conclude that the condensation of sulphate and/or organic vapours was most likely responsible for driving particle growth during the nucleation events. We cannot make any direct conclusions regarding the chemical species that participated in the initial particle nucleation. However, we suggest that nucleation may have resulted from the photo-oxidation products of unknown sulphur or organic vapours emitted from the waters of Hervey Bay, or from the formation of DMS-derived sulphate clusters over the open ocean that were activated to observable particles by condensable vapours emitted from the nutrient-rich waters around Fraser Island or Hervey Bay. Furthermore, a unique and particularly strong nucleation event was observed during northerly wind. The event began early one morning (08:00) and lasted almost the entire day, resulting in the production of a large number of ~80 nm particles (average modal concentration during the event was 3200 cm⁻³). The Great Barrier Reef was the most likely source of precursor vapours responsible for this event.
Abstract:
Design as seen from the designer's perspective is a series of amazing imaginative jumps or creative leaps. But design as seen by the design historian is a smooth progression or evolution of ideas, such that they seem self-evident and inevitable after the event. But the next step is anything but obvious for the artist/creator/inventor/designer stuck at that point just before the creative leap. They know where they have come from and have a general sense of where they are going, but often do not have a precise target or goal. This is why it is misleading to talk of design as a problem-solving activity - it is better defined as a problem-finding activity. This has been very frustrating for those trying to assist the design process with computer-based problem-solving techniques. By the time the problem has been defined, it has been solved. Indeed the solution is often the very definition of the problem. Design must be creative - or it is mere imitation. But since this crucial creative leap seems inevitable after the event, the question must arise, can we find some way of searching the space ahead? Of course there are serious problems of knowing what we are looking for and the vastness of the search space. It may be better to discard altogether the term "searching" in the context of the design process: Conceptual analogies such as search, search spaces and fitness landscapes aim to elucidate the design process. However, the vastness of the multidimensional spaces involved makes these analogies misguided, and they thereby actually further confound the issue. The term search becomes a misnomer since it has connotations that imply that it is possible to find what you are looking for. In such vast spaces the term search must be discarded. Thus, any attempt at searching for the highest peak in the fitness landscape as an optimal solution is also meaningless. Furthermore, even the very existence of a fitness landscape is fallacious. Although alternatives in the same region of the vast space can be compared to one another, distant alternatives will stem from radically different roots and will therefore not be comparable in any straightforward manner (Janssen 2000). Nevertheless we still have this tantalizing possibility that if a creative idea seems inevitable after the event, then might the process somehow be reversed? This may be as improbable as attempting to reverse time. A more helpful analogy is from nature, where it is generally assumed that the process of evolution is not long-term goal-directed or teleological. Dennett points out a common misunderstanding of Darwinism: the idea that evolution by natural selection is a procedure for producing human beings. Evolution can have produced humankind by an algorithmic process, without its being true that evolution is an algorithm for producing us. If we were to wind the tape of life back and run this algorithm again, the likelihood of "us" being created again is infinitesimally small (Gould 1989; Dennett 1995). But nevertheless Mother Nature has proved a remarkably successful, resourceful, and imaginative inventor, generating a constant flow of incredible new design ideas to fire our imagination. Hence the current interest in the potential of the evolutionary paradigm in design. These evolutionary methods are frequently based on techniques such as the application of evolutionary algorithms that are usually thought of as search algorithms.
It is necessary to abandon such connections with searching and see the evolutionary algorithm as a direct analogy with the evolutionary processes of nature. The process of natural selection can generate a wealth of alternative experiments, and the better ones survive. There is no one solution, there is no optimal solution, but there is continuous experiment. Nature is profligate with her prototyping and ruthless in her elimination of less successful experiments. Most importantly, nature has all the time in the world. As designers we cannot afford such prototyping and ruthless experiment, nor can we operate on the time scale of the natural design process. Instead we can use the computer to compress space and time and to perform virtual prototyping and evaluation before committing ourselves to actual prototypes. This is the hypothesis underlying the evolutionary paradigm in design (1992, 1995).
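The generate-evaluate-select loop that such evolutionary methods rely on can be stated in a few lines. The sketch below is a generic illustration only; the vector representation and the scoring function are placeholders standing in for a "virtual prototype" and its evaluation, not a design tool described in the text.

```python
# Minimal generational loop: breed many variants, evaluate them, keep the better ones.
import random

def evaluate(candidate):
    # Placeholder "virtual prototype" evaluation; a real system would score a design.
    return -sum((x - 0.5) ** 2 for x in candidate)

def mutate(candidate, rate=0.1):
    return [x + random.gauss(0, rate) for x in candidate]

population = [[random.random() for _ in range(8)] for _ in range(30)]
for generation in range(50):
    # Nature's profligate prototyping, compressed in time: breed many variants...
    offspring = [mutate(random.choice(population)) for _ in range(60)]
    # ...and ruthlessly keep only the more successful experiments for the next generation.
    population = sorted(population + offspring, key=evaluate, reverse=True)[:30]

print(max(evaluate(c) for c in population))
```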
Abstract:
Urban infrastructure development in Korea has recently shifted from an old paradigm of conventional infrastructure planning to a new paradigm of intelligent infrastructure provision. This new paradigm, so-called ubiquitous infrastructure, is based on a combination of urban infrastructure, information and communication technologies and digital networks. Ubiquitous infrastructure basically refers to an urban infrastructure where any citizen could access any infrastructure and services via any electronic device, regardless of time and location. This paper introduces this new paradigm of intelligent infrastructure planning and its design schemes. The paper also examines ubiquitous infrastructure development in Korea and discusses the positive effects of ubiquitous infrastructure on sustainable urban development.
Abstract:
Sleeper is an 18'00" musical work for live performer and laptop computer, which exists as both a live performance work and a recorded work for audio CD. The work has been presented at a range of international performance events and survey exhibitions. These include the 2003 International Computer Music Conference (Singapore), where it was selected for CD publication, Variable Resistance (San Francisco Museum of Modern Art, USA), and i.audio, a survey of experimental sound at the Performance Space, Sydney. The source sound materials are drawn from field recordings made in acoustically resonant spaces in the Australian urban environment, amplified and acoustic instruments, radio signals, and sound synthesis procedures. The processing techniques blur the boundaries between, and exploit, the perceptual ambiguities of de-contextualised and processed sound. The work thus challenges the arbitrary distinctions between sound, noise and music and attempts to reveal the inherent musicality in so-called non-musical materials via digitally re-processed location audio. Thematically the work investigates Paul Virilio’s theory that technology ‘collapses space’ via the relationship of technology to speed. Technically this is explored through the design of a music composition process that draws upon spatially and temporally dispersed sound materials treated using digital audio processing technologies. One of the contributions to knowledge in this work is a demonstration of how disparate materials may be employed within a compositional process to produce music through the establishment of musically meaningful morphological, spectral and pitch relationships. This is achieved through the design of novel digital audio processing networks and a software performance interface. The work explores, tests and extends the music perception theories of ‘reduced listening’ (Schaeffer, 1967) and ‘surrogacy’ (Smalley, 1997), by demonstrating how, through specific audio processing techniques, sounds may be shifted away from ‘causal’ listening contexts towards abstract aesthetic listening contexts. In doing so, it demonstrates how various time and frequency domain processing techniques may be used to achieve this shift.
Abstract:
Principal Topic A small firm is unlikely to possess internally the full range of knowledge and skills that it requires or could benefit from for the development of its business. The ability to acquire suitable external expertise - defined as knowledge or competence that is rare in the firm and acquired from the outside - when needed thus becomes a competitive factor in itself. Access to external expertise enables the firm to focus on its core competencies and removes the necessity to internalize every skill and competence. However, research on how small firms access external expertise is still scarce. The present study contributes to this under-developed discussion by analysing the role of trust and strong ties in the small firm's selection and evaluation of sources of external expertise (henceforth referred to as the 'business advisor' or 'advisor'). Granovetter (1973, 1361) defines the strength of a network tie as 'a (probably linear) combination of the amount of time, the emotional intensity, the intimacy (mutual confiding) and the reciprocal services which characterize the tie'. Strong ties in the context of the present investigation refer to sources of external expertise who are well known to the owner-manager, and who may be either informal (e.g., family, friends) or professional advisors (e.g., consultants, enterprise support officers, accountants or solicitors). Previous research has suggested that strong and weak ties have different fortes, and the choice of business advisors could thus be critical to business performance. While previous research results suggest that small businesses favour previously well-known business advisors, prior studies have also pointed out that an excessive reliance on a network of well-known actors might hamper business development, as the range of expertise available through strong ties is limited. But are owner-managers of small businesses aware of this limitation and does it matter to them? Or does working with a well-known advisor compensate for it? Hence, our research model first examines the impact of the strength of tie on the business advisor's perceived performance. Next, we ask what encourages a small business owner-manager to seek advice from a strong tie. A recent exploratory study by Welter and Kautonen (2005) drew attention to the central role of trust in this context. However, while their study found support for the general proposition that trust plays an important role in the choice of advisors, how trust and its different dimensions actually affect this choice remained ambiguous. The present paper develops this discussion by considering the impact of the different dimensions of perceived trustworthiness, defined as benevolence, integrity and ability, on the strength of tie. Further, we suggest that the dimensions of perceived trustworthiness relevant in the choice of a strong tie vary between professional and informal advisors. Methodology/Key Propositions Our propositions are examined empirically based on survey data comprising 153 Finnish small businesses. The data are analysed utilizing the partial least squares (PLS) approach to structural equation modelling with SmartPLS 2.0. Being non-parametric, the PLS algorithm is particularly well-suited to analysing small datasets with non-normally distributed variables. Results and Implications The path model shows that the stronger the tie, the more positively the advisor's performance is perceived.
Hypothesis 1, that strong ties will be associated with higher perceptions of performance, is clearly supported. Benevolence is clearly the most significant predictor of the choice of a strong tie for external expertise. While ability also reaches a moderate level of statistical significance, integrity does not have a statistically significant impact on the choice of a strong tie. Hence, we found support for two out of three independent variables included in Hypothesis 2. Path coefficients differed between the professional and informal advisor subsamples. The results of the exploratory group comparison show that Hypothesis 3a, regarding ability being associated with strong ties more pronouncedly when choosing a professional advisor, was not supported. Hypothesis 3b, arguing that benevolence is more strongly associated with strong ties in the context of choosing an informal advisor, received some support because the path coefficient in the informal advisor subsample was much larger than in the professional advisor subsample. Hypothesis 3c, postulating that integrity would be more strongly associated with strong ties in the choice of a professional advisor, was supported. Integrity is the most important dimension of trustworthiness in this context. However, integrity is of no concern, or even a negative, when using strong ties to choose an informal advisor. The findings of this study have practical relevance to the enterprise support community. First of all, given that the strength of tie has a significant positive impact on the advisor's perceived performance, this implies that small business owners appreciate working with advisors in long-term relationships. Therefore, advisors are well advised to invest in relationship building and maintenance in their work with small firms. Secondly, the results show that, especially in the context of professional advisors, the advisor's perceived integrity and benevolence weigh more than ability. This again emphasizes the need to invest time and effort in building a personal relationship with the owner-manager, rather than merely maintaining a professional image and credentials. Finally, this study demonstrates that the dimensions of perceived trustworthiness are orthogonal, with different effects on the strength of tie and ultimately perceived performance. This means that entrepreneurs and advisors should consider the specific dimensions of ability, benevolence and integrity, rather than rely on general perceptions of trustworthiness in their advice relationships.
Abstract:
Objective: The study investigated previous research findings and clinical impressions which indicated that the intensity of grief for parents who had lost a child was likely to be higher than that for widows/widowers, who in turn were likely to have more intense reactions than adult children losing a parent. Method: In order to compare the intensities of the bereavement reactions among representative community samples of bereaved spouses (n = 44), adult children (n = 40) and parents (n = 36), and to follow the course of such phenomena, a detailed Bereavement Questionnaire was administered at four time points over a 13-month period following the loss. Results: Measures based on items central to the construct of bereavement showed significant time and group differences in accordance with the proposed hypothesis. More global items associated with the construct of resolution showed a significant time effect, but without significant group differences. Conclusions: Evidence from this study supports the hypothesis that in non-clinical, community-based populations the frequency with which core bereavement phenomena are experienced is in the order: bereaved parents > bereaved spouses > bereaved adult children.
Abstract:
Planar busbars are a good candidate for reducing interconnection inductance in high-power inverters compared with cables. However, fast-switching power devices in hard-switched converters produce a high di/dt during turn-off, and busbar stray inductance then becomes an important issue because it creates overvoltage. It is necessary to keep the busbar stray inductance as low as possible to decrease overvoltage and Electromagnetic Interference (EMI) noise. In this paper, the effect of different transient current loops on the physical busbar structure of high-voltage, high-level diode-clamped converters will be highlighted. Design considerations for a proper planar busbar will also be presented to optimise the overall design of diode-clamped converters.
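As an order-of-magnitude illustration (the numbers below are assumptions for exposition, not values from the paper), the overvoltage mechanism the paper targets is simply

\[
  v_{\text{overshoot}} \approx L_{\text{stray}} \, \frac{di}{dt},
\]

so a commutation loop with $L_{\text{stray}} = 100\,\mathrm{nH}$ interrupting current at $di/dt = 1\,\mathrm{kA/\mu s}$ superimposes roughly $100\times10^{-9} \times 10^{9} = 100\,\mathrm{V}$ on the DC-link voltage across the switching device; lowering the loop inductance through busbar geometry directly lowers this spike and the associated EMI.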
Abstract:
In this chapter I propose a theoretical framework for understanding the role of mediation processes in the inculcation, maintenance, and change of evaluative meaning systems, or axiologies, and how such a perspective can provide a useful and complementary dimension to analysis for SFL and CDA. I argue that an understanding of mediation—the movement of meaning across time and space—is essential for the analysis of meaning. Using two related texts as examples, I show how an understanding of mediation can aid SFL and CDA practitioners in the analysis of social change.
Abstract:
In this paper, we use time series analysis to evaluate predictive scenarios using search engine transactional logs. Our goal is to develop models for the analysis of searchers’ behaviors over time and investigate whether time series analysis is a valid method for predicting relationships between searcher actions. Time series analysis is a method often used to understand the underlying characteristics of temporal data in order to make forecasts. In this study, we used a Web search engine transactional log and time series analysis to investigate users’ actions. We conducted our analysis in two phases. In the initial phase, we employed a basic analysis and found that 10% of searchers clicked on sponsored links. However, from 22:00 to 24:00, searchers almost exclusively clicked on the organic links, with almost no clicks on sponsored links. In the second and more extensive phase, we used a one-step prediction time series analysis method along with a transfer function method. The period rarely affects navigational and transactional queries, while rates for transactional queries vary during different periods. Our results show that the average length of a searcher session is approximately 2.9 interactions and that this average is consistent across time periods. Most importantly, our findings show that searchers who submit the shortest queries (i.e., in number of terms) click on the highest-ranked results. We discuss implications, including predictive value, and future research.
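As an illustration of the kind of one-step-ahead modelling described above, an ARIMA model with an exogenous input can stand in for a simple transfer function model. The hourly series, the 10% sponsored-click rate and the late-night dip are simulated to echo the figures quoted in the abstract; this is not the paper's model, data or code.

```python
# Simulated hourly query volume and sponsored-link clicks; SARIMAX with an
# exogenous regressor serves as a simple stand-in for a transfer function model.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(7)
hours = pd.date_range("2007-01-01", periods=24 * 14, freq="h")
queries = pd.Series(
    1000 + 200 * np.sin(2 * np.pi * hours.hour / 24) + rng.normal(0, 30, len(hours)),
    index=hours,
)
# Roughly 10% of queries lead to sponsored clicks, except late at night (22:00-24:00)
sponsored = 0.10 * queries * np.where(hours.hour >= 22, 0.05, 1.0) + rng.normal(0, 5, len(hours))

res = SARIMAX(sponsored, exog=queries, order=(1, 0, 0), seasonal_order=(1, 0, 0, 24)).fit(disp=False)
one_step = res.get_prediction(dynamic=False).predicted_mean  # in-sample one-step-ahead predictions
print(one_step.tail())
```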
Abstract:
Structural health monitoring (SHM) is the term applied to the procedure of monitoring a structure’s performance, assessing its condition and carrying out appropriate retrofitting so that it performs reliably, safely and efficiently. Bridges form an important part of a nation’s infrastructure. They deteriorate due to age and changing load patterns, and hence early detection of damage helps in prolonging their lives and preventing catastrophic failures. Monitoring of bridges has been traditionally done by means of visual inspection. With recent developments in sensor technology and availability of advanced computing resources, newer techniques have emerged for SHM. Acoustic emission (AE) is one such technology that is attracting the attention of engineers and researchers all around the world. This paper discusses the use of AE technology in health monitoring of bridge structures, with a special focus on analysis of recorded data. AE waves are stress waves generated by mechanical deformation of material and can be recorded by means of sensors attached to the surface of the structure. Analysis of the AE signals provides vital information regarding the nature of the source of emission. Signal processing of the AE waveform data can be carried out in several ways and is predominantly based on time and frequency domains. Short-time Fourier transform and wavelet analysis have proved to be superior alternatives to traditional frequency-based analysis in extracting information from recorded waveforms. Some of the preliminary results of the application of these analysis tools in signal processing of recorded AE data will be presented in this paper.
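A brief sketch of the two time-frequency tools mentioned (short-time Fourier transform and wavelet decomposition), applied here to a synthetic burst rather than recorded bridge AE data; the sampling rate, burst parameters and wavelet choice are illustrative assumptions, not values from the paper.

```python
# Time-frequency analysis of a simulated burst-type AE signal.
import numpy as np
from scipy.signal import stft
import pywt

fs = 1_000_000  # 1 MHz sampling rate, an assumed order of magnitude for AE sensors
t = np.arange(0, 0.002, 1 / fs)
# Synthetic AE burst: a decaying 150 kHz oscillation buried in noise
burst = np.exp(-((t - 0.0005) ** 2) / (2 * (50e-6) ** 2)) * np.sin(2 * np.pi * 150e3 * t)
signal = burst + 0.05 * np.random.default_rng(3).normal(size=t.size)

# Short-time Fourier transform: locates the burst's energy in both time and frequency
f, seg_t, Zxx = stft(signal, fs=fs, nperseg=256)
peak = np.unravel_index(np.abs(Zxx).argmax(), Zxx.shape)
print(f"dominant component ~{f[peak[0]] / 1e3:.0f} kHz at t ~{seg_t[peak[1]] * 1e3:.2f} ms")

# Discrete wavelet decomposition: band-limited detail coefficients of the same signal
coeffs = pywt.wavedec(signal, "db4", level=4)
print([len(c) for c in coeffs])  # approximation + detail coefficient lengths
```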
Abstract:
Cognitive-energetical theories of information processing were used to generate predictions regarding the relationship between workload and fatigue within and across consecutive days of work. Repeated measures were taken on board a naval vessel during a non-routine and a routine patrol. Data were analyzed using growth curve modeling. Fatigue demonstrated a non-monotonic relationship within days in both patrols: fatigue was high at midnight, decreased until noontime and then increased again. Fatigue increased across days towards the end of the non-routine patrol, but remained stable across days in the routine patrol. The relationship between workload and fatigue changed over consecutive days in the non-routine patrol. At the beginning of the patrol, low workload was associated with fatigue. At the end of the patrol, high workload was associated with fatigue. This relationship could not be tested in the routine patrol; however, the routine patrol showed a non-monotonic relationship between workload and fatigue: low and high workloads were associated with the highest fatigue. These results suggest that the optimal level of workload can change over time, and they thus have implications for the management of fatigue.
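A growth-curve model of the type used in such studies can be expressed as a mixed-effects model with repeated measures nested in persons. The sketch below uses simulated data and hypothetical variable names; it is not the authors' model specification or dataset.

```python
# Growth-curve (multilevel) sketch: fatigue measured repeatedly within persons across
# days, with a within-day time-of-day term and workload as a time-varying predictor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
rows = []
for person in range(40):
    base = rng.normal(0, 1)  # person-specific baseline fatigue
    for day in range(14):
        for hour in (0, 6, 12, 18):  # repeated measures within a day
            workload = rng.uniform(0, 10)
            fatigue = (3 + base + 0.1 * day
                       + 1.5 * np.cos(2 * np.pi * hour / 24)  # high at midnight, low near noon
                       + 0.05 * workload * day                 # workload effect grows across days
                       + rng.normal(0, 0.5))
            rows.append(dict(person=person, day=day, hour=hour, workload=workload, fatigue=fatigue))
df = pd.DataFrame(rows)

# Random intercept and random slope for day; day x workload interaction as fixed effect
model = smf.mixedlm("fatigue ~ day * workload + C(hour)", df, groups=df["person"], re_formula="~day")
print(model.fit().summary())
```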
Abstract:
This research applies an archaeological lens to an inner-city master planned development in order to investigate the tension between the design of space and the use of space. The chosen case study for this thesis is Kelvin Grove Urban Village (KGUV), located in inner city Brisbane, Australia. The site of this urban village has strong links to the past. In its design and subsequent marketing approaches, KGUV draws both on the history of the place and on more general mythologies of village life. The design and marketing approach depends upon notions of an imagined past in which life in a place shaped like a traditional village was better and more socially sustainable than life in modern urban spaces. The appropriation of this urban village concept has been criticised as a shallow marketing ploy. The translation and applicability of the urban village model across time and space is therefore contentious. KGUV was considered both in terms of its design and marketing and in terms of a reading of the actual use of this master planned place. Central to this analysis is the figure of the boundary and related themes of social heterogeneity, inclusion and exclusion. The refraction of history in the site is also an important theme. An interpretive archaeological approach was used overall as a novel method to derive this analysis.