915 results for state-space methods
Abstract:
We consider a two-dimensional space-fractional reaction diffusion equation with a fractional Laplacian operator and homogeneous Neumann boundary conditions. The finite volume method is used with the matrix transfer technique of Ilić et al. (2006) to discretise in space, yielding a system of equations that requires the action of a matrix function to solve at each timestep. Rather than form this matrix function explicitly, we use Krylov subspace techniques to approximate the action of this matrix function. Specifically, we apply the Lanczos method, after a suitable transformation of the problem to recover symmetry. To improve the convergence of this method, we utilise a preconditioner that deflates the smallest eigenvalues from the spectrum. We demonstrate the efficiency of our approach for a fractional Fisher’s equation on the unit disk.
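To make the core idea concrete, here is a minimal illustrative sketch (not the authors' code, and omitting the symmetrising transformation and the deflation preconditioner described above) of using the Lanczos method to approximate the action of a matrix function f(A)b for a symmetric matrix A without forming f(A) explicitly; the function and variable names are assumptions made for this example.

```python
# Illustrative sketch: Lanczos approximation of f(A) @ b for symmetric A.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos_matfunc(A_mv, b, f, m=30):
    """Approximate f(A) @ b with an m-step Lanczos process.

    A_mv : callable returning A @ v for a symmetric matrix A
    b    : right-hand-side vector
    f    : scalar function applied to the Ritz values (e.g. lambda x: x**-0.75)
    """
    n = b.shape[0]
    beta0 = np.linalg.norm(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = b / beta0
    for j in range(m):
        w = A_mv(V[:, j])
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-14:                      # happy breakdown
                m = j + 1
                alpha, beta, V = alpha[:m], beta[:m - 1], V[:, :m]
                break
            V[:, j + 1] = w / beta[j]
    # Apply f to the small tridiagonal matrix T_m via its eigendecomposition:
    # f(A) b  ~  beta0 * V_m S f(Theta) S^T e_1
    theta, S = eigh_tridiagonal(alpha, beta)
    e1 = np.zeros(len(alpha))
    e1[0] = 1.0
    return beta0 * V @ (S @ (f(theta) * (S.T @ e1)))
```

In practice the number of Lanczos steps m and any reorthogonalisation strategy would be tuned to the problem; the sketch keeps only the basic recurrence.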
Abstract:
A satellite-based observation system can continuously or repeatedly generate a user state vector time series that may contain useful information. One typical example is the collection of International GNSS Services (IGS) station daily and weekly combined solutions. Another example is the epoch-by-epoch kinematic position time series of a receiver derived by a GPS real time kinematic (RTK) technique. Although some multivariate analysis techniques have been adopted to assess the noise characteristics of multivariate state time series, statistical testing has been limited to univariate time series. After a review of frequently used hypothesis test statistics in univariate analysis of GNSS state time series, the paper presents a number of T-squared multivariate analysis statistics for use in the analysis of multivariate GNSS state time series. These T-squared test statistics take the correlation between coordinate components into account, which is neglected in univariate analysis. Numerical analysis was conducted with the multi-year time series of an IGS station to schematically demonstrate the results from the multivariate hypothesis testing in comparison with the univariate hypothesis testing results. The results demonstrate that, in general, the testing for multivariate mean shifts and outliers tends to reject fewer data samples than the testing for univariate mean shifts and outliers at the same confidence level. It is noted that neither univariate nor multivariate data analysis methods are intended to replace physical analysis. Instead, they should be treated as complementary statistical methods for a priori or a posteriori investigations. Subsequent physical analysis is necessary to refine and interpret the results.
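As a concrete illustration of the kind of statistic involved (a sketch written for this listing, not code from the paper), a one-sample Hotelling T-squared test of a hypothesised mean for a p-component (e.g. east/north/up) coordinate time series might look like the following; all names are assumed for the example.

```python
# Illustrative sketch: one-sample Hotelling T-squared test for a multivariate
# mean shift, accounting for correlation between coordinate components.
import numpy as np
from scipy.stats import f

def hotelling_t2(X, mu0):
    """X: (n, p) array of state vectors; mu0: hypothesised mean, shape (p,)."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)              # p x p sample covariance
    d = xbar - mu0
    t2 = n * d @ np.linalg.solve(S, d)       # T-squared statistic
    # Under H0 (and normality), T^2 * (n - p) / (p * (n - 1)) ~ F(p, n - p)
    f_stat = t2 * (n - p) / (p * (n - 1))
    p_value = 1.0 - f.cdf(f_stat, p, n - p)
    return t2, p_value
```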
Abstract:
Qualitative research methods are widely accepted in Information Systems and multiple approaches have been successfully used in IS qualitative studies over the years. These approaches include narrative analysis, discourse analysis, grounded theory, case study, ethnography and phenomenological analysis. Guided by critical, interpretive and positivist epistemologies (Myers 1997), qualitative methods are continuously growing in importance in our research community. In this special issue, we adopt Van Maanen's (1979: 520) definition of qualitative research as an umbrella term covering an “array of interpretive techniques that can describe, decode, translate, and otherwise come to terms with the meaning, not the frequency, of certain more or less naturally occurring phenomena in the social world”. In the call for papers, we stated that the aim of the special issue was to provide a forum within which we can present and debate the significant number of issues, results and questions arising from the pluralistic approach to qualitative research in Information Systems. We recognise both the potential and the challenges that qualitative approaches offer for accessing the different layers and dimensions of a complex and constructed social reality (Orlikowski, 1993). The special issue is also a response to the need to showcase the current state of the art in IS qualitative research and to highlight advances and issues encountered in the process of continuous learning that includes questions about its ontology, epistemological tenets, theoretical contributions and practical applications.
Abstract:
Fractional mathematical models represent a new approach to modelling complex spatial problems in which there is heterogeneity at many spatial and temporal scales. In this paper, a two-dimensional fractional Fitzhugh-Nagumo-monodomain model with zero Dirichlet boundary conditions is considered. The model consists of a coupled space fractional diffusion equation (SFDE) and an ordinary differential equation. For the SFDE, we first consider the numerical solution of the Riesz fractional nonlinear reaction-diffusion model and compare it to the solution of a fractional in space nonlinear reaction-diffusion model. We present two novel numerical methods for the two-dimensional fractional Fitzhugh-Nagumo-monodomain model using the shifted Grunwald-Letnikov method and the matrix transform method, respectively. Finally, some numerical examples are given to exhibit the consistency of our computational solution methodologies. The numerical results demonstrate the effectiveness of the methods.
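For readers unfamiliar with the discretisation mentioned above, the following is a minimal sketch (not the paper's solver) of the shifted Grünwald-Letnikov weights and a one-dimensional left-sided fractional derivative approximation; the function names and the simple grid handling are assumptions made for illustration.

```python
# Illustrative sketch: shifted Grunwald-Letnikov weights and a 1D left-sided
# fractional derivative approximation, a basic building block of such schemes.
import numpy as np
from scipy.special import binom

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights g_k = (-1)^k * C(alpha, k) for k = 0..n."""
    k = np.arange(n + 1)
    return (-1.0) ** k * binom(alpha, k)

def left_frac_derivative(u, alpha, h):
    """Approximate the left Riemann-Liouville derivative of order alpha
    (1 < alpha < 2) at interior grid points with the shifted GL formula."""
    n = len(u)
    g = gl_weights(alpha, n)
    d = np.zeros(n)
    for i in range(1, n - 1):
        # shifted sum: sum_{k=0}^{i+1} g_k * u[i - k + 1], divided by h^alpha
        acc = 0.0
        for k in range(i + 2):
            acc += g[k] * u[i - k + 1]
        d[i] = acc / h ** alpha
    return d
```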
Abstract:
The space and time fractional Bloch–Torrey equation (ST-FBTE) has been used to study anomalous diffusion in the human brain. Numerical methods for solving ST-FBTE in three-dimensions are computationally demanding. In this paper, we propose a computationally effective fractional alternating direction method (FADM) to overcome this problem. We consider ST-FBTE on a finite domain where the time and space derivatives are replaced by the Caputo–Djrbashian and the sequential Riesz fractional derivatives, respectively. The stability and convergence properties of the FADM are discussed. Finally, some numerical results for ST-FBTE are given to confirm our theoretical findings.
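The time-fractional part of such models is often handled with the classical L1 approximation of the Caputo derivative; the sketch below shows that standard ingredient only and is not the authors' fractional alternating direction method. The function name and the uniform-time-step assumption are ours.

```python
# Illustrative sketch: L1 approximation of a Caputo time-fractional derivative
# of order 0 < gamma < 1 on a uniform time grid t_j = j * tau.
import numpy as np
from math import gamma as Gamma

def caputo_l1(u_hist, gam, tau):
    """Approximate D_t^gam u at t_n from the history u_hist = [u^0, ..., u^n]."""
    n = len(u_hist) - 1
    j = np.arange(n)
    b = (j + 1) ** (1 - gam) - j ** (1 - gam)        # L1 weights b_j
    diffs = np.array([u_hist[n - k] - u_hist[n - k - 1] for k in range(n)])
    return (tau ** -gam / Gamma(2 - gam)) * np.dot(b, diffs)
```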
Abstract:
Background: Previous attempts at costing infection control programmes have tended to focus on accounting costs rather than economic costs. For studies using economic costs, estimates tend to be quite crude and probably underestimate the true cost. One of the largest costs of any intervention is staff time, but this cost is difficult to quantify and has been largely ignored in previous attempts. Aim: To design and evaluate the costs of hospital-based infection control interventions or programmes. This article also discusses several issues to consider when costing interventions, and suggests strategies for overcoming these issues. Methods: Previous literature and techniques in both health economics and psychology are reviewed and synthesized. Findings: This article provides a set of generic, transferable costing guidelines. Key principles such as definition of study scope and focus on large costs, as well as pitfalls (e.g. overconfidence and uncertainty), are discussed. Conclusion: These new guidelines can be used by hospital staff and other researchers to cost their infection control programmes and interventions more accurately.
Abstract:
Groundwater flow models are usually characterized as being either transient flow models or steady state flow models. Given that steady state groundwater flow conditions arise as a long time asymptotic limit of a particular transient response, it is natural for us to seek a finite estimate of the amount of time required for a particular transient flow problem to effectively reach steady state. Here, we introduce the concept of mean action time (MAT) to address a fundamental question: How long does it take for a groundwater recharge or discharge process to effectively reach steady state? This concept relies on identifying a cumulative distribution function, $F(t;x)$, which varies from $F(0;x)=0$ to $F(t;x) \to 1$ as $t\to \infty$, thereby providing us with a measurement of the progress of the system towards steady state. The MAT corresponds to the mean of the associated probability density function $f(t;x) = \dfrac{dF}{dt}$, and we demonstrate that this framework provides useful analytical insight by explicitly showing how the MAT depends on the parameters in the model and the geometry of the problem. Additional theoretical results relating to the variance of $f(t;x)$, known as the variance of action time (VAT), are also presented. To test our theoretical predictions we include measurements from a laboratory-scale experiment describing flow through a homogeneous porous medium. The laboratory data confirm that the theoretical MAT predictions are in good agreement with measurements from the physical model.
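For reference, the standard relations behind these quantities (written here from the definitions in the abstract, not quoted from the paper) are:

```latex
% Mean action time (MAT) and variance of action time (VAT) for the CDF F(t;x)
% with density f(t;x) = dF/dt; the second form of the MAT follows from
% integration by parts, assuming 1 - F(t;x) decays sufficiently fast in t.
\begin{align*}
  \mathrm{MAT}(x) &= \int_0^{\infty} t\, f(t;x)\,\mathrm{d}t
                   = \int_0^{\infty} \bigl[\,1 - F(t;x)\,\bigr]\mathrm{d}t,\\
  \mathrm{VAT}(x) &= \int_0^{\infty} t^{2}\, f(t;x)\,\mathrm{d}t - \mathrm{MAT}(x)^{2}.
\end{align*}
```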
Abstract:
The first of three articles in this issue addressing the public space topic considers public space and young people in the light of a range of papers delivered at the 27th International Conference on 'Making Cities Livable', held in Vienna, Austria, in July 2000. Under the overarching concept of the "liveable city", the conference themes of 'Rediscovery of public space' and 'Cities for the wellbeing of children' attracted a broad mix of those interested in the planning, design and management of urban space. A number of themes percolated through the conference which stimulated the writers to examine the nexus between urban development, young people and public space. There is an ongoing need to examine the meaning of public space in the face of powerful urban development trends. A model of public space practice is required which incorporates a vision of inclusive public spaces, fosters the interactivity of design, planning, social policy and management, and provides resources for greater communication and strategic action between stakeholders from the most local of levels to those at state and international levels. The speed and magnitude of contemporary urban development make community input and influence difficult, particularly for those affected by the exclusionary tendencies of much urban development. It is critical that a range of meaningful and sustainable mechanisms are developed which allow young people’s conceptions of what constitutes youth-friendly space to be directly expressed and taken into account.
Abstract:
A recent Guest Editorial by Parenti & Ebach (2013, Journal of Biogeography, 40, 813–820) disagrees with the methods or interpretations in two of our recent papers. In addition, the authors open a debate on biogeographical concepts, and present an alternative philosophy for biogeographical research in the context of their recently described biogeographical subregion called ‘Pandora’. We disagree with their approach and conclusions, and comment on several issues related to our differing conceptual approaches for biogeographical research; namely, our use of molecular phylogenetic analyses, including time estimates; and Parenti & Ebach's reliance on taxon/general area cladograms. Finally, we re-examine their ‘tests’ supporting the existence of ‘Pandora’.
Abstract:
This thesis developed and evaluated strategies for social and ubiquitous computing designs that can enhance connected learning and networking opportunities for users in coworking spaces. Based on a social and a technical design intervention deployed at the State Library of Queensland, the research findings illustrate the potential of combining social, spatial and digital affordances in order to nourish peer-to-peer learning, creativity, inspiration, and innovation. The study proposes a hybrid notion of placemaking as a new way of thinking about the design of coworking and interactive learning spaces.
Abstract:
This article argues that a semantic shift in the crowd in Vietnam over the last decade has allowed public space to become a site through which transgressive ideologies and desires may have an outlet. At a time of accelerating social change, the state has effectively delimited public criticism, yet a fragile but assertive form of Vietnamese democratic practice has arisen in public space, at the margins of official society, in sites previously equated with state control. Official state functions attract only small audiences, and rather than celebrating the dominance of the party, reveal the disengagement of the populace from the party's activities. Where crowds were always a component of state (stage)-managed events, now public spaces are attracting large numbers of people for supposedly non-political activities which may become transgressive acts condemned by the regime. In support of the notion that crowding is an opening up of the possibility of more subversive political actions, the paper presents an analysis of recent crowd formations and the state's reaction to them. The analysis reveals the modalities through which popular culture has provided the public with the means to transcend the constraints of official, authorized, and legitimate codes of behaviour in public space. Changes in the use of public space, it is argued, map the sets of relations between the public and the state, making these transforming relationships visible, although fraught with contradictions and anomalies.
Abstract:
Purpose Commencing selected workouts with low muscle glycogen availability augments several markers of training adaptation compared with undertaking the same sessions with normal glycogen content. However, low glycogen availability reduces the capacity to perform high-intensity (>85% of peak aerobic power (VO2peak)) endurance exercise. We determined whether a low dose of caffeine could partially rescue the reduction in maximal self-selected power output observed when individuals commenced high-intensity interval training with low (LOW) compared with normal (NORM) glycogen availability. Methods Twelve endurance-trained cyclists/triathletes performed four experimental trials using a double-blind Latin square design. Muscle glycogen content was manipulated via exercise–diet interventions so that two experimental trials were commenced with LOW and two with NORM muscle glycogen availability. Sixty minutes before an experimental trial, subjects ingested a capsule containing anhydrous caffeine (CAFF, 3 mg·kg-1 body mass) or placebo (PLBO). Instantaneous power output was measured throughout high-intensity interval training (8 × 5-min bouts at maximum self-selected intensity with 1-min recovery). Results There were significant main effects for both preexercise glycogen content and caffeine ingestion on power output. LOW reduced power output by approximately 8% compared with NORM (P < 0.01), whereas caffeine increased power output by 2.8% and 3.5% for NORM and LOW, respectively (P < 0.01). Conclusion We conclude that caffeine enhanced power output independently of muscle glycogen concentration but could not fully restore power output to levels commensurate with those attained when subjects commenced exercise with normal glycogen availability. However, the reported increase in power output does provide a likely performance benefit and may provide a means to further enhance the already augmented training response observed when selected sessions are commenced with reduced muscle glycogen availability.
It has long been known that endurance training induces a multitude of metabolic and morphological adaptations that improve the resistance of the trained musculature to fatigue and enhance endurance capacity and/or exercise performance (13). Accumulating evidence now suggests that many of these adaptations can be modified by nutrient availability (9–11,21). Growing evidence suggests that training with reduced muscle glycogen using a “train twice every second day” compared with a more traditional “train once daily” approach can enhance the acute training response (29) and markers representative of endurance training adaptation after short-term (3–10 wk) training interventions (8,16,30). Of note is that the superior training adaptation in these previous studies was attained despite a reduction in maximal self-selected power output (16,30). The most obvious factor underlying the reduced intensity during a second training bout is the reduction in muscle glycogen availability. However, there is also the possibility that other metabolic and/or neural factors may be responsible for the power drop-off observed when two exercise bouts are performed in close proximity. Regardless of the precise mechanism(s), there remains the intriguing possibility that the magnitude of training adaptation previously reported in the face of a reduced training intensity (Hulston et al. (16) and Yeo et al.) might be further augmented, and/or other aspects of the training stimulus better preserved, if power output was not compromised.
Caffeine ingestion is a possible strategy that might “rescue” the aforementioned reduction in power output that occurs when individuals commence high-intensity interval training (HIT) with low compared with normal glycogen availability. Recent evidence suggests that, at least in endurance-based events, the maximal benefits of caffeine are seen at small to moderate (2–3 mg·kg-1 body mass (BM)) doses (for reviews, see Refs. (3,24)). Accordingly, in this study, we aimed to determine the effect of a low dose of caffeine (3 mg·kg-1 BM) on maximal self-selected power output during HIT commenced with either normal (NORM) or low (LOW) muscle glycogen availability. We hypothesized that even under conditions of low glycogen availability, caffeine would increase maximal self-selected power output and thereby partially rescue the reduction in training intensity observed when individuals commence HIT with low glycogen availability.
Abstract:
This thesis is a study of how the contents of volatile memory on the Windows operating system can be better understood and utilised for the purposes of digital forensic investigations. It proposes several techniques to improve the analysis of memory, with a focus on improving the detection of unknown code such as malware. These contributions allow the creation of a more complete reconstruction of the state of a computer at acquisition time, including whether or not the computer has been infected by malicious code.
Abstract:
The higher harmonic components available from large-amplitude Fourier-transformed alternating current (FT-ac) voltammetry enable the surface active state of a copper electrode in basic media to be probed in much more detail than is possible with previously used dc methods. In particular, the absence of capacitance background current allows low-level Faradaic current contributions of fast electron-transfer processes to be detected; these are usually completely undetectable under conditions of dc cyclic voltammetry. Under high harmonic FT-ac voltammetric conditions, copper electrodes exhibit well-defined and reversible premonolayer oxidation responses at potentials within the double layer region in basic 1.0 M NaOH media. This process is attributed to oxidation of copper adatoms (Cu*) of low bulk metal lattice coordination numbers to surface-bonded, reactive hydrated oxide species. Of further interest is the observation that cathodic polarization in 1.0 M NaOH significantly enhances the current detected in each of the fundamental to sixth FT-ac harmonic components of the Cu*/Cu hydrous oxide electron-transfer process, which enables the underlying electron-transfer processes in the higher harmonics to be studied under conditions where the dc capacitance response is suppressed; the results support the incipient hydrous oxide adatom mediator (IHOAM) model of electrocatalysis. The underlying quasi-reversible interfacial Cu*/Cu hydrous oxide process present under these conditions is shown to mediate the reduction of nitrate at a copper electrode, whereas the hydrazine oxidation reaction appears to involve a different mediator or active-state redox couple. Use of FT-ac voltammetry offers prospects for new insights into the nature of active sites and electrocatalysis at the electrode/solution interface of Group 11 metals in aqueous media.
Abstract:
Background Diabetes foot complications are a leading cause of overall avoidable hospital admissions. Since 2006, the Queensland Diabetes Clinical Network has implemented programs aimed at reducing diabetes-related hospitalisation. The aim of this retrospective observational study was to determine the incidence of diabetes foot-related hospital admissions in Queensland from 2005 to 2010. Methods Data on all primary diabetes foot-related admissions in Queensland from 2005 to 2010 were obtained using diabetes foot-related ICD-10-AM (hospital discharge) codes. Queensland diabetes foot-related admission incidences were calculated using general population data from the Australian Bureau of Statistics. Furthermore, diabetes foot-related sub-group admissions were analysed. Chi-squared tests were used to assess changes in admissions over time. Results Overall, 24,917 diabetes foot-related admissions occurred, resulting in the use of 260,085 bed days, or 1.4% of all available Queensland hospital bed days (18,352,152). The primary reasons for these admissions were foot ulcers (49.8%), cellulitis (20.7%), peripheral vascular disease (17.8%) and osteomyelitis (3.8%). The diabetes foot-related admission incidence among the general population (per 100,000) reduced by 22% (from 103.0 in 2005 to 80.7 in 2010, p < 0.001); bed days decreased by 18% (from 1,099 to 904, p < 0.001). Conclusion Diabetes foot complications appear to be the primary reason for 1.4 out of every 100 hospital bed days used in Queensland. There has been a significant reduction in the incidence of diabetes foot-related admissions in Queensland between 2005 and 2010. This decrease has coincided with a corresponding decrease in amputations and the implementation of several diabetes foot clinical programs throughout Queensland.