936 results for Free will
Abstract:
Prior to the GFC, Brisbane and Perth were experiencing the highest increases in median residential house prices, compared to the other major Australian cities, due to strong demand for both owner-occupied and investment residential property. In both these cities, a major driver of this demand and the subsequent increases in residential property prices was the strong resources sector. With the onset of the GFC in 2008, the resources and construction sectors in Queensland contracted significantly, and this had both direct and indirect impacts on the Brisbane residential property market. However, this impact was not consistent across Brisbane residential property sectors. The effect on houses and units differed, as did the impact based on geographic location and suburb value. This paper tracks Brisbane residential property sales listings, sales and returns over the period February 2009 to July 2010 and provides an analysis of the residential market for 24 Brisbane suburbs. These suburbs cover the main residential areas of Brisbane and are based on an equal number of low, medium and high socio-economic areas of Brisbane. The assessment of socio-economic status for the suburbs is based on both median household income and median house price. The analysis covers both free-standing residential property and residential units/townhouses/villas. The results show how each of these residential property sub-markets has performed following the GFC.
Abstract:
Wikipedia has become the most popular online source of encyclopedic information. The English Wikipedia collection, as well as some other language collections, is extensively linked. However, as a multilingual collection Wikipedia is only very weakly linked. There are few cross-language links or cross-dialect links (see, for example, Chinese dialects). In order to link the multilingual Wikipedia as a single collection, automated cross-language link discovery systems are needed – systems that identify anchor-texts in one language and targets in another. The evaluation of link discovery approaches within the English version of Wikipedia has been examined in the INEX Link-the-Wiki track since 2007, whilst both CLEF and NTCIR emphasized the investigation and the evaluation of cross-language information retrieval. In this position paper we propose a new virtual evaluation track: Cross Language Link Discovery (CLLD). The track will initially examine cross-language linking of Wikipedia articles. This virtual track will not be tied to any one forum; instead we hope it can be connected to each of (at least) CLEF, NTCIR, and INEX, as it will cover ground currently studied by each. The aim is to establish a virtual evaluation environment supporting continuous assessment and evaluation, and a forum for the exchange of research ideas. It will be free from the difficulties of scheduling and synchronizing groups of collaborating researchers and will alleviate the necessity to travel across the globe in order to share knowledge. We aim to electronically publish peer-reviewed publications arising from CLLD in a similar fashion: online, with open access, and without fixed submission deadlines.
Abstract:
Taxes are an important component of investing that is commonly overlooked in both the literature and practice. For example, many understand that taxes will reduce an investment’s return, but less understood is the risk-sharing nature of taxes that also reduces the investment’s risk. This thesis examines how taxes affect the optimal asset allocation and asset location decision in an Australian environment. It advances the model of Horan & Al Zaman (2008), improving the method by which the present value of tax liabilities is calculated by using an after-tax risk-free discount rate, and incorporating any new or reduced tax liabilities generated into its expected risk and return estimates. The asset allocation problem is examined for a range of different scenarios using Australian parameters, including different risk aversion levels, personal marginal tax rates, investment horizons, borrowing premiums, high or low inflation environments, and different starting cost bases. The findings support the Horan & Al Zaman (2008) conclusion that equities should be held in the taxable account. In fact, these findings are strengthened, with most of the efficient frontier maximising equity holdings in the taxable account instead of only half. Furthermore, these findings transfer to the Australian case, where it is found that taxed Australian investors should always invest in equities first through the taxable account before investing in super. However, untaxed Australian investors should invest their equity first through superannuation. With borrowings allowed in the taxable account (no borrowing premium), Australian taxed investors should hold 100% of the superannuation account in the risk-free asset, while undertaking leverage in the taxable account to achieve the desired risk-return profile. Introducing a borrowing premium decreases the likelihood of holding 100% of super in the risk-free asset for taxable investors. The findings also suggest that the higher the marginal tax rate, the higher the borrowing premium required to overcome this effect. Finally, as the investor’s marginal tax rate increases, the overall allocation to equities should increase due to the increased risk and return sharing caused by taxation; to achieve the same risk/return level as at a lower taxation level, the investor must take on more equity exposure. The investment horizon has a minimal impact on the optimal allocation decision in the absence of factors such as mean reversion and human capital.
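As a rough, hypothetical illustration of the discounting idea described above, the following Python sketch computes the present value of a single deferred capital-gains tax liability using an after-tax risk-free discount rate. The parameter values, the flat tax rates and the single-liability setup are assumptions of my own for illustration only; this is not the Horan & Al Zaman (2008) model or the thesis's implementation.

# Illustrative sketch: present value of a deferred capital-gains tax
# liability, discounted at an after-tax risk-free rate.

def after_tax_rf_rate(rf_rate: float, tax_rate: float) -> float:
    """Risk-free rate earned after tax is paid on the interest."""
    return rf_rate * (1.0 - tax_rate)

def pv_tax_liability(current_value: float, cost_base: float, cgt_rate: float,
                     years_to_sale: float, rf_rate: float, tax_rate: float) -> float:
    """PV of the capital-gains tax payable if the asset were sold in
    `years_to_sale` years at today's value (asset growth ignored here)."""
    gain = max(current_value - cost_base, 0.0)
    tax_at_sale = cgt_rate * gain
    r = after_tax_rf_rate(rf_rate, tax_rate)
    return tax_at_sale / (1.0 + r) ** years_to_sale

if __name__ == "__main__":
    # Hypothetical inputs chosen only for illustration.
    pv = pv_tax_liability(current_value=150_000, cost_base=100_000,
                          cgt_rate=0.235, years_to_sale=10,
                          rf_rate=0.04, tax_rate=0.47)
    print(f"PV of deferred CGT liability: ${pv:,.0f}")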
Abstract:
This research explores the quality and importance of the physical environment of two early learning centres on the Sunshine Coast in Queensland, utilising qualitative interviews with parents (n=4) and educators (n=4) to understand how design might impact children’s development and a quantitative rating (the Early Childhood Physical Environment Rating Scale; ECPERS) to assess the quality of the physical built environment and infrastructure. The centres received an average ECPERS quality rating, and thematic analysis of the interviews revealed that educators and parents viewed the physical environment as important to a child’s development, although the quality of staff was considered predominant. Early learning centres should be ‘homely’, inviting, bright and linked to the outdoors, with participants describing how space “welcomes the child, makes them feel safe and encourages learning”. Four key themes characterised views: Emotional Connection (quality of staff and physical environment), Experiencing Design (impact of design on child development), Hub for Community Integration (relationships and resources) and Future Vision (ideal physical environment, technology and ratings). With participants often struggling to clearly articulate their thoughts on design issues, a collaborative and jargon-free approach to designing space is required. These findings will help facilitate discussion about the role and design of the physical environment in early childhood centres, with the tangible examples of ‘ideal space’ enhancing communication between architects and educators about how best to design and reconfigure space to enhance learning outcomes.
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and three are under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices, which inject high-frequency components along with the desired current. Also, noise and harmonic distortion can impact the performance of the control strategies. To mitigate the negative effects of high-frequency components, harmonics and noise, and to achieve satisfactory performance of DG systems, new methods of signal parameter estimation are proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, proposing advanced techniques for the digital estimation of signal parameters, and methods for the generation of DG reference currents using the estimates provided, is the targeted scope of this thesis. An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the well-known advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other novel techniques proposed in this thesis are compared with the PM technique studied comprehensively in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3.
The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle in online mode. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed as a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is given to the Kalman filter and used to build the transition matrices. The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is also modified to operate on the output signal of the same FIR filter described above. However, unlike the one proposed in Chapter 3, the frequency estimation unit is not segregated; it interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, while other parameters, such as the amplitudes and phase angles estimated by the Kalman filter, are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level in the sample vector is reduced. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a structure similar to a basic Kalman filter, except that the initial settings are computed through an extensive mathematical derivation based on the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for better performance of the modified Kalman filter are calculated rather than obtained by trial and error. The simulation results for the estimated signal parameters are improved due to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not very well tracked by Kalman filtering. However, the proposed LES technique is found to be much faster in tracking these changes. Therefore, an appropriate combination of the LES and modified Kalman filtering is proposed in Chapter 6. Also, this time the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 7 proposes another algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Also, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 in the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme.
Several simulation tests provided in this chapter show that the proposed scheme can handle source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and the synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents voltage distortion at the point of common coupling in weak source cases, balances the source currents, and brings the supply-side power factor to a desired value.
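As an illustration of the Kalman-filtering idea these chapters build on, the following Python sketch estimates the amplitude and phase of a noisy power-system signal with a linear Kalman filter operating on the output of a plain-summation (moving-average) FIR pre-filter. It is a minimal sketch under assumptions of my own (a known, fixed fundamental frequency and illustrative noise levels and tuning); it is not the modified filters, frequency-estimation units or initial settings developed in the thesis.

import numpy as np

fs, f0, M = 2000.0, 50.0, 8            # sample rate, fundamental, FIR length
w, dt = 2.0 * np.pi * f0, 1.0 / fs
rng = np.random.default_rng(0)

n = 400
t = np.arange(n) * dt
s = 100.0 * np.cos(w * t + 0.5) + 5.0 * rng.standard_normal(n)   # noisy input

# Plain-summation FIR pre-filter: moving average over M samples.
y = np.convolve(s, np.ones(M) / M, mode="full")[:n]

# State x = [A*cos(phi), A*sin(phi)] of the fundamental, assumed constant.
# The FIR coefficients are folded into the measurement matrix H, so the
# pre-filter's gain and delay at the fundamental introduce no bias.
x, P = np.zeros(2), np.eye(2) * 1e3
Q, R = np.eye(2) * 1e-4, 1.0           # process / measurement noise (tunable)

for k in range(M - 1, n):
    idx = np.arange(k - M + 1, k + 1)  # samples inside the FIR summation
    H = np.array([[np.mean(np.cos(w * idx * dt)),
                   -np.mean(np.sin(w * idx * dt))]])
    P = P + Q                          # predict (identity transition matrix)
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T) / S
    x = x + K.ravel() * (y[k] - H @ x).item()
    P = (np.eye(2) - K @ H) @ P

amp, phase = np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])
print(f"estimated amplitude {amp:.2f}, phase {phase:.3f} rad")   # ~100, ~0.5

Folding the FIR coefficients into the measurement matrix is just one simple way to keep the pre-filter from biasing the estimate; the thesis instead derives dedicated transition matrices and initial settings for its modified filters, and adds a separate frequency-estimation unit and an LES stage for critical transients.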
Abstract:
The idea of body weight regulation implies that a biological mechanism exerts control over energy expenditure and food intake. This is a central tenet of energy homeostasis. However, the source and identity of the controlling mechanism have not been identified, although it is often presumed to be some long-acting signal related to body fat, such as leptin. Using a comprehensive experimental platform, we have investigated the relationship between biological and behavioural variables in two separate studies over a 12-week intervention period in obese adults (total n 92). All variables have been measured objectively and with a similar degree of scientific control and precision, including anthropometric factors, body composition, RMR and cumulative energy consumed at individual meals across the whole day. Results showed that meal size and daily energy intake (EI) were significantly correlated with fat-free mass (FFM; P values 0·02–0·05) but not with fat mass (FM) or BMI (P values 0·11–0·45) (study 1, n 58). In study 2 (n 34), FFM (but not FM or BMI) predicted meal size and daily EI under two distinct dietary conditions (high-fat and low-fat). These data appear to indicate that, under these circumstances, some signal associated with lean mass (but not FM) exerts a determining effect over self-selected food consumption. This signal may be postulated to interact with a separate class of signals generated by FM. This finding may have implications for investigations of the molecular control of food intake and body weight and for the management of obesity.
Abstract:
Recent changes in IT organisations have resulted in changes to library IT support. Concurrently, new tools and systems for service delivery have become available, but these require a move away from the traditional ICT model. Many libraries are investigating new models, including Software as a Service (SaaS), cloud computing and open source software. This paper considers whether the adoption of these tools and environments by libraries has occurred as a result of a lack of suitable ICT solutions and support from ICT organisations. It also considers what skills library staff need in order to ensure sustainability, supportability and, ultimately, success.
Abstract:
Unsteady natural convection inside a triangular cavity is investigated in this study. The cavity is filled with a saturated porous medium; the left inclined wall is non-isothermal, while the bottom surface is isothermally heated and the right inclined surface is isothermally cold. Internal heat generation, dependent on the fluid temperature, is also considered. The governing equations are solved numerically by the finite element method. The Prandtl number of the fluid is taken as 0.7 (air), while the aspect ratio and the Rayleigh number are taken as 0.5 and 10^5, respectively. The effects of the porosity of the medium and of heat generation on the fluid flow and heat transfer are presented in the form of streamlines and isotherms. The rate of heat transfer through the three surfaces of the enclosure is also presented.
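For concreteness, one commonly used nondimensional energy equation for a saturated porous medium with a heat source proportional to the local temperature is sketched below; this is a standard illustrative form and an assumption of mine, not necessarily the exact formulation solved in the paper.

% Illustrative only: energy equation for a saturated porous cavity with a
% temperature-dependent volumetric heat source.
\begin{equation}
  \sigma\,\frac{\partial \theta}{\partial \tau}
  + U\,\frac{\partial \theta}{\partial X}
  + V\,\frac{\partial \theta}{\partial Y}
  = \nabla^{2}\theta + Q\,\theta ,
\end{equation}

where $\theta$ is the dimensionless temperature, $(U,V)$ are the seepage velocities, $\sigma$ is the heat-capacity ratio of the saturated medium and $Q$ is the heat-generation parameter; buoyancy enters the accompanying momentum equation through the Rayleigh number ($Ra = 10^5$ here).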
Abstract:
Software forms an important part of the interface between citizens and their government. An increasing number of government functions are being performed, controlled, or delivered electronically. This software, like all language, is never value-neutral, but must, to some extent, reflect the values of the coder and proprietor. The move that many governments are making towards e-governance, and the increasing reliance that is being placed upon software in government, necessitate a rethinking of the relationships of power and control that are embodied in software.
Abstract:
The problem of steady subcritical free surface flow past a submerged inclined step is considered. The asymptotic limit of small Froude number is treated, with particular emphasis on the effect that changing the angle of the step face has on the surface waves. As demonstrated by Chapman & Vanden-Broeck (2006), the divergence of a power series expansion in powers of the square of the Froude number is caused by singularities in the analytic continuation of the free surface; for an inclined step, these singularities may correspond to either the corners or stagnation points of the step, or both, depending on the angle of incline. Stokes lines emanate from these singularities, and exponentially small waves are switched on at the point the Stokes lines intersect with the free surface. Our results suggest that for a certain range of step angles, two wavetrains are switched on, but the exponentially subdominant one is switched on first, leading to an intermediate wavetrain not previously noted. We extend these ideas to the problem of flow over a submerged bump or trench, again with inclined sides. This time there may be two, three or four active Stokes lines, depending on the inclination angles. We demonstrate how to construct a base topography such that wave contributions from separate Stokes lines are of equal magnitude but opposite phase, thus cancelling out. Our asymptotic results are complemented by numerical solutions to the fully nonlinear equations.
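To make the switching mechanism explicit, the generic exponential-asymptotics structure behind this description can be sketched as follows; this is a standard illustrative form, not the paper's exact expressions.

% Generic structure only (an illustration, not the paper's formulae).
\begin{equation}
  \eta(x) \sim \sum_{n=0}^{N-1} F^{2n}\,\eta_{n}(x)
  \;+\; \mathcal{S}(x)\,A(x)\,\mathrm{e}^{-\chi(x)/F^{2}},
\end{equation}

where $F$ is the Froude number, the singulant $\chi$ vanishes at the relevant corner or stagnation-point singularity, $A(x)$ is an algebraic prefactor, and the Stokes multiplier $\mathcal{S}(x)$ switches rapidly from 0 to 1 where the free surface crosses the Stokes line $\operatorname{Im}\chi = 0$, $\operatorname{Re}\chi > 0$; the switched-on remainder is the exponentially small wavetrain. Waves generated by two different singularities cancel downstream when their contributions have equal magnitude $|A|\,\mathrm{e}^{-\operatorname{Re}\chi/F^{2}}$ and opposite phase.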
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. Those real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity from the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. By using the random sequential box-covering algorithm, we calculate the fractal dimensions for both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and a class of real networks, namely the PPI networks of different species.
Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool to analyse time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes by vectors of a certain length in the time series, and weight the edges between any two nodes by the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, and hence larger Hurst exponent, tend to have smaller fractal dimension, and hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those for binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts; whereas for HVG networks of fractional Brownian motions, the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
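As a concrete illustration of the horizontal visibility graph construction used in this part, the following minimal Python sketch implements the standard HVG criterion (it is my own illustration, not the thesis's code): two time points i < j are linked whenever every intermediate value lies strictly below both x_i and x_j.

import numpy as np

def horizontal_visibility_graph(x):
    """Return the edge list of the HVG of time series x.

    Nodes are time indices; i and j (i < j) are linked when every
    intermediate value x[k] (i < k < j) is strictly smaller than
    min(x[i], x[j]) -- the standard HVG criterion.
    """
    n = len(x)
    edges = []
    for i in range(n - 1):
        edges.append((i, i + 1))            # consecutive points always see each other
        bar = x[i + 1]                      # running maximum of intermediate values
        for j in range(i + 2, n):
            if x[i] > bar and x[j] > bar:   # all intermediates below both endpoints
                edges.append((i, j))
            bar = max(bar, x[j])
            if bar >= x[i]:                 # later points are blocked from i
                break
    return edges

# Toy usage on a random-walk series (a crude stand-in for fractional Brownian motion).
rng = np.random.default_rng(1)
series = np.cumsum(rng.standard_normal(200))
edges = horizontal_visibility_graph(series)
degrees = np.bincount(np.array(edges).ravel(), minlength=len(series))
print(len(edges), "edges; mean degree", degrees.mean())

The generalized fractal dimensions reported in the thesis would then be obtained by applying the box-covering algorithm to networks built in this way.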
Abstract:
A rule-based approach for classifying previously identified medical concepts in clinical free text into an assertion category is presented. There are six different categories of assertions for the task: Present, Absent, Possible, Conditional, Hypothetical and Not associated with the patient. The assertion classification algorithms were largely based on extending the popular NegEx and ConText algorithms. In addition, SNOMED CT, a clinical healthcare terminology, and other publicly available dictionaries were used to classify assertions that did not fit the NegEx/ConText model. The data for this task includes discharge summaries from Partners HealthCare and from Beth Israel Deaconess Medical Centre, as well as discharge summaries and progress notes from University of Pittsburgh Medical Centre. The set consists of 349 discharge reports, each with pairs of ground-truth concept and assertion files for system development, and 477 reports for evaluation. The system’s performance on the evaluation data set was 0.83, 0.83 and 0.83 for recall, precision and F1-measure, respectively. Although the rule-based system shows promise, further improvements can be made by incorporating machine learning approaches.
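To illustrate the flavour of such a rule-based classifier, the following Python sketch assigns one of the six assertion categories by scanning a fixed window of tokens before the concept for trigger phrases. The trigger lists, window size and crude substring matching are hypothetical simplifications of mine in the spirit of NegEx/ConText; they are not the rules, dictionaries or SNOMED CT lookups used by the system evaluated here.

import re

# Hypothetical trigger lexicons (illustrative only; real lexicons are far larger).
TRIGGERS = {
    "Absent":       ["no ", "denies", "without", "negative for"],
    "Possible":     ["possible", "suspected", "may have"],
    "Conditional":  ["if ", "should the patient develop"],
    "Hypothetical": ["return if", "watch for"],
    "Not associated with the patient": ["family history of", "mother had"],
}
WINDOW = 6  # number of tokens before the concept to scan

def classify_assertion(text: str, concept: str) -> str:
    """Return an assertion category for `concept` within `text`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    concept_tokens = concept.lower().split()
    for i in range(len(tokens) - len(concept_tokens) + 1):
        if tokens[i:i + len(concept_tokens)] == concept_tokens:
            window = " ".join(tokens[max(0, i - WINDOW):i]) + " "
            for category, phrases in TRIGGERS.items():
                if any(p in window for p in phrases):
                    return category
            return "Present"        # concept found, no trigger in the window
    return "Present"                # default if the concept is not located

print(classify_assertion("The patient denies chest pain on exertion.", "chest pain"))
# -> Absent
print(classify_assertion("Family history of diabetes mellitus.", "diabetes mellitus"))
# -> Not associated with the patient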
Abstract:
The authors examine Moylan v Rickard and how the case illustrates the effectiveness of the Powers of Attorney Act 1998 (Qld) in providing remedies and other possible avenues of redress.
Abstract:
Chronic venous leg ulcers are a detrimental health issue plaguing our society, resulting in long-term pain, immobility and decreased quality of life for a large proportion of sufferers. The frequency of these chronic wounds has led current research to focus on the wound environment to provide important information regarding the prolonged, fluctuating or static healing patterns of these wounds. Disruption to the normal wound healing process results in the release of multiple factors in the wound environment that could correlate with wound chronicity. These biochemical factors can often be detected by non-invasively sampling chronic wound fluid (CWF) from the site of injury. Of note, whilst there are numerous studies comparing acute and chronic wound fluids, there have not been any reports in the literature employing a longitudinal study in order to track biochemical changes in wound fluid as patients transition from a non-healing to a healed state. Initially, the objective of this study was to identify biochemical changes in CWF associated with wound healing using a proteomic approach. The proteomic approach incorporated a multi-dimensional liquid chromatography fractionation technique coupled with mass spectrometry (MS) to enable identification of proteins present in lower concentrations in CWF. Not surprisingly, many of the proteins identified in wound fluid were acute phase proteins normally expressed during the inflammatory phase of healing. However, the number of proteins positively identified by MS was quite low. This was attributed to the diverse range in concentration of protein species in CWF, making it challenging to detect the diagnostically relevant low molecular weight proteins. In view of this, SELDI-TOF MS was also explored as a means to target low molecular weight proteins in sequential patient CWF samples during the course of healing. Unfortunately, the results generated did not yield any peaks of interest that were altered as wounds transitioned to a healed state. During the course of proteomic assessment of CWF, it became evident that a fraction of non-proteinaceous compounds strongly absorbed at 280 nm. Subsequent analyses confirmed that most of these compounds were in fact part of the purine catabolic pathway, possessing distinctive aromatic rings, which result in high absorbance at 254 nm. The accumulation of these purinogenic compounds in CWF suggests that the wound bed is poorly oxygenated, resulting in a switch to anaerobic metabolism and, consequently, ATP breakdown. In addition, the presence of the terminal purine catabolite, uric acid (UA), indicates that the enzyme xanthine oxidoreductase (XOR) catalyses the conversion of hypoxanthine to xanthine and finally to UA. More importantly, the studies provide evidence for the first time of the exogenous presence of XOR in CWF. XOR is the only enzyme in humans capable of catalysing the production of UA in conjunction with a burst of the highly reactive superoxide radical and other oxidants like H2O2. Excessive release of these free radicals in the wound environment can cause cellular damage, disrupting the normal wound healing process. In view of this, a sensitive and specific assay was established for monitoring low concentrations of these catabolites in CWF. This procedure involved combining high performance liquid chromatography (HPLC) with tandem mass spectrometry and multiple reaction monitoring (MRM).
This application was selective, using specific MRM transitions and HPLC separations for each analyte, making it ideal for the detection and quantitation of purine catabolites in CWF. The results demonstrated that elevated levels of UA were detected in wound fluid obtained from patients with clinically worse ulcers. This suggests that XOR is active in the wound site generating significant amounts of reactive oxygen species (ROS). In addition, analysis of the amount of purine precursors in wound fluid revealed elevated levels of purine precursors in wound fluid from patients with less severe ulcers. Taken together, the results generated in this thesis suggest that monitoring changes of purine catabolites in CWF is likely to provide valuable information regarding the healing patterns of chronic venous leg ulcers. XOR catalysis of purine precursors not only provides a method for monitoring the onset, prognosis and progress of chronic venous leg ulcers, but also provides a potential therapeutic target by inhibiting XOR, thus blocking UA and ROS production. Targeting a combination of these purinogenic compounds and XOR could lead to the development of novel point of care diagnostic tests. Therefore, further investigation of these processes during wound healing will be worthwhile and may assist in elucidating the pathogenesis of this disease state, which in turn may lead to the development of new diagnostics and therapies that target these processes.