Abstract:
In Australia, eligible long day care services may apply for support at the state level to assist with the transition of children from culturally or linguistically diverse backgrounds into childcare settings. For staff in childcare services, this support comes in the form of a cultural support worker (CSW). The primary role of a CSW is to build capacity in childcare staff to support children and families as they enter the childcare program. This paper draws on interview data and documentation from multiple sources to report the perspectives of key stakeholders affiliated with a cultural support program in an Australian childcare setting. It concludes that a more flexible approach to policy that directs the work of CSWs is needed, as well as further research into ways to build capacity for cultural competence for both CSWs and childcare staff who work collaboratively to support young children as they transition to childcare.
Abstract:
This book explores the application of concepts of fiduciary duty or public trust in responding to the policy and governance challenges posed by policy problems that extend over multiple terms of government or even, as in the case of climate change, human generations. The volume brings together a range of perspectives including leading international thinkers on questions of fiduciary duty and public trust, Australia's most prominent judicial advocate for the application of fiduciary duty, top law scholars from several major universities, expert commentary from an influential climate policy think-tank and the views of long-serving highly respected past and present parliamentarians. The book presents a detailed examination of the nature and extent of fiduciary duty, looking at the example of Australia and having regard to developments in comparable jurisdictions. It identifies principles that could improve the accountability of political actors for their responses to major problems that may extend over multiple electoral cycles.
Abstract:
Orthopaedic fracture fixation implants are increasingly being designed using accurate 3D models of long bones based on computed tomography (CT). Unlike CT, magnetic resonance imaging (MRI) does not involve ionising radiation and is therefore a desirable alternative to CT. This study aims to quantify the accuracy of MRI-based 3D models of long bones compared to CT-based models. The femora of five intact cadaver ovine limbs were scanned using a 1.5T MRI scanner and a CT scanner. Image segmentation of the CT and MRI data was performed using a multi-threshold segmentation method. Reference models were generated by digitising the bone surfaces, free of soft tissue, with a mechanical contact scanner. The MRI- and CT-derived models were validated against the reference models. The results demonstrated that the CT-based models contained an average error of 0.15 mm while the MRI-based models contained an average error of 0.23 mm. Statistical validation showed no significant differences between the 3D models based on CT and MRI data. These results indicate that the geometric accuracy of MRI-based 3D models is comparable to that of CT-based models, and that MRI is therefore a potential alternative to CT for the generation of 3D models with high geometric accuracy.
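As an illustration of the kind of pipeline described, the sketch below binarises a CT volume with region-specific intensity thresholds and extracts a triangulated surface. The threshold values, region boundaries and voxel spacing are hypothetical, and this is not the study's actual code:

```python
# Illustrative sketch only: region-wise intensity thresholding of a CT
# volume followed by surface extraction. Thresholds, z-breaks and voxel
# spacing below are hypothetical values, not those used in the study.
import numpy as np
from skimage import measure

def segment_bone(volume, thresholds, z_breaks):
    """Binarise a 3D volume (z, y, x) using one intensity threshold per
    longitudinal region (e.g. proximal, diaphyseal, distal)."""
    mask = np.zeros_like(volume, dtype=bool)
    starts = [0] + list(z_breaks)
    ends = list(z_breaks) + [volume.shape[0]]
    for level, z0, z1 in zip(thresholds, starts, ends):
        mask[z0:z1] = volume[z0:z1] >= level
    return mask

def bone_surface(mask, spacing=(0.5, 0.5, 0.5)):
    """Triangulated surface of the binary mask via marching cubes."""
    verts, faces, _, _ = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5, spacing=spacing)
    return verts, faces

# Example with three regions split at slices 40 and 80 (hypothetical):
# mask = segment_bone(ct_volume, thresholds=[220, 260, 220], z_breaks=[40, 80])
# verts, faces = bone_surface(mask)
```

The resulting vertices can then be compared against a reference surface by closest-point distances, which is how average deviations such as those reported above are typically computed.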
Abstract:
The giant long-armed prawn, Macrobrachium lar, is a freshwater species native to the Indo-Pacific. M. lar has a long-lived, passive, pelagic marine larval stage in which larvae must colonise fresh water within three months to complete their development. Dispersal is likely to be influenced by the extensive distances larvae must transit between small oceanic islands to find suitable freshwater habitat, and by the prevailing east-to-west winds and ocean currents in the southern Pacific Ocean. Thus, both intrinsic and extrinsic factors are likely to influence wild population structure in this species. The present study sought to define the contemporary broad- and fine-scale population genetic structure of M. lar in the south-western Pacific Ocean. Three polymorphic microsatellite loci were used to assess patterns of genetic variation within and among 19 wild adult sampling sites. Statistical procedures that partition variation implied that, at both spatial scales, essentially all variation was present within sites and differentiation among sites was low; nor was the observed differentiation correlated with geographical distance. Statistical approaches that measure genetic distance showed that, at the broad scale, all south-western Pacific islands were essentially homogeneous, with the exception of a well-supported divergent Cook Islands group. These findings are likely the result of some combination of factors, ranging from the potential for allelic homoplasy to the effects of the sampling regime. Based on the findings, there is most likely a divergent M. lar Cook Islands clade in the south-western Pacific Ocean, resulting from prevailing ocean currents. Confirming this pattern will require a more detailed analysis of nDNA variation using a larger number of loci and, where possible, larger samples.
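To make "partitioning variation within and among sites" concrete, here is a minimal sketch of Nei's G_ST, a multi-allele analogue of F_ST, computed from per-site allele frequencies. The study itself would have used dedicated population genetics procedures; the frequencies below are invented for illustration:

```python
# Minimal sketch of Nei's G_ST from per-population allele frequencies.
# Illustrative only; the input frequencies are made up.
import numpy as np

def gst(freqs):
    """freqs: array of shape (n_populations, n_alleles); rows sum to 1."""
    freqs = np.asarray(freqs, dtype=float)
    h_s = np.mean(1.0 - np.sum(freqs ** 2, axis=1))  # mean within-pop heterozygosity
    p_bar = freqs.mean(axis=0)                       # pooled allele frequencies
    h_t = 1.0 - np.sum(p_bar ** 2)                   # total heterozygosity
    return (h_t - h_s) / h_t

# Two nearly identical sites -> G_ST close to 0 (little structure)
print(gst([[0.50, 0.30, 0.20],
           [0.48, 0.32, 0.20]]))
```

Values near zero, as in this example, correspond to the pattern reported: almost all variation held within sites and little differentiation among them.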
Abstract:
Significant empirical evidence from the fields of management and business strategy suggests that it is a good idea for a company to make in-house the components and processes underpinning a new technology. Other evidence suggests exactly the opposite: that firms would be better off buying components and processes from outside suppliers. One possible explanation for this lack of convergence is that earlier research in this area has overlooked two important aspects of the problem: reputation and trust. To gain insight into how these variables may affect make-buy decisions throughout the innovation process, the Sporas algorithm for measuring reputation was added to an existing agent-based model of how firms interact with each other during the development of new technologies. The model's results suggest that reputation and trust do not play a significant role in the long-term fortunes of an individual firm as it contends with technological change in the marketplace. Accordingly, this model serves as a cue for management researchers to investigate more thoroughly the temporal limitations and contingencies that determine how trust between firms may affect the R&D process.
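For context, the sketch below implements a Sporas-style reputation update in the commonly cited form of Zacharia and Maes, where a damping function slows changes for highly rated agents and higher-reputation raters move scores more. The parameter values are illustrative, not those used in the paper's model:

```python
# Sporas-style reputation update (after Zacharia & Maes). All parameter
# values here are illustrative assumptions.
import math

D = 3000.0        # maximum reputation value (assumed)
THETA = 10.0      # effective number of ratings considered (assumed)
SIGMA = D / 10.0  # damping width (assumed)

def phi(r):
    """Damping function: slows reputation changes for highly rated agents."""
    return 1.0 - 1.0 / (1.0 + math.exp(-(r - D) / SIGMA))

def sporas_update(r, rater_rep, w):
    """One update: r = current reputation, rater_rep = rater's reputation,
    w = new rating in (0, 1]. Expected rating is r / D."""
    return r + (1.0 / THETA) * phi(r) * rater_rep * (w - r / D)

r = 900.0
r = sporas_update(r, rater_rep=2500.0, w=0.8)
print(round(r, 1))  # ~1024.9: a trusted rater moves the score substantially
```

The key design property is that an unknown rater (low rater_rep) barely shifts the target's reputation, which is how trust enters the agents' make-buy interactions.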
Abstract:
Limited extant research examines Latin American consumers' perceptions of holiday destinations. This article measures destination brand equity for Australia as a long-haul destination in the emerging Chilean market. Specifically, it develops a model of consumer-based brand equity (CBBE) to explain attitudinal destination loyalty. The proposed model is tested using data from a sample of Chilean travelers. The findings suggest that brand salience, brand image, and brand value are positively related to brand loyalty for Australia. Further, while brand salience for Australia is strong, as a long-haul destination the country faces significant challenges in converting awareness into intent to visit. Australia is a more compelling destination brand for previous visitors than non-visitors. This implies that a word-of-mouth recommendation from previous visitors, a key component of attitudinal loyalty, is a positive indicator of future growth opportunities for Australia's destination marketers to capitalize on.
Abstract:
A major obstacle in the development of new medications for the treatment of alcohol use disorders (AUDs) has been the lack of preclinical, oral ethanol consumption paradigms that elicit high consumption. We have previously shown that rats exposed to 20% ethanol intermittently in a two-bottle choice paradigm will consume two times more ethanol than those given continuous access, without the use of water deprivation or sucrose fading (5-6 g/kg every 24 h vs 2-3 g/kg every 24 h, respectively). In this study, we have adapted the model to an operant self-administration paradigm. Long-Evans rats were given access to 20% ethanol in overnight sessions on one of two schedules: (1) intermittent (Monday, Wednesday, and Friday) or (2) daily (Monday through Friday). With the progression of the overnight sessions, both groups showed a steady escalation in drinking (3-6 g/kg every 14 h) without the use of a sucrose-fading procedure. Following the acquisition phase, the 20% ethanol groups consumed significantly more ethanol than did animals trained to consume 10% ethanol with a sucrose fade (1.5 vs 0.7 g/kg every 30 min) and reached significantly higher blood ethanol concentrations. In addition, training history (20% ethanol vs 10% ethanol with sucrose fade) had a significant effect on the subsequent self-administration of higher concentrations of ethanol. Administration of the pharmacological stressor yohimbine following extinction caused a significant reinstatement of ethanol-seeking behavior. Both 20% ethanol models show promise and are amenable to the study of maintenance, motivation, and reinstatement. Furthermore, training animals to lever press for ethanol without the use of sucrose fading removes a potential confound from self-administration studies.
Abstract:
Twitter is now well established as the world's second most important social media platform, after Facebook. Its 140-character updates are designed for brief messaging, and its network structures are kept relatively flat and simple: messages from users are either public and visible to all (even to unregistered visitors using the Twitter website), or private and visible only to approved 'followers' of the sender; there are none of the more complex definitions of degrees of connection (family, friends, friends of friends) that are available in other social networks. Over time, Twitter users have developed simple but effective mechanisms for working around these limitations: '#hashtags', which enable the manual or automatic collation of all tweets containing the same #hashtag, as well as allowing users to subscribe to content feeds that contain only those tweets which feature specific #hashtags; and '@replies', which allow senders to direct public messages even to users whom they do not already follow. This paper documents a methodology for extracting public Twitter activity data around specific #hashtags, and for processing these data in order to analyse and visualise the @reply networks existing between participating users – both overall, as a static network, and over time, to highlight the dynamic structure of @reply conversations. Such visualisations enable us to highlight the shifting roles played by individual participants, as well as the response of the overall #hashtag community to new stimuli – such as the entry of new participants or the availability of new information. Over longer timeframes, it is also possible to identify different phases in the overall discussion, or the formation of distinct clusters of preferentially interacting participants.
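The extraction-and-network step the paper documents can be sketched compactly: filter tweets by #hashtag, pull @mentions from the text, and accumulate a weighted directed graph. The tweet dictionaries and field names below are hypothetical stand-ins for real Twitter API records:

```python
# Sketch: build a directed, weighted @reply/@mention network from tweets
# sharing a #hashtag. Tweet structure and field names are hypothetical.
import re
import networkx as nx

MENTION = re.compile(r'@(\w+)')

def reply_network(tweets, hashtag):
    """tweets: iterable of dicts with 'user' and 'text' keys (assumed)."""
    g = nx.DiGraph()
    for t in tweets:
        if hashtag not in t['text'].lower():
            continue
        sender = t['user'].lower()
        for target in MENTION.findall(t['text']):
            target = target.lower()
            if target != sender:
                # edge weight counts how often sender addressed target
                w = g.get_edge_data(sender, target, {}).get('weight', 0)
                g.add_edge(sender, target, weight=w + 1)
    return g

tweets = [
    {'user': 'alice', 'text': '@bob interesting point on #ausvotes'},
    {'user': 'bob', 'text': '@alice thanks! see also @carol #ausvotes'},
]
g = reply_network(tweets, '#ausvotes')
print(g.number_of_nodes(), g.number_of_edges())  # 3 nodes, 3 edges
```

Re-running the same accumulation over successive time windows yields the dynamic, over-time view of the @reply network that the paper visualises.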
Abstract:
Our research explores the design of networked technologies to facilitate local suburban communications and to encourage people to engage with their local community. While there are many investigations of interaction designs for networked technologies, most research utilises small exercises, workshops or other short-term studies. However, we have found these short-term methods to be ineffective for understanding local community interaction. Moreover, we find that people are resistant to putting their time into workshops and exercises, understandably so, because these are academic practices, not local community practices. Our contribution is to detail a long-term embedded design approach in which we interact with the community over the long term, in the course of normal community goings-on, with an evolving exploratory prototype. This paper discusses this embedded approach to working in the wild for extended field research.
Abstract:
Complex networks have been studied extensively owing to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and for the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of the properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit self-similarity from the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool for computing the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks for five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. Using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This finding is useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since each consists of a geometrical figure that repeats on an ever-reduced scale, and fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous: there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterise the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalised fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species.
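The random sequential box-covering idea is compact enough to sketch (this is one common variant on a stand-in graph, not the thesis code): for each box radius, grow boxes around randomly chosen uncovered nodes until every node is covered, then read the box-counting dimension from the slope of log N_B(r) against log(r + 1):

```python
# Sketch of random sequential box-covering on a stand-in network.
# One common variant; seeds are drawn from the uncovered nodes.
import random
import networkx as nx

def box_count(g, radius):
    """Number of boxes of the given radius needed to cover the graph."""
    uncovered = set(g.nodes())
    boxes = 0
    while uncovered:
        centre = random.choice(list(uncovered))
        # all nodes within `radius` hops of the seed join this box
        ball = nx.single_source_shortest_path_length(g, centre, cutoff=radius)
        uncovered -= set(ball)
        boxes += 1
    return boxes

g = nx.barabasi_albert_graph(500, 2)   # stand-in for a real PPI network
for r in (1, 2, 3, 4):
    print(r, box_count(g, r))          # N_B(r) shrinks as boxes grow
```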
Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks, so this multifractal analysis provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterising complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of the network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method for constructing networks from time series: we define nodes as vectors of a certain length in the time series, and the weight of the edge between any two nodes as the Euclidean distance between the corresponding vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by the Hurst exponent. We verify the validity of the method by showing that time series with stronger correlation, hence a larger Hurst exponent, tend to have a smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently, and confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In a first application, we apply our newly developed box-covering algorithm to calculate the generalised fractal dimensions of the HVG networks of fractional Brownian motions, as well as those of binomial cascades and five bacterial genomes; the results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
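Because the HVG construction recurs throughout this part of the thesis, a compact illustrative implementation of the standard criterion may help: two time points are linked exactly when every intermediate value lies strictly below both. This is a sketch, not the thesis code:

```python
# Horizontal visibility graph (HVG) of a time series: i < j are linked
# iff every value strictly between them is below both x_i and x_j.
import networkx as nx

def hvg(series):
    g = nx.Graph()
    g.add_nodes_from(range(len(series)))
    for i in range(len(series)):
        top = float('-inf')            # max of values strictly between i and j
        for j in range(i + 1, len(series)):
            if top < series[i] and top < series[j]:
                g.add_edge(i, j)       # nothing in between blocks the view
            top = max(top, series[j])
            if top >= series[i]:
                break                  # later points can never see i
    return g

print(sorted(hvg([3, 1, 2, 4]).edges()))
# [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]
```

Feeding a sampled fractional Brownian motion path into such a builder and examining the resulting degree distribution is the kind of analysis the resilience comparison above relies on.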
Abstract:
Growth in productivity is the key determinant of the long-term health and prosperity of an economy. The construction industry is one of major strategic importance, so its productivity performance has a significant effect on national economic growth. The relationship between construction output and the economy has been studied intensively, but there is a lack of empirical study of the relationship between construction productivity and economic fluctuations. Fluctuations in construction output are endemic in the industry, caused partly by the boom and slump of the economy as a whole and partly by the nature of the construction product. This research aims to uncover how the productivity of the construction sector is influenced by economic fluctuations in Malaysia. Since gaining independence in 1957, Malaysia has adopted three economic policies: the New Economic Policy (1971-1990), the National Development Policy (1991-2000) and the National Vision Policy (2001-2010); the Privatisation Master Plan was introduced in 1991. Operating within this historical context, the Malaysian construction sector has experienced four business cycles since 1960. A mixed-method design was adopted in this study. Quantitative analysis was conducted on the published official statistics of the construction industry and the overall economy in Malaysia between 1970 and 2009, and the qualitative study involved interviews with a purposive sample of 21 industry participants. This study identified a 32-year building cycle spanning 1975-2006, superimposed with three shorter construction business cycles in 1975-1987, 1987-1999 and 1999-2006. The correlations of construction labour productivity (CLP) and GDP per capita are statistically significant for the 1975-2006 building cycle and the 1987-1999 and 1999-2006 construction business cycles, but not for the 1975-1987 cycle. The Construction Industry Surveys/Census over the period from 1996 to 2007 show that the average growth rate of total output per employee expanded while value added per employee contracted, implying a high cost of bought-in materials and services and inefficient use of purchases. Construction labour productivity peaked in 2004 even though the construction sector contracted that year. The residential subsector performed better than the other subsectors on most productivity indicators. Improvements are found in output per employee, value added per employee, labour competitiveness and capital investment, but declines are recorded in value added content and capital productivity. Civil engineering construction is the most productive in labour productivity but relatively poor in capital productivity. Labour cost is more competitive in larger establishments, and added value per unit of labour cost is higher in larger establishments, attributable to more efficient utilisation of capital. The interviews with industry participants reveal that the productivity of the construction sector is influenced by the economic environment, construction methods, contract arrangements, payment chains and regulatory policies. Fluctuations in construction demand have caused companies to switch to a defensive strategy during economic downturns, aiming at short-term survival rather than profit for long-term survival and growth.
This leads companies to take drastic measures: curbing expenses, downsizing, using contract employment, diversifying and venturing into overseas markets. No empirical evidence supports downsizing as a necessary step in reviving productivity. Productivity does not correlate with firm size: a relatively small, focused firm is more productive than a larger, diversified organisation, although diversified companies experienced less fluctuation in both labour and capital productivity. To improve the productivity of the construction sector, it is necessary to remove the negatives and flaws of past practices. The recommended measures include long-term strategic planning, coordinated approaches by government agencies in planning infrastructure development, and the provision of regulatory environments that encourage competition and facilitate productivity improvement.
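The quantitative step described above amounts to windowed correlation tests. A minimal sketch follows, assuming a hypothetical annual data file with CLP and GDP-per-capita columns (the study used official Malaysian statistics for 1970-2009):

```python
# Sketch of cycle-by-cycle correlation between construction labour
# productivity (CLP) and GDP per capita. The CSV file and column names
# are hypothetical stand-ins for the official statistics.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv('malaysia_annual.csv', index_col='year')  # columns: clp, gdp_per_capita

cycles = {'building 1975-2006': (1975, 2006),
          'business 1975-1987': (1975, 1987),
          'business 1987-1999': (1987, 1999),
          'business 1999-2006': (1999, 2006)}

for name, (start, end) in cycles.items():
    window = df.loc[start:end]                 # inclusive label slice by year
    r, p = pearsonr(window['clp'], window['gdp_per_capita'])
    print(f'{name}: r={r:.2f}, p={p:.3f}')     # significant if p < 0.05
```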
Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data are usually available from computed tomography (CT) scans of human cadavers, which generally represent the over-60 age group. Thus, despite the fact that half of the seriously injured population comes from the 30-and-under age group, virtually no data exist from these younger age groups to inform the design of implants that optimally fit these patients; relevant bone data from these age groups are therefore required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are often not trained to perform; therefore, an accurate but relatively simple method is required for segmentation of CT and MRI data. Furthermore, some limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone-soft tissue interface should be performed; this is not reported in the literature. Because MRI scanning of long bones involves very long scanning times, the acquired images are prone to motion artefacts from random movements of the subject's limbs. One artefact observed is the step artefact, believed to arise from random movements of the volunteer during a scan; this needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmenting MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for the reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve the poor articular contrast and long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated by these methods were compared to the reference standard generated from mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method.
A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare 1.5T images to 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images, obtained using identical protocols, were compared by means of the SNR and the contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. To correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and corrected using an alignment method based on the iterative closest point (ICP) algorithm. The present study demonstrated that the multi-threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm, and the difference in accuracy between the two methods was statistically significant. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm, compared with 0.18 mm for the CT-based models; this difference was not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies conferred by poor contrast in the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, yielding errors of 0.32 ± 0.02 mm compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel threshold segmentation, is able to produce 3D models of long bones with accurate geometric representation, and is therefore a potential alternative to the current gold standard, CT imaging.
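A minimal sketch of the ICP idea used for the step correction: alternate nearest-neighbour matching against the reference surface with a best-fit rigid (Kabsch/SVD) transform. This is a bare-bones illustration under stated assumptions, not the study's implementation:

```python
# Bare-bones iterative closest point (ICP): repeatedly match each source
# vertex to its nearest reference point, then apply the least-squares
# rigid transform. Illustrative only; a production pipeline would add
# outlier rejection and convergence checks.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch algorithm); src and dst are (n, 3) arrays of paired points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    r = (u @ vt).T
    if np.linalg.det(r) < 0:      # guard against reflections
        vt[-1] *= -1
        r = (u @ vt).T
    return r, cd - r @ cs

def icp(src, dst, iters=30):
    """Align the (n, 3) source vertices to the (m, 3) reference surface."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)  # closest reference point per vertex
        r, t = best_rigid(cur, dst[idx])
        cur = cur @ r.T + t
    return cur
```

Applied to the stepped segment of a model, with the rest of the bone surface as the reference, this kind of alignment is what brings the displaced vertices back toward the true surface.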