827 results for Long Distance Anaphora


Relevance: 20.00%

Abstract:

In Australia, eligible long day care services may apply for support at the state level to assist with the transition of children from culturally or linguistically diverse backgrounds into childcare settings. For staff in childcare services, this support comes in the form of a cultural support worker (CSW). The primary role of a CSW is to build capacity in childcare staff to support children and families as they enter the childcare program. This paper draws on interview data and documentation from multiple sources to report the perspectives of key stakeholders affiliated with a cultural support program in an Australian childcare setting. It concludes that a more flexible approach to policy that directs the work of CSWs is needed, as well as further research into ways to build capacity for cultural competence for both CSWs and childcare staff who work collaboratively to support young children as they transition to childcare.

Relevance: 20.00%

Abstract:

Orthopaedic fracture fixation implants are increasingly being designed using accurate 3D models of long bones based on computed tomography (CT). Unlike CT, magnetic resonance imaging (MRI) does not involve ionising radiation and is therefore a desirable alternative to CT. This study aims to quantify the accuracy of MRI-based 3D models compared to CT-based 3D models of long bones. The femora of five intact cadaver ovine limbs were scanned using a 1.5T MRI scanner and a CT scanner. Image segmentation of the CT and MRI data was performed using a multi-threshold segmentation method. Reference models were generated by digitising the bone surfaces, free of soft tissue, with a mechanical contact scanner. The MRI- and CT-derived models were validated against the reference models. The results demonstrated that the CT-based models contained an average error of 0.15 mm while the MRI-based models contained an average error of 0.23 mm. Statistical validation showed no significant differences between the 3D models based on CT and MRI data. These results indicate that the geometric accuracy of MRI-based 3D models is comparable to that of CT-based models and that MRI is therefore a potential alternative to CT for the generation of 3D models with high geometric accuracy.
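The validation step described here reduces, in its simplest form, to averaging deviations between a model surface and the reference surface. A toy sketch with hypothetical point values (real comparisons use point-to-surface distances over full 3D meshes, not paired 1D samples):

```python
# Toy sketch of model validation: mean absolute deviation between
# corresponding surface samples of an image-derived model and the
# reference contact scan. Values below are illustrative only.
def mean_deviation(model_pts, ref_pts):
    dists = [abs(m - r) for m, r in zip(model_pts, ref_pts)]
    return sum(dists) / len(dists)

ct = [10.12, 9.95, 10.30]    # hypothetical CT-model surface samples (mm)
ref = [10.00, 10.00, 10.00]  # hypothetical reference samples (mm)
print(round(mean_deviation(ct, ref), 3))  # 0.157
```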

Relevance: 20.00%

Abstract:

Modelling how a word is activated in human memory is an important requirement for determining the probability of recall of a word in an extra-list cueing experiment. The spreading activation, spooky-action-at-a-distance and entanglement models have all been used to model the activation of a word. Recently a hypothesis was put forward that the mean activation levels of the respective models are ordered as follows: spreading ≤ entanglement ≤ spooky-action-at-a-distance. This article investigates this hypothesis by means of a substantial empirical analysis of each model using the University of South Florida word association, rhyme and word fragment norms.
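The spreading activation idea can be illustrated with a toy computation over free-association strengths. The `norms` dictionary below is hypothetical and merely stands in for the USF data, which record the probability of producing each response word to a given cue; the averaging rule is a simplification, not the paper's exact model:

```python
# Minimal sketch of spreading activation over free-association norms.
# norms maps each cue word to its {response: associative strength} table.
def spreading_activation(target, norms):
    """Mean associative strength flowing into `target` from every
    cue word that lists it as a response (0.0 if no cue does)."""
    strengths = [resp[target] for resp in norms.values() if target in resp]
    return sum(strengths) / len(strengths) if strengths else 0.0

norms = {  # hypothetical strengths, for illustration only
    "planet": {"earth": 0.61, "star": 0.18},
    "soil":   {"earth": 0.40, "dirt": 0.35},
}
print(round(spreading_activation("earth", norms), 3))  # 0.505
```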

Relevance: 20.00%

Abstract:

Significant empirical data from the fields of management and business strategy suggest that it is a good idea for a company to make in-house the components and processes underpinning a new technology. Other evidence suggests exactly the opposite: that firms would be better off buying components and processes from outside suppliers. One possible explanation for this lack of convergence is that earlier research in this area has overlooked two important aspects of the problem: reputation and trust. To gain insight into how these variables may affect make-buy decisions throughout the innovation process, the Sporas algorithm for measuring reputation was added to an existing agent-based model of how firms interact with each other during the development of new technologies. The model's results suggest that reputation and trust do not play a significant role in the long-term fortunes of an individual firm as it contends with technological change in the marketplace. Accordingly, this model serves as a cue for management researchers to investigate more thoroughly the temporal limitations and contingencies that determine how trust between firms may affect the R&D process.
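The abstract does not reproduce the Sporas update rule, so the following is only a hedged, simplified sketch in the spirit of Zacharia and Maes's algorithm: reputation moves toward each new rating, with a damping function that shrinks updates as reputation approaches the maximum D. All parameter values and the exact form of the damping are illustrative assumptions, not the published formula:

```python
import math

# Simplified Sporas-style reputation update (illustrative, not the
# exact published algorithm). rep is the current reputation, rating
# is the new feedback in [0, 1], D is the reputation ceiling.
def update_reputation(rep, rating, D=3000.0, theta=10.0, sigma=300.0):
    # Damping is near 1 for low-reputation agents and falls toward 0
    # as rep approaches D, so established reputations change slowly.
    damping = 1.0 - 1.0 / (1.0 + math.exp(-(rep - D) / sigma))
    return rep + (1.0 / theta) * damping * D * (rating - rep / D)

rep = 500.0
for w in [0.9, 0.8, 0.95]:  # a run of good ratings
    rep = update_reputation(rep, w)
print(rep > 500.0)  # good ratings raise reputation
```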

Relevance: 20.00%

Abstract:

Limited extant research examines Latin American consumers' perceptions of holiday destinations. This article measures destination brand equity for Australia as a long-haul destination in the emerging Chilean market. Specifically, it develops a model of consumer-based brand equity (CBBE) to explain attitudinal destination loyalty. The proposed model is tested using data from a sample of Chilean travelers. The findings suggest that brand salience, brand image, and brand value are positively related to brand loyalty for Australia. Further, while brand salience for Australia is strong, as a long-haul destination the country faces significant challenges in converting awareness into intent to visit. Australia is a more compelling destination brand for previous visitors than non-visitors. This implies that a word-of-mouth recommendation from previous visitors, a key component of attitudinal loyalty, is a positive indicator of future growth opportunities for Australia's destination marketers to capitalize on.

Relevance: 20.00%

Abstract:

This paper proposes an innovative instance-similarity-based evaluation metric that reduces the search space for clustering. An aggregate global score is calculated for each instance using the novel idea of the Fibonacci series. The use of Fibonacci numbers separates the instances effectively and, hence, the intra-cluster similarity is increased and the inter-cluster similarity is decreased during clustering. The proposed FIBCLUS algorithm is able to handle datasets with numerical, categorical and a mix of both types of attributes. Results obtained with FIBCLUS are compared with the results of existing algorithms such as k-means, x-means, expectation maximization and hierarchical algorithms that are widely used to cluster numeric, categorical and mixed data types. Empirical analysis shows that FIBCLUS produces better clustering solutions in terms of entropy, purity and F-score in comparison to these existing algorithms.
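The abstract does not give the exact FIBCLUS scoring formula, so the sketch below only illustrates the general idea it names: weighting attribute positions by successive Fibonacci numbers so that each instance collapses to one aggregate global score, and instances with similar attribute profiles land near each other on that scale. The weighting scheme is an assumption for illustration:

```python
from itertools import islice

# Hypothetical Fibonacci-weighted aggregate score per instance
# (an illustration of the idea, not the paper's exact formula).
def fib():
    a, b = 1, 2
    while True:
        yield a
        a, b = b, a + b

def global_score(instance):
    """Sum of attribute values weighted by Fibonacci numbers 1, 2, 3, 5, ..."""
    weights = islice(fib(), len(instance))
    return sum(w * v for w, v in zip(weights, instance))

print(global_score([0.5, 1.0, 0.25]))  # 1*0.5 + 2*1.0 + 3*0.25 = 3.25
```

Clustering would then operate on these scalar scores rather than the full attribute vectors, which is what "reduces the search space" refers to.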

Relevance: 20.00%

Abstract:

A major obstacle in the development of new medications for the treatment of alcohol use disorders (AUDs) has been the lack of preclinical, oral ethanol consumption paradigms that elicit high consumption. We have previously shown that rats exposed to 20% ethanol intermittently in a two-bottle choice paradigm will consume two times more ethanol than those given continuous access without the use of water deprivation or sucrose fading (5-6 g/kg every 24 h vs 2-3 g/kg every 24 h, respectively). In this study, we have adapted the model to an operant self-administration paradigm. Long-Evans rats were given access to 20% ethanol in overnight sessions on one of two schedules: (1) intermittent (Monday, Wednesday, and Friday) or (2) daily (Monday through Friday). With the progression of the overnight sessions, both groups showed a steady escalation in drinking (3-6 g/kg every 14 h) without the use of a sucrose-fading procedure. Following the acquisition phase, the 20% ethanol groups consumed significantly more ethanol than did animals trained to consume 10% ethanol with a sucrose fade (1.5 vs 0.7 g/kg every 30 min) and reached significantly higher blood ethanol concentrations. In addition, training history (20% ethanol vs 10% ethanol with sucrose fade) had a significant effect on the subsequent self-administration of higher concentrations of ethanol. Administration of the pharmacological stressor yohimbine following extinction caused a significant reinstatement of ethanol-seeking behavior. Both 20% ethanol models show promise and are amenable to the study of maintenance, motivation, and reinstatement. Furthermore, training animals to lever press for ethanol without the use of sucrose fading removes a potential confound from self-administration studies. © 2010 Nature Publishing Group All rights reserved.

Relevance: 20.00%

Abstract:

Twitter is now well established as the world's second most important social media platform, after Facebook. Its 140-character updates are designed for brief messaging, and its network structures are kept relatively flat and simple: messages from users are either public and visible to all (even to unregistered visitors using the Twitter website), or private and visible only to approved ‘followers’ of the sender; there are no more complex definitions of degrees of connection (family, friends, friends of friends) such as are available in other social networks. Over time, Twitter users have developed simple but effective mechanisms for working around these limitations: ‘#hashtags’, which enable the manual or automatic collation of all tweets containing the same #hashtag, as well as allowing users to subscribe to content feeds that contain only those tweets which feature specific #hashtags; and ‘@replies’, which allow senders to direct public messages even to users whom they do not already follow. This paper documents a methodology for extracting public Twitter activity data around specific #hashtags, and for processing these data in order to analyse and visualize the @reply networks existing between participating users: both overall, as a static network, and over time, to highlight the dynamic structure of @reply conversations. Such visualizations enable us to highlight the shifting roles played by individual participants, as well as the response of the overall #hashtag community to new stimuli, such as the entry of new participants or the availability of new information. Over longer timeframes, it is also possible to identify different phases in the overall discussion, or the formation of distinct clusters of preferentially interacting participants.
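The first processing step of such a methodology, turning a collection of #hashtag tweets into a weighted directed @reply network, can be sketched with the standard library alone. The tweets and hashtag below are invented for illustration; a real pipeline would ingest data from the Twitter API:

```python
import re
from collections import Counter

MENTION = re.compile(r'@(\w+)')  # a Twitter-style @username token

def reply_network(tweets):
    """Build a weighted directed edge list: (sender, mentioned_user)
    counted once per mention across all collected tweets."""
    edges = Counter()
    for sender, text in tweets:
        for target in MENTION.findall(text):
            edges[(sender, target)] += 1
    return edges

tweets = [  # hypothetical #ausvotes sample
    ("alice", "@bob great point about #ausvotes"),
    ("bob",   "@alice thanks! see also @carol #ausvotes"),
    ("alice", "@bob agreed #ausvotes"),
]
net = reply_network(tweets)
print(net[("alice", "bob")])  # 2
```

Re-running this over successive time windows yields the dynamic @reply networks the paper visualises.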

Relevance: 20.00%

Abstract:

Our research explores the design of networked technologies to facilitate local suburban communication and to encourage people to engage with their local community. While there are many investigations of interaction designs for networked technologies, most research utilises small exercises, workshops or other short-term studies to investigate interaction designs. However, we have found these short-term methods to be ineffective for understanding local community interaction. Moreover, we find that people are resistant to putting their time into workshops and exercises, understandably so, because these are academic practices, not local community practices. Our contribution is to detail a long-term embedded design approach in which we interact with the community, over the long term and in the course of normal community goings-on, with an evolving exploratory prototype. This paper discusses this embedded approach to working in the wild for extended field research.

Relevance: 20.00%

Abstract:

Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are the scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit self-similarity from the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton, so for a fixed box size more boxes are needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
By using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This finding is useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary over different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
This multifractal analysis thus provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes by vectors of a certain length in the time series, and the weight of the edge between any two nodes by the Euclidean distance between the corresponding vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence a larger Hurst exponent, tend to have a smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions, as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts; for HVG networks of fractional Brownian motions, by contrast, the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
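The HVG construction used throughout this part of the thesis has a compact definition: two time points i < j are linked exactly when every intermediate value lies strictly below both endpoints. A minimal quadratic-time sketch (production implementations use faster algorithms):

```python
# Horizontal visibility graph (HVG) of a time series: nodes are time
# indices; i < j are linked iff series[k] < min(series[i], series[j])
# for every intermediate index k. Adjacent points are always linked
# (there are no intermediates, so the condition holds vacuously).
def hvg_edges(series):
    n = len(series)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

print(hvg_edges([3, 1, 2, 4]))  # [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]
```

The fractal and degree-distribution analyses described above then operate on the resulting graph rather than on the raw series.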

Relevance: 20.00%

Abstract:

The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers that generally represent the above-60-year-old age group. Thus, despite the fact that half of the seriously injured population comes from the 30-year age group and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of the long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for the segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone-soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones has very long scanning times, the acquired images are prone to motion artefacts caused by random movements of the subject's limbs.
One of the artefacts observed is the step artefact, believed to occur from random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for the segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for the reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve on the poor articular contrast and long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was used by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated from these methods were compared against the reference standard obtained from mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method. A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images, obtained using identical protocols, were compared by means of the SNR and the contrast-to-noise ratio (CNR) of muscle, bone marrow and bone.
In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner. The step was corrected using an alignment method based on the iterative closest point (ICP) algorithm. The present study demonstrated that the multi-threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm. There was a statistically significant difference between the accuracy of the models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited a 0.23 mm average deviation, compared with the 0.18 mm average deviation of the CT-based models; the difference was not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of the femora and tibiae, potentially reducing the inaccuracies conferred by the poor contrast of the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, generating errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
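The multilevel thresholding idea, a separate intensity threshold for the proximal, diaphyseal and distal regions rather than one global level, can be sketched on a toy 1D intensity profile. The data, the even three-way split and the threshold values are all illustrative assumptions, not the thesis pipeline, which operates on full image volumes:

```python
# Hypothetical sketch of region-wise multilevel thresholding: the
# profile is split into three regions (proximal / diaphyseal / distal)
# and each region gets its own threshold, yielding a binary bone mask.
def multilevel_threshold(profile, thresholds):
    n = len(profile)
    bounds = [0, n // 3, 2 * n // 3, n]  # even three-way split (toy choice)
    mask = []
    for (lo, hi), t in zip(zip(bounds, bounds[1:]), thresholds):
        mask.extend(v > t for v in profile[lo:hi])
    return mask

profile = [10, 50, 80, 120, 160, 90, 200, 220, 30]  # toy intensities
print(sum(multilevel_threshold(profile, [60, 100, 150])))  # 5 voxels kept
```

Using region-specific thresholds lets each anatomical region keep a level suited to its local intensity range, which is why the thesis found it more accurate than a single global threshold.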