286 results for Long cycles


Relevance: 20.00%

Abstract:

The Giant Long-Armed Prawn, Macrobrachium lar, is a freshwater species native to the Indo-Pacific. M. lar has a long-lived, passive, pelagic marine larval stage in which larvae need to colonise freshwater within three months to complete their development. Dispersal is likely to be influenced by the extensive distances larvae must transit between small oceanic islands to find suitable freshwater habitat, and by the prevailing east-to-west wind and ocean currents in the southern Pacific Ocean. Thus, both intrinsic and extrinsic factors are likely to influence wild population structure in this species. The present study sought to define the contemporary broad- and fine-scale population genetic structure of Macrobrachium lar in the south-western Pacific Ocean. Three polymorphic microsatellite loci were used to assess patterns of genetic variation within and among 19 wild adult sample sites. Statistical procedures that partition variation implied that, at both spatial scales, essentially all variation was present within sample sites and differentiation among sites was low. Observed differentiation was also not correlated with geographical distance. At the broad scale, statistical approaches that measure genetic distance showed that all south-western Pacific islands were essentially homogeneous, with the exception of a well-supported divergent Cook Islands group. These findings likely result from a combination of factors, ranging from the potential for allelic homoplasy to the effects of the sampling regime. Based on the findings, there is most likely a divergent M. lar Cook Islands clade in the south-western Pacific Ocean, resulting from prevailing ocean currents. Confirmation of this pattern will require a more detailed analysis of nDNA variation using a larger number of loci and, where possible, larger sample sizes.
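
The abstract does not name the statistic used for the distance correlation; a Mantel test is the standard tool for relating genetic and geographic distance matrices, so a minimal sketch of one is given below (Python with numpy; the matrices and site count are purely illustrative, not the study's data).

    import numpy as np

    def mantel(gen_d, geo_d, n_perm=9999, seed=0):
        """Mantel test: permutation correlation between two symmetric
        distance matrices (e.g. pairwise genetic distance vs. km)."""
        rng = np.random.default_rng(seed)
        n = gen_d.shape[0]
        iu = np.triu_indices(n, k=1)             # upper triangle only
        r_obs = np.corrcoef(gen_d[iu], geo_d[iu])[0, 1]
        hits = 0
        for _ in range(n_perm):
            p = rng.permutation(n)               # relabel sites at random
            r = np.corrcoef(gen_d[np.ix_(p, p)][iu], geo_d[iu])[0, 1]
            hits += r >= r_obs
        return r_obs, (hits + 1) / (n_perm + 1)  # one-tailed p-value

    # Hypothetical 4-site example:
    gen = np.array([[0.00, 0.02, 0.03, 0.15],
                    [0.02, 0.00, 0.01, 0.14],
                    [0.03, 0.01, 0.00, 0.16],
                    [0.15, 0.14, 0.16, 0.00]])
    geo = np.array([[0, 200, 450, 3000],
                    [200, 0, 300, 2900],
                    [450, 300, 0, 3100],
                    [3000, 2900, 3100, 0]])
    r, p = mantel(gen, geo)
    print(f"Mantel r = {r:.3f}, one-tailed p = {p:.4f}")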

Relevance: 20.00%

Abstract:

Significant empirical data from the fields of management and business strategy suggest that it is a good idea for a company to make in-house the components and processes underpinning a new technology. Other evidence suggests exactly the opposite: that firms would be better off buying components and processes from outside suppliers. One possible explanation for this lack of convergence is that earlier research in this area has overlooked two important aspects of the problem: reputation and trust. To gain insight into how these variables may impact make-buy decisions throughout the innovation process, the Sporas algorithm for measuring reputation was added to an existing agent-based model of how firms interact with each other throughout the development of new technologies. The model's results suggest that reputation and trust do not play a significant role in the long-term fortunes of an individual firm as it contends with technological change in the marketplace. Accordingly, this model serves as a cue for management researchers to investigate more thoroughly the temporal limitations and contingencies that determine how trust between firms may affect the R&D process.
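
For readers unfamiliar with Sporas, the sketch below shows the shape of its reputation update as presented in Zacharia et al.'s formulation: a damping function slows changes for highly reputed agents, and each update is weighted by the rater's own reputation. The parameter values and exact damping form here are assumptions to be checked against the original paper; this is not the model used in this study.

    import math

    def sporas_update(R_prev, rating, rater_rep,
                      D=3000.0, theta=10.0, sigma=300.0):
        """One Sporas-style reputation update.

        R_prev    -- ratee's current reputation, in [0, D]
        rating    -- new rating W, in [0.1, 1.0]
        rater_rep -- reputation of the agent giving the rating
        D, theta, sigma -- max reputation, effective memory size and
        damping width (illustrative values, not canonical ones).
        """
        expected = R_prev / D                    # expected rating E = R/D
        damping = 1.0 - 1.0 / (1.0 + math.exp(-(R_prev - D) / sigma))
        return R_prev + (damping / theta) * rater_rep * (rating - expected)

    # A newcomer repeatedly rated 0.9 by a mid-reputation rater:
    R = 0.0
    for _ in range(50):
        R = sporas_update(R, rating=0.9, rater_rep=1500.0)
    print(f"reputation after 50 ratings: {R:.0f}")   # climbs toward 0.9 * D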

Relevance: 20.00%

Abstract:

Limited extant research examines Latin American consumers' perceptions of holiday destinations. This article measures destination brand equity for Australia as a long-haul destination in the emerging Chilean market. Specifically, it develops a model of consumer-based brand equity (CBBE) to explain attitudinal destination loyalty. The proposed model is tested using data from a sample of Chilean travelers. The findings suggest that brand salience, brand image, and brand value are positively related to brand loyalty for Australia. Further, while brand salience for Australia is strong, as a long-haul destination the country faces significant challenges in converting awareness into intent to visit. Australia is a more compelling destination brand for previous visitors than non-visitors. This implies that a word-of-mouth recommendation from previous visitors, a key component of attitudinal loyalty, is a positive indicator of future growth opportunities for Australia's destination marketers to capitalize on.

Relevance: 20.00%

Abstract:

A major obstacle in the development of new medications for the treatment of alcohol use disorders (AUDs) has been the lack of preclinical, oral ethanol consumption paradigms that elicit high consumption. We have previously shown that rats exposed to 20% ethanol intermittently in a two-bottle choice paradigm will consume two times more ethanol than those given continuous access without the use of water deprivation or sucrose fading (5-6 g/kg every 24 h vs 2-3 g/kg every 24 h, respectively). In this study, we have adapted the model to an operant self-administration paradigm. Long-Evans rats were given access to 20% ethanol in overnight sessions on one of two schedules: (1) intermittent (Monday, Wednesday, and Friday) or (2) daily (Monday through Friday). With the progression of the overnight sessions, both groups showed a steady escalation in drinking (3-6 g/kg every 14 h) without the use of a sucrose-fading procedure. Following the acquisition phase, the 20% ethanol groups consumed significantly more ethanol than did animals trained to consume 10% ethanol with a sucrose fade (1.5 vs 0.7 g/kg every 30 min) and reached significantly higher blood ethanol concentrations. In addition, training history (20% ethanol vs 10% ethanol with sucrose fade) had a significant effect on the subsequent self-administration of higher concentrations of ethanol. Administration of the pharmacological stressor yohimbine following extinction caused a significant reinstatement of ethanol-seeking behavior. Both 20% ethanol models show promise and are amenable to the study of maintenance, motivation, and reinstatement. Furthermore, training animals to lever press for ethanol without the use of sucrose fading removes a potential confound from self-administration studies.

Relevance: 20.00%

Abstract:

Twitter is now well established as the world's second most important social media platform, after Facebook. Its 140-character updates are designed for brief messaging, and its network structures are kept relatively flat and simple: messages from users are either public and visible to all (even to unregistered visitors using the Twitter website), or private and visible only to approved 'followers' of the sender; there are no more complex definitions of degrees of connection (family, friends, friends of friends) such as are available in other social networks. Over time, Twitter users have developed simple but effective mechanisms for working around these limitations: '#hashtags', which enable the manual or automatic collation of all tweets containing the same #hashtag, as well as allowing users to subscribe to content feeds that contain only those tweets which feature specific #hashtags; and '@replies', which allow senders to direct public messages even to users whom they do not already follow. This paper documents a methodology for extracting public Twitter activity data around specific #hashtags, and for processing these data in order to analyse and visualize the @reply networks existing between participating users, both overall, as a static network, and over time, to highlight the dynamic structure of @reply conversations. Such visualizations enable us to highlight the shifting roles played by individual participants, as well as the response of the overall #hashtag community to new stimuli, such as the entry of new participants or the availability of new information. Over longer timeframes, it is also possible to identify different phases in the overall discussion, or the formation of distinct clusters of preferentially interacting participants.
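
The paper's own extraction toolchain is not described in this excerpt; as a minimal sketch of the core processing step, the following Python code builds a weighted, directed @reply network from captured (sender, text) pairs using networkx. The tweets and field layout are hypothetical.

    import re
    from collections import Counter

    import networkx as nx

    # Hypothetical captured tweets for one #hashtag: (sender, text) pairs.
    tweets = [
        ("alice", "Counting begins soon #example @bob thoughts?"),
        ("bob",   "@alice the early numbers look odd #example"),
        ("carol", "Following #example tonight"),
        ("bob",   "@carol welcome to the thread #example"),
    ]

    AT_PATTERN = re.compile(r"@(\w+)")

    G = nx.DiGraph()
    for sender, text in tweets:
        G.add_node(sender)
        for target in AT_PATTERN.findall(text):
            # Accumulate a weight so repeated @replies thicken the edge.
            weight = G.get_edge_data(sender, target, {}).get("weight", 0)
            G.add_edge(sender, target, weight=weight + 1)

    # Who receives the most @replies in this #hashtag community?
    in_degree = Counter(dict(G.in_degree(weight="weight")))
    print(in_degree.most_common(3))

For the dynamic view described in the paper, the same construction would be repeated over a sliding time window, yielding one network per interval.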

Relevance: 20.00%

Abstract:

Our research explores the design of networked technologies to facilitate local suburban communication and to encourage people to engage with their local community. While there are many investigations of interaction designs for networked technologies, most research relies on small exercises, workshops or other short-term studies. However, we have found these short-term methods to be ineffective for understanding local community interaction. Moreover, we find that people are resistant to putting their time into workshops and exercises, understandably so, because these are academic practices, not local community practices. Our contribution is to detail a long-term embedded design approach in which we interact with the community over the long term, in the course of normal community goings-on, with an evolving exploratory prototype. This paper discusses this embedded approach to working in the wild for extended field research.

Relevance: 20.00%

Abstract:

Complex networks have been studied extensively owing to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and for the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity.

The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool for computing the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distances between nodes are larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks for five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. Using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This will be useful for our treatment of the networks in the third part of the thesis.

The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since they consist of a geometrical figure that repeats on an ever-reduced scale; fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous: there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterise the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species.
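
As a sketch of the random sequential box-covering algorithm mentioned above (one common variant of it, not the new multifractal algorithm this thesis proposes), in Python with networkx; the test graph is arbitrary:

    import random

    import networkx as nx

    def random_sequential_box_count(G, radius, seed=0):
        """Cover G with boxes of the given radius: repeatedly pick a
        random uncovered node as a box centre and cover every uncovered
        node within `radius` hops of it. For a fractal network the box
        count scales as N_B(l) ~ l**(-d_B)."""
        rng = random.Random(seed)
        uncovered = set(G.nodes())
        boxes = 0
        while uncovered:
            centre = rng.choice(tuple(uncovered))
            ball = nx.single_source_shortest_path_length(G, centre,
                                                         cutoff=radius)
            uncovered -= set(ball)   # everything in the ball is now covered
            boxes += 1
        return boxes

    # d_B is then estimated by regressing log N_B against log box size:
    G = nx.barabasi_albert_graph(2000, 2, seed=1)
    for r in (1, 2, 3, 4):
        print(r, random_sequential_box_count(G, r))
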
Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. Multifractal analysis thus provides a potentially useful tool for gene clustering and identification.

The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterising complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length in the time series, and weight the edge between any two nodes by the Euclidean distance between the corresponding vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence a larger Hurst exponent, tend to have smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently, and confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions, as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
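
The horizontal visibility graph construction used in this part has a compact definition: two time points are linked when every point strictly between them lies below both. A minimal sketch of the standard HVG definition (independent of the thesis's own code), with a random walk standing in for fractional Brownian motion:

    import networkx as nx
    import numpy as np

    def horizontal_visibility_graph(series):
        """Nodes are time indices; i and j (i < j) are linked iff every
        intermediate value is lower than both series[i] and series[j]."""
        G = nx.Graph()
        n = len(series)
        G.add_nodes_from(range(n))
        for i in range(n - 1):
            G.add_edge(i, i + 1)          # neighbours always see each other
            running_max = series[i + 1]   # highest point between i and j
            for j in range(i + 2, n):
                if running_max < min(series[i], series[j]):
                    G.add_edge(i, j)
                running_max = max(running_max, series[j])
                if running_max >= series[i]:
                    break                 # nothing beyond j is visible from i
        return G

    # Random walk as a crude stand-in for fBm with H = 0.5:
    rng = np.random.default_rng(0)
    walk = np.cumsum(rng.standard_normal(500))
    G = horizontal_visibility_graph(walk)
    print("mean degree:", 2 * G.number_of_edges() / G.number_of_nodes())

The mean degree printed at the end should come out roughly 4, the known asymptotic value for HVGs of aperiodic series.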

Relevance: 20.00%

Abstract:

Growth in productivity is the key determinant of the long-term health and prosperity of an economy, and since the construction industry is one of major strategic importance, its productivity performance has a significant effect on national economic growth. The relationship between construction output and the economy has been studied intensively, but there is a lack of empirical study of the relationship between construction productivity and economic fluctuations. Fluctuations in construction output are endemic in the industry, caused partly by the boom and slump of the economy as a whole and partly by the nature of the construction product. This research aims to uncover how the productivity of the construction sector is influenced by economic fluctuations in Malaysia.

Malaysia has adopted three economic policies since gaining independence in 1957: the New Economic Policy (1971-1990), the National Development Policy (1991-2000) and the National Vision Policy (2001-2010). The Privatisation Master Plan was introduced in 1991. Operating within this historical context, the Malaysian construction sector has experienced four business cycles since 1960. A mixed-method design was adopted in this study: quantitative analysis was conducted on the published official statistics of the construction industry and the overall economy in Malaysia between 1970 and 2009, and the qualitative study involved interviews with a purposive sample of 21 industry participants.

This study identified a 32-year building cycle spanning 1975-2006, superimposed with three shorter construction business cycles in 1975-1987, 1987-1999 and 1999-2006. The correlations between construction labour productivity (CLP) and GDP per capita are statistically significant for the 1975-2006 building cycle and for the 1987-1999 and 1999-2006 construction business cycles, but not for the 1975-1987 cycle. The Construction Industry Surveys/Census over the period 1996 to 2007 show that the average growth rate of total output per employee expanded while value added per employee contracted, implying high costs of bought-in materials and services and inefficient use of purchases. Construction labour productivity peaked in 2004, even though the construction sector contracted in that year. The residential subsector performed better than the other subsectors on most productivity indicators. Improvements are found in output per employee, value added per employee, labour competitiveness and capital investment, but declines are recorded in value-added content and capital productivity. Civil engineering construction has the highest labour productivity but relatively poor capital productivity. Labour cost is more competitive in larger establishments, and value added per unit of labour cost is higher there, which is attributable to more efficient utilization of capital.

The interviews with industry participants reveal that the productivity of the construction sector is influenced by the economic environment, construction methods, contract arrangements, payment chains and regulatory policies. Fluctuations in construction demand have caused companies to switch to defensive strategies during economic downturns, aiming to ensure short-term survival rather than long-term profit and growth. This leads companies to take drastic measures to curb expenses: downsizing, contract employment, diversification and venturing into overseas markets. However, there is no empirical evidence that downsizing is a necessary step in reviving productivity, and productivity does not correlate with firm size: a relatively small, focused firm is more productive than a larger, diversified organisation, although diversified companies experienced less fluctuation in both labour and capital productivity.

To improve the productivity of the construction sector, it is necessary to remove the negatives and flaws of past practices. The recommended measures include long-term strategic planning, coordinated approaches by government agencies in planning infrastructure development, and regulatory environments that encourage competition and facilitate productivity improvement.

Relevance: 20.00%

Abstract:

The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data are usually available from computed tomography (CT) scans of human cadavers, which generally represent the over-60 age group. Thus, despite the fact that half of the seriously injured population comes from the 30-year age group and below, virtually no data exist from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups are required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required.

Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform; therefore, an accurate but relatively simple segmentation method is required for segmenting CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone-soft tissue interface is needed, and this is not reported in the literature. Because MRI scanning of long bones involves very long scanning times, the acquired images are prone to motion artefacts from random movements of the subject's limbs. One artefact observed is the step artefact, believed to arise from random movements of the volunteer during a scan; this needs to be corrected before the models can be used for implant design.

The first aim of this study was to investigate two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmenting MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for the reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve on the poor articular contrast and long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques.

The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated by these methods were compared to the reference standard generated from mechanical contact scans of the denuded bone. The second aim was addressed using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method. A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To compare the 1.5T images quantitatively with the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images, obtained using identical protocols, were compared by means of the SNR and the contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. To correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and corrected using an alignment method based on the iterative closest point (ICP) algorithm.

The present study demonstrated that the multilevel threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm, a statistically significant difference in accuracy between the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm compared with 0.18 mm for CT-based models; the difference was not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies caused by poor contrast in the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, with errors of 0.32 ± 0.02 mm relative to the reference standard.

The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
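
As an illustration of the two simple segmentation approaches compared here (the thesis's actual threshold selection method and pipeline are not reproduced), a minimal scikit-image sketch on a single 2D slice; the synthetic image and parameter choices are assumptions:

    import numpy as np
    from skimage import feature, filters, measure

    # A synthetic slice stands in for one 2D CT/MRI image of the femur.
    rng = np.random.default_rng(0)
    slice_img = rng.normal(0.2, 0.05, (256, 256))
    slice_img[96:160, 96:160] += 0.6   # bright block standing in for bone

    # 1) Intensity thresholding: one global threshold (Otsu used here as
    #    a stand-in for the visually selected / calculated thresholds;
    #    the multilevel variant applies separate thresholds to the
    #    proximal, diaphyseal and distal sub-volumes).
    t = filters.threshold_otsu(slice_img)
    bone_mask = slice_img > t

    # 2) Canny edge detection: outer and inner contours of cortical
    #    bone, later stacked across slices to form the 3D surface.
    edges = feature.canny(slice_img, sigma=2.0)

    contours = measure.find_contours(bone_mask.astype(float), 0.5)
    print(f"threshold = {t:.3f}, contours found = {len(contours)}")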

Relevance: 20.00%

Abstract:

In this video, a male voice recites a script composed entirely of jokes. Words flash on screen in time with the spoken words; sometimes the two sets of words match, and sometimes they differ. This work examines processes of signification. It emphasizes disruption and disconnection as fundamental and generative operations in making meaning. Extending post-structural and deconstructionist ideas, the work questions the relationship between written and spoken words. By deliberately confusing the signifying structures of jokes and narratives, it questions the sites and mechanisms of comprehension, humour and signification.

Relevance: 20.00%

Abstract:

The chapter reflects, from a facilitator's perspective ten years later, on the first two years of the Restart Scheme introduced by the Manpower Services Commission for long-term unemployed people in the UK. It examines the actual weekly program for participants, with case examples from one of the pilot centres, Crawley College, West Sussex, in an area of low unemployment. The observations suggest that even in a place with many job vacancies, 3-4% of the population will be unable to compete for jobs and participate in the workforce unless sheltered workshops and specialized training initiatives are established.

Relevance: 20.00%

Abstract:

Background: In Booth v Amaca Pty Ltd and Amaba Pty Ltd,1 the New South Wales Dust Diseases Tribunal awarded a retired motor mechanic $326 640 in damages for his malignant pleural mesothelioma, allegedly caused by exposure to asbestos through working with brake linings manufactured by the defendants. The evidence before the Tribunal was that, before working as a mechanic, the plaintiff had been exposed to asbestos during home renovations as a child and while loading a truck as a youth. As a mechanic, however, he was exposed to asbestos in the brake linings on which he worked from 1953 to 1983. Curtis DCJ held at [172] that the asbestos from the brake linings ‘materially contributed to [the plaintiff’s] contraction of mesothelioma’. This decision was based upon acceptance that the effect of exposure to asbestos on the development of mesothelioma is cumulative, and rejection of the theory that a single fibre of asbestos can cause the disease...

Relevance: 20.00%

Abstract:

Designing practical rules for controlling invasive species is a challenging task for managers, particularly when species are long-lived and have complex life cycles and high dispersal capacities. Previous findings derived from plant matrix population analyses suggest that effective control of long-lived invaders may be achieved by focusing on killing adult plants. However, the cost-effectiveness of managing different life stages has not been evaluated. We illustrate the benefits of integrating matrix population models with decision theory to undertake this evaluation, using empirical data from the largest infestation of mesquite (Leguminosae: Prosopis spp.) within Australia. We include in our model the mesquite life cycle, different dispersal rates, and control actions that target individuals at different life stages with varying costs depending on the intensity of the control effort. We then use stochastic dynamic programming to derive cost-effective control strategies that minimize the cost of controlling the core infestation locally below a density threshold and the future cost of control arising from infestation of adjacent areas via seed dispersal. Through sensitivity analysis, we show that four robust management rules guide the allocation of resources between mesquite life stages for this infestation: (i) when there is no seed dispersal, no action is required until the density of adults exceeds the control threshold, and then only control of adults is needed; (ii) when there is seed dispersal, the control strategy depends only on knowledge of the densities of adults and large juveniles (LJ) and on broad categories of dispersal rates; (iii) if the density of adults is higher than the density of LJ, controlling adults is most cost-effective; (iv) alternatively, if the density of LJ is equal to or higher than the density of adults, management effort should be spread between adults, large juveniles and, to a lesser extent, small juveniles, but never saplings. Synthesis and applications. In this study, we show that simple rules can be found for managing invasive plants with complex life cycles and high dispersal rates when population models are combined with decision theory. In the case of our mesquite population, focusing effort on controlling adults is not always the most cost-effective way to meet the management objective.
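
The matrix-model half of this machinery is easy to illustrate. The sketch below uses a hypothetical four-stage mesquite matrix (the vital rates are invented, not the paper's fitted values) to show how the asymptotic growth rate responds when control is directed at different life stages; the paper's full analysis layers costs, dispersal and stochastic dynamic programming on top of this.

    import numpy as np

    # Hypothetical stage-structured (Lefkovitch) matrix for mesquite:
    # stages are seedling, sapling, juvenile, adult. Columns give each
    # stage's contributions to next year's stages. Rates are illustrative.
    A = np.array([
        [0.00, 0.00, 0.00, 25.0],   # fecundity: adults produce seedlings
        [0.10, 0.55, 0.00, 0.00],   # seedling -> sapling; saplings persist
        [0.00, 0.30, 0.60, 0.00],   # sapling -> juvenile; juveniles persist
        [0.00, 0.00, 0.25, 0.95],   # juvenile -> adult; adults persist
    ])

    def growth_rate(M):
        """Dominant eigenvalue = asymptotic annual growth rate lambda."""
        return max(abs(np.linalg.eigvals(M)))

    def with_control(M, stage, kill_fraction):
        """Kill a fraction of one stage each year, before it survives,
        grows or reproduces (i.e. scale that stage's column)."""
        C = M.copy()
        C[:, stage] *= 1.0 - kill_fraction
        return C

    print(f"uncontrolled lambda: {growth_rate(A):.3f}")
    for stage, name in enumerate(["seedling", "sapling",
                                  "juvenile", "adult"]):
        lam = growth_rate(with_control(A, stage, 0.50))
        print(f"kill 50% of {name:8s}: lambda = {lam:.3f}")

In this toy parameterisation, removing adults depresses lambda far more than the same effort aimed at seedlings, consistent with the "focus on adults" intuition the paper sets out to test against costs.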