339 results for Time analysis


Relevance: 30.00%

Abstract:

Selecting an appropriate business process modelling technique forms an important task within the methodological challenges of a business process management project. While a plethora of techniques has been developed over recent decades, there is an obvious shortage of well-accepted reference frameworks that can be used to evaluate and compare the capabilities of the different techniques. Academic progress has been made at least in the area of representational analyses, which use ontology as a benchmark for such evaluations. This paper reflects on the comprehensive experiences with the application of a model based on the Bunge ontology in this context. A brief overview of the underlying research model characterizes the different steps in such a research project. A comparative summary of previous representational analyses of process modelling techniques over time gives insights into the relative maturity of selected process modelling techniques. Based on these experiences, suggestions are made as to where ontology-based representational analyses could be further developed and what limitations are inherent in such analyses.

Relevance: 30.00%

Abstract:

Construction is an information-intensive industry in which the accuracy and timeliness of information are paramount. The main communication issue in construction is providing a method to exchange data between the site operation, the site office, and the head office. The information needs under consideration are time critical and assist in maintaining or improving efficiency at the jobsite; without appropriate computing support, problem solving becomes more difficult. Many researchers have focused on the use of mobile computing devices in the construction industry, believing that mobile computers have the potential to solve construction problems that reduce overall productivity. However, to date very little observation has been conducted of the deployment of mobile computers for construction workers on-site. Providing field workers with accurate, reliable, and timely information at the location where it is needed supports effectiveness and efficiency at the job site. Bringing a new technology into the construction industry requires not only a better understanding of the application, but also proper preparation for the allocation of resources such as people and investment. With this in mind, an accurate analysis is needed to provide a clear idea of the overall costs and benefits of the new technology. A cost-benefit analysis is a method of evaluating the relative merits of a proposed investment project in order to achieve an efficient allocation of resources. It is a way of identifying, portraying, and assessing the factors which need to be considered in making rational economic choices. In principle, a cost-benefit analysis is a rigorous, quantitative, and data-intensive procedure, which requires identification of all potential effects, categorisation of these effects as costs and benefits, quantitative estimation of the extent of each cost and benefit associated with an action, translation of these into a common metric such as dollars, discounting of future costs and benefits into the terms of a given year, and summation of all costs and benefits to see which is greater. Even though many cost-benefit analysis methodologies are available for general assessment, there is no specific methodology that can be applied to analysing the costs and benefits of mobile computing devices on the construction site. Hence, the methodology proposed in this document is predominantly adapted from Baker et al. (2000), Department of Finance (1995), and Office of Investment Management (2005). The methodology is divided into four main stages, detailed in ten steps. It was developed for the CRC CI 2002-057-C Project: Enabling Team Collaboration with Pervasive and Mobile Computing, and is described in detail in Section 3.
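The discounting and summation steps just described can be made concrete with a short sketch; the cash flows, cost categories, and discount rate below are purely illustrative, not figures from the project:

```python
# Minimal cost-benefit sketch: discount yearly costs and benefits of a
# hypothetical mobile-computing deployment to present value and compare.
# All figures and category labels are illustrative assumptions.

def present_value(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) at `rate`."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Year-by-year estimates: devices and training up front, then support
# costs and productivity benefits in later years.
costs = [120_000, 15_000, 15_000, 15_000]   # acquisition + ongoing support
benefits = [0, 60_000, 65_000, 70_000]      # time saved on site

DISCOUNT_RATE = 0.07  # assumed discount rate

pv_costs = present_value(costs, DISCOUNT_RATE)
pv_benefits = present_value(benefits, DISCOUNT_RATE)

print(f"PV costs:           {pv_costs:,.0f}")
print(f"PV benefits:        {pv_benefits:,.0f}")
print(f"Net present value:  {pv_benefits - pv_costs:,.0f}")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
```

Expressing everything in the dollars of a single base year is what makes the final comparison of costs against benefits meaningful.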

Relevance: 30.00%

Abstract:

Enhancing children's self-concepts is widely accepted as a critical educational outcome of schooling and is postulated as a mediating variable that facilitates the attainment of other desired outcomes such as improved academic achievement. Despite considerable advances in self-concept research, there has been limited progress in devising teacher-administered enhancement interventions. This is unfortunate as teachers are crucial change agents during important developmental periods when self-concept is formed. The primary aim of the present investigation is to build on the promising features of previous self-concept enhancement studies by: (a) combining two exciting research directions developed by Burnett and Craven to develop a potentially powerful cognitive-based intervention; (b) incorporating recent developments in theory and measurement to ensure that the multidimensionality of self-concept is accounted for in the research design; (c) fully investigating the effects of a potentially strong cognitive intervention on reading, mathematics, school and learning self-concepts by using a large sample size and a sophisticated research design; (d) evaluating the effects of the intervention on affective and cognitive subcomponents of reading, mathematics, school and learning self-concepts over time to test for differential effects of the intervention; (e) modifying and extending current procedures to maximise the successful implementation of a teacher-mediated intervention in a naturalistic setting by incorporating sophisticated teacher training as suggested by Hattie (1992) and including an assessment of the efficacy of implementation; and (f) examining the durability of effects associated with the intervention.

Relevance: 30.00%

Abstract:

BACKGROUND: Literature and clinical experience suggest that some people experience atypical, complicated or pathological bereavement reactions in response to a major loss. METHOD: Three groups of community-based bereaved subjects (spouses, n = 44; adult children, n = 40; parents, n = 36) were followed up four times in the 13 months after a loss. A 17-item scale of core bereavement items was developed and used to investigate the intensity of the bereavement response over time. RESULTS: Cluster analysis revealed a pattern of bereavement-related symptoms approximating a syndrome of chronic grief in 11 (9.2%) of the 120 subjects. None of the respondents displayed a pattern consistent with delayed or absent grief. CONCLUSIONS: In a non-clinical community sample of bereaved people, delayed or absent grief is infrequently seen, unlike chronic grief, which was demonstrated in a minority.
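The cluster-analysis step can be illustrated, though the sketch below is not the authors' procedure: it applies k-means to simulated four-wave scale-score trajectories to separate a declining-grief majority from a persistently high (chronic) minority:

```python
# Illustrative clustering of bereavement-response trajectories: each subject
# has a scale score at four follow-up waves; k-means groups trajectory
# shapes. Data are simulated, not the study's; k = 2 is an assumption.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# 120 subjects x 4 waves: most decline over time, a minority stay high.
declining = 40 - np.outer(rng.uniform(6, 10, size=109), np.arange(4))
chronic = 45 + rng.normal(0, 2, size=(11, 4))   # persistently high scores
scores = np.vstack([declining, chronic])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
for k in range(2):
    print(f"cluster {k}: n={np.sum(labels == k)}, "
          f"mean trajectory={scores[labels == k].mean(axis=0).round(1)}")
```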

Relevance: 30.00%

Abstract:

With increasingly complex engineering assets and tight economic requirements, asset reliability becomes more crucial in Engineering Asset Management (EAM). Improving the reliability of systems has always been a major aim of EAM. Reliability assessment using degradation data has become a significant approach for evaluating the reliability and safety of critical systems, since degradation data often provide more information than failure-time data for assessing reliability and predicting the remnant life of systems. In general, degradation is the reduction in performance, reliability, and life span of assets, and many failure mechanisms can be traced to an underlying degradation process. Degradation is a stochastic process and can therefore be modelled in several ways. Degradation modelling techniques have generated a great amount of research in the reliability field, yet while degradation models play a significant role in reliability analysis, there are few review papers on the topic. This paper presents a review of the existing literature on commonly used degradation models in reliability analysis. The current research and developments in degradation models are reviewed and summarised, and the models are synthesised and classified into groups. Additionally, the paper attempts to identify the merits, limitations, and applications of each model, and it outlines potential applications of these degradation models in asset health and reliability prediction.
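As a concrete instance of the stochastic-process view, one of the most commonly used degradation models is the linear Wiener (drifted Brownian motion) process. The sketch below, with illustrative parameters, simulates degradation paths and compares the simulated time to reach a failure threshold against the theoretical mean:

```python
# Illustrative Wiener-process degradation model: X(t) = drift*t + sigma*B(t).
# For this model the first-passage time to a threshold D has mean D/drift
# (inverse Gaussian distribution). All parameters here are made up.

import numpy as np

rng = np.random.default_rng(42)

drift, sigma = 0.5, 0.8      # degradation rate and volatility (per hour)
threshold = 50.0             # failure threshold on the degradation scale
dt, n_steps, n_paths = 1.0, 200, 1000

# Simulate degradation paths as cumulative sums of Gaussian increments.
increments = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

# Empirical first-passage (failure) times; paths that never cross are
# censored at the simulation horizon.
crossed = paths >= threshold
failure_steps = np.where(crossed.any(axis=1), crossed.argmax(axis=1) + 1, n_steps)
print("Mean simulated time to failure:", failure_steps.mean() * dt, "hours")
print("Theoretical mean (D/drift):   ", threshold / drift, "hours")
```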

Relevance: 30.00%

Abstract:

Machine downtime, whether planned or unplanned, is intuitively costly to manufacturing organisations, but is often very difficult to quantify. The available literature shows that costing processes are rarely undertaken within manufacturing organisations; where cost analyses have been undertaken, they have generally valued only a small proportion of the affected costs, leading to overly conservative estimates. This thesis aimed to develop a cost-of-downtime model, with particular emphasis on its application to Australia Post's Flat Mail Optical Character Reader (FMOCR). The costing analysis determined a cost of downtime of $5,700,000 per annum, or an average cost of $138 per operational hour. The second section of this work focused on using the cost of downtime to objectively determine areas of opportunity for cost reduction on the FMOCR. This was the first time within Post that maintenance costs were considered alongside downtime in determining machine performance, and as a result the analysis revealed areas which have historically not been targeted for cost reduction. Further exploratory work was undertaken on the Flats Lift Module (FLM) and Auto Induction Station (AIS) Deceleration Belts through comparison of the results against two additional FMOCR analysis programs. This research has demonstrated the development of a methodical and quantifiable cost of downtime for the FMOCR. It is the first time that Post has endeavoured to examine the cost of downtime, and it is also one of the very few methodologies for valuing downtime costs proposed in the literature. The work has also demonstrated how the cost of downtime can be incorporated into machine performance analysis, with specific application to identifying high-cost modules. The outcomes of this work are both a methodology for costing downtime and a list of areas for cost reduction; in delivering these, the thesis has met the two key deliverables presented at the outset of the research.
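The arithmetic at the heart of such a costing model is straightforward: sum the affected cost categories for the period and normalise by operating hours. The sketch below uses entirely hypothetical categories and figures, not the thesis's breakdown:

```python
# Sketch of a downtime costing model: aggregate affected cost categories
# over a year and express them per operational hour. All values are
# hypothetical placeholders, not Australia Post data.

annual_costs = {
    "lost_throughput": 2_400_000,    # mail diverted to manual sorting
    "labour_idle_time": 1_000_000,   # operators waiting on the machine
    "overtime_recovery": 700_000,    # clearing backlogs after outages
    "maintenance_labour": 450_000,   # corrective maintenance callouts
    "spare_parts": 200_000,
}

annual_operational_hours = 38_000   # assumed fleet operating hours

total = sum(annual_costs.values())
print(f"Annual cost of downtime: ${total:,}")
print(f"Average cost per operational hour: ${total / annual_operational_hours:,.0f}")
```

Valuing each category explicitly, rather than only the most visible ones, is what avoids the overly conservative estimates noted above.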

Relevance: 30.00%

Abstract:

This paper presents the author characteristics of papers published in The Australian Sociological Association (TASA) journal, the Journal of Sociology (formerly the Australian and New Zealand Journal of Sociology), between 1965 and 2008. The aim of the paper is to empirically identify trends in authorship. The review examines all articles published in the period (excluding book reviews). The rationale of the study is to reveal trends in who publishes in the journal in terms of authors' academic rank, gender, institution, and country. A table of those who have published the greatest number of papers is also presented. Findings show that over time the gap between the proportions of males and females publishing has closed; more PhD students and research fellows have published in the journal in recent decades; the highest proportions of authors consistently come from the Australian National University and The University of Queensland; and most authors are located in Australia. Such information can inform editorial practices and serve to inform the membership and readership about the nature of the journal.

Relevance: 30.00%

Abstract:

Vehicle detectors are installed approximately every 300 meters on each lane of the Tokyo Metropolitan Expressway, collecting traffic data such as traffic volume, average speed, and time occupancy. Traffic characteristics at each point can be understood by comparing the data collected at consecutive detectors. In this study, we focused on average speed, analyzed road potential in terms of operating speed during free-flow conditions, and identified latent bottlenecks. Furthermore, we analyzed the effects of rainfall level and day of the week on road potential. This method of analysis is expected to be useful for the installation of ITS measures such as drive assist, the estimation of parameters for traffic simulation, and feedback to road design as a congestion countermeasure.
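The bottleneck-identification step rests on comparing free-flow operating speeds at consecutive detectors. A minimal sketch of that comparison, with invented detector locations, speeds, and drop threshold, is:

```python
# Sketch: flag latent bottlenecks where the free-flow operating speed drops
# sharply between consecutive detectors. Detector positions, speeds, and
# the threshold are illustrative, not expressway data.

# (kilometre-post, free-flow operating speed in km/h) per detector
detectors = [
    (0.0, 78), (0.3, 77), (0.6, 75), (0.9, 64),  # possible sag/merge section
    (1.2, 62), (1.5, 74), (1.8, 76),
]

DROP_THRESHOLD = 8  # km/h drop between adjacent detectors worth flagging

for (kp_up, v_up), (kp_dn, v_dn) in zip(detectors, detectors[1:]):
    if v_up - v_dn >= DROP_THRESHOLD:
        print(f"Potential latent bottleneck between kp {kp_up} and kp {kp_dn}: "
              f"{v_up} -> {v_dn} km/h")
```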

Relevance: 30.00%

Abstract:

A diagnosis of cancer represents a significant crisis for a child and their family. As the treatment for childhood cancer has improved dramatically over the past three decades, most children diagnosed with cancer today survive the illness. However, it is still an illness which severely disrupts the lifestyle and typical functioning of the family unit. Most treatments for cancer involve lengthy hospital stays, the endurance of painful procedures, and harsh side effects. Research has confirmed that to manage and adapt to such a crisis, families must undertake measures which assist their adjustment. Variables such as the level of family support, the quality of the parents' marital relationship, the coping of other family members, the absence of other concurrent stresses, and open communication within the family have been identified as influences on how well families adjust to a diagnosis of childhood cancer. Theoretical frameworks such as the Resiliency Model of Family Adjustment and Adaptation (McCubbin and McCubbin, 1993, 1996) and the Stress and Coping Model of Lazarus and Folkman (1984) have been used to explain how families and individuals adapt to crises or adverse circumstances. Developmental theories have also been posed to account for how children come to understand and learn about the concept of illness. However, more descriptive information about how families, and children in particular, experience and manage a diagnosis of cancer is still needed. There are still many unanswered questions surrounding how a child adapts to, understands, and makes meaning from having a life-threatening illness. As a result, developing an understanding of the impact that such a serious illness has on the child and their family is crucial. A new approach to examining childhood illness such as cancer is currently underway which allows a greater understanding of the experience of childhood cancer to be achieved: it invites a phenomenological method to investigate the perspectives of those affected by childhood cancer. In the current study, nine families in which there was a diagnosis of childhood cancer were interviewed twice over a 12-month period. Using the qualitative methodology of Interpretative Phenomenological Analysis (IPA), a semi-structured interview was used to explicate the experience of childhood cancer from both the parents' and children's perspectives. A number of quantitative measures were also administered to gather specific information on the demographics of the sample. The results of this study revealed a number of pertinent areas which need to be considered when treating such families. More importantly, experiences were explicated which revealed vital phenomena that need to be added to extend current theoretical frameworks. Parents identified the time of diagnosis as the hardest part of their entire experience, and experienced an internal struggle when forced to come to the realization that they were not able to help their child get well. Families demonstrated an enormous ability to develop a new lifestyle which accommodated the needs of the sick child, as the sick child became the focus of their lives. Regarding the children, many accepted their diagnosis without complaint or question, and they were able to recognise and appreciate the support they received. Physical pain was certainly a component of the children's experience; however, the emotional strain of lost peer contact seemed just as severe.
Changes over time were also noted, as both parental and child experiences were often pertinent to the stage of treatment the child had reached. The approach used in this study allowed rich and intimate detail about a sensitive issue to be revealed, and allowed the experience of childhood cancer for parents and children to be more fully realised. Only now can a comprehensive and sensitive medical and psychosocial approach to the child and family be developed. For example, families may benefit from extra support at the time of diagnosis, as this was identified as one of the most difficult periods, and parents may also require counselling support in coming to terms with their limited ability to help their child heal. Given the ease with which children accepted their diagnosis, we need to question whether children are more receptive to adversity. Yet the emotional struggle children battled as a result of their illness also needs to be addressed.

Relevance: 30.00%

Abstract:

Transportation disadvantage has been recognised as a key source of social exclusion, and an appropriate process is therefore required to investigate and resolve this problem. Currently, determination of transportation disadvantage is postulated on the basis of income, poverty, and mobility level. Transportation disadvantage may be better regarded from an accessibility perspective, as it reflects the inability of individuals to access desired activities. This paper attempts to justify a process for determining transportation disadvantage by incorporating accessibility and social transportation conflict as the essence of a framework. The framework embeds space-time organisation within the dimension of accessibility to identify a rigorous definition of transportation disadvantage. In developing the framework, the definition, dimensions, components, and measures of accessibility were scrutinised. The findings suggest that the definition and dimensions are the significant avenues for evaluating the travel experience of the disadvantaged. Concurrently, location accessibility measures will be incorporated to strengthen the determination of accessibility level. A literature review of social exclusion and mobility-related exclusion identified the dimensions and sources of transportation disadvantage, and revealed that the appropriate approach to identifying the transportation disadvantaged is to incorporate space-time organisation within the studied components. The suggested framework is an inter-related process consisting of the components of accessibility: the individual, the network (transport system), and activities (destinations). The integration and correlation among these components determine the level of transportation disadvantage. The resulting findings can be used to retrieve the spatial distribution of the transportation disadvantaged, and appropriate policies can then be developed to resolve the problem.
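One simple form of the location accessibility measures referred to above is a cumulative-opportunities count within a travel-time budget. The sketch below, with invented destinations and travel times, shows how a tighter time budget translates directly into a lower accessibility score:

```python
# Toy cumulative-opportunities accessibility measure: count activity
# destinations reachable from a person's location within a travel-time
# budget. Destinations and travel times are invented for illustration.

travel_time_minutes = {           # from one origin zone to key destinations
    "hospital": 25, "supermarket": 10, "school": 15,
    "employment_centre": 45, "library": 20,
}

def accessibility(travel_times, budget_minutes):
    """Number of destination types reachable within the time budget."""
    return sum(1 for t in travel_times.values() if t <= budget_minutes)

# A smaller budget (e.g. a limited public transport service span, one
# aspect of space-time organisation) lowers the score, signalling
# potential transportation disadvantage.
for budget in (15, 30, 60):
    print(budget, "min budget ->", accessibility(travel_time_minutes, budget), "destinations")
```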

Relevance: 30.00%

Abstract:

This paper examines a sequence of asynchronous interaction on the photosharing website, Flickr. In responding to a call for a focus on the performative aspects of online annotation (Wolff & Neuwirth, 2001), we outline and apply an interaction order approach to identify temporal and cultural aspects of the setting that provide for commonality and sharing. In particular, we study the interaction as a feature of a synthetic situation (Knorr Cetina, 2009) focusing on the requirements of maintaining a sense of an ongoing discussion online. Our analysis suggests that the rhetorical system of the Flickr environment, its appropriation by participants as a context for bounded activities, and displays of commonality, affiliation, and shared access provide for a common sense of participation in a time envelope. This, in turn, is argued to be central to new processes of consociation (Schutz, 1967; Zhao, 2004) occurring in the life world of Web 2.0 environments.

Relevance: 30.00%

Abstract:

Purpose: Television viewing time, independent of leisure-time physical activity, has cross-sectional relationships with the metabolic syndrome and its individual components. We examined whether baseline and five-year changes in self-reported television viewing time are associated with changes in continuous biomarkers of cardio-metabolic risk (waist circumference, triglycerides, high-density lipoprotein cholesterol, systolic and diastolic blood pressure, fasting plasma glucose, and a clustered cardio-metabolic risk score) in Australian adults. Methods: AusDiab is a prospective, population-based cohort study with biological, behavioral, and demographic measures collected in 1999–2000 and 2004–2005. Non-institutionalized adults aged ≥ 25 years were measured at baseline (11,247; 55% of those completing an initial household interview); 6,400 took part in the five-year follow-up biomedical examination, and 3,846 met the inclusion criteria for this analysis. Multiple linear regression analysis was used, and unstandardized B coefficients (95% CI) are provided. Results: Baseline television viewing time (per 10 hours/week) was not significantly associated with change in any of the biomarkers of cardio-metabolic risk. Increases in television viewing time over five years (per 10 hours/week) were associated with increases in: waist circumference (cm) (men: 0.43 (0.08, 0.78), P = 0.02; women: 0.68 (0.30, 1.05), P < 0.001), diastolic blood pressure (mmHg) (women: 0.47 (0.02, 0.92), P = 0.04), and the clustered cardio-metabolic risk score (women: 0.03 (0.01, 0.05), P = 0.007). These associations were independent of baseline television viewing time, of baseline and change in physical activity, and of other potential confounders. Conclusion: These findings indicate that an increase in television viewing time is associated with adverse cardio-metabolic biomarker changes. Further prospective studies using objective measures of several sedentary behaviors are required to confirm causality of the associations found.
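The analytic model is a covariate-adjusted linear regression of five-year biomarker change on change in viewing time. A minimal sketch using statsmodels' formula interface, with hypothetical column and file names and a shortened covariate set, is:

```python
# Sketch of the analytic model: change in a biomarker regressed on change
# in TV viewing time (per 10 h/week), adjusted for baseline viewing time
# and covariates. Column names and the data file are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: delta_waist_cm, delta_tv_10h, baseline_tv_10h,
# delta_physical_activity, age, sex
df = pd.read_csv("ausdiab_followup.csv")  # placeholder file name

model = smf.ols(
    "delta_waist_cm ~ delta_tv_10h + baseline_tv_10h"
    " + delta_physical_activity + age + C(sex)",
    data=df,
).fit()

# Unstandardised B for the change-in-viewing-time term, with its 95% CI,
# matching the style of coefficients reported above.
print(model.params["delta_tv_10h"])
print(model.conf_int().loc["delta_tv_10h"])
```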

Relevance: 30.00%

Abstract:

Programs written in languages of the Oberon family usually contain runtime tests on the dynamic type of variables. In some cases it may be desirable to reduce the number of such tests. Typeflow analysis is a static method of determining bounds on the types that objects may possess at runtime. We show that this analysis is able to reduce the number of tests in certain plausible circumstances. Furthermore, the same analysis is able to detect certain program errors at compile time, which would normally only be detected at program execution. This paper introduces the concepts of typeflow analysis and details its use in the reduction of runtime overhead in Oberon-2.
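The paper's analysis targets Oberon-2, but the core idea, propagating sets of possible dynamic types and discharging tests that are statically decided, can be sketched in a few lines of Python. The hierarchy, program encoding, and narrowing rules below are invented for illustration:

```python
# Toy typeflow analysis over a straight-line program: each variable maps to
# the set of dynamic types it may hold. A runtime type test whose operand's
# type set is already contained in the tested type is provably redundant.

subtypes = {  # type -> set of types assignable to it (including itself)
    "Node": {"Node", "Leaf", "Branch"},
    "Leaf": {"Leaf"},
    "Branch": {"Branch"},
}

# Statements in program order: assignments widen, guards narrow.
program = [
    ("assign", "n", "Node"),    # n := some Node; n may be any subtype
    ("guard", "n", "Leaf"),     # IF n IS Leaf THEN ... (narrows n)
    ("typetest", "n", "Leaf"),  # a second IS test inside the guarded block
]

typeflow = {}
for op, var, typ in program:
    if op == "assign":
        typeflow[var] = set(subtypes[typ])
    elif op == "guard":
        typeflow[var] &= subtypes[typ]  # only types surviving the guard
    elif op == "typetest":
        if typeflow[var] <= subtypes[typ]:
            print(f"test '{var} IS {typ}' is redundant: "
                  f"{var} can only hold {typeflow[var]}")
```

By the same token, a variable whose type set becomes empty on some path signals a test that can never succeed, which is how such an analysis can surface errors at compile time.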

Relevance: 30.00%

Abstract:

Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected in stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory, and for these series heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence as established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ), and the Toronto Stock Exchange (TSX).
The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets, and then use cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for the five states of Australia, with comparisons against the results obtained from R/S analysis, the periodogram method, and MF-DFA. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
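Standard DFA, the q = 2 case that MF-DFA generalises, is compact enough to sketch. The following minimal illustration runs on simulated white noise; the scales and order-1 detrending are arbitrary choices, not the thesis's settings:

```python
# Minimal standard DFA (the q = 2 special case of MF-DFA): estimate the
# scaling exponent alpha of a series. alpha ~ 0.5 indicates no memory;
# alpha > 0.5 indicates persistent long memory. Input is simulated.

import numpy as np

def dfa(x, scales):
    """Return the fluctuation F(s) for each window size s (order-1 detrending)."""
    y = np.cumsum(x - np.mean(x))                  # the profile of the series
    fluctuations = []
    for s in scales:
        n_windows = len(y) // s
        f2 = []
        for w in range(n_windows):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            f2.append(np.mean((seg - trend) ** 2))         # detrended variance
        fluctuations.append(np.sqrt(np.mean(f2)))
    return np.array(fluctuations)

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)                    # white noise: expect ~0.5
scales = np.array([16, 32, 64, 128, 256, 512])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # slope of log F vs log s
print(f"estimated scaling exponent alpha = {alpha:.2f}")
```

For a long-memory series the estimated exponent would exceed 0.5, which is the pattern reported above for the exchange-rate and electricity-price series; MF-DFA repeats this computation over a range of moment orders q and over detrending polynomials of different orders.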