887 results for task analysis


Relevance: 30.00%

Abstract:

Estimating the use of illicit drugs in the general community is an important task with ramifications for law enforcement agencies as well as health portfolios. Australia has four ongoing drug monitoring systems: the AIC's DUMA program, the National Drug Strategy Household Survey, the Illicit Drug Reporting System and the Ecstasy and Related Drug Reporting System. The systems vary in method, but broadly they rely on self-report data and may be subject to selection biases. The present study employed a completely different method. By chemically analysing sewage water, the study produced daily estimates of the consumption of methamphetamine, MDMA and cocaine. Samples were collected in November 2009 and November 2010 from a municipality in Queensland with a population of over 150,000 people. Estimates were made of the average daily dose and average daily street value per 1,000 people. On the basis of estimated dose and price, the methamphetamine market appeared considerably stronger than either the MDMA or cocaine markets. The paper explains the strengths and weaknesses of wastewater analysis and considers its potential value in measuring the net consumption of illicit drugs and the effectiveness of law enforcement agency strategies.
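The back-calculation that converts a measured drug residue concentration into population-level consumption follows a standard pattern: scale the residue load in the influent by an excretion factor and a molar mass ratio, then normalise by dose size and population. Below is a minimal sketch of that arithmetic; all numeric inputs (concentration, flow, excretion fraction, dose size) are illustrative assumptions, not figures from the study.

```python
# Minimal sketch of the standard wastewater back-calculation.
# All numeric values below are illustrative assumptions, not
# figures from the study.

def doses_per_1000(conc_ng_per_l, flow_l_per_day, population,
                   excretion_fraction, mw_ratio, dose_mg):
    """Estimate daily doses consumed per 1,000 inhabitants.

    conc_ng_per_l      -- measured residue concentration in influent (ng/L)
    flow_l_per_day     -- daily influent flow (L/day)
    excretion_fraction -- fraction of the parent drug excreted as the residue
    mw_ratio           -- molar mass ratio parent/residue (1.0 if parent measured)
    dose_mg            -- assumed typical street dose (mg)
    """
    load_mg_per_day = conc_ng_per_l * flow_l_per_day * 1e-6   # ng -> mg
    consumed_mg = load_mg_per_day * mw_ratio / excretion_fraction
    return consumed_mg / dose_mg / (population / 1000.0)

# Hypothetical example: methamphetamine measured directly in influent.
print(doses_per_1000(conc_ng_per_l=400, flow_l_per_day=40e6,
                     population=150_000, excretion_fraction=0.43,
                     mw_ratio=1.0, dose_mg=30))
```

An average daily street value per 1,000 people follows the same pattern, with an assumed price per dose substituted for `dose_mg`.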

Relevance: 30.00%

Abstract:

The Rapid Visual Information Processing (RVIP) task, a serial discrimination task in which performance is believed to reflect sustained attention capabilities, is widely used in behavioural research and increasingly in neuroimaging studies. To date, functional neuroimaging research into the RVIP has used block analyses, reflecting the sustained processing involved in the task but not necessarily the transient processes associated with individual trial performance. Furthermore, this research has been limited to young cohorts. This study assessed the behavioural and functional magnetic resonance imaging (fMRI) outcomes of the RVIP task using both block and event-related analyses in a healthy middle-aged cohort (mean age = 53.56 years, n = 16). The results show that the version of the RVIP used here is sensitive to changes in attentional demand, with participants achieving a 43% hit rate in the experimental task compared with 96% accuracy in the control task. As shown by previous research, the block analysis revealed increased activation in a network of frontal, parietal, occipital and cerebellar regions. The event-related analysis showed a similar network of activation but omitted regions involved in the general processing of the task (as shown in the block analysis), such as occipital areas and the thalamus, providing an indication of a network of regions involved in correct trial performance. Frontal (superior and inferior frontal gyri), parietal (precuneus, inferior parietal lobe) and cerebellar regions were active in both the block and event-related analyses, suggesting their importance in sustained attention/vigilance. These networks and the differences between them are discussed in detail, as are implications for future research in middle-aged cohorts.
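Trial-level performance on tasks like the RVIP is conventionally summarised by a hit rate, a false-alarm rate, and a signal detection index such as d′. A minimal scoring sketch follows; the trial counts are hypothetical, not data from this study.

```python
# Sketch of trial-level accuracy and sensitivity (d') scoring for a
# sustained-attention task such as the RVIP; the trial counts below
# are hypothetical, not data from the study.
from scipy.stats import norm

def rvip_scores(hits, targets, false_alarms, nontargets):
    hit_rate = hits / targets
    fa_rate = false_alarms / nontargets
    # Clip rates away from 0 and 1 to avoid infinite z-scores.
    hr = min(max(hit_rate, 0.5 / targets), 1 - 0.5 / targets)
    fr = min(max(fa_rate, 0.5 / nontargets), 1 - 0.5 / nontargets)
    d_prime = norm.ppf(hr) - norm.ppf(fr)
    return hit_rate, fa_rate, d_prime

print(rvip_scores(hits=43, targets=100, false_alarms=5, nontargets=400))
```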

Relevance: 30.00%

Abstract:

The export of sediments from coastal catchments can have detrimental impacts on estuaries and nearshore reef ecosystems such as the Great Barrier Reef. Catchment management approaches aimed at reducing sediment loads require monitoring to evaluate their effectiveness in reducing loads over time. However, load estimation is not a trivial task due to the complex behaviour of constituents in natural streams, the variability of water flows and often a limited amount of data. Regression is commonly used for load estimation and provides a fundamental tool for trend estimation by standardising for other time-specific covariates such as flow. This study investigates whether load estimates and the resultant power to detect trends can be enhanced by (i) modelling the error structure so that temporal correlation can be better quantified, (ii) making use of predictive variables, and (iii) identifying an efficient and feasible sampling strategy that may be used to reduce sampling error. To achieve this, we propose a new regression model that includes an innovative compounding-errors model structure and uses two additional predictive variables (average discounted flow and turbidity). By combining this modelling approach with a new, regularly optimised sampling strategy, which adds uniformity to the event sampling strategy, the predictive power was increased to 90%. Using the enhanced regression model proposed here, it was possible to detect a trend of 20% over 20 years. This result is in stark contrast to previous conclusions presented in the literature.
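For orientation, the conventional baseline this work improves upon is a log-log rating-curve regression of concentration on flow with serially correlated errors. The sketch below fits such a model with an AR(1) error structure using statsmodels on simulated data; it is not the compounding-errors model proposed in the paper, and all variable values are hypothetical.

```python
# Minimal sketch of a log-log rating-curve regression for constituent
# loads with AR(1) errors -- the conventional baseline, not the
# compounding-errors model proposed in the paper. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
log_q = rng.normal(2.0, 0.8, n)                 # log flow (hypothetical)
e = np.zeros(n)
for t in range(1, n):                           # AR(1) noise, rho = 0.6
    e[t] = 0.6 * e[t - 1] + rng.normal(0, 0.3)
log_c = 0.5 + 0.8 * log_q + e                   # log concentration

X = sm.add_constant(log_q)
model = sm.GLSAR(log_c, X, rho=1)               # one AR lag in the errors
res = model.iterative_fit(maxiter=10)
print(res.params, "estimated AR coefficient:", model.rho)

# Load prediction: concentration x flow (back-transformed; a bias
# correction would be needed in practice).
load = np.exp(res.fittedvalues) * np.exp(log_q)
```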

Relevance: 30.00%

Abstract:

With a growing population and fast urbanization in Australia, maintaining water quality is a challenging task. It is essential to develop appropriate statistical methodology for analyzing water quality data in order to draw valid conclusions and hence provide useful advice for water management. This paper develops robust rank-based procedures for analyzing nonnormally distributed data collected over time at different sites. To take account of temporal correlations of the observations within sites, we consider the optimally combined estimating functions proposed by Wang and Zhu (Biometrika, 93:459-464, 2006), which lead to more efficient parameter estimation. Furthermore, we apply the induced smoothing method to reduce the computational burden. Smoothing leads to easy calculation of the parameter estimates and their variance-covariance matrix. Analysis of water quality data on Total Iron and Total Cyanophytes shows the differences between traditional generalized linear mixed models and rank regression models. Our analysis also demonstrates the advantages of rank regression models for analyzing nonnormal data.
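As one concrete instance of the rank-based idea, the sketch below fits a Wilcoxon-score rank regression by minimizing Jaeckel's dispersion function on simulated heavy-tailed data. It is a simpler relative of the induced-smoothing estimator with combined estimating functions used in the paper, and all names and data are illustrative.

```python
# Sketch of a rank-based (Wilcoxon-score) regression fitted by
# minimizing Jaeckel's dispersion function -- a simpler relative of
# the induced-smoothing estimator discussed in the paper. Data are
# simulated and deliberately non-normal (heavy-tailed).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

rng = np.random.default_rng(1)
n = 150
x = rng.uniform(0, 10, size=(n, 1))
y = 2.0 + 0.7 * x[:, 0] + rng.standard_t(df=2, size=n)

def jaeckel_dispersion(beta, x, y):
    resid = y - x @ beta
    r = rankdata(resid)
    scores = np.sqrt(12) * (r / (len(y) + 1) - 0.5)   # Wilcoxon scores
    return np.sum(scores * resid)

beta0 = np.zeros(x.shape[1])
fit = minimize(jaeckel_dispersion, beta0, args=(x, y), method="Nelder-Mead")
print("rank-regression slope:", fit.x)
# The dispersion is invariant to an intercept, which is estimated
# separately, e.g. as the median of the residuals.
```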

Relevance: 30.00%

Abstract:

Water temperature measurements from Wivenhoe Dam offer a unique opportunity for studying fluctuations of temperatures in a subtropical dam as a function of time and depth. Cursory examination of the data indicates a complicated structure across both time and depth. We propose simplifying the task of describing these data by breaking the time series at each depth into physically meaningful components that individually capture daily, subannual, and annual (DSA) variations. Precise definitions for each component are formulated in terms of a wavelet-based multiresolution analysis. The DSA components are approximately pairwise uncorrelated within a given depth and between different depths. They also satisfy an additive property in that their sum is exactly equal to the original time series. Each component is based upon a set of coefficients that decomposes the sample variance of each time series exactly across time and that can be used to study both time-varying variances of water temperature at each depth and time-varying correlations between temperatures at different depths. Each DSA component is amenable to studying a certain aspect of the relationship between the series at different depths. The daily component is in general weakly correlated between depths, including those adjacent to one another. The subannual component quantifies seasonal effects and in particular isolates phenomena associated with the thermocline, thus simplifying its study across time. The annual component can be used for trend analysis. The descriptive analysis provided by the DSA decomposition is a useful precursor to a more formal statistical analysis.
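A wavelet-based multiresolution analysis of this kind splits a series into additive components. The sketch below shows the mechanics with PyWavelets: each coefficient band is reconstructed separately, and the bands sum back to the original series. The choice of wavelet, the number of levels, and any mapping of levels to daily/subannual/annual bands are assumptions for illustration, not the paper's exact DSA construction.

```python
# Sketch of an additive wavelet multiresolution analysis (MRA) in the
# spirit of the DSA decomposition: each detail/smooth band is
# reconstructed separately and the bands sum back to the series.
# Wavelet, level count, and any daily/subannual/annual grouping
# (which depends on the sampling interval) are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(size=1024))            # stand-in for temperature

wavelet, level = "db4", 6
coeffs = pywt.wavedec(x, wavelet, level=level)

components = []
for i in range(len(coeffs)):
    # Keep one coefficient band, zero the rest, and reconstruct.
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(kept, wavelet)[: len(x)])

recon = np.sum(components, axis=0)
print("additive to machine precision:", np.allclose(recon, x))
```

Because `waverec` is linear in the coefficients, the per-band reconstructions are exactly additive, which mirrors the additive property the abstract describes.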

Relevance: 30.00%

Abstract:

The Internet Engineering Task Force (IETF) is currently developing the next version of the Transport Layer Security (TLS) protocol, version 1.3. The transparency of this standardization process allows comprehensive cryptographic analysis of the protocols prior to adoption, whereas previous TLS versions have been scrutinized in the cryptographic literature only after standardization. This is all the more important as there are two related, yet slightly different, candidates in discussion for TLS 1.3, called draft-ietf-tls-tls13-05 and draft-ietf-tls-tls13-dh-based. We give a cryptographic analysis of the primary ephemeral Diffie–Hellman-based handshake protocol of both TLS 1.3 candidates, which authenticates parties and establishes encryption keys. We show that both candidate handshakes achieve the main goal of providing secure authenticated key exchange according to an augmented multi-stage version of the Bellare–Rogaway model. Such a multi-stage approach is convenient for analyzing the design of the candidates, as they establish multiple session keys during the exchange. An important step in our analysis is to consider compositional security guarantees. We show that, since our multi-stage key exchange security notion is composable with arbitrary symmetric-key protocols, the use of session keys in the record layer protocol is safe. Moreover, since we can view the abbreviated TLS resumption procedure also as a symmetric-key protocol, our compositional analysis allows us to directly conclude security of the combined handshake with session resumption. We include a discussion of several design characteristics of the TLS 1.3 drafts based on the observations made in our analysis.
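The core of such a handshake is an ephemeral Diffie–Hellman exchange whose shared secret is expanded into several stage keys. The sketch below illustrates that shape with X25519 and HKDF from the pyca/cryptography library; the labels and the single-step derivation are simplified stand-ins, not the actual TLS 1.3 key schedule (which chains HKDF-Extract/Expand over a transcript hash).

```python
# Sketch of the core of an ephemeral Diffie-Hellman handshake: an
# X25519 shared secret expanded into separate stage keys with HKDF.
# The labels are simplified stand-ins, not the real TLS 1.3 key
# schedule.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# Each side combines its own private key with the peer's public key.
client_secret = client_priv.exchange(server_priv.public_key())
server_secret = server_priv.exchange(client_priv.public_key())
assert client_secret == server_secret

def stage_key(secret: bytes, label: bytes) -> bytes:
    # A fresh HKDF instance per derivation (derive() is single-use).
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=label).derive(secret)

handshake_key = stage_key(client_secret, b"stage 1: handshake traffic")
application_key = stage_key(client_secret, b"stage 2: application traffic")
resumption_key = stage_key(client_secret, b"stage 3: resumption master")
```

Deriving independent keys per stage is what makes the multi-stage security notion, and hence the compositional argument for the record layer and resumption, natural to state.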

Relevance: 30.00%

Abstract:

This publication, the final report to the Torres Strait Cooperative Research Centre, provides an overview of all the research conducted as part of Torres Strait CRC Task 1.5, Towards Ecologically Sustainable Management of the Torres Strait Prawn Fishery. The objectives of the task were:

1. To develop cost-effective protocols to monitor and quantify the bycatch and environmental impacts of commercial prawn trawling.
2. To monitor the status of target species using both fishery-dependent and fishery-independent data.
3. To develop biological reference points for target species and undertake management strategy evaluation, in particular a risk assessment of fishing at various levels of fishing mortality.

This report focuses on the second component of objective 1 and details a comparative analysis of bycatch samples collected from areas of the Torres Strait that were both closed and open to prawn trawl fishing. The report also reviews the research conducted in relation to objectives 2 and 3, which is detailed in a separate report, Stock Assessment of the Torres Strait Tiger Prawn Fishery (Penaeus esculentus).

Relevance: 30.00%

Abstract:

The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed.

Automatic milking has become a common practice in dairy husbandry, and in the year 2006 about 4,000 farms worldwide used over 6,000 milking robots. There is a worldwide movement towards fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper spends monitoring animals often decreases. This has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily.

Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, it is labour-intensive as an on-farm method and the results are subjective.

A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm Suitia. The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated.

In order to develop an expert system to automatically detect lameness cases, a model was needed. A probabilistic neural network (PNN) classifier model was chosen for the task. The data were divided into two parts: 5,074 measurements from 37 cows were used to train the model, and the model was evaluated on its ability to detect lameness in the validation dataset, which comprised 4,868 measurements from 36 cows. The model classified 96% of the measurements correctly as sound or lame, and 100% of the lameness cases in the validation data were identified. The proportion of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and in a real-time lameness monitoring system.
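A PNN of the kind described is essentially a Parzen-window density estimate per class, with classification by the larger estimated class density. Below is a minimal sketch on simulated per-milking features (four mean leg-load shares plus a kick count); the feature layout, smoothing width and data are assumptions for illustration, not the thesis model or dataset.

```python
# Sketch of a probabilistic neural network (PNN): a Gaussian
# Parzen-window density estimate per class, classified by the larger
# estimated class density. Features mimic per-milking summaries
# (four mean leg-load shares, kick count) but the data are simulated,
# not the thesis dataset.
import numpy as np

def pnn_predict(x, train_x, train_y, sigma=0.5):
    classes = np.unique(train_y)
    densities = []
    for cls in classes:
        patterns = train_x[train_y == cls]
        d2 = np.sum((patterns - x) ** 2, axis=1)
        densities.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    return classes[int(np.argmax(densities))]

rng = np.random.default_rng(3)
# Sound cows: even load distribution, few kicks. Lame cows: one leg
# unloaded, more kicks (illustrative pattern only).
sound = rng.normal([0.25, 0.25, 0.25, 0.25, 1], 0.05, size=(200, 5))
lame = rng.normal([0.32, 0.30, 0.22, 0.16, 4], 0.05, size=(40, 5))
X = np.vstack([sound, lame])
y = np.array([0] * 200 + [1] * 40)   # 0 = sound, 1 = lame

print(pnn_predict(np.array([0.31, 0.30, 0.23, 0.16, 3.5]), X, y))  # -> 1
```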

Relevance: 30.00%

Abstract:

We compare two popular methods for estimating the power spectrum from short data windows, namely the adaptive multivariate autoregressive (AMVAR) method and the multitaper method. By analyzing a simulated signal (embedded in a background Ornstein-Uhlenbeck noise process) we demonstrate that the AMVAR method performs better at detecting short bursts of oscillations compared to the multitaper method. However, both methods are immune to jitter in the temporal location of the signal. We also show that coherence can still be detected in noisy bivariate time series data by the AMVAR method even if the individual power spectra fail to show any peaks. Finally, using data from two monkeys performing a visuomotor pattern discrimination task, we demonstrate that the AMVAR method is better able to determine the termination of the beta oscillations when compared to the multitaper method.
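Of the two estimators compared, the multitaper method is the simpler to sketch: the same short window is viewed through several orthogonal DPSS (Slepian) tapers and the resulting periodograms are averaged. The snippet below shows that estimate with SciPy on a simulated signal; the AMVAR side would additionally require fitting an adaptive multivariate autoregressive model and is not shown. Sampling rate, window length and time-bandwidth product are illustrative.

```python
# Sketch of the multitaper spectral estimate: average the periodograms
# of the same short window seen through K orthogonal DPSS (Slepian)
# tapers. All parameters below are illustrative assumptions.
import numpy as np
from scipy.signal.windows import dpss

fs, n = 200.0, 100                       # 0.5 s window (hypothetical)
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 20 * t) + np.random.default_rng(4).normal(0, 0.5, n)

nw = 4                                   # time-bandwidth product
tapers = dpss(n, NW=nw, Kmax=2 * nw - 1) # shape (K, n)

spectra = [np.abs(np.fft.rfft(taper * x)) ** 2 for taper in tapers]
psd = np.mean(spectra, axis=0) / fs
freqs = np.fft.rfftfreq(n, d=1 / fs)
print("peak near 20 Hz:", freqs[np.argmax(psd)])
```

The averaging over tapers is what stabilises the estimate on short windows; the trade-off is spectral smoothing over a bandwidth of roughly 2·NW frequency bins, which is one reason a parametric method like AMVAR can resolve short oscillatory bursts more sharply.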

Relevance: 30.00%

Abstract:

Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades due to their geomorphological importance as the reference surface for gravitation-driven material flow, as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making processes based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine-toposcale DEMs, which are typically represented on a 5-50 m grid and used at application scales of 1:10,000-1:50,000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, making analytical and simulation-based error propagation analyses and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine-toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of a global characterisation of DEM error is a gross generalisation of reality due to the small extent of the areas in which the assumption of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error will increase the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. Significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
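Simulation-based error propagation of this kind can be illustrated compactly: generate spatially autocorrelated error fields by process convolution (white noise smoothed with a kernel), add them to the DEM, and summarise the spread of a derivative such as slope across realisations. The sketch below does this with NumPy/SciPy; the grid, error magnitude and kernel range are assumed values, not those of the thesis.

```python
# Sketch of simulation-based error propagation for a DEM derivative
# (slope), with spatially autocorrelated error fields generated by
# process convolution: white noise smoothed with a Gaussian kernel.
# Grid size, error magnitude and kernel range are hypothetical.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
cell = 10.0                                   # 10 m grid
xx, yy = np.meshgrid(np.arange(100), np.arange(100))
dem = 50 + 0.1 * xx * cell + 5 * np.sin(yy / 8.0)   # synthetic terrain

def slope(z):
    gy, gx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

sims = []
for _ in range(200):
    white = rng.normal(0, 1, dem.shape)
    corr = gaussian_filter(white, sigma=3)    # process convolution
    corr *= 1.5 / corr.std()                  # rescale to 1.5 m RMSE
    sims.append(slope(dem + corr))

# Per-cell spread of the derivative across realisations summarises
# the propagated uncertainty.
print("mean slope s.d. across realisations:",
      np.mean(np.std(sims, axis=0)))
```

Changing `sigma` to zero reproduces the spatially uncorrelated "worst-case" error model that the thesis challenges, so the effect of autocorrelation on the propagated error can be compared directly.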

Relevance: 30.00%

Abstract:

Metabolism is the cellular subsystem responsible for the generation of energy from nutrients and the production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction and the study of the evolution of metabolism.

In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework called gapless modeling to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The presented gapless approach offers a compromise, in terms of complexity and feasibility, between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to correspond closely to those of Saccharomyces cerevisiae.

In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems. Such problems often limit the usability of reconstructed models and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it on real-world instances. We also describe computational techniques for solving problems stemming from ambiguities in metabolite naming. These techniques have been implemented in ReMatch, a web-based software tool intended for the reconstruction of models for 13C metabolic flux analysis.

In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method is able to generate results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
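Common to all three themes is the stoichiometric view of metabolism: internal metabolites impose steady-state balance constraints S v = 0 on the reaction fluxes v, and fluxes can then be selected by linear optimization. The toy sketch below shows that setup with SciPy on a generic four-reaction network; it illustrates the modeling substrate, not the gapless framework itself.

```python
# Sketch of the stoichiometric steady-state view of metabolism:
# internal metabolites impose balances S v = 0, and fluxes are chosen
# by linear optimization. The toy network is generic, not the gapless
# framework of the thesis.
import numpy as np
from scipy.optimize import linprog

# Reactions:  v0: -> A,  v1: A -> B,  v2: A -> C,  v3: B + C -> biomass
# Rows = internal metabolites A, B, C; columns = reactions.
S = np.array([
    [ 1, -1, -1,  0],
    [ 0,  1,  0, -1],
    [ 0,  0,  1, -1],
])

bounds = [(0, 10)] * 4            # flux capacity bounds (hypothetical)
c = np.array([0, 0, 0, -1.0])     # maximize biomass flux v3

res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)   # expect v0 = 10, v1 = v2 = v3 = 5
```

A "gap" in this picture is a metabolite whose balance row can never be satisfied with nonzero flux; gapless reconstruction constrains the network so that no such dead ends remain.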

Relevance: 30.00%

Abstract:

The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them. Thus, the relative abundances of different labelling patterns contain information about the fluxes that cannot be uncovered from the balance constraints derived from the steady state. The field of research that estimates the fluxes using the measured relative abundances of different labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis.

There are two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of different labelling patterns. In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of different labelling patterns of metabolites. Thus, mathematically involved non-linear optimization methods that can get stuck in local optima can be avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies.

In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes from the 13C labelling measurements as possible, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of external nutrients and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites. The propagation is facilitated by the flow analysis of metabolite fragments in the network. Then new linear constraints on the fluxes are derived from the propagated data by applying the techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools to process raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
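The distinction between what the steady state determines and what labelling data adds can be made concrete with a little linear algebra. In the sketch below, a toy network with two alternative pathways has a two-dimensional flux nullspace under S v = 0 alone; adding one measured uptake flux and one labelling-derived linear constraint resolves the fluxes uniquely. The network and the constraint are illustrative assumptions, not the thesis framework.

```python
# Sketch of why the steady state alone leaves fluxes undetermined, and
# how extra linear constraints (of the kind derived from labelling
# data in the direct approach) resolve them. The network and the
# labelling-derived ratio are toy illustrations, not the thesis
# framework.
import numpy as np
from scipy.linalg import null_space

# Two alternative pathways from A to B (v1 direct, v2+v3 via C):
# v0: -> A, v1: A -> B, v2: A -> C, v3: C -> B, v4: B ->
S = np.array([
    [1, -1, -1,  0,  0],   # A
    [0,  1,  0,  1, -1],   # B
    [0,  0,  1, -1,  0],   # C
])
N = null_space(S)
print("degrees of freedom at steady state:", N.shape[1])   # 2

# Suppose labelling measurements imply v1 = 3 * v3 (a linear
# constraint) and the uptake flux is measured: v0 = 1.
A = np.vstack([S,
               [0, 1, 0, -3, 0],    # v1 - 3 v3 = 0
               [1, 0, 0,  0, 0]])   # v0 = 1
b = np.array([0, 0, 0, 0, 1])
v, *_ = np.linalg.lstsq(A, b, rcond=None)
print("resolved fluxes:", v.round(3))   # [1, 0.75, 0.25, 0.25, 1]
```

Because every added constraint stays linear, the whole system is solved in one pass, which is exactly the advantage of the direct approach over iterative non-linear fitting.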

Relevance: 30.00%

Abstract:

The aim of the present study is to analyze Confucian understandings of the Christian doctrine of salvation in order to find the basic problems in the Confucian-Christian dialogue. I will approach the task via a systematic theological analysis of four issues in order to limit the thesis to an appropriate size. They are analyzed in three chapters as follows:

1. The Confucian concept concerning the existence of God. Here I discuss mainly the issue of assimilation of the Christian concept of God to the concepts of Sovereign on High (Shangdi) and Heaven (Tian) in Confucianism.
2. The Confucian understanding of the object of salvation and its status in Christianity.
3. The Confucian understanding of the means of salvation in Christianity.

Before beginning this analysis it is necessary to clarify the vast variety of controversies, arguments, ideas, opinions and comments expressed in the name of Confucianism; thus, clear distinctions among different schools of Confucianism are given in chapter 2. In the last chapter I will discuss the results of my research in this study by pointing out the basic problems that appear in the analysis. The results of the present study provide conclusions in three related areas: the tacit differences in the ways of thinking between Confucians and Christians, the basic problems of the Confucian-Christian dialogue, and the affirmative elements in the dialogue. In addition to a summary, a bibliography and an index, there are also eight appendices, where I have introduced important background information for readers to understand the present study.

Relevance: 30.00%

Abstract:

This work presents a modified theoretical approach to the study of centre-periphery relations in the Russian Federation. In the widely accepted scholarly discourse, the Russian federal system under the Yeltsin Administration (1991-2000) was asymmetrical, largely owing to the varying amount of structural autonomy distributed among the federation's 89 constituent units. While providing an improved understanding of which political and socio-economic structures contributed to federal asymmetry, the associated large-N studies have underemphasised the role played by actor agency in re-shaping Russian federal institutions. The main task of this thesis is to reintroduce and re-emphasise the importance of actor agency as a major contributing element of institutional change in the Russian federal system. By focusing on the strategic agency of regional elites simultaneously within regional and federal contexts, the thesis adopts the position that political, ethnic and socio-economic structural factors alone cannot fully determine the extent to which regional leaders were successful in their pursuit of economic and political pay-offs from the institutionally weakened federal centre. Furthermore, this work hypothesises that under conditions of federal institutional uncertainty, it is the ability of regional leaders to simultaneously interpret various mutable structural conditions and then translate them into plausible strategies that accounts for the regions' ability to extract variable amounts of economic and political pay-offs from the Russian federal system. The thesis finds that while the hypothesis is accurate in its theoretical assumptions, several key conclusions provide paths for further inquiry beyond the initial research question. First, without reliable information or stable institutions to guide their actions, both regional and federal elites were forced into ad hoc decision-making in order to maintain their core strategic focus: political survival. Second, instead of attributing asymmetry to either actor agency or structural factors exclusively, the empirical data show that agency and structures interact symbiotically in the strategic formulation process, which accounts for the sub-optimal nature of several of the actions taken in the adopted cases. Third, as actor agency and structural factors mutate over time, so, too, do the perceived pay-offs from elite competition. In the case of the Russian federal system, the stronger the federal centre became, the less likely it was that regional leaders could extract the high degree of economic and political pay-offs that they had clamoured for earlier in the Yeltsin period. Finally, traditional approaches to the study of federal systems, which focus on institutions as measures of federalism, are not fully applicable in the Russian case precisely because the institutions themselves were a secondary point of contention between competing elites. Institutional equilibria between the regions and Moscow were struck only when highly personalised elite preferences were satisfied. The Russian federal system is therefore the product of short-term institutional solutions suited to elite survival strategies developed under conditions of economic, political and social uncertainty.

Relevance: 30.00%

Abstract:

This chapter provides a preliminary analysis of the Australian Government's reform agenda popularly known as 'Closing the Gap'. 'Closing the Gap' sets a commitment by all Australian governments to improve the lives of Indigenous Australians, and in particular to provide a better future for Indigenous children. The chapter discusses how the Council of Australian Governments prepared this agenda and how the program involves Australian corporations in this task. Our observations suggest that a further reform is required for the government to mandate corporate involvement in, and contribution to, this reform agenda.