20 results for Bose-Einstein condensation statistical model

in Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a Bayesian model for isolating the resonant frequency of combustion chamber resonance. The model focuses on characterising the initial rise in the resonant frequency in order to investigate the rise in in-cylinder bulk temperature associated with combustion. By resolving the model parameters, it is possible to determine: the start of pre-mixed combustion, the start of diffusion combustion, the initial resonant frequency, the resonant frequency as a function of crank angle, the in-cylinder bulk temperature as a function of crank angle, and the trapped mass as a function of crank angle. The Bayesian method allows individual cycles to be examined without cycle-averaging, enabling inter-cycle variability studies. Results are shown for a turbocharged, common-rail compression ignition engine run at 2000 rpm and full load.
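The link between resonant frequency and bulk temperature rests on the speed of sound in the cylinder gases. As an illustrative sketch (not the paper's Bayesian formulation), the code below applies the commonly used relation f = c·α/(π·B) for the first circumferential acoustic mode, where c = sqrt(γRT); the bore, mode constant and gas properties are assumed values.

```python
import numpy as np

# Hypothetical illustration: recovering in-cylinder bulk temperature from the
# resonant frequency of the first circumferential acoustic mode. All constants
# below are assumptions, not values from the paper.
GAMMA = 1.33          # ratio of specific heats for hot combustion gases (assumed)
R_SPECIFIC = 287.0    # specific gas constant, J/(kg K) (air, assumed)
BESSEL_MODE = 1.841   # first circumferential mode constant alpha_(1,0)
BORE = 0.105          # cylinder bore in metres (assumed engine geometry)

def bulk_temperature(f_res_hz):
    """Bulk gas temperature implied by a resonant frequency, via c = sqrt(gamma R T)
    and f = c * alpha / (pi * bore)."""
    speed_of_sound = f_res_hz * np.pi * BORE / BESSEL_MODE
    return speed_of_sound ** 2 / (GAMMA * R_SPECIFIC)

# Example: a resonant frequency rising from 4 kHz to 5.5 kHz over part of the cycle
freqs = np.linspace(4000.0, 5500.0, 5)
print(bulk_temperature(freqs))   # bulk temperature (K) at each crank-angle sample
```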

Relevance:

100.00%

Publisher:

Abstract:

This paper reviews the application of statistical models to planning and evaluating cancer screening programmes. Models used to analyse screening strategies can be classified as either surface models, which consider only those events that can be directly observed, such as disease incidence, prevalence or mortality, or deep models, which incorporate hypotheses about the disease process that generates the observed events. This paper focuses on the latter type. These can be further classified as analytic models, which use a model of the disease to derive direct estimates of the characteristics of the screening procedure and its consequent benefits, and simulation models, which use the disease model to simulate the course of the disease in a hypothetical population with and without screening and derive measures of the benefit of screening from the simulation outcomes. The main approaches to each type of model are described, and an overview is given of their historical development, strengths and weaknesses. A brief review of fitting and validating such models is given, and finally a discussion of the current state of, and likely future trends in, cancer screening models is presented.
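To make the distinction concrete, the toy Monte Carlo sketch below illustrates what a simulation ("deep") model does: it simulates a preclinical sojourn period for each person and checks whether a screening schedule intercepts it. All distributions, parameters and the screening policy are invented for illustration and are not drawn from any reviewed model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "deep" simulation: each person develops preclinical disease at a random age,
# stays screen-detectable for a random sojourn time, and is then clinically diagnosed.
N = 100_000
onset_age = rng.normal(60, 10, N)             # age at preclinical onset (assumed)
sojourn = rng.exponential(4.0, N)             # years spent screen-detectable (assumed)
clinical_age = onset_age + sojourn            # age at clinical diagnosis without screening

screen_ages = np.arange(50, 75, 2)            # biennial screening, ages 50-74 (assumed policy)

# A case is screen-detected if any screen falls inside its preclinical window
# (perfect test sensitivity assumed).
detected = np.zeros(N, dtype=bool)
lead_time = np.zeros(N)
for s in screen_ages:
    hit = (~detected) & (onset_age <= s) & (s < clinical_age)
    detected |= hit
    lead_time[hit] = clinical_age[hit] - s    # diagnosis brought forward by this much

print(f"screen-detected fraction: {detected.mean():.2f}")
print(f"mean lead time among detected: {lead_time[detected].mean():.2f} years")
```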

Relevance:

100.00%

Publisher:

Abstract:

Developing water quality guidelines for Antarctic marine environments requires understanding the sensitivity of local biota to contaminant exposure. In previous experiments, Antarctic invertebrates have responded more slowly to contaminants than temperate and tropical species in standard toxicity tests. Consequently, test methods which take into account the environmental conditions and biological characteristics of cold-climate species need to be developed. This study investigated the effects of five metals on the survival of a common Antarctic amphipod, Orchomenella pinguides. Mortality in response to metal exposure was assessed at multiple observation times over the 30-day exposure period. Traditional toxicity tests with quantal data sets are analysed using methods such as maximum likelihood regression (probit analysis) and Spearman–Kärber, which treat individual time-period endpoints independently. A new statistical model was developed to integrate the time-series concentration–response data obtained in this study. Grouped survival data were modelled using a generalized additive mixed model (GAMM), which incorporates all the data obtained from multiple observation times to derive time-integrated point estimates. The sensitivity of the amphipod, O. pinguides, to metals increased with increasing exposure time. Response times varied for different metals, with amphipods responding faster to copper than to cadmium, lead or zinc. As indicated by 30-day lethal concentration (LC50) estimates, copper was the most toxic metal (31 µg/L), followed by cadmium (168 µg/L), lead (256 µg/L) and zinc (822 µg/L). Nickel exposure (up to 1.12 mg/L) did not affect amphipod survival. Using longer exposure durations together with the GAMM provides an improved methodology for assessing the sensitivities of slow-responding Antarctic marine invertebrates to contaminants.
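For contrast with the GAMM approach, the sketch below shows the traditional single-observation-time analysis the paper improves upon: a probit dose-response curve fitted by maximum likelihood, from which an LC50 is read off. The concentration-mortality data are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Classical single-time-point probit analysis: P(death) = Phi(a + b * log10(conc)),
# with the LC50 being the concentration at which P = 0.5. Data are made up.
conc = np.array([10., 30., 100., 300., 1000.])       # metal concentration, ug/L
n_exposed = np.array([20, 20, 20, 20, 20])
n_dead = np.array([1, 4, 11, 17, 20])

def neg_log_lik(params):
    a, b = params
    p = norm.cdf(a + b * np.log10(conc))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(n_dead * np.log(p) + (n_exposed - n_dead) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = fit.x
lc50 = 10 ** (-a_hat / b_hat)        # Phi(0) = 0.5  =>  a + b * log10(LC50) = 0
print(f"estimated LC50: {lc50:.0f} ug/L")
```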

Relevance:

100.00%

Publisher:

Abstract:

Statistical time series methods have proven promising in structural health monitoring, since they provide a direct form of data analysis and eliminate the requirement for domain transformation. Recent research in structural health monitoring presents a number of statistical models that have been successfully used to construct quantified models of vibration response signals. Although a majority of these studies present viable results, the aspects of practical implementation, statistical model construction and decision-making procedures are often vaguely defined or omitted from the presented work. In this article, a comprehensive methodology is developed, which essentially utilizes an auto-regressive moving average with exogenous input (ARMAX) model to create quantified model estimates of experimentally acquired response signals. An iterative self-fitting algorithm is proposed to construct and fit the ARMAX model, which is capable of integrally finding an optimum set of ARMAX model parameters. After creating a dataset of quantified response signals, an unlabelled response signal can be identified according to the 'closest fit' available in the dataset. A unique averaging method is proposed and implemented for multi-sensor data fusion to decrease the margin of error across sensors, thus increasing the reliability of global damage identification. To demonstrate the effectiveness of the developed methodology, a steel frame structure subjected to various bolt-connection damage scenarios is tested. Damage identification results from the experimental study suggest that the proposed methodology can be employed as an efficient and functional damage identification tool. © The Author(s) 2014.
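A minimal sketch of the core modelling step, under the assumption of a fixed ARMAX order (the paper's iterative self-fitting order selection is not reproduced): fit an ARMAX model to a synthetic response signal and treat the fitted parameter vector as that signal's quantified model.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic excitation and structural response; the model order (4, 2) is an assumption.
rng = np.random.default_rng(1)
n = 2000
u = rng.standard_normal(n)                    # measured excitation (exogenous input)
y = np.zeros(n)                               # structural response
for t in range(2, n):                         # a simple second-order system plus noise
    y[t] = 1.5 * y[t-1] - 0.7 * y[t-2] + 0.5 * u[t-1] + 0.1 * rng.standard_normal()

model = ARIMA(y, exog=u, order=(4, 0, 2))     # ARMAX: AR order 4, MA order 2, exogenous u
result = model.fit()

# The fitted parameter vector serves as the "quantified model" of this signal; an
# unlabelled signal would be compared against a dataset of such vectors.
signature = result.params
print(signature)
print("residual variance:", result.resid.var())
```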

Relevance:

100.00%

Publisher:

Abstract:

Regardless of the technical procedure used in signalling corporate collapse, the bottom line rests on the predictive power of the corresponding statistical model. In that regard, it is imperative to empirically test the model using a data sample of both collapsed and non-collapsed companies. A superior model is one that classifies collapsed and non-collapsed companies into their respective categories with a high degree of accuracy. Empirical studies of this nature have thus far done one of two things: (1) classified companies based on a single statistical modelling process, or (2) classified companies based on two (rarely more than two) independent statistical modelling processes for the purpose of comparing one with the other. In the latter case, the mindset of the researchers has invariably been to pitch one procedure against the other. This paper raises the question: why pitch one statistical procedure against another, rather than making the two work together? Accordingly, this paper puts forward an innovative dual-classification scheme for signalling corporate collapse: dual in the sense that it relies on two statistical procedures concurrently. Using a data sample of Australian publicly listed companies, the proposed scheme is tested against the traditional approach taken thus far in the pertinent literature. The results demonstrate that the proposed dual-classification scheme signals collapse with a higher degree of accuracy.
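One possible reading of the dual-classification idea is sketched below: two independent statistical procedures (here, arbitrarily, a logit model and linear discriminant analysis on synthetic ratio data) signal collapse only when they agree. The combination rule and the choice of procedures are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic financial-ratio features and collapse labels, purely for illustration.
rng = np.random.default_rng(2)
X = rng.standard_normal((400, 6))                       # e.g. liquidity, leverage, profitability ratios
y = (X[:, 0] - 1.2 * X[:, 1] + 0.5 * rng.standard_normal(400) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf_a = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # procedure 1 (assumed: logit)
clf_b = LinearDiscriminantAnalysis().fit(X_tr, y_tr)        # procedure 2 (assumed: discriminant analysis)

pred_a = clf_a.predict(X_te)
pred_b = clf_b.predict(X_te)
dual_pred = pred_a & pred_b                                  # signal collapse only when both agree

for name, pred in [("logit only", pred_a), ("LDA only", pred_b), ("dual", dual_pred)]:
    print(name, "accuracy:", (pred == y_te).mean().round(3))
```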

Relevance:

100.00%

Publisher:

Abstract:

The assessment of the direct and indirect requirements for energy is known as embodied energy analysis. For buildings, the direct energy includes that used primarily on site, while the indirect energy includes primarily the energy required for the manufacture of building materials. This thesis is concerned with the completeness and reliability of embodied energy analysis methods. Previous methods tend to address either one of these issues, but not both at the same time. Industry-based methods are incomplete. National statistical methods, while comprehensive, are a ‘black box’ and are subject to errors. A new hybrid embodied energy analysis method is derived to optimise the benefits of previous methods while minimising their flaws. In industry-based studies, known as ‘process analyses’, the energy embodied in a product is traced laboriously upstream by examining the inputs to each preceding process towards raw materials. Process analyses can be significantly incomplete, due to increasing complexity. The other major embodied energy analysis method, ‘input-output analysis’, comprises the use of national statistics. While the input-output framework is comprehensive, many inherent assumptions make the results unreliable. Hybrid analysis methods involve the combination of the two major embodied energy analysis methods discussed above, either based on process analysis or input-output analysis. The intention in both hybrid analysis methods is to reduce errors associated with the two major methods on which they are based. However, the problems inherent to each of the original methods tend to remain, to some degree, in the associated hybrid versions. Process-based hybrid analyses tend to be incomplete, due to the exclusions associated with the process analysis framework. However, input-output-based hybrid analyses tend to be unreliable because the substitution of process analysis data into the input-output framework causes unwanted indirect effects. A key deficiency in previous input-output-based hybrid analysis methods is that the input-output model is a ‘black box’, since important flows of goods and services with respect to the embodied energy of a sector cannot be readily identified. A new input-output-based hybrid analysis method was therefore developed, requiring the decomposition of the input-output model into mutually exclusive components (ie, ‘direct energy paths’). A direct energy path represents a discrete energy requirement, possibly occurring one or more transactions upstream from the process under consideration. For example, the energy required directly to manufacture the steel used in the construction of a building would represent a direct energy path of one non-energy transaction in length. A direct energy path comprises a ‘product quantity’ (for example, the total tonnes of cement used) and a ‘direct energy intensity’ (for example, the energy required directly for cement manufacture, per tonne). The input-output model was decomposed into direct energy paths for the ‘residential building construction’ sector. It was shown that 592 direct energy paths were required to describe 90% of the overall total energy intensity for ‘residential building construction’. By extracting direct energy paths using yet smaller threshold values, they were shown to be mutually exclusive. Consequently, the modification of direct energy paths using process analysis data does not cause unwanted indirect effects. 
A non-standard individual residential building was then selected to demonstrate the benefits of the new input-output-based hybrid analysis method in cases where the products of a sector may not be similar. Particular direct energy paths were modified with case-specific process analysis data. Product quantities and direct energy intensities were derived and used to modify some of the direct energy paths. The intention of this demonstration was to determine whether 90% of the total embodied energy calculated for the building could comprise the process analysis data normally collected for the building. However, it was found that only 51% of the total comprised normally collected process analysis data. The integration of process analysis data with 90% of the direct energy paths by value was unsuccessful because:
• typically only one of the direct energy path components was modified using process analysis data (ie, either the product quantity or the direct energy intensity);
• of the complexity of the paths derived for ‘residential building construction’; and
• of the lack of reliable and consistent process analysis data from industry, for both product quantities and direct energy intensities.
While the input-output model used was the best available for Australia, many errors were likely to be carried through to the direct energy paths for ‘residential building construction’. Consequently, both the value and relative importance of the direct energy paths for ‘residential building construction’ were generally found to be a poor model for the demonstration building. This was expected. Nevertheless, in the absence of better data from industry, the input-output data is likely to remain the most appropriate for completing the framework of embodied energy analyses of many types of products, even in non-standard cases. ‘Residential building construction’ was one of the 22 most complex Australian economic sectors (ie, comprising those requiring between 592 and 3215 direct energy paths to describe 90% of their total energy intensities). Consequently, for the other 87 non-energy sectors of the Australian economy, the input-output-based hybrid analysis method is likely to produce more reliable results than those calculated for the demonstration building using the direct energy paths for ‘residential building construction’. For more complex sectors than ‘residential building construction’, the new input-output-based hybrid analysis method derived here allows available process analysis data to be integrated with the input-output data in a comprehensive framework. The proportion of the result comprising the more reliable process analysis data can be calculated and used as a measure of the reliability of the result for that product or part of the product being analysed (for example, a building material or component). To ensure that future applications of the new input-output-based hybrid analysis method produce reliable results, new sources of process analysis data are required, including for such processes as services (for example, ‘banking’) and processes involving the transformation of basic materials into complex products (for example, steel and copper into an electric motor).
However, even considering the limitations of the demonstration described above, the new input-output-based hybrid analysis method developed achieved the aim of the thesis: to develop a new embodied energy analysis method that allows reliable process analysis data to be integrated into the comprehensive, yet unreliable, input-output framework.
Plain language summary
Embodied energy analysis comprises the assessment of the direct and indirect energy requirements associated with a process. For example, the construction of a building requires the manufacture of steel structural members, and thus indirectly requires the energy used directly and indirectly in their manufacture. Embodied energy is an important measure of ecological sustainability because energy is used in virtually every human activity and many of these activities are interrelated. This thesis is concerned with the relationship between the completeness of embodied energy analysis methods and their reliability. Previous industry-based methods, while reliable, are incomplete. Previous national statistical methods, while comprehensive, are a ‘black box’ subject to errors. A new method is derived, involving the decomposition of the comprehensive national statistical model into components that can be modified discretely using the more reliable industry data, and is demonstrated for an individual building. The demonstration failed to integrate enough industry data into the national statistical model, due to the unexpected complexity of the national statistical data and the lack of available industry data regarding energy and non-energy product requirements. These unique findings highlight the flaws in previous methods. Reliable process analysis and input-output data are required, particularly for those processes that were unable to be examined in the demonstration of the new embodied energy analysis method. This includes the energy requirements of services sectors, such as banking, and processes involving the transformation of basic materials into complex products, such as refrigerators. The application of the new method to less complex products, such as individual building materials or components, is likely to be more successful than to the residential building demonstration.
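A small numerical sketch of the direct-energy-path decomposition described above, using a made-up four-sector input-output table: the total energy intensity of a target sector is expanded into mutually exclusive paths of increasing length, and the largest paths are listed. The matrix, intensities and sector indices are illustrative assumptions, not Australian input-output data.

```python
import numpy as np
from itertools import product

# Toy technology matrix A[i, j]: input from sector i per unit output of sector j,
# and direct energy intensities e (both invented for illustration).
A = np.array([[0.05, 0.10, 0.02, 0.01],
              [0.02, 0.03, 0.15, 0.05],
              [0.01, 0.08, 0.04, 0.20],
              [0.00, 0.02, 0.06, 0.03]])
e = np.array([0.9, 0.3, 0.5, 0.1])          # direct energy intensity of each sector
target = 3                                   # the sector being analysed (stand-in for a construction sector)

total_intensity = (e @ np.linalg.inv(np.eye(4) - A))[target]

# Enumerate direct energy paths of up to 3 transactions upstream of the target sector:
# a path of length k contributes e[i0] * A[i0, i1] * ... * A[i_{k-1}, target].
paths = []
for length in range(0, 4):
    for chain in product(range(4), repeat=length):
        value = e[chain[0]] if chain else e[target]
        nodes = list(chain) + [target]
        for up, down in zip(nodes[:-1], nodes[1:]):
            value *= A[up, down]
        paths.append((nodes, value))

paths.sort(key=lambda p: -p[1])
covered = sum(v for _, v in paths)
print(f"total energy intensity: {total_intensity:.3f}")
print(f"covered by paths up to length 3: {covered / total_intensity:.1%}")
for nodes, value in paths[:5]:               # the few largest paths dominate the total
    print(nodes, round(value, 4))
```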

Relevance:

100.00%

Publisher:

Abstract:

Latin American countries passed from predominantly rural to predominantly urban within a few decades. The level of urbanisation in Brazil progressed from 36% in 1950 to 50% in 1970, escalating to 85% in 2005. This rapid transformation resulted in many social problems, as cities were not able to provide appropriate housing and infrastructure for the growing population. In response, the Brazilian Ministry for Cities created the National System for Social Housing in 2005, with the goal of establishing guidelines at the Federal level and building capacity and funding social housing projects at the State and Local levels. This paper presents research developed in the city of Gramado, Brazil, as part of the Local Social Housing Plan process, with the goal of producing innovative tools to support social housing planning and management. It proposes and tests a methodology to locate and characterise/rank housing deficiencies across the city by combining GIS and fractal geometry analysis. Fractal measurements, such as fractal dimension and lacunarity, are able to differentiate urban morphology; integrated with infrastructure and socio-economic spatial indicators, they can be used to estimate housing problems and help to target, classify and schedule actions to improve housing in cities and regions. Gramado was divided into a grid of 1,000 cells. For each cell, the following indicators were measured: average household income, the percentage of road length that is paved (as a proxy for the availability of infrastructure such as water and sewage), and the fractal dimension and lacunarity of the spatial distribution of dwellings. A statistical model combining those measurements was produced using a sample of 10% of the cells, divided into five housing standards (from high-income/low-density dwellings to slum dwellings). The estimation of the location and level of social housing deficiencies across the whole region using the model, compared to the real situation, achieved high correlations. Simple and based on easily accessible and inexpensive data, the method also helped to overcome the lack of information and the fragmented knowledge of local professionals regarding housing conditions in the area.
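The two fractal measurements used per grid cell can be computed from a binary raster of dwelling locations. The sketch below shows a box-counting estimate of fractal dimension and a simple, non-overlapping-box lacunarity measure on a synthetic raster; the raster size, box sizes and dwelling density are assumptions for illustration.

```python
import numpy as np

# Synthetic 128x128 binary map of dwelling locations (1 = dwelling present); in the
# study each grid cell would be rasterised from the GIS dwelling layer.
rng = np.random.default_rng(3)
dwellings = (rng.random((128, 128)) < 0.08).astype(int)

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s, img.shape[1] // s
        blocks = img[:h * s, :w * s].reshape(h, s, w, s).sum(axis=(1, 3))
        counts.append((blocks > 0).sum())                  # boxes containing at least one dwelling
    # slope of log(count) vs log(1/size) estimates the fractal dimension
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

def lacunarity(img, box=8):
    h, w = img.shape[0] // box, img.shape[1] // box
    masses = img[:h * box, :w * box].reshape(h, box, w, box).sum(axis=(1, 3)).ravel()
    # second moment over squared first moment of box masses (non-overlapping boxes for simplicity)
    return masses.var() / masses.mean() ** 2 + 1

print("fractal dimension:", round(box_counting_dimension(dwellings), 3))
print("lacunarity:", round(lacunarity(dwellings), 3))
```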

Relevance:

100.00%

Publisher:

Abstract:

We present a method for foreground/background separation of audio using a background modelling technique. The technique models the background in an online, unsupervised, and adaptive fashion, and is designed for application to long term surveillance and monitoring problems. The background is determined using a statistical method to model the states of the audio over time. In addition, three methods are used to increase the accuracy of background modelling in complex audio environments. Such environments can cause the failure of the statistical model to accurately capture the background states. An entropy-based approach is used to unify background representations fragmented over multiple states of the statistical model. The approach successfully unifies such background states, resulting in a more robust background model. We adaptively adjust the number of states considered background according to background complexity, resulting in the more accurate classification of background models. Finally, we use an auxiliary model cache to retain potential background states in the system. This prevents the deletion of such states due to a rapid influx of observed states that can occur for highly dynamic sections of the audio signal. The separation algorithm was successfully applied to a number of audio environments representing monitoring applications.
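A highly simplified sketch of an online, adaptive state-based background model in the spirit described: prototype audio states carry weights that adapt to incoming frames, and the highest-weight states are treated as background. The entropy-based unification and the auxiliary model cache are omitted, and all thresholds are assumed values.

```python
import numpy as np

ALPHA = 0.02           # learning rate (assumed)
MATCH_DIST = 2.0       # distance threshold for matching a frame to a state (assumed)
MAX_STATES = 8

def update(states, weights, frame):
    """Match the frame to an existing state and adapt it, or create a new state."""
    if states:
        dists = np.linalg.norm(np.array(states) - frame, axis=1)
        k = int(np.argmin(dists))
        if dists[k] < MATCH_DIST:
            weights[:] = [(1 - ALPHA) * w for w in weights]
            weights[k] += ALPHA
            states[k] = (1 - ALPHA) * states[k] + ALPHA * frame   # adapt the prototype
            return
    if len(states) >= MAX_STATES:                                 # evict the weakest state
        drop = int(np.argmin(weights))
        states.pop(drop)
        weights.pop(drop)
    states.append(frame.copy())
    weights.append(ALPHA)

def is_background(states, weights, frame, coverage=0.7):
    """A frame is background if it matches one of the states covering most of the weight."""
    order = np.argsort(weights)[::-1]
    cum, background = 0.0, []
    for k in order:
        background.append(k)
        cum += weights[k] / sum(weights)
        if cum >= coverage:
            break
    dists = np.linalg.norm(np.array(states)[background] - frame, axis=1)
    return bool(dists.min() < MATCH_DIST)

rng = np.random.default_rng(4)
states, weights = [], []
for t in range(500):                                              # mostly a stable hum, occasional events
    frame = rng.normal(0, 0.3, 16) if rng.random() < 0.9 else rng.normal(3, 0.3, 16)
    update(states, weights, frame)
print(is_background(states, weights, rng.normal(0, 0.3, 16)))     # expected: True (matches the hum)
```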

Relevance:

100.00%

Publisher:

Abstract:

Species that have temperature-dependent sex determination (TSD) often produce highly skewed offspring sex ratios, contrary to long-standing theoretical predictions. This ecological enigma has provoked concern that climate change may induce the production of single-sex generations and hence lead to population extirpation. All species of sea turtles exhibit TSD, many are already endangered, and most already produce sex ratios skewed towards the sex produced at warmer temperatures (females). We tracked male loggerhead turtles (Caretta caretta) from Zakynthos, Greece, throughout the entire interval between successive breeding seasons and identified individuals on their breeding grounds, using photoidentification, to determine breeding periodicity and operational sex ratios. Males returned to breed at least twice as frequently as females. We estimated that the hatchling sex ratio of 70:30 female to male for this rookery will translate into an overall operational sex ratio (OSR) (i.e., the ratio of the total numbers of males and females breeding each year) of close to 50:50 female to male. We followed three male turtles for between 10 and 12 months, during which time they all traveled back to the breeding grounds. Flipper tagging revealed that the proportions of females returning to nest after intervals of 1, 2, 3, and 4 years were 0.21, 0.38, 0.29, and 0.12, respectively (mean interval 2.3 years). A further nine male turtles were tracked for short periods to determine their departure date from the breeding grounds. These departure dates were combined with a photoidentification data set of 165 individuals identified on in-water transect surveys at the start of the breeding season to develop a statistical model of the population dynamics. This model produced a maximum likelihood estimate that males visit the breeding site 2.6 times more often than females (95% CI 2.1, 3.1), which was consistent with the data from satellite tracking and flipper tagging. Increased frequency of male breeding will help ameliorate female-biased hatchling sex ratios. Combined with the ability of males to fertilize the eggs of many females and of females to store sperm to fertilize many clutches, our results imply that the effects of climate change on the viability of sea turtle populations are likely to be less acute than previously suspected.
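A back-of-envelope calculation, using only the figures quoted above, shows how the breeding-frequency difference moves a 70:30 hatchling sex ratio towards a roughly balanced operational sex ratio. This is a simplified arithmetic illustration, not the paper's population-dynamics model.

```python
# Numbers below are those reported in the abstract; the calculation is illustrative only.
hatchling_female, hatchling_male = 0.70, 0.30

# Mean female remigration interval from the flipper-tagging proportions (1-4 year intervals).
intervals = [1, 2, 3, 4]
proportions = [0.21, 0.38, 0.29, 0.12]
female_interval = sum(i * p for i, p in zip(intervals, proportions))   # ~2.3 years

female_breeding_rate = 1.0 / female_interval
male_breeding_rate = 2.6 / female_interval     # males breed ~2.6x as often (ML estimate)

breeding_females = hatchling_female * female_breeding_rate
breeding_males = hatchling_male * male_breeding_rate
osr_female = breeding_females / (breeding_females + breeding_males)

print(f"mean female interval: {female_interval:.1f} years")
print(f"OSR approx {osr_female:.0%} female : {1 - osr_female:.0%} male")   # close to 50:50
```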

Relevance:

100.00%

Publisher:

Abstract:

The analysis of human crowds has widespread uses, from law enforcement to urban engineering and traffic management. All of these require a crowd first to be detected, which is the problem addressed in this paper. Given an image, the algorithm we propose segments it into crowd and non-crowd regions. The main idea is to capture two key properties of crowds: (i) on a narrow scale, the basic element of a crowd should look like a human (only weakly so, due to low resolution, occlusion, clothing variation, etc.), while (ii) on a larger scale, a crowd inherently contains repetitive appearance elements. Our method exploits this by building a pyramid of sliding windows and quantifying how “crowd-like” each level of the pyramid is using an underlying statistical model based on quantized SIFT features. The two aforementioned crowd properties are captured by the resulting feature vector of window responses, which describes the degree of crowd-like appearance around an image location as the surrounding spatial extent is increased.
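A rough sketch of the multi-scale window response idea, with stand-ins for the learned components: SIFT descriptors inside windows of increasing size around a location are quantized against a codebook and compared to a "crowd appearance" histogram. The codebook, the crowd model and the test image are placeholders, and an OpenCV build providing cv2.SIFT_create is assumed; only the pyramid-of-windows structure mirrors the description.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()
# Placeholder codebook trained on random descriptors, and a uniform "crowd" histogram;
# in the real method both would be learned from labelled data.
codebook = KMeans(n_clusters=32, n_init=4, random_state=0).fit(
    np.random.default_rng(5).random((2000, 128)).astype(np.float32))
crowd_hist = np.full(32, 1.0 / 32)

def window_response(gray, cx, cy, half):
    """Quantize SIFT descriptors inside one window and score similarity to the crowd model."""
    patch = gray[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    _, desc = sift.detectAndCompute(patch, None)
    if desc is None or len(desc) == 0:
        return 0.0
    words = codebook.predict(desc.astype(np.float32))
    hist = np.bincount(words, minlength=32) / len(words)
    return float(1.0 - 0.5 * np.abs(hist - crowd_hist).sum())   # histogram similarity in [0, 1]

# Stand-in for a real video frame or photograph.
image = (np.random.default_rng(6).random((480, 640)) * 255).astype(np.uint8)
feature_vector = [window_response(image, 320, 240, half) for half in (16, 32, 64, 128)]
print(feature_vector)   # per-scale "crowd-likeness" responses around pixel (320, 240)
```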

Relevance:

100.00%

Publisher:

Abstract:

We consider the problem of matching a face in a low-resolution query video sequence against a set of higher-quality gallery sequences. This problem is of interest in many applications, such as law enforcement. Our main contribution is an extension of the recently proposed Generic Shape-Illumination Manifold (gSIM) framework. Specifically, (i) we show how super-resolution across pose and scale can be achieved implicitly, by off-line learning of subsampling artefacts; (ii) we use this result to propose an extension to the statistical model of the gSIM by compounding it with a hierarchy of subsampling models at multiple scales; and (iii) we describe an extensive empirical evaluation of the method on over 1300 video sequences: we first measure the degradation in performance of the original gSIM algorithm as query sequence resolution is decreased, and then show that the proposed extension reduces the mean recognition error rate by over 50%.

Relevance:

100.00%

Publisher:

Abstract:

In spite of over two decades of intense research, illumination and pose invariance remain prohibitively challenging aspects of face recognition for most practical applications. The objective of this work is to recognize faces using video sequences both for training and recognition input, in a realistic, unconstrained setup in which lighting, pose and user motion pattern have a wide variability and face images are of low resolution. In particular, there are three areas of novelty: (i) we show how a photometric model of image formation can be combined with a statistical model of generic face appearance variation, learnt offline, to generalize in the presence of extreme illumination changes; (ii) we use the smoothness of geodesically local appearance manifold structure and a robust same-identity likelihood to achieve invariance to unseen head poses; and (iii) we introduce an accurate video sequence “reillumination” algorithm to achieve robustness to face motion patterns in video. We describe a fully automatic recognition system based on the proposed method and an extensive evaluation on 171 individuals and over 1300 video sequences with extreme illumination, pose and head motion variation. On this challenging data set our system consistently demonstrated a nearly perfect recognition rate (over 99.7%), significantly outperforming state-of-the-art commercial software and methods from the literature.

Relevance:

100.00%

Publisher:

Abstract:

Multitasking among three or more different tasks is a ubiquitous requirement of everyday cognition, yet rarely is it addressed in research on healthy adults who have had no specific training in multitasking skills. Participants completed a set of diverse subtasks within a simulated shopping mall and office environment, the Edinburgh Virtual Errands Test (EVET). The aim was to investigate how different cognitive functions, such as planning, retrospective and prospective memory, and visuospatial and verbal working memory, contribute to everyday multitasking. Subtasks were chosen to be diverse, and predictions were derived from a statistical model of everyday multitasking impairments associated with frontal-lobe lesions (Burgess, Veitch, de Lacy Costello, & Shallice, 2000b). Multiple regression indicated significant independent contributions from measures of retrospective memory, visuospatial working memory, and online planning, but not from independent measures of prospective memory or verbal working memory. Structural equation modelling showed that the best fit to the data arose from three underlying constructs, with Memory and Planning having a weak link, but with both having a strong directional pathway to an Intent construct that reflected implementation of intentions. Participants who followed their preprepared plan achieved higher scores than those who altered their plan during multitask performance. This was true regardless of whether the plan was efficient or poor. These results substantially develop and extend the Burgess et al. (2000b) model to healthy adults and yield new insight into the poorly understood area of everyday multitasking. The findings also point to the utility of using virtual environments for investigating this form of complex human cognition.
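The regression component of the analysis has a simple structure, sketched below on simulated data with hypothetical variable names: EVET performance is regressed on separate memory, working-memory and planning measures, and the independent contribution of each predictor is inspected. Only the analysis structure mirrors the study; the data and effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 120
df = pd.DataFrame({
    "retro_memory": rng.standard_normal(n),
    "vs_working_memory": rng.standard_normal(n),
    "planning": rng.standard_normal(n),
    "prospective_memory": rng.standard_normal(n),
    "verbal_working_memory": rng.standard_normal(n),
})
# Simulated outcome loading on the three predictors the study found significant.
df["evet_score"] = (0.4 * df.retro_memory + 0.35 * df.vs_working_memory
                    + 0.3 * df.planning + rng.standard_normal(n))

model = smf.ols(
    "evet_score ~ retro_memory + vs_working_memory + planning"
    " + prospective_memory + verbal_working_memory", data=df).fit()
print(model.summary().tables[1])   # independent contribution of each predictor
```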

Relevance:

100.00%

Publisher:

Abstract:

This article argues that it is not just trust-generating but also trust-inhibiting mechanisms that operate in teams, and that these cooperative and competitive structures of interpersonal relations of trust within teams may affect team performance. Specifically, we propose that the presence of trust-generating structures (e.g., reciprocity, trusting in the referrals of others we trust, trusting in high performers and more experienced people) and the absence of trust-inhibiting structures (e.g., not trusting in the referrals of others we trust) are more likely to be associated with successful teams. Using exponential random graph models, a particular class of statistical models for social networks, we examine three professional sporting teams from the Australian Football League for the presence and absence of these mechanisms of interpersonal relations of trust. Quantitative network results indicate a differential presence of these postulated structures of trust relations in line with our hypotheses. Qualitative comparisons of these quantitative findings with team performance measures suggest a link between trust-generating and trust-inhibiting mechanisms and team performance. Further theorization on other trust-inhibiting structures of trust relations, and related empirical work, is likely to shed more light on these connections.
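ERGMs themselves are usually fitted with dedicated software (for example, the R statnet/ergm packages); the sketch below only computes, with networkx on a toy directed trust network, two of the local configurations the article discusses: reciprocated ties and transitive triads ("trusting in the referrals of others we trust"). The edge list is invented for illustration.

```python
import networkx as nx

# Toy "who trusts whom" network (directed edges point from truster to trusted).
edges = [("A", "B"), ("B", "A"),              # reciprocated trust
         ("A", "C"), ("C", "D"), ("A", "D"),  # transitive triad: A trusts C, C trusts D, A trusts D
         ("E", "A"), ("D", "E")]
G = nx.DiGraph(edges)

print("reciprocity:", nx.reciprocity(G))      # share of trust ties that are reciprocated
census = nx.triadic_census(G)                 # counts of all 16 directed triad types
print("transitive triads (030T):", census["030T"])
```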

Relevance:

100.00%

Publisher:

Abstract:

Anomaly detection techniques are used to find the presence of anomalous activities in a network by comparing traffic data activities against a "normal" baseline. Although this approach has several advantages, including the detection of "zero-day" attacks, precisely defining what constitutes a deviation from a system's "normal" behaviour is important for reducing the number of false positives. This study proposes a novel multi-agent network-based framework known as Statistical model for Correlation and Detection (SCoDe), an anomaly detection framework that looks for time-correlated anomalies by leveraging the statistical properties of a large network, monitoring the rate of event occurrence based on intensity. SCoDe is an instantaneous learning-based anomaly detector, practically shifting away from the conventional technique of having a training phase prior to detection. It acquires its training using an improved extension of the Exponentially Weighted Moving Average (EWMA), which is proposed in this study. SCoDe requires no prior knowledge of the network traffic, nor a reference window chosen by network administrators as "normal"; instead, it builds upon the statistical properties of different attributes of the network traffic to correlate undesirable deviations and identify abnormal patterns. The approach is generic, as it can be easily modified to fit particular types of problems with a predefined attribute, and it is highly robust because of the proposed statistical approach. The proposed framework was targeted at detecting attacks that increase the number of activities on the network server, such as Distributed Denial of Service (DDoS), flood, and flash-crowd events. This paper provides a mathematical foundation for SCoDe, describing the specific implementation and testing of the approach based on a network log file generated from the cyber range simulation experiment of the industrial partner of this project.
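A minimal sketch of EWMA-based rate monitoring in the spirit of the framework: the smoothed event rate is tracked and a window is flagged when its observed rate deviates from the EWMA baseline by more than a control limit. The smoothing factor, limit width and synthetic traffic are assumptions, not SCoDe's exact formulation.

```python
import numpy as np

LAMBDA = 0.2        # EWMA smoothing factor (assumed)
L_WIDTH = 3.0       # control-limit width in baseline standard deviations (assumed)

rng = np.random.default_rng(8)
rates = rng.poisson(100, 300).astype(float)   # events per monitoring window: normal traffic ~100
rates[200:220] += 400                         # injected flood / DDoS-like burst

ewma = rates[0]
threshold = L_WIDTH * rates[:50].std()        # baseline variability from an initial quiet period
alarms = []
for t in range(1, len(rates)):
    if abs(rates[t] - ewma) > threshold:
        alarms.append(t)                      # anomalous window; keep it out of the baseline
    else:
        ewma = LAMBDA * rates[t] + (1 - LAMBDA) * ewma
print("alarm windows:", alarms)               # expect roughly windows 200-219 (plus occasional false alarms)
```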