943 results for Statistical physics
Abstract:
Provision of network infrastructure to meet rising network peak demand is increasing the cost of electricity. Addressing this demand is a major imperative for Australian electricity agencies. The network peak demand model reported in this paper provides a quantified decision support tool and a means of understanding the key influences and impacts on network peak demand. An investigation of the system factors impacting residential consumers’ peak demand for electricity was undertaken in Queensland, Australia. Technical factors, such as the customers’ location, housing construction and appliances, were combined with social factors, such as household demographics, culture, trust and knowledge, and Change Management Options (CMOs) such as tariffs, price, managed supply, etc., in a conceptual ‘map’ of the system. A Bayesian network was used to quantify the model and provide insights into the major influential factors and their interactions. The model was also used to examine the reduction in network peak demand with different market-based and government interventions in various customer locations of interest, and to investigate the relative importance of instituting programs that build trust and knowledge through well designed customer-industry engagement activities. The Bayesian network was implemented via a spreadsheet with a tick box interface. The model combined available data from industry-specific and public sources with relevant expert opinion. The results revealed that the most effective intervention strategies involve combining particular CMOs with associated education and engagement activities. The model demonstrated the importance of designing interventions that take into account the interactions of the various elements of the socio-technical system. The options that provided the greatest impact on peak demand were Off-Peak Tariffs, Managed Supply, and increases in the price of electricity. The impact on peak demand reduction differed for each of the locations and highlighted that household numbers and demographics, as well as the different climates, were significant factors. It presented possible network peak demand reductions which would delay any upgrade of networks, resulting in savings for Queensland utilities and ultimately for households. This systems approach, using Bayesian networks to assist the management of peak demand in different modelled locations in Queensland, provided insights into the most important elements in the system and the intervention strategies that could be tailored to the targeted customer segments.
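The abstract above describes quantifying a socio-technical "map" with a Bayesian network and querying it under different interventions. Below is a minimal sketch of that idea, assuming a toy two-parent network with hypothetical variables (tariff type, household engagement) and made-up probabilities; it is not the paper's model or data.

```python
# Minimal sketch: exact inference by enumeration in a tiny discrete Bayesian
# network, illustrating how a CMO (off-peak tariff) and an engagement programme
# could jointly shift the probability of high network peak demand.
from itertools import product

# Priors over the two parent factors (illustrative numbers only).
p_tariff = {"off_peak": 0.5, "flat": 0.5}          # tariff type offered
p_engaged = {"yes": 0.3, "no": 0.7}                # household engaged/educated

# Conditional probability of high peak demand given both parents (assumed).
p_high_peak = {
    ("off_peak", "yes"): 0.15,
    ("off_peak", "no"): 0.35,
    ("flat", "yes"): 0.40,
    ("flat", "no"): 0.55,
}

def prob_high_peak(evidence=None):
    """P(peak = high | evidence), summing over any unobserved parents."""
    evidence = evidence or {}
    num = den = 0.0
    for t, e in product(p_tariff, p_engaged):
        if evidence.get("tariff", t) != t or evidence.get("engaged", e) != e:
            continue
        w = p_tariff[t] * p_engaged[e]
        num += w * p_high_peak[(t, e)]
        den += w
    return num / den

print("Baseline:", round(prob_high_peak(), 3))
print("Off-peak tariff + engagement:",
      round(prob_high_peak({"tariff": "off_peak", "engaged": "yes"}), 3))
```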
Abstract:
Some statistical procedures already available in the literature are employed in developing the water quality index (WQI). The complexity and interdependency that occur in the physical and chemical processes of water could be more easily explained if statistical approaches were applied to water quality indexing. The most popular statistical method used in developing the WQI is principal component analysis (PCA). In the literature, WQI development based on classical PCA has mostly used water quality data that have been transformed and normalized. Outliers may be retained in or eliminated from the analysis. However, the classical mean and sample covariance matrix used in the classical PCA methodology are not reliable if outliers exist in the data. Since the presence of outliers may affect the computation of the principal components, robust principal component analysis (RPCA) should be used. Focusing on the Langat River, the RPCA-WQI was introduced for the first time in this study to re-calculate the DOE-WQI. Results show that the RPCA-WQI is capable of capturing a distribution similar to the existing DOE-WQI.
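As a rough illustration of the robust-PCA step described above, the sketch below replaces the classical mean and sample covariance with a minimum covariance determinant estimate before extracting components. The data, the choice of two components, and the variance-weighted index are illustrative assumptions, not the DOE-WQI or RPCA-WQI formulas.

```python
# Sketch of a robust-PCA-style index, assuming a matrix X of standardised
# water-quality parameters (rows = samples, columns = e.g. DO, BOD, COD,
# NH3-N, SS, pH). Hypothetical data only.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:5] *= 8                      # a few gross outliers

X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Robust location/scatter instead of the classical sample mean and covariance,
# so the outliers do not drive the principal components.
mcd = MinCovDet(random_state=0).fit(X_std)
eigval, eigvec = np.linalg.eigh(mcd.covariance_)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Index: scores on the leading components, weighted by explained variance.
k = 2
scores = (X_std - mcd.location_) @ eigvec[:, :k]
weights = eigval[:k] / eigval[:k].sum()
rpca_index = scores @ weights
print(rpca_index[:5])
```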
Abstract:
In this work, we consider subordinated processes controlled by a family of subordinators which consist of a power function of a time variable and a negative power function of an α-stable random variable. The effect of the parameters in the subordinators on the subordinated process is discussed. By suitable variable substitutions and the Laplace transform technique, the corresponding fractional Fokker–Planck-type equations are derived. We also compute their mean square displacements in a force-free field. By choosing suitable ranges of parameters, the resulting subordinated processes may be subdiffusive, normal diffusive or superdiffusive.
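A heavily hedged Monte Carlo sketch of the kind of mean square displacement calculation mentioned above is given below. It evaluates Brownian motion at an operational time combining a power of t with a negative power of a one-sided stable variable; the specific form S(t) = t^κ / X^η, the exponents, and the use of the α = 1/2 (Lévy) case are illustrative assumptions, not the paper's subordinator family.

```python
# Monte Carlo sketch: MSD of Brownian motion evaluated at an operational time
# S(t) = t**kappa / X**eta, with X a one-sided alpha-stable variable
# (alpha = 1/2, the Levy law, chosen because a closed-form sampler exists).
import numpy as np
from scipy.stats import levy

n_samples = 200_000
kappa, eta = 0.6, 0.5
X = levy.rvs(size=n_samples, random_state=1)     # positive stable, alpha = 1/2

times = np.logspace(0, 3, 10)
msd = []
for t in times:
    S = t**kappa / X**eta                        # operational (subordinated) time
    # For standard BM, E[B(S)^2 | S] = S, so MSD(t) = E[S(t)].
    msd.append(S.mean())

slope = np.polyfit(np.log(times), np.log(msd), 1)[0]
print("MSD ~ t^%.2f  (subdiffusive if < 1, normal if = 1, superdiffusive if > 1)" % slope)
```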
Abstract:
In this work, we study the fractal and multifractal properties of a family of fractal networks introduced by Gallos et al (2007 Proc. Nat. Acad. Sci. USA 104 7746). In this fractal network model, there is a parameter e, between 0 and 1, that allows tuning the level of fractality in the network. Here we examine the multifractal behavior of these networks and the dependence of the fractal dimension and the multifractal parameters on the parameter e. First, we find that the empirical fractal dimensions of these networks obtained by our program coincide with the theoretical formula given by Song et al (2006 Nature Phys. 2 275). Then, from the shape of the τ(q) and D(q) curves, we find evidence of multifractality in these networks. Finally, we find a linear relationship between the average information dimension 〈D(1)〉 and the parameter e.
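For readers unfamiliar with the τ(q) and D(q) curves mentioned above, the sketch below shows the generic multifractal formalism: τ(q) as the slope of log Σ p_i^q versus log box size, and D(q) = τ(q)/(q-1). The box masses here come from a simple binomial cascade as a stand-in measure; this is not the paper's network box-covering algorithm.

```python
# Sketch of the multifractal formalism behind tau(q) and D(q) curves.
import numpy as np

def cascade_masses(levels, m=0.7):
    """Binomial cascade: at each level every mass splits into m and 1-m parts."""
    p = np.array([1.0])
    for _ in range(levels):
        p = np.concatenate([m * p, (1 - m) * p])
    return p

def tau_and_D(qs, max_level=10):
    levels = np.arange(4, max_level + 1)
    log_l = -levels * np.log(2.0)                 # box size l = 2**(-level)
    tau = []
    for q in qs:
        log_Z = [np.log(np.sum(cascade_masses(L) ** q)) for L in levels]
        tau.append(np.polyfit(log_l, log_Z, 1)[0])
    tau = np.array(tau)
    with np.errstate(divide="ignore", invalid="ignore"):
        D = np.where(np.isclose(qs, 1.0), np.nan, tau / (qs - 1.0))
    return tau, D

qs = np.linspace(-5, 5, 21)
tau, D = tau_and_D(qs)
print("D(0) = %.3f, D(2) = %.3f" % (D[np.isclose(qs, 0)][0], D[np.isclose(qs, 2)][0]))
```

A spectrum where D(q) varies with q (as it does for this cascade) signals multifractality; a monofractal would give a flat D(q).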
Abstract:
The objective of this study was to compare the short-term respiratory effects due to the inhalation of electronic and conventional tobacco cigarette-generated mainstream aerosols through the measurement of the exhaled nitric oxide (eNO). To this purpose, twenty-five smokers were asked to smoke a conventional cigarette and to vape an electronic cigarette (with and without nicotine), and an electronic cigarette without liquid (control session). Electronic and tobacco cigarette mainstream aerosols were characterized in terms of total particle number concentrations and size distributions. On the basis of the measured total particle number concentrations and size distributions, the average particle doses deposited in the alveolar and tracheobronchial regions of the lungs for a single 2-s puff were also estimated, considering a subject performing resting (sitting) activity. Total particle number concentrations in the mainstream were equal to 3.5 ± 0.4 × 10⁹, 5.1 ± 0.1 × 10⁹, and 3.1 ± 0.6 × 10⁹ part. cm⁻³ for electronic cigarettes without nicotine, with nicotine, and for conventional cigarettes, respectively. The corresponding alveolar doses for a resting subject were estimated to be 3.8 × 10¹⁰, 5.2 × 10¹⁰ and 2.3 × 10¹⁰ particles. The mean eNO variations measured after each smoking/vaping session were equal to 3.2 ppb, 2.7 ppb and 2.8 ppb for electronic cigarettes without nicotine, with nicotine, and for conventional cigarettes, respectively; whereas negligible eNO changes were measured in the control session. Statistical tests performed on eNO data showed statistically significant differences between smoking/vaping sessions and the control session, thus confirming a similar effect on human airways whatever the cigarette smoked/vaped, the nicotine content, and the particle dose received.
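The dose estimate described above is essentially concentration × puff volume × deposition fraction. A back-of-envelope sketch follows; the concentration is taken from the abstract, but the puff volume and alveolar deposition fraction are illustrative assumptions, not the paper's values.

```python
# Back-of-envelope alveolar dose estimate for a single 2-s puff.
concentration = 3.5e9       # particles per cm^3 (e-cigarette without nicotine)
puff_volume_cm3 = 50.0      # assumed volume of a single 2-s puff
alveolar_deposition = 0.20  # assumed deposited fraction in the alveolar region

alveolar_dose = concentration * puff_volume_cm3 * alveolar_deposition
print(f"Estimated alveolar dose per puff ~ {alveolar_dose:.1e} particles")
```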
Abstract:
This article examines a social media assignment used to teach and practice statistical literacy with over 400 students each semester in large-lecture traditional, fully online, and flipped sections of an introductory-level statistics course. Following the social media assignment, students completed a survey on how they approached the assignment. Drawing from the authors’ experiences with the project and the survey results, this article offers recommendations for developing social media assignments in large courses, focusing on the interplay between the social media tool and the implications of assignment prompts.
Abstract:
Using surface charts at 0330 GMT, the movement of the monsoon trough during the months June to September 1990 at two fixed longitudes, namely 79 degrees E and 85 degrees E, is studied. The probability distribution of trough position shows that the median, mean and mode occur at progressively more northern latitudes, especially at 85 degrees E, with a pronounced mode that is close to the northern-most limit reached by the trough. A spectral analysis of the fluctuating latitudinal position of the trough is carried out using FFT and the Maximum Entropy Method (MEM). Both methods show significant peaks around 7.5 and 2.6 days, and a less significant one around 40-50 days. The two peaks at the shorter periods are more prominent at the eastern longitude. MEM shows an additional peak around 15 days. A study of the weather systems that occurred during the season shows them to have a duration of around 3 days and an interval between systems of around 9 days, suggesting a possible correlation with the dominant short periods observed in the spectrum of trough position.
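A minimal sketch of the FFT side of such a spectral analysis is given below: a periodogram of a daily latitude series with the dominant periods reported in days. The series is synthetic, with oscillations deliberately embedded near the periods quoted above; it is not the observed 1990 trough data.

```python
# Periodogram sketch of a synthetic daily trough-latitude series.
import numpy as np

n_days = 122                                   # June-September
t = np.arange(n_days)
rng = np.random.default_rng(0)
latitude = (25.0
            + 1.5 * np.sin(2 * np.pi * t / 7.5)
            + 1.0 * np.sin(2 * np.pi * t / 2.6)
            + rng.normal(scale=0.5, size=n_days))

latitude -= latitude.mean()
power = np.abs(np.fft.rfft(latitude)) ** 2
freq = np.fft.rfftfreq(n_days, d=1.0)          # cycles per day

# Report the three strongest non-zero-frequency peaks as periods in days.
peaks = np.argsort(power[1:])[::-1][:3] + 1
print("Dominant periods (days):", np.round(1.0 / freq[peaks], 1))
```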
Abstract:
The main objective of statistical analysis of experimental investigations is to make predictions on the basis of mathematical equations so as to minimize the number of experiments required. Abrasive jet machining (AJM) is an unconventional and novel machining process wherein microabrasive particles are propelled at high velocities onto a workpiece. The resulting erosion can be used for cutting, etching, cleaning, deburring, drilling and polishing. In the study completed by the authors, statistical design of experiments was successfully employed to predict the rate of material removal by AJM. This paper discusses the details of such an approach and the findings.
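To make the design-of-experiments idea concrete, here is a small sketch of fitting a first-order prediction equation to a 2³ factorial experiment. The factor names (pressure, stand-off distance, abrasive flow rate) and the response values are illustrative assumptions, not the paper's data.

```python
# First-order model fitted to a full 2^3 factorial design (coded -1/+1 levels).
import numpy as np
from itertools import product

design = np.array(list(product([-1, 1], repeat=3)), dtype=float)

# Hypothetical material removal rate (mg/min) measured at each of the 8 runs.
mrr = np.array([4.1, 6.0, 4.8, 7.1, 5.2, 7.9, 6.1, 9.0])

# Model: MRR = b0 + b1*pressure + b2*standoff + b3*abrasive_flow
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, mrr, rcond=None)
print("b0..b3 =", np.round(coef, 3))

# Predicted MRR at a new (coded) setting, e.g. high pressure, mid abrasive flow.
x_new = np.array([1.0, 1.0, 0.0, 0.5])
print("Predicted MRR:", round(float(x_new @ coef), 2))
```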
Abstract:
This is the first of two papers that map (dis)continuities in notions of power from Aristotle to Newton to Foucault. They trace the ways in which bio-physical conceptions of power became paraphrased in social science and deployed in educational discourse on the child and curriculum from post-Newtonian times to the present. The analyses suggest that, amid ruptures in the definition, role, location and meaning given 'power' historically in various 'physical' and 'social' cosmologies, the naming of 'power' has been dependent on 'physics', on the theorization of motion across 'Western' sciences. This first paper examines some (dis)continuities in regard to histories of motion and power from Aristotelian 'natural science' to Newtonian mechanics.
Abstract:
The physical mechanism through which the El Niño-Southern Oscillation (ENSO) tends to produce deficient precipitation over the Indian continent is investigated using both observations and a general circulation model. Both the analysis of observations and the atmospheric general circulation model (AGCM) study show that the planetary-scale response associated with ENSO primarily influences the equatorial Indian Ocean region. Through this interaction it tends to favour the equatorial heat source, enhance precipitation over the equatorial Indian Ocean and indirectly cause a decrease in continental precipitation through induced subsidence. This situation is further complicated by the fact that the regional tropospheric quasi-biennial oscillation (QBO) has a bimodal structure over this region with large amplitude over the Indian continent. While the ENSO response has a quasi-four-year periodicity and tends to peak at the beginning of the calendar year, the QBO mode tends to peak during the northern summer. Thus, the QBO mode exerts a stronger influence on the interannual variability of the monsoon. The strength of the Indian monsoon in a given year depends on the combined effect of the ENSO and QBO modes. Since the two oscillations have disparate time scales, the exact phase of the two modes during the northern summer is important in determining the Indian summer monsoon. The physical mechanism of the interannual variations of the Indian monsoon precipitation associated with ENSO presented here is similar to the physical process that causes intraseasonal 'active'-'break' oscillations of the monsoon.
Abstract:
We study the generation of defects when a quantum spin system is quenched through a multicritical point by changing a parameter of the Hamiltonian as t/τ, where τ is the characteristic timescale of the quenching. We argue that when a quantum system is quenched across a multicritical point, the density of defects (n) in the final state is not necessarily given by the Kibble-Zurek scaling form n ∼ τ^{-dν/(zν+1)}, where d is the spatial dimension, and ν and z are respectively the correlation length and dynamical exponents associated with the quantum critical point. We propose a generalized scaling form of the defect density given by n ∼ τ^{-d/(2z₂)}, where the exponent z₂ determines the behavior of the off-diagonal term of the 2 × 2 Landau-Zener matrix at the multicritical point. This scaling is valid not only at a multicritical point but also at an ordinary critical point.
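For context, the standard Kibble-Zurek exponent quoted above follows from the usual freeze-out argument; the block below is a compact sketch of that textbook argument (not of the paper's multicritical generalization), with the generalized form shown alongside for comparison.

```latex
% Kibble-Zurek estimate (sketch): the system falls out of equilibrium when
% its relaxation time equals the time remaining to reach the critical point.
\[
  \xi \sim |\lambda|^{-\nu}, \qquad
  t_{\mathrm{relax}} \sim \xi^{z} \sim |\lambda|^{-z\nu}, \qquad
  \lambda(t) = t/\tau .
\]
\[
  t_{\mathrm{relax}}\bigl(\lambda(\hat t\,)\bigr) \sim \hat t
  \;\Longrightarrow\;
  \hat t \sim \tau^{\,z\nu/(z\nu+1)}, \qquad
  \hat \xi \sim \tau^{\,\nu/(z\nu+1)} .
\]
\[
  n \sim \hat\xi^{\,-d} \sim \tau^{-d\nu/(z\nu+1)},
  \qquad\text{versus the multicritical form } n \sim \tau^{-d/(2z_{2})}.
\]
```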
Abstract:
This is a summary of the beyond-the-Standard-Model (including model building) working group of the WHEPP-X workshop held at Chennai from January 3 to 15, 2008.
Abstract:
Objective: The aim of this study was to explore whether there is a relationship between the degree of MR-defined inflammation, using ultrasmall superparamagnetic iron oxide (USPIO) particles, and biomechanical stress, using finite element analysis (FEA) techniques, in carotid atheromatous plaques. Methods and Results: 18 patients with angiographically proven carotid stenoses underwent multi-sequence MR imaging before and 36 h after USPIO infusion. T2*-weighted images were manually segmented into quadrants, and the signal change in each quadrant, normalised to adjacent muscle, was calculated after USPIO administration. Plaque geometry was obtained from the rest of the multi-sequence dataset and used within a FEA model to predict the maximal stress concentration within each slice. Subsequently, a new statistical model was developed to explicitly investigate the form of the relationship between biomechanical stress and signal change. The Spearman's rank correlation coefficient for USPIO-enhanced signal change and maximal biomechanical stress was -0.60 (p = 0.009). Conclusions: There is an association between biomechanical stress and USPIO-enhanced MR-defined inflammation within carotid atheroma, both known risk factors for plaque vulnerability. This underlines the complex interaction between physiological processes and biomechanical mechanisms in the development of carotid atheroma. However, these are preliminary data that will need validation in a larger cohort of patients.
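The quadrant-level analysis above reduces to a rank correlation between two paired measurements. A minimal sketch of that computation follows; the arrays are placeholder values, not the study's measurements.

```python
# Spearman's rank correlation between USPIO-enhanced signal change and
# FEA-predicted maximal stress (placeholder data).
import numpy as np
from scipy.stats import spearmanr

signal_change = np.array([-0.12, -0.05, 0.02, -0.20, -0.08, 0.01, -0.15, -0.03])
max_stress_kpa = np.array([210.0, 150.0, 95.0, 260.0, 180.0, 110.0, 240.0, 130.0])

rho, p_value = spearmanr(signal_change, max_stress_kpa)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```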
Abstract:
The export of sediments from coastal catchments can have detrimental impacts on estuaries and near-shore reef ecosystems such as the Great Barrier Reef. Catchment management approaches aimed at reducing sediment loads require monitoring to evaluate their effectiveness in reducing loads over time. However, load estimation is not a trivial task due to the complex behaviour of constituents in natural streams, the variability of water flows and the often limited amount of data. Regression is commonly used for load estimation and provides a fundamental tool for trend estimation by standardising the other time-specific covariates such as flow. This study investigates whether load estimates and the resultant power to detect trends can be enhanced by (i) modelling the error structure so that temporal correlation can be better quantified, (ii) making use of predictive variables, and (iii) identifying an efficient and feasible sampling strategy that may be used to reduce sampling error. To achieve this, we propose a new regression model that includes an innovative compounding-errors model structure and uses two additional predictive variables (average discounted flow and turbidity). By combining this modelling approach with a new, regularly optimised sampling strategy, which adds uniformity to the event sampling strategy, the predictive power was increased to 90%. Using the enhanced regression model proposed here, it was possible to detect a trend of 20% over 20 years. This result is in stark contrast to previous conclusions presented in the literature.
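The "power to detect a trend" quoted above is typically estimated by simulation: generate many synthetic load series with a known trend and autocorrelated noise, fit the trend, and count how often it is detected. A simplified sketch follows; the noise level, AR(1) coefficient and significance level are illustrative assumptions, and the trend test here is plain OLS rather than the paper's compounding-errors model.

```python
# Simulation-based power estimate for detecting a 20% decline over 20 years.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
years, per_year = 20, 12
n = years * per_year
t = np.arange(n) / per_year
trend = np.log(0.8) * t / years          # 20% reduction over 20 years (log scale)

def simulate_once(sigma=0.4, phi=0.5):
    e = np.empty(n)
    e[0] = rng.normal(scale=sigma)
    for i in range(1, n):                 # AR(1) errors standing in for the
        e[i] = phi * e[i - 1] + rng.normal(scale=sigma * np.sqrt(1 - phi**2))
    y = trend + e                         # compounding-error structure
    slope, _, _, p_value, _ = stats.linregress(t, y)
    # Note: OLS p-values ignore the autocorrelation, which is precisely why
    # the paper models the error structure explicitly.
    return p_value < 0.05 and slope < 0

n_sim = 500
power = np.mean([simulate_once() for _ in range(n_sim)])
print(f"Estimated power to detect the trend: {power:.2f}")
```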
Abstract:
We consider the development of statistical models for the prediction of constituent concentrations of riverine pollutants, which is a key step in load estimation from frequent flow rate data and less frequently collected concentration data. We consider how to capture the impacts of past flow patterns via the average discounted flow (ADF), which discounts the past flux based on the time elapsed - more recent fluxes are given more weight. However, the effectiveness of the ADF depends critically on the choice of the discount factor, which reflects the unknown environmental cumulating process of the concentration compounds. We propose to choose the discount factor by maximizing the adjusted R² value or the Nash-Sutcliffe model efficiency coefficient. The R² values are adjusted to take account of the number of parameters in the model fit. The resulting optimal discount factor can be interpreted as a measure of the constituent exhaustion rate during flood events. To evaluate the performance of the proposed regression estimators, we examine two different sampling scenarios by resampling fortnightly and opportunistically from two real daily datasets, which come from two United States Geological Survey (USGS) gaging stations located in the Des Plaines River and Illinois River basins. The generalized rating-curve approach produces biased estimates of the total sediment loads by -30% to 83%, whereas the new approaches produce much lower biases, ranging from -24% to 35%. This substantial improvement in the estimates of the total load is due to the fact that the predictability of concentration is greatly improved by the additional predictors.
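The discount-factor selection described above can be sketched as a simple grid search over δ, refitting the regression and keeping the δ with the highest adjusted R². The sketch below uses a recursive ADF_t = flow_t + δ·ADF_{t-1} and synthetic data in place of the USGS series; the regression is a simplified rating-curve variant, not the paper's exact model.

```python
# Grid search for the ADF discount factor that maximises adjusted R^2.
import numpy as np

rng = np.random.default_rng(7)
n = 365
log_flow = np.cumsum(rng.normal(scale=0.15, size=n)) + 3.0
flow = np.exp(log_flow)

def adf(flow, delta):
    out = np.empty_like(flow)
    out[0] = flow[0]
    for i in range(1, len(flow)):
        out[i] = flow[i] + delta * out[i - 1]   # discounted running sum of flux
    return out

# Synthetic "true" concentration driven by flow and a discounted-flow term.
log_conc = 0.6 * log_flow - 0.3 * np.log(adf(flow, 0.9)) + rng.normal(scale=0.1, size=n)

def adjusted_r2(delta):
    X = np.column_stack([np.ones(n), log_flow, np.log(adf(flow, delta))])
    beta, *_ = np.linalg.lstsq(X, log_conc, rcond=None)
    resid = log_conc - X @ beta
    r2 = 1 - (resid @ resid) / ((log_conc - log_conc.mean()) ** 2).sum()
    p = X.shape[1] - 1                          # number of predictors
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

deltas = np.linspace(0.5, 0.99, 50)
best = deltas[np.argmax([adjusted_r2(d) for d in deltas])]
print(f"Discount factor maximising adjusted R^2: {best:.2f}")
```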