152 results for Dimensionless Parameter
Abstract:
Total cross sections for neutron scattering from nuclei, with energies ranging from 10 to 600 MeV and for many nuclei spanning the mass range 6Li to 238U, have been analyzed using a simple, three-parameter functional form. The calculated cross sections are compared with results obtained using microscopic (g-folding) optical potentials as well as with experimental data. The functional form reproduces those total cross sections very well. When allowance is made for Ramsauer-like effects in the scattering, the required parameters of the functional form vary smoothly with energy and target mass, and they too can be represented by functions of energy and mass.
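As a rough illustration of fitting such a three-parameter form, the sketch below performs a nonlinear least-squares fit with SciPy. The functional form sigma_model and all data values are invented placeholders, not the form or data used in the paper.

```python
# Hypothetical sketch: least-squares fit of a generic three-parameter form
# sigma(E) = p0 * (1 + p1 * exp(-E / p2)) to total cross-section data.
# The paper's actual functional form is not reproduced here.
import numpy as np
from scipy.optimize import curve_fit

def sigma_model(E, p0, p1, p2):
    """Placeholder three-parameter form standing in for the paper's expression."""
    return p0 * (1.0 + p1 * np.exp(-E / p2))

# Illustrative data: energies in MeV, total cross sections in mb (invented).
E = np.array([10., 50., 100., 200., 400., 600.])
sigma_obs = np.array([2900., 1700., 1300., 1100., 1000., 980.])

popt, pcov = curve_fit(sigma_model, E, sigma_obs, p0=(1000., 2.0, 50.))
print("fitted parameters:", popt)
```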
Abstract:
In this chapter, ideas from ecological psychology and nonlinear dynamics are integrated to characterise decision-making as an emergent property of self-organisation processes in the interpersonal interactions that occur in sports teams. A conceptual model is proposed to capture constraints on the dynamics of decisions and actions in dyadic systems, which has been empirically evaluated in simulations of interpersonal interactions in team sports. For this purpose, co-adaptive interpersonal dynamics in team sports such as rugby union have been studied to reveal control-parameter and collective-variable relations in attacker-defender dyads. Although the interpersonal dynamics of attackers and defenders in 1 vs 1 situations showed characteristics of chaotic attractors, the informational constraints of rugby union typically bounded dyadic systems into low-dimensional attractors. Our work suggests that the dynamics of attacker-defender dyads can be characterised as an evolving sequence, since players' positioning and movements are connected in diverse ways over time.
Abstract:
Ecological dynamics characterizes adaptive behavior as an emergent, self-organizing property of interpersonal interactions in complex social systems. The authors conceptualize and investigate constraints on the dynamics of decisions and actions in the multiagent system of team sports. They studied coadaptive interpersonal dynamics in rugby union to model potential control-parameter and collective-variable relations in attacker–defender dyads. A videogrammetry analysis revealed how some agents generated fluctuations by adapting displacement velocity to create phase transitions and destabilize dyadic subsystems near the try line. Agent interpersonal dynamics exhibited characteristics of chaotic attractors, and the informational constraints of rugby union boxed dyadic systems into a low-dimensional attractor. The data suggest that decisions and actions of agents in sports teams may be characterized as emergent, self-organizing properties governed by laws of dynamical systems at the ecological scale. Further research is needed to generalize this conceptual model of adaptive behavior in performance to other multiagent populations.
Abstract:
The identification of attractors is one of the key tasks in studies of neurobiological coordination from a dynamical systems perspective, and a considerable body of literature has resulted from this task. However, with regard to the typical movement models investigated, the overwhelming majority of actions studied previously belong to the class of continuous, rhythmical movements. In contrast, very few studies have investigated coordination of discrete movements, particularly multi-articular discrete movements. In the present study, we investigated phase transition behavior in a basketball throwing task in which participants were instructed to shoot at the basket from different distances. Adopting the ubiquitous scaling paradigm, throwing distance was manipulated as a candidate control parameter. Using a cluster analysis approach, clear phase transitions between different movement patterns were observed in the performance of only two of the eight participants. The remaining participants used a single movement pattern and varied it according to throwing distance, thereby exhibiting hysteresis effects. Results suggested that, in movement models involving many biomechanical degrees of freedom in degenerate systems, greater movement variation across individuals is available for exploitation. This observation stands in contrast to the movement variation typically observed in studies using more constrained bi-manual movement models. This degenerate system behavior provides new insights and poses fresh challenges to the dynamical systems theoretical approach, requiring further research beyond conventional movement models.
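A minimal sketch of the kind of cluster-based transition detection described above, assuming per-throw feature vectors (e.g., joint angles) are available. The k-means choice, the synthetic features and the transition point are all invented for illustration, not the authors' pipeline.

```python
# Sketch: k-means clustering of per-throw feature vectors, then scanning for
# the distance at which the cluster label switches, as a proxy for a phase
# transition in movement pattern.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
distances = np.linspace(1.0, 7.0, 30)            # candidate control parameter (m)
# Synthetic features: two movement patterns separated around 4 m.
features = np.where(distances[:, None] < 4.0,
                    rng.normal(0.0, 0.3, (30, 4)),
                    rng.normal(2.0, 0.3, (30, 4)))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
switches = np.flatnonzero(np.diff(labels) != 0)
print("transition near distance(s):", distances[switches + 1])
```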
Abstract:
It is important to detect and treat malnutrition in hospital patients so as to improve clinical outcomes and reduce hospital stay. The aim of this study was to develop and validate a nutrition screening tool with a simple and quick scoring system for acute hospital patients in Singapore. In this study, 818 newly admitted patients aged over 18 years were screened using five parameters that contribute to the risk of malnutrition. A dietitian blinded to the nutrition screening score assessed the same patients using the reference standard, Subjective Global Assessment (SGA), within 48 hours. Sensitivity and specificity were established using the Receiver Operating Characteristic (ROC) curve and the best cutoff scores determined. The combination of parameters with the largest Area Under the ROC Curve (AUC) was chosen as the final screening tool, which was named 3-Minute Nutrition Screening (3-MinNS). The combination of the parameters weight loss, intake and muscle wastage (3-MinNS) gave the largest AUC when compared with SGA. Using 3-MinNS, the best cutoff point to identify malnourished patients is three (sensitivity 86%, specificity 83%). The cutoff score to identify subjects at risk of severe malnutrition is five (sensitivity 93%, specificity 86%). 3-Minute Nutrition Screening is a valid, simple and rapid tool to identify malnutrition risk among acute hospital patients in Singapore. It is able to differentiate patients at risk of moderate malnutrition from those at risk of severe malnutrition for prioritization and management purposes.
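The abstract does not state the cutoff-selection criterion, so the sketch below uses Youden's J statistic (maximising sensitivity + specificity - 1) over the ROC curve as a plausible stand-in; the screening scores and SGA labels are simulated.

```python
# Sketch of ROC-based cutoff selection for a screening score against a
# reference-standard label (here: simulated data, Youden's J criterion).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 200)                       # 1 = malnourished by SGA
scores = labels * rng.normal(5, 1.5, 200) + (1 - labels) * rng.normal(2, 1.5, 200)

fpr, tpr, thresholds = roc_curve(labels, scores)
auc = roc_auc_score(labels, scores)
best = np.argmax(tpr - fpr)                            # Youden's J statistic
print(f"AUC = {auc:.2f}; best cutoff = {thresholds[best]:.1f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```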
Abstract:
Controlled rate thermal analysis (CRTA) offers better resolution and a more detailed interpretation of the decomposition processes of a clay mineral such as sepiolite: by approaching equilibrium conditions of decomposition, it eliminates the slow transfer of heat to the sample as a controlling parameter on the decomposition process. Constant-rate decomposition processes of a non-isothermal nature reveal changes in the sepiolite as it is converted to an anhydride. In the dynamic experiment, two dehydration steps are observed over the ~20–170 and 170–350 °C temperature ranges, and three dehydroxylation steps are observed over the temperature ranges 201–337, 337–638 and 638–982 °C. The CRTA technology enables the separation of the thermal decomposition steps.
Abstract:
In this paper, fixed-switching-frequency closed-loop modulation of a voltage-source inverter (VSI) under digital implementation of the modulation process is analyzed and characterized. The sampling frequency of the digital processor is considered to be an integer multiple of the modulation switching frequency. An expression for determining the modulation design parameter is developed for smooth modulation at a fixed switching frequency. The variation of the sampling frequency, switching frequency and modulation index has been analyzed to determine the switching condition under closed loop. It is shown that the switching condition determined from the continuous-time analysis of the closed-loop modulation will ensure smooth modulation upon digital implementation of the modulation process. However, the stability properties need to be tested prior to digital implementation, as they deteriorate at lower sampling frequencies. The maximum closed-loop modulation index should be considered when determining the design parameters for smooth modulation. In particular, a detailed analysis has been carried out by varying the control gain in the sliding-mode control of a two-level VSI. The proposed analysis of the closed-loop modulation of the VSI has been verified for the operation of a distribution static compensator. The theoretical results are validated experimentally on both single- and three-phase systems.
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX).
The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets, and then use cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second moment, seem to underestimate the long memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
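A compact, bare-bones sketch of MF-DFA as summarised above: build the profile, detrend in windows with a polynomial, and measure the scaling of the q-th-order fluctuations, with standard DFA recovered at q = 2. This is an illustration of the technique, not the thesis' implementation.

```python
# Minimal MF-DFA: profile -> windowed polynomial detrending -> q-th order
# fluctuation function F_q(s); the slope of log F_q(s) vs log s estimates
# the generalised Hurst exponent h(q).
import numpy as np

def mfdfa(x, scales, q=2.0, order=1):
    y = np.cumsum(x - np.mean(x))                 # profile of the series
    F = []
    for s in scales:
        n = len(y) // s
        sq = []
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            sq.append(np.mean((seg - trend) ** 2))  # squared fluctuation
        sq = np.asarray(sq)
        F.append(np.mean(sq ** (q / 2.0)) ** (1.0 / q))
    return np.asarray(F)

rng = np.random.default_rng(2)
x = rng.standard_normal(4096)                     # white noise: expect h(2) ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
F = mfdfa(x, scales)
h = np.polyfit(np.log(scales), np.log(F), 1)[0]   # generalised Hurst exponent
print(f"h(2) = {h:.2f}")
```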
Abstract:
This thesis details a methodology to estimate urban stormwater quality based on a set of easy-to-measure physico-chemical parameters. These parameters can be used as surrogate parameters to estimate other key water quality parameters. The key pollutants considered in this study are nitrogen compounds, phosphorus compounds and solids. The use of surrogate parameter relationships to evaluate urban stormwater quality will reduce the cost of monitoring, so that scientists will have the added capability to generate a large amount of data for more rigorous analysis of key urban stormwater quality processes, namely pollutant build-up and wash-off. This in turn will assist in the development of more stringent stormwater quality mitigation strategies. The research methodology was based on a series of field investigations, laboratory testing and data analysis. Field investigations were conducted to collect pollutant build-up and wash-off samples from residential roads and roof surfaces. Past research has identified that these impervious surfaces are the primary pollutant sources to urban stormwater runoff. A specially designed vacuum system and rainfall simulator were used in the collection of pollutant build-up and wash-off samples. The collected samples were tested for a range of physico-chemical parameters. Data analysis was conducted using both univariate and multivariate data analysis techniques. Analysis of build-up samples showed that pollutant loads accumulated on road surfaces are higher than the pollutant loads on roof surfaces. Furthermore, it was found that the fraction of solids smaller than 150 µm is the most polluted particle size fraction in solids build-up on both road and roof surfaces. The analysis of wash-off data confirmed that the simulated wash-off process adopted for this research agrees well with the general understanding of the wash-off process on urban impervious surfaces. The observed pollutant concentrations in wash-off from road surfaces were different to pollutant concentrations in wash-off from roof surfaces. Therefore, firstly, the identification of surrogate parameters was undertaken separately for road and roof surfaces. Secondly, a common set of surrogate parameter relationships was identified for both surfaces together to evaluate urban stormwater quality. Surrogate parameters were identified for nitrogen, phosphorus and solids separately. Electrical conductivity (EC), total organic carbon (TOC), dissolved organic carbon (DOC), total suspended solids (TSS), total dissolved solids (TDS), total solids (TS) and turbidity (TTU) were selected as the relatively easy-to-measure parameters. Consequently, surrogate parameters for nitrogen and phosphorus were identified from the set of easy-to-measure parameters for both road surfaces and roof surfaces. Additionally, surrogate parameters for TSS, TDS and TS, which are key indicators of solids, were obtained from EC and TTU, which can be direct field measurements. The regression relationships developed between the surrogate parameters and the key parameters of interest were of a similar format for road and roof surfaces, namely simple linear regression equations. The identified relationships for road surfaces were DTN-TDS:DOC, TP-TS:TOC, TSS-TTU, TDS-EC and TS-TTU:EC. The identified relationships for roof surfaces were DTN-TDS and TS-TTU:EC. Some of the relationships developed had a relatively high level of confidence whilst others had a relatively low level of confidence.
The relationships obtained for DTN-TDS, DTN-DOC, TP-TS and TS-EC for road surfaces demonstrated good near-site portability potential. Currently, best management practices focus on providing treatment measures for stormwater runoff at catchment outlets, where road and roof runoff are not separated. In this context, it is important to find a common set of surrogate parameter relationships for road surfaces and roof surfaces to evaluate urban stormwater quality. Consequently, the DTN-TDS, TS-EC and TS-TTU relationships were identified as the common relationships capable of providing measurements of DTN and TS irrespective of the surface type.
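A minimal sketch of the surrogate-parameter idea: one simple linear regression predicting total solids (TS) from electrical conductivity (EC), mirroring the TS-EC relationship reported above. All data values are invented for illustration.

```python
# Fit TS = slope * EC + intercept by ordinary least squares and report R^2.
import numpy as np

EC = np.array([40., 75., 110., 150., 210., 260.])     # uS/cm (illustrative)
TS = np.array([55., 90., 140., 180., 250., 300.])     # mg/L (illustrative)

slope, intercept = np.polyfit(EC, TS, 1)
TS_hat = slope * EC + intercept
r2 = 1 - np.sum((TS - TS_hat) ** 2) / np.sum((TS - TS.mean()) ** 2)
print(f"TS = {slope:.2f} * EC + {intercept:.1f}  (R^2 = {r2:.2f})")
```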
Abstract:
An element spacing of less than half a wavelength introduces strong mutual coupling between the ports of compact antenna arrays. The strong coupling causes significant system performance degradation. A decoupling network may compensate for the mutual coupling. Alternatively, port decoupling can be achieved using a modal feed network. In response to an input signal at one of the input ports, this feed network excites the antenna elements in accordance with one of the eigenvectors of the array scattering parameter matrix. In this paper, a novel 4-element monopole array is described. The feed network of the array is implemented as a planar ring-type circuit in stripline with four coupled line sections. The new configuration offers a significant reduction in size, resulting in a very compact array.
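A small sketch of the modal idea described above: the eigenvectors of the array's scattering parameter matrix define the modal excitations that the feed network must realise, and each mode sees a reflection equal to the corresponding eigenvalue, with no coupling between modes. The 4x4 S-matrix below is a real-valued symmetric placeholder (measured S-parameters are generally complex).

```python
# Eigen-decomposition of a (placeholder) symmetric scattering matrix of a
# 4-element array; columns of eigvecs are the modal excitation vectors.
import numpy as np

S = np.array([[0.20, 0.45, 0.30, 0.45],
              [0.45, 0.20, 0.45, 0.30],
              [0.30, 0.45, 0.20, 0.45],
              [0.45, 0.30, 0.45, 0.20]])

eigvals, eigvecs = np.linalg.eigh(S)
print("modal reflection coefficients:", np.round(eigvals, 3))
print("first modal excitation vector:", np.round(eigvecs[:, 0], 3))
```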
Abstract:
This paper investigates the problem of appropriate load sharing in an autonomous microgrid. High-gain angle droop control ensures proper load sharing, especially under weak system conditions. However, it has a negative impact on overall stability. Frequency-domain modeling, eigenvalue analysis and time-domain simulations are used to demonstrate this conflict. A supplementary loop is proposed around the conventional droop control of each DG converter to stabilize the system while using high angle droop gains. The control loops are based on local power measurement and modulation of the d-axis voltage reference of each converter. The coordinated design of the supplementary control loops for each DG is formulated as a parameter optimization problem and solved using an evolutionary technique. The supplementary droop control loop is shown to stabilize the system for a range of operating conditions while ensuring satisfactory load sharing.
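A toy sketch of casting the coordinated gain design as a parameter optimization problem solved by an evolutionary method, here SciPy's differential evolution. The two-state closed-loop matrix and the cost (largest eigenvalue real part) are invented stand-ins for the paper's frequency-domain model.

```python
# Evolutionary tuning of two supplementary gains to damp a toy closed-loop
# system: minimise the maximum real part of the eigenvalues.
import numpy as np
from scipy.linalg import eigvals
from scipy.optimize import differential_evolution

def cost(gains):
    k1, k2 = gains
    # Hypothetical linearised closed-loop matrix of two droop-controlled DGs.
    A = np.array([[-0.5 - k1,  2.0],
                  [-2.0,      -0.3 - k2]])
    return np.max(eigvals(A).real)        # more negative = better damped

result = differential_evolution(cost, bounds=[(0.0, 5.0), (0.0, 5.0)], seed=0)
print("gains:", np.round(result.x, 2), "max Re(eig):", round(float(result.fun), 3))
```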
Abstract:
This paper presents a proposed qualitative framework for discussing the heterogeneous burning of metallic materials, through parameters and factors that influence the melting rate of the solid metallic fuel (either in a standard test or in service). During burning, the melting rate is related to the burning rate and is therefore an important parameter for describing and understanding the burning process, especially since the melting rate is commonly recorded during standard flammability testing of metallic materials and is incorporated into many relative flammability ranking schemes. However, whilst the factors that influence melting rate (such as oxygen pressure or specimen diameter) have been well characterized, there is a need for an improved understanding of how these parameters interact as part of the overall melting and burning of the system. Proposed here is the ‘Melting Rate Triangle’, which aims to provide this focus through a conceptual framework for understanding how the melting rate (of solid fuel) is determined and regulated during heterogeneous burning. In the paper, the proposed conceptual model is shown to be both (a) consistent with known trends and previously observed results, and (b) capable of being expanded to incorporate new data. Also shown are examples of how the Melting Rate Triangle can improve the interpretation of flammability test results. Slusser and Miller previously published an ‘Extended Fire Triangle’ as a useful conceptual model of ignition and the factors affecting ignition, providing industry with a framework for discussion. In this paper it is shown that a ‘Melting Rate Triangle’ provides a similar qualitative framework for burning, leading to an improved understanding of the factors affecting fire propagation and extinguishment.
Abstract:
In this thesis we are interested in financial risk, and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist, and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, perhaps controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there some significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach for modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). These are four-parameter distributions which allow the first four moments to be modelled nearly independently; in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate semi-parametrically the conditional mean and conditional variance. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the DGP underlying the process and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. The GLDs also suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
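A toy sketch of the semi-parametric moving-window use of the GLD: within a window, fit the RS-parameterised GLD quantile function Q(u) = l1 + (u^l3 - (1-u)^l4)/l2 to empirical quantiles, then read VaR off the fitted quantile function at the chosen tail probability. The fitting method (least squares on quantiles) and all data are illustrative assumptions, not the thesis' estimator.

```python
# Fit a GLD (RS parameterisation) to one window of returns by matching
# empirical quantiles, then compute the 99% one-day VaR from its quantile
# function.
import numpy as np
from scipy.optimize import minimize

def gld_quantile(u, l1, l2, l3, l4):
    return l1 + (u ** l3 - (1 - u) ** l4) / l2

def fit_gld(returns):
    u = np.linspace(0.01, 0.99, 99)
    emp = np.quantile(returns, u)
    obj = lambda p: np.sum((gld_quantile(u, *p) - emp) ** 2)
    res = minimize(obj, x0=(np.median(returns), 10.0, 0.1, 0.1),
                   method="Nelder-Mead")
    return res.x

rng = np.random.default_rng(3)
returns = rng.standard_t(df=4, size=1000) * 0.01      # heavy-tailed toy returns
window = returns[-250:]                               # one moving window
params = fit_gld(window)
var_99 = -gld_quantile(0.01, *params)                 # 99% one-day VaR
print(f"99% VaR = {var_99:.4f}")
```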
Abstract:
In this work, we investigate an alternative bootstrap approach based on a result of Ramsey [F.L. Ramsey, Characterization of the partial autocorrelation function, Ann. Statist. 2 (1974), pp. 1296-1301] and on the Durbin-Levinson algorithm to obtain a surrogate series from linear Gaussian processes with long-range dependence. We compare this bootstrap method with other existing procedures in an extensive Monte Carlo experiment by estimating, parametrically and semi-parametrically, the memory parameter d. We consider Gaussian and non-Gaussian processes to demonstrate the robustness of the method to deviations from normality. The approach is also useful for estimating confidence intervals for the memory parameter d, improving the coverage level of the interval.
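A bare-bones sketch of the Durbin-Levinson surrogate idea under Gaussian assumptions: estimate sample autocovariances, run the Durbin-Levinson recursion for one-step predictor coefficients and innovation variances, and synthesise a new series with the same second-order structure. Illustrative only; the paper's procedure builds on Ramsey's partial-autocorrelation characterization.

```python
# Durbin-Levinson recursion on sample autocovariances, then recursive
# generation of a Gaussian surrogate series with matching second moments.
import numpy as np

def sample_acov(x, max_lag):
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

def durbin_levinson_surrogate(x, max_lag, rng):
    g = sample_acov(x, max_lag)
    phi = np.zeros((max_lag + 1, max_lag + 1))
    v = np.zeros(max_lag + 1)
    v[0] = g[0]
    for k in range(1, max_lag + 1):
        # Partial autocorrelation at lag k, then updated predictor coefficients.
        phi[k, k] = (g[k] - np.dot(phi[k - 1, 1:k], g[k - 1:0:-1])) / v[k - 1]
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, k - 1:0:-1]
        v[k] = v[k - 1] * (1 - phi[k, k] ** 2)
    # Generate the surrogate recursively from the fitted one-step predictors.
    y = np.empty(len(x))
    y[0] = rng.normal(0.0, np.sqrt(v[0]))
    for t in range(1, len(x)):
        p = min(t, max_lag)
        y[t] = np.dot(phi[p, 1:p + 1], y[t - 1::-1][:p]) + rng.normal(0.0, np.sqrt(v[p]))
    return y

rng = np.random.default_rng(4)
x = np.cumsum(rng.standard_normal(500)) * 0.05 + rng.standard_normal(500)
surrogate = durbin_levinson_surrogate(x, max_lag=20, rng=rng)
print(surrogate[:5])
```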
Abstract:
Dynamic and controlled rate thermal analysis (CRTA) has been used to characterise alunites of formula [M(Al)3(SO4)2(OH)6], where M+ is one of the cations K+, Na+ or NH4+. Thermal decomposition occurs in a series of steps: (a) dehydration, (b) well-defined dehydroxylation and (c) desulphation. CRTA offers better resolution and a more detailed interpretation of the water formation processes: by approaching equilibrium conditions of decomposition, it eliminates the slow transfer of heat to the sample as a controlling parameter on the decomposition process. Constant-rate decomposition processes of water formation reveal the subtle nature of dehydration and dehydroxylation.