928 results for Parameter


Relevance:

10.00%

Publisher:

Abstract:

Areal bone mineral density (aBMD) is the most common surrogate measurement for assessing the bone strength of the proximal femur associated with osteoporosis. Additional factors, however, contribute to the overall strength of the proximal femur, primarily the anatomical geometry. Finite element analysis (FEA) is an effective and widely used computer-based simulation technique for modeling mechanical loading of various engineering structures, providing predictions of displacement and induced stress distribution due to the applied load. FEA is therefore inherently dependent upon both density and anatomical geometry. FEA may be performed on both three-dimensional and two-dimensional models of the proximal femur derived from radiographic images, from which the mechanical stiffness may be predicted. This study examined whether the stiffness of the proximal femur computed by two-dimensional finite element analysis of X-ray images (FEXI) and by three-dimensional FEA was more sensitive than aBMD to changes in trabecular bone density and femur geometry. It is assumed that if an outcome measure follows known trends with changes in density and geometric parameters, then an increased sensitivity will be indicative of an improved prediction of bone strength. All three outcome measures increased non-linearly with trabecular bone density, increased linearly with cortical shell thickness and neck width, decreased linearly with neck length, and were relatively insensitive to neck-shaft angle. For femoral head radius, aBMD was relatively insensitive, with two-dimensional FEXI and three-dimensional FEA demonstrating a non-linear increase and decrease in sensitivity, respectively. For neck anteversion, aBMD decreased non-linearly, whereas both two-dimensional FEXI and three-dimensional FEA demonstrated a parabolic-type relationship, with maximum stiffness achieved at an angle of approximately 15°. 
Multi-parameter analysis showed that all three outcome measures demonstrated their highest sensitivity to a change in cortical thickness. When changes in all input parameters were considered simultaneously, three- and two-dimensional FEA had statistically equal sensitivities (0.41±0.20 and 0.42±0.16 respectively, p = ns) that were significantly higher than the sensitivity of aBMD (0.24±0.07; p = 0.014 and 0.002 for three-dimensional and two-dimensional FEA respectively). This simulation study suggests that since mechanical integrity and FEA are inherently dependent upon anatomical geometry, FEXI stiffness, being derived from conventional two-dimensional radiographic images, may provide an improvement over aBMD in the prediction of bone strength of the proximal femur.
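The sensitivity values compared above can be illustrated with a minimal normalized-sensitivity calculation: the relative change in an outcome measure (stiffness or aBMD) divided by the relative change in the input parameter. The function and numbers below are a hypothetical sketch, not the study's actual data:

```python
def normalized_sensitivity(outcome_values, parameter_values):
    """Approximate the normalized sensitivity of an outcome measure
    (e.g. FEA stiffness or aBMD) to a model parameter: the relative
    change in outcome per relative change in parameter."""
    o0, o1 = outcome_values[0], outcome_values[-1]
    p0, p1 = parameter_values[0], parameter_values[-1]
    return abs((o1 - o0) / o0) / abs((p1 - p0) / p0)

# Hypothetical illustration: stiffness rises 20% for a 50% increase
# in cortical thickness -> sensitivity of 0.4
s = normalized_sensitivity([10.0, 12.0], [2.0, 3.0])
```

A dimensionless measure of this kind makes sensitivities comparable across parameters with different units, which is what allows the paper to rank cortical thickness as the dominant input.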

Relevance:

10.00%

Publisher:

Abstract:

Controlled rate thermal analysis (CRTA) technology offers better resolution and a more detailed interpretation of the decomposition processes of a clay mineral such as sepiolite by approaching equilibrium conditions of decomposition, eliminating the slow transfer of heat to the sample as a parameter controlling the decomposition process. Constant-rate decomposition processes of a non-isothermal nature reveal changes in the sepiolite as it is converted to an anhydride. In the dynamic experiment, two dehydration steps are observed over the temperature ranges ~20-170 and 170-350°C, and three dehydroxylation steps are observed over the ranges 201-337, 337-638 and 638-982°C. The CRTA technology enables the separation of the thermal decomposition steps.

Relevance:

10.00%

Publisher:

Abstract:

Monitoring unused or dark IP addresses offers opportunities to extract useful information about both on-going and new attack patterns. In recent years, different techniques have been used to analyze such traffic, including sequential analysis, where a change in traffic behavior (for example, a change in mean) is used as an indication of malicious activity. Change points themselves say little about the detected change; further data processing is necessary to extract useful information and identify the exact cause of the detected change, a task limited by the size and nature of the observed traffic. In this paper, we address the problem of analyzing a large volume of such traffic by correlating change points identified in different traffic parameters. The significance of the proposed technique is two-fold. Firstly, it automatically extracts information related to change points by correlating change points detected across multiple traffic parameters. Secondly, it validates a detected change point through the simultaneous presence of another change point in a different parameter. Using a real network trace collected from unused IP addresses, we demonstrate that the proposed technique enables us not only to validate the change point but also to extract useful information about the causes of change points.
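The correlation step can be sketched as matching change-point timestamps detected in two traffic parameters within a tolerance window. This is an illustrative simplification, not the paper's exact algorithm, and the change-point times below are hypothetical:

```python
def correlate_change_points(cp_a, cp_b, window=5):
    """Pair change points from two traffic parameters (e.g. packet
    count and distinct source-IP count) that occur within `window`
    time units of each other; a match cross-validates the change."""
    return [(a, b) for a in cp_a for b in cp_b if abs(a - b) <= window]

# Hypothetical change-point times (minutes) in two parameters
matches = correlate_change_points([10, 42, 97], [11, 60, 95])
# -> [(10, 11), (97, 95)]: two validated change points; the
#    unmatched point at t=42 would need further investigation
```

The unmatched change point is exactly the case the paper flags: a change seen in only one parameter carries less evidence of a genuine event than one corroborated across parameters.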

Relevance:

10.00%

Publisher:

Abstract:

Total cross sections for neutron scattering from nuclei, with energies ranging from 10 to 600 MeV and from many nuclei spanning the mass range 6Li to 238U, have been analyzed using a simple, three-parameter functional form. The calculated cross sections are compared with results obtained by using microscopic (g-folding) optical potentials as well as with experimental data. The functional form reproduces those total cross sections very well. When allowance is made for Ramsauer-like effects in the scattering, the required parameters of the functional form vary smoothly with energy and target mass. They too can be represented by functions of energy and mass.

Relevance:

10.00%

Publisher:

Abstract:

In this chapter, ideas from ecological psychology and nonlinear dynamics are integrated to characterise decision-making as an emergent property of self-organisation processes in the interpersonal interactions that occur in sports teams. A conceptual model is proposed to capture constraints on the dynamics of decisions and actions in dyadic systems, which has been empirically evaluated in simulations of interpersonal interactions in team sports. For this purpose, co-adaptive interpersonal dynamics in team sports such as rugby union have been studied to reveal control parameter and collective variable relations in attacker-defender dyads. Although the interpersonal dynamics of attackers and defenders in 1 vs 1 situations showed characteristics of chaotic attractors, the informational constraints of rugby union typically bounded dyadic systems into low-dimensional attractors. Our work suggests that the dynamics of attacker-defender dyads can be characterised as an evolving sequence, since players' positioning and movements are connected in diverse ways over time.

Relevance:

10.00%

Publisher:

Abstract:

Ecological dynamics characterizes adaptive behavior as an emergent, self-organizing property of interpersonal interactions in complex social systems. The authors conceptualize and investigate constraints on the dynamics of decisions and actions in the multiagent system of team sports. They studied coadaptive interpersonal dynamics in rugby union to model potential control parameter and collective variable relations in attacker–defender dyads. A videogrammetry analysis revealed how some agents generated fluctuations by adapting displacement velocity to create phase transitions and destabilize dyadic subsystems near the try line. Agent interpersonal dynamics exhibited characteristics of chaotic attractors, and the informational constraints of rugby union bounded dyadic systems into a low-dimensional attractor. The data suggest that the decisions and actions of agents in sports teams may be characterized as emergent, self-organizing properties, governed by laws of dynamical systems at the ecological scale. Further research needs to generalize this conceptual model of adaptive behavior in performance to other multiagent populations.

Relevance:

10.00%

Publisher:

Abstract:

The identification of attractors is one of the key tasks in studies of neurobiological coordination from a dynamical systems perspective, with a considerable body of literature resulting from this task. However, with regard to the typical movement models investigated, the overwhelming majority of actions studied previously belong to the class of continuous, rhythmical movements. In contrast, very few studies have investigated coordination of discrete movements, particularly multi-articular discrete movements. In the present study, we investigated phase transition behavior in a basketball throwing task where participants were instructed to shoot at the basket from different distances. Adopting the ubiquitous scaling paradigm, throwing distance was manipulated as a candidate control parameter. Using a cluster analysis approach, clear phase transitions between different movement patterns were observed in the performance of only two of eight participants. The remaining participants used a single movement pattern and varied it according to throwing distance, thereby exhibiting hysteresis effects. Results suggested that, in movement models involving many biomechanical degrees of freedom in degenerate systems, greater movement variation across individuals is available for exploitation. This observation stands in contrast to the movement variation typically observed in studies using more constrained bi-manual movement models. This degenerate system behavior provides new insights and poses fresh challenges to the dynamical systems theoretical approach, requiring further research beyond conventional movement models.

Relevance:

10.00%

Publisher:

Abstract:

It is important to detect and treat malnutrition in hospital patients so as to improve clinical outcomes and reduce hospital stay. The aim of this study was to develop and validate a nutrition screening tool with a simple and quick scoring system for acute hospital patients in Singapore. In this study, 818 newly admitted patients aged over 18 years were screened using five parameters that contribute to the risk of malnutrition. A dietitian blinded to the nutrition screening score assessed the same patients within 48 hours using the reference standard, Subjective Global Assessment (SGA). Sensitivity and specificity were established using the Receiver Operating Characteristic (ROC) curve and the best cutoff scores determined. The combination of parameters with the largest Area Under the ROC Curve (AUC) was chosen as the final screening tool, which was named 3-Minute Nutrition Screening (3-MinNS). The combination of the parameters weight loss, intake and muscle wastage (3-MinNS) gave the largest AUC when compared with SGA. Using 3-MinNS, the best cutoff point to identify malnourished patients is three (sensitivity 86%, specificity 83%). The cutoff score to identify subjects at risk of severe malnutrition is five (sensitivity 93%, specificity 86%). 3-Minute Nutrition Screening is a valid, simple and rapid tool to identify patients at risk of malnutrition among acute hospital patients in Singapore. It is able to differentiate patients at risk of moderate malnutrition from those at risk of severe malnutrition for prioritization and management purposes.
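The published cutoffs lend themselves to a simple triage function. The sketch below assumes only the two cutoff scores reported above; the per-parameter scoring weights are not given in the abstract:

```python
def classify_3minns(score):
    """Classify malnutrition risk from a total 3-MinNS score using the
    reported cutoffs: a score of 3 or more flags risk of malnutrition,
    and 5 or more flags risk of severe malnutrition."""
    if score >= 5:
        return "at risk of severe malnutrition"
    if score >= 3:
        return "at risk of malnutrition"
    return "not at risk"

# e.g. classify_3minns(4) -> "at risk of malnutrition"
```

Two-tier cutoffs of this kind are what allow the tool to prioritize: patients crossing the higher cutoff can be referred for assessment first.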

Relevance:

10.00%

Publisher:

Abstract:

Controlled rate thermal analysis (CRTA) technology offers better resolution and a more detailed interpretation of the decomposition processes of a clay mineral such as sepiolite by approaching equilibrium conditions of decomposition, eliminating the slow transfer of heat to the sample as a parameter controlling the decomposition process. Constant-rate decomposition processes of a non-isothermal nature reveal changes in the sepiolite as it is converted to an anhydride. In the dynamic experiment, two dehydration steps are observed over the temperature ranges ~20–170 and 170–350°C, and three dehydroxylation steps are observed over the ranges 201–337, 337–638 and 638–982°C. The CRTA technology enables the separation of the thermal decomposition steps.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a fixed-switching-frequency closed-loop modulation of a voltage-source inverter (VSI), upon the digital implementation of the modulation process, is analyzed and characterized. The sampling frequency of the digital processor is considered as an integer multiple of the modulation switching frequency. An expression for the determination of the modulation design parameter is developed for smooth modulation at a fixed switching frequency. The variation of the sampling frequency, switching frequency, and modulation index has been analyzed for the determination of the switching condition under closed loop. It is shown that the switching condition determined based on the continuous-time analysis of the closed-loop modulation will ensure smooth modulation upon the digital implementation of the modulation process. However, the stability properties need to be tested prior to digital implementation, as they deteriorate at smaller sampling frequencies. The closed-loop modulation index needs to be considered at its maximum when determining the design parameters for smooth modulation. In particular, a detailed analysis has been carried out by varying the control gain in the sliding-mode control of a two-level VSI. The proposed analysis of the closed-loop modulation of the VSI has been verified for the operation of a distribution static compensator. The theoretical results are validated experimentally on both single- and three-phase systems.

Relevance:

10.00%

Publisher:

Abstract:

Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment; that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I. 
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel of the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. 
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for and usefulness of fractal methods in modelling non-Gaussian financial processes with long memory.
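The second-moment case (q = 2) of the MF-DFA procedure described in Part I can be sketched as follows. This is a minimal single-q illustration with a first-order (linear) detrend, not the full multifractal implementation:

```python
import numpy as np

def dfa_fluctuation(x, scales, q=2):
    """Minimal detrended fluctuation analysis: build the profile
    (cumulative sum of the mean-removed series), split it into
    non-overlapping windows of length s, remove a linear trend per
    window, and return the q-th-order fluctuation function F_q(s).
    Scaling F_q(s) ~ s^h with h > 0.5 indicates long memory."""
    y = np.cumsum(x - np.mean(x))  # profile
    out = []
    for s in scales:
        n = len(y) // s
        f2 = []
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        out.append(np.mean(np.array(f2) ** (q / 2)) ** (1 / q))
    return np.array(out)

# Uncorrelated (white) noise should give a scaling exponent near 0.5
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
F = dfa_fluctuation(x, scales, q=2)
h = np.polyfit(np.log(scales), np.log(F), 1)[0]  # slope ~ 0.5
```

Repeating the fit for a range of q values, rather than q = 2 alone, is what distinguishes MF-DFA from standard DFA and exposes multifractal scaling.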

Relevance:

10.00%

Publisher:

Abstract:

This thesis details a methodology to estimate urban stormwater quality based on a set of easy to measure physico-chemical parameters. These parameters can be used as surrogate parameters to estimate other key water quality parameters. The key pollutants considered in this study are nitrogen compounds, phosphorus compounds and solids. The use of surrogate parameter relationships to evaluate urban stormwater quality will reduce the cost of monitoring, so that scientists will have added capability to generate a large amount of data for more rigorous analysis of key urban stormwater quality processes, namely pollutant build-up and wash-off. This in turn will assist in the development of more stringent stormwater quality mitigation strategies. The research methodology was based on a series of field investigations, laboratory testing and data analysis. Field investigations were conducted to collect pollutant build-up and wash-off samples from residential roads and roof surfaces. Past research has identified that these impervious surfaces are the primary pollutant sources to urban stormwater runoff. A specially designed vacuum system and rainfall simulator were used in the collection of pollutant build-up and wash-off samples. The collected samples were tested for a range of physico-chemical parameters. Data analysis was conducted using both univariate and multivariate data analysis techniques. Analysis of build-up samples showed that pollutant loads accumulated on road surfaces are higher compared to the pollutant loads on roof surfaces. Furthermore, it was found that the fraction of solids smaller than 150 μm is the most polluted particle size fraction in solids build-up on both road and roof surfaces. The analysis of wash-off data confirmed that the simulated wash-off process adopted for this research agrees well with the general understanding of the wash-off process on urban impervious surfaces. 
The observed pollutant concentrations in wash-off from road surfaces were different to those in wash-off from roof surfaces. Therefore, surrogate parameters were first identified separately for road and roof surfaces; a common set of surrogate parameter relationships was then identified for both surfaces together to evaluate urban stormwater quality. Surrogate parameters were identified for nitrogen, phosphorus and solids separately. Electrical conductivity (EC), total organic carbon (TOC), dissolved organic carbon (DOC), total suspended solids (TSS), total dissolved solids (TDS), total solids (TS) and turbidity (TTU) were selected as the relatively easy to measure parameters. Consequently, surrogate parameters for nitrogen and phosphorus were identified from the set of easy to measure parameters for both road surfaces and roof surfaces. Additionally, surrogate parameters for TSS, TDS and TS, which are key indicators of solids, were obtained from EC and TTU, which can be direct field measurements. The regression relationships developed between the surrogate parameters and the key parameters of interest took a similar form for road and roof surfaces, namely simple linear regression equations. The identified relationships for road surfaces were DTN-TDS:DOC, TP-TS:TOC, TSS-TTU, TDS-EC and TS-TTU:EC. The identified relationships for roof surfaces were DTN-TDS and TS-TTU:EC. Some of the relationships developed had a higher confidence interval whilst others had a relatively low confidence interval. The relationships obtained for DTN-TDS, DTN-DOC, TP-TS and TS-EC for road surfaces demonstrated good near-site portability potential. Currently, best management practices focus on providing treatment measures for stormwater runoff at catchment outlets, where road and roof runoff are not separated. 
In this context, it is important to find a common set of surrogate parameter relationships for road surfaces and roof surfaces to evaluate urban stormwater quality. Consequently, the DTN-TDS, TS-EC and TS-TTU relationships were identified as the common relationships capable of providing measurements of DTN and TS irrespective of the surface type.
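Since the surrogate relationships take the form of simple linear regressions, fitting one (e.g. estimating TS from EC) reduces to ordinary least squares. The calibration points below are hypothetical illustrations, not the study's data:

```python
def fit_linear_surrogate(x, y):
    """Ordinary least-squares fit y = a*x + b, the form used for the
    surrogate-parameter relationships (e.g. TS estimated from EC)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

# Hypothetical EC (uS/cm) vs total solids (mg/L) calibration points
a, b = fit_linear_surrogate([100, 200, 300], [80, 150, 220])
# -> slope a = 0.7, intercept b = 10.0
```

Once fitted on paired laboratory and field measurements, such an equation lets a single field probe reading stand in for a laboratory solids analysis.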

Relevance:

10.00%

Publisher:

Abstract:

An element spacing of less than half a wavelength introduces strong mutual coupling between the ports of compact antenna arrays. The strong coupling causes significant system performance degradation. A decoupling network may compensate for the mutual coupling. Alternatively, port decoupling can be achieved using a modal feed network. In response to an input signal at one of the input ports, this feed network excites the antenna elements in accordance with one of the eigenvectors of the array scattering parameter matrix. In this paper, a novel 4-element monopole array is described. The feed network of the array is implemented as a planar ring-type circuit in stripline with four coupled line sections. The new configuration offers a significant reduction in size, resulting in a very compact array.

Relevance:

10.00%

Publisher:

Abstract:

This paper investigates the problem of appropriate load sharing in an autonomous microgrid. High-gain angle droop control ensures proper load sharing, especially under weak system conditions. However, it has a negative impact on overall stability. Frequency domain modeling, eigenvalue analysis and time domain simulations are used to demonstrate this conflict. A supplementary loop is proposed around the conventional droop control of each DG converter to stabilize the system while using high angle droop gains. Control loops are based on local power measurement and modulation of the d-axis voltage reference of each converter. The coordinated design of supplementary control loops for each DG is formulated as a parameter optimization problem and solved using an evolutionary technique. The supplementary droop control loop is shown to stabilize the system for a range of operating conditions while ensuring satisfactory load sharing.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a proposed qualitative framework to discuss the heterogeneous burning of metallic materials, through parameters and factors that influence the melting rate of the solid metallic fuel (either in a standard test or in service). During burning, the melting rate is related to the burning rate and is therefore an important parameter for describing and understanding the burning process, especially since the melting rate is commonly recorded during standard flammability testing for metallic materials and is incorporated into many relative flammability ranking schemes. However, whilst the factors that influence melting rate (such as oxygen pressure or specimen diameter) have been well characterized, there is a need for an improved understanding of how these parameters interact as part of the overall melting and burning of the system. Proposed here is the ‘Melting Rate Triangle’, which aims to provide this focus through a conceptual framework for understanding how the melting rate (of solid fuel) is determined and regulated during heterogeneous burning. In the paper, the proposed conceptual model is shown to be both (a) consistent with known trends and previously observed results, and (b) capable of being expanded to incorporate new data. Also shown are examples of how the Melting Rate Triangle can improve the interpretation of flammability test results. Slusser and Miller previously published an ‘Extended Fire Triangle’ as a useful conceptual model of ignition and the factors affecting ignition, providing industry with a framework for discussion. In this paper it is shown that a ‘Melting Rate Triangle’ provides a similar qualitative framework for burning, leading to an improved understanding of the factors affecting fire propagation and extinguishment.